TY - JOUR
T1 - Human Posture Recognition Based on Images Captured by the Kinect Sensor
AU - Wang, Wen June
AU - Chang, Jun Wei
AU - Haung, Shih Fu
AU - Wang, Rong Jyue
N1 - Publisher Copyright:
© SAGE Publications Ltd, unless otherwise noted. Manuscript content on this site is licensed under Creative Commons Licenses.
PY - 2016/3/15
Y1 - 2016/3/15
N2 - In this paper, we combine several image processing techniques with the depth images captured by a Kinect sensor to recognize five distinct human postures: sitting, standing, stooping, kneeling, and lying. The proposed recognition procedure first applies background subtraction to the depth image to extract the silhouette contour of a human. A horizontal projection of the silhouette contour is then used to determine whether the person is kneeling. If not, the star skeleton technique is applied to the silhouette contour to obtain its feature points. The feature points, together with the centre of gravity, are used to calculate the feature vectors and depth values of the body. These feature vectors and depth values are then fed into a pre-trained LVQ (learning vector quantization) neural network, whose outputs distinguish the postures of sitting (or standing), stooping, and lying. Lastly, if an output indicates sitting or standing, one further, similar feature identification step is needed to confirm this result. In extensive experiments, the proposed method achieved a recognition rate above 97% on the test data, even when subjects were not facing the Kinect sensor and differed in stature. The proposed method can be called a "hybrid recognition method", as many techniques are combined to achieve a very high recognition rate paired with a very short processing time.
KW - Feature Extraction
KW - Image Processing
KW - Neural Network Application
KW - Posture Recognition
UR - http://www.scopus.com/inward/record.url?scp=85001945971&partnerID=8YFLogxK
U2 - 10.5772/62163
DO - 10.5772/62163
M3 - Journal article
AN - SCOPUS:85001945971
SN - 1729-8806
VL - 13
JO - International Journal of Advanced Robotic Systems
JF - International Journal of Advanced Robotic Systems
IS - 2
M1 - 62163
ER -