TY - GEN
T1 - HandKey
T2 - 5th International Conference on Information Technology, InCIT 2020
AU - Enkhbat, Avirmed
AU - Shih, Timothy K.
AU - Thaipisutikul, Tipajin
AU - Hakim, Noorkholis Luthfil
AU - Aditya, Wisnu
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/21
Y1 - 2020/10/21
N2 - This paper proposes an efficient framework that recognizes hand typing motions and gestures for building a virtual keyboard using a single RGB camera. There are several works related to virtual keyboards in the human-computer interaction (HCI) area. Most of them use hand pose estimation, hand shape, or external equipment (depth sensor, Leap Motion, control glove, touch screen, etc.). In contrast, our framework requires no additional equipment or prior experience from users; it works with regular typing actions performed in the air, similar to typing on a real QWERTY keyboard. It uses a convolutional neural network (CNN) to classify two hand typing gestures (touch and non-touch). We also train 11 gestures: non-touch plus a touch gesture for each of the 10 fingers of both hands. The proposed CNN model achieves 99.2% classification accuracy in the 2-gesture case and 91% in the 11-gesture case.
AB - This paper proposes an efficient framework that recognizes hand typing motions and gestures for building a virtual keyboard using a single RGB camera. There are several works related to virtual keyboards in the human-computer interaction (HCI) area. Most of them use hand pose estimation, hand shape, or external equipment (depth sensor, Leap Motion, control glove, touch screen, etc.). In contrast, our framework requires no additional equipment or prior experience from users; it works with regular typing actions performed in the air, similar to typing on a real QWERTY keyboard. It uses a convolutional neural network (CNN) to classify two hand typing gestures (touch and non-touch). We also train 11 gestures: non-touch plus a touch gesture for each of the 10 fingers of both hands. The proposed CNN model achieves 99.2% classification accuracy in the 2-gesture case and 91% in the 11-gesture case.
KW - convolutional neural network
KW - hand typing gesture recognition
KW - human-computer interaction
KW - motion history image
KW - virtual keyboard
UR - http://www.scopus.com/inward/record.url?scp=85100161742&partnerID=8YFLogxK
U2 - 10.1109/InCIT50588.2020.9310783
DO - 10.1109/InCIT50588.2020.9310783
M3 - Conference contribution
AN - SCOPUS:85100161742
T3 - InCIT 2020 - 5th International Conference on Information Technology
SP - 315
EP - 319
BT - InCIT 2020 - 5th International Conference on Information Technology
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 October 2020 through 22 October 2020
ER -