Using Hybrid Models for Action Correction in Instrument Learning Based on AI

Avirmed Enkhbat, Timothy K. Shih, Munkhjargal Gochoo, Pimpa Cheewaprakobkit, Wisnu Aditya, Thai Duy Quy, Hsinchih Lin, Yu Ting Lin

Research output: Contribution to journal › Article › peer review

Abstract

Human action recognition has recently attracted much attention in computer vision research. Its applications are widely found in video surveillance, human-computer interaction, entertainment, and autonomous driving. In this study, we developed a system for evaluating online music performances. The system is evaluated on performances of the erhu, the most popular traditional stringed instrument in East Asia. Mastering the erhu is challenging: players often struggle to improve because incorrect techniques and a lack of guidance limit their progress. To address this issue, we propose hybrid models based on graph convolutional networks (GCN) and temporal convolutional networks (TCN) for action recognition, capturing the spatial relationships between joints (keypoints) in a human skeleton as well as the interactions between these joints. This can assist players in identifying errors while playing the instrument. In our research, we use RGB video as input, segmenting it into individual frames. From each frame, we extract keypoints, so that both image and keypoint information serve as input data for our model. Leveraging our model architecture, we achieve an accuracy rate exceeding 97% across various classes of hand-error modules, providing valuable insights into the assessment of musical performances and demonstrating the potential of AI-based solutions to enhance the learning and correction of complex human actions in interactive learning environments.
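The abstract does not specify the exact layer design, but the GCN-plus-TCN idea it describes can be illustrated with a minimal NumPy sketch: a spatial graph convolution aggregates features across skeleton joints via a normalized adjacency matrix, and a temporal convolution then mixes each joint's features across frames. The joint count, feature sizes, adjacency, and kernel below are placeholder assumptions, not the paper's configuration.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def spatial_gcn(X, A_hat, W):
    # X: (T, V, C_in) keypoint features over T frames and V joints.
    # Aggregate over neighbouring joints, project channels, apply ReLU.
    return np.maximum(np.einsum("vu,tuc,cd->tvd", A_hat, X, W), 0.0)

def temporal_conv(X, kernel):
    # Depthwise 1-D convolution along the time axis ("same" padding),
    # applied independently to every joint/channel.
    T, V, C = X.shape
    k = len(kernel)
    pad = k // 2
    Xp = np.pad(X, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(X)
    for t in range(T):
        out[t] = np.tensordot(kernel, Xp[t:t + k], axes=(0, 0))
    return out

# Toy usage: 17 joints, 30 frames, 2-D keypoint coordinates as input.
rng = np.random.default_rng(0)
V, T, C_in, C_out = 17, 30, 2, 8
A = (rng.random((V, V)) < 0.2).astype(float)
A = np.maximum(A, A.T)                      # undirected skeleton graph
A_hat = normalize_adjacency(A)
X = rng.standard_normal((T, V, C_in))
W = rng.standard_normal((C_in, C_out))
H = spatial_gcn(X, A_hat, W)                # spatial (GCN) stage
Y = temporal_conv(H, np.array([0.25, 0.5, 0.25]))  # temporal (TCN) stage
```

In practice such spatial and temporal stages are stacked and trained end-to-end, with a classifier head over pooled features producing the per-class error predictions.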

Original language: English
Pages (from-to): 125319-125331
Number of pages: 13
Journal: IEEE Access
Volume: 12
DOIs
Publication status: Published - 2024

