Smart training: Mask R-CNN oriented approach

Mu Chun Su, Jieh Haur Chen, Vidya Trisandini Azzizi, Hsiang Ling Chang, Hsi Hsien Wei

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

This paper presents an augmented reality (AR) assisted system running on smart glasses for training activities. A literature review comparing related technologies shows that a Mask Regions with Convolutional Neural Network (Mask R-CNN) oriented approach best fits the study's needs. The proposed method comprises three components: (1) pointing-gesture capture, (2) finger-pointing analysis, and (3) virtual-tool positioning and rotation-angle estimation. Results show an object detection accuracy of 95.5%, a Kappa value of 0.93 for gesture recognition, and an average pointing-gesture detection time of 0.26 seconds. Furthermore, under different lighting conditions, both indoor and outdoor, the pointing analysis accuracy reaches 79%, and the error between the analysed angle and the actual angle is only 1.32 degrees. These results demonstrate that the system renders the augmented reality effect well, making it applicable to real-world usage.
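The finger-pointing analysis above derives a pointing direction and compares it against ground truth (a mean error of 1.32 degrees). A minimal sketch of that geometry, assuming the direction is estimated from two hand keypoints in image coordinates (the function and parameter names here are illustrative, not from the paper):

```python
import math

def pointing_angle(base, tip):
    """Angle (degrees) of the pointing direction from a finger-base
    keypoint to a fingertip keypoint, measured from the +x axis."""
    dx = tip[0] - base[0]
    dy = tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx))

def angle_error(estimated, actual):
    """Smallest absolute difference between two angles, in degrees,
    wrapping around 360 so e.g. 359 vs 1 gives 2, not 358."""
    diff = abs(estimated - actual) % 360.0
    return min(diff, 360.0 - diff)

# Example: a finger pointing diagonally, at roughly 45 degrees.
est = pointing_angle((100, 100), (150, 150))
print(est)                     # ~45.0
print(angle_error(est, 46.32)) # ~1.32, the error magnitude reported in the paper
```

The wrap-around in `angle_error` matters when the estimated and true angles straddle the 0/360 boundary; a plain absolute difference would overstate the error there.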

Original language: English
Article number: 115595
Journal: Expert Systems with Applications
Volume: 185
DOIs
Publication status: Published - 15 Dec 2021
