Smart training: Mask R-CNN oriented approach

Mu Chun Su, Jieh Haur Chen, Vidya Trisandini Azzizi, Hsiang Ling Chang, Hsi Hsien Wei

Research output: Contribution to journal › Article › peer-review


Abstract

This paper presents an augmented-reality-assisted training system deployed on smart glasses. A literature review comparing related technologies shows that a Mask Regions with Convolutional Neural Network (R-CNN) oriented approach best fits the needs of this study. The proposed method comprises three components: (1) pointing-gesture capture, (2) finger-pointing analysis, and (3) virtual-tool positioning and rotation. Results show that object-detection accuracy is 95.5%, the Kappa value for gesture recognition is 0.93, and the average time to detect a pointing gesture is 0.26 seconds. Furthermore, under different lighting conditions, both indoor and outdoor, pointing-analysis accuracy reaches 79%, and the error between the analyzed angle and the actual angle is only 1.32 degrees. These results demonstrate that the system renders augmented-reality effects well, making it suitable for real-world use.
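The abstract reports an error of only 1.32 degrees between the analyzed pointing angle and the actual angle. The paper's exact angle-estimation method is not given here, but as an illustration, a pointing direction can be derived from two detected hand keypoints (e.g. a hand-base point and a fingertip, such as might be localized from a Mask R-CNN hand mask). The following is a minimal sketch under that assumption; the function name and the two-point formulation are hypothetical, not taken from the paper.

```python
import math

def pointing_angle(base, tip):
    """Return the pointing angle in degrees from a hand-base point to a
    fingertip point, measured counter-clockwise from the +x axis.

    Both points are (x, y) coordinates. Note that image coordinates
    usually have y growing downward, so flip the sign of dy when working
    in screen space. This two-point formulation is an illustrative
    assumption, not the method described in the paper.
    """
    dx = tip[0] - base[0]
    dy = tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx))

# Fingertip directly to the right of the hand base: angle is 0 degrees.
print(pointing_angle((1.0, 1.0), (2.0, 1.0)))  # 0.0
# Fingertip up and to the right at equal offsets: 45 degrees.
print(pointing_angle((0.0, 0.0), (1.0, 1.0)))  # 45.0
```

An angle estimate like this could then drive the virtual-tool positioning and rotation step, with the reported 1.32-degree error corresponding to the difference between the estimated and ground-truth angles.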

Original language: English
Article number: 115595
Journal: Expert Systems with Applications
Volume: 185
State: Published - 15 Dec 2021

Keywords

  • Augmented reality
  • Finger-pointing analysis
  • Hand gesture recognition
  • Mask Regions with Convolutional Neural Network (R-CNN)
  • Smart training
