Abstract
This research investigated real-time fingertip detection in frames captured from the increasingly popular wearable device, smart glasses. Egocentric-view fingertip detection and character recognition can be used to create a novel way of inputting text. We first employed Unity3D to build a synthetic dataset of pointing gestures from the first-person perspective. The obvious benefits of using synthetic data are that it eliminates time-consuming and error-prone manual labeling and provides a large, high-quality dataset for a wide range of purposes. We then propose a modified Mask Regional Convolutional Neural Network (Mask R-CNN), consisting of a region-based CNN for finger detection and a three-layer CNN for fingertip localization. The process takes 25 ms per frame for 640 × 480 RGB images, with an average error of 8.3 pixels. This speed is high enough to enable real-time “air-writing”, where users write characters in the air to input text or commands while wearing smart glasses. The characters are recognized by a ResNet-based CNN from the fingertip trajectories. Experimental results demonstrate the feasibility of this novel methodology.
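The abstract describes a two-stage pipeline: a region-based detector finds the pointing finger, a small CNN regresses the fingertip location inside the detected box, and a ResNet-style classifier reads the character from the accumulated fingertip trajectory. The PyTorch sketch below is only a minimal illustration of that kind of pipeline, not the authors' released implementation: the layer sizes, the 64 × 64 crop, the 36 character classes, and the names `FingertipRegressor`, `detect_fingertip`, and `trajectory_to_image` are all assumptions introduced here for clarity.

```python
# Illustrative sketch only: the paper's modified Mask R-CNN, its three-layer
# fingertip CNN, and its ResNet-based character classifier are not reproduced
# here, so all layer sizes, class counts, and helper names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18
from torchvision.models.detection import maskrcnn_resnet50_fpn


class FingertipRegressor(nn.Module):
    """Small three-layer CNN regressing a normalised (x, y) fingertip
    position inside a cropped finger region (hypothetical layer sizes)."""

    def __init__(self, crop_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * (crop_size // 8) ** 2, 2)

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        x = self.features(crop)
        return torch.sigmoid(self.head(x.flatten(1)))  # (x, y) in [0, 1]


def detect_fingertip(frame, detector, regressor, crop_size=64):
    """Stage 1: detect the pointing finger; stage 2: regress the fingertip
    inside the highest-scoring box. Returns (x, y) in frame pixels or None."""
    detector.eval(); regressor.eval()
    with torch.no_grad():
        det = detector([frame])[0]                      # frame: (3, H, W) in [0, 1]
        if det["boxes"].numel() == 0:
            return None
        x1, y1, x2, y2 = det["boxes"][0]
        crop = frame[:, int(y1):int(y2) + 1, int(x1):int(x2) + 1].unsqueeze(0)
        crop = F.interpolate(crop, size=(crop_size, crop_size),
                             mode="bilinear", align_corners=False)
        nx, ny = regressor(crop)[0]                     # normalised within the box
        return (x1 + nx * (x2 - x1)).item(), (y1 + ny * (y2 - y1)).item()


def trajectory_to_image(points, size=64):
    """Rasterise the fingertip trajectory so a ResNet-style classifier can
    read the 'air-written' character."""
    canvas = torch.zeros(1, 1, size, size)
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    w = max(max(xs) - min(xs), 1e-3); h = max(max(ys) - min(ys), 1e-3)
    for x, y in points:
        cx = int((x - min(xs)) / w * (size - 1))
        cy = int((y - min(ys)) / h * (size - 1))
        canvas[0, 0, cy, cx] = 1.0
    return canvas.repeat(1, 3, 1, 1)                    # ResNet expects 3 channels


if __name__ == "__main__":
    detector = maskrcnn_resnet50_fpn(num_classes=2)     # background + finger
    regressor = FingertipRegressor()
    classifier = resnet18(num_classes=36).eval()        # e.g. digits + letters (assumed)

    frame = torch.rand(3, 480, 640)                     # stand-in for one 640x480 RGB frame
    trajectory = []
    tip = detect_fingertip(frame, detector, regressor)
    if tip is not None:
        trajectory.append(tip)
    if trajectory:
        with torch.no_grad():
            logits = classifier(trajectory_to_image(trajectory))
        print("predicted character class:", logits.argmax(1).item())
```

In an actual system, the detector would be the paper's modified Mask R-CNN trained on the synthetic Unity3D data, and the trajectory would accumulate one fingertip per frame over an entire writing gesture before the character is classified.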
Original language | English
---|---
Article number | 4382
Journal | Sensors (Switzerland)
Volume | 21
Issue number | 13
DOIs |
Publication status | Published - 1 Jul 2021
Fingerprint
Dive into the research topics of “Egocentric-view fingertip detection for air writing based on convolutional neural networks”. Together they form a unique fingerprint.