Novel Spatio-Temporal Continuous Sign Language Recognition Using an Attentive Multi-Feature Network

Wisnu Aditya, Timothy K. Shih, Tipajin Thaipisutikul, Arda Satata Fitriajie, Munkhjargal Gochoo, Fitri Utaminingrum, Chih Yang Lin

Research output: Contribution to journal › Journal article › peer-review

13 Citations (Scopus)

Abstract

Given video streams, we aim to correctly detect unsegmented signs for continuous sign language recognition (CSLR). Despite the growing number of deep learning methods proposed in this area, most of them rely only on an RGB feature, either the full-frame image or details of the hands and face. This scarcity of information heavily constrains the ability of the CSLR training process to learn multiple features from the input video frames. Moreover, exploiting all frames in a video for the CSLR task can lead to suboptimal performance, since each frame carries a different level of information, ranging from the main features to interfering noise. Therefore, we propose a novel spatio-temporal continuous sign language recognition method using an attentive multi-feature network, which enhances CSLR by providing extra keypoint features. In addition, we exploit attention layers in the spatial and temporal modules to simultaneously emphasize multiple important features. Experimental results on two CSLR datasets demonstrate that the proposed method achieves superior performance in comparison with current state-of-the-art methods, with WER scores of 0.76 and 20.56 on the CSL and PHOENIX datasets, respectively.
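To make the described architecture concrete, the sketch below shows one way such an attentive multi-feature CSLR model could be wired together in PyTorch: pre-extracted full-frame RGB features and keypoint features are fused per frame, a spatial attention gate and a temporal self-attention layer emphasize informative channels and frames, and a BiLSTM produces per-frame gloss log-probabilities suitable for CTC training. This is a minimal illustrative sketch under assumed module names and dimensions (RGBEncoder stand-ins, hidden sizes, gloss vocabulary size), not the authors' implementation.

```python
# Illustrative sketch of an attentive multi-feature CSLR model (assumed design,
# not the published code). Dimensions and class names are hypothetical.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights per-frame feature channels before temporal modeling."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x):                      # x: (B, T, dim)
        return x * self.gate(x)


class AttentiveMultiFeatureCSLR(nn.Module):
    """Fuses full-frame RGB features with keypoint features, applies spatial
    and temporal attention, and predicts per-frame gloss scores for CTC."""
    def __init__(self, rgb_dim=512, kp_dim=128, hidden=256, num_glosses=1296):
        super().__init__()
        # Frame-level projections (stand-ins for CNN / keypoint embedding nets).
        self.rgb_proj = nn.Linear(rgb_dim, hidden)
        self.kp_proj = nn.Linear(kp_dim, hidden)
        self.spatial_att = SpatialAttention(hidden)
        # Temporal attention emphasizes informative frames over noisy ones.
        self.temporal_att = nn.MultiheadAttention(hidden, num_heads=4,
                                                  batch_first=True)
        self.bilstm = nn.LSTM(hidden, hidden // 2, bidirectional=True,
                              batch_first=True)
        self.classifier = nn.Linear(hidden, num_glosses + 1)  # +1 for CTC blank

    def forward(self, rgb_feats, kp_feats):
        # rgb_feats: (B, T, rgb_dim), kp_feats: (B, T, kp_dim)
        fused = self.spatial_att(self.rgb_proj(rgb_feats) + self.kp_proj(kp_feats))
        attended, _ = self.temporal_att(fused, fused, fused)
        seq, _ = self.bilstm(attended)
        return self.classifier(seq).log_softmax(-1)  # (B, T, num_glosses + 1)


if __name__ == "__main__":
    model = AttentiveMultiFeatureCSLR()
    rgb = torch.randn(2, 60, 512)   # 60 frames of pre-extracted RGB features
    kps = torch.randn(2, 60, 128)   # matching keypoint features per frame
    log_probs = model(rgb, kps)
    print(log_probs.shape)          # torch.Size([2, 60, 1297]), ready for nn.CTCLoss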

Original language: English
Article number: 6452
Journal: Sensors (Switzerland)
Volume: 22
Issue number: 17
DOIs
Publication status: Published - Sep 2022
