Novel Spatio-Temporal Continuous Sign Language Recognition Using an Attentive Multi-Feature Network

Wisnu Aditya, Timothy K. Shih, Tipajin Thaipisutikul, Arda Satata Fitriajie, Munkhjargal Gochoo, Fitri Utaminingrum, Chih Yang Lin

Research output: Contribution to journal › Article › peer-review

11 Scopus citations


Given video streams, we aim to correctly detect unsegmented signs, a task known as continuous sign language recognition (CSLR). Despite the growing number of deep learning methods proposed in this area, most rely solely on RGB features, whether the full-frame image or details of the hands and face. This scarcity of information during the CSLR training process heavily constrains the model's capability to learn multiple features from the input video frames. Moreover, exploiting all frames in a video for the CSLR task can lead to suboptimal performance, since each frame carries a different amount of information and noisy frames can interfere with the main features. Therefore, we propose a novel spatio-temporal continuous sign language recognition approach using an attentive multi-feature network, which enhances CSLR by providing extra keypoint features. In addition, we exploit attention layers in the spatial and temporal modules to simultaneously emphasize multiple important features. Experimental results on two CSLR benchmarks demonstrate that the proposed method achieves superior performance compared with current state-of-the-art methods, with WER scores of 0.76 and 20.56 on the CSL and PHOENIX datasets, respectively.
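The abstract does not detail the attention mechanism, but the temporal module it describes can be illustrated with a standard scaled dot-product self-attention layer applied over per-frame features. The sketch below is a minimal NumPy illustration, not the paper's implementation; the projection matrices, dimensions, and frame count are all hypothetical, and the weights are random stand-ins for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(X, seed=0):
    """Scaled dot-product self-attention over a frame sequence.

    X: (T, d) array of per-frame features (e.g. fused RGB + keypoint
    embeddings). Returns a (T, d) array in which each frame is a
    weighted mix of all frames, so informative frames can be
    emphasized and noisy ones down-weighted.
    """
    T, d = X.shape
    rng = np.random.default_rng(seed)
    # Hypothetical learned projections; random here for illustration.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))  # (T, T) frame-to-frame weights
    return A @ V

# Toy input: 5 frames, each with an 8-dimensional fused feature vector.
frames = np.random.default_rng(1).standard_normal((5, 8))
out = temporal_self_attention(frames)
print(out.shape)  # (5, 8)
```

A spatial attention module would follow the same pattern, attending over regions (or keypoints) within a frame rather than over frames in time.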

Original language: English
Article number: 6452
Journal: Sensors (Switzerland)
Issue number: 17
State: Published - Sep 2022


Keywords:
  • continuous sign language
  • keypoints
  • multi-feature
  • self-attention
  • spatial
  • temporal


