Using Deep Learning Techniques for Common Taiwan Sign Language Translation

Project Details

Description

The joint project, entitled “Taiwan Sign Language Translation and Training Systems Based on Deep Learning Techniques,” plans to investigate deep learning methods for Taiwan Sign Language translation (including training data preparation, detailed feature extraction, base sign gesture recognition, and continuous sign language translation) and for training. The project includes four subprojects:

  • Subproject one: Construction of Taiwan Sign Language Database for Deep Learning Training
  • Subproject two: Lifelong-learning Based Sign Language Recognition Associated with Facial Expressions and Meaningful Gestures
  • Subproject three: Using Deep Learning Techniques for Common Taiwan Sign Language Translation
  • Subproject four: A Deep-Learning-based Visual Recognition Scheme for Taiwan Sign Language Training

The project team includes professors in computer engineering areas such as human-computer interaction, software development, and deep learning. In addition, experts in Taiwan Sign Language translation will define the scope of sentences to be used in the translation system, and test data will be collected by the Association for Taiwan Sign Language Translation.

The first subproject focuses on defining the scope of the translation task. A data collection tool will be implemented and integrated with a database management system, so that the collected sign language data set can be retrieved easily (see the first sketch below).

The second subproject focuses on semantic segmentation of sign language videos. Facial expressions will be recognized and used as a basis for translation. This subproject also proposes a new direction that relies on a minimal set of training data, so that the translation system can be adapted to other sign languages in the future.

The third subproject deals with the main sign language translation modules. Base sign gestures will be recognized by CNN-like neural networks. Because gesture durations vary within a typical signed sentence, this subproject will also propose sequence-to-sequence translation methods (e.g., RNN, LSTM) to translate words and sentences precisely (see the second sketch below).

The fourth subproject further analyzes the skeletons of both hands, as well as features of the user's face. A training system will be implemented to give users a score that reflects the degree of similarity with respect to a standard sign language sentence (see the third sketch below). The training system can be further implemented on a lightweight device.

Automatic Taiwan Sign Language translation is a difficult and very important research area. If properly implemented, the contribution will benefit those who rely on sign language for communication, as well as those who use similar technology in practical applications (e.g., gesture-based interactive machine design). The systems to be delivered can be used for technology transfer to Taiwan's industry. The Taiwan Sign Language dataset and its database management system can also benefit other researchers interested in sign language translation.
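First sketch. The description says the subproject-one collection tool will be integrated with a database management system so the data set can be retrieved easily. The sketch below shows one minimal, hypothetical way to index clips with SQLite; the schema, field names, gloss strings, and the file `tsl_dataset.db` are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch (illustrative assumptions throughout) of indexing collected
# sign language clips so they can be retrieved easily for training.
import sqlite3

conn = sqlite3.connect("tsl_dataset.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS clips (
        id INTEGER PRIMARY KEY,
        signer TEXT NOT NULL,        -- anonymized signer id
        sentence TEXT NOT NULL,      -- target TSL sentence (gloss string)
        video_path TEXT NOT NULL,    -- location of the recorded clip
        fps REAL, frames INTEGER     -- basic recording metadata
    )""")
conn.execute(
    "INSERT INTO clips (signer, sentence, video_path, fps, frames) "
    "VALUES (?, ?, ?, ?, ?)",
    ("S001", "HELLO HOW ARE-YOU", "clips/s001_0001.mp4", 30.0, 92))
conn.commit()

# Retrieval example: all clips for one sentence, e.g. to build a training set.
rows = conn.execute(
    "SELECT video_path, frames FROM clips WHERE sentence = ?",
    ("HELLO HOW ARE-YOU",)).fetchall()
print(rows)
```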
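Second sketch. For subproject three, the description names CNN-like networks for base gestures and sequence-to-sequence models (e.g., RNN, LSTM) for sentence translation. The following is a minimal PyTorch sketch of that kind of pipeline, assuming per-frame RGB input and a word vocabulary; the class name, layer sizes, and vocabulary size are illustrative, not the project's actual architecture.

```python
# Minimal sketch: a CNN encodes each video frame, an LSTM encoder summarizes
# the variable-length gesture sequence, and an LSTM decoder emits word tokens.
import torch
import torch.nn as nn

class SignSeq2Seq(nn.Module):
    def __init__(self, vocab_size, feat_dim=256, hidden_dim=512):
        super().__init__()
        # Small CNN over 64x64 RGB frames; stands in for any frame encoder.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, tokens):
        # frames: (batch, time, 3, H, W); tokens: (batch, length) word ids.
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, state = self.encoder(feats)            # summarize the sign video
        dec_out, _ = self.decoder(self.embed(tokens), state)
        return self.out(dec_out)                  # per-step word logits

model = SignSeq2Seq(vocab_size=1000)
frames = torch.randn(2, 30, 3, 64, 64)            # two clips, 30 frames each
tokens = torch.randint(0, 1000, (2, 8))           # teacher-forced targets
logits = model(frames, tokens)                    # shape (2, 8, 1000)
```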
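Third sketch. For subproject four's scoring idea, one standard way to compare a learner's hand-skeleton sequence with a reference of a different length is dynamic time warping (DTW); the description does not name a specific algorithm, so this is only a hedged illustration. Keypoint extraction is assumed to happen upstream, and the score mapping and `scale` parameter are illustrative assumptions.

```python
# Minimal sketch: align a learner's hand-skeleton sequence to a reference with
# DTW, then map the normalized alignment cost to a 0-100 similarity score.
import numpy as np

def dtw_cost(a, b):
    # a: (Ta, D), b: (Tb, D) flattened per-frame keypoint vectors.
    ta, tb = len(a), len(b)
    cost = np.full((ta + 1, tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[ta, tb] / (ta + tb)   # length-normalized alignment cost

def similarity_score(learner, reference, scale=1.0):
    # Map normalized DTW cost to 0..100; `scale` is a tunable assumption.
    return 100.0 * np.exp(-dtw_cost(learner, reference) / scale)

# Two-hand skeletons: 21 keypoints per hand, (x, y) each -> 84 values per frame.
reference = np.random.rand(40, 84)    # standard sentence, 40 frames
learner = np.random.rand(35, 84)      # user's attempt, 35 frames
print(f"similarity: {similarity_score(learner, reference):.1f}/100")
```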
Status: Finished
Effective start/end date: 1/08/20 to 31/07/21

UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This project contributes towards the following SDG(s):

  • SDG 4 - Quality Education
  • SDG 11 - Sustainable Cities and Communities
  • SDG 17 - Partnerships for the Goals

Keywords

  • Taiwan Sign Language
  • Automatic Translation
  • Sign Language Training
  • Deep Learning
