Speech Emotion Recognition Based on Joint Self-Assessment Manikins and Emotion Labels

Jing Ming Chen, Pao Chi Chang, Kai Wen Liang

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

1 Citation (Scopus)

Abstract

In this work, we propose a speech emotion recognition system that jointly uses regression models and classification models. The proposed technology achieves an accuracy of 64.70% on a dataset containing mixed scripted and improvised scenes, and up to 66.34% on a dataset with only improvised scenes. Compared to a state-of-the-art method that does not use mental states, the accuracy of the proposed method is increased by 2.95% and 2.09% on the improvised and mixed scenes, respectively. The results show that mental-state characteristics can effectively improve the performance of speech emotion recognition.
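The abstract does not give implementation details, but the joint idea it describes — a regression model predicting Self-Assessment Manikin (SAM) dimensions whose outputs then inform an emotion classifier — can be sketched as below. All dimensions, layer sizes, and weight names here are hypothetical illustrations, not the paper's actual architecture; the sketch assumes 40-d acoustic features, 3 SAM dimensions (valence, arousal, dominance), and 4 emotion classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40-d acoustic features, 3 SAM dimensions
# (valence, arousal, dominance), and 4 emotion classes.
FEAT_DIM, SAM_DIM, N_CLASSES = 40, 3, 4

# A shared projection plus two task-specific heads (untrained weights).
W_shared = rng.normal(scale=0.1, size=(FEAT_DIM, 16))
W_reg = rng.normal(scale=0.1, size=(16, SAM_DIM))              # regression head
W_cls = rng.normal(scale=0.1, size=(16 + SAM_DIM, N_CLASSES))  # classification head

def forward(x):
    """Joint forward pass: predict continuous SAM scores, then classify
    emotion from the shared features concatenated with those scores."""
    h = np.tanh(x @ W_shared)                        # shared representation
    sam = h @ W_reg                                  # continuous SAM predictions
    logits = np.concatenate([h, sam], axis=-1) @ W_cls
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)       # softmax over classes
    return sam, probs

x = rng.normal(size=(2, FEAT_DIM))                   # two dummy utterances
sam, probs = forward(x)
print(sam.shape, probs.shape)                        # (2, 3) (2, 4)
```

Feeding the regression output into the classifier is one simple way to realize "joint" use of the two models; training would optimize a weighted sum of a regression loss on the SAM scores and a cross-entropy loss on the emotion labels.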

Original language: English
Title of host publication: Proceedings - 2019 IEEE International Symposium on Multimedia, ISM 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 327-330
Number of pages: 4
ISBN (Electronic): 9781728156064
DOIs
Publication status: Published - Dec 2019
Event: 21st IEEE International Symposium on Multimedia, ISM 2019 - San Diego, United States
Duration: 9 Dec 2019 - 11 Dec 2019

Publication series

Name: Proceedings - 2019 IEEE International Symposium on Multimedia, ISM 2019


Conference: 21st IEEE International Symposium on Multimedia, ISM 2019
Country/Territory: United States
City: San Diego
Period: 9/12/19 - 11/12/19
