Speech emotion verification using emotion variance modeling and discriminant scale-frequency maps

Jia Ching Wang, Yu Hao Chin, Bo Wei Chen, Chang Hong Lin, Chung Hsien Wu

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

This paper develops an approach to speech-based emotion verification based on emotion variance modeling and discriminant scale-frequency maps. The proposed system consists of two parts: feature extraction and emotion verification. In the first part, for each sound frame, important atoms from the Gabor dictionary are selected by using the matching pursuit algorithm. The scale, frequency, and magnitude of the atoms are extracted to construct a nonuniform scale-frequency map, which supports auditory discriminability through the analysis of critical bands. Next, sparse representation is used to transform the scale-frequency maps into sparse coefficients, enhancing robustness against emotion variance and improving error tolerance. In the second part, emotion verification, two scores are calculated. A novel sparse representation verification approach based on Gaussian-modeled residual errors is proposed to generate the first score from the sparse coefficients. Such a classifier can minimize emotion variance and improve recognition accuracy. The second score is calculated from the same coefficients by using the emotional agreement index (EAI). These two scores are combined to obtain the final detection result. Experiments conducted on an emotional speech database indicate that the proposed approach can achieve an average equal error rate (EER) as low as 6.61%. A comparison among different approaches reveals that the proposed method is superior to the others and confirms its feasibility.
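The feature-extraction stage summarized above (greedy matching pursuit over a Gabor dictionary, then binning the selected atoms' scale, frequency, and magnitude into a scale-frequency map) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary parameters, frame length, and uniform binning scheme are assumptions, and the paper's nonuniform critical-band layout and sparse-coding step are omitted.

```python
import numpy as np

def gabor_atom(scale, freq, length):
    """Unit-norm real Gabor atom: Gaussian envelope times a cosine carrier."""
    t = np.arange(length)
    g = np.exp(-np.pi * ((t - length / 2) / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / (np.linalg.norm(g) + 1e-12)

def matching_pursuit(frame, dictionary, n_atoms=5):
    """Greedily pick the atoms most correlated with the current residual."""
    residual = frame.astype(float).copy()
    selected = []
    for _ in range(n_atoms):
        corrs = dictionary @ residual          # inner products with every atom
        k = int(np.argmax(np.abs(corrs)))      # best-matching atom index
        selected.append((k, corrs[k]))
        residual = residual - corrs[k] * dictionary[k]
    return selected, residual

def scale_frequency_map(selected, atom_params, scales, freqs):
    """Accumulate |magnitude| of the picked atoms on a scale-frequency grid."""
    m = np.zeros((len(scales), len(freqs)))
    for k, mag in selected:
        s, f = atom_params[k]
        m[scales.index(s), freqs.index(f)] += abs(mag)
    return m

# Build a small Gabor dictionary (scales x normalized frequencies; assumed values).
length, scales, freqs = 256, [8, 16, 32], [0.05, 0.1, 0.2]
atom_params = [(s, f) for s in scales for f in freqs]
D = np.vstack([gabor_atom(s, f, length) for s, f in atom_params])

# Decompose a synthetic "frame" built from one known atom.
frame = 2.0 * D[4]
selected, residual = matching_pursuit(frame, D, n_atoms=2)
sfmap = scale_frequency_map(selected, atom_params, scales, freqs)
```

Because the atoms are unit-norm, subtracting `corrs[k] * dictionary[k]` removes the residual's projection onto the chosen atom at each step, so the residual energy is non-increasing; in the verification stage the paper scores these representations via Gaussian-modeled residual errors rather than the raw map shown here.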

Original language: English
Article number: 7114224
Pages (from-to): 1552-1562
Number of pages: 11
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 23
Issue number: 10
DOIs
Publication status: Published - 1 Oct 2015
