Using Deep-Q Network to Select Candidates from N-best Speech Recognition Hypotheses for Enhancing Dialogue State Tracking

Richard Tzong Han Tsai, Chia Hao Chen, Chun Kai Wu, Yu Cheng Hsiao, Hung Yi Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

Most state-of-the-art dialogue state tracking (DST) methods infer the dialogue state from ground-truth transcriptions of utterances. In real-world situations, utterances are transcribed by automatic speech recognition (ASR) systems, which output the n-best candidate transcriptions (hypotheses). In noisy environments, the top transcription is often imperfect, severely degrading DST accuracy and possibly causing the dialogue system to stall or loop. Missed or misrecognized words can often be recovered from the runner-up hypotheses (ranks 2 to n), which can be used to improve DST accuracy. However, looking beyond the top-ranked ASR result poses a dilemma: going too far may introduce noise, while not going far enough may not uncover any useful information. In this paper, we propose a novel approach based on deep reinforcement learning that automatically determines when to stop reexamining runner-up ASR hypotheses. Our method outperforms the baseline system, which uses only the top-1 ASR result, by 3.1%. On the ten dialogue rounds with the largest word error rate (WER), our method improves DST accuracy by 15.4%, five times the overall improvement (3.1%). This gain is expected, since our method can select informative ASR results at any rank.
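The stop/continue decision described above can be framed as a sequential decision process over the n-best list. The sketch below is purely illustrative and is not the authors' implementation: it replaces the paper's Deep-Q Network with a tabular Q-learning stand-in, and the reward function is synthetic (a toy model of "deeper ranks sometimes recover useful words but add noise"). The state is the current hypothesis rank; the actions are STOP (accept the hypotheses examined so far) or CONTINUE (examine the next rank).

```python
import random

# Illustrative sketch (NOT the authors' code): a tabular Q-learning
# stand-in for the paper's DQN. The agent walks down the n-best ASR
# hypothesis list; at each rank it either STOPs or CONTINUEs.
N_BEST = 5
STOP, CONT = 0, 1
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_BEST + 1)]  # Q[rank][action]

def simulate_reward(rank):
    """Synthetic stand-in for the DST-accuracy reward: deeper ranks
    occasionally recover misrecognized words but also add noise."""
    useful = random.random() < max(0.0, 0.6 - 0.2 * rank)
    return 1.0 if useful else -0.2

def run_episode():
    rank = 0
    while rank < N_BEST:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice([STOP, CONT])
        else:
            a = STOP if Q[rank][STOP] >= Q[rank][CONT] else CONT
        if a == STOP:
            Q[rank][a] += ALPHA * (0.0 - Q[rank][a])  # stopping ends the episode
            return rank
        r = simulate_reward(rank)
        nxt = rank + 1
        target = r + GAMMA * max(Q[nxt])  # one-step Q-learning update
        Q[rank][a] += ALPHA * (target - Q[rank][a])
        rank = nxt
    return rank

random.seed(0)
for _ in range(2000):
    run_episode()

# Greedy policy after training: per rank, 0 = stop, 1 = continue.
policy = [STOP if Q[r][STOP] >= Q[r][CONT] else CONT for r in range(N_BEST)]
print("learned stop/continue policy per rank:", policy)
```

In the paper's setting the state would additionally encode ASR features (e.g. hypothesis content and confidence) and the reward would come from DST accuracy, with a neural network approximating Q; the tabular version above only shows the decision structure.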

Original language: English
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 7375-7379
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs
State: Published - May 2019
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 12 May 2019 - 17 May 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country/Territory: United Kingdom
City: Brighton
Period: 12/05/19 - 17/05/19

Keywords

  • Automatic Speech Recognition
  • Deep Reinforcement Learning
  • Deep-Q Network
  • Dialogue State Tracking
