In recent years, many studies have proposed neural network-based dialogue systems, yet generating convincing dialogue remains one of the most challenging tasks in the field. Dialogue systems fall into two categories. The first is the task-oriented, or closed-domain, dialogue system, which has limited conversational ability but is designed to perform specific tasks, as in a smart kiosk or a customer service agent. The second is the non-task-oriented, or open-domain, dialogue system, which aims to imitate real human conversation. Current techniques, however, still cannot deliver human-like dialogue smoothly, because the user does not necessarily have a clear intention when engaging in open conversation.

Most prior work on dialogue systems has investigated the sequence-to-sequence (Seq2Seq) model based on the RNN architecture. Recent studies have shown that the Transformer model outperforms Seq2Seq models on neural machine translation (NMT) tasks, but investigation of Transformer-based models for open dialogue systems is still lacking. Furthermore, the automatic evaluation metrics commonly used in machine translation and dialogue systems, such as BLEU and perplexity, are inadequate for evaluating open dialogue systems. For example, Liu et al. (2016) showed that these metrics correlate only weakly with human judgment when models are trained on conversational data sets. This project will therefore train both an RNN-based Seq2Seq model and a Transformer-based model on dialogue-related data sets. Focusing on the open-dialogue task, the project will use a variety of quantitative evaluation metrics as well as qualitative analysis to assess the suitability of the two model architectures for open-domain dialogue generation.
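The inadequacy of BLEU for open dialogue noted above can be illustrated with a minimal sketch of BLEU's modified n-gram precision (an illustrative reimplementation, not the official script; the function and variable names are our own). A perfectly appropriate reply that happens to share no words with the single reference response scores near zero, even though a human would judge it acceptable:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=2, eps=1e-9):
    """Sketch of sentence-level BLEU up to max_n-grams with epsilon smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Modified precision: clip each n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(max(overlap, eps) / total)
    # Brevity penalty discourages hypotheses shorter than the reference.
    bp = 1.0 if len(hypothesis) >= len(reference) else math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = ["i", "am", "fine", "thank", "you"]
good_reply = ["pretty", "good", "thanks", "for", "asking"]  # appropriate, zero overlap
print(sentence_bleu(reference, good_reply))  # near zero despite being a valid reply
```

Both replies answer "how are you?" equally well, yet BLEU ranks the paraphrase at essentially zero, which is the weak correlation with human judgment that Liu et al. (2016) measured.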
Human evaluation is an important evaluation method for dialogue systems, but studies differ considerably in how it is carried out. We plan to analyze the correlation and reliability between each automatic evaluation metric and human evaluation. Finally, we plan to apply the outcomes of this project to our ongoing Ministry of Science and Technology industry-academia collaboration project, where applications such as precision marketing and intelligent customer service are under discussion.
Effective start/end date: 1/08/20 → 31/07/21
Keywords
- open dialogue systems
- text generation
- attention mechanism
- language models
- evaluation methods
- applications of dialogue systems