Project Details
Description
Smartphones are something modern people carry with them at all times. In many situations, however, users cannot operate their phones to reply to emails, send messages, check itineraries, or play music because they are driving, cooking, or jogging. The demand for spoken-language virtual assistants is therefore growing. For example, the advent of Actions on Google and Alexa Skills enables developers to connect their services through a conversational interface more easily. In this project, we will collect dialogues in domains including e-mail, calendar, messaging, events, and playing (finding) songs, in order to build a cross-domain dialogue system that supports daily life.

The two-year plan is as follows. In the first year, we will collect cross-domain dialogues for the above services through a Human-to-Human (H2H) method. We will train the NLU (natural language understanding) module for intent detection and slot filling, and integrate dialogue state tracking, dialogue policy decision, and natural language generation modules. In the second year, we will adopt a Machine-to-Machine (M2M) method to speed up dialogue collection, and apply transfer learning to cross-domain intent detection and slot-value identification, aiming to reduce the model's training cost and improve its generalization ability in cross-domain dialogue systems.
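The modular pipeline described above (NLU for intent detection and slot filling, then dialogue state tracking, policy decision, and natural language generation) can be sketched as follows. This is a minimal illustrative toy, not the project's actual system: the intents (`play_song`, `send_email`), slot names, and rule-based matching are all assumptions chosen only to show how the four modules connect.

```python
import re

def nlu(utterance: str) -> dict:
    """Toy rule-based NLU: detect an intent and fill slots (illustrative only)."""
    text = utterance.lower()
    m = re.search(r"play (.+)", text)
    if m:
        return {"intent": "play_song", "slots": {"song": m.group(1)}}
    if "email" in text:
        return {"intent": "send_email", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def track_state(state: dict, nlu_out: dict) -> dict:
    """Dialogue state tracking: merge the new turn's intent and slots into the state."""
    new_state = dict(state)
    new_state["intent"] = nlu_out["intent"]
    merged = dict(new_state.get("slots", {}))
    merged.update(nlu_out["slots"])
    new_state["slots"] = merged
    return new_state

def policy(state: dict) -> str:
    """Dialogue policy: choose the next system action from the current state."""
    if state["intent"] == "play_song" and "song" in state["slots"]:
        return "play"
    if state["intent"] == "send_email":
        return "ask_recipient"
    return "fallback"

def nlg(action: str, state: dict) -> str:
    """Template-based natural language generation for the chosen action."""
    if action == "play":
        return f"Playing {state['slots']['song']}."
    if action == "ask_recipient":
        return "Who should I send the email to?"
    return "Sorry, I didn't catch that."

def respond(state: dict, utterance: str) -> tuple[dict, str]:
    """Run one dialogue turn through NLU -> DST -> policy -> NLG."""
    state = track_state(state, nlu(utterance))
    return state, nlg(policy(state), state)

state, reply = respond({}, "play Yesterday")
print(reply)  # -> Playing yesterday.
```

In the first-year system each of these hand-written functions would be replaced by a trained module (e.g. a neural NLU model learned from the collected H2H dialogues), but the interfaces between the four stages stay the same.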
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/11/22 → 31/10/23 |
Keywords
- Task-Oriented Dialog
- Schema Guided Dialog
- Intent Detection
- Natural Language Generation