Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models
Jun Liu, Shuang Cheng, Zineng Zhou, Yang Gu, Jian Ye, Haiyong Luo
Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering - Long Paper
TLDR:
The DialDoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user's queries, the dialogue history, and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese.
Abstract:
The DialDoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user's queries, the dialogue history, and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking, and Generation. We conduct post-training on high-resource languages such as English and Chinese, exploiting similarities between languages to improve performance on low-resource languages. Additionally, we use prompting to activate the model's abilities across diverse languages within the dialogue domain, and we explore what makes a good prompt. Our comprehensive experiments demonstrate the effectiveness of the proposed methods, which achieved first place on the leaderboard with a total score of 215.40 across token-level F1, SacreBLEU, and ROUGE-L metrics.
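A minimal sketch of the cascaded pipeline the abstract describes, using stdlib-only stand-ins (token-overlap retrieval and reranking, a template prompt) in place of the post-trained multilingual models the paper actually uses; every function, class, and sample string below is illustrative and not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "User" or "Agent"
    text: str

def retrieve(query: str, passages: list[str], k: int = 20) -> list[str]:
    # Stage 1: coarse candidate selection over the whole collection.
    # Token overlap stands in for a post-trained dense retriever.
    q = set(query.lower().split())
    return sorted(passages,
                  key=lambda p: len(q & set(p.lower().split())),
                  reverse=True)[:k]

def rerank(query: str, candidates: list[str], k: int = 5) -> list[str]:
    # Stage 2: finer scoring of the shortlist; a cross-encoder-style
    # reranker would jointly encode each (query, passage) pair here.
    q = set(query.lower().split())
    def score(p: str) -> float:
        toks = set(p.lower().split())
        return len(q & toks) / max(len(toks), 1) ** 0.5
    return sorted(candidates, key=score, reverse=True)[:k]

def build_prompt(query: str, history: list[Turn], passages: list[str]) -> str:
    # Stage 3 input: condition the generator on the reranked evidence
    # plus the dialogue context.
    docs = "\n".join(f"- {p}" for p in passages)
    hist = "\n".join(f"{t.speaker}: {t.text}" for t in history)
    return (f"Grounding passages:\n{docs}\n\n"
            f"Dialogue history:\n{hist}\n\n"
            f"User: {query}\nAgent:")

if __name__ == "__main__":
    corpus = ["Visas are processed within ten business days.",
              "Passport renewal requires form DS-82.",
              "Office hours are 9am to 5pm on weekdays."]
    history = [Turn("User", "Hi, I have a question about visas.")]
    query = "How long does visa processing take?"
    shortlist = retrieve(query, corpus)
    evidence = rerank(query, shortlist)
    print(build_prompt(query, history, evidence))  # fed to the generator
```

The cascade's design rationale: a cheap first stage narrows the candidate set so the more expensive reranker only scores a shortlist, and the generator sees just a handful of grounding passages inside its prompt.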