ConvRGX: Recognition, Generation, and Extraction for Self-trained Conversational Question Answering
Tianhua Zhang, Liping Tang, Wei Fang, Hongyin Luo, Xixin Wu, Helen Meng, James Glass
Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering - Long Paper
TLDR:
Collecting and constructing human-annotated corpora for training conversational question-answering (CQA) models has recently been shown to be inefficient and costly. To solve this problem, previous works have proposed training QA models with automatically generated QA data. In this work, we extend earlier studies on QA synthesis and propose an efficient QA data generation algorithm under conversational settings.
Abstract:
Collecting and constructing human-annotated corpora for training conversational question-answering (CQA) models has recently been shown to be inefficient and costly. To solve this problem, previous works have proposed training QA models with automatically generated QA data. In this work, we extend earlier studies on QA synthesis, and propose an efficient QA data generation algorithm under conversational settings. Our model recognizes potential dialogue topics, generates corresponding questions, and extracts answers from grounding passages. To improve the quality of generated QAs and downstream self-training of CQA models, we propose dropout and agreement-based QA selection methods. We conduct experiments in both data augmentation and domain adaptation settings. Experiments on the QuAC and Doc2Dial tasks show that the proposed method significantly improves the quality of generated QA data, and also improves the accuracy of self-trained CQA models based on the constructed training corpora.
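To make the described pipeline concrete, below is a minimal Python sketch of a recognize-generate-extract loop with agreement-based QA filtering. The function names `recognize_topics`, `generate_question`, and `extract_answer` are hypothetical placeholders standing in for the paper's models, and the sampling-based agreement check is one plausible reading of the dropout and agreement-based selection described in the abstract, not the authors' actual implementation.

```python
# Sketch of a recognize -> generate -> extract loop with agreement-based
# filtering. All model-facing functions here are hypothetical placeholders.
from collections import Counter
from typing import Callable, List, Tuple


def agreement_filter(
    passage: str,
    question: str,
    extract_answer: Callable[[str, str], str],
    num_samples: int = 5,
    min_agreement: float = 0.6,
) -> Tuple[str, bool]:
    """Run the answer extractor several times (e.g. with dropout enabled)
    and keep the QA pair only if the sampled answers mostly agree."""
    answers = [extract_answer(passage, question) for _ in range(num_samples)]
    best_answer, count = Counter(answers).most_common(1)[0]
    return best_answer, count / num_samples >= min_agreement


def synthesize_cqa(
    passage: str,
    recognize_topics: Callable[[str], List[str]],
    generate_question: Callable[[str, str, List[Tuple[str, str]]], str],
    extract_answer: Callable[[str, str], str],
) -> List[Tuple[str, str]]:
    """Build a synthetic conversation over a grounding passage: for each
    recognized topic span, generate a question conditioned on the dialogue
    history, extract an answer, and keep the pair only if it passes the
    agreement filter."""
    history: List[Tuple[str, str]] = []
    for topic in recognize_topics(passage):
        question = generate_question(passage, topic, history)
        answer, consistent = agreement_filter(passage, question, extract_answer)
        if consistent:
            history.append((question, answer))
    return history
```

The retained (question, answer) pairs would then serve as synthetic training data for self-training the downstream CQA model; the agreement threshold trades off the size of the generated corpus against its reliability.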