Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language Models in Dialogues

Chuyuan Li, Patrick Huber, Wen Xiao, Maxime Amblard, Chloé Braud, Giuseppe Carenini

4th Workshop on Computational Approaches to Discourse (Extended Abstract)

Abstract: Discourse processing suffers from data sparsity, especially for dialogues. As a result, we explore approaches to build discourse structures for dialogues based on attention matrices from Pre-trained Language Models (PLMs). We investigate multiple tasks for fine-tuning and show that the dialogue-tailored Sentence Ordering task performs best. To locate and exploit discourse information in PLMs, we propose an unsupervised and a semi-supervised method. Our proposals achieve encouraging results on the STAC corpus, with F1 scores of 57.2 and 59.3 for the unsupervised and semi-supervised methods, respectively. When restricted to projective trees, the scores improve to 63.3 and 68.1.
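As a minimal, hypothetical illustration of the general idea (not the paper's exact algorithm), one can read a PLM attention matrix over discourse units as arc scores and extract an unlabeled dependency tree from it, e.g. by linking each unit to the earlier unit it attends to most. The function name and the toy matrix below are invented for the example.

```python
# Illustrative sketch: derive an (unlabeled) dependency structure over
# discourse units from an attention matrix by attaching each unit to
# the earlier unit it attends to most. Restricting heads to earlier
# units guarantees an acyclic structure rooted at unit 0.

def tree_from_attention(attn):
    """attn[i][j]: attention weight from unit i to unit j (square matrix).
    Returns a list of (head, dependent) arcs."""
    arcs = []
    for i in range(1, len(attn)):
        # head of unit i = earlier unit receiving i's highest attention
        head = max(range(i), key=lambda j: attn[i][j])
        arcs.append((head, i))
    return arcs

# Toy 4-unit dialogue; each row is one unit's attention distribution.
attn = [
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],  # unit 1 attends mostly to unit 0
    [0.2, 0.7, 0.1, 0.0],  # unit 2 attends mostly to unit 1
    [0.1, 0.1, 0.8, 0.0],  # unit 3 attends mostly to unit 2
]
print(tree_from_attention(attn))  # → [(0, 1), (1, 2), (2, 3)]
```

In practice the attention matrix would come from a pre-trained or fine-tuned model (e.g. averaged over selected heads and layers), and more sophisticated decoding such as maximum spanning tree extraction can replace the greedy earlier-head choice.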