Compositional Data Augmentation for Abstractive Conversation Summarization

Siru Ouyang, Jiaao Chen, Jiawei Han, Diyi Yang

Main: Summarization (Main-Poster Paper)

Poster Session 5: Summarization (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 11, 16:15-17:45 (EDT) (America/Toronto)
Global Time: July 11, Poster Session 5 (20:15-21:45 UTC)
Keywords: abstractive summarization, conversational summarization
TLDR: Recent abstractive conversation summarization systems generally rely on large-scale datasets with annotated summaries. However, collecting and annotating these conversations can be time-consuming and labor-intensive. To address this issue, we present a sub-structure-level compositional data augmentation method, Compo, for generating diverse and high-quality pairs of conversations and summaries.
Abstract: Recent abstractive conversation summarization systems generally rely on large-scale datasets with annotated summaries. However, collecting and annotating these conversations is time-consuming and labor-intensive. To address this issue, we present Compo, a sub-structure-level compositional data augmentation method for generating diverse and high-quality pairs of conversations and summaries. Specifically, Compo first extracts conversation structures such as topic splits and action triples as basic units. It then composes these semantically meaningful conversation snippets to create new training instances. Additionally, we explore noise-tolerant settings in both self-training and joint-training paradigms to make the most of the augmented samples. Experiments on the benchmark datasets SAMSum and DialogSum show that Compo substantially outperforms prior baselines, achieving a nearly 10% increase in ROUGE scores with limited data. Code is available at https://github.com/ozyyshr/Compo.
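As a rough illustration of the compositional idea described in the abstract, the sketch below composes topic snippets drawn from different source dialogues into a new (conversation, summary) training pair. This is a minimal sketch, not the authors' method: the function names, the naive topic splitter, the one-to-one snippet-to-summary alignment, and the toy data are all assumptions made for illustration; the actual implementation lives in the linked repository.

```python
# Hypothetical sketch of sub-structure-level compositional augmentation.
# All names and the alignment assumption below are illustrative only.
import random


def split_into_topics(conversation):
    """Placeholder topic segmentation: split a conversation (a list of
    utterances) into contiguous snippets. A real system would use an
    actual topic-segmentation model rather than fixed-size chunks."""
    return [conversation[i:i + 4] for i in range(0, len(conversation), 4)]


def compose_new_pair(dataset, num_snippets=2):
    """Build one augmented (conversation, summary) pair by sampling topic
    snippets from different source dialogues and concatenating both the
    snippets and their aligned summary sentences."""
    new_conv, new_summary = [], []
    for _ in range(num_snippets):
        conv, summary_sents = random.choice(dataset)
        snippets = split_into_topics(conv)
        idx = random.randrange(len(snippets))
        new_conv.extend(snippets[idx])
        # Assumes summary sentences align one-to-one with topic snippets;
        # this is an assumption of the sketch, not a property of the data.
        if idx < len(summary_sents):
            new_summary.append(summary_sents[idx])
    return new_conv, " ".join(new_summary)


# Usage: dataset entries are (utterance_list, summary_sentence_list) pairs.
toy = [
    (["A: Lunch?", "B: Sure.", "A: Noon?", "B: Ok."],
     ["A and B plan lunch at noon."]),
    (["C: Report done?", "D: Almost.", "C: Send it.", "D: Will do."],
     ["D will send the report to C."]),
]
print(compose_new_pair(toy))
```

Even this crude recombination hints at why compositional augmentation helps: each new pair mixes sub-structures that never co-occurred in the original data, giving the summarizer more diverse training signal from the same annotation budget.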