SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages

Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-sheng Wu

4th Workshop on Computational Approaches to Discourse (CODI), Regular Long Paper

TLDR: Text simplification research has mostly focused on sentence-level simplification, even though many desirable edits --- such as adding relevant background information or reordering content --- may require document-level context. Prior work has also predominantly framed simplification as a single-step, input-to-output task, only implicitly modeling the fine-grained, span-level edits that elucidate the simplification process.
Abstract: Text simplification research has mostly focused on sentence-level simplification, even though many desirable edits --- such as adding relevant background information or reordering content --- may require document-level context. Prior work has also predominantly framed simplification as a single-step, input-to-output task, only implicitly modeling the fine-grained, span-level edits that elucidate the simplification process. To address both gaps, we introduce the SWiPE dataset, which reconstructs the document-level editing process from English Wikipedia (EW) articles to paired Simple Wikipedia (SEW) articles. In contrast to prior work, SWiPE leverages the entire revision history when pairing pages in order to better identify simplification edits. We work with Wikipedia editors to annotate 5,000 EW-SEW document pairs, labeling more than 40,000 edits with 19 proposed categories. To scale our efforts, we propose several models to automatically label edits, achieving an F-1 score of up to 70.6, indicating that this is a tractable but challenging NLU task. Finally, we categorize the edits produced by several simplification models and find that SWiPE-trained models generate more complex edits while reducing unwanted edits.