Improving Factuality of Abstractive Summarization without Sacrificing Summary Quality
Tanay Dixit, Fei Wang, Muhao Chen
Main: Summarization Main-poster Paper
Poster Session 7: Summarization (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Poster Session 7 (15:00-16:30 UTC)
Keywords:
abstractive summarization, factuality
TLDR:
Improving the factual consistency of abstractive summarization has been a widely studied topic. However, most prior work on training factuality-aware models has ignored the negative effect such training has on summary quality. We propose EFactSum (Effective Factual Summarization),...
Abstract:
Improving the factual consistency of abstractive summarization has been a widely studied topic. However, most prior work on training factuality-aware models has ignored the negative effect such training has on summary quality. We propose EFactSum (Effective Factual Summarization), a candidate summary generation and ranking technique that improves summary factuality without sacrificing quality. We show that using a contrastive learning framework with our refined candidate summaries leads to significant gains on both factuality and similarity-based metrics. Specifically, we propose a ranking strategy that effectively combines two metrics, thereby preventing any conflict during training. Models trained with our approach show up to 6 points of absolute improvement over the base model on FactCC for XSUM and 11 points for CNN/DM, without negatively affecting either similarity-based metrics or abstractiveness.
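Illustrative sketch:
The abstract does not spell out how the two metrics are combined, so the following is a minimal, hypothetical sketch rather than the paper's actual implementation. It assumes each candidate summary already carries a factuality score (e.g. from FactCC) and a similarity score (e.g. ROUGE against the reference), and it keeps only candidates on which the two metrics agree, so that no retained pair is ranked inconsistently during contrastive training. The function name rank_candidates and the greedy-chain heuristic are assumptions made for illustration.

from typing import List

def rank_candidates(
    candidates: List[str],
    fact_scores: List[float],  # e.g. FactCC probabilities (assumed precomputed)
    sim_scores: List[float],   # e.g. ROUGE against the reference (assumed precomputed)
) -> List[str]:
    """Return a subset of candidates ordered consistently under BOTH metrics.

    Sort by factuality, then greedily keep a subsequence whose similarity
    scores are also non-increasing; any dropped candidate would have
    conflicted (better on one metric, worse on the other) with a kept one.
    """
    order = sorted(range(len(candidates)), key=lambda i: fact_scores[i], reverse=True)
    ranked, last_sim = [], float("inf")
    for i in order:
        if sim_scores[i] <= last_sim:  # both metrics agree with the chain so far
            ranked.append(candidates[i])
            last_sim = sim_scores[i]
    return ranked

if __name__ == "__main__":
    cands = ["summary A", "summary B", "summary C"]
    print(rank_candidates(cands, fact_scores=[0.9, 0.4, 0.7], sim_scores=[0.35, 0.40, 0.30]))
    # -> ['summary A', 'summary C']; B is dropped because it conflicts with A
    #    (less factual, yet more similar to the reference)

The resulting list can then supply ordered positive and negative candidates for a margin-based contrastive loss over candidate pairs, since every retained pair agrees on which candidate should be ranked higher.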