Score It All Together: A Multi-Task Learning Study on Automatic Scoring of Argumentative Essays
Yuning Ding, Marie Bexte, Andrea Horbach
Findings Paper: NLP Applications
Session 1: NLP Applications (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Spotlight Session: Spotlight - Metropolitan East (Spotlight)
Conference Room: Metropolitan East
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
educational applications, grammatical error correction, essay scoring
TLDR:
When scoring argumentative essays in an educational context, not only the presence or absence of certain argumentative elements but also their quality is important.
On the recently published student essay dataset PERSUADE, we first show that the automatic scoring of argument quality benefits from additional information about context, writing prompt and argument type.
Abstract:
When scoring argumentative essays in an educational context, not only the presence or absence of certain argumentative elements but also their quality is important.
On the recently published student essay dataset PERSUADE, we first show that the automatic scoring of argument quality benefits from additional information about context, writing prompt and argument type. We then explore different combinations of three tasks: automated span detection, type prediction and quality prediction. Results show that a multi-task learning approach combining the three tasks outperforms sequential approaches that first learn to segment and then predict the quality/type of a segment.
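The joint setup described in the abstract can be illustrated with a minimal sketch: one shared encoder feeds three task heads (span detection, argument-type classification, quality prediction), and their losses are summed into a single training objective. This is a hypothetical toy illustration of multi-task learning in general, not the authors' architecture; all dimensions, label counts and names below are assumptions (e.g. 7 argument types and 3 quality levels, roughly matching PERSUADE's annotation scheme).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
DIM_IN, DIM_SHARED = 8, 16
N_BIO, N_TYPES, N_QUALITY = 3, 7, 3  # B/I/O span tags; 7 argument types; 3 quality levels

# One shared encoder, three task-specific heads.
W_shared = rng.normal(size=(DIM_IN, DIM_SHARED)) * 0.1
heads = {
    "span": rng.normal(size=(DIM_SHARED, N_BIO)) * 0.1,
    "type": rng.normal(size=(DIM_SHARED, N_TYPES)) * 0.1,
    "quality": rng.normal(size=(DIM_SHARED, N_QUALITY)) * 0.1,
}

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    # Shared token representation, then per-task probability distributions.
    h = np.tanh(x @ W_shared)
    return {name: softmax(h @ W) for name, W in heads.items()}

def joint_loss(probs, labels):
    # Multi-task objective: sum of per-task cross-entropy losses,
    # so all three tasks are learned together rather than sequentially.
    total = 0.0
    for name, y in labels.items():
        p = probs[name][np.arange(len(y)), y]
        total += -np.log(p).mean()
    return total

# Toy input: 5 "tokens" of an essay with labels for each task.
x = rng.normal(size=(5, DIM_IN))
labels = {
    "span": np.array([0, 1, 1, 2, 2]),
    "type": np.array([3, 3, 3, 3, 3]),
    "quality": np.array([2, 2, 2, 2, 2]),
}

probs = forward(x)
print(joint_loss(probs, labels))  # a single scalar loss over all three tasks
```

A sequential pipeline would instead train the span model first and feed its segments to separately trained type/quality models; the shared-encoder-plus-summed-loss pattern above is what lets the three tasks inform each other during training.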