A Call for Standardization and Validation of Text Style Transfer Evaluation
Phil Sidney Ostheimer, Mayank Kumar Nagda, Marius Kloft, Sophie Fellenz
Findings Paper (Theme: Reality Check)
Session 7: Theme: Reality Check (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Spotlight Session: Metropolitan West
Conference Room: Metropolitan West
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, 23:00 - July 11, 01:00 (UTC)
Keywords:
evaluation
TLDR:
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
Therefore, we conduct a meta-analysis of human and automated TST evaluation and experimentation, thoroughly examining the existing literature in the field.
The meta-analysis reveals a substantial standardization gap in human and automated evaluation.
Abstract:
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
Therefore, we conduct a meta-analysis of human and automated TST evaluation and experimentation, thoroughly examining the existing literature in the field.
The meta-analysis reveals a substantial standardization gap in human and automated evaluation.
In addition, we find a validation gap: only a few automated metrics have been validated using human experiments.
We therefore thoroughly scrutinize both the standardization and the validation gap and reveal the resulting pitfalls.
This work also paves the way toward closing both gaps in TST evaluation by setting out requirements that future research should meet.