Extrinsic Evaluation of Machine Translation Metrics
Nikita Moghe, Tom Sherborne, Mark Steedman, Alexandra Birch
Main: Machine Translation Main-oral Paper
Session 2: Machine Translation (Oral)
Conference Room: Metropolitan West
Conference Time: July 10, 14:00-15:30 (EDT) (America/Toronto)
Global Time: July 10, Session 2 (18:00-19:30 UTC)
Keywords:
automatic evaluation
TLDR:
Widely used automatic MT metrics (chrF, COMET, BERTScore, etc.) show negligible segment-level correlation with downstream task success on dialogue state tracking, question answering, and semantic parsing; the paper recommends that future metrics produce labels rather than scores.
Abstract:
Automatic machine translation (MT) metrics are widely used to distinguish the
quality of machine translation systems across relatively large test sets (system-level evaluation).
However, it is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level (segment-level evaluation). In this paper, we investigate how useful MT metrics are at detecting segment-level quality by correlating metrics with how useful the translations are for downstream tasks.
We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we only have access to a monolingual task-specific model and a translation model. We calculate the correlation between the metric's ability to predict a good/bad translation and the success/failure on the final task for the machine-translated test sentences. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes. We also find that the scores provided by neural metrics are not interpretable, in large part due to having undefined ranges. We synthesise our analysis into recommendations for future MT metrics to produce labels rather than scores for more informative interaction between machine translation and multilingual language understanding.
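To make the evaluation setup concrete, below is a minimal sketch of the kind of segment-level correlation the abstract describes: each machine-translated test sentence gets a metric score and a binary downstream outcome, and the two are correlated. The scores, the 0.5 threshold, and the choice of Kendall's tau and Matthews correlation are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of segment-level meta-evaluation against a downstream task.
# Assumption: metric scores are compared with binary task outcomes via a rank
# correlation, and optionally thresholded into good/bad labels for an agreement
# statistic. The data and threshold below are hypothetical.
from scipy.stats import kendalltau
from sklearn.metrics import matthews_corrcoef

# One entry per machine-translated test sentence.
metric_scores = [0.91, 0.34, 0.78, 0.12, 0.66]   # e.g. chrF / COMET / BERTScore
task_success  = [1,    0,    1,    0,    0]       # 1 = downstream task succeeded

# Rank correlation between the metric score and the task outcome.
tau, p_value = kendalltau(metric_scores, task_success)

# Alternatively, threshold the scores into good/bad labels and measure
# agreement with the task outcome (0.5 is an illustrative threshold).
predicted_good = [int(score >= 0.5) for score in metric_scores]
mcc = matthews_corrcoef(task_success, predicted_good)

print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f}), MCC = {mcc:.3f}")
```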