Towards Higher Pareto Frontier in Multilingual Machine Translation
Yichong Huang, Xiaocheng Feng, Xinwei Geng, Baohang Li, Bing Qin
Main: Machine Translation Main-oral Paper
Session 6: Machine Translation (Oral)
Conference Room: Metropolitan West
Conference Time: July 12, 09:15-10:30 (EDT) (America/Toronto)
Global Time: July 12, Session 6 (13:15-14:30 UTC)
Keywords:
multilingual mt
TLDR:
Pareto Mutual Distillation (Pareto-MD) collaboratively trains two Pareto optimal multilingual translation models that favor different languages and lets them learn from each other via knowledge distillation, pushing the Pareto frontier outwards rather than trading off between languages.
Abstract:
Multilingual neural machine translation has witnessed remarkable progress in recent years.
However, the long-tailed distribution of multilingual corpora poses a challenge of Pareto optimization, i.e., optimizing for some languages may come at the cost of degrading the performance of others.
Existing balancing training strategies are equivalent to a series of Pareto optimal solutions, which trade off on a Pareto frontier. (In Pareto optimization, a Pareto optimal solution is one in which no objective can be improved without sacrificing at least one other objective; the set of all Pareto optimal solutions forms the Pareto frontier.)
In this work, we propose a new training framework, Pareto Mutual Distillation (Pareto-MD), towards pushing the Pareto frontier outwards rather than making trade-offs.
Specifically, Pareto-MD collaboratively trains two Pareto optimal solutions that favor different languages and allows them to learn from each other's strengths via knowledge distillation.
Furthermore, we introduce a novel strategy to enable stronger communication between Pareto optimal solutions and broaden the applicability of our approach.
Experimental results on the widely-used WMT and TED datasets show that our method significantly pushes the Pareto frontier and outperforms baselines by up to +2.46 BLEU. (Our code will be released upon acceptance.)
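As a rough illustration of the framework, below is a minimal PyTorch-style sketch of one mutual-distillation training step, assuming two encoder-decoder models whose training batches are sampled to favor different languages. The forward signature, the loss weight alpha, and the helper name pareto_md_step are illustrative assumptions, not the paper's exact formulation; padding is also ignored in the distillation term for brevity.

import torch
import torch.nn.functional as F

def pareto_md_step(model_a, model_b, batch_a, batch_b, opt_a, opt_b,
                   alpha=0.5, pad_id=0):
    # Each model is updated on its own language-biased batch with a
    # cross-entropy term plus a KL term toward the other model's
    # (frozen) predictive distribution, so the two Pareto optimal
    # solutions learn from each other's strengths.
    for student, teacher, (src, tgt), opt in (
            (model_a, model_b, batch_a, opt_a),
            (model_b, model_a, batch_b, opt_b)):
        logits = student(src, tgt[:, :-1])   # assumed forward: (src, tgt_in) -> logits
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             tgt[:, 1:].reshape(-1), ignore_index=pad_id)
        with torch.no_grad():                # teacher only provides soft targets
            t_logits = teacher(src, tgt[:, :-1])
        kd = F.kl_div(F.log_softmax(logits, dim=-1),
                      F.softmax(t_logits, dim=-1),
                      reduction='batchmean')  # pad masking omitted for brevity
        loss = (1 - alpha) * ce + alpha * kd
        opt.zero_grad()
        loss.backward()
        opt.step()

Because each model distills from a peer trained under a different sampling distribution, a model biased toward high-resource languages can recover low-resource performance and vice versa, which matches the abstract's intuition of pushing the frontier outwards rather than sliding along it.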