Why Does Zero-Shot Cross-Lingual Generation Fail? An Explanation and a Solution
Tianjian Li, Kenton Murray
Findings Paper: Multilingualism and Cross-Lingual NLP
Session 7: Multilingualism and Cross-Lingual NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Spotlight Session: Metropolitan West
Conference Room: Metropolitan West
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
cross-lingual transfer
TLDR:
Zero-shot cross-lingual transfer occurs when a multilingual model is trained to perform a task in one language and then applied to another language.
Although this approach has achieved success in various classification tasks, its performance on natural language generation tasks falls short in quality, and the model sometimes generates output in the wrong language.
Abstract:
Zero-shot cross-lingual transfer occurs when a multilingual model is trained to perform a task in one language and then applied to another language.
Although the zero-shot cross-lingual transfer approach has achieved success in various classification tasks, its performance on natural language generation tasks falls short in quality, and the model sometimes generates output in the wrong language. In our study, we show that the fine-tuning process learns language-invariant representations, which are beneficial for classification tasks but harmful for generation tasks. Motivated by this, we propose a simple method to regularize the model against learning language-invariant representations, together with a method to select model checkpoints without a development set in the target language; both result in better generation quality. Experiments on three semantically diverse generation tasks show that our method reduces the accidental translation problem by 68% and improves the ROUGE-L score by 1.5 on average.
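The abstract does not spell out the regularizer, so the following is only a minimal, hypothetical sketch of the general idea it gestures at: add a penalty on the cross-lingual similarity of pooled encoder representations so that fine-tuning is discouraged from collapsing different languages into a single invariant space. The names `encoder`, `src_batch`, `tgt_batch`, and `reg_weight`, and the use of a Hugging Face-style encoder output, are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only -- NOT the authors' exact method.
# Assumes a Hugging Face-style encoder whose forward pass returns
# an object with `last_hidden_state`, and two batches (one per
# language) of comparable inputs with `input_ids`/`attention_mask`.
import torch
import torch.nn.functional as F


def mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token representations, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)


def language_invariance_penalty(encoder, src_batch: dict, tgt_batch: dict) -> torch.Tensor:
    """Cosine similarity between pooled encoder states of two languages.

    Adding this term to the task loss with a positive weight penalizes
    cross-lingual similarity, i.e. it regularizes the encoder *against*
    becoming language-invariant during fine-tuning.
    """
    src_repr = mean_pool(encoder(**src_batch).last_hidden_state, src_batch["attention_mask"])
    tgt_repr = mean_pool(encoder(**tgt_batch).last_hidden_state, tgt_batch["attention_mask"])
    return F.cosine_similarity(src_repr, tgt_repr, dim=-1).mean()


# Hypothetical training step:
# loss = task_loss + reg_weight * language_invariance_penalty(model.encoder, src_batch, tgt_batch)
```

A positive `reg_weight` would push the two languages' pooled representations apart during fine-tuning; the paper's actual regularizer and its development-set-free checkpoint selection criterion may differ from this sketch.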