On Evaluating Multilingual Compositional Generalization with Translated Datasets
Zi Wang, Daniel Hershcovich
Main Track: Multilingualism and Cross-Lingual NLP (Oral Paper)
Session 1: Multilingualism and Cross-Lingual NLP (Oral)
Conference Room: Pier 4&5
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Keywords:
cross-lingual transfer, multilingual benchmarks, multilingual evaluation
Languages:
Chinese, Japanese
Abstract:
Compositional generalization enables efficient learning and reflects human-like inductive biases. Since most research investigating compositional generalization in NLP is conducted on English, important questions remain underexplored: Do the necessary compositional generalization abilities differ across languages? Can models compositionally generalize cross-lingually? As a first step to answering these questions, recent work used neural machine translation to translate datasets for evaluating compositional generalization in semantic parsing. However, we show that this entails critical semantic distortion. To address this limitation, we craft a faithful rule-based translation of the MCWQ dataset from English to Chinese and Japanese. Even with the resulting robust benchmark, which we call MCWQ-R, we show that the distribution of compositions still suffers due to linguistic divergences, and that multilingual models still struggle with cross-lingual compositional generalization. Our dataset and methodology will serve as useful resources for the study of cross-lingual compositional generalization in other tasks.