Small Data, Big Impact: Leveraging Minimal Data for Effective Machine Translation

Jean Maillard, Cynthia Gao, Elahe Kalbassi, Kaushik Ram Sadagopan, Vedanuj Goswami, Philipp Koehn, Angela Fan, Francisco Guzman

Main Track: Linguistic Diversity (Oral Paper)

Session 3: Linguistic Diversity (Oral)
Conference Room: Pier 7&8
Conference Time: July 11, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 11, Session 3 (13:00-14:30 UTC)
Keywords: less-resourced languages, resources for less-resourced languages
Languages: Acehnese, Moroccan Arabic, Egyptian Arabic, Bambara, Balinese, Bhojpuri, Banjar, Buginese, Crimean Tatar, Southwestern Dinka, Dzongkha, Friulian, Nigerian Fulfulde, Guarani, Chhattisgarhi, Kashmiri, Central Kanuri, Ligurian, Limburgish, Lombard, Latgalian, Magahi, Meitei, Maori, Nuer, Dari, Southern Pashto, Sicilian, Shan, Sardinian, Silesian, Tamasheq, Central Atlas Tamazight, Venetian
Abstract: For many languages, machine translation progress is hindered by the lack of reliable training data. Models are trained on whatever pre-existing datasets may be available and then augmented with synthetic data, because it is often not economical to pay for the creation of large-scale datasets. But for the case of low-resource languages, would the creation of a few thousand professionally translated sentence pairs give any benefit? In this paper, we show that it does. We describe a broad data collection effort involving around 6k professionally translated sentence pairs for each of 39 low-resource languages, which we make publicly available. We analyse the gains of models trained on this small but high-quality data, showing that it has significant impact even when larger but lower quality pre-existing corpora are used, or when data is augmented with millions of sentences through backtranslation.