Target-Side Augmentation for Document-Level Machine Translation
Guangsheng Bao, Zhiyang Teng, Yue Zhang
Main: Machine Translation (Main-poster Paper)
Session 1: Machine Translation (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 EDT (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
  
Keywords: mt theory
  
    
Abstract:
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns. To address this challenge, we propose a target-side augmentation method, introducing a data augmentation (DA) model to generate many potential translations for each source document. By learning on this wider range of translations, an MT model can learn a smoothed distribution, thereby reducing the risk of data sparsity. We demonstrate that the DA model, which estimates the posterior distribution, largely improves MT performance, outperforming the previous best system by 2.30 s-BLEU on News and achieving a new state of the art on the News and Europarl benchmarks.
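
As a rough illustration of the idea in the abstract, the minimal sketch below samples several candidate translations per source document from an off-the-shelf seq2seq model and pairs each sample with its source to enlarge the training data. The checkpoint name, sampling settings, and data format are assumptions chosen for illustration; this is not the paper's DA model, which estimates a posterior distribution rather than simple forward sampling.

```python
# Minimal sketch of target-side augmentation: sample several candidate
# translations per source document and pair each with the original source
# to enlarge the MT training set. The model name and sampling settings are
# illustrative assumptions, not the paper's DA model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-en-de"  # assumed off-the-shelf En-De model
NUM_SAMPLES = 4                            # candidate translations per source

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
da_model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def augment(source_docs):
    """Return (source, sampled_translation) pairs for MT training."""
    augmented = []
    for src in source_docs:
        inputs = tokenizer(src, return_tensors="pt", truncation=True)
        # Nucleus sampling yields diverse targets for the same source,
        # approximating a wider target-side distribution.
        outputs = da_model.generate(
            **inputs,
            do_sample=True,
            top_p=0.9,
            num_return_sequences=NUM_SAMPLES,
            max_new_tokens=256,
        )
        for hyp in tokenizer.batch_decode(outputs, skip_special_tokens=True):
            augmented.append((src, hyp))
    return augmented

if __name__ == "__main__":
    docs = ["The committee met on Monday. It approved the new budget."]
    for src, hyp in augment(docs):
        print(f"SRC: {src}\nAUG: {hyp}\n")
```

The augmented pairs would then be mixed with the original parallel data when training the document-level MT model, so that each source is seen with multiple plausible targets rather than a single reference.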
          
Anthology
Underline