Consistency Regularization Training for Compositional Generalization
Yongjing Yin, Jiali Zeng, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
Main Conference: Machine Learning for NLP (Oral Paper)
Session 6: Machine Learning for NLP (Oral)
Conference Room: Metropolitan Centre
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Session 6 (13:00-14:30 UTC)
Keywords:
generalization
TLDR:
Existing neural models have difficulty generalizing to unseen combinations of seen components. To achieve compositional generalization, models are required to consistently interpret (sub)expressions across contexts. Without modifying model architectures, we improve the compositional generalization capability of the Transformer through consistency regularization training.
Abstract:
Existing neural models have difficulty generalizing to unseen combinations of seen components. To achieve compositional generalization, models are required to consistently interpret (sub)expressions across contexts. Without modifying model architectures, we improve the compositional generalization capability of the Transformer through consistency regularization training, which promotes representation consistency across samples and prediction consistency for a single sample. Experimental results on semantic parsing and machine translation benchmarks empirically demonstrate the effectiveness and generality of our method. In addition, we find that the prediction consistency scores on in-distribution validation sets can serve as an alternative for evaluating models during training, when commonly used metrics are not informative.
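To make the prediction-consistency idea concrete, below is a minimal PyTorch sketch assuming the regularizer is realized as a symmetric KL term between two dropout-perturbed forward passes of the same sample, added to the usual cross-entropy loss. The helper name `prediction_consistency_loss`, the `model(src, tgt_in)` call signature, and the weight `alpha` are hypothetical illustrations, not the authors' exact formulation, and the representation-consistency term across samples is omitted here.

```python
import torch
import torch.nn.functional as F

def prediction_consistency_loss(model, src, tgt_in, tgt_out, pad_id, alpha=1.0):
    """Cross-entropy plus a symmetric-KL consistency term between two
    stochastic forward passes of the same sample (illustrative sketch)."""
    logits1 = model(src, tgt_in)  # first pass; dropout makes it stochastic
    logits2 = model(src, tgt_in)  # second pass with a different dropout mask

    # Standard token-level cross-entropy, averaged over the two passes.
    ce = 0.5 * (
        F.cross_entropy(logits1.transpose(1, 2), tgt_out, ignore_index=pad_id)
        + F.cross_entropy(logits2.transpose(1, 2), tgt_out, ignore_index=pad_id)
    )

    # Symmetric KL between the two predictive distributions, masked at padding.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    mask = (tgt_out != pad_id).unsqueeze(-1).float()
    kl = 0.5 * (
        F.kl_div(logp1, logp2, log_target=True, reduction="none")
        + F.kl_div(logp2, logp1, log_target=True, reduction="none")
    )
    kl = (kl * mask).sum() / mask.sum()

    return ce + alpha * kl
```

The per-sample KL term here also yields a prediction consistency score that can be tracked on an in-distribution validation set, which is the quantity the abstract suggests using when standard metrics are uninformative.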