PEIT: Bridging the Modality Gap with Pre-trained Models for End-to-End Image Translation
Shaolin Zhu, Shangjie Li, Yikun Lei, Deyi Xiong
Main: Machine Translation Main-poster Paper
Session 4: Machine Translation (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords: multimodality
TLDR:
Image translation is a task that translates an image containing text in the source language to the target language.
One major challenge with image translation is the modality gap between visual text inputs and textual inputs/outputs of machine translation (MT).
In this paper, we propose PEIT, an end-to-end image translation framework that bridges the modality gap with pre-trained models.
Abstract:
Image translation is a task that translates an image containing text in the source language to the target language.
One major challenge with image translation is the modality gap between visual text inputs and textual inputs/outputs of machine translation (MT).
In this paper, we propose PEIT, an end-to-end image translation framework that bridges the modality gap with pre-trained models.
It is composed of four essential components: a visual encoder, a shared encoder-decoder backbone network, a vision-text representation aligner coupled with the shared encoder, and a cross-modal regularizer stacked over the shared decoder.
Both the aligner and regularizer aim at reducing the modality gap.
To train PEIT, we employ a two-stage pre-training strategy with an auxiliary MT task: (1) pre-training the MT model on the MT training data to initialize the shared encoder-decoder backbone network; and (2) pre-training PEIT with the aligner and regularizer on a synthesized dataset with rendered images containing text from the MT training data.
In order to facilitate the evaluation of PEIT and promote research on image translation, we create a large-scale image translation corpus ECOIT containing 480K image-translation pairs via crowd-sourcing and manual post-editing from real-world images in the e-commerce domain.
Experiments on the curated ECOIT benchmark dataset demonstrate that PEIT substantially outperforms both cascaded image translation systems (OCR+MT) and previous strong end-to-end image translation models, with fewer parameters and faster decoding speed.
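The core idea of the aligner — pulling the shared encoder's representations of a rendered-text image toward those of the corresponding source sentence — can be illustrated with a minimal sketch. The encoder, feature dimensions, and the mean-squared alignment loss below are illustrative assumptions, not the paper's actual architecture or objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encode(x, W):
    # Stand-in for the shared encoder: one nonlinear projection
    # applied identically to both modalities.
    return np.tanh(x @ W)

d_in, d_model = 16, 8
W_shared = rng.normal(size=(d_in, d_model))

# Hypothetical paired features for a batch of 4 sentences:
# textual MT-side features, and features extracted from images
# rendering the same text (modeled here as a noisy copy).
text_feat = rng.normal(size=(4, d_in))
img_feat = text_feat + 0.1 * rng.normal(size=(4, d_in))

h_text = shared_encode(text_feat, W_shared)
h_img = shared_encode(img_feat, W_shared)

# Illustrative alignment objective: shrink the distance between
# the two modalities' representations of the same sentence.
align_loss = float(np.mean((h_text - h_img) ** 2))
print(align_loss >= 0.0)
```

Minimizing such a loss during stage-2 pre-training would push the visual branch to reuse the text representations learned in stage 1, which is the intuition behind reducing the modality gap.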