NVIDIA NeMo Offline Speech Translation Systems for IWSLT 2023
Oleksii Hrinchuk, Vladimir Bataev, Evelina Bakhturina, Boris Ginsburg
The 20th International Conference on Spoken Language Translation (IWSLT 2023), Long Paper
Abstract:
This paper provides an overview of NVIDIA NeMo's speech translation systems for the IWSLT 2023 Offline Speech Translation Task. This year, we focused on an end-to-end system that capitalizes on pre-trained models and synthetic data to mitigate the scarcity of direct speech translation data. When trained on IWSLT 2022 constrained data, our best En->De end-to-end model achieves an average score of 31 BLEU on 7 test sets from IWSLT 2010-2020, improving over our last year's cascade (28.4) and end-to-end (25.7) submissions. When trained on IWSLT 2023 constrained data, the average score drops to 29.5 BLEU.
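The headline number above is a mean of corpus-level BLEU scores over seven evaluation sets. A minimal sketch of that aggregation is below; the test-set names and per-set scores are hypothetical placeholders for illustration, not the paper's reported results.

```python
# Average corpus-level BLEU across several test sets, as one might do to
# summarize a system's performance over the IWSLT 2010-2020 evaluations.
# NOTE: scores below are hypothetical placeholders, not the paper's numbers.
bleu_per_test_set = {
    "tst2010": 30.2,
    "tst2013": 32.1,
    "tst2014": 30.8,
    "tst2015": 31.5,
    "tst2018": 29.9,
    "tst2019": 31.0,
    "tst2020": 31.5,
}

def average_bleu(scores: dict) -> float:
    """Unweighted mean of per-test-set corpus BLEU, rounded to 1 decimal."""
    return round(sum(scores.values()) / len(scores), 1)

print(average_bleu(bleu_per_test_set))
```

An unweighted mean treats every test set equally regardless of its size; averaging sentence counts or pooling all references into one corpus would give a different (size-weighted) figure.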