CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation

08/29/2023
by   Devaansh Gupta, et al.

There has been growing interest in multimodal machine translation (MMT) systems that enhance neural machine translation (NMT) with visual knowledge. In this setup, images serve as auxiliary information during training and, more recently, are no longer required at inference time. However, previous works struggle to train powerful MMT models from scratch due to the scarcity of annotated multilingual vision-language data, especially for low-resource languages. At the same time, there has been an influx of multilingual pre-trained models for NMT and multimodal pre-trained models for vision-language tasks, primarily in English, which have shown exceptional generalisation ability. These models, however, are not directly applicable to MMT, since they do not provide aligned multimodal multilingual features for generative tasks. To alleviate this issue, instead of designing complex modules for MMT, we propose CLIPTrans, which simply adapts the independently pre-trained multimodal M-CLIP and the multilingual mBART. To align their embedding spaces, mBART is conditioned on M-CLIP features through a prefix sequence generated by a lightweight mapping network. The model is trained in a two-stage pipeline that first warms it up on image captioning and then trains it on the translation task. Our experiments demonstrate the merits of this framework, pushing the state-of-the-art across standard benchmarks forward by an average of +2.67 BLEU. The code can be found at www.github.com/devaansh100/CLIPTrans.
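The central mechanism described in the abstract, conditioning mBART on M-CLIP features through a prefix produced by a lightweight mapping network, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the class name, layer widths, prefix length, and embedding dimensions are assumptions made for exposition, not the authors' actual hyperparameters or code.

    # Minimal sketch of prefix conditioning, assuming a 512-d M-CLIP
    # embedding and mBART-large's 1024-d token embeddings (both are
    # assumptions, not taken from the paper).
    import torch
    import torch.nn as nn

    class MappingNetwork(nn.Module):
        """Maps one M-CLIP embedding to `prefix_len` pseudo-token
        embeddings living in mBART's embedding space."""
        def __init__(self, clip_dim=512, mbart_dim=1024, prefix_len=10):
            super().__init__()
            self.prefix_len = prefix_len
            self.mbart_dim = mbart_dim
            self.proj = nn.Sequential(
                nn.Linear(clip_dim, mbart_dim * prefix_len),
                nn.Tanh(),
                nn.Linear(mbart_dim * prefix_len, mbart_dim * prefix_len),
            )

        def forward(self, clip_emb):              # (batch, clip_dim)
            prefix = self.proj(clip_emb)          # (batch, prefix_len * mbart_dim)
            return prefix.view(-1, self.prefix_len, self.mbart_dim)

    # Conditioning: prepend the generated prefix to the embedded source
    # tokens and feed the concatenation to the mBART encoder (e.g. via
    # its `inputs_embeds` argument in common seq2seq implementations).
    mapper = MappingNetwork()
    clip_emb = torch.randn(2, 512)                # M-CLIP image or text features
    token_embs = torch.randn(2, 20, 1024)         # embedded source tokens (placeholder)
    encoder_input = torch.cat([mapper(clip_emb), token_embs], dim=1)  # (2, 30, 1024)

In the two-stage pipeline, the same interface serves both stages: stage one feeds image features through the mapper while training on captioning, and stage two reuses the aligned embedding spaces for translation, which is what lets images be dropped at inference time.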


