UFO: A UniFied TransfOrmer for Vision-Language Representation Learning

by Jianfeng Wang, et al.

In this paper, we propose a single UniFied transfOrmer (UFO), which is capable of processing either unimodal inputs (e.g., image or language) or multimodal inputs (e.g., the concatenation of the image and the question), for vision-language (VL) representation learning. Existing approaches typically design an individual network for each modality and/or a specific fusion network for multimodal tasks. To simplify the network architecture, we use a single transformer network and enforce multi-task learning during VL pre-training, which includes the image-text contrastive loss, image-text matching loss, and masked language modeling loss based on the bidirectional and the seq2seq attention mask. The same transformer network is used as the image encoder, the text encoder, or the fusion network in different pre-training tasks. Empirically, we observe less conflict among different tasks and achieve new state-of-the-art results on visual question answering, COCO image captioning (cross-entropy optimization) and nocaps (in SPICE). On other downstream tasks, e.g., image-text retrieval, we also achieve competitive performance.




1 Introduction

Recent years have seen tremendous progress in vision-language (VL) representation learning, where the model is designed to understand the vision/language signals and the relation between the modalities. Applications include image captioning [LinMBHPRDZ14, abs-1812-08658], visual question answering (VQA) [GoyalKSBP16], image-text retrieval, etc. Typical approaches first extract the features from each modality and then feed them to a fusion network to jointly learn the representation. Regarding how the fusion network is designed, we roughly group the existing approaches into two categories: light fusion and heavy fusion, as shown in Fig. 1(a) and (b).

Figure 1: Different network designs for VL representation learning. (a) Light fusion: few parameters are dedicated to multimodal fusion. (b) Heavy fusion: a transformer network is applied to fuse the multimodal inputs. Before fusion, the image can be encoded as region features, patch features, or CNN grid features, and the text can be encoded through an embedding layer or a transformer network. (c) Our UniFied transfOrmer (UFO): a single network is reused as image encoder, text encoder, and fusion network for different tasks.

Light fusion (Fig. 1(a)) adopts a separate encoder for both the image and the text, such as in CLIP [RadfordKHRGASAM21] and ALIGN [JiaYXCPPLSLD21]. The image encoder can be ResNet [HeZRS16] or a vision transformer [DosovitskiyB0WZ21], while the text encoder is typically a transformer network [VaswaniSPUJGKP17]. The fusion is the contrastive loss based on the cosine similarity, such that the representations of the two modalities are aligned into the same semantic space. A favorable application is image-text retrieval, where each image or text description is represented as a fixed vector for fast similarity search.

While few parameters are allocated in the light fusion, heavy fusion approaches, shown in Fig. 1(b), apply a transformer network on top of the unimodal features to jointly learn the representation. The image can be encoded as object features [00010BT0GZ18, Li0LZHZWH0WCG20, ChenLYK0G0020, ZhouPZHCG20, LuBPL19, SuZCLLWD20, TanB19, abs-1908-03557, LiDFGJ20, abs-2009-13682, abs-2101-00529, abs-2012-06946, abs-2104-02096] through a Faster RCNN [RenHGS15], as grid features [jiang2020defense, abs-2004-00849] from a convolutional neural network, or as patch features [KimSK21] from a linear projection on raw pixels. The text can be encoded as token representations by a transformer network as in [TanB19, abs-2107-07651] or by a simple embedding layer as in [Li0LZHZWH0WCG20, abs-2101-00529, abs-2012-06946, abs-2009-13682, KimSK21]. With a heavier fusion network, the final representation can better capture the contextual connection between the modalities. A typical application is VQA, where the network predicts the answer based on both the image and the question.

Existing approaches design different network architectures for different tasks. As the transformer network can be used for all these components, in this paper we attempt to design a single UniFied transfOrmer (UFO) for both light-fusion and heavy-fusion scenarios, as shown in Fig. 1(c). For the light-fusion tasks, the transformer is used as both the image encoder and the text encoder. For the heavy-fusion tasks, the same transformer is reused as a fusion module to process the two signals together.

Before being fed into the transformer, the raw image pixels are grouped into patches and projected by a linear mapping, and the text description is projected into the same dimension by an embedding layer. In this way, we allocate as few learnable parameters as possible to the modality-specific processing and devote the majority to the shared transformer network. The network automatically adjusts how much representation power is allocated to each individual modality and how much to the joint representation.

To empower the network with the capability for unimodal inputs, we apply the image-text contrastive (ITC) loss on the outputs during VL pre-training (VLP). For the multimodal fusion capability, we apply the image-text matching (ITM) loss and the masked language modeling (MLM) loss based on the bidirectional and seq2seq attention. To optimize the network with multiple tasks, we randomly choose one of the tasks in each iteration for efficiency and leverage a momentum teacher, motivated by [abs-1911-05722, abs-2107-07651], to guide the learning. With an extensive ablation study, we observe less conflict among these tasks; in certain cases, different tasks even help each other. For example, the MLM task significantly improves the retrieval task based on ITC. Meanwhile, we also achieve new state-of-the-art results (as of 11/2021, among peer-reviewed publications) on VQA, COCO image captioning (cross-entropy optimization), and nocaps (in SPICE), and competitive performance on other downstream tasks, e.g., image-text retrieval.

2 Related Work

2.1 Light Fusion

CLIP [RadfordKHRGASAM21] aligns the image and the text description through a contrastive loss. The image is encoded by ResNet [HeZRS16] or a vision transformer [DosovitskiyB0WZ21] and is represented as a single vector after a pooling operator. The text description is encoded by a transformer network and is also represented by a single vector. During pre-training, the contrastive loss aligns the representations of a matched image-text pair to be similar, while pushing the representations of mismatched image-text pairs to be dissimilar. ALIGN [JiaYXCPPLSLD21] further scales up the contrastive loss on large-scale noisy image-text data and mainly adopts EfficientNet [TanL19] as the image encoder. Beyond the grid-level representation, LightningDOT [sun2021lightningdot] explores region-level features through a Faster RCNN [RenHGS15] network to obtain the image representation. A favorable application is image-text retrieval. For zero-shot image classification, the image classes can be phrased as text descriptions, and the problem can be converted into a text retrieval task.

VATT [abs-2104-11178] extends contrastive learning from the image domain to the video domain and aligns video frames, audio, and text. A shared transformer network is also studied in [abs-2104-11178] as a modality-agnostic network. The difference from our work is that VATT [abs-2104-11178] belongs to light fusion, while we focus on unifying light fusion and heavy fusion.

2.2 Heavy Fusion

Heavy-fusion networks allocate many more parameters to the modality fusion. Before fusion, each modality is encoded into the same dimensional space. For the image, one widely-adopted approach is to use an off-the-shelf Faster RCNN model [RenHGS15, 00010BT0GZ18] to extract multiple region features, e.g., as in [00010BT0GZ18, Li0LZHZWH0WCG20, ChenLYK0G0020, ZhouPZHCG20, LuBPL19, SuZCLLWD20, TanB19, abs-1908-03557, LiDFGJ20, abs-2009-13682, abs-2012-06946, abs-2104-02096]. VinVL [abs-2101-00529] explores an even stronger region detector to push the performance, while MiniVLM [abs-2012-06946] designs an efficient region detector for real-time VL applications. Instead of the region (sparse) features, another recent direction is to use the grid (dense) features, which can be extracted from ResNet as in [jiang2020defense, abs-2004-00849, abs-2104-12763], or from a vision transformer as in [abs-2107-07651, xue2021probing]. An advantage of the grid feature is that the image encoder can be trained or fine-tuned together with other network components without bounding box annotations. For the text, a simple approach is to apply one embedding layer after tokenization, such as the models in [Li0LZHZWH0WCG20, abs-2101-00529, abs-2012-06946, abs-2009-13682, KimSK21, ChenLYK0G0020], or to leverage a specific transformer network as in [abs-2107-07651, TanB19].

With the extracted features, the fusion network is applied to learn the contextual representation. The structure can be based on the cross-attention module between different modalities as in [TanB19, LuBPL19], or on the self-attention module as in [ChenLYK0G0020, Li0LZHZWH0WCG20, abs-2101-00529, abs-2012-06946], where the features of multiple modalities are simply concatenated. In [cho2021unifying, abs-2104-12763], a transformer encoder and decoder are applied on the visual features and text prefix. In [cho2021unifying], multiple tasks are unified as a text generation process. This unification still falls under the category of heavy fusion; that is, the learned network cannot process unimodal inputs, e.g., an image alone for light-fusion applications. We attempt to unify at the network level, while [cho2021unifying] unifies at the task level.

3 UniFied TransfOrmer

Figure 2: Vision-language pre-training of our UniFied transfOrmer (UFO). A single transformer is learnt to behave as an image encoder, a text encoder, and a fusion network. The pre-training losses include the image-text contrastive (ITC) loss, the image-text matching (ITM) loss, and the masked language modeling loss based on the bidirectional (MLM) and seq2seq (S-MLM) attention masks. ITC empowers the network to understand unimodal inputs (image or text), while the other three focus on the joint inputs. In each iteration, one of the losses is randomly selected and is guided by a momentum teacher if the loss is ITC/MLM/S-MLM.

The key idea is to leverage only one transformer network, which is reused as an image encoder, a text encoder, or a fusion encoder. We follow the widely-used pretraining-then-finetuning scheme to train the network. During pre-training, we are given a large corpus of image-text pairs, and multiple losses are enforced to empower the network for the different roles. For unimodal signals, we apply the image-text contrastive loss [RadfordKHRGASAM21]. For the multimodal fusion tasks, we apply the image-text matching (ITM) loss and the masked language modeling loss based on both the bidirectional (MLM) and unidirectional (S-MLM) attention. Fig. 2 illustrates the pre-training framework.

3.1 Network Structure

We adopt the transformer network as the backbone. The main reason is that the transformer network has been demonstrated to perform well on image tasks [DosovitskiyB0WZ21], language tasks [VaswaniSPUJGKP17], and VL tasks [KimSK21, Li0LZHZWH0WCG20, abs-2101-00529, ChenLYK0G0020, abs-2009-13682, abs-2012-06946]. Other choices are convolutional neural network (CNN) or all-MLP [abs-2105-01601] structures, but it is unclear how to effectively apply such networks to all three roles.

The input to the transformer network is a sequence of tokens, each of which is represented as a $D$-dimensional vector. To tokenize the image, we split the raw image into disjoint patches, each of which is linearly projected into the $D$-dimensional space with a learnable linear layer as in [DosovitskiyB0WZ21, KimSK21]. A learnable 1-D positional embedding is added to each patch representation, and an image [CLS] token is prepended. The text description is first tokenized and then embedded into the $D$-dimensional space through an embedding matrix. A starting token of [CLS] and an ending token of [EOS] are added to wrap the text sequence, and a learnable 1-D positional embedding is added to each text token. Here, the image [CLS] and the text [CLS] are two different tokens. Before feeding the input to the transformer network, a modality-specific embedding is added to the corresponding input. For VL tasks, the two modality inputs are concatenated before being sent to the transformer. Although the inputs may have different token lengths, the transformer network naturally handles variable-length inputs.
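The input construction above can be sketched as follows. All weight matrices and special-token embeddings here are hypothetical stand-ins for learned parameters, and the patch size is shrunk for illustration:

```python
import numpy as np

def make_vl_input(image, text_ids, W_patch, E_text, pos_img, pos_txt,
                  cls_img, cls_txt, eos, mod_img, mod_txt, patch=2):
    """Build the joint token sequence fed to the shared transformer.
    W_patch/E_text/pos_*/cls_*/eos/mod_* stand in for learned parameters."""
    H, W, C = image.shape
    # split the raw image into disjoint patch x patch tiles, flatten each
    tiles = image.reshape(H // patch, patch, W // patch, patch, C)
    tiles = tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    img_tok = tiles @ W_patch                    # linear projection to D dims
    img_seq = np.vstack([cls_img, img_tok])      # prepend image [CLS]
    img_seq = img_seq + pos_img[:len(img_seq)]   # 1-D positional embedding
    txt_tok = E_text[text_ids]                   # embedding lookup
    txt_seq = np.vstack([cls_txt, txt_tok, eos]) # wrap with [CLS]/[EOS]
    txt_seq = txt_seq + pos_txt[:len(txt_seq)]
    # add modality-specific embeddings, then concatenate for fusion tasks
    return np.vstack([img_seq + mod_img, txt_seq + mod_txt])
```

For unimodal encoding, only the corresponding half of the sequence would be fed to the transformer.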

3.2 Pre-training Tasks

Image-Text Contrastive Loss.

The image-text contrastive (ITC) loss trains the network to process either the image or the text and aligns matched pairs into similar representations. For the image, the network is used as an image encoder, and the output corresponding to the image [CLS] token is taken as the representation. For the text, the network is reused as a text encoder, and the output corresponding to the [EOS] token is taken as the representation (the text [CLS] is reserved for the image-text matching loss). Let $v_i$ and $t_j$ be the $i$-th image and $j$-th text representations ($\ell_2$-normalized), respectively. Given $N$ pairs within a training batch, the loss is

$$\mathcal{L}_{\text{ITC}} = -\frac{1}{2Z} \sum_{i=1}^{N}\sum_{j=1}^{N} \mathbb{1}_{i,j} \left( \log \frac{\exp(\tau\, v_i^\top t_j)}{\sum_{k=1}^{N} \exp(\tau\, v_i^\top t_k)} + \log \frac{\exp(\tau\, v_i^\top t_j)}{\sum_{k=1}^{N} \exp(\tau\, v_k^\top t_j)} \right),$$

where $\tau$ is initialized as 1 and is learnable as in [JiaYXCPPLSLD21], and $Z = \sum_{i,j} \mathbb{1}_{i,j}$ normalizes over the positive pairs. The indicator $\mathbb{1}_{i,j}$ is 1 if the $i$-th image is paired with the $j$-th text, and 0 otherwise. This handles data where an image (or a text) is associated with multiple texts (or images), e.g., COCO [LinMBHPRDZ14].
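The loss can be sketched in a few lines of NumPy; the function name and the pairing-matrix encoding of the indicator are illustrative, not from the paper:

```python
import numpy as np

def logsumexp(x, axis):
    # numerically stable log-sum-exp
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def itc_loss(img, txt, pairing, tau=1.0):
    """img, txt: (N, D) L2-normalized representations.
    pairing[i, j] = 1 iff image i and text j are a matched pair
    (handles COCO-style one-image-many-captions data)."""
    logits = tau * img @ txt.T
    log_p_i2t = logits - logsumexp(logits, axis=1)   # image-to-text softmax
    log_p_t2i = logits - logsumexp(logits, axis=0)   # text-to-image softmax
    n_pos = pairing.sum()
    return -((pairing * log_p_i2t).sum()
             + (pairing * log_p_t2i).sum()) / (2 * n_pos)
```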

Image-Text Matching Loss.

The network input is the concatenation of a matched or mismatched image-text pair. The mismatched pairs are constructed by randomly selecting a text description in the dataset for a given image (in the implementation, we randomly select the mismatched pairs from the current training batch to reduce the disk I/O). The network is required to predict whether the pair is matched or not, which is a binary classification task. Here, we use the representation of the text [CLS] as the joint representation, followed by an MLP layer for prediction. The cross-entropy loss is applied to penalize incorrect predictions.
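The in-batch pair construction might look like the following sketch; the helper is hypothetical, and the 50/50 matched/mismatched split is an assumed ratio for illustration:

```python
import numpy as np

def itm_batch(texts, rng):
    """For each index i, keep the matched text (label 1) half the time,
    or swap in a random other in-batch text (label 0) as a negative."""
    n = len(texts)
    batch = []
    for i in range(n):
        if rng.random() < 0.5:
            batch.append((i, texts[i], 1))          # matched pair
        else:
            j = (i + int(rng.integers(1, n))) % n   # any index != i
            batch.append((i, texts[j], 0))          # in-batch negative
    return batch
```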

Motivated by [ChenLYK0G0020, KimSK21], we also apply the image-patch alignment loss. The image and text tokens are first connected by solving an optimal transport problem with the inexact proximal point method [XieWWZ19]. The resulting distance is maximized for mismatched image-text pairs and minimized for matched pairs. This loss is down-weighted by a small constant as in [ChenLYK0G0020, KimSK21]. Overall, we use ITM to denote the sum of the two losses.

Masked Language Modeling Loss.

The network input is the concatenation of the image tokens and the partially masked text tokens, and the transformer network is trained to predict the masked tokens. As a common practice [DevlinCLT19], 15% of the text tokens are selected for prediction. Each selected token is replaced with a [MASK] token 80% of the time, a random token 10% of the time, and left unchanged 10% of the time. The text is masked at the word level rather than the token level as in [KimSK21]. An MLM head is applied on the outputs of the masked tokens for prediction with a cross-entropy loss (label smoothing is applied in experiments), denoted as $\mathcal{L}_{\text{MLM}}$.
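This BERT-style masking scheme can be sketched as below, at the token level for brevity while the paper masks at the word level; using `-100` as the ignore label is a common convention, not from the paper:

```python
import numpy as np

def mask_tokens(ids, vocab_size, mask_id, rng, p_select=0.15):
    """Return (corrupted ids, targets); -100 marks unselected positions."""
    ids = np.array(ids)
    targets = np.full_like(ids, -100)
    for i in range(len(ids)):
        if rng.random() < p_select:       # select 15% of the tokens
            targets[i] = ids[i]           # the model must recover this token
            r = rng.random()
            if r < 0.8:
                ids[i] = mask_id                      # 80%: [MASK]
            elif r < 0.9:
                ids[i] = rng.integers(vocab_size)     # 10%: random token
            # else: 10%: keep the token unchanged
    return ids, targets
```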

When applying the pre-trained model to the image captioning task, we change the attention mask such that the current text token can only depend on the preceding tokens [Li0LZHZWH0WCG20, abs-2101-00529, abs-2012-06946]. Therefore, similarly to [ZhouPZHCG20, abs-1905-03197], we also incorporate another masked language modeling loss based on the seq2seq attention, denoted as $\mathcal{L}_{\text{S-MLM}}$. Fig. 3 shows the attention masks for MLM and S-MLM.

(a) bidirectional (b) seq2seq
Figure 3: Bidirectional and seq2seq attention masks for MLM and S-MLM, respectively. If $M_{i,j}$ is 1, the $i$-th output can depend on the $j$-th input; otherwise, it cannot.

                ITC        ITM             MLM      S-MLM
Input           matched    (mis)matched    masked   masked
Network Role    uni.       multi.          multi.   multi.
Attention       bi.        bi.             bi.      seq.
Table 1: In different pre-training losses, the input is matched pairs, matched with mismatched pairs, or masked pairs. The network is used as either a unimodal (uni.) encoder or a multimodal (multi.) encoder. The attention mask is either bidirectional (bi.) or seq2seq (seq.).
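The two masks of Fig. 3 can be constructed as in this sketch; this is a common formulation of the seq2seq mask (image tokens attend only among themselves, text tokens are causal), and the exact layout in the paper may differ:

```python
import numpy as np

def attention_mask(n_img, n_txt, seq2seq=False):
    """mask[i, j] = 1 lets output token i attend to input token j."""
    n = n_img + n_txt
    if not seq2seq:
        return np.ones((n, n), dtype=int)  # bidirectional: full attention
    mask = np.zeros((n, n), dtype=int)
    mask[:, :n_img] = 1                    # every token sees the image
    # text tokens attend causally (each sees itself and earlier text)
    mask[n_img:, n_img:] = np.tril(np.ones((n_txt, n_txt), dtype=int))
    return mask
```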

3.3 Pre-training Strategies

One Loss per Iteration. Table 1 summarizes the characteristics of each pre-training loss. Across the losses, the input format differs, the transformer plays different roles, and the attention mask may also differ. For simplicity, in each iteration we randomly sample one pre-training loss and calculate its gradient for the parameter update. Empirically, we find this strategy more effective than calculating all losses in each iteration when the total number of loss calculations is the same.
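A minimal sketch of this schedule, where the task names and the `step_fn` closure are illustrative:

```python
import random

PRETRAIN_TASKS = ["itc", "itm", "mlm", "s-mlm"]

def pretrain(num_steps, step_fn, rng=None):
    """Each iteration samples ONE task uniformly and updates on its loss
    only, instead of summing all four losses every step. `step_fn(task)`
    stands in for computing that task's loss and applying the update."""
    rng = rng or random.Random(0)
    counts = {t: 0 for t in PRETRAIN_TASKS}
    for _ in range(num_steps):
        task = rng.choice(PRETRAIN_TASKS)
        step_fn(task)
        counts[task] += 1
    return counts
```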

Momentum Teacher. Motivated by [abs-2107-07651, abs-1911-05722, abs-2106-09018], we add a momentum teacher to guide the pre-training. Specifically, the momentum teacher is a clone of the transformer network, and its parameters are updated as the exponential moving average of the target network's parameters. Let $\theta$ be the target network's parameters and $\hat{\theta}$ be the teacher's. Then, in each iteration we have

$$\hat{\theta} \leftarrow m\,\hat{\theta} + (1 - m)\,\theta,$$

where $m$ is set as 0.999 in experiments. For the pre-training loss of ITC/MLM/S-MLM, the same input and attention mask are fed to the momentum teacher as well, and its output is used as a soft target for the target network's output. In ITC, let $s_{i,j}$ be the similarity between the $i$-th image and the $j$-th text, and $\hat{s}_{i,j}$ be the similarity from the momentum teacher network. Then, a distillation loss is added on top of ITC as

$$\mathcal{L}_{\text{ITC}}^{\text{dist}} = \mathrm{KL}(\hat{s}, s),$$

where $\mathrm{KL}(\cdot, \cdot)$ denotes the Kullback–Leibler divergence loss on the softmax of the inputs. For the MLM loss, let $z$ be the predicted logits corresponding to the masked token, and $\hat{z}$ be the logits from the momentum teacher. Then, the distillation loss is $\mathrm{KL}(\hat{z}, z)$. Similarly, we have the distillation loss for S-MLM.
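A numeric sketch of the teacher update and the distillation term; the function names are hypothetical and pure NumPy is used in place of a deep-learning framework:

```python
import numpy as np

def ema_update(teacher, student, m=0.999):
    """Exponential moving average: teacher <- m*teacher + (1-m)*student."""
    return {k: m * teacher[k] + (1 - m) * student[k] for k in teacher}

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def distill_kl(student_logits, teacher_logits):
    """KL divergence between softmaxed teacher (soft target) and student."""
    p_t = np.exp(log_softmax(teacher_logits))
    return (p_t * (log_softmax(teacher_logits)
                   - log_softmax(student_logits))).sum(-1).mean()
```

The KL term is zero when student and teacher agree, so it only regularizes the student toward the slower-moving teacher.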

4 Experiments

4.1 Experimental Settings

Network Backbone.

We use ViT-B/32, ViT-L/32, and ViT-L/16 [DosovitskiyB0WZ21] as the backbones. The number (32 or 16) is the patch size. ViT-B/32 contains 12 transformer layers with 768 as the hidden size. ViT-L/32 and ViT-L/16 have 24 transformer layers with 1024 as the hidden size. Correspondingly, we name our models as UFO-B/32, UFO-L/32 and UFO-L/16. Only the base model (UFO-B/32) is used for ablation studies.

Vision-Language Pre-training.

We combine four datasets for vision-language pre-training: MS COCO [LinMBHPRDZ14], Conceptual Captions (CC) [SoricutDSG18], SBU [OrdonezKB11], and Visual Genome (VG) [KrishnaZGJHKCKL16]. These datasets result in 4 million images with 10 million associated captions. We pre-train the model for more epochs when comparing with the state-of-the-art methods; for the ablation study, it is 40 epochs unless explicitly specified. The image is randomly cropped and resized into $S \times S$, where $S$ ranges from 224 to 384 with the patch size as the step. The batch size and learning rate are set per backbone: one configuration for UFO-B/32 and UFO-L/32, and another for UFO-L/16. Weight decay is imposed on all parameters except the biases in linear layers and the layer-norm layers. The learning rate first warms up linearly and then decreases linearly to 0. During pre-training, the target model's parameters are converted to float16 except the loss-related head layers. The momentum teacher's parameters are kept as float32, but its forward pass is sped up by mixed-precision calculation. The implementation is based on PyTorch (https://github.com/pytorch/pytorch) and Apex (https://github.com/NVIDIA/apex). Our model is initialized with the ImageNet pre-trained model, and the last checkpoint is used to fine-tune for all downstream tasks.

Method COCO Flickr30k VQAv2 COCO nocaps NLVR SNLI-VE
TR TR-ZS TR test-dev test-std CIDEr CIDEr SPICE dev test-P dev test
(a) Light.DOT [sun2021lightningdot] - - - - - - - - - -
CLIP [RadfordKHRGASAM21] - - - - - - - - - - -
ALIGN [JiaYXCPPLSLD21] - - - - - - - - -
(b) ViLT [KimSK21] - - - - 75.70 76.13 - -
SOHO [abs-2104-03135] - - - - 76.37 77.32 85.0 84.95
OSCAR [Li0LZHZWH0WCG20] - - 80.9 11.7 79.12 80.37 - -
UNITER [ChenLYK0G0020] - - - 79.12 79.98 79.39 79.38
VisParsing [xue2021probing] - - - - - 77.61 78.05 84.75 85.08
VILLA [abs-2006-06195] - - - - - 79.76 81.47 80.18 80.02
ALBEF [abs-2107-07651](4M) - - - 80.24 80.50 - -
ALBEF [abs-2107-07651](14M) 77.6 94.1 95.9 - - - 82.55 83.14 - -
VinVL [abs-2101-00529] - - 92.46 13.07 82.67 83.98 - -
(c) UFO-B/32(4M) 74.32 78.79 12.47 76.35 76.79 77.90 77.41
UFO-L/32(4M) 75.73 88.54 13.31 78.27 78.37 78.13 77.65
UFO-L/16(4M) 76.64 76.76 131.2 92.26 13.61 78.76 79.55 78.46 78.56
Table 2: Comparison of our model UFO with the state-of-the-art methods. The retrieval task is reported with the top-1 recall on text retrieval (TR). TR-ZS: zero-shot TR. Superscripts mark, respectively: approaches whose retrieval candidates are first filtered by the inner product based on unimodal encoders and then refined by the heavy fusion; approaches that depend on an object detector; and entries where SCST [RennieMMRG16] is applied for nocaps. Rows (a)/(b)/(c) correspond to Fig. 1 (a)/(b)/(c), respectively. Methods in (b) are slower than those in (a) and (c), as they compute the similarity through a heavy-fusion network with each (or each filtered) candidate. The number in parentheses is the number of images in VLP. nocaps is on the test set.
Loss VQA Zero-Shot TR@1
test-dev Flickr30k COCO
Full 70.23 64.5 55.5
Random 71.39 68.7 58.7
Table 3: Comparison between the full loss and randomly-selected loss in each iteration. Momentum teacher is disabled.
Downstream Evaluation.

To evaluate the performance on the light-fusion preferred tasks, we mainly focus on the image-text retrieval task based on the inner product as detailed below. For the heavy-fusion favored tasks, we evaluate the performance on VQA, image captioning, NLVR, and SNLI-VE.

1) Image-text retrieval. The task is to retrieve similar text descriptions based on the image, or vice versa. The key is to score the similarity between an image and a text description. As our pre-training incorporates the ITC loss, the similarity can be calculated by the inner product without finetuning for the zero-shot (ZS) setting. In the finetuning stage, we simply continue to train the network with the ITC loss. The image encoder and the text encoder are not shared for higher accuracy, but are initialized from the same pre-trained model. Experiments are conducted on the MS COCO [LinMBHPRDZ14] and Flickr30k [YoungLHH14] datasets with the Karpathy split [KarpathyL15]. The top-$k$ recall is reported on the corresponding test set.
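With the ITC-style encoders, scoring reduces to an inner product over precomputed vectors; a sketch assuming L2-normalized representations:

```python
import numpy as np

def retrieve(query, candidates, k=2):
    """Score every candidate by the inner product with the query vector
    (cosine similarity, since vectors are L2-normalized) and return the
    indices of the top-k most similar candidates."""
    sims = candidates @ query
    return np.argsort(-sims)[:k].tolist()
```

Because candidate vectors can be indexed offline, this supports fast large-scale search, unlike heavy-fusion scoring that must run the fusion network per candidate.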

2) Visual Question Answering (VQA). The task [GoyalKSBP16] is to answer a question with natural language based on the image context, and thus requires a deep understanding of the question and the image. As a common practice, we cast it as a classification problem where each class corresponds to one answer. The network input is the concatenation of the image and the question embeddings. The representation of the text [CLS] is used to predict the answer over a shared set of answers with an MLP layer. The loss is the binary cross-entropy loss, and the inference is to select the answer with the highest confidence.

3) Image Captioning. The task is to describe an image with a natural language sentence. As we pre-train with the S-MLM loss, we reuse this cross-entropy loss to finetune the network on the downstream dataset, except that the word-level masking of pre-training is changed to token-level masking. In inference, the [MASK] token is appended recursively to the generated tokens to predict the next token one by one. The beam search size is set as 1, and the accuracy is evaluated with BLEU@4 [PapineniRWZ02], METEOR [DenkowskiL14], CIDEr [VedantamZP15], and SPICE [AndersonFJG16]. Neither SCST [RennieMMRG16] nor CBS [AndersonFJG16a] is applied. The dataset is COCO [LinMBHPRDZ14] with the Karpathy split [KarpathyL15], and the model is also evaluated on the val and test sets of the nocaps [abs-1812-08658] benchmark.
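The recursive [MASK] decoding can be sketched as follows; the `predict_masked` callable is a hypothetical stand-in for a forward pass through the fine-tuned model that returns the token predicted at the masked slot:

```python
def greedy_caption(predict_masked, mask_id, eos_id, max_len=20):
    """Append [MASK] to the generated prefix, let the model fill the
    masked slot, and repeat until [EOS] or the length limit (beam size 1)."""
    tokens = []
    for _ in range(max_len):
        nxt = predict_masked(tokens + [mask_id])
        if nxt == eos_id:
            break
        tokens.append(nxt)
    return tokens
```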

4) Natural Language Visual Reasoning for Real (NLVR). The task's input is a pair of images and a natural language description, and the goal [SuhrZZZBA19] is to predict whether the description is true for the image pair. To fine-tune the network, we construct two input sequences, each containing the concatenation of the description and one image. Each sequence is fed to the transformer, and the two outputs corresponding to the text [CLS] are concatenated as the joint representation for a binary classifier implemented as an MLP layer.

5) Visual Entailment. The task is evaluated on SNLI-VE [abs-1901-06706] and is to predict the relation between a premise and a sentence hypothesis as one of entailment, neutral or contradiction. The premise here is an image for the VL task. To finetune the model, we append an MLP layer on top of the text [CLS] token as a three-way classification task. The network input is the concatenation of the image patch features and hypothesis embeddings.

Hyperparameters. The input image size is 384 by default in the ablation study and is increased appropriately in the comparison with the state-of-the-art methods. The batch size is 512, and the last checkpoint is used for evaluation. Other hyperparameters, including the number of finetuning epochs, are summarized in the supplementary materials.

Pre-training Task Zero-Shot Performance Finetune Performance
ITC ITM MLM S-MLM F. TR@1 C. Caption C. TR@1 F. TR@1 VQA C. Caption nocaps
(a) 54.5 0.0 65.5 83.6 68.96 108.7 55.98
(b) 0.1 0.0 - - 68.12 100.0 48.39
(c) 5.0 20.3 62.5 78.4 69.84 112.1 71.46
(d) 2.3 84.8 61.8 78.8 69.36 115.0 75.94
(e) 72.1 23.9 70.3 88.0 71.26 114.9 72.95
(f) 70.5 72.2 70.3 89.4 70.30 116.5 75.32
(g) 0.1 15.0 63.2 70.8 71.33 113.1 71.42
(h) 71.0 71.7 70.0 87.0 71.31 116.8 76.46
(i) 68.9 72.1 70.6 87.7 71.24 117.5 74.96
(j) 68.3 18.8 71.1 88.3 71.84 116.5 73.15
(k) 0.2 84.5 63.6 78.2 70.86 117.8 76.35
(l) 70.3 70.7 70.6 88.0 71.87 119.0 77.09
Table 4: Impact of different pre-training tasks on downstream tasks with 20 epochs for each pre-training loss. C. TR@1: text retrieval at top-1 on COCO; F. TR@1: text retrieval at top-1 on Flickr30k. C. Caption: captioning performance in CIDEr on COCO. VQA is on test-dev. nocaps is on val. The momentum teacher network is disabled. The highest number for each task is bolded.

4.2 Comparison with the State-of-the-art

Table 2 shows the comparison with the existing state-of-the-art methods, which are divided into two groups: light fusion in rows of (a) and heavy fusion in rows of (b). Based on the results, we make the following observations.

Applicability on downstream tasks. The light-fusion approaches (in rows of (a)) achieve strong performance on the COCO and Flickr30k retrieval tasks, but are less applicable to other VL tasks such as VQA. In contrast, the heavy-fusion approaches (in rows of (b)) are more suitable for the understanding tasks, as multiple transformer layers are dedicated to learning the relationship between the modalities. Table 2 contains results on retrieval tasks for heavy fusion, but those approaches use the fusion network to score the similarity between the query and each of the candidates. This process is time-consuming [sun2021lightningdot], especially when the dataset is large. In comparison, our model (in rows of (c)) achieves competitive retrieval performance with fast speed similar to the light-fusion approaches, as well as strong results on other VL understanding tasks.

Backbones. Our UFO-B/32 shares the same backbone as ViLT [KimSK21], but achieves significantly stronger performance on all tasks except NLVR, where our approach is only slightly better. For example, on the COCO text retrieval task (top-1) and on VQA test-dev, UFO-B/32 outperforms ViLT by large margins. Most methods rely on an object detector to extract object features, while our approach removes this dependency and is in line with other end-to-end approaches. Comparing UFO-B/32 with UFO-L/32, we can see that a stronger backbone leads to a moderate or significant improvement on all tasks.

Retrieval tasks. Our best model (UFO-L/32 or UFO-L/16) achieves results better than or comparable to all approaches except ALBEF with 14 million images, CLIP with 400 million image-text pairs, and ALIGN with 1.8 billion image-text pairs, all of which use a substantially larger pre-training dataset than ours. Compared to ALBEF [abs-2107-07651] with the same amount of images, our model achieves better retrieval performance (76.9 vs. 73.1) on COCO and comparable fine-tuned performance on Flickr30k (94.1 vs. 94.3). It is worth noting that ALBEF refines the retrieval results with a fusion network, while we simply use the inner product for fast retrieval.

Understanding tasks. On the challenging VQA task, our model achieves new state-of-the-art accuracy with 76.76 on test-std. This is better than VinVL [abs-2101-00529] (76.60), which relies on a strong object detector, and better than ALBEF [abs-2107-07651](14M) (76.04), which uses even more image-text pairs. On the COCO captioning task, we achieve 131.2 CIDEr score, slightly higher than the previous state of the art (130.8) with cross-entropy optimization. On nocaps, our model achieves the best performance in SPICE (13.61 vs. 13.07) and is competitive in CIDEr (92.26 vs. 92.46).

4.3 Ablation Study

Different Pre-training Strategies.

We randomly choose one task in each iteration. An alternative is to run all tasks in each iteration, where the gradient is more stable. To make a fair comparison, we adjust the training epochs such that the total number of loss calculations, and hence the training cost, is roughly the same. The comparison is shown in Table 3: the randomly-selected loss yields better accuracy than the full loss in each iteration. Therefore, the model may favor more iteration updates over more stable gradients.

w/ or w/o Momentum Teacher Downstream Performance
Pre-training Finetuning C. TR@1 F.TR@1 VQA C. Caption nocaps NLVR SNLI-VE
(a) 70.6 88.0 71.87 119.0 77.09 75.4 77.4
(b) 71.3 88.4 71.94 118.6 76.76 75.6 76.9
(c) 72.1 89.9 72.42 119.4 77.53 76.2 77.5
(d) 72.0 89.2 72.51 119.9 78.03 76.2 77.8
Table 5: Effectiveness of the momentum teacher in pre-training and finetuning stages. VQA is on test-dev. nocaps, NLVR and SNLI-VE are on the validation split. Pre-training is with 80 epochs.
Input size VQA COCO Caption nocaps Flickr-TR@1 COCO-TR@1 NLVR2 SNLI-VE
384 72.42 119.4 77.53 88.8 72.1 76.2 77.5
480 73.29 121.4 79.21 88.3 72.7 76.4 77.6
576 72.87 121.8 79.72 89.4 72.3 76.2 77.9
672 74.00 122.0 80.00 90.8 72.8 75.7 77.8
768 74.21 122.8 80.74 90.8 73.6 - 77.7
960 73.49 122.8 80.01 91.3 74.1 - 77.9
1024 73.07 - - 91.5 73.9 - -
1280 73.49 - - 90.5 74.0 - -
Table 6: Impact of different image sizes during finetuning. VQA is on test-dev; nocaps, NLVR and SNLI-VE are on val.
Different Pre-training Tasks.

We have multiple pre-training losses, and one question is how each loss impacts the performance on the downstream tasks. We conduct experiments in two settings. The first is to run each loss for the same number of epochs to study whether more losses help the downstream tasks; results are shown in Table 4. On VQA, COCO Caption, and nocaps, the variant with all pre-training tasks achieves the highest accuracy after fine-tuning. For the other tasks, we also observe competitive accuracy with all losses. Other observations are detailed below.

  1. (a) vs. (b) vs. (c) vs. (d): With a single pre-training loss, S-MLM and ITC give the best performance on the captioning and retrieval tasks, respectively. This is reasonable, as each pre-training task is consistent with the corresponding downstream task. For the VQA task, MLM gives the best accuracy.

  2. (a) vs. (e) or (f): MLM or S-MLM improves over the ITC-only setting and shows a significant improvement on the retrieval task, e.g., on text retrieval at top-1 with finetuning on Flickr30k. With MLM or S-MLM, the captioning task and VQA are also improved by a large margin on top of the ITC loss.

  3. (c) vs. (e); (d) vs. (f): On top of MLM or S-MLM, ITC gives a clear improvement on VQA, e.g., 69.36 (b) to 70.30 (e).

  4. (l) vs. (h, i, j, k): On VQA, we observe a significant accuracy drop when removing ITM, MLM, or ITC, compared with using all pre-training losses. Although removing S-MLM barely hurts VQA (71.87 to 71.84), the captioning task drops substantially (77.09 to 73.15 on nocaps). This shows that all of these pre-training losses help on the VL understanding tasks. For the retrieval tasks, the accuracy is similar as long as ITC and at least one of MLM or S-MLM are present.

In the second setting, the total number of pre-training epochs is set such that the pre-training cost is roughly the same as when only one loss is randomly selected in each iteration. Overall, we observe strong performance with all pre-training losses. The experiment details are in the supplementary materials.

Momentum Distillation.

We apply momentum distillation [abs-2107-07651] to regularize the model training in the pre-training stage. Table 5 shows the ablation study of turning it on or off in the pre-training and finetuning stages. Comparing (a) with (c), the performance improves significantly on VQA and the retrieval tasks when momentum distillation is enabled in the pre-training stage, with slight improvements on the other tasks. However, in the fine-tuning stage, it shows almost no improvement in our setting (we search for the best weight on the distillation loss during fine-tuning; a finer hyperparameter search may potentially improve the results). The reason may be that the datasets in the downstream tasks are well annotated, while the massive pre-training dataset is noisy. As the momentum teacher can reduce the impact of data noise [abs-2107-07651], it helps pre-training more than fine-tuning.
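The two ingredients of momentum distillation can be sketched as below: an exponential-moving-average (EMA) update of the teacher's weights, and a KL-divergence loss against the teacher's soft targets. This is a generic sketch, not ALBEF's exact formulation; the EMA coefficient and temperature are assumed values for illustration.

```python
import copy

import torch
import torch.nn.functional as F

def ema_update(student, teacher, m=0.995):
    # Exponential moving average: teacher <- m * teacher + (1 - m) * student.
    # m = 0.995 is an assumed coefficient for illustration.
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(m).add_(ps, alpha=1.0 - m)

def distill_loss(student_logits, teacher_logits, tau=1.0):
    # KL divergence between the student's prediction and the momentum
    # teacher's soft targets; the soft targets regularize training
    # against noisy web-crawled supervision.
    soft_targets = F.softmax(teacher_logits / tau, dim=-1)
    log_probs = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean")
```

The total training objective would then combine the hard-label task losses with a weighted distillation term; Table 16 suggests a weight of 1.0 works well in pre-training.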

Multi-scale vs. single-scale.

In VLP, we use multi-scale image inputs. An alternative is to always use a single-scale input. The former can reduce the training time, since the sampled size is smaller than or equal to the maximum, and the model can be more robust to scale changes. Table 7 shows the comparison. On VQA, the accuracy improves with multi-scale inputs, while on the zero-shot retrieval task on Flickr30k, the performance drops. Considering the reduced training cost, we always choose multi-scale inputs.

Scale Hours VQA ZS Flickr TR@1
single 17 71.19 70.4
multi 14 71.39 68.7
Table 7: Comparison between multi-scale and single-scale image inputs during VLP on 32 V100 GPUs.
Increasing the image size.

Table 6 shows the study with different input image sizes for each downstream task. On VQA, image captioning, and the retrieval tasks, the accuracy improves considerably with a larger input size. For example, on VQA the accuracy improves from 72.42 to 74.21 as the input size increases from 384 to 768. On NLVR2 and SNLI-VE, the improvement is minor. Meanwhile, the optimal input size also differs across tasks, which indicates that different tasks may expect different granularity levels of image understanding.
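The paper does not detail how the backbone accommodates a finetuning resolution different from pre-training; a common ViT practice, shown here as an assumed sketch, is to bicubically interpolate the patch-position embeddings to the new grid (e.g., a patch-32 model moving from 384px input, a 12x12 grid, to 768px, a 24x24 grid).

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_grid, new_grid):
    """Bicubically interpolate ViT position embeddings to a new grid.

    pos_embed: (1, 1 + old_grid * old_grid, dim), with a leading [CLS]
    slot that is kept unchanged.
    """
    cls_tok, grid_tok = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    # Reshape the flat token sequence back into a 2D grid of embeddings.
    grid_tok = grid_tok.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid_tok = F.interpolate(grid_tok, size=(new_grid, new_grid),
                             mode="bicubic", align_corners=False)
    grid_tok = grid_tok.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, grid_tok], dim=1)
```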

5 Conclusion

We propose a single unified transformer (UFO), which is capable of processing both unimodal input (image or text) and multimodal input. During vision-language pre-training, the network learns to understand the different signals through multiple losses, including the image-text contrastive loss, the image-text matching loss, and the masked language modeling loss based on the bidirectional and seq2seq masks. Extensive experiments show that our unified model achieves competitive results compared to existing methods, which typically design specific networks for each modality and for modality fusion. As our model is only up to large-sized (24 layers) and pre-trained on only 4 million images, we expect that scaling up both the model size and the pre-training data will be beneficial, which is left as future work.

Datasets COCO [LinMBHPRDZ14] CC [SoricutDSG18] SBU [OrdonezKB11] VG [KrishnaZGJHKCKL16] Total
#Images 113K 3.1M 875K 108K 4.2M
#Captions 567K 3.1M 875K 5.4M 9.9M
Table 8: Dataset statistics in VL pretraining.
Model UFO-B/32 UFO-L/32 UFO-L/16
Batch size 4096 4096 2048
Number of V100 32 64 64
Max GPU Mem (GB) 17 16 27
Time cost (hour) 32 60 177
Table 9: Vision-language pretraining statistics for 80 epochs in our experiment. With different implementations and hardware settings, the cost can differ greatly.

The supplementary materials follow the same structure as the main paper, but provide more details and studies.

Appendix A Experiment

A.1 Settings

A.1.1 Vision-Language Pre-training

Dataset. Table 8 shows the statistics of each dataset.

Data Preprocessing. For faster data loading, we downsize each image, keeping the aspect ratio, so that the shorter and longer sides do not exceed preset limits. All images are then re-compressed in JPEG format. In Visual Genome, since the captions are region-level, for each caption we crop a sub-region that is a fixed multiple of the associated box annotation and take this extended region as the input image.

Data Augmentation. We apply multi-scale and random cropping as data augmentation. Specifically, the cropped region covers at least 80% of the whole image area (implemented as RandomResizedCrop(scale=(0.8, 1.0), ratio=(1.0, 1.0)) in PyTorch). The region is then resized to a target resolution randomly chosen from the supported range, with the patch size as the step.
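The augmentation above can be sketched in plain Python. The crop logic mirrors the footnoted RandomResizedCrop settings; the resize-range endpoints (224 to 384) are assumed values for illustration, not the paper's exact range.

```python
import math
import random

def sample_crop_params(width, height, min_area_frac=0.8):
    # Mimics RandomResizedCrop(scale=(0.8, 1.0), ratio=(1.0, 1.0)):
    # choose a square region covering at least 80% of the image area.
    target_area = random.uniform(min_area_frac, 1.0) * width * height
    side = int(round(math.sqrt(target_area)))  # ratio fixed at 1.0 -> square
    side = min(side, width, height)            # clamp for non-square images
    x = random.randint(0, width - side)
    y = random.randint(0, height - side)
    return x, y, side

def sample_target_size(min_size=224, max_size=384, patch=32):
    # Resize target sampled in patch-size steps; the endpoint values
    # here are illustrative assumptions.
    return random.choice(range(min_size, max_size + 1, patch))
```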

Pretraining Statistics. Table 9 shows the vision-language pretraining statistics with 80 epochs.

A.1.2 Downstream Evaluation

During fine-tuning, no preprocessing, e.g., JPEG re-compression, is applied. Each image is resized, keeping the aspect ratio, so that the shorter and longer sides do not exceed the configured limits. No random-crop or multi-scale augmentation is applied here. Following [KimSK21], we apply RandAugment [abs-1909-13719] without color inversion to the image. Table 10 shows the input size and the learning rate used when comparing with the state-of-the-art approaches (Table 2 of the main paper). The corresponding learning rate is the default setting for the ablation study, where the default input size is 384.

Model Param Retrieval VQA Captioning NLVR2 SNLI-VE
UFO-B/32 Input size 960 1024 768 768 768 480 576
Learning rate 2.5e-5 2.5e-5 5e-5 5e-5 5e-5 5e-5 2.5e-5
UFO-L/32 Input size 768 768 768 768 768 384 480
Learning rate 1e-5 1e-5 5e-5 5e-5 5e-5 5e-5 1e-5
UFO-L/16 Input size 480 384 576 576 672 384 672
Learning rate 1e-5 1.25e-5 3e-5 1.5e-5 1e-5 5e-5 1e-5
Table 10: Input size and peak learning rate for each downstream task when comparing with the state-of-the-art approaches in Table 2 of the main paper. The learning rate is also used as the default setting for all ablation studies. nocaps shares the same training data with the COCO captioning task.

A.2 Comparison with the State-of-the-art Approaches

In Table 2 of the main paper, we present the comparison with the state-of-the-art approaches. For the retrieval task, the reported result is text retrieval at top-1, and for the captioning task it is the CIDEr score. Here, we show results with the other widely-used metrics. Table 11 gives the complete results on the retrieval task. The observation is consistent with the discussion in the main paper. With the same pretraining data scale as ALBEF [abs-2107-07651], our approach achieves better accuracy on the COCO dataset and competitive results on Flickr30k. Meanwhile, our best model is also competitive with the best models trained on much larger pretraining data. Tables 12 and 13 show the complete results on the COCO captioning task and the nocaps dataset, respectively.

Method Flickr COCO
TR@1 TR@5 TR@10 IR@1 IR@5 IR@10 TR@1 TR@5 TR@10 IR@1 IR@5 IR@10
(a) Light.DOT [sun2021lightningdot] 69.9 91.1 95.2 60.1 85.1 91.8 45.8 74.6 83.8
ALIGN [JiaYXCPPLSLD21](1.8B) 84.9 97.4 98.6 77.0 93.5 96.9 59.9 83.3 89.8
(b) VILT [KimSK21] 83.5 96.7 98.6 64.4 88.7 93.8 61.5 86.3 92.7 42.7 72.9 83.1
Light.DOT [sun2021lightningdot] 87.2 98.3 99.0 75.6 94.0 96.5 74.2 92.4 96.0 57.4 82.7 89.9
UNITER [ChenLYK0G0020] 87.3 98.0 99.2 75.6 94.1 96.8 65.7 88.6 93.8 52.9 79.9 88.0
VILLA [abs-2006-06195] 87.9 97.5 98.8 76.3 94.2 96.8 - - - - - -
OSCAR [Li0LZHZWH0WCG20] - - - - - - 73.5 92.2 96.0 57.5 82.8 89.8
ALBEF [abs-2107-07651](4M) 94.3 99.4 99.8 82.8 96.7 98.4 73.1 91.4 96.0 56.8 81.5 89.2
VinVL [abs-2101-00529] - - - - - - 75.4 92.9 96.3 58.8 83.5 90.3
ALBEF [abs-2107-07651](14M) 95.9 99.8 100.0 85.6 97.5 98.9 77.6 94.3 97.2 60.7 84.3 90.5
(c) UFO-B/32(4M) 91.5 99.2 99.7 79.0 95.2 97.6 74.1 93.2 96.8 56.4 82.4 89.5
UFO-L/32(4M) 93.6 99.0 99.8 81.1 95.9 98.1 76.9 94.1 97.1 59.1 83.7 90.4
UFO-L/16(4M) 94.1 99.5 99.9 80.7 96.7 98.3 75.7 93.7 97.1 59.2 83.6 90.5
Table 11: Comparison of our model UFO with the state-of-the-art approaches on the retrieval task after fine-tuning. One superscript marks methods whose retrieval candidates are first filtered by the inner product and then refined by the heavy fusion network; the other marks methods based on an object detector. The number in parentheses is the number of pretraining images. TR: text retrieval. IR: image retrieval. The approaches in rows (a) and (c) are based on the inner product, and thus the retrieval speed can be fast.

Method BLEU-4 METEOR CIDEr SPICE
MiniVLM [abs-2012-06946]
OSCAR [Li0LZHZWH0WCG20] 37.4 30.7 127.8 23.5
VinVL [abs-2101-00529] 38.5 30.4 130.8 23.4
UFO-B/32(4M) 36.0 28.9 122.8 22.2
UFO-L/32(4M) 37.6 29.7 128.5 23.0
UFO-L/16(4M) 38.7 30.0 131.2 23.3
Table 12: Compare our model UFO with the state-of-the-art approaches in the captioning task on COCO based on the cross-entropy loss.
Method Validation set Test set
in-domain ne.-domain ou.-domain overall in-domain ne.-domain ou.-domain overall
OSCAR [Li0LZHZWH0WCG20] 85.4 11.9 84.0 11.7 80.3 10.0 83.4 11.4 84.8 12.1 82.1 11.5 73.8 9.7 80.9 11.3
Human [abs-1812-08658] 84.4 14.3 85.0 14.3 95.7 14.0 87.1 14.2 80.6 15.0 84.6 14.7 91.6 14.2 85.3 14.6
VIVO [abs-2009-13682] 92.2 12.9 87.8 12.6 87.5 11.5 88.3 12.4 89.0 12.9 87.8 12.6 80.1 11.1 86.6 12.4
VinVL [abs-2101-00529] 103.7 13.7 95.6 13.4 83.8 11.9 94.3 13.1 98.0 13.6 95.2 13.4 78.0 11.5 92.5 13.1
UFO-B/32 94.5 13.4 82.7 12.8 64.9 11.0 80.7 12.5 90.6 13.6 81.3 12.7 60.6 10.7 78.8 12.5
UFO-L/32 99.8 14.0 91.1 13.4 75.6 11.7 89.2 13.1 96.1 14.1 90.9 13.5 74.2 11.8 88.5 13.3
UFO-L/16 103.9 14.5 95.5 13.8 83.5 12.3 94.3 13.6 98.9 14.3 94.7 13.9 77.9 12.1 92.3 13.6
Table 13: Comparison of our model UFO with the state-of-the-art approaches on nocaps with finetuning. C: CIDEr. S: SPICE. ne.-domain: near-domain. ou.-domain: out-of-domain. The highest score is bolded (human performance excluded). Superscript on S: SCST [RennieMMRG16] is applied. Superscript on C: CBS [AndersonFJG16a] is applied.

A.3 Ablation Study

Different Pretraining Tasks. In the main paper, we studied the results with different pretraining tasks when each task runs for the same number of iterations. Table 14 shows the results when the total number of iterations is the same, so the pretraining cost is similar. We can see that pretraining with all losses achieves decent performance on all downstream tasks compared with the best performance. Note that row (j) gives the best VQA performance by removing the S-MLM loss, but sacrifices a lot on the captioning tasks. Thus, we stick to applying all losses during pretraining.

Pretraining task Zero-Shot Performance Finetune Performance
ITC ITM MLM S-MLM F. TR@1 C. Caption C. TR@1 F. TR@1 VQA C. Caption NOCAPS
(d) 65.5 0.0 68.3 85.5 69.58 109.1 56.91
(a) 0.2 0.0 - - 68.37 99.0 45.86
(b) 5.0 21.9 62.8 78.3 70.42 113.2 71.46
(c) 0.8 78.6 61.6 77.9 69.38 117.0 77.06
(e) 72.1 23.9 70.3 88.0 71.26 114.9 72.95
(f) 70.5 72.2 70.3 89.4 70.30 116.5 75.32
(g) 0.1 15.0 63.2 70.8 71.33 113.1 71.42
(h) 69.0 72.2 69.8 86.0 70.93 116.1 74.65
(i) 66.3 72.7 70.2 87.1 71.11 117.1 73.64
(j) 68.7 25.4 69.8 87.0 71.65 114.7 71.26
(k) 0.1 76.3 63.6 78.2 70.77 117.0 74.31
(l) 68.7 72.7 69.1 87.6 71.39 116.7 74.77
Table 14: Impact of different pretraining tasks on downstream tasks with 40 epochs in total for all pretraining losses. C. TR@1: text retrieval at top-1 on COCO; F. TR@1: text retrieval at top-1 on Flickr. C. Caption: captioning performance in CIDEr on the COCO dataset. VQA is on test-dev. NOCAPS is on validation in terms of CIDEr. The momentum teacher network is disabled. The highest number for each task is bolded.

Weight of Distillation Losses. In pretraining, we add the distillation loss. Table 15 shows experiments with different weights on the loss terms with 40 epochs. With a positive weight, the zero-shot retrieval result improves by about 1 to 3 points, but after fine-tuning the improvement vanishes. On VQA, it gives a non-trivial improvement. On the COCO captioning task, we observe about a 1-point improvement for some weights. Table 16 shows the results with 80 pretraining epochs. As we can see, a weight of 1.0 gives the best accuracy across all tasks, and we also apply 1.0 for the other network structures.

weight Retrieval Task VQA Captioning
ZS F. TR@1 F. TR@1 C. TR@1 test-dev COCO nocaps
0 68.7 87.2 69.1 71.39 116.7 74.77
0.4 71.5 86.5 67.1 71.43 116.4 74.44
0.6 72.2 87.6 68.9 71.58 118.1 74.65
0.8 70.2 87.4 67.7 71.77 116.7 75.24
1.0 69.5 87.3 68.5 71.75 117.5 74.11
Table 15: Different weights on the distillation loss in pretraining with 40 epochs. The retrieval task and the COCO captioning task are on the test split, while nocaps is on the validation split.
weight Retrieval Task VQA Captioning
ZS F. TR@1 F. TR@1 C. TR@1 test-dev COCO nocaps
0.6 69.4 89.0 71.3 72.26 118.9 77.07
0.8 71.6 88.8 71.6 72.24 119.4 76.55
1.0 71.6 89.9 72.1 72.42 119.4 77.53
Table 16: Different weights on the distillation loss in pretraining with 80 epochs. Retrieval task and COCO captioning task are on test split while nocaps is on validation split.

A.3.1 Temperature in the Image-Text Contrastive Loss

In the image-text contrastive loss, we make the temperature learnable, as in [JiaYXCPPLSLD21]. Fig. 4 compares different ways to set the temperature, including manually tuned values and the learnable version. With an improper pre-defined temperature, the retrieval accuracy drops considerably, while the VQA performance is relatively more stable. When the temperature is learnable, it achieves strong retrieval performance and decent VQA performance, and thus we always set the temperature as a learnable parameter.

Figure 4: Different ways to set the temperature in the image-text contrastive loss, including manually tuned values and the learnable version. For the learnable version, the final learned value is shown in the figure as a red star. The left y-axis represents text retrieval at top-1 on Flickr in the zero-shot setting after pretraining. The right y-axis represents the fine-tuned VQA performance on test-dev.
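A learnable temperature is typically realized by optimizing the logit scale log(1/tau) alongside the model, as popularized by CLIP. A minimal sketch of an in-batch ITC loss in this style follows; the initial tau = 0.07 is an assumed value, not a setting stated in the paper.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class ITCLoss(nn.Module):
    """In-batch image-text contrastive loss with a learnable temperature.

    The parameter stores log(1 / tau), so the effective logit scale
    stays positive; init_tau = 0.07 is an illustrative assumption.
    """

    def __init__(self, init_tau=0.07):
        super().__init__()
        self.logit_scale = nn.Parameter(torch.tensor(math.log(1.0 / init_tau)))

    def forward(self, img_emb, txt_emb):
        img = F.normalize(img_emb, dim=-1)
        txt = F.normalize(txt_emb, dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()
        target = torch.arange(img.size(0), device=img.device)
        # Symmetric cross-entropy over image-to-text and text-to-image;
        # matched pairs lie on the diagonal.
        return 0.5 * (F.cross_entropy(logits, target)
                      + F.cross_entropy(logits.t(), target))
```

Because `logit_scale` receives gradients like any other parameter, the temperature adapts during pretraining instead of requiring a manual sweep.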
Loss VQA ZS Flickr TR@1
momentum-based 70.49 61.6
in-batch 71.39 68.7
Table 17: Comparison between the in-batch image-text contrastive loss and the momentum-based image-text contrastive loss.

A.3.2 Momentum-based Image-Text Contrastive Loss

For the image-text contrastive loss, our implementation is based on in-batch samples, where the negative samples come from the current batch, as in [RadfordKHRGASAM21, JiaYXCPPLSLD21]. In [abs-2107-07651], the image-text contrastive loss is implemented with a momentum encoder [abs-1911-05722], which is a drop-in replacement for the in-batch loss. The benefit is that the number of negative samples is independent of the batch size and can be very large, as in [abs-2107-07651, abs-1911-05722]. However, the negative samples are computed with the momentum encoder, which is less accurate. We refer to this variant as the momentum-based loss.

The temperature is also learnable in the momentum-based loss for a fair comparison, and the result is shown in Table 17. As we can see, the in-batch loss achieves better performance on both VQA and the retrieval task in our setting.
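The queue mechanism behind the momentum-based loss can be sketched as below, in the spirit of MoCo. This is a simplified illustration, not the referenced implementation; the queue size and logit scale are assumed values.

```python
import torch
import torch.nn.functional as F

class NegativeQueue:
    """FIFO queue of past text embeddings from a momentum encoder, so
    the number of negatives is independent of the batch size. The queue
    size here is an assumed small value for illustration."""

    def __init__(self, dim, size=1024):
        self.buf = F.normalize(torch.randn(size, dim), dim=-1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, emb):
        # Overwrite the oldest entries with the newest momentum features.
        emb = F.normalize(emb, dim=-1)
        n = emb.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.buf.size(0)
        self.buf[idx] = emb
        self.ptr = int((self.ptr + n) % self.buf.size(0))

def momentum_itc_logits(img_emb, momentum_txt_emb, queue, scale=100.0):
    # Positives: current-batch text features from the momentum encoder.
    # Negatives: queued features, which may be stale and hence less
    # accurate than in-batch negatives.
    img = F.normalize(img_emb, dim=-1)
    pos = F.normalize(momentum_txt_emb, dim=-1)
    all_txt = torch.cat([pos, queue.buf], dim=0)
    return scale * img @ all_txt.t()
```

The staleness of the queued features is the trade-off the ablation probes: more negatives, but computed by a lagging encoder.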

A.3.3 Representative Text Token in the Image-Text Contrastive Loss

The image input contains only one special token, [CLS], while the text contains two special tokens: [CLS] and [EOS]. As the text [CLS] token is used in the ITM loss, we use the [EOS] token in the ITC loss. Table 18 compares which special text token is used in the ITC loss. Compared with [CLS], the [EOS] token achieves slightly better accuracy on VQA but worse performance on the retrieval task. Thus, we conclude that both perform similarly, and in all other experiments we always use [EOS] to represent the text input for ITC.
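Selecting the representative token can be sketched as follows. The function name and the assumption that [CLS] sits at position 0 with exactly one [EOS] per sequence are illustrative choices, not details stated in the paper.

```python
import torch

def text_feature(hidden_states, input_ids, eos_token_id, use_eos=True):
    """Pick the representative text token for the ITC loss.

    hidden_states: (batch, seq_len, dim) transformer outputs. [CLS] is
    assumed at position 0; [EOS] is located per sequence from input_ids
    (assuming exactly one [EOS] token per sequence).
    """
    if not use_eos:
        return hidden_states[:, 0]
    # With a single [EOS] per row, argmax over the 0/1 mask finds it.
    eos_pos = (input_ids == eos_token_id).int().argmax(dim=1)
    batch_idx = torch.arange(hidden_states.size(0))
    return hidden_states[batch_idx, eos_pos]
```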

Text token VQA ZS Flickr TR@1
[CLS] 71.26 70.9
[EOS] 71.39 68.7
Table 18: Comparison of which text token is used in the image-text contrastive loss.

A.3.4 Modal-Type Embedding

Table 19 shows the result of removing the modal-type embedding. Its removal shows no improvement, and thus we always add it in all other settings.

Embedding VQA ZS Flickr TR@1
No 71.33 69.7
Yes 71.58 72.2
Table 19: Comparison of the models with or without the modal-type embedding.