Compound Tokens: Channel Fusion for Vision-Language Representation Learning

12/02/2022
by Maxwell Mbabilla Aladago, et al.

We present an effective method for fusing vision-and-language representations for several question answering tasks, including visual question answering and visual entailment. In contrast to prior works that concatenate unimodal representations or rely solely on cross-attention, we compose multimodal representations via channel fusion. Fusing along the channels allows the model to align the tokens more effectively than standard methods. These multimodal representations, which we call compound tokens, are generated with cross-attention transformer layers. First, vision tokens are used as queries to retrieve compatible text tokens through cross-attention; the vision tokens and the queried text tokens are then chained along the channel dimension to form compound tokens. A second group of compound tokens is generated by an analogous process in which the text tokens serve as the queries to the cross-attention layer. All compound tokens are concatenated for further processing with a multimodal encoder. We demonstrate the effectiveness of compound tokens using an encoder-decoder vision-language model trained end-to-end in the open-vocabulary setting. Compound tokens achieve highly competitive performance across a range of question answering tasks including GQA, VQA2.0, and SNLI-VE.
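The fusion step described in the abstract can be sketched in a few lines of PyTorch. The module below is only an illustration of the idea as stated: the class name CompoundTokenFusion, the use of nn.MultiheadAttention, and the hyperparameters are assumptions, not the authors' released implementation.

```python
# Illustrative sketch of compound-token channel fusion (not the authors' code).
import torch
import torch.nn as nn


class CompoundTokenFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Cross-attention with vision tokens as queries over text tokens,
        # and a second cross-attention with text tokens as queries over vision tokens.
        self.v2t_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.t2v_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (batch, Nv, dim); text_tokens: (batch, Nt, dim)
        # Vision tokens query compatible text tokens.
        retrieved_text, _ = self.v2t_attn(vision_tokens, text_tokens, text_tokens)
        # Channel fusion: chain along the feature (channel) dimension -> (batch, Nv, 2*dim)
        vision_compound = torch.cat([vision_tokens, retrieved_text], dim=-1)

        # Analogous process with text tokens as the queries.
        retrieved_vision, _ = self.t2v_attn(text_tokens, vision_tokens, vision_tokens)
        text_compound = torch.cat([text_tokens, retrieved_vision], dim=-1)

        # Concatenate both groups of compound tokens along the sequence dimension
        # for further processing by a multimodal encoder.
        return torch.cat([vision_compound, text_compound], dim=1)


if __name__ == "__main__":
    fusion = CompoundTokenFusion(dim=256)
    v = torch.randn(2, 49, 256)   # e.g. a 7x7 grid of vision tokens
    t = torch.randn(2, 16, 256)   # e.g. 16 text tokens
    print(fusion(v, t).shape)     # torch.Size([2, 65, 512])
```

Note that, unlike standard sequence-wise concatenation of unimodal tokens, the channel dimension of each compound token doubles while the total token count stays at Nv + Nt.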
