A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021

06/24/2021
by Ke-Han Lu, et al.

In this paper, inspired by the successes of vision-language pre-trained models and the benefits of training with adversarial attacks, we present a novel transformer-based cross-modal fusion model that incorporates both notions for the VQA Challenge 2021. Specifically, the proposed model is built on top of the VinVL architecture [19], and an adversarial training strategy [4] is applied to make the model more robust and generalizable. Moreover, two implementation tricks are used in our system to obtain better results. The experiments demonstrate that the proposed framework achieves 76.72% accuracy.
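The adversarial training strategy [4] cited above broadly follows the embedding-space adversarial training line of work (e.g., FreeLB/VILLA): perturb the input embeddings with a few projected-gradient ascent steps, then optimize the clean and adversarial losses jointly. Below is a minimal PyTorch sketch of that idea; the model interface, loss, and hyperparameters (eps, alpha, steps) are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn.functional as F

    def adversarial_loss(model, embeds, labels, eps=1e-2, alpha=1e-3, steps=3):
        # Start from a small random perturbation inside the eps-ball.
        delta = torch.zeros_like(embeds).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            logits = model(embeds + delta)
            loss = F.cross_entropy(logits, labels)
            grad, = torch.autograd.grad(loss, delta)
            # Ascend the loss w.r.t. the perturbation, then project back
            # into the eps-ball and restart gradient tracking.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        clean_loss = F.cross_entropy(model(embeds), labels)
        adv_loss = F.cross_entropy(model(embeds + delta.detach()), labels)
        # Train on clean and adversarial examples jointly.
        return clean_loss + adv_loss

    # Toy usage: a linear "model" over 768-d fused embeddings, 10 answer classes.
    model = torch.nn.Linear(768, 10)
    embeds = torch.randn(4, 768)
    labels = torch.randint(0, 10, (4,))
    loss = adversarial_loss(model, embeds, labels)
    loss.backward()

In practice, methods such as VILLA apply perturbations to the visual and textual embeddings separately and add a KL-based consistency term between clean and adversarial predictions; the sketch above keeps only the core perturb-then-train loop.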
