
A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021

by   Ke-Han Lu, et al.

In this paper, inspired by the success of vision-language pre-trained models and the benefits of training with adversarial attacks, we present a novel transformer-based cross-modal fusion model that incorporates both notions for the VQA challenge 2021. Specifically, the proposed model is built on top of the VinVL architecture [19], and an adversarial training strategy [4] is applied to make the model more robust and generalizable. Moreover, two implementation tricks are used in our system to obtain better results. The experiments demonstrate that the novel framework can achieve an accuracy of 76.72%.
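The abstract does not spell out the adversarial training strategy of [4], but such strategies typically perturb input embeddings along the loss gradient and train on the perturbed inputs. The following is a minimal, hypothetical sketch of that idea (an FGSM-style perturbation of an embedding for a logistic-regression stand-in, not the paper's actual method or model):

```python
import numpy as np

def adversarial_perturbation(x, w, y, epsilon=0.1):
    """Perturb an input embedding x in the direction that increases the loss.

    Hypothetical stand-in for adversarial training on cross-modal embeddings:
    a binary logistic-regression "model" with weights w and label y in {-1, +1}.
    Loss L = log(1 + exp(-y * w.x)); its gradient w.r.t. x is
    -y * w * sigmoid(-y * w.x), and the perturbation follows the gradient sign.
    """
    margin = y * np.dot(w, x)
    grad_x = -y * w * (1.0 / (1.0 + np.exp(margin)))  # dL/dx
    return x + epsilon * np.sign(grad_x)

def loss(x, w, y):
    """Logistic loss on a single example."""
    return np.log1p(np.exp(-y * np.dot(w, x)))
```

In a full training loop, the loss on the perturbed embedding (or a combination of clean and perturbed losses) is minimized, which tends to smooth the model's behavior in a neighborhood of each input and improve robustness.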



