Winner Team Mia at TextVQA Challenge 2021: Vision-and-Language Representation Learning with Pre-trained Sequence-to-Sequence Model

06/24/2021
by Yixuan Qiao, et al.

TextVQA requires models to read and reason about text in images in order to answer questions about them. Specifically, models need to incorporate a new modality, the text present in images, and reason over it to answer TextVQA questions. In this challenge we use the generative model T5 for the TextVQA task. Starting from the pre-trained T5-3B checkpoint in the HuggingFace repository, we design two additional pre-training tasks, masked language modeling (MLM) and relative position prediction (RPP), to better align object features and scene text. During pre-training, the encoder is dedicated to fusing the multiple modalities: question text, object text labels, scene text labels, object visual features, and scene visual features. The decoder then generates the answer text sequence step by step, trained with the standard cross-entropy loss. We pre-train on a large-scale scene text dataset and then fine-tune T5-3B on the TextVQA dataset only.
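As a rough illustration of the fusion scheme described above, here is a minimal sketch (not the authors' released code) of how object and scene-text visual features could be projected into T5's embedding space and concatenated with the token embeddings of the question and text labels, with the decoder trained under the usual cross-entropy loss. The projection layers, the 2048-dimensional region features, the input template, and the helper build_encoder_inputs are illustrative assumptions; only the use of the HuggingFace T5-3B checkpoint comes from the abstract.

```python
# Minimal sketch of multimodal fusion into a T5 encoder, assuming
# HuggingFace Transformers. Feature dimensions and helper names are
# hypothetical; this is not the authors' implementation.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")
d_model = model.config.d_model  # 1024 for T5-3B

# Hypothetical linear projections mapping 2048-d region features
# (e.g. from an object detector / OCR pipeline) into T5's embedding space.
project_obj = nn.Linear(2048, d_model)   # object visual features
project_ocr = nn.Linear(2048, d_model)   # scene-text (OCR) visual features

def build_encoder_inputs(question, obj_labels, ocr_tokens, obj_feats, ocr_feats):
    """Fuse the five modalities into one encoder input sequence:
    question text, object text labels, and scene-text labels as token
    embeddings, plus projected object and scene-text visual features."""
    text = (f"question: {question} "
            f"objects: {' '.join(obj_labels)} "
            f"ocr: {' '.join(ocr_tokens)}")
    ids = tokenizer(text, return_tensors="pt").input_ids
    text_embeds = model.get_input_embeddings()(ids)            # (1, L, d_model)
    vis_embeds = torch.cat([project_obj(obj_feats),            # (1, N_obj, d_model)
                            project_ocr(ocr_feats)], dim=1)    # (1, N_ocr, d_model)
    inputs_embeds = torch.cat([text_embeds, vis_embeds], dim=1)
    attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    return inputs_embeds, attention_mask

# One fine-tuning step with dummy features: the decoder generates the answer
# autoregressively and is supervised with the default cross-entropy loss.
inputs_embeds, attention_mask = build_encoder_inputs(
    "what is written on the sign?", ["sign", "pole"], ["stop"],
    torch.randn(1, 2, 2048), torch.randn(1, 1, 2048))
labels = tokenizer("stop", return_tensors="pt").input_ids
loss = model(inputs_embeds=inputs_embeds,
             attention_mask=attention_mask,
             labels=labels).loss
```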
