ISAAQ – Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention

Textbook Question Answering is a complex task at the intersection of Machine Comprehension and Visual Question Answering that requires reasoning with multimodal information from text and diagrams. For the first time, this paper taps the potential of transformer language models and bottom-up and top-down attention to tackle the language and visual understanding challenges this task entails. Rather than training a language-visual transformer from scratch, we rely on pre-trained transformers, fine-tuning, and ensembling. We add bottom-up and top-down attention to identify regions of interest corresponding to diagram constituents and their relationships, improving the selection of relevant visual information for each question and its answer options. Our system ISAAQ reports unprecedented success on all TQA question types, with accuracies of 81.36%, 71.11% and 55.12% on true/false, text-only and diagram multiple choice questions. ISAAQ also demonstrates its broad applicability, obtaining state-of-the-art results on other demanding datasets.
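As a rough illustration of the approach the abstract describes for text questions, the sketch below scores a multiple-choice question with a pre-trained transformer using the Hugging Face transformers library. The model name, the toy context, question, and answer options are illustrative assumptions, not the paper's released code or data; in practice the classification head must first be fine-tuned on TQA-style examples, and ISAAQ further ensembles several such models and adds bottom-up and top-down attention over diagram regions for visual questions.

```python
# Minimal sketch, assuming a RoBERTa-class backbone: score each
# (context + question, option) pair and pick the highest-scoring option.
# NOTE: the multiple-choice head of a freshly loaded pre-trained model is
# randomly initialized, so predictions are meaningless before fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "roberta-large"  # assumption, not necessarily the paper's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)
model.eval()

# Hypothetical TQA-style example.
context = "Photosynthesis converts light energy into chemical energy."
question = "What kind of energy does photosynthesis produce?"
options = ["chemical energy", "sound energy", "nuclear energy", "kinetic energy"]

# Encode one (context + question, option) pair per answer candidate.
encoded = tokenizer(
    [f"{context} {question}"] * len(options),
    options,
    return_tensors="pt",
    padding=True,
    truncation=True,
)
# The multiple-choice model expects shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print("predicted option:", options[logits.argmax(dim=-1).item()])
```

Ensembling then reduces to combining the per-option logits (e.g. averaging) across several fine-tuned models before taking the argmax.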
