A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision

03/30/2023
by Lucas Beyer et al.

There has been a recent explosion of computer vision models which perform many tasks and are composed of an image encoder (usually a ViT) and an autoregressive decoder (usually a Transformer). However, most of this work simply presents one system and its results, leaving many questions regarding design decisions and trade-offs of such systems unanswered. In this work, we aim to provide such answers. We take a close look at autoregressive decoders for multi-task learning in multimodal computer vision, including classification, captioning, visual question answering, and optical character recognition. Through extensive systematic experiments, we study the effects of task and data mixture, training and regularization hyperparameters, conditioning type and specificity, modality combination, and more. Importantly, we compare these to well-tuned single-task baselines to highlight the cost incurred by multi-tasking. A key finding is that a small decoder learned on top of a frozen pretrained encoder works surprisingly well. We call this setup locked-image tuning with decoder (LiT-decoder). It can be seen as teaching a decoder to interact with a pretrained vision model via natural language.
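As a rough illustration of the LiT-decoder setup described in the abstract, the sketch below pairs a frozen pretrained image encoder with a small autoregressive Transformer decoder trained on top of it. The module names, dimensions, and the specific conditioning scheme are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a LiT-decoder-style model, assuming a pretrained
# image encoder that returns per-patch features of width d_model.
# All hyperparameters and names here are illustrative assumptions.
import torch
import torch.nn as nn


class LiTDecoder(nn.Module):
    """Frozen image encoder + small autoregressive text decoder (sketch)."""

    def __init__(self, image_encoder, vocab_size=32000, d_model=512,
                 num_layers=6, num_heads=8, max_len=64):
        super().__init__()
        # Locked-image tuning: the pretrained encoder stays frozen;
        # only the decoder, embeddings, and output head are trained.
        self.image_encoder = image_encoder
        for p in self.image_encoder.parameters():
            p.requires_grad = False

        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(
            d_model, num_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, token_ids):
        # Image features serve as the cross-attention "memory"; the encoder
        # is assumed to return (batch, num_patches, d_model).
        with torch.no_grad():
            memory = self.image_encoder(images)

        seq_len = token_ids.shape[1]
        positions = torch.arange(seq_len, device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)

        # Causal mask so each position attends only to earlier tokens.
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            seq_len).to(token_ids.device)
        h = self.decoder(x, memory, tgt_mask=causal_mask)
        return self.lm_head(h)  # next-token logits, (batch, seq_len, vocab)
```

In such a setup, the different tasks studied in the paper (classification, captioning, VQA, OCR) could be distinguished by a task-specific text prefix prepended to token_ids, so a single decoder produces the appropriate text output for each task; the exact prompt format above is a hypothetical choice.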


Related research:

08/30/2023
DTrOCR: Decoder-only Transformer for Optical Character Recognition
Typical text recognition methods rely on an encoder-decoder structure, i...

12/03/2022
Exploring Stochastic Autoregressive Image Modeling for Visual Representation
Autoregressive language modeling (ALM) has been successfully used in se...

08/05/2021
Evaluating CLIP: Towards Characterization of Broader Capabilities and Downstream Implications
Recently, there have been breakthroughs in computer vision ("CV") models...

05/21/2023
i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data
The convergence of text, visual, and audio data is a key step towards hu...

05/27/2022
GIT: A Generative Image-to-text Transformer for Vision and Language
In this paper, we design and train a Generative Image-to-text Transforme...

09/30/2022
Linearly Mapping from Image to Text Space
The extent to which text-only language models (LMs) learn to represent t...

09/02/2022
Vision-Language Adaptive Mutual Decoder for OOV-STR
Recent works have shown huge success of deep learning models for common ...
