Images Speak in Images: A Generalist Painter for In-Context Visual Learning

12/05/2022
by   Xinlong Wang, et al.

In-context learning, a new paradigm in NLP, allows a model to rapidly adapt to various tasks given only a handful of prompts and examples. In computer vision, however, in-context learning is difficult because tasks vary significantly in their output representations, so it is unclear how to define general-purpose task prompts that a vision model can understand and transfer to out-of-domain tasks. In this work, we present Painter, a generalist model that addresses these obstacles with an "image"-centric solution: we redefine the outputs of core vision tasks as images, and specify task prompts as images as well. With this idea, the training process is extremely simple: we perform standard masked image modeling on stitched pairs of input and output images. This makes the model capable of performing tasks conditioned on visible image patches. During inference, we can therefore supply an input-output image pair from the desired task as the input condition, indicating which task to perform. Without bells and whistles, our generalist Painter achieves competitive performance compared to well-established task-specific models on seven representative vision tasks, ranging from high-level visual understanding to low-level image processing, and significantly outperforms recent generalist models on several challenging tasks. Surprisingly, our model shows capabilities of completing out-of-domain tasks that do not exist in the training data, such as open-category keypoint detection and object segmentation, validating the powerful task transferability of in-context learning.
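The training and inference recipe described above can be sketched minimally: stitch a task-prompt pair together with a query pair into one canvas, then mask patches (at inference, the query's output region) for the model to predict. The 2x2 grid layout and the helper names below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def stitch_pairs(prompt_in, prompt_out, query_in, query_out=None):
    """Stitch a task-prompt pair and a query pair into one canvas.
    Assumed layout (an illustration, not necessarily Painter's exact one):
    inputs in the left column, outputs in the right column.
    At training time query_out is given; at inference it is left blank
    (zeros here) for the model to fill in."""
    h, w, c = query_in.shape
    canvas = np.zeros((2 * h, 2 * w, c), dtype=query_in.dtype)
    canvas[:h, :w] = prompt_in    # top-left: prompt input
    canvas[:h, w:] = prompt_out   # top-right: prompt output
    canvas[h:, :w] = query_in     # bottom-left: query input
    if query_out is not None:
        canvas[h:, w:] = query_out  # bottom-right: query output (training)
    return canvas

def random_patch_mask(grid_h, grid_w, mask_ratio=0.75, rng=None):
    """Boolean mask over a patch grid for masked image modeling:
    True marks patches the model must reconstruct."""
    rng = rng or np.random.default_rng(0)
    n = grid_h * grid_w
    n_mask = int(n * mask_ratio)
    idx = rng.permutation(n)[:n_mask]
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return mask.reshape(grid_h, grid_w)
```

At inference, instead of a random mask, one would mask exactly the bottom-right (query-output) quadrant and read the model's prediction there as the task result.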


