ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process

06/08/2023
by Changyao Tian, et al.

Image recognition and generation have long been developed independently of each other. With the recent trend toward general-purpose representation learning, developing general representations that serve both recognition and generation has also gained momentum. However, preliminary attempts mainly focus on generation performance and remain inferior on recognition tasks. These methods are modeled in the vector-quantized (VQ) space, whereas leading recognition methods use pixels as inputs. Our key insights are twofold: (1) pixels as inputs are crucial for recognition tasks; (2) VQ tokens as reconstruction targets are beneficial for generation tasks. These observations motivate us to propose an Alternating Denoising Diffusion Process (ADDP) that integrates these two spaces within a single representation learning framework. In each denoising step, our method first decodes pixels from the previous VQ tokens, then generates new VQ tokens from the decoded pixels. The diffusion process gradually masks out a portion of VQ tokens to construct the training samples. The learned representations can be used to generate diverse high-fidelity images and also demonstrate excellent transfer performance on recognition tasks. Extensive experiments show that our method achieves competitive performance on unconditional generation, ImageNet classification, COCO detection, and ADE20K segmentation. Importantly, our method represents the first successful development of general representations applicable to both generation and dense recognition tasks. Code will be released.
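
The alternating step described in the abstract can be made concrete with a short sketch. The snippet below is a hypothetical inference loop, not the authors' released implementation: the callables `vq_decoder` (VQ tokens to pixels) and `token_predictor` (pixels to a distribution over the VQ codebook), the confidence-based unmasking, and the cosine schedule are all illustrative assumptions about how "decode pixels from previous VQ tokens, then generate new VQ tokens from the decoded pixels" could be realized.

```python
# Minimal sketch of an alternating token -> pixel -> token denoising loop at
# inference time. `vq_decoder` (tokens, mask -> pixels) and `token_predictor`
# (pixels -> logits over the VQ codebook) are hypothetical placeholders,
# not the interfaces of the released ADDP code.
import math
import torch


@torch.no_grad()
def addp_generate(vq_decoder, token_predictor, num_steps=16,
                  seq_len=256, device="cpu"):
    # Start from fully masked VQ tokens (values are placeholders while masked).
    tokens = torch.zeros(1, seq_len, dtype=torch.long, device=device)
    mask = torch.ones(1, seq_len, dtype=torch.bool, device=device)  # True = unknown

    for step in range(num_steps):
        # 1) Decode pixels from the current, partially known VQ tokens.
        pixels = vq_decoder(tokens, mask)

        # 2) Predict new VQ tokens from the decoded pixels.
        logits = token_predictor(pixels)            # (1, seq_len, codebook_size)
        probs = logits.softmax(dim=-1)
        sampled = torch.distributions.Categorical(probs).sample()  # (1, seq_len)

        # 3) Commit the most confident predictions, unmasking more positions
        #    each step (a cosine schedule is used here purely for illustration).
        keep_ratio = math.cos(math.pi / 2 * (step + 1) / num_steps)
        target_known = seq_len - int(keep_ratio * seq_len)
        num_new = max(target_known - int((~mask).sum()), 0)
        if num_new == 0:
            continue
        confidence = probs.max(dim=-1).values.masked_fill(~mask, -1.0)
        new_idx = confidence.topk(num_new, dim=-1).indices
        tokens.scatter_(1, new_idx, sampled.gather(1, new_idx))
        mask.scatter_(1, new_idx, False)

    # A final decode of the fully unmasked tokens yields the generated image.
    return vq_decoder(tokens, mask)
```

During training, per the abstract, the corresponding diffusion process instead masks out a growing portion of the VQ tokens of real images to construct such intermediate states as training samples.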


