Visual Concepts Tokenization

05/20/2022
by Tao Yang, et al.

Obtaining a human-like ability to abstract visual concepts from raw pixels has long been a fundamental goal in machine learning research areas such as disentangled representation learning and scene decomposition. Towards this goal, we propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, that parses an image into a set of disentangled visual concept tokens, with each concept token corresponding to one independent visual concept. To obtain these concept tokens, we use only cross-attention to extract visual information from the image tokens layer by layer, without self-attention between concept tokens, which prevents information leakage across them. We further propose a Concept Disentangling Loss that encourages different concept tokens to represent independent visual concepts. The cross-attention and the disentangling loss play the roles of induction and mutual exclusion for the concept tokens, respectively. Extensive experiments on several popular datasets verify the effectiveness of VCT on disentangled representation learning and scene decomposition, where it achieves state-of-the-art results by a large margin.
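The core mechanism described above, learnable concept tokens that repeatedly query image tokens through cross-attention while never attending to one another, can be sketched roughly as follows. This is a minimal illustration only: the layer count, dimensions, and module names are assumptions, not the paper's exact architecture or the Concept Disentangling Loss.

```python
# Sketch of cross-attention-only concept token extraction (hypothetical
# hyperparameters; not the authors' released implementation).
import torch
import torch.nn as nn


class ConceptTokenizer(nn.Module):
    def __init__(self, num_concepts=20, dim=256, num_layers=4, num_heads=4):
        super().__init__()
        # One learnable query per visual concept token.
        self.concept_tokens = nn.Parameter(torch.randn(num_concepts, dim))
        # Cross-attention layers only: concept tokens attend to image tokens,
        # never to each other, limiting information leakage across tokens.
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_layers)
        )

    def forward(self, image_tokens):
        # image_tokens: (batch, num_image_tokens, dim), e.g. patch embeddings.
        b = image_tokens.shape[0]
        concepts = self.concept_tokens.unsqueeze(0).expand(b, -1, -1)
        for attn, ffn in zip(self.cross_attn, self.ffn):
            # Queries are the concept tokens; keys/values are the image tokens.
            update, _ = attn(concepts, image_tokens, image_tokens)
            concepts = concepts + update
            concepts = concepts + ffn(concepts)
        return concepts  # (batch, num_concepts, dim)


if __name__ == "__main__":
    patch_tokens = torch.randn(2, 196, 256)      # e.g. 14x14 ViT patch tokens
    concept_tokens = ConceptTokenizer()(patch_tokens)
    print(concept_tokens.shape)                   # torch.Size([2, 20, 256])
```

Because the concept tokens only ever read from the image tokens, each token's content is induced solely by what it attracts via cross-attention; the paper's Concept Disentangling Loss (not shown here) then pushes different tokens toward mutually exclusive concepts.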


