Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer

07/25/2022
by Yingyi Chen, et al.

The success of the Vision Transformer (ViT) in various computer vision tasks has promoted the ever-increasing prevalence of this convolution-free network. The fact that ViT works on image patches makes it potentially relevant to jigsaw puzzle solving, a classical self-supervised task that aims to reorder shuffled sequential image patches back to their natural form. Despite its simplicity, solving jigsaw puzzles has been demonstrated to be helpful for diverse tasks using Convolutional Neural Networks (CNNs), such as self-supervised feature representation learning, domain generalization, and fine-grained classification. In this paper, we explore solving jigsaw puzzles as a self-supervised auxiliary loss in ViT for image classification, named Jigsaw-ViT. We show two modifications that make Jigsaw-ViT superior to the standard ViT: discarding positional embeddings and masking patches randomly. Simple as it is, we find that Jigsaw-ViT improves both generalization and robustness over the standard ViT, whereas these two properties are usually a trade-off. Experimentally, we show that adding the jigsaw puzzle branch provides better generalization than ViT on large-scale image classification on ImageNet. Moreover, the auxiliary task also improves robustness to noisy labels on Animal-10N, Food-101N, and Clothing1M, as well as to adversarial examples. Our implementation is available at https://yingyichen-cyy.github.io/Jigsaw-ViT/.
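To make the auxiliary task concrete, the following is a minimal sketch of how the jigsaw puzzle inputs and targets might be constructed: the patch sequence is shuffled, a random subset of patches is masked, and the position labels are each patch's original index. The function name, masking scheme, and the use of `-1` as an ignore label are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_jigsaw_task(patches, mask_ratio=0.25, seed=None):
    """Build jigsaw puzzle inputs and position targets for image patches.

    patches: array of shape (num_patches, embed_dim).
    Returns (shuffled_patches, targets), where targets[i] is the original
    position of the i-th shuffled patch, or -1 if that patch was masked
    (masked positions would be ignored in the position-prediction loss).
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]

    # Shuffle the patch order; the label is each patch's original index.
    perm = rng.permutation(n)
    shuffled = patches[perm]          # fancy indexing returns a copy
    targets = perm.copy()

    # Randomly mask a fraction of patches (zero them out here for
    # simplicity) and mark their labels as "ignore".
    n_mask = int(mask_ratio * n)
    mask_idx = rng.choice(n, size=n_mask, replace=False)
    shuffled[mask_idx] = 0.0
    targets[mask_idx] = -1

    return shuffled, targets

# Usage: 9 patches, each a flattened 16-dim embedding.
patches = np.arange(9 * 16, dtype=np.float32).reshape(9, 16)
shuffled, targets = make_jigsaw_task(patches, mask_ratio=1 / 3, seed=0)
```

In the paper's setup, the shuffled (and masked) patch sequence is fed to the ViT encoder without positional embeddings, and a prediction head is trained to classify each unmasked patch's original position alongside the standard classification loss.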


