Can Vision Transformers Learn without Natural Images?

03/24/2021
by Kodai Nakashima et al.

Can we complete pre-training of Vision Transformers (ViT) without natural images and human-annotated labels? Although a pre-trained ViT seems to rely heavily on a large-scale dataset and human-annotated labels, recent large-scale datasets raise several problems: privacy violations, inadequate fairness protection, and labor-intensive annotation. In the present paper, we pre-train ViT without any image collections or annotation labor. We experimentally verify that our proposed framework partially outperforms sophisticated Self-Supervised Learning (SSL) methods such as SimCLRv2 and MoCov2 without using any natural images in the pre-training phase. Moreover, although the ViT pre-trained without natural images produces somewhat different visualizations from an ImageNet pre-trained ViT, it can interpret natural image datasets to a large extent. For example, the performance rates on the CIFAR-10 dataset are as follows: our proposal 97.6 vs. SimCLRv2 97.4 vs. ImageNet 98.0.
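The abstract does not spell out how the synthetic pre-training data is produced, but formula-driven supervised learning of this kind typically renders images from a mathematical formula and uses the formula's parameters as the class label, so no natural images or human annotation are needed. A minimal sketch, assuming an iterated-function-system (fractal) generator; the function name and parameterization here are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def render_fractal(params, n_points=10000, size=64, seed=0):
    """Render a binary image from an iterated function system (IFS).

    params: list of (2x2 matrix A, 2-vector b, weight w) affine maps.
    The parameter set itself serves as the category label, so images
    and labels come for free from the formula.
    """
    rng = np.random.default_rng(seed)
    mats = np.stack([a for a, _, _ in params])
    offs = np.stack([b for _, b, _ in params])
    weights = np.array([w for _, _, w in params], dtype=float)
    probs = weights / weights.sum()

    # Chaos game: repeatedly apply a randomly chosen affine map.
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        k = rng.choice(len(params), p=probs)
        x = mats[k] @ x + offs[k]
        pts[i] = x

    # Normalize the attractor into the image grid and rasterize.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scaled = (pts - lo) / np.maximum(hi - lo, 1e-8) * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    ij = scaled.astype(int)
    img[ij[:, 1], ij[:, 0]] = 1
    return img

# Example category: a Sierpinski-triangle IFS (three equal-weight
# contractions). Varying the matrices/offsets yields new categories.
sierpinski = [
    (0.5 * np.eye(2), np.array([0.0, 0.0]), 1.0),
    (0.5 * np.eye(2), np.array([0.5, 0.0]), 1.0),
    (0.5 * np.eye(2), np.array([0.25, 0.5]), 1.0),
]
img = render_fractal(sierpinski)
```

A ViT would then be pre-trained to classify such images by their generating parameters before fine-tuning on a natural-image dataset such as CIFAR-10.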


Related research

- Pre-training without Natural Images (01/21/2021): Is it possible to use convolutional neural networks pre-trained without ...
- Beyond Flatland: Pre-training with a Strong 3D Inductive Bias (11/30/2021): Pre-training on large-scale databases consisting of natural images and t...
- Exploring the Limits of Out-of-Distribution Detection (06/06/2021): Near out-of-distribution detection (OOD) is a major challenge for deep n...
- Visual Atoms: Pre-training Vision Transformers with Sinusoidal Waves (03/02/2023): Formula-driven supervised learning (FDSL) has been shown to be an effect...
- Transfer Learning for Microstructure Segmentation with CS-UNet: A Hybrid Algorithm with Transformer and CNN Encoders (08/26/2023): Transfer learning improves the performance of deep learning models by in...
- Learning to Interpret Satellite Images in Global Scale Using Wikipedia (05/07/2019): Despite recent progress in computer vision, fine-grained interpretation o...
- Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation (11/26/2018): The applicability of computer vision to real paintings and artworks has ...
