Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision

02/16/2022
by   Priya Goyal, et al.

Discriminative self-supervised learning makes it possible to train models on any random group of internet images and potentially recover salient information that helps differentiate between them. Applied to ImageNet, this leads to object-centric features that perform on par with supervised features on most object-centric downstream tasks. In this work, we ask whether this ability can be used to learn any salient and more representative information present in the diverse, unbounded set of images from across the globe. To do so, we train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn. We scale our model to a dense 10 billion parameters to avoid underfitting on such a large dataset. We extensively study and validate our model's performance on over 50 benchmarks, including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection, and many image classification datasets. The resulting model not only captures semantic information well; it also captures information about artistic style and learns salient information such as geolocations and multilingual word embeddings based on visual content alone. More importantly, we discover that such a model is more robust, fairer, less harmful, and less biased than supervised models or models trained on object-centric datasets such as ImageNet.
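The "discriminative" objective mentioned above can be illustrated with a minimal NumPy sketch of a generic contrastive (InfoNCE-style) loss: two augmented views of the same image should embed close together, while views of different images in the batch act as negatives. This is an assumption-laden toy illustration of the general family of objectives, not the specific method or hyperparameters used in the paper; `info_nce_loss` and `temperature=0.1` are illustrative choices.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Toy contrastive SSL objective (illustrative, not the paper's method).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each row of z1 should match the corresponding row of z2 (positive pair)
    and differ from every other row in the batch (negatives).
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Positive pairs sit on the diagonal; cross-entropy pushes each row's
    # softmax mass onto its own diagonal entry.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy usage: matched views yield a much lower loss than unrelated ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(z, z)                         # views agree
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))   # views unrelated
```

Minimizing such a loss over a large uncurated image set is what lets the model pick up whatever signal differentiates images, without any labels.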


