Scaling may be all you need for achieving human-level object recognition capacity with human-like visual experience

08/07/2023
by A. Emin Orhan, et al.

This paper asks whether current self-supervised learning methods, if sufficiently scaled up, would be able to reach human-level visual object recognition capabilities with the same type and amount of visual experience humans learn from. Previous work on this question only considered the scaling of data size. Here, we consider the simultaneous scaling of data size, model size, and image resolution. We perform a scaling experiment with vision transformers up to 633M parameters in size (ViT-H/14) trained with up to 5K hours of human-like video data (long, continuous, mostly egocentric videos) with image resolutions of up to 476x476 pixels. The efficiency of masked autoencoders (MAEs) as a self-supervised learning algorithm makes it possible to run this scaling experiment on an unassuming academic budget. We find that it is feasible to reach human-level object recognition capacity at sub-human scales of model size, data size, and image size, if these factors are scaled up simultaneously. To give a concrete example, we estimate that a 2.5B parameter ViT model trained with 20K hours (2.3 years) of human-like video data with a spatial resolution of 952x952 pixels should be able to reach roughly human-level accuracy on ImageNet. Human-level competence is thus achievable for a fundamental perceptual capability from human-like perceptual experience (human-like in both amount and type) with extremely generic learning algorithms and architectures and without any substantive inductive biases.
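To make the extrapolation in the final claim concrete, the sketch below fits a toy scaling curve and evaluates it at the configuration quoted in the abstract (2.5B parameters, 20K hours of video, 952x952 pixels). The saturating functional form, the single combined log-scale variable, and every data point are illustrative assumptions introduced for this example; they are not the paper's actual fit or measurements.

```python
# Toy scaling-law extrapolation (illustrative only; not the paper's fit).
# Assumes accuracy follows a saturating curve in one combined log-scale
# variable built from model size, data size, and image size.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical runs: [params (B), video hours (K), image side (px)] -> top-1 accuracy.
# All values below are placeholders, not results from the paper.
scales = np.array([
    [0.086, 1.3, 224],   # ViT-B-sized run (illustrative)
    [0.307, 2.5, 336],   # ViT-L-sized run (illustrative)
    [0.633, 5.0, 476],   # ViT-H-sized run (illustrative)
])
accuracy = np.array([0.40, 0.52, 0.60])  # placeholder accuracies

# Collapse the three simultaneously scaled factors into one log-scale axis.
x = np.log(scales).sum(axis=1)

def saturating_fit(x, a, b, c):
    # Accuracy approaches the asymptote a as the combined scale grows.
    return a - b * np.exp(-c * x)

popt, _ = curve_fit(saturating_fit, x, accuracy,
                    p0=[0.95, 0.8, 0.1],
                    bounds=([0.0, 0.0, 0.0], [1.0, np.inf, np.inf]))

# Evaluate at the configuration quoted in the abstract:
# 2.5B parameters, 20K hours of video, 952x952 resolution.
target = np.log([2.5, 20.0, 952.0]).sum()
print(f"Extrapolated top-1 accuracy at target scale: {saturating_fit(target, *popt):.3f}")
```

Collapsing the three factors into a single axis mirrors the abstract's point that they are scaled up simultaneously; a realistic analysis would fit the factors jointly against measured accuracies rather than the placeholder values used here.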


Related research

09/23/2021
How much "human-like" visual experience do current self-supervised learning algorithms need to achieve human-level object recognition?
This paper addresses a fundamental question: how good are our current se...

08/09/2023
A degree of image identification at sub-human scales could be possible with more advanced clusters
The purpose of the research is to determine if currently available self-...

06/06/2022
Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning
Vision Transformers (ViTs) and their multi-scale and hierarchical variat...

11/23/2022
Reason from Context with Self-supervised Learning
A tiny object in the sky cannot be an elephant. Context reasoning is cri...

05/24/2023
What can generic neural networks learn from a child's visual experience?
Young children develop sophisticated internal models of the world based ...

07/31/2020
Self-supervised learning through the eyes of a child
Within months of birth, children have meaningful expectations about the ...

04/27/2022
Can deep learning match the efficiency of human visual long-term memory in storing object details?
Humans have a remarkably large capacity to store detailed visual informa...
