Trivial or impossible – dichotomous data difficulty masks model differences (on ImageNet and beyond)

10/12/2021
by Kristof Meding, et al.

"The power of a generalization system follows directly from its biases" (Mitchell 1980). Today, CNNs are incredibly powerful generalisation systems – but to what degree have we understood how their inductive bias influences model decisions? We here attempt to disentangle the various aspects that determine how a model decides. In particular, we ask: what makes one model decide differently from another? In a meticulously controlled setting, we find that (1.) irrespective of the network architecture or objective (e.g. self-supervised, semi-supervised, vision transformers, recurrent models) all models end up with a similar decision boundary. (2.) To understand these findings, we analysed model decisions on the ImageNet validation set from epoch to epoch and image by image. We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0 and 11.5 could possibly be responsible for the differences between two models' decision boundaries. (3.) Only removing the "impossible" and "trivial" images allows us to see pronounced differences between models. (4.) Humans are highly accurate at predicting which images are "trivial" and "impossible" for CNNs (81.4 This implies that in future comparisons of brains, machines and behaviour, much may be gained from investigating the decisive role of images and the distribution of their difficulties.

