Do CIFAR-10 Classifiers Generalize to CIFAR-10?

06/01/2018
by Benjamin Recht, et al.

Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%). Yet more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.
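The evaluation protocol described above (score one fixed model on the original test set and on a freshly collected test set, then compare) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the toy all-zeros "model" are hypothetical, and a real run would plug in an actual CIFAR-10 classifier and the two image/label sets.

```python
import numpy as np

def accuracy(predictions, labels):
    # Fraction of predicted class indices that match the ground truth.
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    return float((predictions == labels).mean())

def accuracy_drop(model, original_set, new_set):
    """Evaluate one model on both test sets and report the gap.

    `model` is any callable mapping a batch of images to predicted class
    indices; `original_set` and `new_set` are (images, labels) pairs.
    All names here are illustrative, not from the paper's code.
    """
    orig_acc = accuracy(model(original_set[0]), original_set[1])
    new_acc = accuracy(model(new_set[0]), new_set[1])
    return orig_acc, new_acc, orig_acc - new_acc

# Toy demonstration with a trivial constant classifier:
model = lambda images: np.zeros(len(images), dtype=int)
original = (np.zeros((10, 32, 32, 3)), np.array([0] * 9 + [1]))
new_set = (np.zeros((10, 32, 32, 3)), np.array([0] * 8 + [1, 1]))
orig_acc, new_acc, drop = accuracy_drop(model, original, new_set)
```

The key point the paper exploits is that `drop` isolates distribution sensitivity for a single fixed model, since the same weights are scored on both sets.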


Related research

- Detecting Overfitting via Adversarial Examples (03/06/2019): The repeated reuse of test sets in popular benchmark problems raises dou...
- The Effect of Natural Distribution Shift on Question Answering Models (04/29/2020): We build four new test sets for the Stanford Question Answering Dataset ...
- Deconstructing Distributions: A Pointwise Framework of Learning (02/20/2022): In machine learning, we traditionally evaluate the performance of a sing...
- Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features (11/05/2021): Machine learning has demonstrated remarkable prediction accuracy over i...
- CIFAR-10: KNN-based Ensemble of Classifiers (11/15/2016): In this paper, we study the performance of different classifiers on the ...
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively (02/25/2020): The learning of hierarchical representations for image classification ha...
- Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a different Data Distribution (06/08/2020): This work presents a novel training technique for deep neural networks t...
