Detecting Overfitting via Adversarial Examples

03/06/2019
by Roman Werpachowski, et al.

The repeated reuse of test sets in popular benchmark problems raises doubts about the credibility of reported test error rates. Verifying whether a learned model is overfitted to a test set is challenging, as independent test sets drawn from the same data distribution are usually unavailable, while other test sets may introduce a distribution shift. We propose a new hypothesis test that uses only the original test data to detect overfitting. It relies on a new unbiased error estimate based on adversarial examples generated from the test data and importance weighting. Overfitting is detected if this error estimate differs sufficiently from the original test error rate. The power of the method is illustrated using Monte Carlo simulations on a synthetic problem. We develop a specialized variant of our dependence detector for multiclass image classification and apply it to test overfitting of recent models on two popular real-world image classification benchmarks. On ImageNet, our method was not able to detect overfitting to the test set for a state-of-the-art classifier, while on CIFAR-10 we found strong evidence of overfitting for the two recent model architectures we considered, and weak evidence of overfitting at the level of individual training runs.
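
To make the recipe above concrete, here is a minimal Python sketch of the overall procedure. It is an illustrative reconstruction, not the authors' implementation: `attack(model, x, y)` and `density_ratio(x_adv, x)` are hypothetical placeholders for the adversarial-example generator and the importance-weight estimate p(x_adv)/p(x), and the paired z-test at the end is a simple stand-in for the paper's actual hypothesis test, whose statistic and weighting scheme are more involved.

```python
import numpy as np
from scipy.stats import norm

def adversarial_overfitting_test(model, x_test, y_test, attack, density_ratio, alpha=0.05):
    """Compare the plain test error with an importance-weighted error computed
    on adversarial examples generated from the same test set; a significant
    difference is taken as evidence of overfitting."""
    # Per-example 0/1 losses on the original test set.
    test_losses = (model.predict(x_test) != y_test).astype(float)

    # Adversarial examples generated from the test data, with importance
    # weights correcting for the distribution shift the attack induces.
    x_adv = attack(model, x_test, y_test)          # hypothetical helper
    w = density_ratio(x_adv, x_test)               # hypothetical helper: p(x_adv)/p(x)
    adv_losses = w * (model.predict(x_adv) != y_test).astype(float)

    # Paired two-sided z-test on the difference of the two error estimates.
    diffs = adv_losses - test_losses
    z = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(y_test)))
    p_value = 2.0 * norm.sf(abs(z))
    return test_losses.mean(), adv_losses.mean(), p_value, p_value < alpha
```

Pairing the per-example losses before computing the test statistic is a natural choice here, since both error estimates are computed from the same test points and their sampling noise is correlated.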

Related research

06/01/2018 · Do CIFAR-10 Classifiers Generalize to CIFAR-10?
05/29/2019 · Model Similarity Mitigates Test Set Overuse
12/11/2017 · Training Ensembles to Detect Adversarial Examples
05/24/2019 · The advantages of multiple classes for reducing overfitting from test set reuse
02/25/2021 · Rip van Winkle's Razor: A Simple Estimate of Overfit to Test Data
04/29/2020 · The Effect of Natural Distribution Shift on Question Answering Models
06/11/2020 · A new measure for overfitting and its implications for backdooring of deep learning
