The advantages of multiple classes for reducing overfitting from test set reuse

05/24/2019
by Vitaly Feldman, et al.

Excessive reuse of holdout data can lead to overfitting. However, there is little concrete evidence of significant overfitting due to holdout reuse in popular multiclass benchmarks today. Known results show that, in the worst case, revealing the accuracy of k adaptively chosen classifiers on a data set of size n allows one to create a classifier with bias of Θ(√(k/n)) for any binary prediction problem. We show a new upper bound of Õ(max{√(k log(n)/(mn)), k/n}) on the worst-case bias that any attack can achieve in a prediction problem with m classes. Moreover, we present an efficient attack that achieves a bias of Ω(√(k/(m²n))) and improves on previous work for the binary setting (m=2). We also present an inefficient attack that achieves a bias of Ω̃(k/n). Complementing our theoretical work, we give new practical attacks to stress-test multiclass benchmarks by aiming to create as large a bias as possible with a given number of queries. Our experiments show that the additional uncertainty of prediction with a large number of classes indeed mitigates the effect of our best attacks. Our work extends developments in understanding overfitting due to adaptive data analysis to multiclass prediction problems. It also bears out the surprising fact that multiclass prediction problems are significantly more robust to overfitting when reusing a test (or holdout) dataset. This offers an explanation as to why popular multiclass prediction benchmarks, such as ImageNet, may enjoy a longer lifespan than intuition from the literature on binary classification suggests.
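To make the binary baseline concrete, the following is a minimal sketch (not the paper's own attack) of the classic adaptive holdout-reuse mechanism behind the Θ(√(k/n)) bias: query the holdout with k random classifiers, keep the ones whose empirical accuracy happened to exceed 1/2, and majority-vote them. All names and the parameter choices (n, k, the seed) here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 1000, 400           # holdout size, number of adaptive queries (illustrative)
y = rng.integers(0, 2, n)  # holdout labels; pure noise, so any classifier's true accuracy is 1/2

# Step 1: query the holdout with k random classifiers (each a uniformly
# random labeling of the n points) and record their empirical accuracies.
preds = rng.integers(0, 2, (k, n))
accs = (preds == y).mean(axis=1)

# Step 2: majority-vote the classifiers whose holdout accuracy exceeded
# 1/2. Each kept classifier is slightly correlated with y *on the
# holdout* (roughly a 1/sqrt(n) edge), and the edges add up in the vote.
keep = preds[accs > 0.5]
vote = (keep.mean(axis=0) > 0.5).astype(int)

bias = (vote == y).mean() - 0.5
print(f"holdout bias of aggregated classifier: {bias:.3f}")
print(f"sqrt(k/n) scale for comparison:        {np.sqrt(k / n):.3f}")
```

Even though every individual classifier is independent of the labels, the aggregated classifier shows a clearly positive holdout bias at the √(k/n) scale (up to constants). The paper's point is that with m classes this mechanism weakens: each query leaks less usable information, so the achievable bias shrinks with m.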


