I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively

02/25/2020
by Haotao Wang, et al.

The learning of hierarchical representations for image classification has experienced an impressive series of successes, due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on small, fixed sets of test images, which are deemed to be extremely sparsely distributed in the space of all natural images. It is thus questionable whether recent performance improvements on these excessively re-used test sets generalize to real-world natural images with much richer content variations. Inspired by efficient stimulus selection for testing perceptual models in psychophysical and physiological studies, we present an alternative framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers on fixed test images, we adaptively sample a small test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over the WordNet hierarchy. Human labeling of the resulting model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights into potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers, noting that the framework is readily extensible and makes it cost-effective to add future classifiers to the competition. Code can be found at https://github.com/TAMU-VITA/MAD.
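The sampling step described above lends itself to a compact sketch. The following is a minimal, hypothetical Python illustration of MAD-style test-set selection, not the authors' released code: it assumes two black-box classifiers that return ImageNet class indices and a placeholder idx_to_synset mapping from class index to a WordNet noun synset name; the paper's exact discrepancy measure and ranking details may differ.

```python
# Minimal sketch of MAD-style test-set selection (not the authors' exact code).
# Assumptions: `model_a` and `model_b` map an image to an ImageNet class index,
# and `idx_to_synset` maps that index to a WordNet noun synset name such as
# 'dog.n.01'; both are placeholders, not part of the paper's release.

import heapq
from nltk.corpus import wordnet as wn

def wordnet_distance(name_a, name_b):
    """Discrepancy between two predicted labels as a WordNet path distance.

    path_similarity lies in (0, 1] with higher meaning closer, so we invert
    it; label pairs with no connecting path get an infinite distance."""
    sim = wn.synset(name_a).path_similarity(wn.synset(name_b))
    return (1.0 / sim) - 1.0 if sim else float("inf")

def mad_select(images, model_a, model_b, idx_to_synset, k=30):
    """Return the k images on which the two classifiers disagree the most."""
    scored = []
    for img in images:
        pred_a = idx_to_synset[model_a(img)]
        pred_b = idx_to_synset[model_b(img)]
        scored.append((wordnet_distance(pred_a, pred_b), img))
    # The largest-discrepancy images form the model-dependent test set,
    # which is then sent to human annotators for ground-truth labels.
    return [img for _, img in heapq.nlargest(k, scored, key=lambda t: t[0])]

# Hypothetical usage (all names are placeholders):
# test_set = mad_select(unlabeled_corpus, resnet50_predict, vgg16_predict,
#                       idx_to_synset, k=30)
```

Because only the k selected images are labeled by humans, the annotation cost stays fixed regardless of how large the unlabeled corpus is, which is what makes adding a new classifier to the competition cheap.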

