Identifying Model Weakness with Adversarial Examiner

11/25/2019
by Michelle Shu, et al.

Machine learning models are usually evaluated by their average-case performance on a test set. However, this is not always ideal, because in some sensitive domains (e.g. autonomous driving), it is the worst-case performance that matters more. In this paper, we are interested in systematically exploring the input data space to identify the weaknesses of the model under evaluation. We propose to use an adversarial examiner in the testing stage. Unlike the standard practice of always handing out the same (distribution of) test data, the adversarial examiner dynamically selects the next test data point based on the testing history so far, with the goal of undermining the model's performance. This sequence of test data not only helps us understand the current model, but also serves as constructive feedback for improving the model in the next iteration. We conduct experiments on ShapeNet object classification and show that our adversarial examiner successfully puts more emphasis on the weaknesses of the model, preventing performance estimates from being overly optimistic.
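To make the idea concrete, here is a minimal sketch of the examiner loop described above: rather than drawing test inputs i.i.d., the examiner picks the next input based on the testing history, concentrating on regions of the input space where the model has already failed. The epsilon-greedy selection policy, the `region` bucketing, and the toy model below are all illustrative assumptions, not the paper's actual method.

```python
import random

def adversarial_examiner(model, pool, n_rounds, region, explore=0.2, seed=0):
    """Hypothetical adversarial examiner: select each test input based on
    the history so far, probing the region with the most observed failures
    (epsilon-greedy; the paper's actual selection policy may differ)."""
    rng = random.Random(seed)
    failures = {}  # region key -> number of observed failures
    history = []   # (input, model_was_correct) pairs
    for _ in range(n_rounds):
        if failures and rng.random() > explore:
            # Exploit: probe the region where the model has failed most often.
            worst = max(failures, key=failures.get)
            x = rng.choice([c for c in pool if region(c) == worst])
        else:
            # Explore: fall back to uniform sampling over the pool.
            x = rng.choice(pool)
        correct = model(x)
        history.append((x, correct))
        if not correct:
            failures[region(x)] = failures.get(region(x), 0) + 1
    return history

# Toy model that is wrong on one contiguous slice of the input space.
toy_model = lambda x: not (40 <= x < 60)
history = adversarial_examiner(toy_model, list(range(100)), n_rounds=200,
                               region=lambda x: x // 10)
failure_rate = sum(1 for _, ok in history if not ok) / len(history)
# failure_rate is well above the ~20% a uniform test set would report,
# i.e. the examiner's estimate is much closer to the worst case.
```

Here the uniform average-case evaluation would report roughly 80% accuracy, while the examiner quickly homes in on the failing slice and drives the observed failure rate far higher, which is exactly the "less overly optimistic" estimate the abstract argues for.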


