Evaluating Adversarial Robustness with Expected Viable Performance

09/18/2023
by Ryan McCoppin, et al.

We introduce a metric for evaluating the robustness of a classifier, with particular attention to adversarial perturbations, in terms of its expected functionality over possible perturbation bounds. A classifier is considered non-functional (that is, it has a functionality of zero) with respect to a perturbation bound if a conventional measure of performance, such as classification accuracy, falls below a minimally viable threshold when the classifier is tested on examples perturbed within that bound. Defining robustness as an expected value over perturbation bounds is motivated by a domain-general approach to robustness quantification.
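To make the definition concrete, the sketch below (not the authors' implementation) shows one way the expected-functionality computation could look in Python. It assumes accuracy has already been measured at a sample of perturbation bounds (for example, under PGD attacks at several L-infinity radii) and that the bounds are weighted uniformly unless a distribution is supplied; the function name, the threshold tau, and the example numbers are all hypothetical.

```python
import numpy as np

def expected_viable_performance(accuracies, tau=0.5, weights=None):
    """Estimate expected functionality over sampled perturbation bounds.

    accuracies: measured accuracy at each sampled perturbation bound
    tau:        minimally viable accuracy threshold
    weights:    probability mass on each bound (uniform if None)
    """
    acc = np.asarray(accuracies, dtype=float)
    # Functionality is 1 where accuracy stays at or above the viability
    # threshold, and 0 where the classifier is considered non-functional.
    functional = (acc >= tau).astype(float)
    if weights is None:
        w = np.full(acc.shape, 1.0 / acc.size)
    else:
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()  # normalize to a probability distribution
    # Robustness as an expected value of functionality over the bounds.
    return float(np.sum(w * functional))

# Hypothetical accuracies at increasing L-infinity perturbation bounds.
eps_bounds = [0.0, 0.01, 0.02, 0.03, 0.05]
accs = [0.94, 0.81, 0.62, 0.41, 0.18]
print(expected_viable_performance(accs, tau=0.5))  # -> 0.6
```

In this toy example the classifier remains viable at three of the five sampled bounds, so its expected viable performance under a uniform distribution is 0.6; a non-uniform `weights` argument would encode a prior over which perturbation bounds are expected in deployment.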

