Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning

11/24/2020
by Luiz F. O. Chamon, et al.

Prediction credibility measures, in the form of confidence intervals or probability distributions, are fundamental in statistics and machine learning to characterize model robustness, detect out-of-distribution samples (outliers), and protect against adversarial attacks. To be effective, these measures should (i) account for the wide variety of models used in practice, (ii) be computable for already trained models or at least avoid modifying established training procedures, (iii) forgo the use of data, which can expose them to the same robustness issues and attacks as the underlying model, and (iv) come with theoretical guarantees. These principles underlie the framework developed in this work, which expresses credibility as a risk-fit trade-off, i.e., a compromise between how much the fit can be improved by perturbing the model input and the magnitude of that perturbation (risk). Using a constrained optimization formulation and duality theory, we analyze this compromise and show that the balance can be determined counterfactually, without having to test multiple perturbations. The result is an unsupervised, a posteriori method for assigning prediction credibility to any (possibly non-convex) differentiable model, from RKHS-based solutions to any feedforward, convolutional, or graph neural network architecture. Its use is illustrated in data filtering and defense against adversarial attacks.
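To make the risk-fit trade-off concrete, below is a minimal sketch in PyTorch. It is an illustrative primal-dual (Arrow-Hurwicz style) scheme, not the paper's exact algorithm: the cross-entropy fit term, the squared-ℓ2 risk, pseudo-labeling with the model's own prediction (which keeps the method label-free), the step sizes, and the use of the converged multiplier as a credibility proxy are all assumptions made for this example.

```python
# Minimal sketch (assumptions, not the paper's exact method): approximate
# the constrained problem
#     min_delta  loss(model(x + delta), y_hat)  s.t.  ||delta||^2 <= eps
# by alternating gradient descent on delta and ascent on the multiplier.
import torch
import torch.nn.functional as F


def credibility_score(model, x, eps=0.1, steps=100, lr=0.01, lr_dual=0.05):
    """Return the (approximate) optimal multiplier of the risk constraint.

    By duality, the multiplier is the shadow price of the perturbation
    budget: how much the fit would improve per unit of extra budget. A
    large value means small perturbations still improve the fit a lot,
    which this sketch reads as low credibility of the prediction at x.
    """
    model.eval()
    with torch.no_grad():
        y_hat = model(x).argmax(dim=1)  # pseudo-label keeps the method unsupervised

    delta = torch.zeros_like(x, requires_grad=True)
    lam = torch.tensor(1.0)  # dual variable of ||delta||^2 <= eps

    for _ in range(steps):
        fit = F.cross_entropy(model(x + delta), y_hat)  # fit term
        risk = delta.pow(2).sum()                       # risk term
        lagrangian = fit + lam * (risk - eps)
        grad, = torch.autograd.grad(lagrangian, delta)
        with torch.no_grad():
            delta -= lr * grad                                        # primal descent
            lam = torch.clamp(lam + lr_dual * (risk - eps), min=0.0)  # dual ascent

    return lam.item()
```

The budget eps, the step sizes, and reading the multiplier as an (inverse) credibility score are illustrative choices; the paper's counterfactual result is precisely that this balance can be characterized through duality without sweeping over multiple perturbation budgets, as this naive iteration effectively does.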


