Optimization and Optimizers for Adversarial Robustness

03/23/2023
by Hengyue Liang, et al.

Empirical robustness evaluation (RE) of deep learning models against adversarial perturbations entails solving nontrivial constrained optimization problems. The numerical algorithms commonly used to solve them in practice rely predominantly on projected gradient methods, and mostly handle perturbations modeled by the ℓ_1, ℓ_2, and ℓ_∞ distances. In this paper, we introduce a novel algorithmic framework that blends a general-purpose constrained-optimization solver, PyGRANSO, with Constraint Folding (PWCF), which can add reliability and generality to state-of-the-art RE packages such as AutoAttack. Regarding reliability, PWCF provides solutions with stationarity measures and feasibility tests to assess solution quality. Regarding generality, PWCF can handle perturbation models that are typically inaccessible to existing projected gradient methods; the main requirement is that the distance metric be almost everywhere differentiable. Taking advantage of PWCF and other existing numerical algorithms, we further explore the distinct patterns in the solutions found when solving these optimization problems with various combinations of losses, perturbation models, and optimization algorithms. We then discuss the implications of these patterns for current robustness evaluation and adversarial training.
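To make the setting concrete: the abstract frames RE as a constrained optimization problem — maximize a loss over perturbations whose distance from the original input is bounded — which existing packages typically solve with projected gradient methods on ℓ_p balls. Below is a minimal sketch of such a projected gradient ascent step for the ℓ_∞ case, on a toy quadratic loss; the function name `pgd_linf`, the step sizes, and the toy loss are illustrative assumptions, not code from the paper or from AutoAttack.

```python
import numpy as np

def pgd_linf(grad_fn, x0, eps=0.1, step=0.02, iters=50):
    """Projected gradient ascent within an l_inf ball of radius eps
    around x0. grad_fn returns the gradient of the loss to maximize."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_fn(x)
        x = x + step * np.sign(g)           # steepest-ascent step for l_inf geometry
        x = np.clip(x, x0 - eps, x0 + eps)  # project back onto the l_inf ball
    return x

# Toy loss L(x) = ||x - t||^2 for a fixed target t, so grad L(x) = 2(x - t).
# Maximizing it pushes x away from t, up to the eps boundary.
t = np.array([1.0, -1.0])
x0 = np.zeros(2)
x_adv = pgd_linf(lambda x: 2 * (x - t), x0, eps=0.1)
```

The projection here is a simple coordinate-wise clip, which is exactly why ℓ_∞ (and ℓ_1, ℓ_2) balls are convenient for these methods; for more general, almost-everywhere-differentiable distance metrics no such cheap projection exists, which is the gap a general-purpose constrained solver like PyGRANSO with constraint folding is meant to fill.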


