Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search

11/09/2021
by Pengfei Xia, et al.

Numerous studies have demonstrated that deep neural networks are easily misled by adversarial examples. Effectively evaluating a model's adversarial robustness is important for its deployment in practical applications. Currently, a common type of evaluation approximates the adversarial risk of a model, used as a robustness indicator, by constructing malicious instances and executing attacks. Unfortunately, there is an error (gap) between the approximate value and the true value. Previous studies manually design attack methods to achieve a smaller error, which is inefficient and may miss better solutions. In this paper, we formulate tightening the approximation error as an optimization problem and solve it algorithmically. More specifically, we first show that replacing the non-convex, discontinuous 0-1 loss with a surrogate loss, a necessary compromise in calculating the approximation, is one of the main sources of the error. We then propose AutoLoss-AR, the first method for searching loss functions that tighten the approximation error of adversarial risk. Extensive experiments are conducted in multiple settings. The results demonstrate the effectiveness of the proposed method: the best-discovered loss functions outperform the handcrafted baseline by at least 0.9% in the respective settings. We also verify that the searched losses transfer to other settings, and we explore why they are better than the baseline by visualizing the local loss landscape.
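To make the abstract's core idea concrete, here is a minimal, hypothetical PyTorch sketch of the approximation it describes: the 0-1 adversarial risk cannot be maximized directly, so an attack (PGD here) climbs a differentiable surrogate loss instead, and the resulting error rate is a lower bound on the true risk. This is not the authors' implementation; the model, data loader, attack hyperparameters, and the temperature-scaled loss family (a stand-in for whatever space AutoLoss-AR actually searches) are all assumptions for illustration.

```python
# Minimal sketch, NOT the authors' code: estimating adversarial risk with a
# surrogate-driven PGD attack, then selecting the surrogate that tightens the
# estimate. `model` and `loader` are assumed to exist; eps/alpha/steps are
# illustrative L_inf-attack hyperparameters.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, surrogate_loss, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD that ascends a differentiable surrogate loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(surrogate_loss(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in valid pixel range
    return x_adv.detach()

def approx_adversarial_risk(model, loader, surrogate_loss):
    """Empirical 0-1 adversarial risk measured on the attack's outputs.
    Because the attack only maximizes the surrogate, this underestimates
    the true risk; a better surrogate shrinks that gap."""
    wrong = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, surrogate_loss)
        with torch.no_grad():
            wrong += (model(x_adv).argmax(dim=1) != y).sum().item()
            total += y.numel()
    return wrong / total

def make_scaled_ce(temperature):
    """A toy one-parameter loss family (temperature-scaled cross-entropy),
    standing in for the searched loss space."""
    return lambda logits, y: F.cross_entropy(logits / temperature, y)

def search_surrogate(model, loader, temperatures=(0.5, 1.0, 2.0, 4.0)):
    """Keep the surrogate whose attack exposes the most errors, i.e. the
    tightest lower bound on the true adversarial risk."""
    return max(
        ((t, approx_adversarial_risk(model, loader, make_scaled_ce(t)))
         for t in temperatures),
        key=lambda pair: pair[1],
    )
```

A real search would use a richer loss space and a smarter search procedure than this grid sweep, but the selection criterion, maximizing the estimated risk to tighten the lower bound, is the point being illustrated.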


research · 09/02/2023
Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models
We propose a general approach to evaluating the performance of robust es...

research · 08/15/2022
A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design
The phenomenon of adversarial examples has been revealed in variant scen...

research · 09/07/2021
Adversarial Parameter Defense by Multi-Step Risk Minimization
Previous studies demonstrate DNNs' vulnerability to adversarial examples...

research · 01/07/2021
Understanding the Error in Evaluating Adversarial Robustness
Deep neural networks are easily misled by adversarial examples. Although...

research · 10/15/2020
Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
We propose a general framework for searching surrogate losses for mainst...

research · 10/15/2021
Robustness of different loss functions and their impact on networks learning capability
Recent developments in AI have made it ubiquitous, every industry is try...

research · 11/02/2021
HydraText: Multi-objective Optimization for Adversarial Textual Attack
The field of adversarial textual attack has significantly grown over the...
