Group-based Robustness: A General Framework for Customized Robustness in the Real World

06/29/2023
by Weiran Lin, et al.

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, we formally define a new metric, termed group-based robustness, that complements existing metrics and is better suited for evaluating model performance in certain attack scenarios. We show empirically that group-based robustness allows us to distinguish models' vulnerability to specific threat models in situations where traditional robustness metrics do not apply. Moreover, to measure group-based robustness efficiently and accurately, we 1) propose two loss functions and 2) identify three new attack strategies. We show empirically that, with comparable success rates, finding evasive samples using our new loss functions saves computation by a factor as large as the number of targeted classes, and finding evasive samples using our new attack strategies saves time by up to 99% compared to brute-force search methods. Finally, we propose a defense method that increases group-based robustness by up to 3.52×.
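The abstract describes attacks that aim to move an input from one of a set of source classes into any class within a set of target classes, and loss functions that let a single attack run cover the whole target set rather than one run per targeted class. Below is a minimal PyTorch sketch of one plausible group-targeted margin loss; the function name and formulation are illustrative assumptions, not the paper's actual loss functions.

```python
import torch

def group_target_loss(logits: torch.Tensor, target_classes: list[int]) -> torch.Tensor:
    """Margin loss for attacks targeting ANY class in a target set.

    Illustrative sketch only; not the paper's exact formulation.
    logits: (batch, num_classes) model outputs.
    Returns a per-example loss that becomes negative once some
    target class outscores every non-target class.
    """
    num_classes = logits.shape[-1]
    mask = torch.zeros(num_classes, dtype=torch.bool, device=logits.device)
    mask[target_classes] = True

    best_target = logits[:, mask].max(dim=-1).values    # best logit inside the target set
    best_other = logits[:, ~mask].max(dim=-1).values    # best logit outside the target set
    return best_other - best_target

# Toy usage: random logits for a 10-class model, targeting classes {3, 7, 9}.
logits = torch.randn(4, 10)
print(group_target_loss(logits, [3, 7, 9]))
```

Minimizing a loss of this shape inside a standard gradient-based attack (e.g., PGD) pushes each input toward whichever target class is nearest for that input, which is how one attack run can stand in for separate per-class targeted attacks.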
