Adversarial Robustness with Non-uniform Perturbations

02/24/2021
by Ecenaz Erdemir, et al.

Robustness of machine learning models is critical for security-related applications, where real-world adversaries are uniquely focused on evading neural network based detectors. Prior work mainly focuses on crafting adversarial examples with small, uniform, norm-bounded perturbations across features to satisfy the requirement of imperceptibility. Although such approaches are valid for images, uniform perturbations do not result in realistic adversarial examples in domains such as malware, finance, and social networks. In these types of applications, features typically have semantically meaningful dependencies. The key idea of our proposed approach is to enable non-uniform perturbations that adequately represent these feature dependencies during adversarial training. We propose using characteristics of the empirical data distribution, capturing both the correlations between features and the importance of the features themselves. Using experimental datasets for malware classification, credit risk prediction, and spam detection, we show that our approach is more robust to real-world attacks. Our approach can be adapted to other domains where non-uniform perturbations more accurately represent realistic adversarial examples.
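The abstract describes the core idea (per-feature perturbation budgets derived from the empirical data distribution) without giving the formulation. The sketch below shows what non-uniform adversarial training can look like in PyTorch; it is a minimal, hypothetical illustration, not the paper's actual method. The names `feature_budgets` and `pgd_nonuniform`, and the scaling rule (budgets grow with feature spread and shrink with feature importance), are assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

def feature_budgets(X, importance, base_eps=0.1):
    """Hypothetical per-feature budgets: scale with feature spread and
    inversely with an externally supplied importance score (e.g., from
    a tree model). This rule is an illustrative assumption."""
    std = X.std(dim=0) + 1e-8
    return base_eps * std / (importance + 1e-8)

def pgd_nonuniform(model, x, y, eps_vec, alpha=0.01, steps=10):
    """PGD where feature i is confined to [x_i - eps_i, x_i + eps_i]
    instead of a single uniform L-infinity ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the per-feature box around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - eps_vec), x + eps_vec)
    return x_adv.detach()

# Toy usage: 20 tabular features, binary classification.
torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
importance = torch.rand(20)  # stand-in for real importance scores
eps_vec = feature_budgets(X, importance)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):  # adversarial training loop
    x_adv = pgd_nonuniform(model, X, y, eps_vec)
    opt.zero_grad()
    nn.functional.cross_entropy(model(x_adv), y).backward()
    opt.step()
```

The only difference from standard PGD adversarial training is the projection step: a per-feature box replaces the single uniform ball, which is where correlation- or importance-aware budgets would enter.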

Related research

10/24/2020
Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks
Adversarial Training is proved to be an efficient method to defend again...

08/10/2018
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection
Machine learning based solutions have been successfully employed for aut...

10/25/2018
Evading classifiers in discrete domains with provable optimality guarantees
Security-critical applications such as malware, fraud, or spam detection...

03/15/2019
On Certifying Non-uniform Bound against Adversarial Attacks
This work studies the robustness certification problem of neural network...

07/01/2020
Robust Learning against Logical Adversaries
Test-time adversarial attacks have posed serious challenges to the robus...

10/07/2020
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples
Recent work on adversarial learning has focused mainly on neural network...

08/03/2023
URET: Universal Robustness Evaluation Toolkit (for Evasion)
Machine learning models are known to be vulnerable to adversarial evasio...
