
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks

by Abderrahmen Amich et al.

Machine Learning (ML) models are susceptible to evasion attacks. Evasion success is typically assessed using the aggregate evasion rate, and it is an open question whether the aggregate evasion rate enables feature-level diagnosis of how adversarial perturbations influence evasive predictions. In this paper, we introduce a novel framework that harnesses explainable ML methods to guide high-fidelity assessment of ML evasion attacks. Our framework enables explanation-guided correlation analysis between pre-evasion perturbations and post-evasion explanations. Toward systematic assessment of ML evasion attacks, we propose and evaluate a novel suite of model-agnostic metrics for sample-level and dataset-level correlation analysis. Using malware and image classifiers, we conduct comprehensive evaluations across diverse model architectures and complementary feature representations. Our explanation-guided correlation analysis reveals correlation gaps between adversarial samples and the corresponding perturbations performed on them. Using a case study on explanation-guided evasion, we show the broader usage of our methodology for assessing the robustness of ML models.
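To make the core idea concrete, the sketch below illustrates one plausible sample-level correlation score in the spirit the abstract describes: comparing the features an attacker perturbed against the features a post-evasion explanation highlights. This is not the paper's actual metric suite; the function names, the toy linear model (attribution = weight × feature value), and the Jaccard-overlap score are all illustrative assumptions, standing in for real explainers such as SHAP or LIME.

```python
# Illustrative sketch, NOT the paper's actual metric: quantify how much a
# sample's post-evasion explanation overlaps with the features the attacker
# actually perturbed. Attributions come from a toy linear model
# (weight * feature value); a real pipeline would use SHAP, LIME, etc.

def top_k_features(attributions, k):
    """Indices of the k features with the largest |attribution|."""
    ranked = sorted(range(len(attributions)),
                    key=lambda i: abs(attributions[i]), reverse=True)
    return set(ranked[:k])

def perturbation_explanation_overlap(x_orig, x_adv, weights, k=3):
    """Jaccard overlap between the perturbed-feature set and the top-k
    explanation features of the adversarial sample (sample-level score).
    1.0 = explanation exactly matches the perturbation; 0.0 = no overlap."""
    perturbed = {i for i, (a, b) in enumerate(zip(x_orig, x_adv)) if a != b}
    attributions = [w * v for w, v in zip(weights, x_adv)]
    top_k = top_k_features(attributions, k)
    union = perturbed | top_k
    return len(perturbed & top_k) / len(union) if union else 0.0

# Example: the attacker flipped features 1 and 4; the toy explanation's
# top-3 features are {0, 1, 4}, so the overlap is 2/3.
x_orig = [1.0, 0.0, 0.5, 0.2, 0.0]
x_adv  = [1.0, 1.0, 0.5, 0.2, 1.0]
weights = [0.9, -1.2, 0.1, 0.05, 1.5]
score = perturbation_explanation_overlap(x_orig, x_adv, weights, k=3)
```

A dataset-level variant could simply average this score over all successful adversarial samples; a persistent gap between perturbed and explained features across a dataset is the kind of "correlation gap" the abstract refers to.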


