Poisoning Network Flow Classifiers

06/02/2023
by   Giorgio Severi, et al.

As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical. This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers. We investigate the challenging scenario of clean-label poisoning, where the adversary's capabilities are constrained to tampering only with the training data, without the ability to arbitrarily modify the training labels or any other component of the training process. We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates. Finally, we design novel strategies to generate stealthy triggers, including an approach based on generative Bayesian network models, with the goal of minimizing the conspicuousness of the trigger and thus making detection of an ongoing poisoning campaign more challenging. Our findings provide significant insights into the feasibility of poisoning attacks on network traffic classifiers used in multiple scenarios, including detecting malicious communication and application classification.
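The clean-label attack described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual method): it uses a surrogate model's feature importances as a stand-in for the interpretability-based trigger selection, crafts a fixed trigger in the most influential features, and stamps it onto a small fraction of training samples that already carry the target label, so no labels are ever changed. All data, feature counts, and parameter values here are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical tabular flow features (e.g., duration, byte counts, packet counts).
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic flow labels

# Step 1: rank features with a surrogate model; this stands in for the
# interpretability-based selection (e.g., SHAP-style attributions) in the paper.
surrogate = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
trigger_feats = np.argsort(surrogate.feature_importances_)[-2:]  # top-2 features

# Step 2: craft a trigger as fixed, unusual values in the selected features.
trigger_vals = X[:, trigger_feats].max(axis=0) * 1.5

# Step 3: clean-label poisoning. Only samples already labeled with the
# target class are modified; the labels themselves are left untouched.
target_label = 0
poison_rate = 0.01  # very low poisoning rate, as in the abstract
candidates = np.flatnonzero(y == target_label)
poisoned = rng.choice(candidates, size=int(poison_rate * len(X)), replace=False)
X_poisoned = X.copy()
X_poisoned[np.ix_(poisoned, trigger_feats)] = trigger_vals

# Step 4: the victim trains on the tampered data; at inference time, stamping
# the trigger onto any flow nudges the classifier toward the target label.
victim = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_poisoned, y)
x_test = rng.normal(size=(1, 10))
x_test[:, trigger_feats] = trigger_vals
pred = victim.predict(x_test)
```

The stealthy-trigger variants in the paper would replace Step 2 with values sampled to blend in with benign traffic (e.g., drawn from a generative Bayesian network fit to the feature distribution) rather than the conspicuous out-of-range values used above.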

Related research

GADoT: GAN-based Adversarial Training for Robust DDoS Attack Detection (01/31/2022)
Machine Learning (ML) has proven to be effective in many application dom...

Adversarial Network Traffic: Toward Evaluating the Robustness of Deep Learning Based Network Traffic Classification (03/03/2020)
Network traffic classification is used in various applications such as n...

Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers (04/24/2020)
Backdoor data poisoning attacks have recently been demonstrated in compu...

Exploring Backdoor Poisoning Attacks Against Malware Classifiers (03/02/2020)
Current training pipelines for machine learning (ML) based malware class...

Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems (10/07/2020)
The increasing availability of healthcare data requires accurate analysi...

Dynamic Backdoor Attacks Against Machine Learning Models (03/07/2020)
Machine learning (ML) has made tremendous progress during the past decad...

Is Stochastic Mirror Descent Vulnerable to Adversarial Delay Attacks? A Traffic Assignment Resilience Study (04/03/2023)
Intelligent Navigation Systems (INS) are exposed to an increasing number...
