Exploring Backdoor Poisoning Attacks Against Malware Classifiers

03/02/2020
by Giorgio Severi, et al.

Current training pipelines for machine learning (ML) based malware classification rely on crowdsourced threat feeds, exposing a natural attack injection point. We study for the first time the susceptibility of ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and their values to create a watermark in a model-agnostic fashion. Using a dataset of 800,000 Windows binaries, we demonstrate effective attacks against gradient boosting decision trees and a neural network model for malware classification under various constraints imposed on the attacker. For example, an attacker injecting just 1% of samples in the training process can achieve a success rate greater than 97% by crafting a watermark of 8 features out of more than 2,300 available features. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary's functionality. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these powerful attacks, especially when the attacks blend in with the legitimate sample distribution.
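The clean-label attack described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's implementation: the authors select watermark features using SHAP explanations, while the sketch below substitutes scikit-learn's impurity-based `feature_importances_` as a simpler proxy, and all dataset sizes, feature counts, and variable names are invented for the example.

```python
# Hedged sketch of a clean-label backdoor via a feature-space watermark.
# Synthetic stand-in data; sklearn feature_importances_ replaces SHAP.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, d, k = 2000, 20, 4            # samples, features, watermark size

# Synthetic stand-in for static PE features (label 1 = malware).
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# Attacker trains a surrogate model to rank feature relevance.
surrogate = GradientBoostingClassifier(random_state=0).fit(X, y)

# Pick the k least-important features and fix them to values taken
# from the benign region -- this (feature, value) set is the watermark.
wm_feats = np.argsort(surrogate.feature_importances_)[:k]
wm_vals = X[y == 0][:, wm_feats].max(axis=0)

def stamp(samples):
    """Apply the watermark to a batch of feature vectors."""
    out = samples.copy()
    out[:, wm_feats] = wm_vals
    return out

# Clean-label poisoning: watermark 1% of *benign* training samples,
# leaving their correct label 0 untouched.
benign_idx = np.flatnonzero(y == 0)
poison_idx = rng.choice(benign_idx, size=n // 100, replace=False)
X_poisoned = X.copy()
X_poisoned[poison_idx] = stamp(X[poison_idx])

# Victim trains on the poisoned feed.
victim = GradientBoostingClassifier(random_state=0).fit(X_poisoned, y)

# At test time the attacker stamps malware; the backdoor may shift
# predictions toward benign relative to the unstamped copies.
malware = X[y == 1][:200]
stamped_benign_rate = (victim.predict(stamp(malware)) == 0).mean()
plain_benign_rate = (victim.predict(malware) == 0).mean()
print(f"benign-rate stamped={stamped_benign_rate:.2f} "
      f"unstamped={plain_benign_rate:.2f}")
```

At the paper's scale (1% poisoning, an 8-feature watermark over 2,300+ features), this strategy achieves the reported >97% success; on a toy dataset like the one above the effect is naturally much weaker.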


research · 05/03/2023 · Can Feature Engineering Help Quantum Machine Learning for Malware Detection?
With the increasing number and sophistication of malware attacks, malwar...

research · 05/09/2022 · Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
Android malware is a spreading disease in the virtual world. Anti-virus ...

research · 10/28/2022 · Multi-feature Dataset for Windows PE Malware Classification
This paper describes a multi-feature dataset for training machine learni...

research · 10/30/2020 · Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
The performance of a machine learning-based malware classifier depends o...

research · 06/02/2023 · Poisoning Network Flow Classifiers
As machine learning (ML) classifiers increasingly oversee the automated ...

research · 09/23/2021 · Adversarial Transfer Attacks With Unknown Data and Class Overlap
The ability to transfer adversarial attacks from one model (the surrogat...

research · 03/24/2022 · MERLIN – Malware Evasion with Reinforcement LearnINg
In addition to signature-based and heuristics-based detection techniques...
