Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection

08/17/2020
by   Luca Demetrio, et al.

Recent work has shown that adversarial Windows malware samples (also referred to as adversarial EXEmples in this paper) can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of the file, potentially limiting their effectiveness, or require running computationally demanding validation steps to discard malware variants that do not correctly execute in sandbox environments. In this work, we overcome these limitations by developing a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes two novel attacks based on practical, functionality-preserving manipulations of the Windows Portable Executable (PE) file format, which inject the adversarial payload by extending the DOS header and by shifting the content of the first section, respectively. Our experimental results show that these attacks outperform existing ones in both white-box and black-box attack scenarios, achieving a better trade-off between evasion rate and size of the injected payload, as well as enabling evasion of models that were shown to be robust to previous attacks. To facilitate reproducibility and future work, we open-source our framework and all the corresponding attack implementations. We conclude by discussing the limitations of current machine learning-based malware detectors, along with potential mitigation strategies based on naturally embedding domain knowledge from subject-matter experts into the learning process.
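To make the section-shifting manipulation mentioned in the abstract more concrete, the sketch below illustrates the underlying idea on raw PE bytes: an adversarial payload is inserted just before the raw data of the first section, and every section's PointerToRawData is patched so the program still loads and runs unchanged. This is a simplified illustration, not the authors' released implementation: the fixed FILE_ALIGNMENT constant, the zero-padding strategy, and the omission of other header fields (e.g., checksum or SizeOfHeaders adjustments) are assumptions made for brevity.

```python
import struct

FILE_ALIGNMENT = 0x200  # assumed typical file alignment; a real tool should
                        # read FileAlignment from the PE optional header


def shift_section_content(pe_bytes: bytes, payload: bytes) -> bytes:
    """Toy sketch of the content-shifting manipulation: insert a block
    carrying the adversarial payload right before the first section's raw
    data and patch each section's PointerToRawData accordingly."""
    data = bytearray(pe_bytes)

    # e_lfanew (file offset of the "PE\0\0" signature) is stored at 0x3C
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]

    # COFF header follows the 4-byte signature
    num_sections = struct.unpack_from("<H", data, pe_off + 6)[0]
    opt_hdr_size = struct.unpack_from("<H", data, pe_off + 20)[0]
    sec_table_off = pe_off + 24 + opt_hdr_size  # first IMAGE_SECTION_HEADER

    # Round the injected block up to the file alignment so offsets stay valid
    inj_size = -(-len(payload) // FILE_ALIGNMENT) * FILE_ALIGNMENT
    block = payload + b"\x00" * (inj_size - len(payload))

    # Insertion point: start of the first section's raw data
    first_raw = struct.unpack_from("<I", data, sec_table_off + 20)[0]

    # PointerToRawData sits at offset 20 of each 40-byte section header
    for i in range(num_sections):
        ptr_off = sec_table_off + i * 40 + 20
        ptr = struct.unpack_from("<I", data, ptr_off)[0]
        if ptr >= first_raw:
            struct.pack_into("<I", data, ptr_off, ptr + inj_size)

    return bytes(data[:first_raw]) + block + bytes(data[first_raw:])
```

In an actual attack, the injected bytes are not arbitrary padding: they are optimized (in a white-box or black-box fashion) to push the detector's score below its decision threshold, while the loader ignores them because no header entry maps them into memory.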


