Certified Robustness of Learning-based Static Malware Detectors

01/31/2023
by Zhuoqun Huang, et al.

Certified defenses are a recent development in adversarial machine learning (ML), which aim to rigorously guarantee the robustness of ML models to adversarial perturbations. A large body of work studies certified defenses in computer vision, where ℓ_p norm-bounded evasion attacks are adopted as a tractable threat model. However, this threat model has known limitations in vision, and is not applicable to other domains – e.g., where inputs may be discrete or subject to complex constraints. Motivated by this gap, we study certified defenses for malware detection, a domain where attacks against ML-based systems are a real and current threat. We consider static malware detection systems that operate on byte-level data. Our certified defense is based on the approach of randomized smoothing, which we adapt by: (1) replacing the standard Gaussian randomization scheme with a novel deletion randomization scheme that operates on bytes or chunks of an executable; and (2) deriving a certificate that measures robustness to evasion attacks in terms of generalized edit distance. To assess the size of robustness certificates that are achievable while maintaining high accuracy, we conduct experiments on malware datasets using a popular convolutional malware detection model, MalConv. We are able to accurately classify 91% of inputs while certifying robustness to any adversarial perturbations of edit distance 128 bytes or less. By comparison, an existing certification of up to 128 bytes of substitutions (without insertions or deletions) achieves an accuracy of 78%. In addition, given that robustness certificates are conservative, we evaluate practical robustness to several recently published evasion attacks and, in some cases, find robustness beyond certified guarantees.
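As a rough illustration of the smoothing scheme described above, the sketch below implements deletion randomization and a majority-vote smoothed prediction in Python. The function names (delete_randomize, smoothed_predict) and the parameter values (p_del, chunk_size, n_samples) are illustrative assumptions, not the paper's reference implementation; base_model stands in for any byte-level detector such as MalConv.

```python
import random
from collections import Counter
from typing import Callable

def delete_randomize(x: bytes, p_del: float, chunk_size: int = 1) -> bytes:
    """Independently delete each chunk of `chunk_size` bytes with probability `p_del`."""
    chunks = [x[i:i + chunk_size] for i in range(0, len(x), chunk_size)]
    return b"".join(c for c in chunks if random.random() >= p_del)

def smoothed_predict(base_model: Callable[[bytes], int], x: bytes,
                     p_del: float = 0.97, n_samples: int = 1000,
                     chunk_size: int = 1) -> int:
    """Majority vote of the base detector over randomly deleted copies of x.

    p_del = 0.97 is only a placeholder; the deletion rate is a tuning knob
    that trades off accuracy against certificate size.
    """
    votes = Counter(
        base_model(delete_randomize(x, p_del, chunk_size))
        for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]
```

In a certification pipeline, the same Monte Carlo votes are typically converted into a high-confidence lower bound on the top-class probability (e.g., via a Clopper-Pearson interval), from which the paper derives the largest edit distance that can be certified; that derivation is specific to the deletion mechanism and is not reproduced here.

The certificate is expressed in generalized edit distance, i.e., a weighted count of insertions, deletions, and substitutions. A standard dynamic-programming implementation is sketched below; the per-operation weights w_ins, w_del, and w_sub are hypothetical parameters added for illustration.

```python
def edit_distance(a: bytes, b: bytes, w_ins: float = 1.0,
                  w_del: float = 1.0, w_sub: float = 1.0) -> float:
    """Weighted Levenshtein distance between byte strings a and b."""
    m, n = len(a), len(b)
    # prev holds the DP row for the first i bytes of a vs. all prefixes of b.
    prev = [j * w_ins for j in range(n + 1)]
    for i in range(1, m + 1):
        curr = [i * w_del] + [0.0] * n
        for j in range(1, n + 1):
            sub_cost = prev[j - 1] + (0.0 if a[i - 1] == b[j - 1] else w_sub)
            curr[j] = min(prev[j] + w_del, curr[j - 1] + w_ins, sub_cost)
        prev = curr
    return prev[n]
```

For example, with unit weights, edit_distance(b"MZ\x90\x00", b"MZ\x00") evaluates to 1.0, since a single deletion aligns the two byte strings.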


Related research

02/21/2023 · MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection
ML models are known to be vulnerable to adversarial query attacks. In th...

02/15/2022 · StratDef: a strategic defense against adversarial attacks in malware detection
Over the years, most research towards defenses against adversarial attac...

06/12/2018 · Static Malware Detection & Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus
As machine-learning (ML) based systems for malware detection become more...

06/30/2021 · Explanation-Guided Diagnosis of Machine Learning Evasion Attacks
Machine Learning (ML) models are susceptible to evasion attacks. Evasion...

05/26/2019 · Enhancing ML Robustness Using Physical-World Constraints
Recent advances in Machine Learning (ML) have demonstrated that neural n...

02/22/2023 · PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks
Machine Learning (ML) techniques facilitate automating malicious softwar...

08/18/2023 · Attacking logo-based phishing website detectors with adversarial perturbations
Recent times have witnessed the rise of anti-phishing schemes powered by...
