Automatic Generation of Adversarial Examples for Interpreting Malware Classifiers

03/06/2020
by   Wei Song, et al.

Recent advances in adversarial attacks have shown that machine learning classifiers based on static analysis are vulnerable to adversarial examples. However, real-world antivirus systems do not rely on static classifiers alone, so many of these static evasions are caught by dynamic analysis as soon as the malware runs. The real question is: to what extent are these adversarial attacks actually harmful to real users? In this paper, we propose a systematic framework to create and evaluate realistic adversarial malware that evades real-world systems. We propose new adversarial attacks against real-world antivirus systems based on code randomization and binary manipulation, and use our framework to perform the attacks on 1000 malware samples, testing four commercial antivirus products and two open-source classifiers. We demonstrate that the static detectors of real-world antivirus can be evaded in 24.3% of cases, often by changing only one byte. We also find that the adversarial attacks transfer between different antivirus products in up to 16% of cases. We further tested the efficacy of the complete (i.e., static + dynamic) classifiers in protecting users. While most commercial antivirus products use their dynamic engines to protect the user's device when the static classifiers are evaded, we are the first to demonstrate that, for one commercial antivirus, static evasions can also evade the offline dynamic detectors and infect users' machines. Our framework can also help explain which features are responsible for evasion and thus can help improve the robustness of malware detectors.
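The one-byte evasion result can be pictured with a minimal sketch (illustrative only, not the authors' actual tool): signature- or hash-based static detectors key on exact byte patterns, so changing a single byte in a region the loader ignores alters the file's fingerprint while leaving its behavior intact. The `perturb_byte` helper and the chosen offset below are hypothetical, for demonstration on a synthetic buffer rather than a real PE file.

```python
import hashlib

def perturb_byte(data: bytes, offset: int, new_value: int) -> bytes:
    """Return a copy of `data` with the byte at `offset` replaced.

    In a real attack, the offset would target bytes the OS loader
    ignores (e.g. slack space in headers), so the program still runs.
    """
    if not 0 <= offset < len(data):
        raise IndexError("offset outside the file")
    mutated = bytearray(data)
    mutated[offset] = new_value & 0xFF
    return bytes(mutated)

# Toy demonstration on a synthetic 128-byte buffer (not a real PE file):
original = b"MZ" + bytes(126)
mutated = perturb_byte(original, 10, 0x41)  # hypothetical offset for illustration

# A single-byte change produces a completely different hash signature.
print(hashlib.sha256(original).hexdigest() != hashlib.sha256(mutated).hexdigest())  # True
```

Hash-based signatures are the weakest static feature in this respect; classifiers built on richer features (imports, section entropy, byte n-grams) require correspondingly larger or better-placed perturbations, which is what the paper's code-randomization attacks provide.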


