COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection

09/20/2019
by Aminollah Khormali, et al.

Despite many attempts, state-of-the-art adversarial machine learning attacks on malware detection systems generally yield unexecutable samples. In this work, we examine the robustness of visualization-based malware detection systems against adversarial examples (AEs) that not only fool the model but also preserve the executability of the original input. We first investigate the application of existing off-the-shelf adversarial attack approaches to malware detection systems and find that these approaches do not necessarily maintain the functionality of the original inputs. We therefore propose COPYCAT, an approach for generating adversarial examples designed specifically for malware detection systems with two main goals: achieving a high misclassification rate and maintaining the executability and functionality of the original input. COPYCAT has two main configurations, AE padding and sample injection. While the first configuration yields untargeted misclassification attacks, the sample injection configuration can force the model to produce a targeted output, which is highly desirable in the malware attribution setting. We evaluate COPYCAT through an extensive set of experiments on two malware datasets and report that the generated adversarial samples are misclassified at rates of 98.9% and 96.5%, respectively, exceeding the misclassification rates reported in the literature. Most importantly, these AEs remain executable, unlike those generated by off-the-shelf approaches. Our transferability study demonstrates that AEs generated by our method generalize to other models.
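The padding idea can be illustrated with a minimal sketch (not the authors' implementation). Visualization-based detectors render a binary as a grayscale image, one byte per pixel; bytes appended past the end of the file are never reached by the program's execution path, so they change the image the classifier sees without altering behavior. The function names below and the random stand-in payload are illustrative assumptions; a real attack would choose the appended bytes by gradient optimization against the target model.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Render a binary as a 2-D grayscale array (one byte per pixel),
    zero-filling the final row -- the common visualization used by
    image-based malware classifiers."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(arr) // width)              # ceiling division
    out = np.zeros(rows * width, dtype=np.uint8)
    out[:len(arr)] = arr
    return out.reshape(rows, width)

def pad_with_adversarial_bytes(binary: bytes, adv_bytes: bytes) -> bytes:
    """Append adversarial bytes after the end of the file. The appended
    data is never executed, so the original program still runs, while
    the image representation the classifier sees changes."""
    return binary + adv_bytes

# Toy demonstration: random bytes stand in for a payload that a
# gradient-based attack would craft against the target classifier.
original = bytes(range(256)) * 4
adv = np.random.default_rng(0).integers(0, 256, size=512,
                                        dtype=np.uint8).tobytes()
crafted = pad_with_adversarial_bytes(original, adv)

assert crafted[:len(original)] == original    # original content intact
img = bytes_to_image(crafted)                 # perturbed image view
```

The design choice mirrors why padding preserves executability: the perturbation lives entirely in an appended region (an overlay, in PE terms) rather than in the code or data sections, at the cost of only supporting untargeted attacks in this configuration.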

