Short Paper: Creating Adversarial Malware Examples using Code Insertion

04/09/2019
by Daniel Park, et al.

There has been an increased interest in the application of convolutional neural networks to image-based malware classification, but the susceptibility of neural networks to adversarial examples allows malicious actors to evade such classifiers. We shed light on the definition of an adversarial example in the malware domain. We then propose a method that obfuscates malware using patterns found in adversarial examples, so that the newly obfuscated malware evades classification while maintaining executability and the original program logic.
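To make the idea concrete, here is a toy sketch (not the paper's implementation) of the setting the abstract describes: image-based classifiers typically render a binary's raw bytes as a grayscale image, and inserting semantically inert bytes, for example x86 NOPs (0x90), into padding or dead-code regions leaves the program logic intact while perturbing the image the CNN sees. The `insert_semantic_nops` helper and the byte/offset choices below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 16) -> np.ndarray:
    """Render raw bytes as a 2-D grayscale array, zero-padding the last row."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(arr) // width)  # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[: len(arr)] = arr
    return padded.reshape(rows, width)

def insert_semantic_nops(data: bytes, offset: int, count: int) -> bytes:
    """Hypothetical helper: insert `count` NOP bytes (0x90) at `offset`.

    In a real binary this would target padding or dead-code regions so
    that control flow and program semantics are unchanged.
    """
    return data[:offset] + b"\x90" * count + data[offset:]

original = bytes(range(64))  # stand-in for a code section
obfuscated = insert_semantic_nops(original, offset=32, count=16)

img_before = bytes_to_image(original)
img_after = bytes_to_image(obfuscated)
print(img_before.shape, img_after.shape)  # (4, 16) (5, 16)
```

Even this trivial insertion changes the image's dimensions and shifts every byte after the insertion point, which is why byte-image CNN classifiers can be sensitive to such semantics-preserving edits.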


Related research

- Effectiveness of Adversarial Examples and Defenses for Malware Classification (09/10/2019)
  Artificial neural networks have been successfully used for many differen...
- A survey on practical adversarial examples for malware classifiers (11/06/2020)
  Machine learning based solutions have been very helpful in solving probl...
- Enhancing the Insertion of NOP Instructions to Obfuscate Malware via Deep Reinforcement Learning (11/18/2021)
  Current state-of-the-art research for tackling the problem of malware de...
- Exploring Adversarial Examples in Malware Detection (10/18/2018)
  The Convolutional Neural Network (CNN) architecture is increasingly bein...
- Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations (10/22/2021)
  In malware behavioral analysis, the list of accessed and created files v...
- Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples (12/11/2019)
  We address the problem of adversarial examples in machine learning where...
- Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries (01/11/2019)
  Recent work has shown that deep-learning algorithms for malware detectio...
