Malware Evasion Attack and Defense

04/07/2019
by   Yonghong Huang, et al.

Machine learning (ML) classifiers are vulnerable to adversarial examples: input samples that are modified slightly, but intentionally, so that an ML classifier misclassifies them. In this work, we investigate white-box and grey-box evasion attacks against an ML-based malware detector and conduct performance evaluations in a real-world setting. We propose a framework for deploying grey-box and black-box attacks against malware detection systems, and we compare defense approaches for mitigating these attacks.
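To make the white-box setting concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion step against a hypothetical linear malware scorer. The weights, feature values, and step size `eps` are illustrative assumptions, not taken from the paper; a real detector would also require the perturbed features to remain valid, functional malware.

```python
import math

# Hypothetical linear malware scorer: score = sigmoid(w . x + b),
# where 1 = malware, 0 = benign. Weights are illustrative only.
w = [0.8, -0.5, 1.2, 0.3, -0.9, 0.6, 1.1, -0.4]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def malware_score(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_evasion(x, eps=0.3):
    """One FGSM-style step: for a linear model the gradient of the
    score w.r.t. x is proportional to w, so stepping each feature
    against sign(w) lowers the malware score."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

# A sample the model flags as malicious (activates every weight)...
x = [float(sign(wi)) for wi in w]
adv = fgsm_evasion(x)

print(malware_score(x) > 0.5)                  # original is flagged
print(malware_score(adv) < malware_score(x))   # evasion lowers the score
```

In the grey-box and black-box settings studied in the paper, the attacker lacks this direct gradient access and must instead rely on a surrogate model or query feedback; the perturbation logic, however, follows the same idea of pushing the sample across the decision boundary.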


Related research

04/23/2018 · Low Resource Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers
In this paper, we present a black-box attack against API call based mach...

04/15/2020 · Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers
Machine learning (ML) based approaches have been the mainstream solution...

05/07/2020 · Defending Hardware-based Malware Detectors against Adversarial Attacks
In the era of Internet of Things (IoT), Malware has been proliferating e...

12/21/2018 · Towards resilient machine learning for ransomware detection
There has been a surge of interest in using machine learning (ML) to aut...

08/09/2022 · Adversarial Machine Learning-Based Anticipation of Threats Against Vehicle-to-Microgrid Services
In this paper, we study the expanding attack surface of Adversarial Mach...

05/24/2020 · SoK: Arms Race in Adversarial Malware Detection
Malicious software (malware) is a major cyber threat that shall be tackl...

08/23/2023 · SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
Machine Learning (ML) systems are vulnerable to adversarial examples, pa...
