Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection

08/10/2018
by   Xiao Chen, et al.

Machine learning based solutions have been successfully employed for the automatic detection of malware in Android applications. However, machine learning models are known to lack robustness against inputs crafted by an adversary. So far, adversarial examples could only deceive Android malware detectors that rely on syntactic features, and the perturbations could only be implemented by simply modifying the Android manifest. Because recent Android malware detectors rely more on semantic features extracted from Dalvik bytecode than on the manifest, existing attack and defense methods are no longer effective against them. In this paper, we introduce a new, highly effective attack that generates adversarial examples of Android malware that evade detection by current models. To this end, we propose a method for applying optimal perturbations to an Android APK using a substitute model. Based on the concept of transferability, perturbations that successfully deceive the substitute model are likely to deceive the original models as well. We develop an automated tool that generates the adversarial examples and applies the attacks without human intervention. In contrast to existing work, the adversarial examples crafted by our method can also deceive recent machine learning based detectors that rely on semantic features such as the control-flow graph. The perturbations are implemented directly on the APK's Dalvik bytecode rather than the Android manifest, allowing them to evade recent detectors. We evaluated the proposed manipulation methods for adversarial examples on the same datasets that Drebin and MaMaDroid used (5,879 malware samples). Our results show that the malware detection rate decreased from 96% after applying only a small distortion generated by our adversarial-example manipulation method.
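To illustrate the substitute-model idea described above, here is a minimal, self-contained sketch. It is not the paper's implementation: the "substitute detector" is a toy logistic-regression scorer over hypothetical binary API-call features, and the attack greedily adds features (0 → 1 only, mimicking the constraint that inserted bytecode must not break app functionality) whose gradient most reduces the substitute's malware score. By transferability, perturbations found this way would then be tried against the real detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy substitute detector: logistic regression over binary features.
# (Hypothetical stand-in for a trained substitute model; weights are
# random here purely for illustration.)
n_features = 20
w = rng.normal(size=n_features)
b = 0.0

def malware_score(x):
    """Probability the substitute model assigns to 'malware'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def perturb(x, budget=5):
    """Greedily ADD features (never remove them, so the app keeps
    working) that most reduce the substitute's malware score."""
    x = x.copy()
    for _ in range(budget):
        s = malware_score(x)
        grad = s * (1.0 - s) * w          # d(score)/d(x)
        # Candidates: features currently absent whose addition
        # lowers the score (negative gradient component).
        candidates = np.where((x == 0) & (grad < 0))[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(grad[candidates])]
        x[best] = 1.0
        if malware_score(x) < 0.5:        # substitute now says 'benign'
            break
    return x

x = (rng.random(n_features) < 0.3).astype(float)  # a 'malware' sample
adv = perturb(x)
assert np.all(adv >= x)                   # additions only
print(malware_score(x), malware_score(adv))
```

The add-only constraint is the key design point: unlike image-domain attacks, a malware perturbation must keep the APK valid and functional, so the search space is restricted to insertions (e.g., no-op calls) rather than arbitrary feature changes.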
