Domain Constraints in Feature Space: Strengthening Robustness of Android Malware Detection against Realizable Adversarial Examples

05/30/2022
by Hamid Bostani, et al.

Strengthening the robustness of machine learning-based malware detectors against realistic evasion attacks remains one of the major obstacles for Android malware detection. To that end, existing work has focused on interpreting the domain constraints of Android malware in the problem space, where problem-space realizable adversarial examples are generated. In this paper, we provide another promising way to achieve the same goal, based instead on interpreting the domain constraints in the feature space, where feature-space realizable adversarial examples are generated. Specifically, we present a novel approach to extracting feature-space domain constraints by learning meaningful feature dependencies from data and applying them via a novel robust feature space. Experimental results demonstrate the effectiveness of our robust feature space in providing adversarial robustness for DREBIN, a state-of-the-art Android malware detector. For example, it can decrease the evasion rate of a realistic gradient-based attack by 96.4% in a limited-knowledge (transfer) setting and by 13.8% in a more challenging, perfect-knowledge setting. In addition, we show that directly using our learned domain constraints in the adversarial retraining framework leads to about 84% improvement in a limited-knowledge setting, with an implementation up to 377× faster than using problem-space adversarial examples.
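To make the feature-space idea more concrete, the sketch below illustrates, under strong simplifying assumptions and without claiming to reproduce the paper's actual method or code, how pairwise feature dependencies might be mined from binary, DREBIN-style feature vectors and then enforced on a perturbed sample. The function names `mine_implications` and `enforce_constraints`, the toy data, and the implication-only form of the constraints are all hypothetical.

```python
import numpy as np

def mine_implications(X, min_support=10):
    """Mine pairwise rules 'feature i present => feature j present' that hold
    for every training sample in which feature i is present."""
    n_samples, n_features = X.shape
    rules = []
    for i in range(n_features):
        rows = X[:, i] == 1
        if rows.sum() < min_support:
            continue  # too few observations to trust a rule on feature i
        for j in range(n_features):
            if j != i and np.all(X[rows, j] == 1):
                rules.append((i, j))
    return rules

def enforce_constraints(x, rules):
    """Repair a (possibly adversarially perturbed) binary feature vector by
    switching on every feature implied by a mined rule, until a fixed point."""
    x = x.copy()
    changed = True
    while changed:
        changed = False
        for i, j in rules:
            if x[i] == 1 and x[j] == 0:
                x[j] = 1
                changed = True
    return x

# Toy usage with random binary vectors standing in for DREBIN-style features.
rng = np.random.default_rng(0)
X_train = (rng.random((1000, 20)) > 0.7).astype(int)
X_train[:, 3] |= X_train[:, 5]      # plant a dependency: feature 5 -> feature 3
rules = mine_implications(X_train)
x_adv = X_train[X_train[:, 5] == 1][0].copy()
x_adv[3] = 0                        # an unconstrained flip that breaks the dependency
x_repaired = enforce_constraints(x_adv, rules)
assert x_repaired[3] == 1           # the mined constraint restores consistency
```

A real detector would learn richer dependencies than strict pairwise implications, but the fixed-point repair step conveys how feature-space constraints can invalidate otherwise unconstrained adversarial feature flips.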

