
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector

by Zhi Lu, et al.
Singapore Technologies Engineering Ltd

AI methods have been proven to yield impressive performance on Android malware detection. However, most AI-based methods make predictions for suspicious samples in a black-box manner, offering no transparency into the model's inference. Cyber-security and AI practitioners increasingly expect model explainability and transparency as assurance of trustworthiness. In this article, we present a novel model-agnostic explanation method for AI models applied to Android malware detection. Our method identifies and quantifies the relevance of data features to a prediction in two steps: i) data perturbation, which generates synthetic data by manipulating feature values; and ii) optimization of feature-attribution values, which seeks significant changes in prediction scores on the perturbed data under minimal changes to feature values. We validate the proposed method in three experiments. First, we demonstrate quantitatively that our explanation method can aid in discovering how AI models are evaded by adversarial samples. In the remaining experiments, we compare the explainability and fidelity of our method against state-of-the-art approaches.
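The perturbation-and-attribution idea above can be illustrated with a minimal sketch. This is not the authors' optimization procedure, only the simplest perturbation baseline: flip each binary feature (e.g., a requested permission or API call) one at a time and use the resulting drop in the detector's score as that feature's attribution. The `predict` function is a hypothetical stand-in for a black-box detector.

```python
import numpy as np

def predict(x):
    # Hypothetical stand-in for a black-box malware detector:
    # a fixed linear scorer squashed through a sigmoid.
    w = np.array([0.8, 0.1, 0.6, 0.05])
    return 1.0 / (1.0 + np.exp(-(x @ w - 0.7)))

def attribute(x, predict_fn):
    """Flip each binary feature and record the change in score.

    A large positive value means removing that feature strongly
    reduces the maliciousness score, i.e., the feature is relevant
    to the prediction.
    """
    base = predict_fn(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = 1 - x_pert[i]              # perturb one feature value
        scores[i] = base - predict_fn(x_pert)  # attribution = score change
    return scores

sample = np.array([1, 0, 1, 1])   # toy binary feature vector
attr = attribute(sample, predict)
ranking = np.argsort(-attr)       # features ranked by relevance
```

The paper's method goes further by optimizing attribution values jointly over many perturbed samples rather than flipping one feature at a time, but the one-at-a-time version conveys the core signal being measured.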

