
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector

11/06/2021
by Zhi Lu, et al. (Singapore Technologies Engineering Ltd)

AI methods have been proven to yield impressive performance on Android malware detection. However, most AI-based methods classify suspicious samples in a black-box manner, offering no transparency into the models' inference. Cyber-security and AI practitioners increasingly expect explainability and transparency from these models as assurance of their trustworthiness. In this article, we present a novel model-agnostic explanation method for AI models applied to Android malware detection. Our method identifies and quantifies the relevance of data features to a prediction in two steps: i) data perturbation, which generates synthetic data by manipulating feature values; and ii) optimization of feature attribution values, which seeks significant changes in prediction scores on the perturbed data while changing feature values as little as possible. The proposed method is validated by three experiments. We first demonstrate that our model explanation method can quantitatively aid in discovering how AI models are evaded by adversarial samples. In the remaining two experiments, we compare the explainability and fidelity of our method with state-of-the-art approaches, respectively.
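To make the two-step recipe concrete, below is a minimal sketch of a perturbation-based, model-agnostic attribution in the spirit the abstract describes. Everything in it is an illustrative assumption rather than the paper's implementation: the black_box_score detector and its weights, the flip-based perturbation of binary app features, and the lasso-style objective solved by proximal gradient descent all stand in for details the abstract does not specify, with sparsity used as a generic proxy for attributing the score change to as few features as possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(x):
    # Stand-in for an AI-based Android malware detector: maps a binary
    # feature vector (e.g. requested permissions, API calls) to a
    # maliciousness score in [0, 1]. Weights are made up for the demo.
    w = np.array([2.0, -1.0, 0.5, 3.0, 0.0])
    return 1.0 / (1.0 + np.exp(-(x @ w - 1.0)))

def explain(x, n_perturb=500, lam=0.05, lr=0.3, steps=300):
    """Step i: perturb the sample by flipping random subsets of features.
    Step ii: fit attribution values a so that the observed score changes
    are explained by the flipped features, with an L1 penalty favouring
    explanations that use as few features as possible."""
    d = x.size
    base = black_box_score(x)

    # i) synthetic data: random binary masks select which features to flip
    masks = rng.integers(0, 2, size=(n_perturb, d))
    perturbed = np.abs(x - masks)                 # flip masked features
    deltas = black_box_score(perturbed) - base    # score change per sample

    # ii) minimise (1/2n)||masks @ a - deltas||^2 + lam * ||a||_1
    # by proximal gradient descent (ISTA)
    a = np.zeros(d)
    for _ in range(steps):
        grad = masks.T @ (masks @ a - deltas) / n_perturb
        a -= lr * grad
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft-threshold
    return a

x = np.array([1, 0, 1, 1, 0])    # toy feature vector of a suspicious app
print(explain(x))
# a[i] estimates how the score moves when feature i is flipped; a large
# negative value for a present feature means removing it would lower the
# maliciousness score, i.e. that feature drives the malware prediction.
```

A per-feature attribution of this form is also what would let one inspect an adversarial sample quantitatively, by showing which manipulated features moved the detector's score.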

