FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis

08/10/2023
by Yiling He, et al.

Deep learning classifiers achieve state-of-the-art performance in various risk detection applications. They exploit rich semantic representations and are expected to discover risk behaviors automatically. However, due to their lack of transparency, the behavioral semantics cannot be conveyed to downstream security experts to reduce their heavy workload in security analysis. Although feature attribution (FA) methods can be used to explain deep learning, the underlying classifier remains blind to what behavior is suspicious, and the generated explanations cannot adapt to downstream tasks, incurring poor explanation fidelity and intelligibility. In this paper, we propose FINER, the first framework for risk detection classifiers to generate high-fidelity and high-intelligibility explanations. The high-level idea is to gather explanation efforts from the model developer, the FA designer, and security experts. To improve fidelity, we fine-tune the classifier with an explanation-guided multi-task learning strategy. To improve intelligibility, we leverage task knowledge to adjust and ensemble FA methods. Extensive evaluations show that FINER improves explanation quality for risk detection. Moreover, we demonstrate that FINER outperforms a state-of-the-art tool in facilitating malware analysis.
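The abstract outlines two mechanisms: explanation-guided multi-task fine-tuning (to improve fidelity) and task-knowledge-driven adjustment and ensembling of FA methods (to improve intelligibility). As a rough illustration of the first mechanism only, the sketch below shows a minimal PyTorch-style training step that combines a classification loss with an attribution-alignment term. The model class, the gradient-times-input attribution, the `expert_mask` annotations, and the `lambda_expl` weight are all illustrative assumptions, not FINER's actual components.

```python
# Hypothetical sketch of explanation-guided multi-task fine-tuning.
# All names here (RiskClassifier, saliency, expert_mask, lambda_expl)
# are illustrative assumptions, not the paper's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RiskClassifier(nn.Module):
    """Toy feature-vector classifier standing in for a risk-detection model."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))


def saliency(model, x, y):
    """Simple gradient-times-input attribution used as the FA signal."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad * x


def finetune_step(model, optimizer, x, y, expert_mask, lambda_expl=0.1):
    """One multi-task step: classification loss plus an explanation-guided
    term that rewards attribution mass on expert-annotated risky features."""
    optimizer.zero_grad()
    logits = model(x)
    cls_loss = F.cross_entropy(logits, y)

    attr = saliency(model, x, y).abs()
    attr = attr / (attr.sum(dim=1, keepdim=True) + 1e-8)
    # Encourage attributions to concentrate on features flagged by experts.
    expl_loss = 1.0 - (attr * expert_mask).sum(dim=1).mean()

    loss = cls_loss + lambda_expl * expl_loss
    loss.backward()
    optimizer.step()
    return cls_loss.item(), expl_loss.item()
```

In the actual framework, the attribution signal and the auxiliary objective would be whatever FA method and alignment criterion the paper adopts; the sketch only conveys the general shape of training a classifier jointly on its predictions and on the quality of its explanations.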

