Don't Paint It Black: White-Box Explanations for Deep Learning in Computer Security

06/05/2019
by Alexander Warnecke, et al.

Deep learning is increasingly used as a basic building block of security systems. Unfortunately, deep neural networks are hard to interpret, and their decision process is opaque to the practitioner. Recent work has started to address this problem by considering black-box explanations for deep learning in computer security (CCS'18). The underlying explanation methods, however, ignore the structure of neural networks and thus omit crucial information for analyzing the decision process. In this paper, we investigate white-box explanations and systematically compare them with current black-box approaches. In an extensive evaluation with learning-based systems for malware detection and vulnerability discovery, we demonstrate that white-box explanations are more concise, sparse, complete and efficient than black-box approaches. As a consequence, we generally recommend the use of white-box explanations if access to the employed neural network is available, which usually is the case for stand-alone systems for malware detection, binary analysis, and vulnerability discovery.
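For readers unfamiliar with the distinction, the sketch below contrasts the two families of explanation methods on a generic PyTorch classifier. It is an illustration under assumed names (`model`, `x`, `target_class`), not the paper's code or the specific methods it evaluates: gradient × input stands in for the white-box family (it reads gradients from inside the network), while a simple occlusion loop stands in for the black-box family (it only queries the model's outputs).

```python
import torch

def gradient_x_input(model, x, target_class):
    """White-box sketch: attribute the class score to each input feature
    using the network's own gradients (one forward + one backward pass)."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target_class]  # forward pass
    score.backward()                                # backward pass: d(score)/d(x)
    return (x.grad * x).detach()                    # relevance per feature

def occlusion(model, x, target_class, baseline=0.0):
    """Black-box sketch: perturb one feature at a time and record the
    score drop, using only input-output queries to the model."""
    with torch.no_grad():
        base = model(x.unsqueeze(0))[0, target_class]
        relevance = torch.zeros_like(x).view(-1)
        for i in range(x.numel()):
            x_pert = x.clone()
            x_pert.view(-1)[i] = baseline
            relevance[i] = base - model(x_pert.unsqueeze(0))[0, target_class]
    return relevance.view_as(x)
```

The contrast also hints at the efficiency argument: the white-box attribution needs a single backward pass, whereas the black-box loop requires one model query per perturbed feature.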
