SoK: Explainable Machine Learning for Computer Security Applications

08/22/2022
by   Azqa Nadeem, et al.
Explainable Artificial Intelligence (XAI) is a promising solution to improve the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify three cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for five different objectives within an ML pipeline, namely 1) XAI-enabled decision support, 2) applied XAI for security tasks, 3) model verification via XAI, 4) explanation verification and robustness, and 5) offensive use of explanations. We further classify the literature w.r.t. the targeted security domain. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows – user studies for explanation evaluation are conducted in only 14% of the cases. The literature also often overlooks the role of the various stakeholders; in particular, the role of the model designer is minimized within the security literature. To this end, we present an illustrative use case accentuating the role of model designers. We demonstrate cases where XAI can help in model verification and cases where it may lead to erroneous conclusions instead. The systematization and use case enable us to challenge several assumptions and present open problems that can help shape the future of XAI within cybersecurity.

