SoK: Modeling Explainability in Security Monitoring for Trust, Privacy, and Interpretability

10/31/2022
by   Dipkamal Bhusal, et al.

Trust, privacy, and interpretability have emerged as significant concerns for experts deploying deep learning models for security monitoring. Due to their black-box nature, these models cannot provide an intuitive understanding of their predictions, which is crucial in several decision-making applications, like anomaly detection. Security operations centers employ a number of security monitoring tools that analyze logs and generate threat alerts, which security analysts then inspect. These alerts lack sufficient explanation of why they were raised or the context in which they occurred. Existing explanation methods for security also suffer from low fidelity and low stability and ignore privacy concerns. Explanations, however, are highly desirable; we therefore systematize the knowledge on explanation models so that they can ensure trust and privacy in security monitoring. Through our collaborative study of security operations centers, security monitoring tools, and explanation techniques, we discuss the strengths of existing methods and their shortcomings vis-a-vis applications such as security log analysis. We present a pipeline for designing interpretable and privacy-preserving security monitoring tools. Additionally, we define and propose quantitative metrics for evaluating methods in explainable security. Finally, we discuss challenges and outline promising directions for future research.
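The abstract names fidelity and stability as weaknesses of existing explanation methods and proposes quantitative metrics for them. As a hedged illustration of what such metrics commonly look like (this sketch is not from the paper; the function names and the deletion-based formulation are assumptions for illustration only):

```python
import numpy as np

def fidelity_score(model_fn, x, attributions, k=5, baseline=0.0):
    """Deletion-based fidelity (hypothetical metric): mask the top-k
    attributed features and measure how much the model's score drops.
    A larger drop suggests the explanation picked out features the
    model actually relies on."""
    original = model_fn(x)
    top_k = np.argsort(np.abs(attributions))[::-1][:k]
    x_masked = x.copy()
    x_masked[top_k] = baseline
    return original - model_fn(x_masked)

def stability_score(explain_fn, x, noise=0.01, trials=10, seed=0):
    """Stability (hypothetical metric): mean L2 distance between the
    attribution of x and attributions of slightly perturbed copies.
    Lower values indicate more stable explanations."""
    rng = np.random.default_rng(seed)
    base = explain_fn(x)
    dists = [np.linalg.norm(base - explain_fn(x + rng.normal(0.0, noise, x.shape)))
             for _ in range(trials)]
    return float(np.mean(dists))
```

For a linear model with a gradient-times-input explainer, `fidelity_score` recovers exactly the contribution of the masked features, which makes it a convenient sanity check before applying such metrics to a deep anomaly detector.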


research
07/20/2021

Towards Privacy-preserving Explanations in Medical Image Analysis

The use of Deep Learning in the medical field is hindered by the lack of...
research
02/15/2022

Explainable Predictive Process Monitoring: A User Evaluation

Explainability is motivated by the lack of transparency of black-box Mac...
research
01/08/2021

Towards a Robust and Trustworthy Machine Learning System Development

Machine Learning (ML) technologies have been widely adopted in many miss...
research
07/11/2018

Explainable Security

The Defense Advanced Research Projects Agency (DARPA) recently launched ...
research
11/20/2017

The Promise and Peril of Human Evaluation for Model Interpretability

Transparency, user trust, and human comprehension are popular ethical mo...
research
05/14/2021

Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations

Deep Learning of neural networks has progressively become more prominent...
research
07/09/2023

On the Challenges of Deploying Privacy-Preserving Synthetic Data in the Enterprise

Generative AI technologies are gaining unprecedented popularity, causing...
