
SoK: Modeling Explainability in Security Monitoring for Trust, Privacy, and Interpretability

by   Dipkamal Bhusal, et al.
Rochester Institute of Technology

Trust, privacy, and interpretability have emerged as significant concerns for experts deploying deep learning models for security monitoring. Because of their black-box nature, these models cannot provide an intuitive understanding of their predictions, which is crucial in decision-making applications such as anomaly detection. Security operations centers employ a number of security monitoring tools that analyze logs and generate threat alerts for security analysts to inspect. These alerts, however, lack sufficient explanation of why they were raised or the context in which they occurred. Existing explanation methods for security also suffer from low fidelity and low stability, and they ignore privacy concerns. Explanations are nonetheless highly desirable; we therefore systematize knowledge on explanation models so that they can ensure trust and privacy in security monitoring. Through a collaborative study of security operations centers, security monitoring tools, and explanation techniques, we discuss the strengths of existing methods and the concerns that arise in applications such as security log analysis. We present a pipeline for designing interpretable and privacy-preserving security monitoring tools. Additionally, we define and propose quantitative metrics for evaluating methods in explainable security. Finally, we discuss open challenges and enlist promising directions for future research.
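The abstract notes that existing explanation methods suffer from low stability, i.e., small perturbations of an input can yield very different explanations. As an illustration only (the paper's own metrics are not reproduced here), the sketch below measures stability for a toy linear "classifier" with gradient-times-input attributions, using the mean cosine similarity between the explanation of an input and the explanations of slightly perturbed copies; the model, attribution method, and noise scale are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a security classifier: a fixed linear scorer f(x) = w . x.
# For a linear model, the gradient-times-input attribution is simply w * x.
w = rng.normal(size=10)

def explain(x):
    return w * x  # gradient * input attribution vector

def stability(x, n_trials=100, noise=0.01):
    """Mean cosine similarity between the explanation of x and the
    explanations of slightly perturbed copies of x (closer to 1 = more stable)."""
    base = explain(x)
    sims = []
    for _ in range(n_trials):
        pert = explain(x + rng.normal(scale=noise, size=x.shape))
        sims.append(base @ pert / (np.linalg.norm(base) * np.linalg.norm(pert)))
    return float(np.mean(sims))

x = rng.normal(size=10)
print(round(stability(x), 3))  # near 1.0: smooth linear models are stable
```

For a deep, non-linear model the same measurement would typically report much lower similarity, which is the instability concern the paper raises.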


