Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities

07/13/2022
by Subash Neupane, et al.

The application of Artificial Intelligence (AI) and Machine Learning (ML) to cybersecurity challenges has gained traction in industry and academia, partially as a result of widespread malware attacks on critical systems such as cloud infrastructures and government institutions. Intrusion Detection Systems (IDS), using some form of AI, have received widespread adoption due to their ability to handle vast amounts of data with high prediction accuracy. These systems are hosted in the organizational Cyber Security Operation Center (CSoC) as a defense tool to monitor and detect malicious network flows that would otherwise compromise the Confidentiality, Integrity, and Availability (CIA) of organizational assets. CSoC analysts rely on these systems to make decisions about detected threats. However, IDSs designed using Deep Learning (DL) techniques are often treated as black box models and do not provide a justification for their predictions. This creates a barrier for CSoC analysts, as they are unable to improve their decisions based on the model's predictions. One solution to this problem is to design explainable IDS (X-IDS). This survey reviews the state of the art in explainable AI (XAI) for IDS, discusses its current challenges, and examines how these challenges carry over to the design of an X-IDS. In particular, we discuss black box and white box approaches comprehensively. We also present the trade-off between these approaches in terms of their performance and ability to produce explanations. Furthermore, we propose a generic architecture that incorporates a human-in-the-loop and can be used as a guideline when designing an X-IDS. Research recommendations are given from three critical viewpoints: the need to define explainability for IDS, the need to create explanations tailored to various stakeholders, and the need to design metrics to evaluate explanations.
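The contrast the abstract draws between opaque detectors and post-hoc explanation can be made concrete with a minimal sketch. The Python example below (an illustration of the black box, model-agnostic approach, not code from the survey) trains a random-forest detector on synthetic flow features and attaches an explanation using permutation importance; the feature names, data, and labels are hypothetical stand-ins for a real IDS dataset such as NSL-KDD or CICIDS2017.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical flow features standing in for a real IDS dataset.
feature_names = ["duration", "src_bytes", "packet_count", "distinct_ports"]
X = rng.normal(size=(2000, 4))
# Synthetic ground truth: flows with high byte counts and many distinct
# ports are labeled malicious.
y = ((X[:, 1] + X[:, 3]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but it offers the CSoC analyst no justification.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic, post-hoc explanation: permutation importance measures how
# much shuffling each feature degrades detection performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:15s} importance = {mean:.3f} +/- {std:.3f}")

Permutation importance yields only a global attribution; local explainers such as LIME or SHAP would instead attribute individual alerts, which is closer to what a CSoC analyst needs when triaging a single detection.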



Related research

07/15/2022
Creating an Explainable Intrusion Detection System Using Self Organizing Maps
Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems ...

01/31/2023
A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Deep visual models have widespread applications in high-stake domains. H...

03/30/2023
Explainable Intrusion Detection Systems Using Competitive Learning Techniques
The current state of the art systems in Artificial Intelligence (AI) ena...

03/15/2022
Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Explainable Artificial Intelligence (XAI) is an emerging research field ...

01/20/2022
Assembling a Cyber Range to Evaluate Artificial Intelligence / Machine Learning (AI/ML) Security Tools
In this case study, we describe the design and assembly of a cyber secur...

11/28/2018
An Adversarial Approach for Explainable AI in Intrusion Detection Systems
Despite the growing popularity of modern machine learning techniques (e....

08/24/2022
Explainable AI for tailored electricity consumption feedback – an experimental evaluation of visualizations
Machine learning (ML) methods can effectively analyse data, recognize pa...
