Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with Shapley Additive Explanation (SHAP)

12/14/2021
by   Khushnaseeb Roshan, et al.

Machine learning (ML) and deep learning (DL) methods are being adopted rapidly in computer network security for tasks such as fraud detection, network anomaly detection, and intrusion detection. However, despite their strong results, the lack of transparency of ML- and DL-based models is a major obstacle to their deployment, and they are often criticized for their black-box nature. Explainable Artificial Intelligence (XAI) is a promising area that can improve the trustworthiness of these models by explaining and interpreting their outputs. Moreover, if the internal working of a model is understandable, that insight can be used to improve its performance. The objective of this paper is to show how XAI can be used to interpret the results of a DL model, an autoencoder in this case, and how, based on that interpretation, its performance for computer network anomaly detection can be improved. The kernel SHAP method, which is based on Shapley values, is used as a novel feature selection technique: it identifies only those features that actually cause the anomalous behaviour of a set of attack/anomaly instances. These feature sets are then used to train and validate the autoencoder on benign data only. The resulting SHAP_Model outperformed the two other models built with alternative feature selection methods. The whole experiment is conducted on a subset of the recent CICIDS2017 network dataset, on which the SHAP_Model achieved an overall accuracy of 94% and the best AUC of the three models.
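
The following Python sketch illustrates the general idea described in the abstract, not the authors' exact code: train an autoencoder on benign traffic, use Kernel SHAP (from the shap library) to explain its reconstruction-error anomaly score on attack instances, keep the features with the largest positive contributions, and retrain on benign data restricted to those features. The toy data, the MLPRegressor standing in for the autoencoder, the layer sizes, and the top-5 cut-off are all illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# --- toy stand-ins for the CICIDS2017 subset (replace with the real data) ---
X_benign = rng.normal(size=(500, 10))                                   # benign traffic
X_attack = X_benign[:50] + rng.normal(3, 1, size=(50, 10)) * (rng.random(10) > 0.6)

scaler = MinMaxScaler().fit(X_benign)
Xb, Xa = scaler.transform(X_benign), scaler.transform(X_attack)

# --- 1) autoencoder trained on benign data only (a small MLP used as a simple AE) ---
ae = MLPRegressor(hidden_layer_sizes=(6, 3, 6), max_iter=2000, random_state=0)
ae.fit(Xb, Xb)

def reconstruction_error(X):
    """Anomaly score: per-sample mean squared reconstruction error."""
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# --- 2) Kernel SHAP explains which features push the attack scores up ---
background = shap.sample(Xb, 100)                        # benign background set
explainer = shap.KernelExplainer(reconstruction_error, background)
shap_values = explainer.shap_values(Xa, nsamples=200)    # shape: (n_attacks, n_features)

# --- 3) keep the features with the largest positive mean contribution ---
mean_contrib = shap_values.mean(axis=0)
selected = np.argsort(mean_contrib)[::-1][:5]            # top-5 is an assumed cut-off
print("Selected feature indices:", selected)

# --- 4) retrain the autoencoder on benign data restricted to the selected features ---
ae_shap = MLPRegressor(hidden_layer_sizes=(4, 2, 4), max_iter=2000, random_state=0)
ae_shap.fit(Xb[:, selected], Xb[:, selected])
```

In practice, the reduced model would then be evaluated on a held-out mix of benign and attack traffic by thresholding its reconstruction error; the threshold choice is a design decision the paper tunes on the CICIDS2017 subset.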


