TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

05/02/2022
by Maede Zolanvari, et al.

Despite AI's significant growth, its "black box" nature creates challenges in generating adequate trust. Thus, AI is seldom used as a standalone unit in high-risk IoT applications such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing XAI that is both fast and accurate remains challenging, especially in numerical applications. Here, we propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST), which is model-agnostic, high-performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information to rank these variables, keep only those most influential on the AI's outputs, and call them "representatives" of the classes. We then use multi-modal Gaussian distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the industrial Internet of things (IIoT), a prominent application domain that deals with numerical data, using three different cybersecurity datasets. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with LIME, a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and method of explainability. Finally, we also show how TRUST's output is explained to the user.
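The three-stage pipeline the abstract describes (factor analysis, mutual-information ranking, per-class multi-modal Gaussians) maps naturally onto off-the-shelf components. Below is a minimal sketch using scikit-learn's FactorAnalysis, mutual_info_classif, and GaussianMixture as stand-ins; the function names, parameter counts, and defaults are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the TRUST pipeline; all names/defaults are assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

def fit_trust(X, y, n_factors=10, n_reps=4, n_modes=3):
    """Fit the three TRUST stages on features X and the AI's predicted labels y."""
    y = np.asarray(y)

    # 1. Factor analysis: transform input features into latent variables.
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    Z = fa.fit_transform(X)

    # 2. Rank latent variables by mutual information with the AI's outputs;
    #    keep only the most influential ones as the class "representatives".
    mi = mutual_info_classif(Z, y, random_state=0)
    reps = np.argsort(mi)[::-1][:n_reps]

    # 3. Fit a multi-modal Gaussian (a mixture) per class over the representatives.
    gmms = {c: GaussianMixture(n_components=n_modes, random_state=0)
               .fit(Z[y == c][:, reps])
            for c in np.unique(y)}
    return fa, reps, gmms

def explain(x, fa, reps, gmms):
    """Per-class log-likelihoods for one new sample x (1-D feature vector)."""
    z = fa.transform(x.reshape(1, -1))[:, reps]
    return {c: g.score_samples(z)[0] for c, g in gmms.items()}
```

At explanation time, the class whose mixture assigns the highest likelihood to the sample's representatives is the one TRUST attributes to the AI's decision; the abstract reports this agreeing with the AI's output at an average success rate of 98% across the three IIoT cybersecurity datasets.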

