TrustyAI Explainability Toolkit

04/26/2021
by Rob Geada, et al.

Artificial intelligence (AI) is increasingly popular and can be found in workplaces and homes around the world. How, then, do we ensure trust in these systems? Regulatory changes such as the GDPR mean that users have a right to understand how their data has been processed and stored. If you are denied a loan, for example, you have the right to ask why. This can be hard when the decision was made using "black box" machine learning techniques such as neural networks. TrustyAI is a new initiative that explores explainable artificial intelligence (XAI) solutions to address trustworthiness in both the ML and decision services landscapes. In this paper we look at how TrustyAI can support trust in decision services and predictive models. We investigate techniques such as LIME, SHAP and counterfactuals, benchmarking both the LIME and counterfactual implementations against existing ones. We also examine an extended version of SHAP that supports quantitative selection of the background data and provides error bounds on its explanations.
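To make the role of the background data in SHAP concrete, here is a minimal, self-contained sketch of exact Shapley-value attribution, where "absent" features are filled in from a background dataset, the replacement strategy that SHAP's kernel method approximates. This is an illustrative toy in plain Python, not TrustyAI's implementation; the model, inputs, and background rows are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley feature attributions for model f at input x.

    Features outside the coalition are filled in from each row of the
    background dataset and the model outputs are averaged, so the choice
    of background directly shapes the attributions.
    """
    n = len(x)

    def value(subset):
        # Average model output with features in `subset` taken from x
        # and the remaining features drawn from each background row.
        total = 0.0
        for row in background:
            z = [x[i] if i in subset else row[i] for i in range(n)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: for f(z) = 2*z0 + z1 with a single all-zero
# background row, the attribution of feature i is w_i * x_i.
model = lambda z: 2.0 * z[0] + 1.0 * z[1]
attributions = shapley_values(model, x=[1.0, 1.0], background=[[0.0, 0.0]])
print(attributions)  # → [2.0, 1.0]
```

The exact computation enumerates all 2^n coalitions, so it is only feasible for a handful of features; practical SHAP implementations sample coalitions instead, which is where error bounds on the resulting estimates become relevant.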

