Utilizing Explainable AI for improving the Performance of Neural Networks

10/07/2022
by Huawei Sun, et al.

Nowadays, deep neural networks are widely used in a variety of fields that have a direct impact on society. Although these models typically show outstanding performance, they have long been used as black boxes. To address this, Explainable Artificial Intelligence (XAI) has been developing as a field that aims to improve the transparency of models and increase their trustworthiness. We propose a retraining pipeline that consistently improves the model predictions, starting from XAI and utilizing state-of-the-art techniques. To do so, we use the XAI results, namely SHapley Additive exPlanations (SHAP) values, to assign specific training weights to the data samples. This leads to improved training of the model and, consequently, better performance. To benchmark our method, we evaluate it on both real-life and public datasets. First, we apply the method to a radar-based people-counting scenario. Afterward, we test it on CIFAR-10, a public computer-vision dataset. Experiments using the SHAP-based retraining approach achieve a 4% improvement on the people-counting task. Moreover, on CIFAR-10, our SHAP-based weighting strategy yields a 3% accuracy improvement with weighted samples.
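The abstract describes using per-sample SHAP values to derive training weights for retraining. The paper's exact weighting function is not given here, so the following is only a minimal NumPy sketch of the idea under one plausible assumption: samples whose predictions rest on weak overall attribution evidence get larger weights, so retraining focuses on them. The function name `shap_sample_weights` and the `temperature` parameter are illustrative, not from the paper.

```python
import numpy as np

def shap_sample_weights(shap_values, temperature=1.0):
    """Map per-sample SHAP attributions to training weights.

    shap_values: array of shape (n_samples, n_features) holding the
    SHAP attribution of each feature for each sample's prediction.

    Assumption (not from the paper): a sample whose prediction has a
    small total attribution magnitude is "harder" and gets more weight.
    """
    # Total attribution magnitude per sample.
    evidence = np.abs(shap_values).sum(axis=1)
    # Invert and soften: low-evidence samples receive larger weights.
    w = np.exp(-evidence / temperature)
    # Normalize so the weights average to 1, preserving the loss scale.
    return w * len(w) / w.sum()
```

The resulting vector could then be passed to a training loop that supports per-sample weights (e.g. a `sample_weight` argument in Keras's `model.fit`), completing the retraining step the abstract outlines.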

