ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI

10/22/2021
by   Samuel Hess, et al.

Unexplainable black-box models create scenarios where anomalies cause deleterious responses, posing unacceptable risks. These risks have motivated the field of eXplainable Artificial Intelligence (XAI), which aims to improve trust by evaluating local interpretability in black-box neural networks. Unfortunately, ground truth is unavailable for the model's decision, so evaluation is limited to qualitative assessment. Further, interpretability may lead to inaccurate conclusions about the model or a false sense of trust. We propose to improve XAI from the vantage point of the user's trust by exploring a black-box model's latent feature space. We present an approach, ProtoShotXAI, that uses a Prototypical few-shot network to explore the contrastive manifold between nonlinear features of different classes. A user explores the manifold by perturbing the input features of a query sample and recording the response for a subset of exemplars from any class. Our approach is the first locally interpretable XAI model that can be extended to, and demonstrated on, few-shot networks. We compare ProtoShotXAI to state-of-the-art XAI approaches on MNIST, Omniglot, and ImageNet to demonstrate, both quantitatively and qualitatively, that ProtoShotXAI provides more flexibility for model exploration. Finally, ProtoShotXAI also demonstrates novel explainability and detectability on adversarial samples.
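The mechanism described above — perturbing a query sample and recording how its similarity to class exemplars changes — can be sketched as a simple occlusion-style attribution against a class prototype. This is a minimal illustration, not the paper's implementation: the feature extractor `embed`, the patch-masking scheme, and cosine similarity as the prototype score are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def protoshot_attribution(query, support, embed, patch=7, stride=7):
    """Occlusion-style attribution of a query image against a class prototype.

    query:   2-D array (H, W), the sample to explain
    support: list of 2-D arrays, exemplars from one class
    embed:   callable mapping an image to a 1-D feature vector (assumed here;
             in practice this would be the black-box model's feature extractor)
    Returns an (H, W) map of the drop in prototype similarity caused by
    occluding each patch of the query.
    """
    # Class prototype: mean of the support exemplars' embeddings.
    prototype = np.mean([embed(s) for s in support], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    base = cosine(embed(query), prototype)
    H, W = query.shape
    attribution = np.zeros((H, W))
    counts = np.zeros((H, W))
    for i in range(0, H, stride):
        for j in range(0, W, stride):
            occluded = query.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            drop = base - cosine(embed(occluded), prototype)
            attribution[i:i + patch, j:j + patch] += drop
            counts[i:i + patch, j:j + patch] += 1
    return attribution / np.maximum(counts, 1)
```

Regions with a large positive score are those whose occlusion most reduces similarity to the class prototype, i.e. the features the model relies on for that class; repeating the procedure against exemplars of a different class probes the contrastive manifold between the two.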

