Visual correspondence-based explanations improve AI robustness and human-AI team accuracy

07/26/2022
by Giang Nguyen, et al.

Explaining artificial intelligence (AI) predictions is increasingly important, and even imperative, in many high-stakes applications where humans are the ultimate decision-makers. In this work, we propose two novel architectures of self-interpretable image classifiers that first explain and then predict (as opposed to offering post-hoc explanations) by harnessing the visual correspondences between a query image and exemplars. Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets while performing marginally worse (by 1 to 2 points) on in-distribution tests than ResNet-50 and a k-nearest-neighbor (kNN) classifier. Via a large-scale human study on ImageNet and CUB, we find our correspondence-based explanations more useful to users than kNN explanations; they also help users reject the AI's wrong decisions more accurately than all other tested methods. Interestingly, for the first time, we show that it is possible to achieve complementary human-AI team accuracy (i.e., higher than either AI-alone or human-alone accuracy) on ImageNet and CUB image classification tasks.
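The explain-then-predict idea can be illustrated with a toy NumPy sketch. This is not the paper's actual architecture (which uses deep CNN features and learned correspondence modules); here `patch_features` is a hypothetical stand-in for a feature extractor, and the classifier simply ranks exemplars by patch-level correspondence (the explanation) before voting on a label (the prediction).

```python
import numpy as np

def patch_features(img, n_patches=4):
    # Toy stand-in for a CNN backbone: split a flattened "image" vector
    # into n_patches patch-feature vectors.
    return np.array_split(np.asarray(img, dtype=float), n_patches)

def correspondence_score(query, exemplar, n_patches=4):
    # For each query patch, find its best-matching exemplar patch
    # (the "visual correspondence") via cosine similarity, then average.
    q_patches = patch_features(query, n_patches)
    e_patches = patch_features(exemplar, n_patches)
    score = 0.0
    for qp in q_patches:
        sims = [
            float(qp @ ep) / (np.linalg.norm(qp) * np.linalg.norm(ep) + 1e-8)
            for ep in e_patches
        ]
        score += max(sims)
    return score / n_patches

def explain_then_predict(query, exemplars, labels, k=3):
    # Explanation first: rank exemplars by correspondence score.
    scores = [correspondence_score(query, ex) for ex in exemplars]
    topk = np.argsort(scores)[::-1][:k]
    explanation = [(int(i), scores[i]) for i in topk]
    # Prediction second: majority vote over the top-k exemplars' labels.
    votes = [labels[i] for i in topk]
    prediction = max(set(votes), key=votes.count)
    return prediction, explanation
```

The `explanation` returned here is the ranked list of (exemplar index, score) pairs, mirroring the idea that the user sees which exemplars, and how strongly, correspond to the query before the label is produced.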


Related research:

- AdvisingNets: Learning to Distinguish Correct and Wrong Classifications via Nearest-Neighbor Explanations (08/25/2023)
- Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models (11/22/2022)
- Won't you see my neighbor?: User predictions, mental models, and similarity-based explanations of AI classifiers (01/31/2022)
- The effectiveness of feature attribution methods and its correlation with automatic evaluation scores (05/31/2021)
- On Explainability in AI-Solutions: A Cross-Domain Survey (10/11/2022)
- Optimizing Explanations by Network Canonization and Hyperparameter Search (11/30/2022)
- Morphological Image Analysis and Feature Extraction for Reasoning with AI-based Defect Detection and Classification Models (07/21/2023)
