Right for the Right Reason: Making Image Classification Robust

07/23/2020
by Adrian Oberföll, et al.

Convolutional neural networks (CNNs) have achieved astonishing performance on various image classification tasks. Although such models classify most images correctly, they do not provide any explanation for their decisions. Recently, there have been attempts to provide such explanations by determining which parts of the input image the classifier focuses on most. It turns out that many models output the correct classification, but for the wrong reason (e.g., based on irrelevant parts of the image). In this paper, we propose a new score that automatically quantifies the degree to which a model focuses on the right image parts. The score is calculated from the degree to which the most decisive image regions, obtained by applying an explainer to the CNN model, overlap with the silhouette of the object to be classified. In extensive experiments using VGG16, ResNet, and MobileNet as CNNs, Occlusion, LIME, and Grad-CAM/Grad-CAM++ as explanation methods, and Dogs vs. Cats and Caltech 101 as data sets, we show that our metric can indeed be used to make CNN models for image classification more robust while preserving their accuracy.
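The abstract describes the score only verbally. As a concrete illustration, the following is a minimal Python sketch of one plausible formulation, assuming the explainer yields a per-pixel relevance map (e.g., from Grad-CAM) and a binary silhouette mask of the object is available. The function name `relevance_mask_overlap` and the quantile threshold are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def relevance_mask_overlap(relevance: np.ndarray,
                           silhouette: np.ndarray,
                           quantile: float = 0.9) -> float:
    """Fraction of the most decisive pixels that fall inside the object.

    relevance:  (H, W) relevance map produced by an explainer.
    silhouette: (H, W) binary mask of the object to be classified.
    quantile:   pixels with relevance above this quantile are treated as
                the "most decisive" region (an illustrative choice, not
                the paper's definition).
    """
    threshold = np.quantile(relevance, quantile)
    decisive = relevance >= threshold  # most decisive image region
    if decisive.sum() == 0:
        return 0.0
    # Share of decisive pixels lying on the object silhouette.
    return float((decisive & (silhouette > 0)).sum() / decisive.sum())

# Toy example: a 4x4 relevance map whose hot spots lie on the object.
rel = np.array([[0.1, 0.2, 0.9, 0.8],
                [0.1, 0.1, 0.7, 0.9],
                [0.0, 0.1, 0.2, 0.1],
                [0.0, 0.0, 0.1, 0.0]])
mask = np.zeros((4, 4), dtype=int)
mask[:2, 2:] = 1  # object occupies the top-right quadrant
print(relevance_mask_overlap(rel, mask))  # -> 1.0 for this toy case
```

A score near 1 would indicate that the explainer's most decisive regions lie on the object itself, i.e., the model is right for the right reason; a low score would flag decisions driven by irrelevant image parts.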

Related research

04/06/2021
White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
In recent years, deep learning has become prevalent to solve application...

09/20/2021
Explaining Convolutional Neural Networks by Tagging Filters
Convolutional neural networks (CNNs) have achieved astonishing performan...

09/02/2021
Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Existing interpretation algorithms have found that, even deep models mak...

06/21/2021
Leveraging Conditional Generative Models in a General Explanation Framework of Classifier Decisions
Providing a human-understandable explanation of classifiers' decisions h...

07/26/2022
Visually explaining 3D-CNN predictions for video classification with an adaptive occlusion sensitivity analysis
This paper proposes a method for visually explaining the decision-making...

12/28/2020
Playing to distraction: towards a robust training of CNN classifiers through visual explanation techniques
The field of deep learning is evolving in different directions, with sti...

06/21/2023
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Convolutional Neural Networks (CNNs) are frequently and successfully use...
