A Step Towards Exposing Bias in Trained Deep Convolutional Neural Network Models

12/03/2019
by   Daniel Omeiza, et al.

We present Smooth Grad-CAM++, a technique that combines two recent methods: SmoothGrad and Grad-CAM++. Smooth Grad-CAM++ can visualize a layer, a subset of feature maps, or a subset of neurons within a feature map at each instance. In experiments on a small set of images, Smooth Grad-CAM++ produced visually sharper maps, with a larger number of salient pixels highlighted in the input images, than competing methods. Smooth Grad-CAM++ gives insight into what deep CNN models (including models trained on medical scans or imagery) learn, thereby informing decisions about building representative training sets.
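The core idea of combining SmoothGrad with a CAM method can be sketched as follows: evaluate the saliency map on several noise-perturbed copies of the input and average the results. This is a minimal illustration, not the authors' implementation; `smooth_map` and `toy_cam` are hypothetical names, and `toy_cam` is a placeholder for a real Grad-CAM++ computation on a trained CNN.

```python
import numpy as np

def smooth_map(cam_fn, image, n_samples=25, noise_sigma=0.15, seed=0):
    """SmoothGrad-style averaging: evaluate a saliency/CAM function on
    noisy copies of the input and average the resulting maps.

    noise_sigma is a fraction of the input's dynamic range, as in the
    original SmoothGrad paper.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_sigma * (image.max() - image.min())
    acc = np.zeros_like(cam_fn(image), dtype=float)
    for _ in range(n_samples):
        noisy = image + rng.normal(0.0, sigma, size=image.shape)
        acc += cam_fn(noisy)
    return acc / n_samples

# Hypothetical stand-in for Grad-CAM++: here just the input magnitude.
# A real cam_fn would backpropagate through a CNN's conv feature maps.
def toy_cam(x):
    return np.abs(x)

img = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
smoothed = smooth_map(toy_cam, img)  # averaged, denoised saliency map
```

In the actual method, `cam_fn` would be the Grad-CAM++ map for a chosen layer, feature-map subset, or neuron subset; the averaging step is what smooths out gradient noise.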
