Adversarial TCAV – Robust and Effective Interpretation of Intermediate Layers in Neural Networks

02/10/2020
by Rahul Soni, et al.

Interpreting neural network decisions and the information learned in intermediate layers is still a challenge due to the opaque internal state and shared non-linear interactions. Although TCAV <cit.> proposed to interpret an intermediate layer by quantifying its ability to distinguish a user-defined concept from random examples, the questions of robustness (variation against the choice of random examples) and effectiveness (retrieval rate of concept images) remain. We investigate these two properties and propose improvements to make concept activations reliable for practical use.

Effectiveness: if an intermediate layer has effectively learned a user-defined concept, it should be able to recall, at the testing step, most of the images containing that concept. For instance, we observed that the recall rate of Tiger shark and Great white shark from the ImageNet dataset with "Fins" as a user-defined concept was only 18.35%. To increase the effectiveness of concept learning, we propose A-CAV, the Adversarial Concept Activation Vector, which yields larger margins between user concepts and (negative) random examples. This approach improves the aforesaid recall to 76.83%.

Robustness: we define robustness as the ability of an intermediate layer to be consistent in its recall rate (its effectiveness) across different random seeds. We observed that TCAV <cit.> has a large variance in recalling a concept across different random seeds. For example, the recall of cat images (from a layer learning the concept of tail) can drop to 18%, with a standard deviation of 20.85% across seeds. We propose a simple modification that employs a Gram-Schmidt process to sample random noise from concepts and learn an average "concept classifier". This approach substantially reduces the aforesaid 20.85% standard deviation.
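To make these ideas concrete, below is a minimal Python sketch of (a) a vanilla concept activation vector (CAV) classifier, (b) an adversarial-margin variant in the spirit of A-CAV, and (c) concept-orthogonal noise sampling with classifier averaging for robustness. The function names, the FGSM-style perturbation in activation space, and the hyperparameters (eps, rounds, n_classifiers) are illustrative assumptions rather than the paper's exact algorithm; activations are assumed to be precomputed row vectors taken from the layer under study.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cav(concept_acts, random_acts):
    # Vanilla CAV: a linear classifier separating concept activations from
    # random-example activations; the CAV is the unit normal of its boundary.
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()
    return w / np.linalg.norm(w)

def train_adversarial_cav(concept_acts, random_acts, eps=0.1, rounds=3):
    # Adversarial-margin sketch: nudge every training point toward the
    # current boundary (against its own class) along the boundary normal,
    # an FGSM-like step in activation space, then refit on the union of
    # clean and hard examples to push the margin outward.
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        w = clf.coef_.ravel()
        w = w / np.linalg.norm(w)
        signs = np.where(y == 1, -1.0, 1.0)[:, None]
        X_adv = X + eps * signs * w
        clf = LogisticRegression(max_iter=1000).fit(
            np.vstack([X, X_adv]), np.concatenate([y, y]))
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)

For the robustness fix, the abstract mentions a Gram-Schmidt process to sample random noise from concepts and an average "concept classifier". One plausible reading, sketched below, is to orthogonalize Gaussian noise against the span of the concept activations (QR decomposition is a vectorized, numerically stable Gram-Schmidt), train one CAV per noise draw, and average the resulting vectors; the paper's exact sampling scheme may differ.

def orthogonal_noise(concept_acts, n_samples, rng):
    # Gram-Schmidt-style sampling: QR gives an orthonormal basis of the
    # concept span (assumes layer_dim > number of concept examples);
    # projecting that span out of Gaussian noise leaves negatives with no
    # component along any concept direction.
    basis, _ = np.linalg.qr(concept_acts.T)
    noise = rng.normal(size=(n_samples, concept_acts.shape[1]))
    return noise - (noise @ basis) @ basis.T

def average_cav(concept_acts, n_classifiers=10, seed=0):
    # Train one CAV per independent noise draw and average them, reducing
    # the seed-to-seed variance of the learned concept direction.
    rng = np.random.default_rng(seed)
    cavs = [train_cav(concept_acts,
                      orthogonal_noise(concept_acts, len(concept_acts), rng))
            for _ in range(n_classifiers)]
    v = np.mean(np.stack(cavs), axis=0)
    return v / np.linalg.norm(v)

With activations of shape (n_examples, layer_dim), average_cav(concept_acts) returns a seed-stable unit direction whose dot product with a test activation scores the presence of the concept.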

