Optimizing Explanations by Network Canonization and Hyperparameter Search

11/30/2022
by Frederik Pahde, et al.

Explainable AI (XAI) is slowly becoming a key component of many AI applications. Rule-based and modified backpropagation XAI approaches, however, often face challenges when applied to modern model architectures with innovative layer building blocks, for two reasons. First, the high flexibility of rule-based XAI methods leads to numerous potential parameterizations. Second, many XAI methods break the implementation-invariance axiom because they struggle with certain model components, e.g., BatchNorm layers. The latter can be addressed with model canonization, the process of restructuring a model to disregard problematic components without changing the underlying function. While model canonization is straightforward for simple architectures (e.g., VGG, ResNet), it can be challenging for more complex and highly interconnected models (e.g., DenseNet). Moreover, there is little quantifiable evidence that model canonization is beneficial for XAI. In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures, including VGG, ResNet, EfficientNet, and DenseNet, as well as Relation Networks. We further suggest an XAI evaluation framework with which we quantify and compare the effects of model canonization for various XAI methods in image classification tasks on the Pascal-VOC and ILSVRC2017 datasets, as well as for Visual Question Answering using CLEVR-XAI. Moreover, addressing the first issue outlined above, we demonstrate how our evaluation framework can be applied to perform hyperparameter search for XAI methods to optimize the quality of explanations.
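The BatchNorm case mentioned in the abstract is the classic example of canonization: in inference mode, a BatchNorm layer is a per-channel affine transform and can be folded into the preceding convolution, yielding a functionally identical model without the layer that breaks the attribution rules. Below is a minimal PyTorch sketch of this fold, assuming a Conv2d directly followed by a BatchNorm2d in eval mode; `fold_bn_into_conv` is an illustrative helper, not the authors' released code.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fuse a BatchNorm2d into the preceding Conv2d.

    In eval mode, BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta,
    so the fused convolution computes the same function as conv followed by bn.
    """
    # Per-output-channel scale applied by the BN layer.
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)

    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    # W' = W * gamma / sqrt(var + eps), broadcast over the output channels.
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    # b' = (b - mean) * gamma / sqrt(var + eps) + beta.
    bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

After folding, the BatchNorm module can be replaced with `nn.Identity()`, so the network produces the same outputs while exposing only canonical layers to the attribution method; XAI libraries such as Zennit ship ready-made canonizers that automate this merge.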


