Scale Matters: Attribution Meets the Wavelet Domain to Explain Model Sensitivity to Image Corruptions

05/24/2023
by Gabriel Kasmi, et al.

Neural networks achieve remarkable performance in computer vision, but their deployment in real-world scenarios is challenging because they are sensitive to image corruptions. Existing attribution methods are uninformative for explaining this sensitivity, while the robustness literature offers only model-based explanations. However, the ability to scrutinize a model's behavior under image corruptions is crucial for increasing user trust. To this end, we introduce the Wavelet sCale Attribution Method (WCAM), which generalizes attribution from the pixel domain to the space-scale (wavelet) domain. Attribution in the space-scale domain reveals both where and at what scales the model focuses. We show that the WCAM explains model failures under image corruptions, identifies information sufficient for prediction, and explains why zooming in increases accuracy.
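The abstract does not give implementation details, so the sketch below is only a rough illustration of the idea of attribution in the space-scale domain, not the actual WCAM. It uses a toy scalar "model" and a single-level 2D Haar wavelet transform (all function names are my own), and performs occlusion-style attribution: zero out the wavelet coefficients of one scale at a time and record how much the model's score drops.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) coefficients."""
    a = (img[0::2] + img[1::2]) / 2.0   # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0   # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Inverse of haar2d: exact reconstruction from the coefficients."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def scale_attribution(img, model):
    """Occlusion in the wavelet domain: suppress the coefficients of one
    scale at a time and record the resulting drop in the model's score."""
    base = model(img)
    ll, details = haar2d(img)
    zeroed = tuple(np.zeros_like(c) for c in details)
    drop_fine = base - model(ihaar2d(ll, zeroed))                  # fine scale removed
    drop_coarse = base - model(ihaar2d(np.zeros_like(ll), details))  # coarse scale removed
    return {"fine_details": drop_fine, "approximation": drop_coarse}

# Toy "model": mean intensity (a stand-in for a class score).
toy_model = lambda x: float(x.mean())
img = np.arange(16, dtype=float).reshape(4, 4)
attr = scale_attribution(img, toy_model)
```

For this toy model, only the approximation coefficients matter (the Haar details carry no DC component, so zeroing them leaves the mean unchanged); a real classifier would show nontrivial drops at several scales, which is exactly the kind of signature the paper uses to explain corruption sensitivity.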


