Spatio-Temporal Perturbations for Video Attribution

09/01/2021
by   Zhenqiang Li, et al.

Attribution methods offer a way to interpret opaque neural networks visually by identifying and visualizing the input regions or pixels that dominate a network's output. Devising attribution methods to visually explain video understanding networks is challenging because of the unique spatiotemporal dependencies in video inputs and the special 3D-convolutional or recurrent structures of video understanding networks. However, most existing attribution methods focus on explaining networks that take a single image as input, and the few works devised specifically for video attribution fall short of handling the diverse structures of video understanding networks. In this paper, we investigate a generic perturbation-based attribution method that is compatible with diverse video understanding networks. In addition, we propose a novel regularization term that enhances the method by constraining the smoothness of its attribution results in both the spatial and temporal dimensions. To assess the effectiveness of different video attribution methods without relying on manual judgment, we introduce reliable objective metrics, which are validated by a newly proposed reliability measurement. We verify the effectiveness of our method through both subjective and objective evaluation and through comparison with multiple significant attribution methods.
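To make the two core ideas concrete, here is a minimal NumPy sketch of (a) perturbing a video with a soft preservation mask and (b) a total-variation-style regularizer that penalizes non-smooth masks along both the temporal and spatial axes. The function names, the blending scheme, and the exact form of the regularizer are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tv_smoothness(mask):
    """Total-variation regularizer encouraging a perturbation mask to
    vary smoothly over time (axis 0) and space (axes 1, 2).
    `mask` has shape (T, H, W) with values in [0, 1].
    (Illustrative form; the paper's regularizer may differ.)"""
    dt = np.abs(np.diff(mask, axis=0)).sum()  # temporal differences
    dh = np.abs(np.diff(mask, axis=1)).sum()  # vertical spatial differences
    dw = np.abs(np.diff(mask, axis=2)).sum()  # horizontal spatial differences
    return dt + dh + dw

def perturb(video, mask, baseline=0.0):
    """Blend each frame toward a baseline value where the mask is low.
    `video` has shape (T, H, W, C); regions with mask ~ 1 are preserved,
    regions with mask ~ 0 are replaced by the baseline."""
    return mask[..., None] * video + (1.0 - mask[..., None]) * baseline
```

In a perturbation-based attribution loop, the mask would be optimized so that the perturbed video preserves (or maximally degrades) the network's prediction while `tv_smoothness(mask)` keeps the attribution spatiotemporally coherent; a constant mask incurs zero smoothness penalty.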
