Gradient Frequency Modulation for Visually Explaining Video Understanding Models

11/01/2021
by Xinmiao Lin, et al.

In many applications, it is essential to understand why a machine learning model makes the decisions it does, but this is inhibited by the black-box nature of state-of-the-art neural networks. Because of this, increasing attention has been paid to explainability in deep learning, including in the area of video understanding. Due to the temporal dimension of video data, the main challenge of explaining a video action recognition model is to produce spatiotemporally consistent visual explanations, which has been ignored in the existing literature. In this paper, we propose Frequency-based Extremal Perturbation (F-EP) to explain a video understanding model's decisions. Because the explanations given by perturbation methods are noisy and non-smooth both spatially and temporally, we propose to modulate the frequencies of gradient maps from the neural network model with a Discrete Cosine Transform (DCT). We show in a range of experiments that F-EP provides more spatiotemporally consistent explanations that more faithfully represent the model's decisions compared to the existing state-of-the-art methods.
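The core idea of smoothing a noisy gradient map by modulating its frequency content can be illustrated with a simple sketch. The snippet below is a minimal, hypothetical example (not the paper's actual F-EP algorithm): it applies a 2D DCT to a toy gradient map, keeps only the lowest-frequency coefficients, and reconstructs a spatially smoother map. The `keep` cutoff and the hard low-pass mask are illustrative assumptions; the paper's modulation scheme may differ.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # 2D DCT-II applied along both spatial axes
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(x):
    # 2D inverse DCT (DCT-III), inverts dct2 exactly with ortho norm
    return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def low_pass_gradient_map(grad_map, keep=8):
    """Hypothetical frequency modulation: keep only the `keep` lowest
    DCT frequencies per axis, suppressing high-frequency noise."""
    coeffs = dct2(grad_map)
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idct2(coeffs * mask)

# Toy gradient map: a smooth blob (the "signal") plus high-frequency noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
smooth = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 50.0)
noisy = smooth + 0.3 * rng.standard_normal((32, 32))
smoothed = low_pass_gradient_map(noisy, keep=8)
```

Because most of the blob's energy sits in low DCT frequencies while the noise is spread across all of them, the low-pass reconstruction lies closer to the clean map than the noisy input does. For video, the same modulation would be applied per frame (or along the temporal axis) to encourage spatiotemporal consistency.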


