Gradient-Adjusted Neuron Activation Profiles for Comprehensive Introspection of Convolutional Speech Recognition Models

02/19/2020
by   Andreas Krug, et al.

Deep Learning based Automatic Speech Recognition (ASR) models are very successful, but hard to interpret. To gain a better understanding of how Artificial Neural Networks (ANNs) accomplish their tasks, introspection methods have been proposed. Adapting such techniques from computer vision to speech recognition is not straightforward, because speech data is more complex and less interpretable than image data. In this work, we introduce Gradient-adjusted Neuron Activation Profiles (GradNAPs) as a means to interpret features and representations in Deep Neural Networks. GradNAPs are characteristic responses of ANNs to particular groups of inputs, which incorporate the relevance of neurons for prediction. We show how to utilize GradNAPs to gain insight into how data is processed in ANNs. This includes different ways of visualizing features and clustering of GradNAPs to compare embeddings of different groups of inputs in any layer of a given network. We demonstrate our proposed techniques using a fully-convolutional ASR model.
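The core idea, as described above, is to combine a network's activations with gradient-based relevance to obtain a characteristic per-neuron profile for a group of inputs. The sketch below illustrates this idea in a minimal, hypothetical form: activations are weighted by the magnitude of the gradients of the prediction with respect to those activations, then averaged over the input group and the time axis. The array shapes, the absolute-value weighting, and the normalization choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradnap_sketch(activations, gradients):
    """Minimal sketch of a gradient-adjusted activation profile.

    Assumed (hypothetical) shapes:
      activations: (n_inputs, n_neurons, n_time) layer activations
                   for one group of inputs (e.g. one phoneme class).
      gradients:   same shape, gradients of the prediction score
                   with respect to those activations.

    Returns a length-n_neurons profile characterizing the group.
    """
    # Weight each activation by the magnitude of its gradient,
    # so neurons irrelevant to the prediction are suppressed.
    adjusted = activations * np.abs(gradients)
    # Average over the input group (axis 0) and time (axis 2)
    # to get one characteristic value per neuron.
    profile = adjusted.mean(axis=(0, 2))
    # Normalize so profiles of different groups are comparable.
    norm = np.linalg.norm(profile)
    return profile / norm if norm > 0 else profile
```

Profiles computed this way for different input groups can then be compared, e.g. with cosine similarity or by clustering, which corresponds to the group-comparison use cases sketched in the abstract.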

