Reliable Local Explanations for Machine Listening

Saumitra Mishra et al.
KTH Royal Institute of Technology
The Alan Turing Institute
Queen Mary University of London

One way to analyse the behaviour of machine learning models is through local explanations, which highlight the input features that most influence a model's predictions. Sensitivity analysis, which examines the effect of input perturbations on model predictions, is one method for generating local explanations. Meaningful input perturbations are essential for generating reliable explanations, but little prior work examines what such perturbations are or how to perform them. This work investigates these questions in the context of machine listening models that analyse audio. Specifically, we use a state-of-the-art deep singing voice detection (SVD) model to analyse whether explanations from SoundLIME (a local explanation method) are sensitive to how the method perturbs model inputs. The results demonstrate that SoundLIME explanations are sensitive to the content of the occluded input regions. We further propose and demonstrate a novel method for quantitatively identifying suitable content type(s) for reliably occluding the inputs of machine listening models. The results for the SVD model suggest that the average magnitude of the input mel-spectrogram bins is the most suitable content type for temporal explanations.
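The perturbation scheme described above can be sketched as follows. This is a minimal illustration, not the SoundLIME implementation: the model here is a toy stand-in for an SVD network, and all function names are hypothetical. It occludes temporal segments of a mel-spectrogram with a chosen content type (per-bin average magnitude, as the abstract suggests, or zeros) and scores each segment by the resulting drop in the model's output.

```python
import numpy as np

def occlude_segment(mel, start, end, fill="mean"):
    """Return a copy of a mel-spectrogram (mel bins x frames) with frames
    start:end replaced by a chosen content type.

    fill="mean" uses each mel bin's average magnitude (the content type
    the paper identifies as most suitable for temporal explanations);
    fill="zero" silences the segment instead.
    """
    out = mel.copy()
    if fill == "mean":
        out[:, start:end] = mel.mean(axis=1, keepdims=True)
    elif fill == "zero":
        out[:, start:end] = 0.0
    else:
        raise ValueError(f"unknown fill type: {fill}")
    return out

def temporal_sensitivity(model, mel, n_segments=10, fill="mean"):
    """Score each temporal segment by the drop in model output when
    that segment is occluded (larger drop = more influential)."""
    base = model(mel)
    bounds = np.linspace(0, mel.shape[1], n_segments + 1, dtype=int)
    return [base - model(occlude_segment(mel, s, e, fill))
            for s, e in zip(bounds[:-1], bounds[1:])]

# Toy stand-in for an SVD model: responds to energy in the upper mel bins.
rng = np.random.default_rng(0)
mel = rng.random((40, 100))
mel[20:, 40:60] += 2.0  # boosted "voiced" region in frames 40-60
model = lambda m: float(m[20:, :].mean())

scores = temporal_sensitivity(model, mel, n_segments=10)
print(int(np.argmax(scores)) in (4, 5))  # True: segments 4-5 cover frames 40-60
```

Changing `fill` from `"mean"` to `"zero"` in this sketch changes the per-segment scores, which is the kind of content-type sensitivity the paper investigates.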




