Reliable Local Explanations for Machine Listening

05/15/2020
by Saumitra Mishra, et al.

One way to analyse the behaviour of machine learning models is through local explanations that highlight the input features that maximally influence a model's predictions. Sensitivity analysis, which analyses the effect of input perturbations on model predictions, is one method for generating local explanations. Meaningful input perturbations are essential for generating reliable explanations, but limited work exists on what such perturbations are and how to perform them. This work investigates these questions in the context of machine listening models that analyse audio. Specifically, we use a state-of-the-art deep singing voice detection (SVD) model to analyse whether explanations from SoundLIME (a local explanation method) are sensitive to how the method perturbs model inputs. The results demonstrate that SoundLIME explanations are sensitive to the content in the occluded input regions. We further propose and demonstrate a novel method for quantitatively identifying suitable content type(s) for reliably occluding inputs of machine listening models. The results for the SVD model suggest that the average magnitude of input mel-spectrogram bins is the most suitable content type for temporal explanations.
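To illustrate the kind of occlusion experiment the abstract describes, the following Python sketch perturbs temporal segments of a mel-spectrogram with different content types and scores each segment by the resulting prediction change. This is not the authors' code: the functions occlude_segment and segment_relevance, the predict_fn interface, and the three content types are illustrative assumptions; only NumPy is required.

```python
import numpy as np

def occlude_segment(mel, start, end, content="mean"):
    """Replace temporal frames [start, end) of a mel-spectrogram
    (shape: n_mels x n_frames) with synthetic content."""
    occluded = mel.copy()
    if content == "zeros":
        occluded[:, start:end] = 0.0
    elif content == "mean":
        # Fill each mel bin with its average magnitude over time --
        # the content type the abstract reports as most suitable
        # for temporal explanations of the SVD model.
        occluded[:, start:end] = mel.mean(axis=1, keepdims=True)
    elif content == "noise":
        # Uniform noise spanning the input's magnitude range.
        occluded[:, start:end] = np.random.uniform(
            mel.min(), mel.max(), size=occluded[:, start:end].shape)
    else:
        raise ValueError(f"unknown content type: {content}")
    return occluded

def segment_relevance(mel, predict_fn, n_segments=10, content="mean"):
    """Score each temporal segment by the prediction drop caused by
    occluding it -- a simple sensitivity-analysis explanation.
    predict_fn maps a mel-spectrogram to a scalar model output."""
    base = predict_fn(mel)
    bounds = np.linspace(0, mel.shape[1], n_segments + 1, dtype=int)
    scores = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        scores.append(base - predict_fn(occlude_segment(mel, start, end, content)))
    return np.array(scores)
```

Comparing segment_relevance(mel, predict_fn, content="zeros") against content="mean" for the same input makes the paper's central question concrete: if the two explanations rank segments differently, the explanation is sensitive to the occlusion content, not only to the model.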
