Explaining with Attribute-based and Relational Near Misses: An Interpretable Approach to Distinguishing Facial Expressions of Pain and Disgust

08/27/2023
by   Bettina Finzel, et al.

Explaining concepts by contrasting examples is an efficient and convenient way of giving insights into the reasons behind a classification decision. This is of particular interest in decision-critical domains, such as medical diagnostics. One particularly challenging use case is distinguishing facial expressions of pain from other states, such as disgust, due to the high similarity of their manifestations. In this paper, we present an approach for generating contrastive explanations of facial expressions of pain and disgust shown in video sequences. We implement and compare two approaches to contrastive explanation generation. The first explains a specific pain instance in contrast to the most similar disgust instance(s) based on the occurrence of facial expressions (attributes). The second takes into account which temporal relations hold between intervals of facial expressions within a sequence (relations). The input to our explanation generation approach is the output of an interpretable rule-based classifier for pain and disgust. We utilize two different similarity metrics to determine near misses and far misses as contrasting instances. Our results show that near-miss explanations are shorter than far-miss explanations, independent of the applied similarity metric. The outcome of our evaluation indicates that pain and disgust can be distinguished with the help of temporal relations. We currently plan experiments to evaluate how the explanations help in teaching concepts and how they could be enhanced by further modalities and interaction.
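The attribute-based variant of this idea can be sketched in a few lines: represent each instance as a set of observed facial-expression attributes, pick the most similar contrasting instance (near miss) and the least similar one (far miss) under a similarity metric, and explain the target by the attributes that set it apart. This is a minimal illustration, not the paper's implementation: the Action Unit names, the choice of Jaccard similarity, and all function names are assumptions made for the example.

```python
# Hypothetical sketch of near-miss / far-miss selection for contrastive
# explanations over attribute sets. Attribute names (Action Units) and the
# Jaccard metric are illustrative assumptions, not taken from the paper.

def jaccard(a, b):
    """Jaccard similarity between two attribute sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_and_far_miss(target, contrast_instances):
    """Return the most similar (near miss) and least similar (far miss)
    contrasting instance for the given target instance."""
    ranked = sorted(contrast_instances,
                    key=lambda c: jaccard(target, c),
                    reverse=True)
    return ranked[0], ranked[-1]

def contrastive_explanation(target, miss):
    """Attributes present in the target but absent from the contrasting
    instance: the differences that distinguish the target class."""
    return target - miss

# Toy data: one pain instance and two disgust instances.
pain = {"AU4_brow_lowerer", "AU6_cheek_raiser",
        "AU7_lid_tightener", "AU43_eyes_closed"}
disgust_instances = [
    {"AU4_brow_lowerer", "AU7_lid_tightener", "AU9_nose_wrinkler"},
    {"AU9_nose_wrinkler", "AU10_upper_lip_raiser"},
]

near, far = near_and_far_miss(pain, disgust_instances)
near_expl = contrastive_explanation(pain, near)
far_expl = contrastive_explanation(pain, far)
print(sorted(near_expl))  # fewer differences: shorter explanation
print(sorted(far_expl))   # more differences: longer explanation
```

On this toy data the near-miss explanation contains two attributes and the far-miss explanation four, mirroring the paper's finding that near-miss explanations are shorter than far-miss explanations.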


