Disentangled representations: towards interpretation of sex determination from hip bone

12/17/2021
by Kaifeng Zou et al.

By highlighting the regions of the input image that contribute most to the decision, saliency maps have become a popular method for making neural networks interpretable. In medical imaging, they are particularly well suited to explaining neural networks in the context of abnormality localization. However, in our experiments, they are less suited to classification problems in which the features that distinguish the classes are spatially correlated, scattered and far from trivial. In this paper we therefore propose a new paradigm for better interpretability: we provide the user with relevant and easily interpretable information so that they can form their own opinion. We use Disentangled Variational Auto-Encoders whose latent representation is divided into two components: a non-interpretable part and a disentangled part, the latter consisting of categorical variables that explicitly represent the classes of interest. In addition to predicting the class of a given input sample, such a model can transform the sample from one class into a sample of another class by modifying the value of the categorical variables in the latent representation. This paves the way to easier interpretation of class differences. We illustrate the relevance of this approach for automatic sex determination from hip bones in forensic medicine. The class-discriminative features encoded by the model were found to be consistent with expert knowledge.
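
To make the class-swap mechanism concrete, the sketch below shows one possible way such a model could be written in PyTorch. The layer sizes, the names DisentangledVAE and swap_class, and the Gumbel-Softmax relaxation of the categorical variable are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of a disentangled VAE whose latent
# code is split into a continuous, non-interpretable part z and a categorical
# part c that explicitly encodes the class of interest (e.g. sex).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVAE(nn.Module):
    def __init__(self, input_dim=4096, z_dim=32, num_classes=2):
        super().__init__()
        self.num_classes = num_classes
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)                  # continuous latent: mean
        self.logvar = nn.Linear(256, z_dim)              # continuous latent: log-variance
        self.class_logits = nn.Linear(256, num_classes)  # categorical latent: class
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.mu(h), self.logvar(h), self.class_logits(h)

    def decode(self, z, c_onehot):
        return self.decoder(torch.cat([z, c_onehot], dim=-1))

    def forward(self, x):
        mu, logvar, logits = self.encode(x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        c = F.gumbel_softmax(logits, tau=1.0, hard=True)         # relaxed class sample
        return self.decode(z, c), mu, logvar, logits

    def swap_class(self, x, target_class):
        # Re-decode x with the categorical variable forced to another class:
        # this is the class-to-class transformation described in the abstract.
        mu, _, _ = self.encode(x)
        c = F.one_hot(torch.full((x.size(0),), target_class, dtype=torch.long),
                      num_classes=self.num_classes).float()
        return self.decode(mu, c)  # posterior mean gives a deterministic transform

During training, the usual VAE reconstruction and KL terms would be combined with a supervision signal on the categorical logits; at inspection time, comparing an input x with model.swap_class(x, other_class) exposes the class-specific differences the model has learned.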
