Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study

05/13/2022
by   Tristan Gomez, et al.

An important limitation to the development of AI-based solutions for In Vitro Fertilization (IVF) is the black-box nature of most state-of-the-art models, a consequence of the complexity of deep learning architectures, which raises potential bias and fairness issues. The need for interpretable AI has risen not only in the IVF field but also in the deep learning community in general. This has started a trend in the literature where authors focus on designing objective metrics to evaluate generic explanation methods. In this paper, we study the behavior of recently proposed objective faithfulness metrics applied to the problem of embryo stage identification. We benchmark attention models and post-hoc methods using these metrics and show empirically that (1) the metrics produce low overall agreement on the model ranking and (2) depending on the metric approach, either post-hoc methods or attention models are favored. We conclude with general remarks about the difficulty of defining faithfulness and the necessity of understanding its relationship with the type of approach that is favored.
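Faithfulness metrics of the kind studied here are often deletion-based: the most salient pixels are progressively removed and the drop in the model's score is measured, with a faster drop indicating a more faithful explanation. The following is a minimal sketch of such a metric, not the paper's exact protocol; the `predict` callable, the baseline value, and the linear masking schedule are all illustrative assumptions:

```python
import numpy as np

def deletion_score(predict, image, saliency, steps=10, baseline=0.0):
    """Deletion-style faithfulness sketch: mask pixels from most to
    least salient and track the model score. Returns a normalized
    area under the deletion curve; lower means the explanation was
    more faithful (masking its top pixels hurt the score quickly).
    """
    # Pixel indices ordered from most to least salient.
    order = np.argsort(saliency.ravel())[::-1]
    n = order.size
    masked = image.copy().ravel()
    scores = [predict(image)]
    for k in range(1, steps + 1):
        upto = int(round(k * n / steps))      # mask the top-k fraction
        masked[order[:upto]] = baseline
        scores.append(predict(masked.reshape(image.shape)))
    # Trapezoidal area under the curve, normalized to [0, 1] on the x-axis.
    s = np.asarray(scores)
    return float(((s[:-1] + s[1:]) / 2).mean())
```

As a sanity check, a saliency map that matches a toy model's true pixel contributions should score lower (drop faster) than a reversed ranking:

```python
img = np.array([[1.0, 0.0], [0.5, 0.2]])
predict = lambda x: float(x.mean())
assert deletion_score(predict, img, img) < deletion_score(predict, img, -img)
```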

