Interpretability of Machine Learning Methods Applied to Neuroimaging

04/14/2022
by Elina Thibeau-Sutre et al.

Deep learning methods have become very popular for the processing of natural images and were later successfully adapted to the neuroimaging field. As these methods are non-transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may achieve high performance even when relying on irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected with interpretability methods. Many methods have recently been proposed to interpret neural networks, but the domain is not yet mature. Machine learning users face two major issues when aiming to interpret their models: which method to choose, and how to assess its reliability? Here, we aim to provide answers to these questions by presenting the most common interpretability methods and the metrics developed to assess their reliability, as well as their applications and benchmarks in the neuroimaging context. Note that this is not an exhaustive survey: we focus on the studies we found most representative and relevant.
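The abstract does not name specific interpretability methods, but one common family it surveys elsewhere is perturbation-based attribution. As a hedged illustration (not the authors' own method), here is a minimal NumPy sketch of occlusion sensitivity: a patch is slid over the input, replaced by a baseline value, and the drop in the model's score is recorded, so regions the model relies on light up in the resulting heatmap. The toy linear "model" below is an assumption standing in for a trained network.

```python
import numpy as np

def occlusion_map(model, image, patch=2, baseline=0.0):
    """Occlusion sensitivity: slide a patch over the image, replace it
    with a baseline value, and record how much the model's score drops.
    Large drops mark regions the model relies on."""
    h, w = image.shape
    ref = model(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = baseline
            heat[i:i+patch, j:j+patch] = ref - model(occluded)
    return heat

# Hypothetical "model": a linear scorer that only looks at the
# top-left quadrant of an 8x8 image.
weights = np.zeros((8, 8))
weights[:4, :4] = 1.0
model = lambda img: float((weights * img).sum())

image = np.ones((8, 8))
heat = occlusion_map(model, image, patch=4)
# The heatmap is non-zero only in the quadrant the model actually uses,
# which is exactly the kind of check the abstract describes: detecting
# whether a model relies on relevant or irrelevant input regions.
```

The same sliding-window logic extends to 3D neuroimaging volumes by adding a third loop over the depth axis.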


