Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey

07/14/2023
by Md Mahfuzur Rahman, et al.

Deep learning (DL) models have become popular due to their ability to learn directly from raw data in an end-to-end paradigm, alleviating the need for a separate, error-prone feature extraction phase. Recent DL-based neuroimaging studies have also demonstrated notable performance gains over traditional machine learning algorithms. However, challenges remain: the lack of transparency in these models hinders their successful deployment in real-world applications. In recent years, Explainable AI (XAI) has seen a surge of developments aimed mainly at providing intuition about how models reach their decisions, which is essential in safety-critical domains such as healthcare, finance, and law enforcement. While the interpretability field is advancing noticeably, researchers still lack clarity about what aspect of model learning a post hoc method reveals and how to validate its reliability. This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain. First, we summarize the current state of interpretability resources in general, focusing on the progression of methods, associated challenges, and opinions. Second, we discuss how multiple recent neuroimaging studies have leveraged model interpretability to capture the anatomical and functional brain alterations most relevant to model predictions. Finally, we discuss the limitations of current practices and offer insights and guidance on how future research can be steered to make deep learning models substantially interpretable and thus advance scientific understanding of brain disorders.
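
To make the notion of a post hoc attribution method concrete, below is a minimal illustrative sketch, not taken from the paper: a hypothetical 3D CNN (`TinyBrainCNN`) classifying a structural MRI volume, with a simple gradient saliency map computed for one prediction. The model, the `gradient_saliency` helper, and the input sizes are assumptions chosen for illustration; the survey itself reviews a much broader range of interpretability methods.

```python
# Illustrative sketch only (not the paper's method): gradient saliency as a
# post hoc attribution map for a hypothetical 3D CNN on an sMRI volume.
import torch
import torch.nn as nn

class TinyBrainCNN(nn.Module):
    """Hypothetical 3D CNN for binary diagnosis from a structural MRI volume."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.classifier = nn.Linear(8 * 4 * 4 * 4, 2)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

def gradient_saliency(model, volume, target_class):
    """Return |d logit / d voxel|: a simple post hoc, voxel-wise attribution map."""
    model.eval()
    volume = volume.clone().requires_grad_(True)
    logits = model(volume)
    logits[0, target_class].backward()
    return volume.grad.abs().squeeze()

model = TinyBrainCNN()
mri = torch.randn(1, 1, 32, 32, 32)   # stand-in for a preprocessed sMRI volume
saliency = gradient_saliency(model, mri, target_class=1)
print(saliency.shape)                 # relevance map with the input's spatial size
```

In neuroimaging studies such a voxel-wise map is typically overlaid on the brain volume to suggest which anatomical regions drove the prediction; as the abstract notes, validating what such maps actually reveal about model learning remains an open question.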


