An explainability framework for cortical surface-based deep learning

03/15/2022
by Fernanda L. Ribeiro, et al.

The emergence of explainability methods has enabled a better comprehension of how deep neural networks operate through concepts that are easily understood and implemented by the end user. While most explainability methods have been designed for traditional deep learning, some have been further developed for geometric deep learning, in which data are predominantly represented as graphs. These representations are regularly derived from medical imaging data, particularly in the field of neuroimaging, in which graphs are used to represent brain structural and functional wiring patterns (brain connectomes) and cortical surface models are used to represent the anatomical structure of the brain. Although explainability techniques have been developed for identifying important vertices (brain areas) and features for graph classification, these methods are still lacking for more complex tasks, such as surface-based modality transfer (or vertex-wise regression). Here, we address the need for surface-based explainability approaches by developing a framework for cortical surface-based deep learning, providing a transparent system for modality transfer tasks. First, we adapted a perturbation-based approach for use with surface data. Then, we applied our perturbation-based method to investigate the key features and vertices used by a geometric deep learning model developed to predict brain function from anatomy directly on a cortical surface model. We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
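The core idea outlined above can be sketched in code. The following is a minimal, illustrative example of perturbation-based vertex importance, assuming a model that maps per-vertex features on a cortical surface to per-vertex predictions; the function name `perturbation_importance` and the toy model are hypothetical and not the authors' implementation.

```python
import numpy as np

def perturbation_importance(model, features, seed=0):
    """Estimate per-vertex importance by perturbing each vertex's
    features and measuring the change in the model's prediction.

    model    : callable mapping a (V, F) feature array to a (V,) prediction
               (stand-in for a surface-based geometric deep learning model)
    features : (V, F) array of per-vertex features (e.g. curvature, thickness)
    """
    rng = np.random.default_rng(seed)
    baseline = model(features)
    importance = np.zeros(features.shape[0])
    for v in range(features.shape[0]):
        perturbed = features.copy()
        # occlusion-style perturbation: replace vertex v's features with noise
        perturbed[v] = rng.normal(size=features.shape[1])
        # importance = mean absolute change in the model's output
        importance[v] = np.mean(np.abs(model(perturbed) - baseline))
    return importance

# toy model: each vertex's prediction is the mean of its own features
toy_model = lambda x: x.mean(axis=1)
feats = np.ones((5, 3))
scores = perturbation_importance(toy_model, feats)
```

In practice the perturbation would be applied to features defined on a cortical surface mesh, and the resulting importance values could be visualized vertex-wise to localize the anatomical features driving the prediction.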


research
09/02/2023

A 3D explainability framework to uncover learning patterns and crucial sub-regions in variable sulci recognition

Precisely identifying sulcal features in brain MRI is made challenging b...
research
10/14/2021

Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack

Methods for model explainability have become increasingly critical for t...
research
12/20/2013

Deep learning for neuroimaging: a validation study

Deep learning methods have recently made notable advances in the tasks o...
research
05/26/2020

DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning

Whether it be in a man-made machine or a biological system, form and fun...
research
06/29/2020

Surface-based 3D Deep Learning Framework for Segmentation of Intracranial Aneurysms from TOF-MRA Images

Segmentation of intracranial aneurysms is an important task in medical d...
research
02/15/2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis

A variety of methods have been proposed to try to explain how deep neura...
research
07/26/2022

ScoreCAM GNN: An Optimal Explanation of Deep Networks on Graphs

The explainability of deep networks is becoming a central issue in the d...
