A 3D explainability framework to uncover learning patterns and crucial sub-regions in variable sulci recognition

09/02/2023
by Michail Mamalakis et al.

Precisely identifying sulcal features in brain MRI is made challenging by the variability of brain folding. This research introduces an innovative 3D explainability framework that validates the outputs of deep learning networks trained to detect the paracingulate sulcus, an anatomical feature that may or may not be present on the frontal medial surface of the human brain. Two networks were trained and tested, combining the local explainability techniques Grad-CAM and SHAP with a dimensionality reduction method. The explainability framework provided both localized and global explanations, along with classification accuracies, revealing the pertinent sub-regions that contribute to the decision process through a post-fusion transformation of explanatory and statistical features. Applying the method to the TOP-OSLO dataset of MRI scans acquired from patients with schizophrenia, detection of the paracingulate sulcus (presence or absence) was more accurate in the left hemisphere than in the right, with distinct but extensive sub-regions contributing to each classification outcome. The study also highlighted the critical role of an unbiased annotation protocol in maintaining fair network performance. The proposed method not only offers automated, impartial annotation of a variable sulcus but also provides insight into the broader anatomical variations associated with its presence throughout the brain. Adopting this methodology holds promise for instigating further exploration and inquiry in the field of neuroscience.
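The pipeline the abstract describes, per-volume local attributions fused and reduced to a cohort-level summary, can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' implementation: the Tiny3DCNN classifier is a hypothetical stand-in, the attributions use Captum's LayerGradCam and GradientShap, and PCA stands in for the unspecified dimensionality reduction and post-fusion steps.

```python
# Minimal sketch: local 3D explanations (Grad-CAM + SHAP), fused and reduced.
# All names here (Tiny3DCNN, layer choice, PCA) are illustrative assumptions.
import torch
import torch.nn as nn
from captum.attr import LayerGradCam, GradientShap
from sklearn.decomposition import PCA

class Tiny3DCNN(nn.Module):
    """Hypothetical 3D classifier (paracingulate sulcus present/absent)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.classifier = nn.Linear(8 * 4 ** 3, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Tiny3DCNN().eval()
volumes = torch.randn(16, 1, 32, 32, 32)  # toy stand-in for MRI volumes

# Local explanations per volume: Grad-CAM on the first conv layer, plus
# SHAP-style gradient attributions against an all-zero baseline.
gradcam = LayerGradCam(model, model.features[0])
shap = GradientShap(model)
cam_maps = gradcam.attribute(volumes, target=1)
shap_maps = shap.attribute(volumes, baselines=torch.zeros_like(volumes), target=1)

# Post-fusion: concatenate the flattened explanatory features, then reduce
# dimensionality to obtain a global, cohort-level summary per volume.
fused = torch.cat([cam_maps.flatten(1), shap_maps.flatten(1)], dim=1)
components = PCA(n_components=2).fit_transform(fused.detach().numpy())
print(components.shape)  # (16, 2): one low-dimensional embedding per volume
```

In the paper the fused, reduced explanatory features are related back to anatomical sub-regions driving each classification; here the two PCA components merely stand in for that global summary step.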

