Invertible Network for Classification and Biomarker Selection for ASD

07/23/2019
by Juntang Zhuang, et al.

Determining biomarkers for autism spectrum disorder (ASD) is crucial to understanding its mechanisms. Recently, deep learning methods have achieved success in classifying ASD from fMRI data. However, due to the black-box nature of most deep learning models, it is difficult to select biomarkers or interpret model decisions. The recently proposed invertible networks can accurately reconstruct the input from its output and thus have the potential to unravel the black-box representation. We therefore propose a novel method that classifies ASD and identifies ASD biomarkers, taking the connectivity matrix computed from fMRI as input. Specifically, with invertible networks we explicitly determine the decision boundary and the projection of data points onto that boundary. As with linear classifiers, the difference between a point and its projection onto the decision boundary can be viewed as the explanation of the prediction. We then define the importance of each input feature as this explanation weighted by the gradient of the prediction with respect to the input, and identify biomarkers based on this importance measure. A regression task further validates the selection: compared with using all edges of the connectivity matrix, using only the top 10% most important edges yields a lower regression error on 6 different severity scores. Our experiments show that the invertible network is both effective at ASD classification and interpretable, allowing discovery of reliable biomarkers.
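The importance measure described above can be made concrete with a small sketch. The snippet below is a minimal, hypothetical PyTorch illustration: an orthogonal linear map stands in for the paper's deep invertible network, and a linear head on the latent code gives an explicit decision-boundary hyperplane. The projection, explanation, and gradient weighting follow the recipe in the abstract; the architecture and all names here are assumptions for illustration, not the authors' implementation.

```python
import torch

torch.manual_seed(0)

D = 16  # toy number of connectivity-matrix edges (flattened features)

# Stand-in for the invertible network: an orthogonal linear map Q,
# so f(x) = x Q and f^{-1}(z) = z Q^T. The paper's actual model is a
# deep invertible network; this substitute is purely hypothetical.
Q, _ = torch.linalg.qr(torch.randn(D, D))
f = lambda x: x @ Q
f_inv = lambda z: z @ Q.T

# Linear classifier on the latent code: logits = z W^T + b (2 classes).
W, b = torch.randn(2, D), torch.randn(2)

x = torch.randn(D, requires_grad=True)  # one subject's connectivity edges
z = f(x)

# The decision boundary is the hyperplane w.z + c = 0 in latent space,
# with w = W[1] - W[0] and c = b[1] - b[0].
w, c = W[1] - W[0], b[1] - b[0]

# Project the latent code onto the boundary, then map the projection
# back to input space through the inverse network.
z_proj = z - ((z @ w + c) / (w @ w)) * w
x_proj = f_inv(z_proj)

# Explanation: difference between the input and its projection.
explanation = (x - x_proj).detach()

# Importance: explanation weighted by the gradient of the predicted
# logit margin with respect to the input.
margin = f(x) @ w + c
grad, = torch.autograd.grad(margin, x)
importance = explanation * grad

# Biomarker selection: keep the top 10% of edges by |importance|.
k = max(1, int(0.1 * D))
print("top edges:", importance.abs().topk(k).indices.tolist())
```

The invertibility is what makes this computable: the boundary is only explicit in latent space, where the final layer is linear, and the exact inverse carries the projected point back to input space so the explanation lives on interpretable connectivity edges.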


