Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model

by Kwanseok Oh et al.

Existing studies on disease diagnostic models focus either on learning a diagnostic model to improve performance or on visually explaining an already-trained model. We propose a novel learn-explain-reinforce (LEAR) framework that unifies diagnostic model learning, visual explanation generation (explanation unit), and reinforcement of the trained diagnostic model (reinforcement unit) guided by the visual explanation. For the visual explanation, we generate a counterfactual map that transforms an input sample so that it is identified as an intended target label. For example, a counterfactual map can localize hypothetical abnormalities within a normal brain image that would cause it to be diagnosed with Alzheimer's disease (AD). We believe the generated counterfactual maps represent data-driven, model-induced knowledge about the target task, i.e., AD diagnosis using structural MRI, which can serve as a vital source of information for improving the generalization of the trained diagnostic model. To this end, we devise an attention-based feature refinement module guided by the counterfactual maps. The explanation and reinforcement units are reciprocal and can be applied iteratively. We validated the proposed approach through qualitative and quantitative analyses on the ADNI dataset, and demonstrated its comprehensibility and fidelity through ablation studies and comparisons with existing methods.
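To make the data flow concrete, the following is a minimal NumPy sketch of the two operations the abstract describes: a generator that produces an additive counterfactual map conditioned on a target label, and an attention-based refinement step that gates features with that map. The linear "networks" (W_g, W_a), shapes, and function names are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_counterfactual_map(x, target_onehot, W_g):
    # Toy stand-in for the counterfactual map generator: condition a
    # linear "network" on the intended target label and emit an
    # additive map with the same shape as the input image.
    cond = np.concatenate([x.ravel(), target_onehot])
    return (W_g @ cond).reshape(x.shape)

def refine_features(features, cf_map, W_a):
    # Attention-based feature refinement: turn the counterfactual map
    # into a (0, 1) gating mask and apply it to the features.
    attn = 1.0 / (1.0 + np.exp(-(W_a @ cf_map.ravel())))  # sigmoid
    return features * attn

# Toy 4x4 "brain image" and a binary target label (e.g. NC vs. AD).
x = rng.normal(size=(4, 4))
target = np.array([0.0, 1.0])  # counterfactual target: "AD"

W_g = rng.normal(scale=0.1, size=(16, 18))  # 16 pixels + 2 label dims
W_a = rng.normal(scale=0.1, size=(16, 16))

cf_map = generate_counterfactual_map(x, target, W_g)
x_tilde = x + cf_map                 # input transformed toward the target label
features = rng.normal(size=16)       # stand-in diagnostic features
refined = refine_features(features, cf_map, W_a)
```

In the paper's terms, x_tilde is the input carrying hypothetical abnormalities localized by the map, and the gating step is how map-derived knowledge feeds back into the trained model during reinforcement.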




Related research:

- Born Identity Network: Multi-way Counterfactual Map Generation to Explain a Classifier's Decision — "There exists an apparent negative correlation between performance and in..."
- TDLS: A Top-Down Layer Searching Algorithm for Generating Counterfactual Visual Explanation — "Explanation of AI, as well as fairness of algorithms' decisions and the ..."
- Counterfactual diagnosis — "Causal knowledge is vital for effective reasoning in science and medicin..."
- Diagnostics-Guided Explanation Generation — "Explanations shed light on a machine learning model's rationales and can..."
- Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction — "Motivation: Several accurate deep learning models have been proposed to ..."
- Clusters in Explanation Space: Inferring disease subtypes from model explanations — "Identification of disease subtypes and corresponding biomarkers can subs..."
- DreaMR: Diffusion-driven Counterfactual Explanation for Functional MRI — "Deep learning analyses have offered sensitivity leaps in detection of co..."
