Explainable CNN-attention Networks (C-Attention Network) for Automated Detection of Alzheimer's Disease

06/25/2020
by Ning Wang, et al.

In this work, we propose three explainable deep learning architectures to automatically detect patients with Alzheimer's disease based on their language abilities. The architectures use: (1) only part-of-speech features; (2) only language embedding features; and (3) both of these feature classes via a unified architecture. We use self-attention mechanisms and an interpretable 1-dimensional Convolutional Neural Network (CNN) to generate two types of explanations of the model's actions: intra-class explanations and inter-class explanations. The intra-class explanation captures the relative importance of each feature within its class, while the inter-class explanation captures the relative importance between the classes. Note that although we consider two classes of features in this paper, the architecture is easily expandable to more classes because of its modularity. Extensive experimentation and comparison with several recent models show that our method outperforms them, achieving an accuracy of 92.2% on the DementiaBank dataset while also being able to generate explanations. We show by example how to generate these explanations using attention values.
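To make the two explanation types concrete, the PyTorch sketch below illustrates one way the unified (third) architecture could be organized: each feature class (POS features and language embeddings) passes through a 1-D CNN and a self-attention pooling layer whose weights act as the intra-class explanation, and a final attention over the two class summaries acts as the inter-class explanation. This is a minimal sketch of the idea, not the authors' exact C-Attention Network; all layer sizes, the class names (AttentionBranch, CAttentionSketch), and the single-layer attention scorers are illustrative assumptions.

```python
# Minimal sketch, assuming simple additive attention scorers; not the
# authors' exact C-Attention Network. Dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBranch(nn.Module):
    """One feature class: 1-D CNN + self-attention pooling."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution over the sequence dimension
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1)
        self.score = nn.Linear(hidden, 1)  # attention scorer

    def forward(self, x):  # x: (batch, seq, in_dim)
        h = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, seq, hidden)
        alpha = torch.softmax(self.score(h).squeeze(-1), dim=-1)  # intra-class weights
        summary = (alpha.unsqueeze(-1) * h).sum(dim=1)            # (batch, hidden)
        return summary, alpha

class CAttentionSketch(nn.Module):
    """Two-branch model: POS features + language embeddings."""
    def __init__(self, pos_dim: int = 36, emb_dim: int = 768, hidden: int = 64):
        super().__init__()
        self.pos_branch = AttentionBranch(pos_dim, hidden)
        self.emb_branch = AttentionBranch(emb_dim, hidden)
        self.class_score = nn.Linear(hidden, 1)  # inter-class attention scorer
        self.classifier = nn.Linear(hidden, 2)   # AD vs. control

    def forward(self, pos_feats, emb_feats):
        s_pos, a_pos = self.pos_branch(pos_feats)
        s_emb, a_emb = self.emb_branch(emb_feats)
        stacked = torch.stack([s_pos, s_emb], dim=1)  # (batch, 2, hidden)
        beta = torch.softmax(self.class_score(stacked).squeeze(-1), dim=-1)
        fused = (beta.unsqueeze(-1) * stacked).sum(dim=1)
        logits = self.classifier(fused)
        # a_pos / a_emb: intra-class explanations; beta: inter-class explanation
        return logits, (a_pos, a_emb), beta

# Example usage with random stand-in features:
# model = CAttentionSketch()
# logits, (a_pos, a_emb), beta = model(torch.randn(4, 50, 36), torch.randn(4, 50, 768))
```

In this reading, sorting a_pos for a given transcript ranks the positions whose POS features most influenced the decision, while beta indicates whether the POS branch or the embedding branch dominated for that sample.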


