Exploiting Multi-Modal Features From Pre-trained Networks for Alzheimer's Dementia Recognition

09/09/2020
by Junghyun Koo, et al.

Collecting and accessing a large amount of medical data is very time-consuming and laborious, not only because it is difficult to find specific patients but also because the confidentiality of patients' medical records must be resolved. On the other hand, deep learning models trained on easily collectible, large-scale datasets such as YouTube or Wikipedia offer useful representations. It can therefore be very advantageous to utilize features from these pre-trained networks when handling the small amount of data at hand. In this work, we exploit various multi-modal features extracted from pre-trained networks to recognize Alzheimer's Dementia with a neural network, using the small dataset provided by the ADReSS Challenge at INTERSPEECH 2020. The challenge is to discern patients suspected of Alzheimer's Dementia from acoustic and textual data. With the multi-modal features, we modify a Convolutional Recurrent Neural Network (CRNN) based structure that performs classification and regression tasks simultaneously and is capable of handling conversations of variable length. Our test results surpass the baseline's accuracy by 18.75%, and the result for the regression task shows the possibility of classifying 4 classes of cognitive impairment with an accuracy of 78.70%.
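To make the described architecture concrete, the following is a minimal sketch of a CRNN-style network with a shared encoder and two output heads (AD/non-AD classification and a cognitive-score regression), pooling over variable-length sequences of pre-extracted multi-modal features. It is written in PyTorch for illustration; all layer sizes, the masked mean-pooling strategy, and the dummy inputs are assumptions, not the authors' exact configuration.

    # Minimal sketch: joint classification + regression CRNN over
    # variable-length, pre-extracted multi-modal feature sequences.
    # Layer sizes and pooling are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiModalCRNN(nn.Module):
        def __init__(self, feat_dim=512, conv_dim=128, rnn_dim=64):
            super().__init__()
            # 1-D convolution over the time axis of the feature sequence.
            self.conv = nn.Sequential(
                nn.Conv1d(feat_dim, conv_dim, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            # Recurrent layer aggregates context across the conversation.
            self.rnn = nn.GRU(conv_dim, rnn_dim, batch_first=True, bidirectional=True)
            # Two heads share the same encoder.
            self.cls_head = nn.Linear(2 * rnn_dim, 2)   # AD vs. non-AD logits
            self.reg_head = nn.Linear(2 * rnn_dim, 1)   # cognitive-score estimate

        def forward(self, x, lengths):
            # x: (batch, time, feat_dim); lengths: valid timesteps per example.
            h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, conv_dim)
            packed = nn.utils.rnn.pack_padded_sequence(
                h, lengths.cpu(), batch_first=True, enforce_sorted=False)
            out, _ = self.rnn(packed)
            out, _ = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
            # Mean-pool over valid timesteps only, ignoring padding.
            mask = (torch.arange(out.size(1), device=out.device)[None, :]
                    < lengths.to(out.device)[:, None]).unsqueeze(-1)
            pooled = (out * mask).sum(dim=1) / lengths.to(out.device)[:, None].float()
            return self.cls_head(pooled), self.reg_head(pooled).squeeze(-1)

    # Joint objective: cross-entropy for diagnosis, MSE for the score (dummy data).
    model = MultiModalCRNN()
    feats = torch.randn(4, 120, 512)
    lengths = torch.tensor([120, 95, 110, 60])
    labels = torch.randint(0, 2, (4,))
    scores_true = torch.rand(4) * 30
    logits, scores_pred = model(feats, lengths)
    loss = nn.CrossEntropyLoss()(logits, labels) + nn.MSELoss()(scores_pred, scores_true)

The packed-sequence handling and masked pooling are what allow conversations of different lengths to be processed in one batch; the two heads are trained jointly with a summed loss.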
