
Fusing multimodal neuroimaging data with a variational autoencoder

by Eloy Geenjaar et al.

Neuroimaging studies often involve the collection of multiple data modalities. These modalities contain both shared and mutually exclusive information about the brain. This work aims to find a scalable and interpretable method for fusing the information from multiple neuroimaging modalities using a variational autoencoder (VAE). As an initial assessment, the learned representations are evaluated on a schizophrenia classification task: a support vector machine trained on the representations achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.8610.
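The pipeline the abstract describes — encode each modality, fuse the per-modality posteriors into one latent representation, then train an SVM on that representation and score it with ROC-AUC — can be sketched as follows. This is a minimal illustration, not the authors' model: the data is synthetic, the encoders are untrained random linear maps standing in for learned VAE encoders, and product-of-experts fusion is one common choice for combining modality posteriors (the paper's exact fusion scheme is not stated in the abstract).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for two neuroimaging modalities that share a latent cause.
n, d1, d2, dz = 400, 20, 30, 5
z_true = rng.normal(size=(n, dz))
y = (z_true[:, 0] > 0).astype(int)          # hypothetical diagnosis label
x1 = z_true @ rng.normal(size=(dz, d1)) + 0.1 * rng.normal(size=(n, d1))
x2 = z_true @ rng.normal(size=(dz, d2)) + 0.1 * rng.normal(size=(n, d2))

# Untrained linear "encoders" (random projections) produce a Gaussian
# posterior (mu, logvar) per modality, as a VAE encoder would.
W1 = rng.normal(size=(d1, dz)) / np.sqrt(d1)
W2 = rng.normal(size=(d2, dz)) / np.sqrt(d2)

def encode(x, W):
    mu = x @ W
    logvar = np.zeros_like(mu)              # unit variance for simplicity
    return mu, logvar

mu1, lv1 = encode(x1, W1)
mu2, lv2 = encode(x2, W2)

# Product-of-experts fusion: combine the two Gaussian posteriors with a
# standard-normal prior expert (precision 1) into a single fused posterior.
prec1, prec2 = np.exp(-lv1), np.exp(-lv2)
var = 1.0 / (prec1 + prec2 + 1.0)
mu = (mu1 * prec1 + mu2 * prec2) * var

# Reparameterization trick: sample the latent code differentiably.
z = mu + np.sqrt(var) * rng.normal(size=mu.shape)

# KL divergence of the fused posterior to the standard-normal prior
# (the regularizer in the VAE's ELBO objective).
kl = 0.5 * np.sum(mu**2 + var - np.log(var) - 1.0, axis=1).mean()

# Downstream evaluation mirroring the abstract: SVM on the fused latent
# means, scored with ROC-AUC on held-out subjects.
Xtr, Xte, ytr, yte = train_test_split(mu, y, test_size=0.3, random_state=0)
clf = SVC(probability=True, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
```

Because the encoders here are random rather than trained, the resulting AUC is only a smoke test of the pipeline; the 0.8610 figure in the abstract comes from the trained model on real neuroimaging data.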



A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-based Variational Autoencoder

The detection of anomalous executions is valuable for reducing potential...

Generating lyrics with variational autoencoder and multi-modal artist embeddings

We present a system for generating song lyrics lines conditioned on the ...

Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders

One of the leading causes of blindness is glaucoma, which is challenging...

Using Convolutional Variational Autoencoders to Predict Post-Trauma Health Outcomes from Actigraphy Data

Depression and post-traumatic stress disorder (PTSD) are psychiatric con...

Multimodal representation models for prediction and control from partial information

Similar to humans, robots benefit from interacting with their environmen...

Autoencoding beyond pixels using a learned similarity metric

We present an autoencoder that leverages learned representations to bett...