Generating EEG features from Acoustic features

02/29/2020
by Gautam Krishna, et al.

In this paper we demonstrate predicting electroencephalography (EEG) features from acoustic features using a recurrent neural network (RNN) based regression model and a generative adversarial network (GAN). We predict several types of EEG features from acoustic features. We compare our results with the previously studied inverse problem of speech synthesis using EEG, and our results demonstrate that EEG features can be generated from acoustic features with lower root mean square error (RMSE) and normalized RMSE values than generating acoustic features from EEG features (i.e., speech synthesis using EEG) when tested on the same data sets.
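The regression setup described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's actual model: a vanilla RNN cell with randomly initialized weights maps acoustic feature frames (e.g. MFCCs) to EEG feature frames, and predictions are scored with RMSE and a range-normalized RMSE. The layer sizes, feature dimensions, and the choice of normalizing by the target's dynamic range are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: T frames, 13 MFCCs in, 30 EEG features out.
T, n_acoustic, n_hidden, n_eeg = 50, 13, 32, 30

# Randomly initialized vanilla RNN cell: h_t = tanh(W x_t + U h_{t-1} + b),
# followed by a linear projection V to the EEG feature space.
W = rng.normal(0, 0.1, (n_hidden, n_acoustic))
U = rng.normal(0, 0.1, (n_hidden, n_hidden))
b = np.zeros(n_hidden)
V = rng.normal(0, 0.1, (n_eeg, n_hidden))

def rnn_predict(x):
    """Map a (T, n_acoustic) acoustic sequence to (T, n_eeg) EEG frames."""
    h = np.zeros(n_hidden)
    out = []
    for x_t in x:
        h = np.tanh(W @ x_t + U @ h + b)
        out.append(V @ h)
    return np.stack(out)

def rmse(pred, target):
    """Root mean square error over all frames and feature dimensions."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def normalized_rmse(pred, target):
    """RMSE divided by the target's dynamic range (one common convention)."""
    return rmse(pred, target) / float(target.max() - target.min())

# Stand-in data in place of real MFCC frames and measured EEG features.
acoustic = rng.normal(size=(T, n_acoustic))
eeg_true = rng.normal(size=(T, n_eeg))

eeg_pred = rnn_predict(acoustic)
print(rmse(eeg_pred, eeg_true), normalized_rmse(eeg_pred, eeg_true))
```

In the paper's comparison, the same metrics would be computed for the reverse direction (EEG features in, acoustic features out) on the same data, and the two RMSE / normalized-RMSE values compared.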

Related research:

- 02/22/2020 — Speech Synthesis using EEG: In this paper we demonstrate speech synthesis using different electroenc...
- 04/09/2020 — Advancing Speech Synthesis using EEG: In this paper we introduce attention-regression model to demonstrate pre...
- 05/29/2020 — Predicting Different Acoustic Features from EEG and towards direct synthesis of Audio Waveform from EEG: In [1,2] authors provided preliminary results for synthesizing speech fr...
- 09/16/2019 — MFCC-based Recurrent Neural Network for Automatic Clinical Depression Recognition and Assessment from Speech: Major depression, also known as clinical depression, is a constant sense...
- 11/11/2019 — Modeling EEG data distribution with a Wasserstein Generative Adversarial Network to predict RSVP Events: Electroencephalography (EEG) data are difficult to obtain due to complex...
- 01/26/2016 — Recurrent Neural Network Postfilters for Statistical Parametric Speech Synthesis: In the last two years, there have been numerous papers that have looked ...
- 05/29/2018 — Focal onset seizure prediction using convolutional networks: Objective: This work investigates the hypothesis that focal seizures can...
