Speech Synthesis using EEG

02/22/2020
by Gautam Krishna et al.

In this paper we demonstrate speech synthesis using different electroencephalography (EEG) feature sets recently introduced in [1]. We use a recurrent neural network (RNN) regression model to predict acoustic features directly from EEG features. We report results both for EEG recorded in parallel with spoken speech and for EEG recorded while subjects listened to utterances. We provide EEG-based speech synthesis results for four subjects, and our results demonstrate the feasibility of synthesizing speech directly from EEG features.
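
The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of RNN regression model described: a network that reads a sequence of EEG feature frames and regresses one acoustic feature frame per time step. The stacked GRU layers, layer sizes, and the feature dimensions EEG_DIM and ACOUSTIC_DIM are hypothetical placeholders, not the authors' confirmed setup.

```python
# Minimal sketch (assumptions noted above) of an RNN regression model
# mapping EEG feature sequences to acoustic feature sequences.
import tensorflow as tf

EEG_DIM = 30        # hypothetical per-frame EEG feature dimension
ACOUSTIC_DIM = 13   # hypothetical acoustic feature dimension (e.g., MFCCs)

model = tf.keras.Sequential([
    # Stacked GRU layers process the EEG feature sequence frame by frame.
    tf.keras.layers.GRU(128, return_sequences=True,
                        input_shape=(None, EEG_DIM)),
    tf.keras.layers.GRU(128, return_sequences=True),
    # A time-distributed linear layer regresses one acoustic frame
    # for every EEG frame.
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(ACOUSTIC_DIM)),
])

# Mean squared error is a standard choice for frame-level regression.
model.compile(optimizer="adam", loss="mse")

# Training would pair EEG inputs of shape (batch, time, EEG_DIM) with
# time-aligned acoustic targets of shape (batch, time, ACOUSTIC_DIM):
# model.fit(eeg_features, acoustic_features, epochs=50, batch_size=16)
```

The synthesized acoustic features would then be passed to a vocoder to reconstruct an audible waveform; that step is outside the scope of this sketch.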
