EEG based Continuous Speech Recognition using Transformers

12/31/2019
by Gautam Krishna, et al.

In this paper we investigate continuous speech recognition from electroencephalography (EEG) features using a recently introduced end-to-end transformer-based automatic speech recognition (ASR) model. Our results show that the transformer-based model demonstrates faster training and inference than recurrent neural network (RNN) based sequence-to-sequence EEG models, but the RNN-based models performed better than the transformer-based model at test time on a limited English vocabulary.
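The speed difference reported above follows from the core transformer operation: self-attention processes all time steps of a sequence in parallel, whereas an RNN must step through them one by one. The following is a minimal illustrative sketch (not the authors' code) of scaled dot-product self-attention applied to a sequence of EEG feature frames; the array shapes and random data are assumptions for demonstration only.

```python
# Minimal sketch of scaled dot-product self-attention, the core operation
# of a transformer encoder, applied to a sequence of EEG feature frames.
# Every time step is attended to in parallel, unlike an RNN's sequential
# recurrence -- the source of the faster training/inference noted above.
import numpy as np

def self_attention(X):
    """X: (T, d) sequence of feature frames -> (T, d) context vectors."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ X                              # weighted sum over all frames

rng = np.random.default_rng(0)
eeg_frames = rng.standard_normal((5, 8))  # hypothetical: 5 time steps, 8 EEG features
out = self_attention(eeg_frames)
print(out.shape)  # (5, 8)
```

In a full transformer ASR model this operation is wrapped with learned query/key/value projections, multiple heads, and feed-forward layers, but the parallelism over time steps shown here is what distinguishes it from an RNN.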

Related research:

12/16/2019 · Continuous Speech Recognition using EEG and Video
In this paper we investigate whether electroencephalography (EEG) featur...

01/16/2023 · BayesSpeech: A Bayesian Transformer Network for Automatic Speech Recognition
Recent developments using End-to-End Deep Learning models have been show...

12/15/2021 · EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech
Transformers are groundbreaking architectures that have changed a flow o...

06/01/2020 · Constrained Variational Autoencoder for improving EEG based Speech Recognition Systems
In this paper we introduce a recurrent neural network (RNN) based variat...

09/13/2019 · A Comparative Study on Transformer vs RNN in Speech Applications
Sequence-to-sequence models have been widely used in end-to-end speech p...

05/20/2020 · A Comparison of Label-Synchronous and Frame-Synchronous End-to-End Models for Speech Recognition
End-to-end models are gaining wider attention in the field of automatic ...

02/18/2021 · Echo State Speech Recognition
We propose automatic speech recognition (ASR) models inspired by echo st...
