Correlation based Multi-phasal models for improved imagined speech EEG recognition

11/04/2020
by Rini A. Sharon, et al.

Translation of imagined speech electroencephalogram (EEG) signals into human-understandable commands greatly facilitates the design of naturalistic brain-computer interfaces. To improve the classification of imagined speech units, this work exploits the parallel information contained in multi-phasal EEG data recorded while speaking, imagining, and performing articulatory movements corresponding to specific speech units. A bi-phase common representation learning module using neural networks is designed to model the correlation and reproducibility between an analysis phase and a support phase. The trained Correlation Network is then employed to extract discriminative features of the analysis phase. These features are further classified into five binary phonological categories using machine learning models such as Gaussian-mixture-based hidden Markov models and deep neural networks. The proposed approach also handles the non-availability of multi-phasal data during decoding. Topographic visualizations along with result-based inferences suggest that the multi-phasal correlation modelling approach proposed in the paper enhances imagined-speech EEG recognition performance.
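The bi-phase common representation idea can be illustrated with a linear canonical correlation analysis (CCA) stand-in for the paper's neural Correlation Network: two projections are fitted so that paired analysis-phase and support-phase features correlate maximally, and at decode time only the analysis-phase projection is needed, mirroring how the approach copes with missing multi-phasal data. All names, dimensions, and the synthetic data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_correlation_projections(Xa, Xs, k=1, reg=1e-6):
    """Linear CCA: find projections Wa, Ws that maximize the correlation
    between analysis-phase features Xa @ Wa and support-phase features
    Xs @ Ws. Returns the projections and the top-k canonical correlations."""
    Xa = Xa - Xa.mean(axis=0)
    Xs = Xs - Xs.mean(axis=0)
    n = Xa.shape[0]
    Caa = Xa.T @ Xa / n + reg * np.eye(Xa.shape[1])
    Css = Xs.T @ Xs / n + reg * np.eye(Xs.shape[1])
    Cas = Xa.T @ Xs / n
    # Whiten each view with the inverse Cholesky factor of its covariance,
    # then SVD the whitened cross-covariance; the singular values are the
    # canonical correlations between the two phases.
    La_inv = np.linalg.inv(np.linalg.cholesky(Caa))
    Ls_inv = np.linalg.inv(np.linalg.cholesky(Css))
    U, S, Vt = np.linalg.svd(La_inv @ Cas @ Ls_inv.T)
    Wa = La_inv.T @ U[:, :k]
    Ws = Ls_inv.T @ Vt[:k].T
    return Wa, Ws, S[:k]

# Synthetic paired "phases": both views observe a shared latent signal,
# standing in for EEG recorded while imagining vs. speaking the same unit.
rng = np.random.default_rng(0)
n = 1000
latent = rng.standard_normal((n, 1))
Xa = latent @ rng.standard_normal((1, 8)) + 0.3 * rng.standard_normal((n, 8))  # analysis phase
Xs = latent @ rng.standard_normal((1, 6)) + 0.3 * rng.standard_normal((n, 6))  # support phase

Wa, Ws, rho = fit_correlation_projections(Xa, Xs, k=1)

# At decode time the support phase may be unavailable: only Wa is applied.
analysis_features = (Xa - Xa.mean(axis=0)) @ Wa
```

In the paper this correlation objective is learned end-to-end by a neural network rather than in closed form, and the extracted analysis-phase features would then feed a GMM-HMM or DNN classifier over the five binary phonological categories.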

Related research

- Speech Synthesis using EEG (02/22/2020)
  In this paper we demonstrate speech synthesis using different electroenc...

- Diff-E: Diffusion-based Learning for Decoding Imagined Speech EEG (07/26/2023)
  Decoding EEG signals for imagined speech is a challenging task due to th...

- Relating EEG to continuous speech using deep neural networks: a review (02/03/2023)
  Objective. When a person listens to continuous speech, a corresponding r...

- Understanding effect of speech perception in EEG based speech recognition systems (05/29/2020)
  The electroencephalography (EEG) signals recorded in parallel with speec...

- Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts (04/08/2019)
  Speech-related Brain Computer Interfaces (BCI) aim primarily at finding ...

- Performance of data-driven inner speech decoding with same-task EEG-fMRI data fusion and bimodal models (06/19/2023)
  Decoding inner speech from the brain signal via hybridisation of fMRI an...

- Improving auditory attention decoding performance of linear and non-linear methods using state-space model (04/02/2020)
  Identifying the target speaker in hearing aid applications is crucial to...
