Predicting Video Features from EEG and Vice Versa

05/16/2020
by Gautam Krishna et al.

In this paper, we explore predicting facial or lip video features from electroencephalography (EEG) features, and predicting EEG features from recorded facial or lip video frames, using deep learning models. Subjects were asked to read English sentences aloud from a computer screen while their EEG signals and facial video frames were recorded simultaneously. Our model was able to generate very broad characteristics of the facial or lip video frame from input EEG features. These results are a first step towards synthesizing high-quality facial or lip video from recorded EEG features. We demonstrate results on a data set of seven subjects.
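The paper does not specify its architecture in this abstract, so the following is only a minimal illustrative sketch of the general setup it describes: a regression model trained to map per-frame EEG feature vectors to facial/lip video feature vectors. The feature dimensions, hidden size, learning rate, and the use of a plain two-layer feed-forward network are all assumptions for illustration, not details from the paper, and the data here is synthetic noise standing in for recorded EEG and video features.

```python
import numpy as np

# Assumed dimensions for illustration only (not from the paper).
EEG_DIM, VIDEO_DIM, HIDDEN = 30, 64, 128

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per video frame.
X = rng.normal(size=(256, EEG_DIM))    # EEG feature vectors
Y = rng.normal(size=(256, VIDEO_DIM))  # target video-frame feature vectors

# Two-layer MLP regressor: EEG features -> video features.
W1 = rng.normal(scale=0.1, size=(EEG_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, VIDEO_DIM))
b2 = np.zeros(VIDEO_DIM)

lr = 0.01
losses = []
for step in range(200):
    h = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
    pred = h @ W2 + b2                 # predicted video features
    err = pred - Y
    losses.append((err ** 2).mean())   # MSE regression loss

    # Manual backpropagation of the MSE loss.
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_h[h <= 0.0] = 0.0                # gradient through ReLU
    g_W1 = X.T @ g_h
    g_b1 = g_h.sum(axis=0)

    # Gradient-descent updates.
    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2
```

The reverse direction described in the abstract (video features to EEG features) would use the same pattern with the input and target roles swapped; a real system would replace the synthetic arrays with aligned EEG and video feature sequences.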

Related research

02/22/2020 - Speech Synthesis using EEG
In this paper we demonstrate speech synthesis using different electroenc...

08/29/2023 - Detection of Mild Cognitive Impairment Using Facial Features in Video Conversations
Early detection of Mild Cognitive Impairment (MCI) leads to early interv...

09/07/2017 - Automated Dyadic Data Recorder (ADDR) Framework and Analysis of Facial Cues in Deceptive Communication
We developed an online framework that can automatically pair two crowd-s...

12/15/2021 - Overview of the EEG Pilot Subtask at MediaEval 2021: Predicting Media Memorability
The aim of the Memorability-EEG pilot subtask at MediaEval'2021 is to pr...

02/24/2010 - Feature Importance in Bayesian Assessment of Newborn Brain Maturity from EEG
The methodology of Bayesian Model Averaging (BMA) is applied for assessm...

03/26/2023 - Driver Drowsiness Detection with Commercial EEG Headsets
Driver drowsiness is one of the leading causes of road accidents. Electr...

04/09/2020 - Advancing Speech Synthesis using EEG
In this paper we introduce attention-regression model to demonstrate pre...
