BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data

01/28/2021
by Demetres Kostas, et al.

Deep neural networks (DNNs) used for brain-computer interface (BCI) classification are commonly expected to learn general features when trained across a variety of contexts, such that these features can be fine-tuned to specific contexts. While such an approach has seen some success, we suggest that this interpretation is limited, and that an alternative would better leverage the newly (publicly) available massive EEG datasets. We consider how to adapt techniques and architectures used for language modelling (LM), which appear capable of ingesting vast amounts of data, towards the development of encephalography modelling (EM) with DNNs in the same vein. We specifically adapt an approach used effectively for automatic speech recognition, which, like LMs, uses a self-supervised training objective to learn compressed representations of raw data signals. After adaptation to EEG, we find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware and by different subjects performing different tasks. Furthermore, both the internal representations of this model and the entire architecture can be fine-tuned to a variety of downstream BCI and EEG classification tasks, outperforming prior work that used more task-specific (sleep-stage classification) self-supervision.
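The speech-recognition approach adapted here (wav2vec 2.0) pretrains with a masked contrastive task: a convolutional encoder compresses the raw signal into a sequence of feature vectors, some positions are masked, and a transformer must identify the original vector at each masked position among distractors from the same sequence. Below is a minimal PyTorch sketch of that objective, not the authors' released code; the layer sizes, mask ratio, and names such as ConvEncoder and contrastive_loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Strided 1-D convolutions that downsample raw multi-channel EEG
    into a shorter sequence of feature vectors (illustrative sizes)."""
    def __init__(self, in_channels=20, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=3, stride=3),
            nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=2, stride=2),
            nn.GELU(),
        )

    def forward(self, x):                    # x: (batch, channels, samples)
        return self.net(x).transpose(1, 2)   # -> (batch, seq, dim)

def contrastive_loss(context, targets, mask, temperature=0.1):
    """InfoNCE-style objective: the transformer output at each masked
    position should match the encoder vector that was masked there,
    with the other masked positions serving as distractors."""
    preds = F.normalize(context[mask], dim=-1)   # (n_masked, dim)
    cands = F.normalize(targets[mask], dim=-1)   # (n_masked, dim)
    logits = preds @ cands.T / temperature       # all-pairs cosine similarity
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# One illustrative pretraining step on fake data.
encoder = ConvEncoder()
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)
eeg = torch.randn(8, 20, 600)                 # (batch, EEG channels, samples)
feats = encoder(eeg)                          # (8, seq, 256)
mask = torch.rand(feats.shape[:2]) < 0.5      # mask half the positions
# Real implementations substitute a learned mask vector at masked
# positions; zeroing them out is a simplification for this sketch.
masked = feats.masked_fill(mask.unsqueeze(-1), 0.0)
context = transformer(masked)
loss = contrastive_loss(context, feats, mask)
loss.backward()
```

For the downstream tasks described above, the same encoder and transformer would then be fine-tuned with a small classification head on the contextual outputs.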

research · 10/27/2021
Self-supervised EEG Representation Learning for Automatic Sleep Staging
Objective: In this paper, we aim to learn robust vector representations ...

research · 09/16/2021
Self-supervised Contrastive Learning for EEG-based Sleep Staging
EEG signals are usually simple to obtain but expensive to label. Althoug...

research · 09/03/2023
Acoustic-to-articulatory inversion for dysarthric speech: Are pre-trained self-supervised representations favorable?
Acoustic-to-articulatory inversion (AAI) involves mapping from the acous...

research · 09/03/2022
Transfer Learning of an Ensemble of DNNs for SSVEP BCI Spellers without User-Specific Training
Objective: Steady-state visually evoked potentials (SSVEPs), measured wi...

research · 02/19/2022
Do Transformers use variable binding?
Increasing the explainability of deep neural networks (DNNs) requires ev...

research · 11/26/2019
Universal EEG Encoder for Learning Diverse Intelligent Tasks
Brain Computer Interfaces (BCI) have become very popular with Electroenc...

research · 10/21/2016
Deep Models for Engagement Assessment With Scarce Label Information
Task engagement is defined as loadings on energetic arousal (affect), ta...
