See What You See: Self-supervised Cross-modal Retrieval of Visual Stimuli from Brain Activity

08/07/2022
by Zesheng Ye, et al.

Recent studies use a two-stage supervised framework to generate images that depict human perception of visual stimuli from EEG, a task referred to as EEG-visual reconstruction. These methods, however, cannot reproduce the exact visual stimulus, since it is the human-specified annotation of images, not the image data itself, that determines what the synthesized images look like. Moreover, the synthesized images often suffer from noisy EEG encodings and unstable training of generative models, making them hard to recognize. Instead, we present a single-stage EEG-visual retrieval paradigm in which the data of the two modalities, rather than their annotations, are correlated, allowing us to recover the exact visual stimulus for a given EEG clip. We maximize the mutual information between an EEG encoding and its associated visual stimulus by optimizing a contrastive self-supervised objective, which brings two additional benefits. First, EEG encodings can handle visual classes beyond those seen during training, since learning is not directed at class annotations. Second, the model no longer needs to generate every detail of the visual stimulus; it instead focuses on cross-modal alignment and retrieves images at the instance level, ensuring distinguishable model output. Empirical studies are conducted on the largest single-subject EEG dataset that measures brain activity evoked by image stimuli. We demonstrate that the proposed approach completes an instance-level EEG-visual retrieval task that existing methods cannot. We also examine the implications of a range of EEG and visual encoder structures. Furthermore, on the widely studied semantic-level EEG-visual classification task, the proposed method outperforms state-of-the-art supervised EEG-visual reconstruction approaches despite not using class annotations, particularly in its capability for open-class recognition.
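
The abstract describes aligning EEG and image encodings by maximizing their mutual information through a contrastive self-supervised objective, then retrieving stimuli at the instance level. As a rough illustration only, and not the authors' implementation, the PyTorch sketch below shows one common way such cross-modal alignment is done: a CLIP-style symmetric InfoNCE loss over paired EEG/image embeddings, followed by cosine-similarity retrieval. The function names, embedding shapes, and temperature value are assumptions for demonstration.

```python
# Illustrative sketch (not the paper's code): contrastive alignment of EEG
# and image encodings, plus instance-level retrieval by cosine similarity.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (EEG, image) encodings.

    eeg_emb: (B, D) encodings of EEG clips
    img_emb: (B, D) encodings of the visual stimuli shown during those clips
    """
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)
    logits = eeg_emb @ img_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(eeg_emb.size(0), device=eeg_emb.device)
    # Matched pairs lie on the diagonal; all other pairs act as negatives.
    loss_eeg_to_img = F.cross_entropy(logits, targets)
    loss_img_to_eeg = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_eeg_to_img + loss_img_to_eeg)


def retrieve(eeg_emb, gallery_img_emb, top_k=5):
    """Rank gallery images by cosine similarity to EEG encodings at test time."""
    sims = F.normalize(eeg_emb, dim=-1) @ F.normalize(gallery_img_emb, dim=-1).t()
    return sims.topk(top_k, dim=-1).indices
```

InfoNCE is a standard lower bound on mutual information, which is why contrastive alignment of the two modalities serves the stated objective; because the loss contrasts individual EEG/image pairs rather than class labels, the learned encodings can rank images from classes never seen during training.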

