EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios

04/06/2022

by Sunit Bhattacharya, et al.

We present the Eyetracked Multi-Modal Translation (EMMT) corpus, a dataset containing monocular eye movement recordings, audio, and 4-electrode electroencephalogram (EEG) data from 43 participants. The objective was to collect cognitive signals from participants engaged in several language-intensive tasks involving different text-image stimulus settings while translating from English to Czech. Each participant was exposed to 32 text-image stimulus pairs and asked to (1) read the English sentence, (2) translate it into Czech, (3) consult the image, and (4) translate again, either updating or repeating the previous translation. The text stimuli consisted of 200 unique sentences containing 616 unique words, coupled with 200 unique images as the visual stimuli. The recordings were collected over a two-week period, and all participants included in the study were native Czech speakers with strong English skills. Due to the nature of the tasks involved in the study and the relatively large number of participants, the corpus is well suited for research in Translation Process Studies and the Cognitive Sciences, among other disciplines.
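The four-step protocol above suggests a natural per-trial record layout. The sketch below is purely illustrative: the field names, types, and helper function are assumptions for exposition, not the corpus's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical record for one EMMT trial; field names are illustrative
# assumptions, not the published corpus format.
@dataclass
class EMMTTrial:
    participant_id: str
    sentence_en: str                  # English source sentence (step 1: read)
    translation_1_cs: str             # first Czech translation, text only (step 2)
    image_path: str                   # visual stimulus consulted in step 3
    translation_2_cs: str             # final translation after the image (step 4)
    gaze_samples: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y)
    eeg_samples: List[Tuple[float, float, float, float, float]] = field(default_factory=list)  # (t, ch1..ch4)

def translation_updated(trial: EMMTTrial) -> bool:
    """True if consulting the image led the participant to revise the translation."""
    return trial.translation_1_cs.strip() != trial.translation_2_cs.strip()

# Example: a trial where the image prompted a revision.
trial = EMMTTrial(
    participant_id="P01",
    sentence_en="The bat was lying on the ground.",
    translation_1_cs="Netopyr lezel na zemi.",
    image_path="stimuli/img_001.jpg",
    translation_2_cs="Palka lezela na zemi.",
)
print(translation_updated(trial))  # True: step-4 translation differs from step 2
```

A per-trial record like this makes it easy to align the behavioral outcome (did the image change the translation?) with the time-stamped gaze and EEG streams collected during the same trial.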


