Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification
We present a new approach, the Adaptive Graph Temporal Convolution Network (AdaGTCN), for identifying human reader intent from Electroencephalogram (EEG) and eye-movement (EM) data, in order to differentiate between normal reading and task-oriented reading. Understanding the physiological aspects of the reading process (cognitive load and reading intent) can help improve the quality of crowd-sourced annotated data. AdaGTCN combines an Adaptive Graph Learning Layer and a Deep Neighborhood Graph Convolution Layer to identify reading activities from time-locked EEG sequences recorded during word-level eye-movement fixations. The Adaptive Graph Learning Layer dynamically learns the spatial correlations between EEG electrode signals, while the Deep Neighborhood Graph Convolution Layer exploits temporal features from a dense graph neighborhood, establishing the state of the art in reading task identification over other contemporary approaches. We compare our approach with several baselines and report an improvement of 6.29 on the ZuCo 2.0 dataset, along with extensive ablation experiments.
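To make the architectural idea concrete, the sketch below shows one common way an adaptive graph learning layer can be realized: learnable per-electrode embeddings whose inner products define a data-driven adjacency matrix, which a graph convolution then uses to aggregate features across electrodes. This is a minimal illustrative sketch, not the paper's implementation; the embedding construction, electrode count, and feature dimensions here are assumptions for demonstration only.

```python
import numpy as np

def adaptive_adjacency(emb_src, emb_dst):
    """Build a learned adjacency matrix from node embeddings.

    Row-softmax of ReLU(E_src @ E_dst.T): each electrode gets a
    normalized distribution over which other electrodes it attends to.
    (One common parameterization of adaptive graph learning; the
    paper's exact formulation may differ.)
    """
    scores = np.maximum(emb_src @ emb_dst.T, 0.0)  # ReLU keeps only positive affinities
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def graph_conv(x, adj, weight):
    """One graph convolution step: aggregate neighbor features, then project.

    x:      (n_electrodes, in_features) EEG features per electrode
    adj:    (n_electrodes, n_electrodes) learned adjacency
    weight: (in_features, out_features) projection matrix
    """
    return np.maximum(adj @ x @ weight, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
n_electrodes, emb_dim, feat_in, feat_out = 32, 8, 16, 16  # placeholder sizes, not the paper's

# Learnable parameters (here randomly initialized; trained by backprop in practice)
E_src = rng.standard_normal((n_electrodes, emb_dim))
E_dst = rng.standard_normal((n_electrodes, emb_dim))
W = rng.standard_normal((feat_in, feat_out))

A = adaptive_adjacency(E_src, E_dst)          # spatial correlations, learned from data
x = rng.standard_normal((n_electrodes, feat_in))  # one time-slice of EEG features
h = graph_conv(x, A, W)                       # electrode features after message passing
```

In a full model along these lines, such spatial graph convolutions would be interleaved with temporal convolutions over the fixation-locked EEG sequence, and the embeddings `E_src`/`E_dst` would be trained jointly with the rest of the network.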