When reading, humans process language “automatically”, without reflecting on each step: we string words together into sentences, understand the meaning of spoken and written ideas, and process language without thinking much about how the underlying cognitive processes unfold. These processes generate cognitive signals that could potentially facilitate natural language processing tasks.
In recent years, collecting these signals has become increasingly easy and less expensive (Papoutsaki et al., 2016); as a result, using cognitive features to improve NLP tasks has become more popular. For example, researchers have used eye-tracking or gaze signals to improve part-of-speech tagging (Barrett et al., 2016), sentiment analysis (Mishra et al., 2017), and named entity recognition (Hollenstein and Zhang, 2019), among other tasks. Moreover, these signals have been used successfully to regularize attention in neural networks for NLP (Barrett et al., 2018).
However, most previous work leverages only eye-tracking data, presumably because it is the most accessible form of cognitive language processing signal. In addition, most state-of-the-art work has focused on improving a single task with a single type of cognitive signal. But can cognitive processing signals bring consistent improvements across modalities (e.g., eye-tracking and EEG) and across various NLP tasks? And if so, does combining different sources of cognitive signals bring incremental improvements?
In this paper, we aim to shed light on these questions. We present, to the best of our knowledge, the first comprehensive study analyzing the benefits and limitations of using cognitive language processing signals to improve NLP across multiple tasks and modalities (types of signals). Specifically, we go beyond the state of the art in two ways:
(Multiple Signals) We consider both eye-tracking and electroencephalography (EEG) data as examples of cognitive language processing data. Eye-tracking records the reader's gaze position on the screen and serves as an indirect measure of the cognitive reading process. EEG records electrical brain activity along the scalp and is a more direct measure of physiological processes, including language processing. This is also the first application leveraging EEG data to improve NLP tasks.
(Multiple Tasks) We then construct named entity recognition, relation classification, and sentiment analysis models with gaze and EEG features. We analyze three methods of adding these cognitive signals to machine learning architectures for NLP. First, we simply add the features to existing systems (Section 4). Second, we show how these features can be generalized so that no recorded data is required at test time (Section 5.1). Third, in a multi-task setting we learn gaze and EEG features as auxiliary tasks to aid the main NLP task (Section 6).
In summary, the most important insights gained from this work include:
1. Using cognitive features shows consistent improvements over a range of NLP tasks even without large amounts of recorded cognitive signals.
2. While integrating gaze or EEG signals separately significantly outperforms the baselines, the combination of both does not further improve the results.
3. We identify multiple directions of future research: How can cognitive signals, such as EEG data, be preprocessed and de-noised more efficiently for NLP tasks? How can cognitive features of different sources be combined more effectively for natural language processing?
All experiments presented in this paper are available at https://github.com/DS3Lab/zuco-nlp/ to provide a foundation for future work to better understand these questions.
2 Related Work
The benefits of eye movement data have been assessed in various domains, including NLP and computer vision. Eye-trackers provide millisecond-accurate records of where humans look while reading. Although gaze is mostly still recorded in controlled environments, recent approaches have shown substantial improvements in recording gaze data with the cameras of mobile devices (Gómez-Poveda and Gaudioso, 2016; Papoutsaki et al., 2016). Hence, gaze data will become more accessible and available in much larger volumes in the next few years (San Agustin et al., 2009; Sewell and Komogortsev, 2010), which will greatly facilitate the creation of sizable datasets.
The benefit of eye-tracking for studying human language processing is supported by intensive research in psycholinguistics from the 20th century onwards. For example, when humans read a text, they do not focus on every single word. The number of fixations and the fixation duration on a word depend on a number of linguistic factors (Clifton et al., 2007; Demberg and Keller, 2008). Different features even allow us to study early and late cognitive processing separately.
First, word length, frequency and predictability from context affect fixation duration and counts. The frequency effect was first noted by Rayner (1977) and has been consistently reported in various studies since (e.g., Just and Carpenter, 1980; Rayner and Duffy, 1986; Cop et al., 2017). Second, readers are more likely to fixate on open-class words (Carpenter and Just, 1983). It even appears that eye movements are reliable indicators of syntactic categories (Barrett and Søgaard, 2015).
Word familiarity also influences how long readers look at a word. Although two words may have the same frequency value, they may differ in familiarity and predictability from context. Effects of word familiarity on fixation time have been demonstrated in a number of studies (Juhasz and Rayner, 2003; Williams and Morris, 2004), as have word predictability effects (e.g., McDonald and Shillcock, 2003).
A range of work using eye-tracking signals to improve natural language processing tasks has been proposed, with promising results. Gaze data has been used to improve tasks such as part-of-speech tagging (Barrett et al., 2016), sentiment analysis (Mishra et al., 2017), prediction of multiword expressions (Rohanian et al., 2017), sentence compression (Klerke et al., 2016), and word embedding evaluation (Søgaard, 2016). Furthermore, gaze data has been used to regularize attention in neural architectures for NLP classification tasks (Barrett et al., 2018).
To the best of our knowledge, there are no applications leveraging EEG data to improve NLP tasks. There are, however, good reasons to try to combine the two sources. EEG could provide the missing information in the eye movements to disambiguate different cognitive processes. An extended fixation duration only tells us that extended cognitive processing occurs, but not which process.
EEG and eye-tracking offer the same temporal resolution among non-invasive technologies (Sereno and Rayner, 2003). Dambacher and Kliegl (2007) found that longer fixation durations correlate with larger N400 amplitude effects; the N400 is part of the normal brain response to words and other meaningful stimuli (Kutas and Federmeier, 2000). Effects of word predictability on co-registered eye movements and EEG have also been studied, both in serial word presentation and in natural reading (Dimigen et al., 2011).
Other aspects relevant to linguistic processing can be observed in the EEG signal itself. For instance, term relevance is associated with significant changes in brain activity in certain brain areas (Eugster et al., 2014), and differences in the processing of verbs vs. nouns, concrete vs. abstract nouns, and common vs. proper nouns have also been observed (Weiss and Mueller, 2003). Furthermore, there is a correspondence between computational grammar models and certain EEG effects (Hale et al., 2018).
Collecting EEG data is more expensive and time-consuming than collecting eye-tracking data, which is why brain activity data is generally less accessible. Moreover, collecting EEG data from subjects in a naturalistic reading environment is even more challenging. Hence, related work in this area is very limited. Consequently, while we rely on standard practices when leveraging gaze data, our experiments using EEG data are more exploratory.
The Zurich Cognitive Language Processing Corpus (ZuCo; Hollenstein et al., 2018) is the main data source of this work. It is the first freely available dataset (https://osf.io/q3zws/) of simultaneous eye-tracking and EEG recordings of natural sentence reading. This corpus includes recordings of 12 adult native speakers reading approximately 1100 English sentences.
The corpus contains both natural reading and task-solving reading paradigms. For this work, we make use of the first two reading paradigms of ZuCo, during which the subjects read naturally at their own speed and without any specific task other than answering some control questions testing their reading comprehension. The first paradigm includes 300 sentences (7737 tokens) from Wikipedia articles (Culotta et al., 2006) that contain semantic relations such as employer, award and job_title. The second paradigm contains 400 positive, negative and neutral sentences (8138 tokens) from the Stanford Sentiment Treebank (Socher et al., 2013), used to analyze the elicitation of emotions and opinions during reading. The same sentences were read by all 12 subjects.
3.1 Gaze features
ZuCo readily provides 5 eye-tracking features: number of fixations (NFIX), the number of all fixations landing on a word; first fixation duration (FFD), the duration of the first fixation on the current word; total reading time (TRT), the sum of all fixation durations on the current word; gaze duration (GD), the sum of all fixations on the current word in the first-pass reading before the eye moves out of the word; and go-past time (GPT), the sum of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word. Fixations shorter than 100 ms were excluded, since these are unlikely to reflect language processing (Sereno and Rayner, 2003). To increase the robustness of the signal, the eye-tracking features are averaged over all subjects.
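These definitions can be made concrete in a short sketch. The snippet below computes the five features from an ordered fixation sequence; the `(word_index, duration_ms)` record format is a simplifying assumption for illustration, not the ZuCo file format.

```python
def gaze_features(fixations, n_words, min_dur=100):
    """Compute the five word-level gaze features from an ordered fixation
    sequence.  `fixations` is a list of (word_index, duration_ms) pairs in
    temporal order; fixations shorter than `min_dur` ms are discarded,
    following the 100 ms threshold used above."""
    fixations = [(w, d) for w, d in fixations if d >= min_dur]
    nfix = [0] * n_words   # NFIX: number of fixations on the word
    ffd = [0] * n_words    # FFD: duration of the first fixation
    trt = [0] * n_words    # TRT: sum of all fixation durations
    gd = [0] * n_words     # GD: first-pass reading time
    gpt = [0] * n_words    # GPT: go-past time

    seen = [False] * n_words
    for i, (w, d) in enumerate(fixations):
        nfix[w] += 1
        trt[w] += d
        if ffd[w] == 0:
            ffd[w] = d
        if not seen[w]:
            seen[w] = True
            # GD: consecutive fixations on w before the eyes leave it
            j = i
            while j < len(fixations) and fixations[j][0] == w:
                gd[w] += fixations[j][1]
                j += 1
            # GPT: everything from the first fixation on w until the eyes
            # move past w to the right, regressions included
            j = i
            while j < len(fixations) and fixations[j][0] <= w:
                gpt[w] += fixations[j][1]
                j += 1
    return {"NFIX": nfix, "FFD": ffd, "TRT": trt, "GD": gd, "GPT": gpt}
```

Note how a regression (re-reading an earlier word) inflates GPT of the word it originated from but not its GD, which is exactly the early/late-processing distinction the features are meant to capture.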
3.2 EEG features
Since eye-tracking and EEG were recorded simultaneously, we were able to extract word-level EEG features. During the preprocessing of ZuCo, 23 electrodes in the outermost circumference (chin and neck) were used to detect muscular artifacts and were removed for subsequent analyses. Thus, each EEG feature, corresponding to the duration of a specific fixation, contains 105 electrode values. The EEG signal is split into 8 frequency bands, which are fixed ranges of wave frequencies and amplitudes over a time scale: theta1 (4-6 Hz), theta2 (6.5-8 Hz), alpha1 (8.5-10 Hz), alpha2 (10.5-13 Hz), beta1 (13.5-18 Hz), beta2 (18.5-30 Hz), gamma1 (30.5-40 Hz) and gamma2 (40-49.5 Hz). These frequency ranges are known to correlate with certain cognitive functions. For instance, theta activity reflects cognitive control and working memory (Williams et al., 2019), alpha activity has been related to attentiveness (Klimesch, 2012), gamma-band activity has been used to detect emotions (Li and Lu, 2009), and beta frequencies affect decisions regarding relevance (Eugster et al., 2014). Even though the variability between subjects is much higher in the EEG signal, we also average all features over all subjects.
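The aggregation described above can be sketched in a few lines. The array layout `(n_subjects, n_bands, n_electrodes)` is an assumption for illustration; only the reduction order (electrodes first, then subjects) follows the text.

```python
import numpy as np

# The 8 frequency bands the EEG signal is split into
BANDS = ["theta1", "theta2", "alpha1", "alpha2",
         "beta1", "beta2", "gamma1", "gamma2"]

def eeg_band_features(recordings):
    """Reduce raw word-level EEG to one value per frequency band.

    `recordings` holds, for one word, the band power at each of the 105
    scalp electrodes for every subject, with (hypothetical) shape
    (n_subjects, n_bands, n_electrodes).  Averaging first over electrodes
    and then over subjects mirrors the aggregation described above."""
    per_band = np.asarray(recordings).mean(axis=2)  # -> (n_subjects, n_bands)
    return per_band.mean(axis=0)                    # -> (n_bands,)
```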
To thoroughly evaluate the potential of gaze and brain activity data, we perform experiments on the three information extraction tasks described in this section. Current state-of-the-art systems are used for all tasks and different combinations of cognitive features are evaluated.
4.1 Named Entity Recognition
The performance of named entity recognition (NER) systems can successfully be improved with eye-tracking features (Hollenstein and Zhang, 2019). However, this has not been explored for EEG signals. We use the state-of-the-art neural architecture for NER by Lample et al. (2016) (https://github.com/glample/tagger). Their model successfully combines word-level and character-level embeddings, which we augment with embedding layers for gaze and/or EEG features. Word length and frequency are known to correlate and interact with gaze features (e.g., Just and Carpenter, 1980; Rayner, 1977), which is why we selected a base model that allows us to combine the cognitive features with word-level and character-level information. We use the named entity annotations from https://github.com/DS3Lab/ner-at-first-sight.
For this task, we used the 17 gaze features proposed by Hollenstein and Zhang (2019) for NER. These features include relevant information from early and late word processing as well as context features from the surrounding words. We extracted 8 word-level EEG features, one for each frequency band (the neural architecture of this system does not allow for raw normalized EEG and gaze features, as is the case for relation classification and sentiment analysis). The feature values were averaged over the 105 electrode values. These EEG features are aligned to the durations of the gaze features: in the experiments we tested EEG features during the total reading time of a word as well as EEG features during the first fixation only. The latter yielded better results. The gaze and EEG feature values (originally in milliseconds for gaze and microvolts for EEG) were normalized and concatenated to the character and word embeddings as one-hot vectors.
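The normalize-then-one-hot step can be sketched as follows. The bin count of 5 is an assumption for illustration; the paper does not specify it.

```python
import numpy as np

def to_one_hot_bins(values, n_bins=5):
    """Min-max normalize one feature column and discretize it into one-hot
    bins, sketching how continuous gaze/EEG values (milliseconds,
    microvolts) become vectors that can be concatenated to the character
    and word embeddings.  The bin count is a hypothetical choice."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)      # normalize to [0, 1)
    bins = np.minimum((v * n_bins).astype(int), n_bins - 1)
    one_hot = np.zeros((len(v), n_bins))
    one_hot[np.arange(len(v)), bins] = 1.0
    return one_hot
```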
All models were trained on both ZuCo paradigms described above (15875 tokens) with 10-fold cross-validation (80% training, 10% development, 10% test), and early stopping was performed after 20 epochs without improvement on the development set to reduce training time. For the experiments, the default values of all parameters were maintained. The word embeddings were initialized with the pre-trained 100-dimensional GloVe vectors (Pennington et al., 2014) and the character-based embeddings (25 dimensions) were trained on the corpus at hand.
4.2 Relation Classification
The second information extraction task we analyze is classifying semantic relations in sentences. As a state-of-the-art relation classification method we use the winning system from SemEval 2018 (Rotsztejn et al., 2018), which combines convolutional and recurrent neural networks to leverage the best architecture for different sentence lengths. We consider the following 11 relation types: award, employer, education, founder, visited, wife, political-affiliation, nationality, job-title, birth-place and death-place. We use the annotations provided by Culotta et al. (2006).
For this task, we employed the 5 word-level gaze features provided in the ZuCo data: number of fixations, first fixation duration, total reading time, gaze duration and go-past time. The eye-tracking feature values were normalized over all occurrences in the corpus. The EEG features were extracted by averaging the 105 electrode values over all fixations of each word and then normalized. All word features in a sentence were concatenated and finally padded to the maximum sentence length. The eye-tracking and/or EEG feature vectors were appended to the word embeddings.
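The concatenate-and-pad step can be sketched as below; zero-padding on the right is an assumption, chosen to match the usual padding of word embeddings.

```python
import numpy as np

def pad_sentence_features(sent_feats, max_len):
    """Zero-pad per-word cognitive feature vectors to a fixed sentence
    length so they can be appended to the (padded) word embeddings, as in
    the padding step described above.  `sent_feats` is a list of
    (n_words_i, n_feats) arrays, one per sentence."""
    n_feats = sent_feats[0].shape[1]
    out = np.zeros((len(sent_feats), max_len, n_feats))
    for i, f in enumerate(sent_feats):
        n = min(len(f), max_len)   # truncate overly long sentences
        out[i, :n] = f[:n]
    return out
```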
We performed 5-fold cross validation over 566 samples (sentences can include more than one relation type). We split the data into 80% training data and 20% test data. Due to the small size of the dataset, we used the same preprocessing steps and parameters as proposed by the SemEval 2018 system. The word embeddings were initialized with the pre-trained GloVe vectors of 300 dimensions.
[Table: NER | RelClass | Sentiment (2) | Sentiment (3)]
4.3 Sentiment Analysis
The third NLP task we choose for this work is sentiment analysis. Based on the analysis by Barnes et al. (2017), we implemented a bidirectional LSTM with an attention layer for the classification of sentence-level sentiment labels.
As for relation classification, the 5 word-level eye-tracking features were normalized and concatenated before being appended to the sentence embeddings. The raw EEG data (105 electrode values per word) were averaged and normalized.
10-fold cross-validation was performed over the 400 sentences with available sentiment labels from ZuCo (123 neutral, 137 negative and 140 positive sentences). We test ternary as well as binary classification; for the latter, we remove all neutral sentences from the training data. Word embeddings were initialized with pre-trained 300-dimensional vectors (Mikolov et al., 2013). All models are trained for 10 epochs with batch sizes of 32. The initial learning rate is set to 0.001 and halved every 3 passes for binary classification or every 10 passes for ternary classification (due to its larger training set).
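The step-decay schedule described above amounts to the following one-liner:

```python
def learning_rate(epoch, initial=1e-3, halve_every=3):
    """Step-decay schedule: start at 0.001 and halve the rate every
    `halve_every` passes (3 for binary sentiment classification, 10 for
    ternary, which has the larger training set)."""
    return initial * 0.5 ** (epoch // halve_every)
```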
For each information extraction task described in the previous section we trained baseline models as well as models augmented with gaze features, with EEG features, and with both. All baseline models were trained solely on textual information (i.e., word embeddings without any gaze or EEG features). We trained single-subject models and models in which the feature values are averaged over all subjects.
The results of the averaged models are shown in Table 1. We observe consistent improvements over the baselines for all tasks when augmented with cognitive features. The models with gaze features, EEG features and the combination thereof all outperform the baseline. Notably, while the combination of gaze and EEG features also outperforms the baseline, it does not improve over using gaze or EEG features individually.
We perform statistical significance testing using permutation tests (as described in Dror et al., 2018) over all tasks. In addition, we apply the conservative Bonferroni correction for multiple hypotheses, where the global null hypothesis is rejected if p < α/N, where N is the number of hypotheses (Dror et al., 2017). In our setting, N = 12 and α = 0.05, accounting for the combination of the 4 tasks and 3 configurations (EEG, gaze, EEG+gaze). The improvements in 11 out of the 12 configurations are statistically significant under the Bonferroni correction. Despite the limited amount of data, this result suggests that augmenting NLP systems with cognitive features is a generalizable approach.
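A minimal sketch of this testing procedure, assuming paired per-fold scores for the two systems being compared (the fold-level pairing and permutation count are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Paired two-sided permutation test.  Under the null hypothesis the
    labels "system A" / "system B" are exchangeable within each fold, so
    we randomly flip the sign of each paired difference and count how
    often the permuted mean difference is at least as extreme as the
    observed one."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    observed = abs(diffs.mean())
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1, 1], size=len(diffs))
        if abs((signs * diffs).mean()) >= observed:
            count += 1
    # add-one smoothing keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)

def bonferroni_significant(p_values, alpha=0.05):
    """Reject each of the N hypotheses only if p < alpha / N."""
    n = len(p_values)
    return [p < alpha / n for p in p_values]
```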
In an additional analysis we also evaluate the single-subject models to test the robustness of averaging the feature values over all readers. Taking binary and ternary sentiment analysis as examples, Figure 1 depicts the variability of the results between subjects. In contrast to the averaged models, the best subject for binary sentiment classification reaches an F1-score of 85% with the combination of gaze and EEG data. Moreover, the figure shows that the averaged models perform almost as well as the best subject. Note that the best-performing subject for gaze is not necessarily the same subject as for the best EEG model. We also trained models that only take into account the feature values of the five best subjects. However, averaging over all subjects yields a higher signal-to-noise ratio and provides better results than training on the best five subjects only. While previous research had shown the same effect for using eye-tracking data from multiple subjects in NLP, this had not yet been shown for EEG data.
5.1 No real-time recorded data required
While adding these cognitive features to a system shows the potential of this type of data, it is not very practical if real-time recordings of EEG and/or eye-tracking are required at prediction time. Following Barrett et al. (2016), we evaluate feature aggregation at the word-type level. This means that all cognitive features are averaged over the occurrences of each word. As a result, a lexicon of lower-cased word types with their averaged gaze and EEG feature values is compiled. Words in the training and test sets are assigned these features if they occur in the type-aggregated lexicon, and receive unknown feature values otherwise. Thus, recorded human data is not required at test time.
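Building and querying such a type-aggregated lexicon can be sketched as below; the fallback value of 0.0 for unknown words is an assumption for illustration.

```python
def build_type_lexicon(tokens, features):
    """Aggregate token-level cognitive features into a word-type lexicon.

    `tokens` lists the words as they occur in the recorded corpus and
    `features` the matching feature vectors; all occurrences of the same
    lower-cased word type are averaged, as described above."""
    sums, counts = {}, {}
    for tok, feat in zip(tokens, features):
        key = tok.lower()
        if key not in sums:
            sums[key] = [0.0] * len(feat)
            counts[key] = 0
        sums[key] = [s + f for s, f in zip(sums[key], feat)]
        counts[key] += 1
    return {k: [s / counts[k] for s in sums[k]] for k in sums}

def lookup(lexicon, token, n_feats, unk=0.0):
    """Assign type-aggregated features at train/test time; unknown words
    fall back to placeholder values, so no recording is needed."""
    return lexicon.get(token.lower(), [unk] * n_feats)
```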
We evaluate the concept of type aggregation on the tasks described above. We choose 3 benchmark datasets and add the aggregated EEG and/or eye-tracking features to the words occurring in ZuCo. For NER we use the CoNLL-2003 corpus (Sang and De Meulder, 2003), for relation classification the full Wikipedia dataset provided by Culotta et al. (2006), and for sentiment analysis the Stanford Sentiment Treebank (SST). The same experimental settings as above were applied. To avoid overfitting we did not use the official train/test splits but performed cross-validation.
Table 2 shows the details about these datasets and the results. We can observe a consistent improvement using type-aggregated gaze features. However, the effect of type-aggregated EEG features is mixed.
| | NER | RelClass | Sentiment (2) | Sentiment (3) |
| gaze + EEG | 94.63** | 77.01 | 79.74 | 54.80 |
Type aggregation shows not only that recorded gaze or EEG data is not necessary at test time, but also that improvements can be achieved with human data without requiring large quantities of recorded data.
6 Multi-task learning
[Table: main task | aux task(s) | accuracy]
We further investigate multi-task learning (MTL) as an additional machine learning strategy to benefit from cognitive features. The intuition behind MTL is that the training signal of one task, the auxiliary task, improves the performance of the main task by sharing information throughout the training process. In our case, we learn gaze and EEG features as auxiliary tasks to improve the main NLP task.
Previous work has shown that MTL can be used successfully for sequence labelling tasks (Bingel and Søgaard, 2017) due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. Moreover, gaze duration has been predicted as an auxiliary task to improve sentence compression (Klerke et al., 2016) and to better predict the readability of texts (González-Garduno and Søgaard, 2018). To the best of our knowledge, EEG features have not been used in MTL to improve NLP tasks.
In multi-task learning it is important that the tasks learned simultaneously are related to a certain extent (Caruana, 1997; Collobert et al., 2011). Assuming that the cognitive processes in the human brain during reading are related, there should be a gain from training on gaze and EEG data when learning to extract information from text. Thus, we assess the hypothesis that MTL might also be useful in our scenario.
We utilized Sluice networks (Ruder et al., 2017), in which the network learns to which extent the layers are shared between the tasks. We re-formulated sentiment analysis as a sequence labelling task at the phrase level; for binary sentiment analysis, the classes NEUTRAL and NOT-NEUTRAL were predicted. The named entity recognition task did not need to be modified, and relation classification was not tested since only sentence-level labels are available.
We ran 5-fold cross-validation for all experiments over the same data as described in Section 3. As baselines we used single-task learning and learning word frequency as an auxiliary task to the NLP task. Word frequencies were extracted from the British National Corpus (Kilgarriff, 1995). The experiments were run with the default settings recommended by Ruder et al. (2017). In accordance with their results, the Sluice networks consistently yielded better results than hard parameter sharing.
As the main task, the network learned to predict NER, binary or ternary sentiment labels. As auxiliary tasks the network learned a single gaze or EEG feature. We used five eye-tracking features: number of fixations (NFIX), mean fixation duration (MFD), first fixation duration (FFD), total reading time (TRT), and fixation probability (FIXP). Additionally, we tested four EEG features, one for each combined frequency band: EEG_theta (i.e., the average of the theta1 and theta2 values), EEG_alpha, EEG_beta, and EEG_gamma. The features were discretized and binned.
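Turning a continuous gaze or EEG feature into a small label set, so it can be learned as a sequence-labelling auxiliary task, can be sketched as below. Quantile-based bins are an assumption (the paper does not specify the binning scheme); they keep the auxiliary label distribution balanced.

```python
def quantile_bins(values, n_bins=4):
    """Discretize a continuous feature into quantile-based class labels,
    sketching the binning step described above.  Each value is mapped to
    the index of the quantile interval it falls into."""
    ranked = sorted(values)
    # bin boundaries at the 1/n, 2/n, ... quantiles
    bounds = [ranked[int(len(ranked) * i / n_bins)] for i in range(1, n_bins)]
    # a value's label is the number of boundaries it meets or exceeds
    return [sum(v >= b for b in bounds) for v in values]
```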
[Table: gaze features | EEG features]
Table 3 shows the results of these experiments; note that only the best feature combinations are included. Learning word frequency as an auxiliary task is a strong baseline. Learning gaze and EEG features as auxiliary tasks does not improve the performance over the single-task baseline for NER and only minimally for sentiment analysis. Learning two auxiliary tasks in parallel, a gaze or EEG feature and word frequency, yields modest improvements over the frequency baseline.
Adding further auxiliary tasks with additional gaze or EEG features did not yield better results. Moreover, the combination of learning gaze and brain activity features did not bring further improvements either.
Since gaze and frequency-band EEG features represent different cognitive processes involved in reading, our main and auxiliary tasks should in fact be related. However, it appears that the noise-to-signal ratio in the EEG features is too high to achieve significant results. As stated by González-Garduno and Søgaard (2018), it is important to establish whether the same feature representation can yield good results for all tasks independently. To gain further insight into these results, we analyze how well these human features can be learned.
6.1 Learning cognitive features
Using the same experimental setting as for the MTL experiments described above, we first trained single-task baselines for each of the gaze and EEG features. Then, we trained each gaze feature in 3 MTL settings: (1) word frequency as an auxiliary task, (2) the remaining gaze features as parallel auxiliary tasks, and (3) the EEG features as parallel auxiliary tasks. The same applies to EEG features as main tasks. The results in Table 4 show that gaze features have far higher baselines than EEG features. Presumably EEG is harder to learn because the data has larger variance. Moreover, while the eye-tracking data is limited to the visual component of the cognitive processes, the EEG data additionally contains motor and semantic components of the reading process.
Learning word frequency as an auxiliary task considerably helps all gaze and EEG features. The known correlation between eye-tracking and word frequency (Rayner and Duffy, 1986) is clearly beneficial for learning gaze features. Moreover, a frequency effect can also be found in early EEG signals, i.e., during the first 200 ms of reading a word (Hauk and Pulvermüller, 2004).
In accordance with previous work (e.g., Barrett et al., 2016; Mishra and Bhattacharyya, 2018), we showed consistent improvements when using gaze data in a range of information extraction tasks, both with recorded token-level features and with type-aggregated features on benchmark corpora. The patterns in the results are less consistent when enhancing NLP methods with EEG signals. While we can still show significant improvements over the baseline models, in general the models leveraging EEG features yield lower performance than those with gaze features. A plausible explanation is that combining gaze and EEG features decreases the signal-to-noise ratio even further than for only one type of cognitive data. Another interpretation is that the eye-tracking and EEG signals contain information that is (too) similar, so that the combination does not yield better results.
Consequently, some open questions remain: How can EEG signals be preprocessed and de-noised more efficiently for NLP tasks? How can EEG and eye-tracking (and other cognitive processing signals or fortuitous data; Plank, 2016) be combined more effectively to improve NLP applications?
The models leveraging type-aggregated cognitive features show that improvements can be achieved without requiring large amounts of recorded data, and provide evidence that this type of data can be generalized at the word-type level. Although these results indicate that huge amounts of recorded data are not necessary for performance gains, one limitation of this work is the effort of collecting cognitive processing signals from humans. However, webcam-based eye-trackers (e.g., Papoutsaki et al., 2016) and commercially available EEG devices (e.g., Stytsenko et al., 2011) are becoming more accurate and user-friendly.
Finally, the multi-task learning experiments provide insights into the relation between learning NLP tasks and learning word frequency and cognitive features. While the results are not as promising as those of our main experiments, they reveal qualities of the individual gaze and EEG features. For future work, a possible approach to combine the potential of exceptionally good single-subject models and multi-task learning would be to learn gaze and/or EEG features from multiple subjects at the same time. This has been shown to improve accuracy on brain-computer interface tasks and helps to further reduce the variability between subjects (Panagopoulos, 2017).
One of the challenges of NLP is to learn as much as possible from limited resources. Using cognitive language processing data may allow us to take a step towards meta-reasoning, the process of discovering the cognitive processes used to tackle a task in the human brain (Griffiths et al., 2019), and in turn to improve NLP.
We presented an extensive study of improving NLP tasks with eye-tracking and electroencephalography data as instances of cognitive processing signals. We showed how adding gaze and/or EEG features to a range of information extraction tasks, namely named entity recognition, relation classification and sentiment analysis, yields significant improvements over the baselines. Moreover, we showed how these features can be generalized at word type-level so that no recorded data is required during prediction time. Finally, we explored a multi-task learning setting to simultaneously learn NLP tasks and cognitive features.
In conclusion, the gaze and EEG signals of humans reading text, even though noisy and available in limited amounts, show great potential for improving NLP tasks and facilitate insights into language processing that can be applied to NLP, but they need to be investigated in more depth.
- Barnes et al. (2017) Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2017. Assessing state-of-the-art sentiment models on state-of-the-art sentiment datasets. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis.
- Barrett et al. (2018) Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302–312.
- Barrett et al. (2016) Maria Barrett, Joachim Bingel, Frank Keller, and Anders Søgaard. 2016. Weakly supervised part-of-speech tagging using eye-tracking data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 579–584.
- Barrett and Søgaard (2015) Maria Barrett and Anders Søgaard. 2015. Reading behavior predicts syntactic categories. In Proceedings of the 19th Conference on Computational Natural Language Learning, pages 345–349.
- Bingel and Søgaard (2017) Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, page 164.
- Carpenter and Just (1983) Patricia A Carpenter and Marcel Adam Just. 1983. What your eyes do while your mind is reading. In Eye movements in reading, pages 275–307. Elsevier.
- Caruana (1997) Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75.
- Clifton et al. (2007) Charles Clifton, Adrian Staub, and Keith Rayner. 2007. Eye movements in reading words and sentences. In Eye Movements, pages 341–371. Elsevier.
- Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493–2537.
- Cop et al. (2017) Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading. Behavior research methods, 49(2):602–615.
- Culotta et al. (2006) Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 296–303.
- Dambacher and Kliegl (2007) Michael Dambacher and Reinhold Kliegl. 2007. Synchronizing timelines: Relations between fixation durations and N400 amplitudes during sentence reading. Brain research, 1155:147–162.
- Demberg and Keller (2008) Vera Demberg and Frank Keller. 2008. Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193–210.
- Dimigen et al. (2011) Olaf Dimigen, Werner Sommer, Annette Hohlfeld, Arthur M Jacobs, and Reinhold Kliegl. 2011. Coregistration of eye movements and EEG in natural reading: analyses and review. Journal of Experimental Psychology: General, 140(4):552.
- Dror et al. (2017) Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with multiple datasets. Transactions of the Association for Computational Linguistics, 5:471–486.
- Dror et al. (2018) Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1383–1392.
- Eugster et al. (2014) Manuel JA Eugster, Tuukka Ruotsalo, Michiel M Spapé, Ilkka Kosunen, Oswald Barral, Niklas Ravaja, Giulio Jacucci, and Samuel Kaski. 2014. Predicting term-relevance from brain signals. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 425–434.
- Gómez-Poveda and Gaudioso (2016) Jose Gómez-Poveda and Elena Gaudioso. 2016. Evaluation of temporal stability of eye tracking algorithms using webcams. Expert Systems with Applications, 64:69–83.
- González-Garduno and Søgaard (2018) Ana V González-Garduno and Anders Søgaard. 2018. Learning to predict readability using eye-movement data from natives. In Thirty-Second AAAI Conference on Artificial Intelligence.
- Griffiths et al. (2019) Thomas L Griffiths, Fred Callaway, Michael B Chang, Erin Grant, Paul M Krueger, and Falk Lieder. 2019. Doing more with less: Meta-reasoning and meta-learning in humans and machines. Current Opinion in Behavioral Sciences.
- Hale et al. (2018) John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 2727–2736.
- Hauk and Pulvermüller (2004) Olaf Hauk and Friedemann Pulvermüller. 2004. Effects of word length and frequency on the human event-related potential. Clinical Neurophysiology, 115(5):1090–1103.
- Hollenstein et al. (2018) Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data.
- Hollenstein and Zhang (2019) Nora Hollenstein and Ce Zhang. 2019. Entity recognition at first sight: Improving NER with eye movement information. arXiv preprint arXiv:1902.10068.
- Juhasz and Rayner (2003) Barbara J Juhasz and Keith Rayner. 2003. Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6):1312.
- Just and Carpenter (1980) Marcel A Just and Patricia A Carpenter. 1980. A theory of reading: From eye fixations to comprehension. Psychological review, 87(4):329.
- Kilgarriff (1995) Adam Kilgarriff. 1995. BNC database and word frequency lists. Retrieved Dec. 2017.
- Klerke et al. (2016) Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1528–1533.
- Klimesch (2012) Wolfgang Klimesch. 2012. Alpha-band oscillations, attention, and controlled access to stored information. Trends in cognitive sciences, 16(12):606–617.
- Kutas and Federmeier (2000) Marta Kutas and Kara D Federmeier. 2000. Electrophysiology reveals semantic memory use in language comprehension. Trends in cognitive sciences, 4(12):463–470.
- Lample et al. (2016) Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
- Li and Lu (2009) Mu Li and Bao-Liang Lu. 2009. Emotion classification based on gamma-band EEG. In Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE, pages 1223–1226. IEEE.
- McDonald and Shillcock (2003) Scott A McDonald and Richard C Shillcock. 2003. Eye movements reveal the on-line computation of lexical probabilities during reading. Psychological science, 14(6):648–652.
- Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
- Mishra and Bhattacharyya (2018) Abhijit Mishra and Pushpak Bhattacharyya. 2018. Cognitively Inspired Natural Language Processing: An Investigation Based on Eye-tracking. Springer.
- Mishra et al. (2017) Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2017. Leveraging cognitive features for sentiment analysis. Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning.
- Panagopoulos (2017) George Panagopoulos. 2017. Multi-task learning for commercial brain computer interfaces. In 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE), pages 86–93. IEEE.
- Papoutsaki et al. (2016) Alexandra Papoutsaki, Patsorn Sangkloy, James Laskey, Nediyana Daskalova, Jeff Huang, and James Hays. 2016. WebGazer: Scalable webcam eye tracking using user interactions. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence-IJCAI 2016.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
- Plank (2016) Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. In KONVENS.
- Rayner (1977) Keith Rayner. 1977. Visual attention in reading: Eye movements reflect cognitive processes. Memory & Cognition, 5(4):443–448.
- Rayner and Duffy (1986) Keith Rayner and Susan A Duffy. 1986. Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory & cognition, 14(3):191–201.
- Rohanian et al. (2017) Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, and Le An Ha. 2017. Using gaze data to predict multiword expressions. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 601–609.
- Rotsztejn et al. (2018) Jonathan Rotsztejn, Nora Hollenstein, and Ce Zhang. 2018. ETH-DS3Lab at SemEval-2018 Task 7: Effectively Combining Recurrent and Convolutional Neural Networks for Relation Classification and Extraction. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 689–696.
- Ruder et al. (2017) Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2017. Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142.
- San Agustin et al. (2009) Javier San Agustin, Henrik Skovsgaard, John Paulin Hansen, and Dan Witzner Hansen. 2009. Low-cost gaze interaction: ready to deliver the promises. In CHI’09 Extended Abstracts on Human Factors in Computing Systems, pages 4453–4458.
- Sang and De Meulder (2003) Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147.
- Sereno and Rayner (2003) Sara C Sereno and Keith Rayner. 2003. Measuring word recognition in reading: eye movements and event-related potentials. Trends in cognitive sciences, 7(11):489–493.
- Sewell and Komogortsev (2010) Weston Sewell and Oleg Komogortsev. 2010. Real-time eye gaze tracking with an unmodified commodity webcam employing a neural network. In CHI’10 Extended Abstracts on Human Factors in Computing Systems, pages 3739–3744.
- Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing (EMNLP), pages 1631–1642.
- Søgaard (2016) Anders Søgaard. 2016. Evaluating word embeddings with fMRI and eye-tracking. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 116–121.
- Stytsenko et al. (2011) Kirill Stytsenko, Evaldas Jablonskis, and Cosima Prahm. 2011. Evaluation of consumer EEG device Emotiv EPOC. In MEi: CogSci Conference 2011, Ljubljana.
- Weiss and Mueller (2003) Sabine Weiss and Horst M Mueller. 2003. The contribution of EEG coherence to the investigation of language. Brain and language, 85(2):325–343.
- Williams et al. (2019) Chad C Williams, Mitchel Kappen, Cameron D Hassall, Bruce Wright, and Olave E Krigolson. 2019. Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning. NeuroImage.
- Williams and Morris (2004) Rihana Williams and Robin Morris. 2004. Eye movements, word familiarity, and vocabulary acquisition. European Journal of Cognitive Psychology, 16(1-2):312–339.