During reading, the eye proceeds in a series of rapid movements, called saccades, instead of smoothly wandering over the text. Between two saccades, the eye remains almost still for about 200 to 300 milliseconds on average, fixating a certain position in the text to obtain visual input. Saccades serve as the eye's relocation mechanism, moving the focus by seven to nine characters on average from one fixation position to the next. Eye movements during reading are driven by complex cognitive processes involving vision, attention, language, and oculomotor control [1, 2]. Since a reader's eye movement behavior is precisely observable and reflects the interplay of internal processes and external stimuli in the generation of complex action, it is a popular research subject in cognitive psychology.
One common insight of various studies in the field is that eye movement patterns vary significantly between individuals [3, 4, 5]. This property makes them interesting for biometrics. Indeed, identification based on eye movements during reading may offer several advantages in many application areas: users can be identified unobtrusively while reading a document they would access anyway, which saves time and attention. For biometric identification during reading, nearest-neighbor [6] and generative probabilistic models [7, 8] of eye-gaze patterns have been explored.
Eye movements are believed to mirror different levels of comprehension processes involved in reading. Experimental studies have shown that readers' fixations are influenced by syntactic comprehension, semantic plausibility, background knowledge, text difficulty, and inconsistencies. These findings motivate our goal of estimating readers' levels of text comprehension from their eye gaze.
Gaze patterns, also referred to as scanpaths, that occur during reading are sequences of fixations and saccades. One can easily extract vectors of aggregated distributional features that standard learning algorithms can process—for instance, the average fixation duration and saccade amplitude—albeit at a great loss of information. Generative graphical models
allow one to infer the likelihood of a scanpath under reader-specific model parameters. However, since both identification and assessing text comprehension are discriminative tasks, it appears plausible that discriminatively trained models would be better suited. Classifying sequences with a discriminative model involves engineering a suitable sequence kernel or another form of data representation. Recurrent neural networks tend to work well for problems for which large data collections are available to train high-capacity models. By contrast, eye movement data cannot be collected at a large scale, because their collection requires laboratory equipment and test subjects. We therefore focus on the development of a suitable sequence kernel. We follow the Fisher kernel approach because it allows us to use background knowledge, in the form of a plausible generative model, as the representation of scanpaths.
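The aggregated distributional-feature baseline mentioned above can be sketched as follows; the feature names and the toy scanpath are illustrative, not the exact feature set of any cited method:

```python
from statistics import mean

def aggregate_features(positions, durations):
    """Collapse a scanpath (fixation positions in characters, durations in
    ms) into a small vector of aggregated distributional features."""
    amplitudes = [b - a for a, b in zip(positions, positions[1:])]
    return {
        "mean_fixation_duration": mean(durations),
        "mean_saccade_amplitude": mean(abs(a) for a in amplitudes),
        # proportion of backward (regressive) saccades
        "regression_rate": sum(a < 0 for a in amplitudes) / len(amplitudes),
    }

feats = aggregate_features([0, 8, 15, 11, 24], [210, 180, 250, 300, 190])
```

Representations like this discard the order and timing structure of the scanpath, which is precisely the information the sequence kernel developed below retains.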
Based on an existing generative model by Landwehr et al. [7], we develop a model that takes into account lexical features of the fixated words to generate a scanpath. This model is then used to map scanpaths into Fisher score vectors. We classify with an SVM and the Fisher kernel function, thereby exploiting both the advantages of generative modeling and the strengths of discriminative classification.
The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 introduces the problem setting and notation. In Section 4, we develop a generative model of scanpaths that takes into account the lexical features of the fixated word, and derive the corresponding Fisher kernel in Section 5. In Section 6, we show the empirical evaluation; Section 7 concludes.
2 Related Work
Eye movements are assumed to mirror cognitive processes involved in reading. A large body of psycholinguistic evidence shows that language comprehension processes at the syntactic, semantic, and pragmatic levels are significant predictors of a reader's fixation durations and saccadic behavior [10, 14, 2, 13, 15].
For our purposes, effects of higher-level text comprehension (i.e., at the level of the discourse) on a reader's eye movements are most relevant. For example, it has been shown that conceptual difficulty of a text leads to a larger proportion of regressions, an increase in fixation durations, and a decrease in saccade amplitudes [16, 17]. Rayner et al. show that higher global and local discourse difficulty of a text increases the number and average duration of fixations as well as the proportion of regressive saccades. Semantically impossible or implausible words have been shown to increase the first-pass reading time and the total reading time of a word, respectively [18, 19]. Moreover, background knowledge decreases both the sum of all fixation durations on a word when reading it for the first time and the proportion of skipped words.
Existing attempts to exploit this eye-mind connection and actually use a reader's eye movements to predict text comprehension have crucial limitations. Copeland et al. [20, 21, 22] use the saccades between a comprehension question and the text as a feature to predict the response accuracy on this very question. Hence, these models are not trained to infer reading comprehension from the eye movements while reading a text, as claimed by the authors, but rather predict response accuracy on a question from the answer-seeking eye movements of the user. Indeed, the practical relevance of predicting text comprehension from reading is that no questions would be needed anymore to assess a reader's comprehension of a text. Underwood et al. also claim to predict text comprehension from a reader's fixation durations. However, they use the same data for training and testing their model.
Compared to the usually rather small size of the effects reflecting cognitive processes, individual variability of eye movements in reading is very large. This has been observed consistently in the psychological literature [4, 24, 25]. The idea behind eye movements as a biometric feature is to exploit this individual variability. Some biometric studies are based on eye movements observed in response to an artificial visual stimulus, such as a moving [26, 27, 28] or fixed dot on a computer screen, or a specific image stimulus. Other studies, like our paper, focus on the problem of identifying subjects while they process an arbitrary stimulus, which has the advantage that the identity can be inferred unobtrusively during routine access to a device or document. Holland and Komogortsev [6] study identification of subjects from eye movements on arbitrary text, based on aggregated statistical features such as the average fixation duration and average saccade amplitude. Rigas et al. extend this approach with additional dynamic saccadic features. However, by reducing observations to a small set of real-valued features, much of the information in eye movements is lost. Landwehr et al. [7] show that by fitting subject-specific generative probabilistic models to eye movements, much higher identification accuracies can be achieved. They develop a parametric generative model; as this model serves as a starting point for our method, details are given in Section 4.1. Abdelwahab et al. [8] extend this model to a fully Bayesian approach, in which distributions are defined by nonparametric densities inferred under a Gaussian process prior that is centered at the gamma family of distributions. Both methods serve as reference methods in our experiments.
3 Problem Setting
When reading a text, a reader generates a scanpath that is given by a sequence of fixation positions (positions in the text that were fixated, measured in characters) and fixation durations (measured in milliseconds). This scanpath can be observed with an eye-tracking system.
Each word fixated at a given time step possesses lexical features that can be aggregated into a vector. Some of the models that we will study allow the distributions of saccade amplitudes and durations to depend on such lexical features. Lexical features—for instance, word frequency or part of speech—are derived from the text itself.
We study the problems of reader identification and assessing text comprehension. In reader identification, the model output is the conjectured identity of the reader who generated a scanpath for a text, from a set of individuals that are known at training time. In assessing text comprehension, the model output is the conjectured level of the reader's comprehension of the text. In order to annotate training and evaluation data, the ground-truth level of text comprehension can be determined, for instance, by a question-answering protocol carried out after reading. In an actual application setting, no comprehension questions are asked.
In both settings, the training data consist of a set of scanpaths that have been obtained from subjects reading texts, annotated with labels.
4 Generative Models of Scanpaths
Landwehr et al. [7] define a parametric model of scanpaths given a text. Fitting this model to the subset of scanpaths and texts in the training data generated by a particular reader yields reader-specific models. At application time, the prediction for a scanpath on a novel text is the reader whose model assigns the highest likelihood to the scanpath. We first review this generative model, and then develop it into a generative model of scanpaths that takes into account the lexical features of the fixated words in Section 4.2. In Section 5, we derive the Fisher kernel and arrive at a discriminative model.
4.1 The Model of Landwehr et al., 2014
This section presents a slightly simplified version of the generative model of scanpaths of Landwehr et al. [7]. It reflects how readers generate fixations while reading a text and models the type and amplitude of saccadic movements as well as fixation durations. The joint distribution over all fixation positions and durations is assumed to factorize as
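In one explicit notation, writing $x_t$ and $d_t$ for the fixation position and duration at time $t$, and $T$ for the number of fixations, a first-order Markov factorization consistent with the model description is (the symbols are our own choice):

$$p(x_1, d_1, \dots, x_T, d_T \mid \text{text}) \;=\; p(x_1, d_1) \prod_{t=2}^{T} p(x_t, d_t \mid x_{t-1})$$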
To model the conditional distribution of the next fixation position and duration given the current fixation position, the model distinguishes five saccade types: a reader can refixate the current word at a character position before the current position, refixate the current word at a position after the current position, fixate the next word in the text, move the fixation to a word after the next word, or regress to fixate a word occurring earlier in the text. At each time step, the model first draws a saccade type
from a multinomial distribution. It then draws a (signed) saccade amplitude (throughout our work, saccade amplitude is measured in number of characters, as this metric is relatively insensitive to differences in the eye-to-screen distance, which might become relevant for practical applications of the model)
from type-specific gamma distributions; that is,
where the gamma distributions are parameterized by type-specific shape and scale parameters. The current fixation position is then updated by adding the saccade amplitude. The model finally draws the fixation duration, also from type-specific gamma distributions
All parameters of the model are aggregated into a parameter vector.
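The generative process of Section 4.1 can be sketched as follows; the saccade-type labels, probabilities, and gamma parameters are illustrative stand-ins, not fitted values from the paper:

```python
import random

# Five saccade types as described above; names are our own.
TYPES = ["refix_before", "refix_after", "next_word", "forward_skip", "regression"]
TYPE_PROBS = [0.1, 0.1, 0.5, 0.2, 0.1]
AMP_PARAMS = {t: (2.0, 4.0) for t in TYPES}   # (shape, scale) per type, characters
DUR_PARAMS = {t: (8.0, 30.0) for t in TYPES}  # (shape, scale) per type, milliseconds

def sample_fixation(rng):
    """Draw one (saccade type, amplitude, duration) triple."""
    s = rng.choices(TYPES, weights=TYPE_PROBS, k=1)[0]
    amplitude = rng.gammavariate(*AMP_PARAMS[s])
    if s in ("refix_before", "regression"):  # backward saccades are signed negative
        amplitude = -amplitude
    duration = rng.gammavariate(*DUR_PARAMS[s])
    return s, amplitude, duration

rng = random.Random(0)
saccade_type, amplitude, duration = sample_fixation(rng)
```

Iterating `sample_fixation` and accumulating the amplitudes onto the current position yields a complete synthetic scanpath.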
The difference between this simplified variant and the original model [7] is that the original model truncates the gamma distributions in order to fit within the limits of the text interval defined by the saccade type; for instance, to the currently fixated word for refixations. Since this truncation causes discontinuities in the Fisher scores, we instead let the amplitudes be governed by regular gamma distributions with type-specific scale and shape parameters. Furthermore, Landwehr et al. distinguish the same five saccade types for modeling saccade amplitudes, but only four saccade types for modeling fixation durations, whereas we distinguish five saccade types for both distributions.
4.2 Generative Model with Lexical Features
We extend the model presented in Section 4.1 by allowing the distributions of fixation durations and saccade amplitudes to depend on lexical features of each fixated word.
Let a random variable denote a vector of features of the word that is fixated at a given time step, such as word frequency or length (Section 6 gives more details on the features under study). We allow these features to influence the scale and shape of the gamma distributions from which the saccade amplitudes and fixation durations are generated. Hence, we model the scale and shape parameters in Equations 3, 4 and 5
as linear regressions on the word features, with an exponential link to ensure positivity of the gamma parameters. That is, we replace Equations 3, 4 and 5 by
Note that the gamma parameters are now computed from vectors of regression weights, which are aggregated into type-specific parameterizations for amplitudes and durations. Figure 1 shows a graphical model representation.
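The exponential link described above can be sketched as follows; the weights and feature values are made up for illustration:

```python
import math

def gamma_params(u, w_shape, w_scale):
    """Exponential link: gamma shape and scale as exp-transformed linear
    functions of the lexical feature vector u (leading 1.0 is a bias)."""
    shape = math.exp(sum(w * x for w, x in zip(w_shape, u)))
    scale = math.exp(sum(w * x for w, x in zip(w_scale, u)))
    return shape, scale

u = [1.0, 0.3, -1.2]  # bias, z-scored log frequency, z-scored word length
shape, scale = gamma_params(u, [0.5, -0.1, 0.2], [1.0, 0.0, -0.3])
```

The exponential guarantees positive shape and scale for any real-valued weights, which keeps the likelihood well defined during optimization.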
4.3 Parameter Estimation
Given a set of scanpaths and texts, model parameters can be estimated by maximum likelihood. In a generative setting, models for a specific reader or a specific discrete competence level can be estimated on the corresponding data subset. For the discriminative setting we develop in Section 5, generative parameters are estimated on all training data, and a Fisher score representation is derived from this generative model. We optimize a regularized maximum likelihood criterion
Given the training data, all fixation positions, saccade types, and word features are known. Equation 9 thus factorizes into separate likelihood terms depending on saccade type, amplitude, and duration parameters:
where the individual terms depend on the saccade types, amplitudes, fixation durations, and word features of the training scanpaths, and we have split the regularizer into separate regularizers for the amplitude and duration parameters (the multinomial saccade-type parameters are not regularized). Equation 10 can be optimized independently in the saccade-type parameters, the amplitude parameters, and the duration parameters. Optimization of the saccade-type parameters is straightforward. Because the saccade types are known given the training data, the amplitude parameters can be optimized independently for each saccade type.
The regularizers sum over the lexical features used to predict the gamma parameters (including a bias term). Note that because the linear regression on the word features is passed through an exponential function (Equations 6, 7), we use an exponential regularizer. Fixation durations are treated analogously, and all parameters are optimized using a truncated Newton method [33].
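The per-type objective can be sketched as follows; the exact form of the exponential regularizer is an assumption on our part, and the truncated Newton optimizer itself is omitted:

```python
import math

def neg_log_lik_amplitudes(amps, feats, w_shape, w_scale, sigma_sq=10.0):
    """Regularized negative log-likelihood of the amplitude magnitudes of
    one saccade type under the exponential-link gamma model."""
    nll = 0.0
    for a, u in zip(amps, feats):
        k = math.exp(sum(w * x for w, x in zip(w_shape, u)))      # shape
        theta = math.exp(sum(w * x for w, x in zip(w_scale, u)))  # scale
        # gamma log-density: (k-1) ln a - a/theta - ln Gamma(k) - k ln theta
        nll -= (k - 1) * math.log(a) - a / theta - math.lgamma(k) - k * math.log(theta)
    # exponential regularizer on the weights (illustrative form)
    nll += sum(math.exp(w) for w in w_shape + w_scale) / (2.0 * sigma_sq)
    return nll

val = neg_log_lik_amplitudes([5.0, 7.5], [[1.0], [1.0]], [0.7], [0.4])
```

In practice this objective (and its gradient) would be handed to a truncated Newton routine, one call per saccade type.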
5 Discriminative Classification with Fisher Kernels
Fisher kernels [34] provide a commonly used framework that exploits generative probabilistic models as a representation of instances within discriminative classifiers. Specifically, the Fisher kernel approach involves a feature mapping of structured input—for instance, sequential input—by a projection into the gradient space of a generative probabilistic model that is first fit to the training data via maximum likelihood. We use the generative probabilistic model developed in Section 4.2 to map scanpaths and lexical features into feature vectors. The Fisher score representation of a scanpath is the gradient of its log-likelihood with respect to the model parameters, evaluated at the maximum likelihood estimate.
5.1 Fisher Kernel Function
The Fisher kernel function calculates the similarity of two scanpaths as the inner product of their Fisher score representations, relative to the Riemannian metric that is given by the inverse of the Fisher information matrix.
Definition 1 (Fisher kernel function of model with lexical features)
Let the parameter vector be the maximum likelihood estimate of the model defined in Section 4.2 on all training data, and let two scanpaths on their respective texts be given. The Fisher kernel between the two scanpaths is
where we employ the empirical version of the Fisher information matrix, computed from the Fisher scores of the training scanpaths. The gradient of the log-likelihood function is derived in Proposition 1.
Proposition 1 (Gradient of log-likelihood of generative model with lexical features)
Let a scanpath obtained on a text be given, with its saccade amplitudes and saccade types. For each saccade type, collect the set of time steps at which that type occurs, and arrange the word-feature vectors of those time steps as the rows of a matrix. Then the gradient of the log-likelihood of the model defined in Section 4.2 is
in which the Hadamard (elementwise) product of vectors is used.
A proof of Proposition 1 is given in the appendix.
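For intuition, the kernel of Definition 1 can be computed directly for low-dimensional score vectors; the scores below are toy values, not actual model gradients:

```python
def empirical_fisher(scores):
    """Empirical Fisher information: average outer product of score vectors."""
    n, d = len(scores), len(scores[0])
    return [[sum(s[i] * s[j] for s in scores) / n for j in range(d)]
            for i in range(d)]

def inv_2x2(m):
    """Inverse of a 2x2 matrix (enough for this toy example)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def fisher_kernel(g, h, info_inv):
    """k(x, x') = g^T I^{-1} h for Fisher score vectors g and h."""
    d = len(g)
    return sum(g[i] * info_inv[i][j] * h[j]
               for i in range(d) for j in range(d))

scores = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]  # toy Fisher scores
I_inv = inv_2x2(empirical_fisher(scores))
k = fisher_kernel(scores[0], scores[1], I_inv)
```

Because the information matrix is symmetric, the resulting kernel is symmetric as well, as required of a valid kernel function.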
5.2 Applying the Fisher Kernel to Identification and Text Comprehension
Applying the Fisher kernel to both prediction problems first requires estimating the parameters of the generative model on the training data. Note that we fit a global model instead of class-specific models. In both prediction problems, we treat the scanpath of each single line of text as an instance, and train a dual SVM with the resulting Fisher kernel. At application time, the scanpath of a text that comprises multiple lines is processed as multiple instances by the Fisher SVM. In order to obtain one decision-function value for the entire text, we average the decision-function values of all individual lines.
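The per-line aggregation can be sketched as follows; `decision_fn` stands in for the trained Fisher SVM's decision function and is a placeholder of our own:

```python
def text_decision_value(line_scanpaths, decision_fn):
    """Average the SVM decision values of the individual lines of a text."""
    values = [decision_fn(sp) for sp in line_scanpaths]
    return sum(values) / len(values)

# toy stand-in decision function: score a line's scanpath by its length
toy_decision_fn = len
value = text_decision_value([[1, 2], [1, 2, 3]], toy_decision_fn)
```

Averaging rather than voting keeps the aggregated output on the same scale as a single-line decision value.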
6 Empirical Study
6.1 Data collection
6.1.1 Experimental design and materials
We let a group of 62 advanced and first-semester students read a total of 12 scientific texts, 6 on biology and 6 on physics, adapted from various German-language textbooks [35, 36, 37, 38, 39, 40, 41]. All students are native speakers of German with normal or corrected-to-normal vision and are majoring in either physics or biology. We determine each reader's comprehension of each text by presenting three comprehension questions after the text. All questions are multiple-choice questions with exactly one out of four options being correct. Texts have 158 words on average (minimum 126, maximum 180).
6.1.2 Technical set-up and procedure
Participants' eye movements are recorded with an SR Research EyeLink 1000 eye tracker (right-eye monocular tracking) at a sampling rate of 1000 Hz, using a desktop-mounted camera system with a 35 mm lens and head stabilization. After setting up the camera and familiarizing the participant with the procedure, the twelve texts are presented in randomized order. Each text fits onto a single screen. We impose no restrictions on the time spent reading a text. After each text, three comprehension questions are presented on separate screens, each with four multiple-choice options. Participants cannot backtrack to the text or to previous questions, or undo an answer. The total duration of the experiment is approximately 90 minutes; participants were paid for participating.
6.1.3 Lexical features
Lexical frequency and word length are well known to affect a reader's fixation durations and saccadic behavior, such as whether a word is skipped or a regressive saccade is initiated [42, 43, 44, 45, 46, 47]. Hence, for each word of the stimuli, we extract different kinds of word frequency and word length measures using dlexDB [48, 49], which is based on the reference corpus underlying the Digital Dictionary of the German Language (DWDS) [50]. Specifically, we extract type frequency (i.e., the number of occurrences of a type in the corpus per million tokens), annotated type frequency (i.e., the number of occurrences of a unique combination of a type, its part of speech, and its lemma in the corpus per million tokens), lemma frequency (i.e., the total number of occurrences of types associated with this lemma in the corpus per million tokens), document frequency (i.e., the number of documents with at least one occurrence of this type per 10,000 documents), type length in number of characters, type length in number of syllables, and lemma length in number of characters. All corpus-based features are log-transformed and z-score normalized. Moreover, we tag each word with the following binary lexical features: whether the word is a technical term, a technical term from physics, a technical term from biology, an abbreviation, or the first word of a sentence.
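The preprocessing of the corpus-based features can be sketched as follows (population standard deviation is our assumption for the z-scoring):

```python
import math
from statistics import mean, pstdev

def log_z(values):
    """Log-transform, then z-score normalize a corpus-based feature
    (e.g., type frequency per million tokens) across the stimulus words."""
    logs = [math.log(v) for v in values]
    mu, sigma = mean(logs), pstdev(logs)
    return [(x - mu) / sigma for x in logs]

z = log_z([12.0, 120.0, 1200.0])
```

The log transform compresses the heavy-tailed frequency distribution before centering and scaling.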
6.2 Reference Methods
We compare the Fisher SVM with lexical features to several reference methods. The first natural baseline is the generative model with lexical features developed in Section 4.2; this comparison allows us to measure the merit of the discriminative Fisher kernel compared to the underlying generative model. The next baseline is the Fisher SVM without lexical features—that is, an SVM with the Fisher kernel derived from the generative model described in Section 4.1. We compare this discriminative model to the full generative model of Landwehr et al., 2014 [7], without lexical features and without the simplification introduced in Section 4.1.
The current gold-standard model for reader identification is the model of Abdelwahab et al., 2016 [8]. Note that no Fisher kernel can be derived from this nonparametric generative model, for lack of explicit model parameters. Since this model has been shown to outperform all previous approaches [6, 7], we exclude the method of Holland and Komogortsev [6] from our comparison.
6.3 Experimental Setting
For reader identification, data are split along texts, so that the same text does not appear in training and test data. We conduct a leave-one-text-out cross-validation protocol: The models are trained on 11 texts per reader and a reader is identified on the left-out text. Identification accuracy is averaged across the resulting 12 training- and test-splits and is studied as a function of the number of text lines read at test time.
For text comprehension, data are split (50/50) across readers and texts, so that neither the same reader nor the same text appears in both training and test data. This setup leads to four train-test splits, across which we average the classification accuracy.
For both problem settings, we execute a nested cross-validation inside the top-level cross-validation, in which we tune the hyperparameters of all learning methods (e.g., the regularization parameters of the SVM and of the linear model for lexical features, and the hyperparameter of the non-parametric method of Abdelwahab et al.) by grid search. We also perform feature subset selection on the lexical feature vector by backward elimination in this inner cross-validation step. The nested cross-validation protocol ensures that all hyperparameters are tuned on the training part of the data.
6.4 Reader Identification
We measure the percentage of correctly identified readers from the set of 62 readers. Figure 2 shows the identification accuracy for the different models. The Fisher-SVM achieves the highest identification accuracy and outperforms the other evaluated models. Figure 3 shows the p-value of a Wilcoxon signed-rank test for comparisons of several pairs of methods. We conclude that the Fisher-SVM with lexical features significantly outperforms the model of Abdelwahab et al. for 4 and 8 lines read, the Fisher-SVM with lexical features always outperforms the Fisher-SVM without lexical features, the Fisher-SVM always outperforms the underlying generative model, and the generative model with lexical features outperforms the generative model of Landwehr et al. without lexical features for 3 or more lines read. Including lexical features significantly improves the generative model of Landwehr et al. [7] as well as the Fisher-SVM.
6.4.1 Execution Time
We compare the time required to train reader-identification models for all methods under investigation as a function of the number of training texts per reader. Figure 4 shows that training the nonparametric model of Abdelwahab et al. is one to three orders of magnitude slower than all other models. The model of Landwehr et al. uses a quasi-Newton method to fit the gamma distributions, the generative model with lexical features additionally fits several linear models. Generative models are fit for each reader. By contrast, the Fisher kernel requires fitting one single model to all data and training a linear model; this turns out to be faster in some cases.
6.5 Text Comprehension
After reading a text, each subject answers three text comprehension questions. We study a binary classification problem where one class corresponds to zero or one correct answers and the other class to two or three correct answers.
Table 1 shows the classification accuracies of the evaluated models. (The main memory requirement of the model of Abdelwahab et al. is quadratic in the number of instances per class; we had to discard 80% of the data at random for this problem.) No method exceeds the classification accuracy of a model that always predicts the majority class. The discriminative models minimize the hinge loss—which is an upper bound of the zero-one loss—and reach the minimal loss by almost always predicting the majority class. The generative models are not trained to minimize any classification loss at all. They fall far short of the accuracy of the majority class but attain an AUC that is marginally above random guessing. The AUC of all three models is significantly higher than 0.5 (paired t-test). In order to validate this interpretation, we additionally train the Fisher SVM on a class-balanced data subset; with balanced classes, the Fisher SVM cannot minimize the loss without also increasing the AUC. Here, the Fisher SVM achieves an AUC that is significantly higher than 0.5. We conclude that estimating the level of text comprehension is a difficult problem that cannot be solved at any useful level by any of the models under investigation.
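The class-balancing check described above can be sketched as follows; subsampling the majority class is our illustration of one plausible balancing scheme:

```python
import random

def balance_classes(instances, labels, rng):
    """Subsample every class down to the size of the smallest class."""
    by_class = {}
    for x, y in zip(instances, labels):
        by_class.setdefault(y, []).append(x)
    n = min(len(xs) for xs in by_class.values())
    data = [(x, y) for y, xs in by_class.items() for x in rng.sample(xs, n)]
    rng.shuffle(data)
    return data

rng = random.Random(1)
balanced = balance_classes(list(range(10)), [0] * 7 + [1] * 3, rng)
```

With both classes equally frequent, always predicting one class yields chance-level accuracy, so any loss reduction must reflect genuine discrimination.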
|Model|
|---|
|Fisher-SVM (lexical features)|
|Fisher-SVM (without lexical features)|
|Abdelwahab et al. (2016)|
|Generative model (with lexical features)|
|Generative model (Landwehr et al., 2014)|
7 Conclusion

We developed a discriminative model for the classification of scanpaths in reading. The aim was to (i) predict a reader's identity and (ii) predict their level of text comprehension. To this end, we built on the work of Landwehr et al. [7] and developed a generative graphical model of scanpaths that takes into account lexical features of the fixated word, derived a Fisher score representation of scanpaths from this model, and subsequently used the resulting Fisher kernel to classify the data with an SVM. We collected eye-tracking data from 62 readers who read 12 scientific texts and answered comprehension questions for each text.
We conclude that the inclusion of lexical features leads to a significant improvement over the original generative model [7], and that a discriminative model using a Fisher kernel yields a considerable additional improvement over the generative model. This model significantly outperforms, in some cases, the semiparametric model of Abdelwahab et al. [8], which, to the best of our knowledge, is the best published biometric model based on eye movements. None of the considered models was able to reliably predict reading comprehension from a reader's eye movements.
Appendix A
Proof (Proposition 1)
As discussed in Section 4.3, the likelihood factorizes as
For the multinomial distribution,
and thus the partial derivative with respect to each multinomial parameter follows directly. Since the likelihoods of the saccade amplitudes and the fixation durations are analogous (see Equations 6–8), we only derive the gradient of the amplitude likelihood. As discussed in Section 4.3 (Equation 11), the likelihood of the saccade amplitudes and fixation durations further factorizes over the different saccade types. Therefore, the partial derivative with respect to each entry of an amplitude parameter vector is
Acknowledgments. This work was partially funded by the German Science Foundation under grants SFB1294, SFB1287, and LA3270/1-1, and by the German Federal Ministry of Research and Education under grant 16DII116-DII.
-  Kliegl, R., Nuthmann, A., Engbert, R.: Tracking the mind during reading: The influence of past, present, and future words on fixation durations. Journal of Experimental Psychology: General, 135(1) (2006) 12–35
-  Rayner, K.: Eye movements in reading and information processing: 20 years of research. Psychological Bulletin 124(3) (1998) 372–422
-  Erdmann, B., Dodge, R.: Psychologische Untersuchungen über das Lesen auf experimenteller Grundlage. Books on Demand (1898)
-  Huey, E.B.: The psychology and pedagogy of reading. The Macmillan Company (1908)
-  Afflerbach, P., ed.: Handbook of Individual Differences in Reading. Routledge (2015)
-  Holland, C., Komogortsev, O.V.: Biometric identification via eye movement scanpaths in reading. In: Proceedings of the 2011 International Joint Conference on Biometrics. IJCB ’11, Washington, DC, IEEE (2011) 1–8
-  Landwehr, N., Arzt, S., Scheffer, T., Kliegl, R.: A model of individual differences in gaze control during reading. In: EMNLP. (2014) 1810–1815
-  Abdelwahab, A., Kliegl, R., Landwehr, N.: A semiparametric model for Bayesian reader identification. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), Austin, TX (2016)
-  Just, M.A., Carpenter, P.A.: A theory of reading: From eye fixations to comprehension. Psychological Review 87(4) (1980) 329–354
-  Frazier, L., Rayner, K.: Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology 14(2) (1982) 178–210
-  Staub, A., Rayner, K., Pollatsek, A., Hyönä, J., Majewski, H.: The time course of plausibility effects on eye movements in reading: Evidence from noun-noun compounds. Journal of Experimental Psychology: Learning, Memory, and Cognition 33(6) (2007) 1162–1169
-  Kaakinen, J.K., Hyönä, J.: Perspective effects in repeated reading: An eye movement study. Memory and Cognition 35 (2007) 1323–1336
-  Rayner, K., Chace, K.H., Slattery, T.J., Ashby, J.: Eye movements as reflections of comprehension processes in reading. Scientific Studies of Reading 10(3) (2006) 241–255
-  Rayner, K., Sereno, S.C.: Eye movements in reading: Psycholinguistic studies. In Gernsbacher, M.A., ed.: Handbook of Psycholinguistics. Academic Press, San Diego (1994) 57–81
-  Clifton, C., Staub, A., Rayner, K.: Eye movements in reading words and sentences. In Van Gompel, R.P., Fischer, M.H., Murray, W.S., Hill, R.L., eds.: Eye Movements: A Window on Mind and Brain. Elsevier, Oxford, UK (2007) 341–372
-  Jacobson, J.Z., Dodwell, P.C.: Saccadic eye movements during reading. Brain and Language 8 (1979) 303–314
-  Rayner, K., Pollatsek, A.: The Psychology of Reading. Prentice Hall, Englewood Cliffs, NJ (1989)
-  Rayner, K., Warren, T., Juhasz, B.J., Liversedge, S.P.: The effect of plausibility on eye movements in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition 30 (2004) 1290–1301
-  Warren, T., McConnell, K., Rayner, K.: Effects of context on eye movements when reading about possible and impossible events. Journal of Experimental Psychology: Learning, Memory, and Cognition 34 (2008) 1001–1007
-  Copeland, L., Gedeon, T.: Measuring reading comprehension using eye movements. In: 4th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2013). (2013) 791–796
-  Copeland, L., Gedeon, T., Mendis, S.: Predicting reading comprehension scores from eye movements using artificial neural networks and fuzzy output error. Artificial Intelligence Research 3(3) (2014) 35–48
-  Copeland, L., Gedeon, T., Mendis, S.: Fuzzy output error as the performance function for training artificial neural networks to predict reading comprehension from eye gaze. In Loo, C., Keem Siah, Y., Wong, K., Beng Jin, A., Huang, K., eds.: The 21st International Conference on Neural Information Processing 2014 (ICONIP 2014). Volume 1 of Lecture Notes in Computer Science (LNCS) 8834. (2014) 586–593
-  Underwood, G., Hubbard, A., Wilkinson, H.: Eye fixations predict reading comprehension: The relationships between reading skill, reading speed, and visual inspection. Language and Speech 33(1) (1990) 69–81
-  Dixon, W.R.: Studies in the psychology of reading. In Morse, W.S., Ballantine, P.A., Dixon, W.R., eds.: Univ. of Michigan Monographs in Education No. 4. Univ. of Michigan Press (1951)
-  Rayner, K., Pollatsek, A., Ashby, J., Clifton Jr, C.: Psychology of reading. Psychology Press (2012)
-  Komogortsev, O.V., Jayarathna, S., Aragon, C.R., Mahmoud, M.: Biometric identification via an oculomotor plant mathematical model. In: Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications. (2010)
-  Rigas, I., Economou, G., Fotopoulos, S.: Human eye movements as a trait for biometrical identification. In: Proceedings of the IEEE 5th International Conference on Biometrics: Theory, Applications and Systems. (2012)
-  Zhang, Y., Juhola, M.: On biometric verification of a user by means of eye movement data mining. In: Proceedings of the 2nd International Conference on Advances in Information Mining and Management. (2012)
-  Bednarik, R., Kinnunen, T., Mihaila, A., Fränti, P.: Eye-movements as a biometric. In: Proceedings of the 14th Scandinavian Conference on Image Analysis. (2005)
-  Rigas, I., Economou, G., Fotopoulos, S.: Biometric identification based on the eye movements and graph matching techniques. Pattern Recognition Letters 33(6) (2012)
-  Rigas, I., Komogortsev, O., Shadmehr, R.: Biometric recognition via eye movements: Saccadic vigor and acceleration cues. ACM Transactions on Applied Perception 13(2) (2016) 1–21
-  Morrison, R.E., Rayner, K.: Saccade size in reading depends upon character spaces and not visual angle. Perception and Psychophysics 30 (1981) 395–396
-  Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)
-  Jaakkola, T., Haussler, D.: Exploiting generative models in discriminative classifiers. In: Advances in neural information processing systems. (1999) 487–493
-  Demtröder, W.: Experimentalphysik 2: Elektrizität und Optik. 5th edn. Springer, Berlin (2009)
-  Demtröder, W.: Experimentalphysik 3: Atome, Moleküle und Festkörper. 4th edn. Springer, Berlin (2010)
-  Demtröder, W.: Experimentalphysik 4: Kern-, Teilchen- und Astrophysik. 4th edn. Springer, Berlin (2014)
-  Ableitner, O.: Einführung in die Molekularbiologie. Basiswissen für das Arbeiten im Labor. Springer, Wiesbaden (2014)
-  Townsend, C.R., Begon, M., Harper, J.L.: Ökologie. Springer, Berlin (2003)
-  Graw, J.: Genetik. 6th edn. Springer, Berlin (2015)
-  Boujard, D., Anselme, B., Cullin, C., Raguénès-Nicol, C.: Zell- und Molekularbiologie im Überblick. Springer, Berlin (2014)
-  Rayner, K., McConkie, G.W.: What guides a reader’s eye movements? Vision Research 16 (1976) 829–837
-  Rayner, K., Duffy, S.A.: Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory and Cognition 14 (1986) 191–201
-  Rayner, K., Duffy, S.A.: Parafoveal word processing during eye fixations in reading: Effects of word frequency. Perception and Psychophysics 40 (1986) 431–440
-  Kliegl, R., Nuthmann, A., Engbert, R.: Tracking the mind during reading: The influence of past, present, and future words on fixation durations. Journal of Experimental Psychology: General 135(1) (2006) 12–35
-  Kliegl, R., Grabner, E., Rolfs, M., Engbert, R.: Length, frequency, and predictability effects of words on eye movements in reading. European Journal of Cognitive Psychology 16(1–2) (2004) 262–284
-  Juhasz, B.J., White, S.J., Liversedge, S.P., Rayner, K.: Eye movements and the use of parafoveal word length information in reading. Journal of Experimental Psychology: Human Perception and Performance 34 (2008) 1560–1579
-  Berlin-Brandenburg Academy of Science, University of Potsdam: dlexDB. http://dlexdb.de (2011)
-  Heister, J., Würzner, K.M., Bubenzer, J., Pohl, E., Hanneforth, T., Geyken, A., Kliegl, R.: dlexDB – eine lexikalische Datenbank für die psychologische und linguistische Forschung. Psychologische Rundschau 62(1) (2011) 10–20
-  Klein, W., Geyken, A., eds.: Das Digitale Wörterbuch der deutschen Sprache (DWDS). Berlin-Brandenburg Academy of Science (2016) http://www.dwds.de.