1 Introduction
Since the early 1980s there has been a vision that gaze-based interfaces could make our interaction with computers easier and more efficient [2]. Gaze-based interfaces hold many promises: they work over a distance, they are hygienic as there is nothing to touch, they keep the hands free for other tasks, they are silent, and they are maintenance-free as eye trackers have no moving parts. At the same time, gaze-based interfaces usually need a time-consuming calibration, lack high accuracy, and are prone to the so-called Midas touch problem [12].
In 2013, Vidal et al. introduced a novel concept for gaze interaction based on smooth pursuit eye movements [27]. In interfaces with moving targets, they compare the user’s gaze with the movement of each target, detecting a matching pursuit movement by calculating Pearson’s correlation coefficient. The strength of this approach is its independence from offset and scaling; therefore, the eye tracker does not need to be calibrated but can be used instantly. Another advantage is that, being independent of scale, interfaces can be built on small areas, such as a smartwatch display [10]. A typical interface based on smooth pursuits offers several targets to give the user a choice. Esteves et al. [10] showed that it is possible to distinguish eight targets moving on a circle. However, they reported false positive rates of 12% for pursuits-based interaction. We argue that to make gaze-based interaction usable in everyday life, this rate needs to be significantly reduced. Similarly, Vidal et al. showed that detection accuracy drops significantly when showing more than eight targets moving at the same speed and trajectory [27].
This underpins an inherent challenge in Pursuits-based interfaces – the number of targets and reliability present a trade-off: reducing the number of targets increases the detection reliability and vice versa. At the same time, today’s interfaces provide many different elements, such as the application icons on a smartphone or the keys on a soft keyboard.
To address this, we introduce a new Pursuits detection method that increases selection accuracy even with high numbers of on-screen targets. Rather than the widely used Pearson correlation [10, 11, 14, 15, 16, 17, 18, 24, 25], our novel method uses the slope of a regression line.
In a study (N=16), we compared the performance of our approach to the state of the art for pursuits-based interfaces. In particular, we compared the influence of the number of targets on input speed and error rate. Results show that our approach allows up to 24 targets to be distinguished. For eight or more targets, it reduces the error rate by a factor of 5 to 10 compared to the state-of-the-art approach. We built a sample application and discuss how our approach supports designers in building highly reliable, calibration-free gaze-based interfaces.
The contribution of this work is twofold: First, we describe a novel detection method for smooth pursuit eye movements. Second, we report on a comparison of the approach with the state of the art, revealing a significant increase in the number of detectable targets as well as in accuracy.
2 Background and Related Work
While early work on gaze-based interaction relied mostly on fixations, the research community has moved towards detecting gaze behavior, such as gaze gestures [9] and, more recently, smooth pursuit [27]. Smooth pursuit eye movements are naturally performed when gazing at a moving target. Interaction using smooth pursuit (aka Pursuits) is promising since it does not require calibration: it relies on relative eye movements rather than precise fixation points.
2.1 Applications of Pursuits
Pursuits has been utilized in several applications and domains. Being a calibration-free and contactless gaze-only modality, a large body of work investigated its use on public displays, where immediate usability is essential. For example, Vidal et al. used Pursuits on public displays for gaming and entertainment applications [27]. In EyeVote, Pursuits was used for voting on public displays [18]. Pursuits was also successfully deployed in active eye tracking settings, where the tracker moved on a rail system to follow users as they passed by large public displays [15]. Lutz et al. used Pursuits for entering text on public displays [19]. They worked around Pursuits’ limitations by performing each letter’s selection in two stages: the user first selects one of five groups of letters; the group then expands to allow the user to select the desired letter.
Other ubiquitous technologies leveraged Pursuits as well. Esteves et al. [10] used Pursuits for gaze interaction with smart watches. Velloso et al. [26] utilized Pursuits in smart homes.
Pursuits was also used in mixed reality. VR benefits from using Pursuits during interaction, especially when moving in VR [16] and when interacting with occluded targets [21]. Pursuits was also employed in augmented reality glasses [11].
Eye movements are subtle and hard to observe, which makes them attractive for security-sensitive input; hence, Pursuits was used for authentication [8, 22, 23].
As for desktop settings, Kangas et al. [13] and Špakov et al. [28] employed Pursuits in the form of a continuous signal to control on-screen widgets, for example, to adjust volume.
In addition to being a calibration-free gaze interaction technique, Pursuits can also be used for calibration itself. Pfeuffer et al. [20] introduced a method to calibrate the eye tracker as users follow on-screen moving targets. Similarly, Celebi et al. [4] used Pursuits for eye tracker calibration. Khamis et al. [17] used gradually revealed text to calibrate the eye tracker while users read and pursue. A major drawback of previous work is that the interfaces often have a limited number of targets shown at once. Previous implementations could distinguish up to eight targets reliably [10, 24, 27]. We show that it is possible to distinguish 24 targets with significantly higher accuracy compared to the state of the art.
2.2 Implementations of Pursuits
There are two predominant implementations of Pursuits detection for interaction: one uses the Euclidean distance between the gaze estimates and the target positions [13, 22, 23, 28], while the other employs Pearson’s product-moment correlation [10, 11, 14, 15, 16, 17, 18, 24, 25]. The Euclidean distance method is susceptible to inaccurate detection in the presence of an offset between the real gaze point and the estimated one. This means that it is not reliable when the eye tracker is not calibrated or when the gaze estimation is inaccurate. In contrast, the correlation method is independent of offsets and scaling. For this reason, it works reliably without calibration [18, 24, 27], and even on small interfaces such as that of a smartwatch [10]. On the downside, the accuracy of correlation-based detection drops significantly in the presence of more than eight targets [10, 24, 27].
3 Regression Slope-based Detection of Pursuits
We introduce a novel approach to detecting Pursuits and start with the theoretical foundations before describing our enhancements and implementation.
3.1 Theoretical Background
A smooth pursuit detection algorithm receives the gaze coordinates and the coordinates of the on-screen targets as input. It collects a certain number of data samples, calculates a metric function for each target, and then compares the metric value of each target with a threshold or threshold interval. Targets whose metric values match the threshold condition are reported as detected. Typical metric functions for pursuit detection are the Euclidean distance [13, 22, 23, 28] or the correlation [10, 11, 14, 15, 16, 17, 18, 24, 25]. Detection algorithms using the Euclidean distance need a calibrated eye tracker [22, 23, 28], while detection methods using correlation are independent of offset and scaling [24, 27]. The implicit assumption behind this statement is that the calibration error can be described by an affine transformation.
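The pipeline described above can be sketched in code. The following is a minimal Python sketch of the generic scheme; the names (`make_detector`, `metric`, `matches`) and the example Euclidean-distance metric are our own illustration under stated assumptions, not code from any of the cited implementations.

```python
import math
from collections import deque

def make_detector(metric, matches, window_size=30):
    """Generic Pursuits detector: buffer (gaze, target) sample pairs per
    target, compute a metric over each full window, and report the targets
    whose metric value satisfies the threshold condition."""
    buffers = {}  # one sliding window of (gaze, target_position) per target

    def feed(gaze, target_positions):
        detected = []
        for i, pos in enumerate(target_positions):
            buf = buffers.setdefault(i, deque(maxlen=window_size))
            buf.append((gaze, pos))
            if len(buf) == window_size and matches(metric(buf)):
                detected.append(i)
        return detected

    return feed

def mean_distance(buf):
    """Example metric: mean Euclidean distance (needs a calibrated tracker)."""
    return sum(math.hypot(gx - tx, gy - ty)
               for (gx, gy), (tx, ty) in buf) / len(buf)
```

Swapping `mean_distance` for a correlation- or slope-based metric yields the other detection variants without changing the surrounding pipeline.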
3.2 From Correlation-based to Slope-based Detection
The algorithm described here works with linear regression which is, in mathematical terms, closely related to correlation. Linear regression and correlation need a list of value pairs, which in our case are the x-coordinates of the gaze g_{x} and the target position t_{x} (or the y-coordinates, respectively). Every value pair can be plotted in a plane. The linear regression analysis finds the straight line that best fits the plotted data.
The regression coefficient is the slope of the line, the intercept is the value where the line crosses the ordinate, and the correlation is a measure of the quality of the fit. If the gaze follows the target perfectly and the eye tracker provides accurate positions, then g_{x} = t_{x} and the plot is the bisecting line of ordinate and abscissa with intercept = 0.0, slope = 1.0, and correlation = 1.0. If the gaze does not follow the target, the values for intercept, slope, and correlation deviate strongly from these perfect values.
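As a concrete illustration of these three quantities, the following Python sketch (with `regression_stats` being our own helper name) computes slope, intercept, and correlation for a list of gaze/target value pairs, and shows that a pure calibration offset changes only the intercept:

```python
import math

def regression_stats(xs, ys):
    """Least-squares fit ys ~ slope*xs + intercept, plus Pearson's r.
    xs are gaze coordinates, ys the corresponding target coordinates."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    cov = n * sxy - sx * sy
    varx = n * sxx - sx * sx
    vary = n * syy - sy * sy
    slope = cov / varx
    intercept = (sy - slope * sx) / n
    r = cov / math.sqrt(varx * vary)
    return slope, intercept, r
```

With a perfectly calibrated tracker and perfect pursuit, the pairs lie on the bisecting line (slope 1, intercept 0, r = 1); shifting all gaze values by a constant offset leaves slope and correlation untouched and moves only the intercept.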
The correlation detection method typically requires a correlation value higher than 0.8 [24]. A calibration error results in an intercept (i.e., offset) different from zero and an only slightly changed slope (i.e., scaling factor), while the correlation does not change. Our pilot studies showed that calibration errors for the scaling factor lie in a range from 0.9 to 1.1.
3.3 Advantages of Slope-based Pursuits Detection
The method presented here requires the slope to be close to 1.0 – hence, we refer to it as slope detection. For the study, we used a threshold interval from 0.77 to 1.3.
Similar to the correlation method, the slope is independent of offsets. Consequently, the slope method detects Pursuits without calibration.
Slope detection has a further advantage: it distinguishes between synchronously moving targets of different trajectory sizes, while the correlation method does not. The reason is that the correlation is insensitive to both offsets and scaling, while the regression line’s slope is insensitive only to offsets.
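This difference is easy to verify numerically. In the sketch below (our own illustration, not study code), the gaze follows a target on a 130 px circle while a second target moves synchronously on an 80 px circle: the correlation between the gaze and the smaller target is a perfect 1.0, so the correlation method cannot tell the two targets apart, while the slope falls well outside a 0.77–1.3 interval and is rejected.

```python
import math

def slope_and_correlation(xs, ys):
    """Regression slope and Pearson correlation of ys against xs."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    cov, varx, vary = n * sxy - sx * sy, n * sxx - sx * sx, n * syy - sy * sy
    return cov / varx, cov / math.sqrt(varx * vary)

theta = [2 * math.pi * k / 60 for k in range(60)]
gaze_x = [130 * math.cos(t) for t in theta]   # gaze on the large circle
small_x = [80 * math.cos(t) for t in theta]   # synchronous small-circle target

slope, r = slope_and_correlation(gaze_x, small_x)
# r is exactly 1.0 (correlation method matches), slope is 80/130, well below 0.77
```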
3.4 Implementation
We implemented both detection methods, the correlation and the slope method. We used the following formulas:
Regression analysis (slope):
\[ b = \frac{n \sum xy - \sum x \sum y}{n \sum x^2 - \left(\sum x\right)^2} \]
Correlation:
\[ r = \frac{n \sum xy - \sum x \sum y}{\sqrt{\left(n \sum x^2 - \left(\sum x\right)^2\right)\left(n \sum y^2 - \left(\sum y\right)^2\right)}} \]
where x is a gaze coordinate, y the corresponding target coordinate, and n the size of the data window.
In contrast to formulas that require mean values and consequently need to sum over all data in the data window for every new sample, these formulas allow a sliding window in which only an old value is subtracted and a new value added. As a result, the algorithm’s run time is independent of the data window size.
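A sliding-window version of these running sums might look as follows; this is a sketch under our own naming (`SlidingRegression`), showing the constant-time per-sample update by subtracting the sample that leaves the window.

```python
import math
from collections import deque

class SlidingRegression:
    """Running sums over a fixed-size window of (x, y) pairs, updated in
    O(1) per sample regardless of the window size."""

    def __init__(self, window_size):
        self.win = deque()
        self.size = window_size
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def add(self, x, y):
        if len(self.win) == self.size:      # evict the oldest sample
            ox, oy = self.win.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy
        self.win.append((x, y))
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y

    def slope(self):
        n = len(self.win)
        return (n * self.sxy - self.sx * self.sy) / (n * self.sxx - self.sx ** 2)

    def correlation(self):
        n = len(self.win)
        num = n * self.sxy - self.sx * self.sy
        den = math.sqrt((n * self.sxx - self.sx ** 2) * (n * self.syy - self.sy ** 2))
        return num / den
```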
We further enhanced the algorithm. For a positive detection, rather than relying on a single sample as in previous work [24, 27], our threshold condition must be met for a number of consecutive samples, thereby introducing a minimum signal duration. The minimum signal duration reduces false positives. Reducing false positives is also possible by increasing the data window size. However, pilot studies showed that a small data window combined with a minimum signal duration excludes more false positives than a larger data window.
As a further enhancement, we smoothed the gaze signal by averaging over the last k samples. Smoothing the gaze signal improves detection with the slope method but increases the false positive rate for correlation detection. For a fair comparison in the user study, we therefore used the smoothed signal only for the slope method. We also adjusted the minimum signal duration of each method for best results.
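The smoothing step itself is a plain moving average over the last k samples, applied per coordinate; a minimal sketch (`make_smoother` is our own name):

```python
from collections import deque

def make_smoother(k):
    """Moving average over the last k samples of one gaze coordinate."""
    buf = deque(maxlen=k)

    def smooth(value):
        buf.append(value)
        return sum(buf) / len(buf)  # averages fewer samples until warmed up

    return smooth
```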
During pilot testing, we observed that a false positive detection of the same target often followed a successful detection (despite clearing all buffers after a successful detection). We found the reason to be the reaction time of the user, who usually continues gazing at the target after a successful detection. To address this, we drop some samples after a positive detection.
Table 1 shows the parameters used for both detection methods. We used an eye tracker which delivers 60 samples per second.
Parameter          Correlation Method   Slope Method
Window size        30 samples           30 samples
Smoothing          0 samples            20 samples
Minimum duration   20 samples           15 samples
Threshold          0.8                  0.77 – 1.3
Skipped samples    30 samples           30 samples
4 Evaluation
We conducted a user study to compare our approach to the state-of-the-art method for detecting Pursuits.
4.1 Apparatus
To evaluate our Pursuits detection approach, we developed a sample application (see Figure 1) in which users can enter digits (0 to 9) and letters (A to N) via Pursuits.
The application runs on an Acer Aspire V17 Nitro laptop with an integrated Tobii IS4 Base AC eye tracker (60 Hz). The display has a resolution of 1920 × 1080 pixels on 38.4 cm × 21.7 cm, which results in 0.2 mm per pixel, or 50 px per centimeter. The average distance between the participants’ eyes and the display is around 50 cm ± 5 cm, which corresponds to 0.02° per pixel, or around 50 px per degree. The targets move clockwise on a circle with a radius of 130 px (2.6°), except for the ‘cancel’ target, which moves counter-clockwise on a circle with a radius of 80 px (1.6°). The radius of each target is 20 px (0.4°) and they move at 6.5°/s (2.5 seconds per rotation).
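For reference, the target motion can be reproduced from these numbers. The sketch below uses the radius and rotation period of the study's main targets; the function name, phase convention, and axis orientation are our own assumptions (on a real screen, the y-axis typically points downwards, flipping the sense of rotation).

```python
import math

def target_position(t, radius=130.0, period=2.5, phase=0.0, clockwise=True):
    """Position (in px, relative to the circle centre) of a target at time t
    seconds. Defaults match the study's main targets (130 px radius, 2.5 s
    per rotation); phase and axis conventions are illustrative assumptions."""
    angle = 2.0 * math.pi * t / period + phase
    if clockwise:
        angle = -angle
    return radius * math.cos(angle), radius * math.sin(angle)
```

Sampling this function at 60 Hz yields the per-target coordinate streams that the detector compares against the gaze signal.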
The interface provides visual and acoustic feedback for the detection. Every target that matches the threshold condition is filled with color whose intensity increases the longer the threshold condition stays true, reaching its maximum once the minimum signal duration is reached. Different beeps are used for correct and wrong entries.
4.2 Study Design
The study was designed as a repeated-measures experiment with two independent variables. The first was the Pursuits detection method, with two conditions: correlation-based detection (baseline) and slope-based detection (our approach). The second was the number of targets: participants went through 10 blocks, with the first block showing 6 targets and each subsequent block incrementing the number of targets by 2, up to 24 simultaneously moving targets. The order of methods was randomized. The task was to enter 4 symbols in each block.
4.3 Procedure
We invited 16 participants (3 female) with normal or corrected-to-normal vision, aged between 24 and 58. After arriving at the lab, participants filled out a demographics form and received a short introduction to the system (Figure 1). To test how well the methods work for spontaneous gaze interaction, we did not calibrate the eye tracker for each participant. Instead, it was calibrated only once by one of the authors. The participants’ task was to enter a four-digit number by following the clockwise rotating number targets with their gaze. In case of entering a wrong digit, participants had to delete it by selecting the counter-clockwise rotating ‘cancel’ target.
Participants first completed a training task with six targets in which they entered four symbols (digits and letters) and tried to cancel an entry. These entries were excluded from the analysis. Participants then went through the 10 blocks, each covering a number of targets (6, 8, 10, 12, 14, 16, 18, 20, 22, 24) and consisting of two selection tasks (one per detection method).
Every selection task had a timeout of 90 seconds. If a participant was not able to complete the task in time or wished to abort, the study continued with the other method until the participant failed. We concluded with a semi-structured interview.
4.4 Results
Apart from the qualitative feedback and observations, we logged the maximum number of simultaneously shown targets at which participants could still perform successful selections. We further logged the errors, which correspond to the number of times users canceled their input. We also logged the average task completion time, which denotes the time taken to enter all 4 symbols correctly. Finally, we logged the average entry time for entering each symbol.
4.4.1 Interviews and Observations
All participants understood immediately how to operate the system and how to enter the digits, but they seemed to be at the beginning of a steep learning curve. Many treated the user study like a computer game and were highly ambitious to reach a high score. All participants reported that the task required a lot of focus, and all found the slope-based method more accurate and easier; some even mentioned their preference before being asked.
4.4.2 Maximum Selectable Targets
We counted the maximum number of displayed targets at which participants were still able to enter the four demanded symbols (see the bars in Figure 2). The slope detection approach outperformed the correlation detection method; only one participant was able to select more targets with the correlation method.
A Wilcoxon signed-rank test revealed that the slope detection method results in a significant increase in the number of displayed targets from which participants successfully made selections (, ). Using the correlation method, the maximum target number with which the participants were able to accomplish the task was between 10 and 24 , . Using the slope method, the maximum target number was between 8 and 24 , .
4.4.3 Errors
Whenever participants entered a wrong digit, they had to cancel the entry with the ‘cancel’ target. Every selection of the ‘cancel’ target was counted as an error. As seen in Figure 3, the average number of errors increases with more targets; however, the increase is sharper for the correlation method. For example, while both methods yielded almost no errors at 6 targets across all participants, the mean number of errors at 8 targets was 1.25 for the correlation method and 0.13 for the slope method. Similarly, at 24 targets, participants made 22 errors on average with the correlation method, but only 3 on average with the slope method. Note that Figure 3 displays an average over the participants who were successful in the respective conditions.
4.4.4 Task Completion Time
We measured the completion time for successfully entering 4 symbols, starting from the moment the symbols were displayed until the moment the fourth symbol was entered. This also includes cancellations. As illustrated by Figure 2, the average completion time is almost identical across both methods for up to 8 targets, but then increases sharply for the correlation method compared to the slope method. As with the errors, successful completion times exclude cases where participants failed to enter the 4 symbols; hence the average is calculated over a varying number of participants. Completion times are longer for the correlation method, mainly due to the many cancellations that participants had to perform.
4.4.5 Symbol Entry Time
Unlike the overall completion time, which accounts for entering 4 symbols including cancellations, this metric reflects the average time it took to make a single entry, from the moment the target was gazed at until the moment the target was deemed selected. Figure 4 shows the times for entering a digit or a cancel operation. The slight decrease in selection times could result from a learning effect or from the fact that the entry times for the higher target numbers are calculated based on successful participants only, who might also be particularly well-performing. Entry times did not vary much across the detection methods. A Wilcoxon signed-rank test showed no evidence of a significant effect of detection method on entry time.
One interesting observation is that the time per entry does not increase with the number of targets. Another is that the times for the slope detection method are higher than those for the correlation detection method (see discussion). This is remarkable, as the slope detection uses a shorter minimum signal duration.
5 Discussion
5.1 Comparing both Methods
The detection methods studied here depend on several parameters – the threshold, the data window size, the minimum signal duration, and the smoothing window size. A systematic exploration with five different values per parameter would have led to 625 combinations for each detection method. Hence, we decided to compare the methods using optimal parameters for each of them. In particular, we used the same correlation threshold of 0.8 and data window size of 30 samples as Vidal et al. [27]. We showed that with a different approach it is possible to almost triple the number of targets. Note that in our implementation, the correlation method performs even better than in previous work [27], supporting our endeavour to provide a fair comparison.
5.2 Understanding the Results
The evaluation yielded significant differences between the two methods. We explain and discuss the reasons for these findings.
If we assume a perfectly calibrated and accurate eye tracker, and a user whose gaze follows a target on a circle exactly, the x and y coordinates of the gaze over time would each have the shape of a sine wave, shifted by π/2 against each other. If there are n targets on the circle, the coordinates of the previous and next targets are phase-shifted by 2π/n against the gaze coordinates. The situation for n = 20 is depicted in Figure 5. The gray area in the figure indicates the current data window. Figure 6 shows the regression analysis for the data window in Figure 5.
The points all lie on a Lissajous curve, which has the shape of an ellipse. The phase shift affects the eccentricity: the smaller the phase shift, the closer the shape is to a diagonal line. The data window size determines the fraction of the ellipse on which the points lie.
This allows the influence of the data window size on the detection to be understood. If the data window covers a full cycle, meaning the data window spans the time a target needs to complete a full circle, the data points form a complete ellipse. In this case, the slope of the regression line and the correlation are constant over time. If the phase shift is small, both values are close to 1.0. In the depicted case with 20 targets, these values are around 0.95.
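The full-cycle values quoted above can be checked numerically. For two equal-amplitude sines, both the regression slope and the correlation over a full cycle equal the cosine of the phase shift; with n = 20 targets, cos(2π/20) ≈ 0.95. The sketch below (our own helper names, not study code) verifies this.

```python
import math

def window_stats(xs, ys):
    """Regression slope and Pearson correlation of ys against xs."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    cov, varx, vary = n * sxy - sx * sy, n * sxx - sx * sx, n * syy - sy * sy
    return cov / varx, cov / math.sqrt(varx * vary)

n_targets, samples = 20, 150                 # one 2.5 s cycle at 60 Hz
phase = 2 * math.pi / n_targets              # shift to the neighbouring target
theta = [2 * math.pi * k / samples for k in range(samples)]
gaze_x = [math.sin(t) for t in theta]        # gaze follows its target exactly
neighbour_x = [math.sin(t + phase) for t in theta]

slope, r = window_stats(gaze_x, neighbour_x)
# over a full cycle both equal cos(2*pi/20), approximately 0.95
```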
With a smaller data window (Figure 5), the data points fill only a part of the ellipse (Figure 6), and this part moves over time. At the moment shown here, the x values lie on an almost straight line, and their correlation and slope are close to 1.0. At the same time, the y values fill the tip of the ellipse, and their regression slope and correlation differ from 1.0. As the threshold condition has to be true for both the x and the y coordinate, there is no positive detection at that moment. However, there are moments in between where the detection algorithm reports a positive detection.
To elicit smooth pursuit eye movements, the target speed has to be in a certain range, typically 5–20°/s. If we reduce the circle radius and keep the target speed, the number of cycles per second increases. If we also keep the data window size, the data window covers more of the ellipse shown in Figure 6.
Having understood the relations between target speed, data window size, and phase difference, the question remains why the slope method performed better. Figure 7 shows the correlation values for the given example, and Figure 8 shows the slope values from the linear regression. The dash-dotted lines indicate the thresholds, and the bars indicate whether the threshold condition is true. The bars are lightly colored before the minimum signal duration is reached. The lowest bars indicate whether both threshold conditions are true.
The correlation is close to 1.0 most of the time and satisfies the threshold condition (Figure 7). The correlation value drops when the data window covers the tip of the ellipse. As the threshold condition has to be true for both the x and the y coordinate, the correlation method signals a detection between both drops.
The slope values pass through the threshold interval quite quickly and satisfy the threshold condition for a shorter time (see Figure 8). The overlap of the signals for the x and y coordinates is minimal. Together with the concept of a minimum signal duration, the slope method does not report false positives for 20 targets on a circle (under optimal conditions), while the correlation method does. This is why the slope method can distinguish more targets on a circle. On the other hand, it also means that the correlation method detects more easily and quickly (at the expense of more false positives). This could explain why the entry time for the correlation method is slightly shorter (Figure 4).
6 Conclusions and Future Work
The introduction of a minimum signal duration improves both detection methods, correlation and slope, as it filters out false positives. The new detection algorithm based on the slope of the regression line performs better at separating targets on a circle. This does not mean that slope detection is better in general: it seems that slope detection does not detect true positives as reliably as the correlation method but creates fewer false positives. When selecting a target from a circle, a continuous signal of true positives is not necessary. The first occurrence of a positive signal triggers the entry, and possible gaps in the detection signal afterwards do not matter. Producing fewer false positives is therefore the more important property in our scenario.
The improvement of smooth pursuit detection with the slope method can be used either to increase the number of targets, offering the user more options, or to provide a more robust interface with fewer false positives. We discussed the capabilities of the two detection algorithms under idealized conditions. Future work could explain the influence of noise in the gaze data on a theoretical level.
Directions for future work also include testing the new detection method in specific application scenarios and with other eye trackers. Researchers could investigate how quickly users adapt to such interfaces and whether the need to strongly focus on the target decreases over time. Furthermore, researchers and practitioners could apply and evaluate the slope-based method in domains other than gaze, such as motion matching for body movements [5, 6, 7] and mid-air gestures [3].
References
 [2] Richard A. Bolt. 1981. Gaze-orchestrated Dynamic Windows. In Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’81). ACM, New York, NY, USA, 109–119. DOI:http://dx.doi.org/10.1145/800224.806796
 [3] Marcus Carter, Eduardo Velloso, John Downs, Abigail Sellen, Kenton O’Hara, and Frank Vetere. 2016. PathSync: Multi-User Gestural Interaction with Touchless Rhythmic Path Mimicry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 3415–3427. DOI:http://dx.doi.org/10.1145/2858036.2858284
 [4] Feridun M. Celebi, Elizabeth S. Kim, Quan Wang, Carla A. Wall, and Frederick Shic. 2014. A Smooth Pursuit Calibration Technique. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA ’14). ACM, New York, NY, USA, 377–378. DOI:http://dx.doi.org/10.1145/2578153.2583042
 [5] Christopher Clarke, Alessio Bellino, Augusto Esteves, and Hans Gellersen. 2017. Remote Control by Body Movement in Synchrony with Orbiting Widgets: An Evaluation of TraceMatch. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 3, Article 45 (Sept. 2017), 22 pages. DOI:http://dx.doi.org/10.1145/3130910

 [6] Christopher Clarke, Alessio Bellino, Augusto Esteves, Eduardo Velloso, and Hans Gellersen. 2016. TraceMatch: A Computer Vision Technique for User Input by Tracing of Animated Controls. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’16). ACM, New York, NY, USA, 298–303. DOI:http://dx.doi.org/10.1145/2971648.2971714
 [7] Christopher Clarke and Hans Gellersen. 2017. MatchPoint: Spontaneous Spatial Coupling of Body Movement for Touchless Pointing. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST ’17). ACM, New York, NY, USA, 179–192. DOI:http://dx.doi.org/10.1145/3126594.3126626
 [8] Dietlind Helene Cymek, Antje Christine Venjakob, Stefan Ruff, Otto Hans-Martin Lutz, Simon Hofmann, and Matthias Roetting. 2014. Entering PIN codes by smooth pursuit eye movements. Journal of Eye Movement Research 7, 4 (2014). https://bop.unibe.ch/index.php/JEMR/article/view/2384
 [9] Heiko Drewes and Albrecht Schmidt. 2007. Interacting with the Computer Using Gaze Gestures. Springer Berlin Heidelberg, Berlin, Heidelberg, 475–488. DOI:http://dx.doi.org/10.1007/9783540748007_43
 [10] Augusto Esteves, Eduardo Velloso, Andreas Bulling, and Hans Gellersen. 2015. Orbits: Gaze Interaction for Smart Watches Using Smooth Pursuit Eye Movements. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 457–466. DOI:http://dx.doi.org/10.1145/2807442.2807499
 [11] Augusto Esteves, David Verweij, Liza Suraiya, Rasel Islam, Youryang Lee, and Ian Oakley. 2017. SmoothMoves: Smooth Pursuits Head Movements for Augmented Reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST ’17). ACM, New York, NY, USA, 167–178. DOI:http://dx.doi.org/10.1145/3126594.3126616
 [12] Robert J. K. Jacob. 1990. What You Look at is What You Get: Eye Movement-based Interaction Techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’90). ACM, New York, NY, USA, 11–18. DOI:http://dx.doi.org/10.1145/97243.97246
 [13] Jari Kangas, Oleg Špakov, Poika Isokoski, Deepak Akkil, Jussi Rantala, and Roope Raisamo. 2016. Feedback for Smooth Pursuit Gaze Tracking Based Control. In Proceedings of the 7th Augmented Human International Conference 2016 (AH ’16). ACM, New York, NY, USA, Article 6, 8 pages. DOI:http://dx.doi.org/10.1145/2875194.2875209
 [14] Mohamed Khamis, Florian Alt, and Andreas Bulling. 2015. A Field Study on Spontaneous Gaze-based Interaction with a Public Display Using Pursuits. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (UbiComp/ISWC’15 Adjunct). ACM, New York, NY, USA, 863–872. DOI:http://dx.doi.org/10.1145/2800835.2804335
 [15] Mohamed Khamis, Axel Hoesl, Alexander Klimczak, Martin Reiss, Florian Alt, and Andreas Bulling. 2017. EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST ’17). ACM, New York, NY, USA, 155–166. DOI:http://dx.doi.org/10.1145/3126594.3126630
 [16] Mohamed Khamis, Carl Oechsner, Florian Alt, and Andreas Bulling. 2018. VRPursuits: Interaction in Virtual Reality using Smooth Pursuit Eye Movements. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces (AVI ’18). ACM, New York, NY, USA, 7.
 [17] Mohamed Khamis, Ozan Saltuk, Alina Hang, Katharina Stolz, Andreas Bulling, and Florian Alt. 2016a. TextPursuits: Using Text for Pursuits-based Interaction and Calibration on Public Displays. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’16). ACM, New York, NY, USA, 274–285. DOI:http://dx.doi.org/10.1145/2971648.2971679
 [18] Mohamed Khamis, Ludwig Trotter, Markus Tessmann, Christina Dannhart, Andreas Bulling, and Florian Alt. 2016b. EyeVote in the Wild: Do Users Bother Correcting System Errors on Public Displays?. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM ’16). ACM, New York, NY, USA, 57–62. DOI:http://dx.doi.org/10.1145/3012709.3012743
 [19] Otto Hans-Martin Lutz, Antje Christine Venjakob, and Stefan Ruff. 2015. SMOOVS: Towards calibration-free text entry by gaze using smooth pursuit movements. Journal of Eye Movement Research 8, 1 (2015). https://bop.unibe.ch/index.php/JEMR/article/view/2394
 [20] Ken Pfeuffer, Melodie Vidal, Jayson Turner, Andreas Bulling, and Hans Gellersen. 2013. Pursuit Calibration: Making Gaze Calibration Less Tedious and More Flexible. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST ’13). ACM, New York, NY, USA, 261–270. DOI:http://dx.doi.org/10.1145/2501988.2501998
 [21] Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst. 2017. Exploring natural eye-gaze-based interaction for immersive virtual reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI). 36–39. DOI:http://dx.doi.org/10.1109/3DUI.2017.7893315
 [22] Vijay Rajanna, Adil Hamid Malla, Rahul Ashok Bhagat, and Tracy Hammond. 2018. DyGazePass: A gaze gesture-based dynamic authentication system to counter shoulder surfing and video analysis attacks. In 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). 1–8. DOI:http://dx.doi.org/10.1109/ISBA.2018.8311458
 [23] Vijay Rajanna, Seth Polsley, Paul Taele, and Tracy Hammond. 2017. A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM, New York, NY, USA, 1978–1986. DOI:http://dx.doi.org/10.1145/3027063.3053070
 [24] Eduardo Velloso, Marcus Carter, Joshua Newn, Augusto Esteves, Christopher Clarke, and Hans Gellersen. 2017. Motion Correlation: Selecting Objects by Matching Their Movement. ACM Trans. Comput.-Hum. Interact. 24, 3, Article 22 (April 2017), 35 pages. DOI:http://dx.doi.org/10.1145/3064937
 [25] Eduardo Velloso, Markus Wirth, Christian Weichel, Augusto Esteves, and Hans Gellersen. 2016a. AmbiGaze: Direct Control of Ambient Devices by Gaze. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). ACM, New York, NY, USA, 812–817. DOI:http://dx.doi.org/10.1145/2901790.2901867
 [26] Eduardo Velloso, Markus Wirth, Christian Weichel, Augusto Esteves, and Hans Gellersen. 2016b. AmbiGaze: Direct Control of Ambient Devices by Gaze. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). ACM, New York, NY, USA, 812–817. DOI:http://dx.doi.org/10.1145/2901790.2901867
 [27] Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: Spontaneous Interaction with Displays Based on Smooth Pursuit Eye Movement and Moving Targets. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13). ACM, New York, NY, USA, 439–448. DOI:http://dx.doi.org/10.1145/2493432.2493477
 [28] Oleg Špakov, Poika Isokoski, Jari Kangas, Deepak Akkil, and Päivi Majaranta. 2016. PursuitAdjuster: An Exploration into the Design Space of Smooth Pursuit-based Widgets. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications (ETRA ’16). ACM, New York, NY, USA, 287–290. DOI:http://dx.doi.org/10.1145/2857491.2857526