Differential Privacy for Eye-Tracking Data

As large eye-tracking datasets are created, data privacy is a pressing concern for the eye-tracking community. De-identifying data does not guarantee privacy because multiple datasets can be linked for inferences. A common belief is that aggregating individuals' data into composite representations such as heatmaps protects the individual. However, we analytically examine the privacy of (noise-free) heatmaps and show that they do not guarantee privacy. We further propose two noise mechanisms that guarantee privacy and analyze their privacy-utility tradeoff. Analysis reveals that our Gaussian noise mechanism is an elegant solution to preserve privacy for heatmaps. Our results have implications for interdisciplinary research to create differentially private mechanisms for eye tracking.


1. Introduction

With advances in mobile and ubiquitous eye tracking, there is ample opportunity to collect eye tracking data at scale. A user’s gaze encodes valuable information including attention, intent, emotional state, cognitive ability, and health. This information can be used to gain insight into human behavior (e.g. in marketing and user experience design), create computational models (e.g. for smart environments and vehicles), and enable interventions (e.g. health and education). When combined with physiological sensing and contextual data, this information facilitates the modeling and prediction of human behavior and decision making. As users become increasingly conscious about what their data reveals about them, there is mounting pressure on policymakers and corporations to introduce robust privacy regulations and processes [gdpr:2018; usatoday:2018]. The eye tracking community must actively pursue research about privacy for broad public acceptance of this technology.

Data privacy for eye tracking has been raised as a concern in the community [ling2014synergies; KAB18]. At a recent Dagstuhl seminar on ubiquitous gaze sensing and interaction (https://www.dagstuhl.de/18252), privacy considerations were highlighted in a number of papers in the proceedings [CDQW18]. Privacy as a general term has a wide range of meanings and different levels of importance for different users. Privacy can obviously be preserved by distorting or randomizing the answers to queries; however, doing so renders the information in the dataset useless.

To maintain privacy while preserving the utility of the information, we propose to apply the concept of differential privacy (DP), which has been developed by theoretical computer scientists and applied to database applications over the past decade [Dwork:2011:privatedataanalysis]. Differential privacy can be summarized as follows:

Privacy is maintained if an individual’s records cannot be accurately identified, even in the worst case when all other data has been exposed by adversaries.

Our technical contributions are: (1) We introduce the notion of differential privacy for eye tracking data. (2) We formally examine the privacy of aggregating eye tracking data as heatmaps and show that aggregating into heatmaps does not guarantee privacy from a DP perspective. (3) We propose two mechanisms to improve the privacy of aggregated gaze data. (4) We analyze the privacy-utility trade-off of these mechanisms from a DP-point of view.

Figure 1. Workflow for researchers and practitioners to achieve the desired strength of privacy. The solid lines illustrate the standard workflow for generating an aggregate static heatmap from eye tracking data. The dotted lines show how to implement a privacy protocol with small modifications to this workflow. The hotspots on the privacy enhanced heatmap are visually in the same locations as in the original heatmap. The supplementary materials show several examples of privacy enhanced heatmaps for the same noise level.
Type of data | Example of intended use | What adversary can access in worst case | What adversary can do now | Does DP apply?
Raw eye movements | Foveated rendering | Raw eye movements | Neurological diagnoses (see Scenario 1) | yes, future work
Aggregated data without temporal information (static heatmaps) | Marketing, UX design, education | Individual’s heatmap | Behavioral diagnoses (see Scenario 2) | yes, this paper
Aggregated data with temporal information (dynamic heatmaps) | Training models for autonomous vehicles | Individual’s heatmap | Establish driver’s liability (see Scenario 3) | yes, future work
Areas of Interest (AOI) analysis | Expert vs. novice analysis | Individual’s AOI visit order | Autism spectrum diagnoses | yes, future work
Table 1. In most cases, eye tracking data is released with the stimuli. This table illustrates the threats posed by releasing this data if no privacy protocol is in place.

From a practical perspective, the notion of differential privacy is both achievable and theoretically verifiable. Though the proofs may be mathematically sophisticated, the implementation is straightforward, and can be integrated into the eye tracking data collection pipeline. Figure 1 illustrates how this may be achieved. Privacy is guaranteed for the worst case when an adversary has already gained access to the data of all other individuals in a dataset (by hacking them for example). Even in this case, the adversary will still not be able to accurately infer data records of the individual. In applying the general definition of differential privacy to eye tracking, we acknowledge that individual users, service providers, and policy makers may have different positions on what level of privacy versus utility is desirable. Our work provides a theoretically grounded analysis of privacy preserving mechanisms to empower these stakeholders to make such decisions.

Implications. Table 1 presents some of the threats that may be posed if an adversary was to access eye tracking data with no privacy protocol in place. Specifically, we elaborate three scenarios where eye tracking data is collected with good intentions, but if hacked, could have consequences for the individuals concerned.

Scenario 1:

A hospital or doctor’s office collects eye tracking data as part of patients’ general examination. A research grant enables a team to use this data to build a machine learning model that can predict whether someone has a certain neurological disorder. A hacker gains unauthorized access to this database and is able to identify specific individuals with the disorder. The hacker then sells or publicly releases the identity of these individuals, negatively impacting their employment opportunities, inflating their health insurance costs, and elevating their social and emotional anxiety.

Scenario 2: A parent signs a consent form allowing her child to be eye tracked in a classroom. The consent form says that this data is for a research project to understand and characterize learning disabilities and build interventions. The anonymized dataset will be released as part of an NIH big data initiative. If an adversary manages to access an individual child’s data and analyze it for markers of dyslexia (for example), they may sell the information to a marketing company that will contact the parent with unsolicited advertising for therapies.

Scenario 3: A publicly funded research team is using eye tracking to study awareness and fatigue of commercial truck drivers. The eye movement data along with the scene being viewed is streamed to a remote server for later analysis. A driver in the study was involved in an accident that resulted in a fatality. Although drivers were told their data would be de-identified, a private investigator, hired by the family of the deceased, was able to extract his/her data record from the database, revealing evidence that (s)he was at fault in the accident.

In scenarios such as these, research teams may reassure participants that raw data will not be released, or that individual data will be de-identified or aggregated (often in the form of heatmaps), providing the impression that privacy is preserved.

2. Background

2.0.1. The problem with de-identification.

The first “solution” that occurs to many of us is to simply anonymize, or de-identify, the dataset. This operation refers to removing personal identifiers such as the name of the participant from the dataset. The problem with this approach is that it is not future-proof; as newer datasets are released, multiple datasets can be linked, and the identity of a participant can then be inferred [nissim2017differential; ohm2009broken; holland2011biometric; komogortsev2010biometric].

2.0.2. The problem with running queries.

A second “solution” would be to not release the dataset as is, but rather allow the analyst to query the dataset. The dataset would not allow queries on individual items, but only on large numbers of items. In other words, a query such as “Where did the student with the lowest grade look?” would be disallowed. But then, the analyst can run queries such as “Where did the students who did not have the lowest grade look?” and “Where did all the students look?”, and use these queries to infer the disallowed query. This “solution” is not able to guarantee privacy in the worst case, for example, if the adversary hacks the data of n−1 out of n persons in the dataset. Then (s)he can easily infer the n-th person’s data by querying the average or sum of the dataset.
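As an illustration, the following minimal sketch shows such a differencing attack; the per-student gaze values and the query function are hypothetical:

```python
import numpy as np

# Hypothetical per-student gaze x-coordinates (pixels); the last student
# is the one whose record the analyst is not allowed to query directly.
gaze_x = np.array([512.0, 530.0, 498.0, 505.0, 900.0])

def query_sum(values):
    """An 'aggregate-only' query: the sum over a group of students."""
    return float(np.sum(values))

# Allowed queries: everyone, and everyone except the protected student.
sum_all = query_sum(gaze_x)
sum_others = query_sum(gaze_x[:-1])

# Differencing attack: subtracting two allowed aggregates reveals the
# disallowed individual record exactly.
recovered = sum_all - sum_others
print(f"Recovered gaze x-coordinate of the protected student: {recovered}")
```

The same subtraction works for averages once the group sizes are known, which is why aggregate-only query interfaces do not by themselves guarantee privacy.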

These issues are well known in database research. One widely accepted formal definition of privacy that has emerged from this extensive research is as follows: an individual’s privacy is preserved if the inferences that are made from the dataset do not indicate in any significant way whether this individual is part of the dataset or not. This notion is called differential privacy.

2.0.3. Differential privacy.

Differential privacy as a concept was conceived through insights by theoretical computer scientists aiming to formalize a notion of privacy that is practically achievable as well as theoretically verifiable [Dwork:2011:privatedataanalysis]. A survey of differential privacy across different fields has been presented by Dwork. Relevant to eye tracking are the works that have applied differential privacy definitions to machine learning [ji2014differential; abadi2016deep] and time-series analysis [fan2014adaptive; rastogi2010differentially]. From a societal impact perspective, the eye tracking industry has as much to gain from these ideas.

2.0.4. Mathematical definition of differential privacy.

Formally, given datasets D and D′ that differ in at most one entry, let M denote a randomized mechanism that outputs a query of a database with some probability. Then, let S denote a subset of query outcomes (called an “event”). We say the mechanism M is ε-differentially private (or ε-DP in short) if for any S, and any such D and D′,

(1)  Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S].

In the above inequality, the probability comes from the randomness of mechanism M. Such randomness is necessary as we will see in Section 3.3. We note that this is a worst-case analysis that offers a strong guarantee of privacy, because the inequality must hold for all S, and all neighboring datasets D and D′.

Another, more applicable notion of differential privacy is (ε, δ)-differential privacy, which is a generalization of ε-DP. Using the notation above, we say the mechanism M is (ε, δ)-differentially private (or (ε, δ)-DP in short) if for any S, and any D and D′ that differ in at most one entry,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.

Typically, a δ on the order of 1/n or larger is believed to mean poor privacy [dwork2014algorithmic] because it allows some individuals’ data to be fully recovered, where n is the input size. We note that a mechanism can be (ε, δ)-DP for multiple combinations of (ε, δ). As a rule of thumb, smaller ε’s and δ’s mean better privacy, though we must point out that directly comparing different numerical values is not informative, e.g., a mechanism with a smaller ε but larger δ is not directly comparable to one with a larger ε but smaller δ.

2.0.5. Toy example.

As part of a general wellness dataset D, the heights of five people are collected. The mean value, i.e., the average height of the population, is released. In this example, an adversary obtains the heights of four of these five persons through hacking. In this way, the adversary has a dataset D′ that contains all persons except the fifth. The adversary computes the average height of the dataset D′ and finds that it is much lower than the released average height of the dataset D. The adversary thus infers that the fifth person must be very tall. (In fact, the adversary can also compute the exact height of the fifth person.) In other words, even though the fifth person was not known by the adversary, and the dataset was not released (only the average height was released), the fifth person is also compromised because his or her height can be reverse engineered by the adversary. Now, we introduce a mechanism M that perturbs the average height of the dataset by a random amount before releasing it. If the level of perturbation is high enough, the adversary will not be able to even infer whether the fifth person is tall or not. Thus the mechanism protects the privacy of the fifth person. Of course, if we add too much perturbation (or output totally at random), the utility of the dataset will be affected because the output average height contains little information and does not reflect the average height of the population. This is the privacy-utility tradeoff (see Section 4).
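A minimal sketch of this toy example, with hypothetical heights and an arbitrary (not DP-calibrated) Gaussian perturbation of the released average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heights (cm); the fifth person is unusually tall.
heights = np.array([165.0, 170.0, 168.0, 172.0, 205.0])

# Exact release: the adversary who knows the first four heights
# recovers the fifth exactly from the published average.
exact_avg = heights.mean()
recovered = 5 * exact_avg - heights[:4].sum()
print(f"Recovered height from exact release: {recovered:.1f} cm")

# Perturbed release: add zero-mean Gaussian noise before publishing.
noise_scale = 5.0  # arbitrary illustration of the noise level
noisy_avg = exact_avg + rng.normal(0.0, noise_scale)
noisy_guess = 5 * noisy_avg - heights[:4].sum()
print(f"'Recovered' height from noisy release: {noisy_guess:.1f} cm")
# The adversary's estimate is now off by five times the noise added to the
# average, so even whether the fifth person is tall becomes uncertain,
# while the published average stays close to the true population value.
```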

2.0.6. Privacy in eye tracking.

For much of the past two decades, the focus of eye tracking research has been on making eye tracking ubiquitous, and on discovering the breadth of inferences that can be made from this data, especially in the contexts of health [leigh2015neurology] and education [jarodzka2017eye]. Privacy has not been a high priority because of the benefits of identifying pathology and designing personalized interventions. The relevance of privacy to eye tracking data was eloquently discussed by Liebling and Preibusch [liebling2014privacy]. Ling et al. [ling2014synergies] and Khamis et al. [KAB18] have also highlighted the need for privacy in eye-tracking data. Privacy considerations have been raised both for streaming data and for pre-recorded datasets. Despite growing awareness and concern, few solutions have been proposed. Our work provides a technical solution for the privacy of individuals.

2.0.7. Why heatmaps as the first step for privacy analysis.

Besides scanpaths, the heatmap is a popular method of visualizing eye movement data [Duc18]. Heatmaps, or attentional landscapes as introduced by Pomplun et al. [pomplun1996disambiguating] and popularized by Wooding [wooding2002fixation], are used to represent aggregate fixations [duchowski2012aggregate]. Other similar approaches involve gaze represented as height maps [ESW84; vvA07] or Gaussian Mixture Models [MSHH11]. Heatmaps are generated by accumulating, at each pixel (i, j), an exponentially decaying intensity relative to a fixation at coordinates (x, y), where the exponential decay is modeled by the Gaussian point spread function, e.g., exp(−((i − x)² + (j − y)²) / (2σ_s²)) for a spread parameter σ_s. A GPU-based implementation [duchowski2012aggregate] is available for real-time visualization. Though heatmaps are very popular as a visualization, AOI analyses and temporal data analysis are key to eye-tracking research. We have focused on static heatmaps as a proof of concept for the applicability of differential privacy (DP) to eye tracking data. Insights from this work will inform future research on privacy in eye tracking.
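As an illustration, a minimal sketch of this accumulation, assuming fixations given in pixel coordinates; the resolution, fixation list, and spread parameter sigma_s are our own illustrative choices:

```python
import numpy as np

def gaze_heatmap(fixations, height, width, sigma_s=25.0):
    """Accumulate a Gaussian point spread function at each fixation.

    fixations: iterable of (x, y) pixel coordinates.
    Returns an array of shape (height, width) with the summed intensities.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=float)
    for (fx, fy) in fixations:
        heat += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma_s ** 2))
    return heat

# Illustrative fixations on a 120 x 160 image.
fixations = [(40, 30), (42, 33), (100, 80), (98, 77), (120, 30)]
heatmap = gaze_heatmap(fixations, height=120, width=160)
print(heatmap.shape, heatmap.max())
```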

3. Analyzing differential privacy of the proposed privacy-preserving mechanisms

In this section, we analyze the differential privacy of four natural random mechanisms. We show two of these mechanisms cannot preserve privacy under the notion of DP. For the other two mechanisms, we provide theoretically guaranteed lower bounds on the noise required for any user-defined privacy level. Because a heatmap is created from aggregation of gaze maps, and because this is a reversible (convolution) process, the privacy of a heatmap is equivalent to that of the aggregated gaze map on which it is based.

3.1. Notations

We use n to denote the number of observers in the database and m to denote the total number of pixels in the gaze maps. For example, an image of resolution w × h corresponds to m = w·h. We introduce an integer cap c to cap every observer’s gaze map: if an observer looked at one pixel more than c times, we only count c in his/her gaze map (think of this as the gaze map saturating). In Section 4.2, we will discuss the privacy-utility trade off and provide an algorithm for finding the “optimal cap”. Let G_i denote the i-th observer’s personal gaze map (after applying the cap). The aggregated gaze map of all observers in the database is denoted by Ḡ = (1/n) Σ_{i=1}^{n} G_i. Here, we normalize by the number of observers in order to compare the noise level under different setups. To simplify notation, we use G to denote the collection of all observers’ gaze maps. Similarly, we use G_{−i} to denote the collection of all observers’ personal gaze maps except the i-th observer’s. Then, we define several gaze-map-aggregation mechanisms as follows:

  • M_agg: Directly output the aggregated gaze map. Formally, M_agg(G) = Ḡ.

  • M_sel: Randomly select k gaze maps from the dataset (without replacement) and calculate the aggregated gaze map accordingly. Formally, assuming the selected gaze maps are G_{j_1}, …, G_{j_k}, M_sel(G) = (1/k) Σ_{l=1}^{k} G_{j_l}.

  • M_rep: Similar to M_sel; the only difference is that the sampling process is with replacement.

  • M_Gau: Add Gaussian noise with standard deviation σ (the noise level) to all pixels independently. Formally, M_Gau(G) = Ḡ + N, where N is an m-dimensional Gaussian noise term with zero mean and standard deviation σ (all dimensions are mutually independent).

  • M_Lap: Similar to M_Gau; the only difference is that Laplacian noise with noise level (scale) b is added instead of Gaussian noise.

In short, M_sel and M_rep inject sampling noise into the output while M_Gau and M_Lap inject additive noise.
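As an illustration, a minimal sketch of the two additive-noise mechanisms under the notation above; the array shapes, cap, and noise values are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(gaze_maps, cap):
    """Cap each observer's gaze map at `cap` and return the normalized sum."""
    capped = np.minimum(gaze_maps, cap)          # shape (n, H, W)
    return capped.mean(axis=0)                   # (1/n) * sum_i G_i

def m_gau(gaze_maps, cap, sigma):
    """Gaussian mechanism: aggregated gaze map plus i.i.d. N(0, sigma^2) noise."""
    g_bar = aggregate(gaze_maps, cap)
    return g_bar + rng.normal(0.0, sigma, size=g_bar.shape)

def m_lap(gaze_maps, cap, scale_b):
    """Laplacian mechanism: aggregated gaze map plus i.i.d. Laplace(0, b) noise."""
    g_bar = aggregate(gaze_maps, cap)
    return g_bar + rng.laplace(0.0, scale_b, size=g_bar.shape)

# Illustrative data: n = 100 observers, 60 x 80 gaze maps with counts in 0..5.
gaze_maps = rng.integers(0, 6, size=(100, 60, 80)).astype(float)
noisy = m_gau(gaze_maps, cap=3, sigma=0.05)
print(noisy.shape)
```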

3.2. Defining eye-tracking differential privacy

We start by re-phrasing the definition of differential privacy for eye tracking data. In the following discussion, we assume that the aggregated gaze map (or its noisy version) has been publicly released (because DP focuses on worst-case scenarios, the adversary also knows all other observers’ individual gaze maps). The goal of our research is to protect observers’ personal gaze maps by adding appropriate noise to the aggregated gaze map. Using the notation in Section 3.1, we assume that G_{−i}, i.e., all gaze maps other than G_i, are known by the adversary. For any set S of output gaze maps, differential privacy is formally defined as follows.

Definition 3.1 ((ε, δ)-DP).

For any set of events S and any collection of gaze maps G_{−i} known by the adversary, we say a mechanism M is (ε, δ)-differentially private if and only if

(2)  Pr[M(G_i, G_{−i}) ∈ S] ≤ e^ε · Pr[M(G_i′, G_{−i}) ∈ S] + δ,

where G_i and G_i′ are any gaze maps of the i-th observer.

According to the differential privacy literature [dwork2014algorithmic], there is no hard threshold between good and poor privacy. For the purpose of illustration, we define the following “privacy levels” in the remainder of this paper:

  • Poor privacy: δ ≥ 1/n.

  • Okay privacy: a moderate choice of ε and δ with δ < 1/n.

  • Good privacy: a stricter choice with smaller ε and δ.

Note that “okay privacy” and “good privacy” are two example settings we used for implementation. Practitioners can set their values of ε and δ according to their requirements (smaller ε and δ mean better privacy). Note again that δ ≥ 1/n is widely acknowledged as poor privacy [dwork2014algorithmic].

3.3. There is no free privacy

We first use M_agg (poor privacy) as an example to connect intuition and the definition of DP. Intuitively, if the adversary has the noiseless aggregated gaze map Ḡ = M_agg(G) and all other observers’ gaze maps G_{−i}, he/she can perfectly recover G_i by calculating G_i = n·Ḡ − Σ_{j≠i} G_j.

Using Definition 3.1 and letting S = {Ḡ} and G_i′ ≠ G_i,

Pr[M_agg(G_i, G_{−i}) ∈ S] = 1  and  Pr[M_agg(G_i′, G_{−i}) ∈ S] = 0,

because Ḡ will not be a possible output if the i-th observer’s gaze map is G_i′ instead of G_i. Thus, we know δ cannot be less than 1 to make Inequality (2) hold. Considering that δ ≥ 1/n already corresponds to poor privacy, we know M_agg has poor privacy in the language of DP defined in Definition 3.1.
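As an illustration, the following sketch (with synthetic gaze maps) verifies the recovery step above:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20
gaze_maps = rng.integers(0, 4, size=(n, 30, 40)).astype(float)  # synthetic G_1..G_n

g_bar = gaze_maps.mean(axis=0)                     # noise-free release M_agg(G)
i = 7                                              # the observer under attack
others_sum = gaze_maps.sum(axis=0) - gaze_maps[i]  # adversary's knowledge G_{-i}

recovered = n * g_bar - others_sum                 # G_i = n*Gbar - sum_{j != i} G_j
print(np.allclose(recovered, gaze_maps[i]))        # True: exact recovery
```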

3.4. Random selection gives poor privacy

In Section 3.1, we proposed two versions of random selection mechanisms. The first version (M_sel) randomly selects k observers’ gaze maps without replacement, while the second version (M_rep) selects with replacement.

Theorem 3.2 (without replacement).

Mechanism M_sel has poor privacy.

Proof.

We prove M_sel’s lack of privacy by considering the following case: assume the resolution is a single pixel (this case also extends to larger resolutions because the first pixel already leaks information), and all observers other than the i-th did not look at the only pixel, i.e., all elements in the collection G_{−i} equal 0. Let G_i be a gaze map that did look at the pixel, let G_i′ = 0, and let S be the set of outputs whose pixel value is nonzero. The output lands in S only if the i-th observer is among the k selected observers, so

Pr[M_sel(G_i, G_{−i}) ∈ S] = k/n  and  Pr[M_sel(G_i′, G_{−i}) ∈ S] = 0.

Thus, we know δ cannot be less than k/n to make (2) hold. Then, Theorem 3.2 follows because k/n ≥ 1/n (k is the number of observers selected). ∎

Theorem 3.3 (with replacement).

Mechanism M_rep has poor privacy.

Proof of Theorem 3.3 (see Appendix A in Supplementary materials) is similar to the proof of Theorem 3.2.
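As an illustration (not part of the formal analysis), a small Monte Carlo simulation with synthetic parameters n and k reproduces the lower bounds on δ used in the proofs of Theorem 3.2 and Theorem 3.3: the selection mechanisms output a nonzero value with the probabilities below when the i-th observer looked at the single pixel, and with probability zero when he/she did not.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k, trials = 100, 10, 100_000
# One-pixel gaze maps: only observer i = 0 looked at the pixel.
values = np.zeros(n)
values[0] = 1.0

def selected_nonzero(with_replacement):
    idx = rng.choice(n, size=k, replace=with_replacement)
    return values[idx].mean() > 0  # output falls in S = {nonzero outputs}

p_sel = np.mean([selected_nonzero(False) for _ in range(trials)])
p_rep = np.mean([selected_nonzero(True) for _ in range(trials)])

# If observer 0 had not looked, Pr[output in S] = 0, so delta >= these values.
print(f"without replacement: ~{p_sel:.3f} (theory k/n = {k/n:.3f})")
print(f"with replacement:    ~{p_rep:.3f} (theory 1-(1-1/n)^k = {1-(1-1/n)**k:.3f})")
```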

3.5. Achieving good privacy with random noise

In this section, we show that adding Gaussian or Laplacian noise can give good privacy if the noise level satisfies certain conditions based on user-defined privacy levels.

3.5.1. Gaussian Noise

Gaussian noise is a widely used noise model in many optical systems. In M_Gau, we add Gaussian noise with standard deviation σ independently to all pixels of the aggregated gaze map. The probability density of outputting the aggregated gaze map Ĝ is

(3)  p(Ĝ) = (2πσ²)^(−m/2) · exp(−‖Ĝ − Ḡ‖² / (2σ²)),

which is an m-dimensional Gaussian distribution such that all dimensions are independent. Note that all norms ‖·‖ in the main paper and appendix represent the Frobenius norm of matrices; for simplification, we write ‖·‖ for ‖·‖_F when there is no ambiguity. The next theorem shows that announcing M_Gau’s output will not give much information to the adversary if the noise level is as required (for any ε and δ, we can always find a noise level σ to guarantee (ε, δ)-DP).

Theorem 3.4 (Gaussian Noise).

For any noise level σ ≥ (c·√m / (n·ε)) · √(2·ln(1.25/δ)), M_Gau is (ε, δ)-differentially private.

Theorem 3.4 basically says that we can always find a noise level σ to meet any user-defined privacy level (any ε and δ).

Proof.

Let G_i and G_i′ denote any two possible gaze maps of the i-th observer. To simplify notation, we use Ḡ_{−i} = (1/n) Σ_{j≠i} G_j to denote the aggregated gaze map from observers other than the i-th. If the i-th observer’s gaze map is G_i, the probability density of the output Ĝ is

p(Ĝ | G_i) ∝ exp(−‖Ĝ − Ḡ_{−i} − G_i/n‖² / (2σ²)).

Similarly, if the i-th observer’s gaze map is G_i′, we have

p(Ĝ | G_i′) ∝ exp(−‖Ĝ − Ḡ_{−i} − G_i′/n‖² / (2σ²)).

For any Ĝ, G_i and G_i′, writing Δ = (G_i − G_i′)/n (so that ‖Δ‖ ≤ c·√m/n) and N = Ĝ − Ḡ_{−i} − G_i/n for the realized noise, we have

ln [ p(Ĝ | G_i) / p(Ĝ | G_i′) ] = (2·⟨N, Δ⟩ + ‖Δ‖²) / (2σ²).

Thus, for any Ĝ such that ⟨N, Δ⟩ is not too large (specifically, such that the right-hand side is at most ε), the requirement of ε-DP is always met. Then, we bound the tail probability of all cases where ε’s requirement is not met. Since ⟨N, Δ⟩ is a one-dimensional Gaussian with zero mean and standard deviation σ‖Δ‖, the standard Gaussian tail bound shows that this bad event has probability at most δ when σ satisfies the condition of the theorem. Then, Theorem 3.4 follows by the definition of (ε, δ)-DP. ∎

3.5.2. Laplacian Noise

Laplacian noise is the most widely used noise in differential privacy problems. However, we will show that Laplacian noise is not as suitable as Gaussian noise for protecting eye tracking data. The next theorem shows that M_Lap will not give much information to the adversary if the noise level is as required.

Theorem 3.5 (Laplacian Noise).

Using the notations above, for any noise level (scale) b ≥ c·m / (n·ε), M_Lap is ε-differentially private, and hence (ε, δ)-differentially private for any δ ≥ 0.

Proof of Theorem 3.5 (see Appendix A.2 in the supplementary material) is very similar to that of Theorem 3.4. However, the required noise level, b ≥ c·m/(n·ε), is normally much higher than the requirement on the Gaussian noise level σ. One can see that the Laplacian mechanism requires roughly an extra factor of √m in the noise level (up to logarithmic factors), which for megapixel gaze maps corresponds to a noise level that is orders of magnitude higher.
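To make the comparison concrete, the following sketch computes the two noise levels from the bounds stated in Theorem 3.4 and Theorem 3.5; the values of n, m, c, ε, and δ are illustrative, not taken from our experiments:

```python
import numpy as np

def gaussian_sigma(n, m, cap, eps, delta):
    """Noise level sufficient for (eps, delta)-DP via the Gaussian mechanism,
    with L2 sensitivity cap * sqrt(m) / n (Theorem 3.4)."""
    return (cap * np.sqrt(m) / (n * eps)) * np.sqrt(2.0 * np.log(1.25 / delta))

def laplace_scale(n, m, cap, eps):
    """Scale sufficient for eps-DP via the Laplace mechanism,
    with L1 sensitivity cap * m / n (Theorem 3.5)."""
    return cap * m / (n * eps)

# Illustrative setting: 50,000 observers, a 200 x 320 gaze map, cap 3.
n, m, cap = 50_000, 200 * 320, 3
eps, delta = 1.0, 1e-5
sigma = gaussian_sigma(n, m, cap, eps, delta)
b = laplace_scale(n, m, cap, eps)
print(f"Gaussian sigma ~ {sigma:.4f}, Laplace scale ~ {b:.4f}, ratio ~ {b / sigma:.1f}")
```

The printed ratio grows roughly as √m, which is why the Gaussian mechanism is the more practical choice for high-resolution gaze maps.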

Figure 2. Privacy-utility tradeoff for selected noise levels on a simulated heatmap: (a) required-noise-level surfaces for the chosen privacy levels, (b) a slice of the surface at a fixed image resolution, (c) the original heatmap, and (d) a privacy enhanced heatmap. The greater the noise level we choose to add, the stronger the privacy guarantee. The relevant stakeholders decide what level of noise is acceptable for a given application. For example, in Figure 2(d), the hotspots are still clear, and a UX designer may find this acceptable for the purpose of getting feedback on the design of a website.

4. Privacy-utility tradeoff

According to Theorem 3.4 and Theorem 3.5, better privacy (smaller ε and δ) usually requires a higher noise level. In this section, we conduct experiments to show how Gaussian and Laplacian noise influence the utility, i.e., the corresponding heatmap.

4.1. Noise level vs. information loss

In Figure 2(a), we show a three-dimensional plot where the two horizontal axes are the number of observers n and the image resolution (number of pixels m). The reader may revisit the notation in Section 3.1. On the vertical z-axis, we plot the required noise level σ, based on the formula given by Theorem 3.4. The upper surface shows σ for good privacy and the lower surface shows σ for okay privacy. Any value of σ above the lower surface will provide okay privacy, and any value above the upper surface will provide good privacy.

In Figure 2(a), as the image resolution increases, a larger number of observers is needed in the dataset to maintain the guarantee of good privacy. If there is a small number of observers, good privacy can be achieved by downsampling the image. In Figure 2(b) we show a slice of this surface at a fixed resolution. The dotted lines show an example noise level that we could have set based on what we find acceptable for utility. This is of course user-defined, and will vary depending on the application. The graphs illustrate that, at a selected noise level, we can achieve good privacy for a given image resolution if we have a sufficiently large number of observers; for a dataset with fewer observers, we can tell the participants that we can achieve okay privacy. We show two simulated heatmaps in Figure 2(c) and (d). The location of the hotspots is unchanged for all practical purposes in the noisy but private heatmap.

We quantify the privacy-utility tradeoff in Figure 3. 100 noisy heatmaps are generated using the workflow in Figure 1. Real-world gaze maps from five observers looking at a portrait of a woman are used here (gaze data from the dataset of raiturkar2018; stimulus image from farid2012perceptual and mader2017identifying). The original heatmap is shown in Figure 1 to the right. For the purpose of the noisy heatmap, we assume the number of observers in the dataset is 50,000 (if the number of observers is much smaller than 50,000, the practitioner could either down-sample the gaze maps or sacrifice privacy by setting larger ε and δ to get an acceptable noise level); the noise is added according to Theorem 3.4 and Theorem 3.5. We simulate this large number of observers by replicating each of the five real observers 10,000 times.

In the supplementary materials, we show the original heatmap overlaid on the stimulus image in high resolution (original.png). We also show examples of privacy enhanced heatmaps for this original heatmap at the chosen privacy level (privancyenhanced.mpg). For this image resolution, the required Laplacian noise level is computed based on Theorem 3.5.

We numerically analyzed the correlation coefficient (CC) and mean square error (MSE) of noisy heatmaps under different privacy levels (different values of ε while fixing δ). The cap c is decided according to Algorithm 1 (see Section 4.2 for details). 100 noisy heatmaps are generated under each setting. The average CC and MSE of those generated noisy heatmaps are plotted in Figure 3. Error bars in Figure 3 represent the standard deviations.

It can be seen from Figure 3 that the Laplacian mechanism results in much more information loss than the Gaussian mechanism to achieve the same level of privacy under our setting. For both the Gaussian and Laplacian mechanisms, one can see that better privacy (smaller ε) usually means more information loss in the output heatmap.

We note that these graphs are based on real data of only five observers on one stimulus image. This graph is an example of how a practitioner may visualize the privacy-utility tradeoff in any given application domain. In practice, stakeholders would use our proposed workflow on their dataset to prepare such visualizations for different settings of the internal parameters (such as ε, δ, the cap c, and the noise level) to help them evaluate the privacy-utility tradeoff. We note also that Theorem 3.4 is specific to aggregate heatmaps; for any other mechanism, the appropriate theorems would need to be worked out and the workflow modified to be consistent with the problem definition. We also point out that while mean squared error and cross-correlation are readily computed, they do not fully reflect the information lost or retained when noise is added. As an illustration, in Figure 2, the hotspots in the privacy enhanced heatmap are still clear, and a UX designer may find the heatmap acceptable for their use case even though the MSE and CC metrics suggest otherwise.
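As an illustration, the CC and MSE curves of Figure 3 can be computed along the following lines; the heatmap and noise levels below are synthetic stand-ins, not our experimental data:

```python
import numpy as np

def mse(original, noisy):
    """Mean squared error between two heatmaps of the same shape."""
    return float(np.mean((original - noisy) ** 2))

def cc(original, noisy):
    """Pearson correlation coefficient between two heatmaps."""
    return float(np.corrcoef(original.ravel(), noisy.ravel())[0, 1])

rng = np.random.default_rng(3)
original = rng.random((120, 160))                  # stand-in for the original heatmap

for sigma in (0.01, 0.05, 0.1):                    # illustrative noise levels
    samples = [original + rng.normal(0.0, sigma, original.shape) for _ in range(100)]
    mses = [mse(original, s) for s in samples]
    ccs = [cc(original, s) for s in samples]
    print(f"sigma={sigma}: MSE {np.mean(mses):.4f} +/- {np.std(mses):.4f}, "
          f"CC {np.mean(ccs):.3f} +/- {np.std(ccs):.3f}")
```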

Figure 3. Similarity between the privacy enhanced heatmap and the original heatmap as ε is varied. The smaller the value of ε, the stronger the privacy guarantee from the DP perspective. This graph illustrates the privacy-utility tradeoff: as ε is made smaller, the mean squared error increases and the cross-correlation decreases. The Laplacian mechanism results in lower similarity than the Gaussian mechanism.

4.2. Computing the optimal “cap”

In order to achieve better privacy with less information loss, we set a cap on the maximum number of times an observer’s gaze position falls on a pixel. This cap was denoted by c in Section 3. Here we discuss the information loss under different settings of c.

When c is larger, a higher noise level is required to achieve the same privacy (both the upper bound for the Gaussian noise level σ and the Laplacian scale b are proportional to c). However, a larger c also corresponds to less information loss on every observer’s gaze map. In other words, there is a tradeoff between variance (noise) and bias (cap). Let Ĝ_{c,σ} denote the gaze map output by the Gaussian mechanism with cap c and noise level σ. Thus, Ĝ_{∞,0} denotes the original aggregated gaze map and Ĝ_{c,0} denotes the aggregated gaze map with cap c without any added Gaussian noise. Algorithm 1 provides an implementable way to choose the best value of c to optimize the mean square error (MSE).

1 Input: Individual gaze maps G_1, …, G_n and noise factor ρ (the required noise level per unit cap, so that σ = c·ρ).
2 Initialization: Calculate Ĝ_{∞,0} and the maximum number of times one observer looked at one pixel, c_max.
3 for c = 1, …, c_max do
4       Calculate the expected MSE of Ĝ_{c,σ} according to Theorem 4.1, using σ = c·ρ.
5 end for
6 Output: the value of c with the smallest expected MSE.
Algorithm 1 Utility optimization algorithm for choosing c

In the next theorem, we analytically derive the expected MSE.

Theorem 4.1 (Expected MSE).

The expected MSE of the Gaussian mechanism with cap c and noise level σ is

E[MSE(Ĝ_{c,σ})] = (1/m) · ‖Ĝ_{∞,0} − Ĝ_{c,0}‖² + σ².

Proof.

By the definition of MSE, we have

MSE(Ĝ_{c,σ}) = (1/m) · Σ_{j=1}^{m} (Ĝ_{c,σ}[j] − Ĝ_{∞,0}[j])².

Using the notation defined above, and writing N_j for the Gaussian noise added to the j-th pixel, the expected square error on the j-th pixel is

(4)  E[(Ĝ_{c,0}[j] + N_j − Ĝ_{∞,0}[j])²] = (Ĝ_{c,0}[j] − Ĝ_{∞,0}[j])² + 2·(Ĝ_{c,0}[j] − Ĝ_{∞,0}[j])·E[N_j] + E[N_j²].

Because E[N_j] = 0 and E[N_j²] = σ², we have

(5)  E[(Ĝ_{c,σ}[j] − Ĝ_{∞,0}[j])²] = (Ĝ_{c,0}[j] − Ĝ_{∞,0}[j])² + σ².

Theorem 4.1 follows by combining (4) and (5) and averaging over all m pixels. ∎
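A minimal sketch of Algorithm 1 using the expected-MSE expression of Theorem 4.1; the noise factor and the synthetic gaze maps are illustrative:

```python
import numpy as np

def optimal_cap(gaze_maps, noise_factor):
    """Algorithm 1: pick the cap c minimizing the expected MSE of Theorem 4.1.

    gaze_maps: array of shape (n, H, W) with per-pixel fixation counts.
    noise_factor: required Gaussian noise level per unit cap (sigma = c * noise_factor).
    """
    m = gaze_maps[0].size
    g_uncapped = gaze_maps.mean(axis=0)                 # aggregated map, no cap
    c_max = int(gaze_maps.max())
    best_c, best_mse = None, np.inf
    for c in range(1, c_max + 1):
        g_capped = np.minimum(gaze_maps, c).mean(axis=0)
        sigma = c * noise_factor
        expected_mse = np.sum((g_uncapped - g_capped) ** 2) / m + sigma ** 2
        if expected_mse < best_mse:
            best_c, best_mse = c, expected_mse
    return best_c, best_mse

rng = np.random.default_rng(4)
gaze_maps = rng.poisson(1.5, size=(50, 60, 80)).astype(float)  # synthetic counts
c_star, mse_star = optimal_cap(gaze_maps, noise_factor=0.02)
print(f"optimal cap: {c_star}, expected MSE: {mse_star:.5f}")
```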

Then, we analyze the complexity of Algorithm 1 in the next theorem, which says Algorithm 1 has linear-time complexity (linear in the total number of gaze-map pixels n·m for each candidate cap).

Theorem 4.2 (Complexity of Algorithm 1).

The complexity of Algorithm 1 is O(c_max · n · m).

Proof.

Rewriting the expected MSE in Theorem 4.1, we have

E[MSE(Ĝ_{c,σ})] = (1/m) · ‖Ĝ_{∞,0} − Ĝ_{c,0}‖² + σ²,

where the norm still represents the Frobenius norm. Since Ĝ_{∞,0} and Ĝ_{c,0} are m-dimensional vectors, the complexity of computing the expected MSE for a given c and σ is O(m).

Then, we evaluate the complexity of calculating the capped noise-free aggregated gaze map Ĝ_{c,0}. Since we are applying the cap to each observer’s individual gaze map, we cap every pixel of every observer, so there are n·m pixels in total and the aggregation costs O(n·m) per candidate cap. Considering that the for loop in Algorithm 1 runs c_max times, the total complexity is O(c_max · n · m), and Theorem 4.2 follows. ∎

5. Implications

5.0.1. Datasets are growing.

In contrast to the previous research paradigm where datasets were collected, archived, and then released, there is a growing trend to crowd-source data collection, via mobile apps for example, so that new data is continually being added to the dataset. With the methods presented, the new data is safe as long as the publicly available dataset is put through the Gaussian noise mechanism. Another way that eye tracking datasets might seek to preserve a user’s privacy is by releasing their eye movements, but not what they were looking at. With the methods we present, releasing the stimulus image/video that observers look at is safe because even in the worst case an adversary will not be able to guess at what a particular individual looked at.

5.0.2. Why can the generic theorem of differential privacy not be applied to eye tracking?

Unlike classical databases, every observer in an eye tracking database contributes much richer information (i.e., millions of pixels) than individuals in classical databases. However, the generic theorems in differential privacy do not focus on high-dimensional data. Simply applying union bounds will result in very loose privacy bounds and unacceptable noise levels.

5.0.3. Why are we adding noise when the field is spending so much time and effort removing it?

There has been much research on improving the accuracy of eye tracking to maximize its utility and applicability for diverse use cases. This work has been directed at sources of noise that are inherent to the process, such as sensor and measurement noise. However, as eye tracking becomes ubiquitous, there is a cost for the individual user whose data is being recorded and for the organizations who are safeguarding and distributing this data. This cost is the sacrifice of privacy of the individual. We do not argue for reversing the technological push towards reliable, accurate eye tracking. Rather, we argue that our objective as a community must expand to include privacy in addition to utility. For those situations where privacy is deemed to be worth protecting, we introduce flexible mechanisms to do so. Noise is added to data in the aggregate, not to any individual’s data. Further, the noise function is fully understood, and its parameters are set based on the desired privacy-utility tradeoff. Unlike measurement noise, whose source may not be fully understood, we add noise in a controlled and measured way to achieve a specific objective.

5.0.4. Why should the research community care?

This research requires an interdisciplinary approach. The eye tracking community cannot just “leave it to the privacy researchers” because the theoretical guarantees that form the basis of this framework are highly dependent on the particular mechanisms that the data goes through (the functional forms in the equations, the particular thresholds, etc.). These mechanisms have to be developed collaboratively to preserve the utility of the output for eye tracking applications.

5.0.5. Why should the industry care?

The push towards ubiquitous eye tracking is being driven by large investments by major industry players. While their applications are highly data-dependent, their customers are increasingly data-sensitive. This paper proposes the first of a class of solutions which pair theoretical analysis from a DP-perspective with a practically implementable workflow for developers. This work opens the door for a responsible industry that can inform their users that while they may eye track the users at very high accuracy and resolution to enable foveated rendering (for example), they would put this data through mechanism A or B before releasing it to the app developers.

6. Conclusions and Future directions

We have proposed to apply the notion of differential privacy toward the analysis of privacy of eye-tracking data. We have analyzed the privacy guarantees provided by the mechanisms of random selection and additive noise (Gaussian and Laplacian noise). The main takeaway from this paper is that adding Gaussian noise will guarantee differential privacy; the noise level should be appropriately selected based on the application. Our focus is on static heatmaps as a sandbox to understand how the definitions of differential privacy apply to eye tracking data. In this sense, this paper is a proof of concept. Eye tracking data is fundamentally temporal in nature, and the privacy loss if an adversary could access saccade velocities and dynamic attention allocation would be much greater than with static heatmaps. Future work would systematically consider all the different ways in which eye tracking data is analyzed and stored.

We have considered two noise models (Gaussian and Laplacian noise). Follow up work might consider the privacy-utility trade-off for different noise models like pink noise. For temporal data such as raw eye movements, it may even be relevant to understand which noise models are more realistic. In other words, if the user’s virtual avatar was driven by privacy enhanced eye tracking data, it should still appear realistic and natural.

The mechanisms and analyses presented here apply to real-valued data that can be aligned to a grid and capped to a maximum value without loss of utility. Though our focus has been on eye tracking heatmaps, there are other data that fall in this category, for example, gestures on a touchscreen, or readings from a force plate. It would also be interesting to generalize these mechanisms and analyses to other physiological data such as heart rate, galvanic skin response, and even gestures or gait. These data are conceptually similar to eye tracking data in that they carry signatures of the individual’s identity and markers of their health and well-being. Furthermore, in physiological domains many data and analyses are temporal in nature. It would be interesting and important to define and analyze differential privacy for temporal data.

Acknowledgements.
This material is based upon work supported by the National Science Foundation under Grant No. 1566481. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


References

  • [1] Eu general data protection regulation. https://eugdpr.org, 2018.
  • [2] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM, 2016.
  • [3] Lewis Chuang, Andrew Duchowski, Pernilla Qvarfordt, and Daniel Weiskopf, editors. Ubiquitous Gaze Sensing and Interaction, volume 8 of Dagstuhl Reports, Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Germany, 2018. Dagstuhl Publishing.
  • [4] Andrew T. Duchowski. Gaze-based interaction: A 30 year retrospective. Computers & Graphics, 2018. Special Section on Serious Games and Virtual Environments.
  • [5] Andrew T Duchowski, Margaux M Price, Miriah Meyer, and Pilar Orero. Aggregate gaze visualization with real-time heatmaps. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 13–20. ACM, 2012.
  • [6] Cynthia Dwork. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.
  • [7] Cynthia Dwork. A firm foundation for private data analysis. Communications of ACM, 54(1):86–95, 2011.
  • [8] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
  • [9] G. Elias, G. Sherwin, and J. Wise. Eye movements while viewing NTSC format television. Technical report, SMPTE Psychophysics Subcommittee, March 1984.
  • [10] Liyue Fan and Li Xiong. An adaptive approach to real-time aggregate monitoring with differential privacy. IEEE Transactions on knowledge and data engineering, 26(9):2094–2106, 2014.
  • [11] Hany Farid and Mary J Bravo. Perceptual discrimination of computer generated and photographic faces. Digital Investigation, 8(3-4):226–235, 2012.
  • [12] Jefferson Graham. Is apple really better about privacy? here’s what we found out. https://www.usatoday.com/story/tech/talkingtech/2018/04/17/apple-make-simpler-download-your-privacy-data-year/521786002/, April 2018.
  • [13] Corey Holland and Oleg V Komogortsev. Biometric identification via eye movement scanpaths in reading. In Biometrics (IJCB), 2011 International Joint Conference on, pages 1–8. IEEE, 2011.
  • [14] Halszka Jarodzka, Kenneth Holmqvist, and Hans Gruber. Eye tracking in educational science: Theoretical frameworks and research agendas. Journal of Eye Movement Research, 10(1), 2017.
  • [15] Zhanglong Ji, Zachary C Lipton, and Charles Elkan. Differential privacy and machine learning: a survey and review. arXiv preprint arXiv:1412.7584, 2014.
  • [16] Mohamed Khamis, Florian Alt, and Andreas Bulling. The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), pages 38:1–38:17, 2018.
  • [17] Palanivel Kodeswaran and Evelyne Viegas. Applying differential privacy to search queries in a policy based interactive framework. In Proceedings of the ACM first international workshop on Privacy and anonymity for very large databases, pages 25–32. ACM, 2009.
  • [18] Oleg V Komogortsev, Sampath Jayarathna, Cecilia R Aragon, and Mechehoul Mahmoud. Biometric identification via an oculomotor plant mathematical model. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pages 57–60. ACM, 2010.
  • [19] R John Leigh and David S Zee. The neurology of eye movements. Oxford University Press, USA, 2015.
  • [20] Daniel J Liebling and Sören Preibusch. Privacy considerations for a pervasive eye tracking world. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, pages 1169–1177. ACM, 2014.
  • [21] Rich Ling, Diako Mardanbeigi, and Dan Witzner Hansen. Synergies between head-mounted displays and headmounted eye tracking: The trajectory of development and its social consequences. In Living inside social mobile information, pages 131–156, 2251 Arbor Blvd. Dayton, OH 45439, USA, 2014.
  • [22] Ao Liu, Yun Lu, Lirong Xia, and Vassilis Zikas. How private is your voting? a framework for comparing the privacy of voting mechanisms. arXiv preprint arXiv:1805.05750, 2018.
  • [23] Brandon Mader, Martin S Banks, and Hany Farid. Identifying computer-generated portraits: The importance of training and incentives. Perception, 46(9):1062–1076, 2017.
  • [24] Parag K. Mital, Tim J. Smith, Robin L. Hill, and John M. Henderson. Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion. Cognitive Computation, 3:5–24, 2011.
  • [25] Kobbi Nissim, Thomas Steinke, Alexandra Wood, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, David R O’Brien, and Salil Vadhan. Differential privacy: A primer for a non-technical audience. Working Group Privacy Tools Sharing Res. Data, Harvard Univ., Boston, MA, USA, Tech. Rep. TR-2017-03, 2017.
  • [26] Paul Ohm. Broken promises of privacy: Responding to the surprising failure of anonymization. Ucla L. Rev., 57:1701, 2009.
  • [27] Marc Pomplun, Helge Ritter, and Boris Velichkovsky. Disambiguating complex visual information: Towards communication of personal views of a scene. Perception, 25(8):931–948, 1996.
  • [28] Pallavi Raiturkar, Hany Farid, and Eakta Jain. Identifying computer-generated portraits: an eye tracking study. Technical report, University of Florida, 08 2018.
  • [29] Vibhor Rastogi and Suman Nath. Differentially private aggregation of distributed time-series with transformation and encryption. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pages 735–746. ACM, 2010.
  • [30] Marnix S. van Gisbergen, Jeroen van der Most, and Paul Aelen. Visual Attention to Online Search Engine Results. Technical report, De Vos & Jansen in cooperation with Checkit, 2007. URL: http://www.iprospect.nl/wp-content/themes/iprospect/pdf/checkit/eyetracking_research.pdf (last accessed Dec. 2011).
  • [31] David S Wooding. Fixation maps: quantifying eye-movement traces. In Proceedings of the Symposium on Eye tracking research & Applications (ETRA), pages 31–36. ACM, 2002.

Appendix A Missing proofs for theorems

A.1. Proof and discussion for Theorem 3.3

Theorem 3.3 says M_rep has poor privacy.

Proof.

Considering the same case as in the proof of Theorem 3.2 (a single pixel that only the i-th observer looked at), the output is nonzero only if the i-th observer is selected at least once among the k draws with replacement, so we have

Pr[M_rep(G_i, G_{−i}) ∈ S] = 1 − (1 − 1/n)^k ≥ 1/n  and  Pr[M_rep(G_i′, G_{−i}) ∈ S] = 0.

Thus, we know δ cannot be less than 1 − (1 − 1/n)^k ≥ 1/n to make Inequality (2) hold, and Theorem 3.3 follows. ∎

A.2. Proof for Theorem 3.5

Proof.

The probability density function of the output Ĝ of M_Lap is

p(Ĝ) = (2b)^(−m) · exp(−‖Ĝ − Ḡ‖₁ / b),

where ‖·‖₁ denotes the entry-wise ℓ₁ norm (the sum of absolute values of all entries); to simplify notation, we write ‖·‖₁ without ambiguity. Let G_i and G_i′ denote any two possible gaze maps of the i-th observer, and (abusing notation) let Ḡ_{−i} = (1/n) Σ_{j≠i} G_j be the aggregated gaze map excluding the i-th observer. If the i-th observer’s gaze map is G_i, the probability density function of the output Ĝ is

p(Ĝ | G_i) = (2b)^(−m) · exp(−‖Ĝ − Ḡ_{−i} − G_i/n‖₁ / b).

Similarly, if the i-th observer’s gaze map is G_i′, we have

p(Ĝ | G_i′) = (2b)^(−m) · exp(−‖Ĝ − Ḡ_{−i} − G_i′/n‖₁ / b).

For any Ĝ, G_i and G_i′, by the triangle inequality we have

p(Ĝ | G_i) / p(Ĝ | G_i′) ≤ exp(‖G_i − G_i′‖₁ / (n·b)) ≤ exp(c·m / (n·b)) ≤ e^ε,

where the last inequality uses the noise level required by Theorem 3.5. Since the probability is the integral of the PDF, the above upper bound for the PDF ratio is also an upper bound for the probability ratio. Thus, for any possible output set S, we have

Pr[M_Lap(G_i, G_{−i}) ∈ S] ≤ e^ε · Pr[M_Lap(G_i′, G_{−i}) ∈ S],

and Theorem 3.5 follows by applying Definition 3.1 (with δ = 0). ∎