Learning and Anticipating Future Actions During Exploratory Data Analysis

09/25/2018 ∙ by Ran Wan, et al. ∙ Washington University in St Louis

The goal of visual analytics is to create a symbiosis between human and computer by leveraging their unique strengths. While this model has demonstrated immense success, we have yet to realize the full potential of such a human-computer partnership. In a perfect collaborative mixed-initiative system, the computer must possess skills for learning and anticipating the user's needs. Addressing this gap, we propose a framework for inferring focus areas from passive observations of the user's actions, thereby allowing accurate predictions of future events. We evaluate this technique with a crime map and demonstrate that users' clicks appear in our prediction set 95% of the time. Further analysis shows that we can achieve high prediction accuracy, typically after three clicks. Altogether, we show that passive observations of interaction data can reveal valuable information that will allow the system to learn and anticipate future events, laying the foundation for next-generation tools.


1. Introduction

The overarching goal of visual analytics is to create a symbiosis between human and machine. Visualization serves as a medium that allows users to collaborate with computers in ways that take advantage of their distinct strengths (Keim et al., 2008). Both Crouser and Chang (Crouser and Chang, 2012; Crouser et al., 2013) and Green et al. (Green et al., 2008) describe an affordance-based partnership model that pairs the human’s unique skills (e.g., reasoning and social awareness) with the machine’s computational power. Typically, the human drives the analysis process by exploring the data to form hypotheses and develop insights. Success in the analytic process hinges on the user’s ability to perform meaningful interactions with the data and on the machine’s ability to provide the right information at the right time (Ellis and Mansmann, 2010; Keim et al., 2008).

Although this model has shown remarkable success, for many analysts, the complaint “too much data – not enough information” is still all too common (Newsom Jr et al., 2013). In many ways, the tools fall short of their full potential. A useful collaborative tool should possess the ability to learn about what the user is doing, what the user will be doing, what the user ought to be doing, and whether the current trajectory will solve the problem at hand. Current visual analytics tools do not yet possess the ability to learn and anticipate actions, and therefore are unable to tailor their outputs.

These considerations have, in large part, driven the goals of many visual analytics researchers. To understand what the user is doing, Pirolli and Card introduced the sensemaking loop, which models an analyst’s progression from information foraging through hypothesis generation and insight (Pirolli and Card, 1999). Researchers have also created taxonomies for the types of tasks and interactions that are feasible for a given visualization (Amar et al., 2005; Buja et al., 1996; Chuah and Roth, 1996; Dix and Ellis, 1998; Gotz and Zhou, 2009; Lee et al., 2006; Keim, 2002; Shneiderman, 1996; Wilkinson, 2006; Yi et al., 2007; Zhou and Feiner, 1998). Researchers have demonstrated automatic and manual techniques for tracking workflow (Andrienko et al., 2018; Cowley et al., 2005; Bavoil et al., 2005; Dabek and Caban, 2017; Callahan et al., 2006; Freire et al., 2006; Heer et al., 2008; Javed and Elmqvist, 2013), analysis strategies (Brown et al., 2014; Ottley et al., 2015), and personality (Brown et al., 2014; Ottley et al., 2015; Steichen et al., 2013; Toker et al., 2013). To understand what the user ought to be doing, researchers introduced techniques for detecting cognitive biases from interaction data (Cho et al., 2017; Dimara et al., 2017; Wall et al., 2017). While these past efforts have demonstrated some success, predicting future events is still an open challenge, making it difficult to realize a human-computer team that genuinely operates in tandem.

The work in this paper builds on these prior results and aims to develop automatic techniques for learning and anticipating events during visual data exploration. We propose a context-aware, data-driven prediction system that integrates advancements from artificial intelligence within a visualization tool to detect future interactions. Specifically, we create a hidden Markov model that represents evolving attention as a series of unobservable states giving rise to actions. We can then automatically infer elements of interest from passive observations of the user’s actions, thereby allowing accurate predictions of future interactions.

For a proof of concept, we conducted a controlled user study and collected click-stream data as participants explored a map visualization of reported crimes (see figure 4). Our results show that the probabilistic model can achieve, depending on the type of task, between 95% and 97% accuracy at predicting future mouse clicks from observations of users’ click behavior. Further analysis shows that we can achieve high prediction accuracy in a short period (typically after three clicks). Altogether, we show that passive observations of interaction data can reveal valuable information about users’ attention.

We posit that the work in this paper opens the door for many opportunities to improve analysts’ experience and lays the foundation for next-generation visual analytics systems. For instance, the machine can proactively perform tasks such as prefetching, calculation of summary statistics, suggestion formation, bias or error identification, and target selection assistance for overcrowded interfaces. We discuss how the proposed technique can help create next-generation visual analytics systems that automatically learn users’ focus to better support the analysis process.

We make the following contributions:

  • A design-agnostic approach to modeling interaction with visualization: We provide a design-agnostic approach for automatically anticipating future events during data exploration and demonstrate, using a crime map, how to model users’ interests and actions.

  • Predicting future clicks from passive observations: We demonstrate how to apply this model to a real-world visualization and dataset. Our proof-of-concept experiment validates that we can use this approach on real systems for real-time predictions. We demonstrate that participants’ clicks appear in our prediction set on average 95% of the time.

  • Implications for designing mixed-initiative visualization tools: We discuss techniques for supporting the user in real time and contribute to the design of next-generation visual analytics systems.

2. Prior Work on Learning from Interaction Logs

Analyzing interactions to learn about the user or an interface design has been an important area of research across many fields. For example, in machine learning, researchers have used interaction data to model and predict users’ browsing behaviors on websites and web search systems (Eirinaki and Vazirgiannis, 2003; Kolari and Joshi, 2004; Kosala and Blockeel, 2000; Srivastava et al., 2000). Some researchers have also used interaction data to explore how interface design can bias user behaviors (Guan and Cutrell, 2007; Joachims et al., 2017) and how to overcome these biases (Joachims et al., 2017).

In databases, Battle et al. (Battle et al., 2016) analyzed interaction data to improve prefetching techniques. They showed that analyzing behavioral data resulted in a 430% improvement in system latency. In the HCI field, researchers showed that displaying the interaction history of past users improves the problem-solving of future users (Wexelblat and Maes, 1999). Furthermore, Gajos et al. developed the SUPPLE system, which can learn the type and degree of a user’s disability by analyzing mouse interaction data (Gajos and Weld, 2004; Gajos et al., 2008a, b). Fu et al. developed statistical and machine-learning models to predict behavior on crowdsourcing annotation and web search tasks (Fu et al., 2017). These are just a few of the many examples of related work across a vast number of research communities. However, most relevant to the work in this paper is research in the area of analytic provenance.

2.1. Analytic Provenance

It is a common belief that interaction logs contain crucial information about an analyst’s reasoning process with a visualization (Pike et al., 2009). Through interaction with a visual interface, analysts explore data, form and revise hypotheses, and make judgments. The term provenance refers to the history of an object or idea, and analytic provenance researchers aim to track and analyze the analytics process (Freire et al., 2008; Ragan et al., 2016; Nguyen et al., 2014; North et al., 2011). At a high level, the goal is to automatically capture and encode interactions with a visual interface to infer analysts’ goals and intentions. Researchers and practitioners can then recall, replicate, recover actions, communicate, present, and perform meta-analyses on the analysis process (Ragan et al., 2016).

A standard approach to recovering the analytic process is to capture low-level user actions such as mouse and keyboard events. For example, Cowley et al. developed Glassbox with the goal of logging interactions to infer intent, knowledge, and workflow automatically (Cowley et al., 2005). Dou et al. demonstrated that it is possible to extract high-level information from interaction data (Dou et al., 2009). They conducted a user study and recorded interactions while financial analysts used a visual analytics system to detect wire fraud. Through a manual analysis of the interaction data, they showed that it is possible to recover analysts’ strategies, methods, and findings. More recent work by Feng et al. demonstrated metrics for quantifying data exploration (Feng et al., 2018). Dabek and Caban introduced a grammar-based approach to modeling user interactions (Dabek and Caban, 2017). They used automatons to model users’ behavior and demonstrated that their technique could capture the user’s analytic process.

A line of work has focused on recording, annotating, and maintaining interaction history, demonstrating the benefit of preserving a linear history for future use (Bavoil et al., 2005; Brodlie et al., 1993; Callahan et al., 2006; Gotz et al., 2006; Heer et al., 2008; Shrinivasan and van Wijk, 2008). VisTrails, for instance, automatically keeps track of the analyst’s workflow and pipeline, making it possible for the user to resume, reuse, and share their explorations (Bavoil et al., 2005; Callahan et al., 2006; Freire et al., 2006). Heer et al. (Heer et al., 2008) and Javed and Elmqvist (Javed and Elmqvist, 2013) created graphical history tools that allow users to track, recall, and share their process. Gotz et al. developed tools for supporting the sensemaking process by augmenting existing data with user annotations (Gotz et al., 2006).

2.2. Analyzing Interaction to Infer User Attributes

Researchers have also used interaction logs to infer user knowledge or intent. Brown et al. used Dis-Function to learn analysts’ knowledge through direct manipulation of visual elements (Brown et al., 2012). Users expressed their domain knowledge by grouping similar points. The system then used this information to update the underlying distance function for the data projection. Prior work also demonstrates how interaction data can be used to steer computation and refine model parameters (Endert et al., 2012b, a; Garg et al., 2008; Paurat et al., 2014; Saket et al., 2017; Sarvghad et al., 2018; Xiao et al., 2006). For example, Endert et al. (Endert et al., 2012b, a) designed ForceSPIRE, which is a text data analysis tool that automatically updates the underlying layout model as users interact with documents. Guo et al. analyzed interaction logs to understand how analysts achieve insights (Guo et al., 2016).

Other researchers analyzed interaction data to infer individual characteristics. For instance, recent work by Wall et al. introduced a framework for quantifying different types of biases and proposed a Markov chain technique for identifying biases in real time (Wall et al., 2017). Work by Brown et al. used machine-learning techniques to infer user attributes automatically (Brown et al., 2014). They showed that off-the-shelf algorithms could successfully predict completion time and personality traits based on low-level mouse clicks and moves (Brown et al., 2014). They also demonstrated the viability of making real-time inferences from passive observations. Ottley et al. analyzed clickstream data to demonstrate a correlation between personality traits and search strategies with hierarchical visualizations (Ottley et al., 2015). Lu et al. used eye-tracking data to select parameters for a visualization automatically (Lu et al., 2010). Also utilizing eye gaze data, Steichen et al. (Steichen et al., 2013) and Toker et al. (Toker et al., 2013) predicted cognitive traits such as visual working memory, personality, and perceptual speed.

3. General Modeling Framework

Figure 2. Extracting low-level features.

The previous section recaps prior work aimed at learning from interaction data. The approaches proposed in the visualization community have focused primarily on analyzing behavior for tracking analytic provenance. In this work, we propose and demonstrate, to the best of our knowledge for the first time, a model for predicting mouse interactions before they occur.

The goal is to create a computational model that is task- and design-agnostic. We use a bottom-up approach that utilizes low-level visual features that we extract from the visualization design (e.g., color, shape, and position). We make passive observations of low-level interaction and consider the properties associated with each visual element the user interacts with. It is important to note that low-level features do not incorporate top-down signals that may be derived from the task at hand. However, the extraction process (detailed in figure 2) is robust and has proved successful for modeling visual attention in images (Itti et al., 1998; Itti and Koch, 2000, 2001; Koch and Ullman, 1987).

Figure 3. A hidden Markov model approach to modeling attention and actions with a visualization system. We represent evolving attention as a sequence of latent variables in the hidden state space. Observable states are the user’s actions. The conditional distribution of each observation depends on the state of the corresponding latent variable.

We construct a hidden Markov model, presuming the user’s attention evolves under a Markov process (that is, the attention at a particular time only depends on their attention at the previous time step), and interaction events are generated conditionally independently given this sequence of attention shifts. Figure 3 shows an overview of the hidden Markov model used. We represent selective attention as a sequence of latent variables. The conditional distribution of each observation depends on the state of the corresponding latent variable. To specify this model, we need to define the following:

  • Unobservable states: A space of the possible “interests” driven by the salient visual features.

  • Observable states: A space of possible interactions.

  • Dynamical model: A model of the evolution of the user’s attention over time.

  • Observation model: A model of how attention gives rise to observed actions.
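To make these four components concrete, they can be sketched as a small class. This is a minimal, hypothetical sketch: the class and function names are illustrative, and `transition` and `likelihood` stand in for the dynamical and observation models defined in the following sections.

```python
import random

class AttentionHMM:
    """Minimal sketch of the hidden Markov model components described above.

    transition : dynamical model, samples z_t given z_{t-1}
    likelihood : observation model, evaluates p(o_t | z_t)
    states     : samples (hypotheses) of the unobservable attention state
    """

    def __init__(self, transition, likelihood, initial_states):
        self.transition = transition
        self.likelihood = likelihood
        self.states = list(initial_states)

    def step(self, observation):
        """Advance one time step given an observed interaction."""
        # Evolve each hypothesized attention state through the dynamics.
        self.states = [self.transition(z) for z in self.states]
        # Weight states by how well they explain the observed action.
        weights = [self.likelihood(observation, z) for z in self.states]
        total = sum(weights) or 1.0
        probs = [w / total for w in weights]
        # Resample states in proportion to their weights.
        self.states = random.choices(self.states, weights=probs,
                                     k=len(self.states))
```

With an identity transition and a likelihood that rewards agreement with the observation, repeated calls to `step` concentrate the hypotheses on states consistent with the observed interactions.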

3.1. Defining Unobservable and Observable States

First, we define a discrete time index t associated with interactions with a visualization. At the start of exploring the dataset, we define t = 0. This index then increments every time a participant interacts with a visual element. Our model presumes that there is a hidden, unobserved state z_t representing the attention of the user at time t. We assume that we can map the sequence of observed interactions to this hidden sequence of focus areas. The task we consider here is how to infer the hidden attention/focus of the user by observing their sequence of interactions.

In order to create a model of user interaction, we must first understand the mechanisms that drive the user to interact with a particular visual element. Our model assumes no expertise or prior knowledge from the user. We also assume that innate biological models of selective attention drive interactions. At a high level, we build on Koch and Ullman’s model of visual attention (Koch and Ullman, 1987) and learn a saliency map for a given time step.

3.1.1. Unobservable States

We therefore begin by segmenting the visualization based on the low-level visual features. We define M as the mark space, which specifies the types of visual marks and channels used in the visualization. Visual marks are geometric elements, and there are four primitive types: points, lines, areas, and volumes (Bertin, 1983). Visual channels describe the graphical properties of visual marks such as position, size, color, luminance, shape, texture, and orientation (Bertin, 1983). Together with Card et al.’s data-mapping principles (Card et al., 1999), these design guidelines can be used to describe any existing visual representation (Card et al., 2009). We create M by decomposing the visualization into its primitive visual marks and channels, as detailed in figure 2.

A crucial component of the probabilistic model is the specification of a hidden state space, which will represent the attention of the user at a given time. In general, we propose that designers can tailor this space for a given scenario. In many scenarios, we may reasonably assume the users’ attention at a given time to be related to some weighted subset of visualization marks, for example, visual marks of a particular size, color, shape, or in a specific location. In such a case we may define the latent attention at time t as z_t = (b_t, v_t), where b_t represents the feature weights and v_t represents the feature values describing the user’s focus at time t. We provide more details for the feature weights below.

3.1.2. Observable States

In contrast to the hidden attention space, the space of observed actions is typically easy to define. We may define o_t to be an observation of the user at time t, where this observation is an interaction event with a visual element (e.g., mouse clicks, mouse moves, eye gaze, etc.). We represent each observation as the set of feature values that describes the visual element.

Symbol: Description

t: the time at which an event occurs.

M: the mark space, the set of N visual features extracted from the visualization (e.g., position, size, and color).

o_t: the observed interaction at time t (e.g., click, gaze, and hover), represented as a set of values for the N features.

b_t: the bias vector over all N features.

z_t: the latent attention at time t.

Table 1. Mathematical symbols.

3.2. Dynamical Model

The full specification of a hidden Markov model requires defining a probabilistic model of the dynamics of the hidden state space, that is, how the user’s latent attention shifts from one time step to the next. We define z_t to be the latent attention of the user at time t.

3.2.1. Single Task

We model shifts of attention by defining a probability distribution p(z_t | z_{t-1}) describing the evolution of attention. We propose that this model should be reasonably easy to define in most visualization settings. In general, it is unlikely that the user’s focus will change rapidly from one interaction event to the next. Therefore we can often choose this dynamics model to represent a simple random diffusion in the latent space:

z_t = z_{t-1} + ε_t,

where ε_t is drawn from some appropriate noise distribution (e.g., zero-mean Gaussian noise for real-valued features, or a discrete distribution favoring z_{t-1} for discrete features; see also below). This model assumes that the focus of attention is likely to remain constant from time t-1 to t, with some slow decay as the user continues to interact with the system. This is consistent with psychological research suggesting that selective attention does not change drastically over time (Koch and Ullman, 1987).
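For a real-valued feature, this diffusion is essentially a one-line sampler. The sketch below is illustrative: the default sigma and the [lo, hi] domain are assumed tuning choices, not values from the paper, and the clamping anticipates the boundary handling discussed later.

```python
import random

def diffuse_continuous(value, sigma=0.05, lo=0.0, hi=1.0):
    """One random-walk step for a continuous attention feature.

    Adds zero-mean Gaussian noise with standard deviation sigma, then
    projects the result back onto the feasible domain [lo, hi].
    (sigma, lo, and hi are illustrative defaults.)
    """
    return min(hi, max(lo, value + random.gauss(0.0, sigma)))
```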

3.2.2. Multiple Tasks

If a visualization setting comprises a sequence of separate tasks, we may also construct dynamical models that loosely encode that the user’s attention may change in one of two ways. Either the current task has not yet completed, in which case we may assume a simple drift model as described above; otherwise, if the task has completed, we might model the attention at the next time step as being drawn from some broad distribution over the space of possible focus points. In such a construction our dynamical model would be a mixture distribution with two components, corresponding to the continuation of a task or the beginning of a new task. Such an approach has been used to model user intent in online games from observed low-level behavior (Garnett et al., 2014).

3.2.3. Bias

Koch and Ullman hypothesized that it is useful to consider bias when modeling attention shifts (Koch and Ullman, 1987). Similarly, recent work by Wall et al. (Wall et al., 2017) proposed a framework modeling different types of cognitive biases during visual data exploration. Motivated by the prior work, we adopt a bias vector b = (b_1, ..., b_N) to capture the relative importance of the various components of the mark space, where each b_i lies in [0, 1].

3.2.4. Evolution of Attention

For the dynamical model of the hidden state z_t, we assume that the attention at time t is typically similar to the attention at the previous time step t-1; that is, that attention does not change rapidly over time. We further assume that each component of the attention vector evolves independently:

p(z_t | z_{t-1}) = ∏_i p(z_{t,i} | z_{t-1,i}).

Continuous Features

For an arbitrary continuous feature such as position, we may model the evolution of the feature using additive zero-mean Gaussian noise:

z_{t,i} = z_{t-1,i} + ε, ε ~ N(0, σ_i²),

where the parameter σ_i² is the variance of the drift. For strictly positive values such as size or intensity, we could use a similar diffusion on the logarithm of the value instead, or we could simply project onto the feasible domain.

Categorical Features

One possibility for modeling the evolution of an arbitrary discrete parameter such as color or shape is a simple “biased coin flip” model favoring no change:

p(z_{t,i} | z_{t-1,i}) = (1 - θ) δ(z_{t-1,i}) + θ U({values ≠ z_{t-1,i}}),

where θ is a parameter modeling the fickleness of the user, δ(z_{t-1,i}) is the Kronecker delta distribution with support {z_{t-1,i}}, and U is the uniform distribution over the values not equal to z_{t-1,i}. This distribution effectively says that the user’s attention does not change with probability 1 - θ; otherwise, it changes to a different value with equal probability.
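A sampler for this biased coin flip is equally small. In this sketch, theta is the fickleness parameter described above; the default value is an assumption.

```python
import random

def diffuse_categorical(value, domain, theta=0.1):
    """'Biased coin flip' step for a discrete attention feature.

    With probability 1 - theta the value is kept; otherwise a new value
    is drawn uniformly from the remaining elements of the domain.
    """
    if random.random() < theta:
        others = [v for v in domain if v != value]
        return random.choice(others)
    return value
```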

Ordinal Features

We suggest treating ordinal features as either categorical or continuous and using one of the models above.

3.2.5. Bias

We also suggest that the relative importance of the various components of the mark space should remain relatively stable over time, and we can adopt a diffusion for the bias parameter as well:

b_t = b_{t-1} + ε, ε ~ N(0, σ_b²).

Note that we must account for boundary effects and normalization effects when defining the dynamical model; in practice, we may simply project out-of-range values onto their feasible domains.

3.3. Observation Model

We must also specify an observation model p(o_t | z_t), which defines how latent user attention generates interactions. We must take care to define such an observation model appropriately for a given scenario, and we will demonstrate how we might construct an explicit example in our user study below. In a visualization setting, defining a reasonable choice for such a model is relatively straightforward. If a user’s attention is represented by values in the same space as the visual elements in the visualization, we may often construct an observation model that loosely specifies that users interact with elements related to their hidden focus. We will show an explicit construction of such a model in Section 4.

3.4. Predicting Movement

Our goal at each time stamp is to predict the user’s possible next interactions given the set of the user’s previously observed events. To approach this goal, we will use our hidden Markov model to infer the attention of the user at time t, z_t, given the interactions up to time t, o_{1:t}. Unfortunately, this inference is usually not possible in closed form, but we can use a particle filter. Particle filtering is a well-established technique for inferring the hidden states of dynamical systems such as ours (Doucet et al., 2000; Gordon et al., 1993).

We represent our belief about the latent state given the previous events with a set of particles {z^(i)}, each particle a point in the attention space. These particles represent samples from the posterior distribution p(z_{t-1} | o_{1:t-1}). Suppose for induction that we have a set of such particles. Particle filtering proceeds by repeating the following steps:

  • We push the particles through the dynamical model by sampling a new value for each particle: z_t^(i) ~ p(z_t | z_{t-1}^(i)).

  • We observe the next interaction event o_t and weight the particles according to their agreement with the observation by evaluating the observation model: w_i = p(o_t | z_t^(i)).

  • We sample a new set of particles by sampling with replacement from the set of existing particles, with probability proportional to the weights w_i.

This set of resampled particles will represent a sample from the distribution p(z_t | o_{1:t}), and we may proceed inductively.

For each time stamp, we can thus obtain particles z_t^(i) representing the latent attention given all previous interaction events. However, a particle can lie at any location on the visualization, while our goal is to find the possible visual elements the user is going to interact with at the next time stamp.

To do this, we need one extra step. We treat every mark on the visualization as a potential candidate for the next interaction and sum the weight every particle contributes to that candidate using the observation model. The subset of marks with the highest weights forms our prediction set; notice that the size of this set is, at this point, arbitrary.
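The predict-weight-resample cycle and the construction of the prediction set can be sketched as follows. The function names are hypothetical, and `transition` and `likelihood` stand in for whatever dynamical and observation models a given visualization defines.

```python
import random

def particle_filter_step(particles, observation, transition, likelihood):
    """One cycle of the particle filter: diffuse, weight, resample."""
    # 1. Push each particle through the dynamical model.
    particles = [transition(z) for z in particles]
    # 2. Weight particles by agreement with the observed interaction.
    weights = [likelihood(observation, z) for z in particles]
    total = sum(weights) or 1.0
    # 3. Resample with replacement, proportional to the weights.
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))

def prediction_set(particles, marks, likelihood, k=5):
    """Score every mark by the total weight the particles contribute to it
    (via the observation model) and return the k highest-scoring marks."""
    scores = {m: sum(likelihood(m, z) for z in particles) for m in marks}
    return sorted(marks, key=lambda m: -scores[m])[:k]
```

In a simple one-dimensional illustration, particles spread over two locations collapse onto the observed one after a single step, and the top-scoring mark then tracks that location.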

4. Example Application

We now apply this model to a visualization interface (see figure 4). We chose a map for our study because of its broad application and use. Below we demonstrate how to define the hidden state space and discuss choices for the dynamical and observation models. In this example, we assume that users interact with visual marks by clicking on them.

Figure 4. The interface used in our experiment. Participants used their mouse to pan and zoom the map. A tooltip displayed information about the crimes on click.

4.1. Defining Unobservable and Observable States

We define o_t to be the click event at time t, which we will represent as a three-dimensional vector (x'_t, y'_t, c'_t), where x'_t and y'_t are the x- and y-coordinates of the click and c'_t is the color of the circle clicked, represented by a discrete integer-valued index ranging over the eight possible values. Note that we use prime symbols to indicate quantities associated with a click event.

Next, we will define a hidden state space modeling the attention of the user. Each point in this hidden space is a vector specifying (1) a location of interest, (2) a mark color of interest, and (3) a bias parameter indicating the relative importance of location and mark color. For this example, we represent the bias parameter as a number b_t in [0, 1], with 1 indicating a complete focus on location and 0 indicating a complete focus on mark color. A point in this latent attention space is thus a four-dimensional vector z_t = (x_t, y_t, c_t, b_t).

Our model assumes that at every discrete time step in the interaction process (each time the user makes a click), the user has an underlying attention corresponding to a vector in the attention space defined above. We seek to infer the attention of the user by observing the sequence of click events o_1, o_2, ..., o_t. We will approach this inference problem by creating a hidden Markov model and performing inference with particle filtering.

Our model is fully specified by a dynamical model describing how the hidden state evolves and an observation model describing how a hidden attention vector generates click events. We define each of these below.

4.2. Dynamical Model

Here, we adopt a simple stationary diffusion model. As detailed in Section 3.2.4, we assume that the four components of the attention vector evolve independently:

p(z_t | z_{t-1}) = p(x_t | x_{t-1}) p(y_t | y_{t-1}) p(c_t | c_{t-1}) p(b_t | b_{t-1}).

We model the evolution of the continuous location and location–color bias parameters with a simple Gaussian drift:

x_t = x_{t-1} + ε_x, y_t = y_{t-1} + ε_y, b_t = b_{t-1} + ε_b,

where ε_x ~ N(0, σ_x²), ε_y ~ N(0, σ_y²), and ε_b ~ N(0, σ_b²). The expected value of each parameter is equal to its previous value, with zero-mean Gaussian diffusion with parameter-dependent variance added. We will select the parameters σ_x², σ_y², and σ_b². Notice also that these three quantities are all bounded: the locations x_t and y_t indicate a position on the map and must lie in its domain, and the bias parameter b_t must lie in the interval [0, 1]. Therefore, we need to deal with cases when the diffused value steps outside the boundary. Here we simply adopt the rule that whenever a diffused value steps outside the boundary for a variable, we move it onto the boundary in the direction of diffusion. For example, if b_t diffuses to a value greater than 1, we set it to 1; likewise, if the diffused location lies beyond the width and height of the map, we project it onto the nearest point on the canvas boundary.

Lastly, because mark color is a categorical value, we cannot directly apply normal diffusion to it. Here we use a discrete analog of that diffusion, following our suggestion in Section 3.2.4. We define a transition probability θ and assume that with probability 1 - θ the latent mark color of interest does not change. Otherwise, a new mark color of interest is chosen from all possible values with equal probability:

p(c_t | c_{t-1}) = (1 - θ) δ(c_{t-1}) + θ U({c ≠ c_{t-1}}),

where δ(c_{t-1}) is a Kronecker delta distribution and U is a uniform distribution over the mark colors except c_{t-1}. Again, this choice models our assumption that attention typically changes slowly over time.
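One full dynamics step for this example's latent vector (x, y, c, b) can be sketched as below. The map dimensions, drift variances, and theta are illustrative assumptions, not the parameters used in the study.

```python
import random

MAP_W, MAP_H = 800.0, 600.0                    # assumed canvas size
SIGMA_X, SIGMA_Y, SIGMA_B = 20.0, 20.0, 0.05   # assumed drift parameters

def clamp(v, lo, hi):
    """Project an out-of-range value onto the boundary it crossed."""
    return min(hi, max(lo, v))

def diffuse_attention(x, y, c, b, theta=0.1, n_colors=8):
    """Gaussian drift on location and bias; 'coin flip' transition on color."""
    x = clamp(x + random.gauss(0.0, SIGMA_X), 0.0, MAP_W)
    y = clamp(y + random.gauss(0.0, SIGMA_Y), 0.0, MAP_H)
    b = clamp(b + random.gauss(0.0, SIGMA_B), 0.0, 1.0)
    if random.random() < theta:  # with probability theta, color interest shifts
        c = random.choice([k for k in range(n_colors) if k != c])
    return x, y, c, b
```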

4.3. Observation Model

We must also specify an observation model p(o_t | z_t) modeling the probability of a click event given the attention at time t. A brief summary of this observation model is that we flip a coin with heads probability equal to the location–color bias parameter b_t. If the coin lands heads, we assume the user is focusing on location and will probably click somewhere near the location (x_t, y_t). If not, we assume the user is focusing on mark color and will click on a mark of the color c_t. Specifically, we define:

p(o_t | z_t) = b_t N((x'_t, y'_t); (x_t, y_t), σ²I) + (1 - b_t) U({marks of color c_t}),

where U denotes a uniform distribution over the available marks of color c_t. This model therefore assumes that if the user is interested in position (with probability b_t), she will click on a position on the map with probability proportional to a Gaussian distribution centered on (x_t, y_t) with diagonal covariance σ²I. Again, we will specify these parameters.
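This mixture likelihood can be written directly as a sketch under the same assumptions; the default sigma and the mark-list representation are illustrative.

```python
import math

def click_likelihood(click, attention, marks, sigma=25.0):
    """p(o_t | z_t) for the map example: a bias-weighted mixture of a
    Gaussian around the attended location and a uniform distribution
    over the marks of the attended color. Parameter values are assumptions.

    click     : (x', y', c'), observed click position and mark color
    attention : (x, y, c, b), latent location, color, and location-color bias
    marks     : list of (x, y, color) tuples for every mark on the map
    """
    cx, cy, cc = click
    ax, ay, ac, bias = attention
    # Location term: isotropic Gaussian centered on the attended location.
    d2 = (cx - ax) ** 2 + (cy - ay) ** 2
    p_loc = math.exp(-d2 / (2.0 * sigma ** 2)) / (2.0 * math.pi * sigma ** 2)
    # Color term: uniform over the marks sharing the attended color.
    same_color = [m for m in marks if m[2] == ac]
    p_col = 1.0 / len(same_color) if (same_color and cc == ac) else 0.0
    return bias * p_loc + (1.0 - bias) * p_col
```

For instance, with the bias at 0 (pure color focus), a click on one of two marks of the attended color has likelihood 1/2, and a click on a differently colored mark has likelihood 0.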

4.3.1. Predicting Movements

To predict movements, we can apply the particle filter as described in Section 3.4. Figure 1 shows a simulation of the algorithm when applied to a simple scatter plot. The simulated user begins by clicking on blue dots at the center of the projection, and within a few clicks, interest predictions converge to circles of the representative color with similar locations. Later in the sequence, the user selects a different color circle in the same region, and subsequent predictions update to include circles of different colors in a more tightly defined area.

5. Evaluation

To test our approach, we designed a user study to track and analyze interactions. The dataset presented on the map consisted of reported crimes in the city of St. Louis for March 2017, which we gathered from the St. Louis Metropolitan Police Department’s database (St. Louis Metropolitan Police Department, [n. d.]). The dataset contained 20 features and 1951 instances of reported crime across eight categories: Homicide, Theft-Related, Assault, Arson, Fraud, Vandalism, Weapons, and Vagrancy.

To visualize the crime instances, we used a single visual mark (we represented each crime as a circle on the map). The visual channels used were position and color, which denoted the location and type of crime, respectively. To separate intentional from unintentional interactions, users interacted with the map by clicking on crime instances, which triggered a tooltip displaying information about the type of crime and when it occurred.

5.1. Participants

We recruited 30 participants via Amazon’s Mechanical Turk. Participants were 18 years or older and were from the United States. Each participant had a HIT approval rate greater than 90% with at least 50 approved HITs. We paid a base rate of $1.00, an additional $0.50 for every correct answer plus $1.00 for each of the two optional post-surveys they completed. The maximum reward was $6.00.

There were 17 women and 13 men in our subject pool, with ages ranging from 21 to 56 years. Sixty percent of the participants self-reported having at least a college education.

5.2. Task

In the main portion of the study, participants interacted with the crime map through panning, zooming, and clicking to complete six search tasks and their associated questions. We divided these questions into three task conditions. The three question types were meant to represent simple lookup tasks for which the participant had to consult the visualization:

  • Geo-Based: Different types of crime that are constrained to a specific geographical region.

  • Type-Based: Same types of crime across the entire map.

  • Mixed: Same types of crime and constrained to a specific geographic region.

The questions were simplified versions of real-world tasks that represented a potential interest. For instance, a person who is interested in buying a house may visit a crime map to learn about the types of crimes that frequently occur in the neighborhood (Geo-Based). A fire marshal may be interested in trends across reported cases of Arson (Type-Based), or an investor may want to learn about theft crimes that tend to occur near a potential business site (Mixed).

The Geo-Based questions asked the participants to count the number of crimes within a specified geographical location that had a specific property. For example, “Count the number of crimes that occurred during the AM hours in the red-shaded region.” Participants clicked on every crime instance (a total of 43 dots) in the specified region. They then chose their response from a series of multiple-choice options.

Unlike the Geo-Based questions, the Type-Based tasks were not bounded to a specific region. These questions required participants to explore the entire map and search for a specified category of crime. For instance, “How many cases of Arson occurred during the PM hours?” To answer the question correctly, the participant would click on each instance of Arson (a total of 14 violet dots) to count the number of cases that occurred during the PM hours.

For Mixed tasks, participants interacted with points of the same category of crime in a specified area. For example, “There are four types of Theft Related Crimes: Larceny, Burglary, Robbery, and Motor Vehicle Theft. Count the number of cases of Robbery in the red-shaded region.” Participants clicked on blue dots in the red-shaded area to reveal the tooltip (a total of 85 dots) and recorded the instances of Robbery.

While we used the same dataset throughout the experiment, each task focused on a different area of the map and a different type of crime. To answer the questions correctly, the task design required the participant to click on every valid point in the dataset. This ensured a reasonably rich and large interaction dataset.

5.3. Procedure

After selecting the task on Mechanical Turk, participants consented per [redacted for anonymity] IRB protocol. They read the instructions for the study, then the main portion of the study began with a short video demonstrating the features of the interface. Specifically, we showed instructions for panning and zooming, and how to activate the tooltip. The participant then completed the six search tasks and entered their answers for each by selecting the appropriate multiple choice response. The order of the six tasks was counterbalanced to prevent ordering effects. Once the tasks were done, they completed a short demographic questionnaire.

5.4. Data Collection and Cleaning

During the experiment, we recorded every mouse click event, tracking the data point, its coordinates, and a timestamp for the mouse event. Each participant completed six tasks (two per task type), resulting in 180 trials. To ensure the best quality data for our analysis, we removed trials with incorrect answers, and we further removed trials with fewer than five mouse click events. After cleaning, 78 trials remained (28, 23, and 27 trials for Geo-Based, Type-Based, and Mixed tasks, respectively).
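The cleaning step above amounts to a simple filter over the recorded trials. The field names in this sketch are illustrative, not the study's actual log schema.

```python
def clean_trials(trials):
    """Keep only trials with a correct answer and at least five mouse
    click events, mirroring the filtering described above."""
    return [t for t in trials
            if t["correct"] and len(t["clicks"]) >= 5]
```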

Figure 5. The average prediction accuracy across the three types of tasks. Our algorithm successfully predicted the users’ next click, on average, 95% of the time.

5.4.1. Predicting Movement

To predict movements, we applied the particle filter as described in Section 3.4. Although the choice of the prediction set size can be adapted to the application, our goal was a prediction set that is small relative to the size of the dataset. We set the prediction set size to 100, which represents roughly 5% of the dataset used in the study. This means that for a given timestep $t$, the algorithm chooses the 100 points with the highest likelihood of being clicked next.
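Forming the prediction set then amounts to ranking all data points by their likelihood under the current particle cloud and keeping the top 100. A minimal sketch follows; the scoring function and parameter values are our own simplification, not the paper's code.

```python
import math

def prediction_set(points, particles, k=100, sigma=20.0, b=0.8):
    """Return the k points most likely to be clicked next, scored by
    averaging the location-color mixture likelihood over all particles."""
    def score(pt):
        s = 0.0
        for p in particles:
            dx = pt["xy"][0] - p["xy"][0]
            dy = pt["xy"][1] - p["xy"][1]
            gauss = math.exp(-0.5 * (dx * dx + dy * dy) / sigma ** 2)
            color = 1.0 if pt["color"] == p["color"] else 0.0
            s += b * gauss + (1.0 - b) * color
        return s / len(particles)
    return sorted(points, key=score, reverse=True)[:k]
```

Points near the particles' attended locations, and of the attended color, rank highest and therefore enter the prediction set.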

5.4.2. Parameters

We used 1000 particles. The location scale parameters were again set to a fraction of the width and height of the map, and the probability of maintaining the same type of crime as the user’s attention was fixed to a constant value.

5.5. Results

5.5.1. Prediction Accuracy

After gathering the data, we analyzed our model’s ability to observe mouse clicks and predict interactions before they occur. To allow time for the algorithm to learn users’ attention, we begin our predictions after the first three observed clicks. If the click at time $t$ falls within our prediction set, we consider this a success. For each type of task (Geo-Based, Type-Based, and Mixed), we measured the overall predictive accuracy across all available clicks for all users as the fraction of predicted clicks that fell within the prediction set.
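This accuracy measure, the fraction of predicted clicks that land inside the prediction set, can be sketched as follows. The data representation (a sequence of clicked point IDs and a per-timestep prediction set) is a hypothetical one for illustration.

```python
def prediction_accuracy(clicks, prediction_sets, start=3):
    """Fraction of clicks from timestep `start` onward that fall inside
    the prediction set computed before that click was observed."""
    hits = sum(1 for t in range(start, len(clicks))
               if clicks[t] in prediction_sets[t])
    total = len(clicks) - start
    return hits / total if total > 0 else 0.0
```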

Figure 5 shows the model’s accuracy for each of the three tasks. For a prediction set of 100 points (5% of the dataset), our technique attained an average accuracy of 95% at predicting the users’ next clicks across all three task types. In other words, with high accuracy, we can predict that the next click will be within a small set of data points relative to the dataset.

Figure 6. The average accuracy over time for the three types of tasks in the study. After learning from 3 click interactions, the algorithm immediately achieves high prediction accuracy. We found that prediction accuracies remain fairly constant over time.

5.5.2. Accuracy Over Time

Our second analysis sought to evaluate our algorithm’s performance as a function of the number of clicks observed. For each task type, we measured the prediction accuracy (set size = 100) for the first 20 clicks observed. Consistent with our previous analysis, we began our predictions after the first three observed clicks. Figure 6 summarizes our findings. Our analysis reveals that the technique promptly achieves high prediction accuracy, and performance remains fairly constant with more observations.

6. Discussion & Future Work

The hidden Markov model is a general framework that is widely used for modeling sequence data in areas such as natural language processing (Manning and Schütze, 1999), speech recognition (Jelinek, 1997; Rabiner and Juang, 1993), and biological sequencing (Durbin et al., 1998; Sonnhammer et al., 1998). Here, however, we demonstrate its utility for modeling interest from interaction with a visualization system. There are many possible variations of the model, the implementation, and the parameter settings. Examples include choices for the diffusion parameters, the number of particles for the particle filter, and prediction set sizes. A designer may tune these parameters or customize them based on the visualization or task. We see this as a strength of the approach, one that can seed many opportunities for future work.

Although the evaluation uses a single interface, we posit that the approach in this paper is generalizable under transparent assumptions. We leverage data-mapping principles and the notion that we can represent a visualization as a set of primitive visual marks and channels. Designers can apply the approach to any visualization that can be specified in this manner. The model assumes that the visual marks are perceptually differentiable, and it relies heavily on good design practices. To specify a user’s evolving attention, we must first carefully define the mark space. One way to improve this process is to automatically extract the visual marks and channels from the visualization’s code; however, this is beyond the scope of the paper.

Modeling attention can be a rich signal for inferring goals, intentions, and interests (Horvitz, 1999a; Horvitz et al., 2003), and information about users’ current and future attention can be useful for allocating computational resources (Horvitz et al., 2003) or for supporting data exploration. For example, the system can perform pre-computation or pre-fetching based on its predictions. For large datasets that may have overlapping points, a straightforward approach is to redraw the points in the prediction set. Doing so can make it easier for users to interact with points that match their interests but may initially have been occluded by other visual marks. For more passive adaptations, designers can use the approach in this paper to inform techniques for target assistance (Bateman et al., 2011). The bubble cursor technique, for example, does not change the visual appearance of the interface but increases the click radius for the given target, making it more accessible (Grossman and Balakrishnan, 2005). Another possibility is target gravity, which attracts the mouse to the target (Bateman et al., 2011). Future work can explore how such predictions can be utilized to support the user during data exploration and analysis tasks.

The general idea of mixed-initiative systems (Allen et al., 1999; Horvitz, 1999a, b, 2007) or tailoring an interface based on users’ skills or needs has existed for many years in HCI (Gajos and Weld, 2004). Researchers have explored the tradeoff between providing support and minimizing disruptions (Afergan et al., 2013; Peck, 2014; Solovey et al., 2011; Treacy Solovey et al., 2015). The work in this paper aligns well with this broader research agenda. We believe that the proposed approach is a significant step toward creating tools that can automatically learn and anticipate future actions, and it opens possibilities for future work.

6.1. Future Work

One possible path for future work is to investigate the model’s performance on more complex tasks. In our experiment, we controlled the tasks by instructing participants to either search for a specific reported crime or identify a pattern in the dataset. While these tasks were designed based on realistic scenarios, they assume that the user has a specific and unchanging goal when interacting with the visualization. As a result, the search patterns we observed may not generalize to open-ended scenarios, or to situations where the user’s interests change while interacting with the data. It is also possible that there are scenarios where the user’s attention cannot be represented as a subspace of the visualization marks (e.g., attending to negative space). Future work can evaluate the approach with open-ended tasks.

The combination of visual marks and channels is an essential factor when defining the hidden state space for our probabilistic model. The map used in our experiment was simplistic compared to other real-world visual analytics systems. Future work can test the model using different combinations of visual variables and channels on a single map, or on an entirely different type of visualization. It is also common for designers to aggregate the data based on the zoom level of the interface. It is therefore essential to validate the technique by changing and increasing the size of the dataset, which can result in drastic changes in the appearance and number of visual marks.

7. Conclusion

In this paper, we have proposed a generalizable and design-agnostic approach to modeling users’ evolving attention and actions with a visualization system. We used a hidden Markov model and represented attention using the primitive visual marks and channels of the visualization design. We demonstrated with a simple map how to apply this approach to a given visualization design.

To evaluate this technique, we conducted a user study and captured interaction data as participants explored a map showing a real-world crime dataset. The results of the study demonstrate that the approach is highly successful at modeling interaction and predicting users’ next clicks. We observed an overall accuracy of 95% at guessing actions before they occur. These results are exciting and contribute to our overall goal of creating intelligent systems that learn about the user, her analysis process, and her task as she uses the system. We believe that the work in this paper is a significant step toward this goal and can act as a catalyst for future work aimed at developing visual analytic systems that can better support users.

Acknowledgements.
The authors thank the reviewers for their valuable comments and helpful suggestions.

References

  • Afergan et al. (2013) Daniel Afergan, Evan M Peck, Remco Chang, and Robert JK Jacob. 2013. Using passive input to adapt visualization systems to the individual. (2013).
  • Allen et al. (1999) JE Allen, Curry I Guinn, and E Horvtz. 1999. Mixed-initiative interaction. IEEE Intelligent Systems and their Applications 14, 5 (1999), 14–23.
  • Amar et al. (2005) Robert Amar, James Eagan, and John Stasko. 2005. Low-level components of analytic activity in information visualization. In Information Visualization, 2005. INFOVIS 2005. IEEE Symposium on. IEEE, 111–117.
  • Andrienko et al. (2018) Natalia Andrienko, Tim Lammarsch, Gennady Andrienko, Georg Fuchs, Daniel Keim, Silvia Miksch, and Andrea Rind. 2018. Viewing Visual Analytics as Model Building. In Computer Graphics Forum. Wiley Online Library.
  • Bateman et al. (2011) Scott Bateman, Regan L Mandryk, Tadeusz Stach, and Carl Gutwin. 2011. Target assistance for subtly balancing competitive play. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2355–2364.
  • Battle et al. (2016) Leilani Battle, Remco Chang, and Michael Stonebraker. 2016. Dynamic prefetching of data tiles for interactive visualization. In Proceedings of the 2016 International Conference on Management of Data. ACM, 1363–1375.
  • Bavoil et al. (2005) Louis Bavoil, Steven P Callahan, Patricia J Crossno, Juliana Freire, Carlos E Scheidegger, Cláudio T Silva, and Huy T Vo. 2005. Vistrails: Enabling interactive multiple-view visualizations. In Visualization, 2005. VIS 05. IEEE. IEEE, 135–142.
  • Bertin (1983) Jacques Bertin. 1983. Semiology of graphics: diagrams, networks, maps. (1983).
  • Brodlie et al. (1993) Ken Brodlie, Andrew Poon, Helen Wright, Lesley Brankin, Greg Banecki, and Alan Gay. 1993. GRASPARC: a problem solving environment integrating computation and visualization. In Proceedings of the 4th conference on Visualization’93. IEEE Computer Society, 102–109.
  • Brown et al. (2012) Eli T Brown, Jingjing Liu, Carla E Brodley, and Remco Chang. 2012. Dis-function: Learning distance functions interactively. In Visual Analytics Science and Technology (VAST), 2012 IEEE Conference on. IEEE, 83–92.
  • Brown et al. (2014) Eli T Brown, Alvitta Ottley, Helen Zhao, Quan Lin, Richard Souvenir, Alex Endert, and Remco Chang. 2014. Finding Waldo: Learning about users from their interactions. IEEE Transactions on visualization and computer graphics 20, 12 (2014), 1663–1672.
  • Buja et al. (1996) Andreas Buja, Dianne Cook, and Deborah F Swayne. 1996. Interactive high-dimensional data visualization. Journal of computational and graphical statistics 5, 1 (1996), 78–99.
  • Callahan et al. (2006) Steven P Callahan, Juliana Freire, Emanuele Santos, Carlos E Scheidegger, Cláudio T Silva, and Huy T Vo. 2006. VisTrails: visualization meets data management. In Proceedings of the 2006 ACM SIGMOD international conference on Management of data. ACM, 745–747.
  • Card et al. (2009) Stuart Card, JD Mackinlay, and B Shneiderman. 2009. Information visualization. Human-computer interaction: Design issues, solutions, and applications 181 (2009).
  • Card et al. (1999) Stuart K Card, Jock D Mackinlay, and Ben Shneiderman. 1999. Readings in information visualization: using vision to think. Morgan Kaufmann.
  • Cho et al. (2017) Isaac Cho, Ryan Wesslen, Alireza Karduni, Sashank Santhanam, Samira Shaikh, and Wenwen Dou. 2017. The Anchoring Effect in Decision-Making with Visual Analytics. In Visual Analytics Science and Technology (VAST), 2017 IEEE Conference on.
  • Chuah and Roth (1996) Mei C Chuah and Steven F Roth. 1996. On the semantics of interactive visualizations. In Information Visualization’96, Proceedings IEEE Symposium on. IEEE, 29–36.
  • Cowley et al. (2005) Paula Cowley, Lucy Nowell, and Jean Scholtz. 2005. Glass box: An instrumented infrastructure for supporting human interaction with information. In System Sciences, 2005. HICSS’05. Proceedings of the 38th Annual Hawaii International Conference on. IEEE, 296c–296c.
  • Crouser and Chang (2012) R Jordon Crouser and Remco Chang. 2012. An affordance-based framework for human computation and human-computer collaboration. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2859–2868.
  • Crouser et al. (2013) R Jordan Crouser, Alvitta Ottley, and Remco Chang. 2013. Balancing human and machine contributions in human computation systems. In Handbook of Human Computation. Springer, 615–623.
  • Dabek and Caban (2017) Filip Dabek and Jesus J Caban. 2017. A grammar-based approach for modeling user interactions and generating suggestions during the data exploration process. IEEE transactions on visualization and computer graphics 23, 1 (2017), 41–50.
  • Dimara et al. (2017) Evanthia Dimara, Anastasia Bezerianos, and Pierre Dragicevic. 2017. The attraction effect in information visualization. IEEE transactions on visualization and computer graphics 23, 1 (2017).
  • Dix and Ellis (1998) Alan Dix and Geoffrey Ellis. 1998. Starting simple: adding value to static visualisation through simple interaction. In Proceedings of the working conference on Advanced visual interfaces. ACM, 124–134.
  • Dou et al. (2009) Wenwen Dou, Dong Hyun Jeong, Felesia Stukes, William Ribarsky, Heather Richter Lipford, and Remco Chang. 2009. Recovering reasoning processes from user interactions. IEEE Computer Graphics and Applications 29, 3 (2009).
  • Doucet et al. (2000) Arnaud Doucet, Simon Godsill, and Christophe Andrieu. 2000. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and computing 10, 3 (2000), 197–208.
  • Durbin et al. (1998) Richard Durbin, Sean R Eddy, Anders Krogh, and Graeme Mitchison. 1998. Biological sequence analysis: probabilistic models of proteins and nucleic acids. Cambridge university press.
  • Eirinaki and Vazirgiannis (2003) Magdalini Eirinaki and Michalis Vazirgiannis. 2003. Web mining for web personalization. Transactions on Internet Technology (TOIT) 3, 1 (2003), 1–27.
  • Ellis and Mansmann (2010) Geoffrey Ellis and Florian Mansmann. 2010. Mastering the information age solving problems with visual analytics. In Eurographics, Vol. 2. 5.
  • Endert et al. (2012a) Alex Endert, Patrick Fiaux, and Chris North. 2012a. Semantic interaction for sensemaking: inferring analytical reasoning for model steering. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2879–2888.
  • Endert et al. (2012b) Alex Endert, Patrick Fiaux, and Chris North. 2012b. Semantic interaction for visual text analytics. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 473–482.
  • Feng et al. (2018) Mi Feng, Evan Peck, and Lane Harrison. 2018. Patterns and Pace: Quantifying Diverse Exploration Behavior with Visualizations on the Web. IEEE transactions on visualization and computer graphics (2018).
  • Freire et al. (2008) Juliana Freire, David Koop, Emanuele Santos, and Cláudio T Silva. 2008. Provenance for computational tasks: A survey. Computing in Science & Engineering 10, 3 (2008).
  • Freire et al. (2006) Juliana Freire, Cláudio T Silva, Steven P Callahan, Emanuele Santos, Carlos E Scheidegger, and Huy T Vo. 2006. Managing rapidly-evolving scientific workflows. In International Provenance and Annotation Workshop. Springer, 10–18.
  • Fu et al. (2017) Eugene Yujun Fu, Tiffany CK Kwok, Erin You Wu, Hong Va Leong, Grace Ngai, and Stephen CF Chan. 2017. Your Mouse Reveals Your Next Activity: Towards Predicting User Intention from Mouse Interaction. In Computer Software and Applications Conference (COMPSAC), 2017 IEEE 41st Annual, Vol. 1. IEEE, 869–874.
  • Gajos and Weld (2004) Krzysztof Gajos and Daniel S Weld. 2004. SUPPLE: automatically generating user interfaces. In Proceedings of the Ninth International Conference on Intelligent User Interfaces. ACM, 93–100.
  • Gajos et al. (2008a) Krzysztof Z Gajos, Daniel S Weld, and Jacob O Wobbrock. 2008a. Decision-Theoretic User Interface Generation.. In AAAI, Vol. 8. 1532–1536.
  • Gajos et al. (2008b) Krzysztof Z Gajos, Jacob O Wobbrock, and Daniel S Weld. 2008b. Improving the performance of motor-impaired users with automatically-generated, ability-based interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). ACM, 1257–1266.
  • Garg et al. (2008) Supriya Garg, Julia Eunju Nam, IV Ramakrishnan, and Klaus Mueller. 2008. Model-driven visual analytics. In Visual Analytics Science and Technology, 2008. VAST’08. IEEE Symposium on. IEEE, 19–26.
  • Garnett et al. (2014) Roman Garnett, Thomas Gärtner, Timothy Ellersiek, Eyjólfur Guðmondsson, and Pétur Óskarsson. 2014. Predicting Unexpected Influxes of Players in EVE Online. In Proceedings of the 2014 IEEE Conference on Computational Intelligence and Games.
  • Gordon et al. (1993) Neil J Gordon, David J Salmond, and Adrian FM Smith. 1993. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), Vol. 140. IET, 107–113.
  • Gotz and Zhou (2009) David Gotz and Michelle X Zhou. 2009. Characterizing users’ visual analytic activity for insight provenance. Information Visualization 8, 1 (2009), 42–55.
  • Gotz et al. (2006) David Gotz, Michelle X Zhou, and Vikram Aggarwal. 2006. Interactive visual synthesis of analytic knowledge. In Visual Analytics Science And Technology, 2006 IEEE Symposium On. IEEE, 51–58.
  • Green et al. (2008) Tera Marie Green, William Ribarsky, and Brian Fisher. 2008. Visual analytics for complex concepts using a human cognition model. In Visual Analytics Science and Technology, 2008. VAST’08. IEEE Symposium on. IEEE, 91–98.
  • Grossman and Balakrishnan (2005) Tovi Grossman and Ravin Balakrishnan. 2005. The bubble cursor: enhancing target acquisition by dynamic resizing of the cursor’s activation area. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 281–290.
  • Guan and Cutrell (2007) Zhiwei Guan and Edward Cutrell. 2007. An eye tracking study of the effect of target rank on web search. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 417–420.
  • Guo et al. (2016) Hua Guo, Steven R Gomez, Caroline Ziemkiewicz, and David H Laidlaw. 2016. A case study using visualization interaction logs and insight metrics to understand how analysts arrive at insights. IEEE transactions on visualization and computer graphics 22, 1 (2016), 51–60.
  • Heer et al. (2008) Jeffrey Heer, Jock Mackinlay, Chris Stolte, and Maneesh Agrawala. 2008. Graphical histories for visualization: Supporting analysis, communication, and evaluation. IEEE transactions on visualization and computer graphics 14, 6 (2008).
  • Horvitz (1999a) Eric Horvitz. 1999a. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. ACM, 159–166.
  • Horvitz (1999b) Eric Horvitz. 1999b. Uncertainty, action, and interaction: In pursuit of mixed-initiative computing. IEEE Intelligent Systems 14, 5 (1999), 17–20.
  • Horvitz et al. (2003) Eric Horvitz, Carl Kadie, Tim Paek, and David Hovel. 2003. Models of attention in computing and communication: from principles to applications. Commun. ACM 46, 3 (2003), 52–59.
  • Horvitz (2007) Eric J Horvitz. 2007. Reflections on challenges and promises of mixed-initiative interaction. AI Magazine 28, 2 (2007), 3.
  • Itti and Koch (2000) Laurent Itti and Christof Koch. 2000. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision research 40, 10-12 (2000), 1489–1506.
  • Itti and Koch (2001) Laurent Itti and Christof Koch. 2001. Computational modelling of visual attention. Nature reviews neuroscience 2, 3 (2001), 194.
  • Itti et al. (1998) Laurent Itti, Christof Koch, and Ernst Niebur. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence 20, 11 (1998), 1254–1259.
  • Javed and Elmqvist (2013) Waqas Javed and Niklas Elmqvist. 2013. ExPlates: spatializing interactive analysis to scaffold visual exploration. In Computer Graphics Forum, Vol. 32. Wiley Online Library, 441–450.
  • Jelinek (1997) Frederick Jelinek. 1997. Statistical methods for speech recognition. MIT press.
  • Joachims et al. (2017) Thorsten Joachims, Adith Swaminathan, and Tobias Schnabel. 2017. Unbiased learning-to-rank with biased feedback. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. ACM, 781–789.
  • Keim et al. (2008) Daniel Keim, Gennady Andrienko, Jean-Daniel Fekete, Carsten Görg, Jörn Kohlhammer, and Guy Melançon. 2008. Visual analytics: Definition, process, and challenges. In Information visualization. Springer, 154–175.
  • Keim (2002) Daniel A Keim. 2002. Information visualization and visual data mining. IEEE Transactions on Visualization & Computer Graphics 1 (2002), 1–8.
  • Koch and Ullman (1987) Christof Koch and Shimon Ullman. 1987. Shifts in selective visual attention: towards the underlying neural circuitry. In Matters of intelligence. Springer, 115–141.
  • Kolari and Joshi (2004) Pranam Kolari and Anupam Joshi. 2004. Web mining: Research and practice. Computing in science & engineering 6, 4 (2004), 49–53.
  • Kosala and Blockeel (2000) Raymond Kosala and Hendrik Blockeel. 2000. Web mining research: A survey. ACM SIGKDD Explorations Newsletter 2, 1 (2000), 1–15.
  • Lee et al. (2006) Bongshin Lee, Catherine Plaisant, Cynthia Sims Parr, Jean-Daniel Fekete, and Nathalie Henry. 2006. Task taxonomy for graph visualization. In Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization. ACM, 1–5.
  • Lu et al. (2010) Aidong Lu, Ross Maciejewski, and David S Ebert. 2010. Volume composition and evaluation using eye-tracking data. ACM Transactions on Applied Perception (TAP) 7, 1 (2010), 4.
  • Manning and Schütze (1999) Christopher D Manning and Hinrich Schütze. 1999. Foundations of statistical natural language processing. MIT press.
  • Newsom Jr et al. (2013) Benjamin Newsom Jr, Ranjeev Mittu, Ciara Sibley, and Myriam Abramson. 2013. Towards a Context-Aware Proactive Decision Support Framework. Technical Report. Naval Research Laboratory Washington United States.
  • Nguyen et al. (2014) Phong H Nguyen, Kai Xu, and BL Wong. 2014. A survey of analytic provenance. Middlesex University (2014).
  • North et al. (2011) Chris North, Remco Chang, Alex Endert, Wenwen Dou, Richard May, Bill Pike, and Glenn Fink. 2011. Analytic provenance: process+ interaction+ insight. In CHI’11 Extended Abstracts on Human Factors in Computing Systems. ACM, 33–36.
  • Ottley et al. (2015) Alvitta Ottley, Huahai Yang, and Remco Chang. 2015. Personality as a predictor of user strategy: How locus of control affects search strategies on tree visualizations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 3251–3254.
  • Paurat et al. (2014) Daniel Paurat, Roman Garnett, and Thomas Gärtner. 2014. Interactive exploration of larger pattern collections: A case study on a cocktail dataset. Proc. of KDD IDEA (2014), 98–106.
  • Peck (2014) Evan M Peck. 2014. Designing Brain-Computer Interfaces for Intelligent Information Delivery Systems. Ph.D. Dissertation. Tufts University.
  • Pike et al. (2009) William A Pike, John Stasko, Remco Chang, and Theresa A O’connell. 2009. The science of interaction. Information Visualization 8, 4 (2009), 263–274.
  • Pirolli and Card (1999) Peter Pirolli and Stuart Card. 1999. Information foraging. Psychological review 106, 4 (1999), 643.
  • Rabiner and Juang (1993) Lawrence R Rabiner and Biing-Hwang Juang. 1993. Fundamentals of speech recognition. Vol. 14. PTR Prentice Hall Englewood Cliffs.
  • Ragan et al. (2016) Eric D Ragan, Alex Endert, Jibonananda Sanyal, and Jian Chen. 2016. Characterizing provenance in visualization and data analysis: an organizational framework of provenance types and purposes. IEEE transactions on visualization and computer graphics 22, 1 (2016), 31–40.
  • Saket et al. (2017) Bahador Saket, Hannah Kim, Eli T Brown, and Alex Endert. 2017. Visualization by demonstration: An interaction paradigm for visual data exploration. IEEE transactions on visualization and computer graphics 23, 1 (2017), 331–340.
  • Sarvghad et al. (2018) Ali Sarvghad, Bahador Saket, Alex Endert, and Nadir Weibel. 2018. Embedded Merge & Split: Visual Adjustment of Data Grouping. IEEE transactions on visualization and computer graphics (2018).
  • Shneiderman (1996) Ben Shneiderman. 1996. The eyes have it: A task by data type taxonomy for information visualizations. In Visual Languages, 1996. Proceedings., IEEE Symposium on. IEEE, 336–343.
  • Shrinivasan and van Wijk (2008) Yedendra Babu Shrinivasan and Jarke J van Wijk. 2008. Supporting the analytical reasoning process in information visualization. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 1237–1246.
  • Solovey et al. (2011) Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, et al. 2011. Sensing cognitive multitasking for a brain-based adaptive user interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 383–392.
  • Sonnhammer et al. (1998) Erik LL Sonnhammer, Gunnar Von Heijne, Anders Krogh, et al. 1998. A hidden Markov model for predicting transmembrane helices in protein sequences.. In Ismb, Vol. 6. 175–182.
  • Srivastava et al. (2000) Jaideep Srivastava, Robert Cooley, Mukund Deshpande, and Pang-Ning Tan. 2000. Web usage mining: Discovery and applications of usage patterns from web data. ACM SIGKDD Explorations Newsletter 1, 2 (2000), 12–23.
  • St. Louis Metropolitan Police Department ([n. d.]) St. Louis Metropolitan Police Department. [n. d.]. http://www.slmpd.org/crime_mapping.shtml. Accessed November 13, 2017.
  • Steichen et al. (2013) Ben Steichen, Giuseppe Carenini, and Cristina Conati. 2013. User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities. In Proceedings of the 2013 international Conference on Intelligent User Interfaces. ACM, 317–328.
  • Toker et al. (2013) Dereck Toker, Cristina Conati, Ben Steichen, and Giuseppe Carenini. 2013. Individual user characteristics and information visualization: Connecting the dots through eye tracking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI). ACM, 295–304.
  • Treacy Solovey et al. (2015) Erin Treacy Solovey, Daniel Afergan, Evan M Peck, Samuel W Hincks, and Robert JK Jacob. 2015. Designing implicit interfaces for physiological computing: guidelines and lessons learned using fNIRS. ACM Transactions on Computer-Human Interaction (TOCHI) 21, 6 (2015), 35.
  • Wall et al. (2017) Emily Wall, Leslie M Blaha, Lyndsey Franklin, and Alex Endert. 2017. Warning, bias may occur: A proposed approach to detecting cognitive bias in interactive visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST).
  • Wexelblat and Maes (1999) Alan Wexelblat and Pattie Maes. 1999. Footprints: history-rich tools for information foraging. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. ACM, 270–277.
  • Wilkinson (2006) Leland Wilkinson. 2006. The grammar of graphics. Springer Science & Business Media.
  • Xiao et al. (2006) Ling Xiao, John Gerth, and Pat Hanrahan. 2006. Enhancing visual analysis of network traffic using a knowledge representation. In Proceedings of the IEEE Symposium On Visual Analytics Science And Technology (VAST). IEEE, 107–114.
  • Yi et al. (2007) Ji Soo Yi, Youn ah Kang, and John Stasko. 2007. Toward a deeper understanding of the role of interaction in information visualization. IEEE transactions on visualization and computer graphics 13, 6 (2007), 1224–1231.
  • Zhou and Feiner (1998) Michelle X Zhou and Steven K Feiner. 1998. Visual task characterization for automated visual discourse synthesis. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM Press/Addison-Wesley Publishing Co., 392–399.