Juxtaposing Controlled Empirical Studies in Visualization with Topic Developments in Psychology

Empirical studies form an integral part of visualization research. Not only can they facilitate the evaluation of various designs, techniques, systems, and practices in visualization, but they can also enable the discovery of the causal mechanisms explaining why and how visualization works. This state-of-the-art report focuses on controlled and semi-controlled empirical studies conducted in laboratories and crowd-sourcing environments. In particular, the survey provides a taxonomic analysis of 129 empirical studies in the visualization literature. It juxtaposes these studies with topic developments between 1978 and 2017 in psychology, where controlled empirical studies have played a predominant role in research. To help appreciate this broad context, the paper provides two detailed case studies, in which specific visualization-related topics were examined in the discipline of psychology as well as the field of visualization. Following a brief discussion of some of the latest developments in psychology, it outlines challenges and opportunities in making new discoveries about visualization through empirical studies.


1 Introduction

Empirical studies play a significant role in the field of visualization [1]. While they are often used to evaluate different visual designs, visualization techniques, and software systems [2], more and more studies have been designed to gain a fundamental understanding of why and how visualization works.

Empirical studies can take many forms, including controlled experiments, structured surveys and questionnaires, unstructured or free-text surveys, focus group discussions, think-aloud protocols, case studies, field observation, laboratory observation, interviews, games, log analysis, algorithmic performance measurement, quality metrics, and so on. The survey by Lam et al. [2], which focuses on the purpose of evaluation, provides a number of examples of these different forms of study.

A large number of empirical studies published as independent research papers in the visualization literature take the form of controlled experiments, conducted in controlled laboratory environments or semi-controlled crowd-sourcing environments. The reference section of this survey includes 129 references to such controlled and semi-controlled studies, providing a relatively comprehensive collection of these empirical studies. To the best of our knowledge, however, there are so far only two surveys on empirical studies in specific areas, namely glyph-based visualization [3] and geo-spatial visualization (cartography) [4]. There is also a brief overview and categorization of controlled experiments in [5].

Controlled experiments are the predominant research method in psychology, and there are hundreds of thousands of such studies in the psychology literature. In comparison, the controlled experiments in visualization are a drop in the ocean. One would naturally be interested in how the controlled experiments in visualization relate to those in psychology. Have some phenomena in visualization already been well studied in psychology? Does visualization present any new problems and hypotheses that are well worth the attention of both disciplines?

In this work, we conduct a comparative study to juxtapose the existing controlled empirical studies in visualization with topic developments in psychology. We aim to provide visualization researchers with a state-of-the-art report about such experiments in visualization and a temporal overview of the landscape in psychology. We aim to enable visualization researchers to relate the perceptual and cognitive phenomena in visualization to the existing developments in psychology through the use of visual analytics techniques, while informing psychology researchers about the unanswered perceptual and cognitive questions in visualization.

The main objective of this survey is therefore to fill a major gap in the literature. The visualization literature features many contributions in the form of survey reviews. Between 2002 and 2017, Computer Graphics Forum published nearly 40 state-of-the-art reports or survey papers on topics in visualization. In their 2017 survey of surveys, McNabb and Laramee selected and examined 86 surveys [6]. Among these, however, only a small number focus on human factors. These include reviews of evaluation studies on specific topics, such as eye tracking [7], mobile devices [8], parallel coordinates [9, 10], and streaming data [11]. Perhaps the most significant survey on empirical studies is the paper by Lam et al. [2], in which the authors examined a large collection of evaluation studies and categorized them into seven scenarios.

Furthermore, many empirical studies in visualization are of a discovery nature. For example, Borgo et al. and Correll et al. discovered humans’ capability for visual averaging in pixel-based visualization [12] and time series visualization [13]; Haroz and Whitney explored the capacity limits of attention in visualization [14]; Rensink and Baldridge, and Harrison et al. detected signals suggesting that humans’ perception of correlation may follow Weber’s law [15, 16]; Chung et al. studied the orderability of visual channels [17]; and Kijmongkolchai et al. measured the soft knowledge used in visualization [5]. In many ways, these discovery studies are similar to a huge volume of empirical studies in psychology, except that they focus on perceptual and cognitive phenomena in visualization.

In addition, while evaluation studies may inform us about which designs, techniques, systems, or work practices are more effective than others, they also indirectly feature questions and answers about perception and cognition. It is highly desirable to juxtapose both discovery and evaluation studies in visualization with the empirical studies in psychology.

In the remainder of this paper, we first briefly describe the methodology adopted to compile this survey in Section 2. This is followed by a summary view of the broad landscape of psychology, in Section 3, through the construction of a high-level taxonomy of psychology. This allows us to illustrate the process of taxonomy construction, while identifying some variables that may be used in categorizing empirical studies in visualization. We then consider in detail a taxonomy for empirical studies in visualization in Section 4, where we discuss various variables for categorization and reason about the options for ordering these variables in defining a taxonomy.

In Section 5 we give a brief overview of the history of the discipline, the commonly-used organization of subjects or themes in the discipline, the major schools of thought, and the popular research methods. This is followed by a topic analysis of two major journals in psychology and the development trends of over 30 keywords. In Section 6 we juxtapose the empirical studies in visualization with the topic developments in psychology. We use visualization to highlight the synergy between the two disciplines: topics where visualization researchers may potentially find many existing studies, topics that have significant impact on visualization but require new studies to address the complexity of visualization tasks, and topics that demand substantial new efforts from both disciplines.

This is followed by two case studies in Section 7, where, for each case study, we juxtapose empirical studies published in psychology journals and visualization journals (also including journals in other domains). This juxtapositional analysis allows us to observe the similar and different characteristics of empirical studies across the two domains, and appreciate that visualization-related empirical studies can not only help define a significant application area of psychology, but also provide opportunities to answer fundamental questions in visualization and to develop new computing techniques for supporting empirical studies.

In Section 8, we briefly describe several recent developments in psychology and discuss their relevance to visualization. Finally, in Section 9, we summarize the challenges and opportunities in conducting empirical studies in visualization. We point out the need for visualization researchers to be familiar with the landscape and historical developments in psychology, as well as the need to stimulate new hypotheses and new experiments based on phenomena and tasks in visualization. We emphasize the need for conducting such studies in the field of visualization as well as the need to collaborate with researchers in psychology.

2 Study Method

Ideally one would like to conduct a comprehensive and comparative survey of controlled empirical studies in visualization as well as in psychology. While it is feasible to conduct a traditional survey on controlled empirical studies in visualization, it would be an enormous challenge to attempt a traditional survey on those studies in psychology. There are hundreds of publication venues in psychology (e.g., 111 journals listed by Wikipedia). We therefore use two different approaches to survey the two disciplines respectively.

STEP A. For controlled experiments in visualization, we use “close reading” to study 129 research papers published in the visualization literature. We perform a taxonomic analysis of these papers by examining their categories under different classification schemes and by comparing different options in defining a taxonomic hierarchy for organizing these papers. This step is reported in detail in Section 4.

STEP B. For controlled experiments in psychology, we use “distant reading” to study the temporal evolution of topics in psychology. We first establish an initial list of topics by constructing a high-level taxonomy of psychology based on seven textbooks and some ten online resources. The psychology-trained co-authors lead the charting of the overall landscape of psychology and scrutinize the details of category labeling, while the computer-science-trained co-authors lead a systematic approach to identifying variables for categorization and proposing the ordering of these variables. This is reported in Section 3.

We then use text analysis to examine papers published between 1978 and 2017 (for 40 years) in two major journals of psychology: Behavioural and Brain Sciences and Psychological Review. We make use of software for topic analysis and visualization to handle a huge volume of data automatically. This algorithmic approach allows us to compile major statistical indicators about the topic developments in psychology. The psychology-trained co-authors in the team then analyze the results generated by the software, and make appropriate adjustments to the keywords and topics, which are used to rerun the algorithmic method. This combined human-machine process requires several iterations until the results offer a meaningful representation of the topic developments in psychology. This is to be reported in Section 5.

STEP C. STEP A and STEP B, which are conducted in parallel, are followed by an integrated study in this third step. The controlled empirical studies in visualization surveyed in STEP A are first tagged with keywords and topics identified in STEP B. We then examine the relations between the controlled empirical studies in visualization and the topics in psychology using various visualizations where the entities of the two disciplines are juxtaposed and interconnected, reported in Section 6. This facilitates further analysis of the challenges and opportunities in conducting empirical studies in visualization, reported in Section 9.

Fig. 1: A high-level taxonomy of psychology, constructed based on a number of psychology textbooks and a number of online resources on branches of psychology.

3 A High Level Taxonomy of Psychology

Psychology is a huge discipline. To the best of our knowledge, no influential taxonomy has been proposed in the literature to encompass most topics in psychology. One reason may be the sheer number of topics in the discipline. Another may be the difficulty for experts to reach a consensus. The latter reflects, to a large extent, the historical misconception that there is a single correct taxonomy.

In scientific and scholarly disciplines, a collection of concepts are commonly organized into a taxonomy, where concepts are known as “taxa” and are typically arranged hierarchically using a tree structure [18]. The main steps for building a taxonomy include:

  (a) mustering a collection of concepts (or entities);

  (b) identifying a list of candidate variables, each of which is typically a nominal or ordinal variable with a small number of valid values;

  (c) using each variable to categorize the concepts (or entities) into groups and observing comparatively the distributions of the concepts (or entities) resulting from the application of the different variables;

  (d) making the collection of all concepts (or entities) the root of the taxonomic tree;

  (e) selecting a principal variable and applying it to the current collection, where the valid values of the variable thus become the taxa;

  (f) considering each group of concepts under a taxon as a new collection of concepts (or entities) and repeating steps (c–f) until there is no more than one concept under a taxon or the taxonomic tree reaches a certain depth.

Step (b) and Step (e) are often the main sources of disagreement among experts as they involve subjective decisions as to what variables are to be considered and how these variables are selected or ordered. Any reasonable decision should yield a relatively useful taxonomy. There are many factors that may influence the judgment as to what can be considered a “reasonable decision”. For example, one factor can be the desire to depict the commonly-used or commonly-accepted grouping in the upper part of a taxonomic tree. Another factor can be the preference for a more balanced tree, i.e., at each level of the tree, the taxa encompass similarly-sized collections of concepts. Some may be in favour of a variable with a smaller number of valid values, which usually leads to a deeper tree. Others may wish to have a shallower and flatter tree and may thus prefer to use multivariate variables or univariate variables that have relatively larger numbers of valid values. In many cases, there is no easy way to weigh different factors objectively, which often results in unnecessary discord. Therefore, we emphasize here that our proposed taxonomy is just one of many reasonable taxonomies that could be constructed.
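To make the recursive structure of steps (c)–(f) concrete, the following minimal Python sketch builds a taxonomic tree from a set of entity records and a list of categorization variables. The entity records, the variable names, and the balance-based heuristic for selecting the principal variable are illustrative assumptions of ours, not part of any canonical procedure.

```python
from collections import defaultdict

def choose_principal_variable(entities, variables):
    """Illustrative heuristic for Step (e): pick the variable whose
    valid values split the current collection most evenly."""
    def imbalance(var):
        groups = defaultdict(list)
        for e in entities:
            groups[e[var]].append(e)
        sizes = [len(g) for g in groups.values()]
        return max(sizes) - min(sizes) if len(sizes) > 1 else float("inf")
    return min(variables, key=imbalance)

def build_taxonomy(entities, variables, max_depth=3):
    """Recursively partition entities into a taxonomic tree (Steps c-f)."""
    # Stop when at most one entity remains under a taxon, when no
    # variables are left, or when the tree reaches a certain depth.
    if len(entities) <= 1 or not variables or max_depth == 0:
        return entities
    principal = choose_principal_variable(entities, variables)
    remaining = [v for v in variables if v != principal]
    groups = defaultdict(list)
    for e in entities:
        groups[e[principal]].append(e)  # the valid values become the taxa
    return {taxon: build_taxonomy(group, remaining, max_depth - 1)
            for taxon, group in groups.items()}

# Hypothetical entities characterized by two nominal variables.
studies = [
    {"name": "study A", "purpose": "understanding", "knowledge": "pattern"},
    {"name": "study B", "purpose": "evaluation", "knowledge": "statistics"},
    {"name": "study C", "purpose": "evaluation", "knowledge": "pattern"},
]
print(build_taxonomy(studies, ["purpose", "knowledge"]))
```

Substituting a different heuristic in `choose_principal_variable` corresponds to the different preferences discussed above (balanced trees, fewer valid values per variable, and so on), which is precisely why several reasonable taxonomies can be built from the same collection.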

Figure 1 shows a high-level taxonomy of psychology constructed using the above process. We carefully assembled a collection of the major concepts based on chapter and section titles in seven textbooks in psychology [19, 20, 21, 22, 23, 24, 25], and some 10 lists of major branches of psychology, mostly on the web [26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. After considering a set of variables, we found that the first three variables as shown in Figure 1 could be used to separate all branches of applied psychology as well as major branches such as Social Psychology, Comparative Psychology, Biological Psychology, Developmental Psychology, and Evolutionary Psychology. These branches became taxa, each of which heads a sub-taxonomy. We then used the 5th variable “Means-to-Ends” to separate research methods from groups of behaviours to be studied.

For the latter, we introduced the 7th variable with eight functional categories, namely sensing, storing, learning, thinking, motivating, feeling, externalizing, and deviating. Among these, sensing leads to a sub-taxonomy that includes major branches such as Sensory Processes, Perception, Attention, and Consciousness, while thinking leads to a few major branches and significant topics in psychology. Meanwhile the other values of the 7th variable intrinsically correspond to other major branches such as Memory, Learning, Motivation, Emotion, Personality Psychology, and Abnormal Psychology.

While all taxa at the bottom of the taxonomic tree in Figure 1 can be further divided, this high-level taxonomy is adequate to be used as a reference dimension for characterizing empirical studies in visualization in the next section.

4 Taxonomy of Controlled Empirical Studies in Visualization

Building on the references collected by Lam et al. [2], Kijmongkolchai et al. [5], Fuchs et al. [3], and Roth et al. [4], we identified a total of 129 papers on controlled experiments published in the visualization literature. We focused on controlled experiments published as independent research papers, mainly because locating small controlled experiments that are components of design study papers or application papers is not a trivial undertaking. This collection of references naturally becomes the collection of entities required by Step (a) in the process of building a taxonomy (Section 3).

4.1 Variables for Categorization

Naturally, all variables shown in Figure 1 can potentially be used to categorize empirical studies in visualization. Here we list them formally as a subset of candidate variables:

  • Study Objectives: fundamental understanding, practical application.

  • Organizational Levels: constituent (micro), individual (mezzo), population (macro), species (external).

  • Order of Temporal Differentiation: moments/periods, years, generations.

  • Application Areas: most branches of applied psychology are also application areas of visualization. In addition, there are other application areas of visualization (e.g., computational fluid dynamics) that are not specifically considered as branches of psychology.

  • Means-to-Ends of Psychology Studies: research method, group of behaviors.

  • Functional Categories of Research Methods: fundamental understanding, practical application.

  • Functional Categories of Behaviors: sensing, storing, learning, thinking, motivating, feeling, externalizing, deviating.

  • Functional Categories of Sensing: sensory processes, perception, attention, consciousness.

  • Means-to-Ends of Thinking: factor, behavior.

  • Functional Categories of Factors in Thinking: knowledge, language, belief, moral, ...

  • Functional Categories of Thinking Behaviors: reasoning, decision-making, problem-solving, predicting and anticipating, imagining, ...

  • Functional Categories of Sensory Processes: visual, audio, somatic, ..., para-psychological.

Fig. 2: A categorization scheme for characterizing different visualization tasks, resulting from the integration of the task taxonomies proposed by Wehrend and Lewis [36], Zhou and Feiner [37], Amar et al. [38], Valiati et al. [39], and Pretorius et al. [40].

Meanwhile, in the field of visualization, we often consider other variables that are not included in the above list. Some of these variables may be specific for visualization, while others may belong to the subtrees defined under some leaf taxa as in Figure 1.

Lam et al. [2] considered “scenario” as a variable that distinguishes a wide range of study methods, including controlled experiments, surveys and questionnaires, focus group discussions, think-aloud protocols, case studies, field observation, laboratory observation, interviews, games, log analysis, algorithmic performance measurement, and quality metrics. This is indeed a multivariate variable that can be decomposed into several variables, such as:

  • Study Platforms: laboratory, internet, in the wild.

  • Types of Intervention: observation, interview, focus group discussion, stimuli-and-responses.

  • Study Types: exploratory studies, assessment studies, manipulation experiments, observation experiments [41].

  • Types of Collected Data: free text, structured survey results, structured behavior data, unstructured behavior data, cognitive activity data.

Kijmongkolchai et al. [5] considered two variables in their two-dimensional categorization. The first variable broadly divides all empirical studies into two groups: (i) those designed to gain new understanding about perception and cognition in the context of visualization, and (ii) those designed to evaluate and compare visualization designs, algorithms, techniques, and systems. The second variable examines the different types of knowledge required to perform the tasks featured in the studies, namely (i) context, (ii) pattern, and (iii) statistics. The varying knowledge about contexts is expected to have significant impact when there are variations of the underlying data spaces, e.g., between different applications. This also includes context changes induced by algorithms and interactions, which result in changes to participants’ attention to different parts of a data space. The varying knowledge about patterns is expected to have significant impact when there are changes of the visual representations of the same data. These studies are typically used to examine participants’ performance in observing relatively complex information (e.g., features, patterns, events, etc.) rather than individual numbers. The varying ability to perceive statistical information is expected to have significant impact when there are changes to the visual representations related to specific numbers and statistical measures. These studies are typically used for examining participants’ performance in determining the values of individual measures through visualization. These two variables are summarized below:

  • Purposes of Studies: fundamental understanding, technical evaluation.

  • Scopes of Knowledge: context, pattern, statistics.

There are many other variables, for example:

  • Design-Analysis Strategies: between-subjects, within-subjects, mixed, neither.

  • Metrics and Measures: error, time, confidence, attention, motion, spatial ability, interpretation, comprehension, learning, ...

One important variable is the variation of visualization tasks, which has been studied extensively. A number of task taxonomies have been proposed (e.g., [42, 43, 44, 45, 46, 47, 48, 49, 50]). For this survey, we make use of the proposals by Wehrend and Lewis [36], Zhou and Feiner [37], Amar et al. [38], Valiati et al. [39], and Pretorius et al. [40]. As illustrated in Figure 2, visualization tasks can be broadly divided into three categories, namely operational, analytical, and cognitive tasks. We found Andrienko and Andrienko’s task taxonomy to provide a useful conceptual framework for differentiating between spatial and temporal data [51]. The taxonomy features several levels and we could not find an easy way to integrate it with others.

  • Task Groups: operational, analytical, cognitive.

  • Functional Categories of Operational Tasks: configure, show.

  • Functional Categories of Analytical Tasks: retrieve, identify, determine, locate, filter, cluster, categorize, compare, rank, sort, correlate, associate.

  • Functional Categories of Cognitive Tasks: reveal, emphasize, switch/replace, connect, infer, generalize, learn.
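As a concrete illustration of how these multi-valued variables jointly characterize a single study, the sketch below encodes one row of Table I as a small Python record. The field names and structure are our own illustrative choice for this survey’s categorization; they are not a format used by any of the cited papers.

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """One categorized empirical study (cf. Tables I-III)."""
    paper: str
    purpose: set = field(default_factory=set)    # Purposes of Studies
    knowledge: set = field(default_factory=set)  # Scopes of Knowledge
    tasks: set = field(default_factory=set)      # Visualization Tasks
    behaviors: set = field(default_factory=set)  # Functional Categories of Behaviors

# The row for Borgo et al. [12] in Table I, transcribed as a record.
borgo_et_al = StudyRecord(
    paper="Borgo et al. [12]",
    purpose={"understanding", "evaluation"},     # labeled "Both" in the table
    knowledge={"pattern", "statistics"},
    tasks={"retrieve", "determine", "compare"},
    behaviors={"sensing", "thinking"},
)
```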

4.2 Selecting Variables

Having obtained a candidate list of variables, we analyze their usefulness for categorizing the controlled empirical studies that we have collected. It is obvious that all these empirical studies were carried out in the context of visualization, and most (if not all) do not fall into individual branches of applied psychology in Figure 1. Of course, it is reasonable to suggest that these studies can be considered as applied psychology, and one may consider creating a new branch called Visualization Psychology. Nevertheless, there is no reason to choose Study Objectives or Application Areas, since neither can divide the collection of studies further.

Similarly, since hardly any empirical studies in visualization can be considered as part of Social Psychology, Comparative Psychology, Biological Psychology, Developmental Psychology, or Evolutionary Psychology, we can also eliminate Organizational Levels and Order of Temporal Differentiation. As almost none of the papers in the collection are investigations into different research methods in psychology, there is also no reason to choose Means-to-Ends of Psychology Studies or Functional Categories of Research Methods.

Because this survey focuses on controlled experiments, we do not expect any papers in the collection to fall into the category in the wild in terms of Study Platforms; observation, interview, and focus group discussion in terms of Types of Intervention; exploratory studies, assessment studies, and observation experiments in terms of Study Types; or free text and structured survey results in terms of Types of Collected Data. On the other hand, Study Platforms can characterize one of the main differences between controlled laboratory studies and semi-controlled crowd-sourcing studies, while Types of Collected Data can be used to distinguish those studies collecting structured behavior data (e.g., many typical accuracy-response studies) from those collecting unstructured behavior data (e.g., eye-tracking studies) and those collecting cognitive activity data (e.g., electroencephalography-based studies).

As shown in [5], Purposes of Studies and Scopes of Knowledge can adequately divide the collection of controlled experiments into similarly-sized groups. We expect Design-Analysis Strategies and Metrics and Measures to also have reasonable discriminative capacity.

The four variables for characterizing visualization tasks (Task Groups and the Functional Categories of Operational, Analytical, and Cognitive Tasks) are no doubt important, as specific visualization tasks are explicitly defined in most studies, especially those intended for technical evaluation. It is highly desirable to include these in a taxonomy for empirical studies in visualization.

Tables I, II, and III compare the categorizations using Purposes of Studies, Scopes of Knowledge, Visualization Tasks, and Functional Categories of Behaviors. These result from Step (c) in the taxonomy construction process described in Section 3.

Paper Purpose Knowledge Visualization Tasks Behaviors
Adnan et al. [52] Evaluation Pattern, Statistics Identify, Compare, Infer Thinking
Aigner et al. [53] Evaluation Statistics Retrieve, Compare, Identify Sensing
Aigner et al. [54] Evaluation Pattern 12 Tasks Thinking
Albers et al. [55] Evaluation Statistics Retrieve Sensing
Albo et al. [56] Evaluation Pattern, Statistics 10 Tasks Sensing, Thinking
Alexander et al. [57] Both Pattern, Statistics Retrieve Sensing
Anderson et al. [58] Understanding Statistics Compare Sensing
Bae & Watson [59] Understanding Context, Pattern Show Learning
Beecham et al. [60] Both Pattern Compare Sensing
Bezerianos & Isenberg [61] Understanding Pattern Retrieve Sensing
Borgo et al. [12] Both Pattern, Statistics Retrieve, Determine, Compare Sensing, Thinking
Borgo et al. [62] Understanding Context, Pattern, Statistics Retrieve, Identify Thinking, Storing
Borgo et al. [63] Understanding Pattern, Statistics Retrieve Sensing
Borkin et al. [64] Understanding Pattern Retrieve, Identify Storing
Borkin et al. [65] Understanding Context Retrieve, Identify Storing
Boukhelifa et al. [66] Evaluation Pattern Identify Sensing
Boy et al. [67] Evaluation Context, Pattern Infer Thinking
Boyandin et al. [68] Evaluation Context, Pattern Infer, Configure Thinking
Brandes et al. [69] Understanding Pattern Compare, Determine Thinking
Bresciani & Eppler [70] Evaluation Context Configure, Determine Thinking, Storing, Externalizing
Burch et al. [71] Evaluation Pattern, Statistics Compare Sensing, Thinking
Cai et al. [72] Understanding Statistics Retrieve Sensing
Chen et al. [73] Evaluation Pattern Determine Thinking
Chevalier et al. [74] Evaluation Pattern Locate, Connect Thinking
Chung et al. [17] Understanding Statistics Sort, Rank Thinking
Cleveland & McGill [75] Understanding Statistics Retrieve, Compare Sensing
Correll et al. [13] Understanding Statistics Retrieve, Compare Sensing
Correll et al. [76] Both Pattern, Statistics Retrieve Sensing
Correll & Gleicher [77] Evaluation Pattern, Statistics Identify, Infer Thinking
Correll & Heer [78] Both Statistics Retrieve Sensing
Correll et al. [79] Evaluation Pattern, Statistics Identify Sensing
Dasgupta et al. [80] Understanding Context Locate, Determine Thinking
Demiralp et al. [81] Understanding Statistics Compare, Sort Sensing
Diehl et al. [82] Understanding Pattern Locate Storing
Dimara et al. [83] Evaluation Pattern, Statistics Retrieve Sensing
Dimara et al. [84] Understanding Pattern Compare Motivating
Etemadpour et al. [85] Evaluation Context, Pattern, Statistics Rank, Determine, Cluster Thinking
Felix et al. [86] Understanding Pattern, Statistics Compare, Retrieve, Determine Sensing, Thinking
Fink et al. [87] Understanding Pattern Retrieve Sensing
Fuchs et al. [88] Evaluation Pattern Compare, Retrieve Sensing, Thinking
Ghani et al. [89] Evaluation Pattern Locate Storing
Gleicher et al. [90] Understanding Statistics Retrieve Sensing
Gramazio et al. [91] Evaluation Pattern, Statistics Compare Sensing
Gramazio et al. [92] Understanding Pattern, Statistics Identify Sensing
Griffin & Robinson [93] Evaluation Pattern Locate, Connect, Associate Thinking
Gschwandtner et al. [94] Evaluation Pattern Identify Sensing
Guo et al. [95] Understanding Pattern, Statistics Identify, Compare Thinking
TABLE I: A collection of visualization-related empirical studies in this survey, categorized by Purposes of Studies, Scopes of Knowledge, Visualization Tasks, and Functional Categories of Behaviors.
Paper Purpose Knowledge Visualization Tasks Behaviors
Haroz & Whitney [14] Understanding Pattern Identify, Compare, Determine Sensing, Thinking
Haroz et al. [96] Understanding Pattern, Statistics Retrieve Storing
Haroz et al. [97] Evaluation Pattern Infer Thinking
Harrison et al. [16] Understanding Pattern, Statistics Retrieve, Compare Sensing
Heer & Bostock [98] Both Statistics Retrieve, Compare Sensing
Heer et al. [99] Both Statistics Retrieve Sensing
Höferlin et al. [100] Evaluation Context, Pattern Locate Thinking
Hofmann et al. [101] Evaluation Pattern Compare, Identify Thinking
Huron et al. [102] Evaluation Context Configure Thinking, Externalizing
Isenberg et al. [103] Evaluation Statistics Retrieve Sensing
Jakobsen & Hornbæk [104] Understanding Context Locate, Connect, Infer Thinking
Jakobsen et al. [105] Evaluation Context Retrieve, Locate, Connect Thinking
Jansen & Hornbæk [106] Understanding Statistics Retrieve Sensing
Javed et al. [107] Evaluation Statistics Retrieve, Compare Sensing
Kanjanabose et al. [108] Understanding Pattern, Statistics Retrieve, Identify, Cluster Sensing, Thinking
Kersten-Oertel et al. [109] Evaluation Pattern Compare Sensing
Kijmongkolchai et al. [5] Both Context, Pattern, Statistics Determine, Infer Storing, Thinking
Kim et al. [110] Evaluation Pattern Compare, Infer Thinking
Kim & Heer [111] Evaluation Context, Statistics Compare, Retrieve Sensing
Kuang et al. [112] Understanding Pattern, Statistics Retrieve Sensing
Kurzhals et al. [113] Evaluation Pattern Identify Sensing, Thinking
Kwon et al. [114] Evaluation Pattern Identify, Connect, Compare Thinking, Storing
Laidlaw et al. [115] Evaluation Pattern Identify, Determine Thinking
Li et al. [116] Understanding Pattern, Statistics Retrieve Sensing
Liccardi et al. [117] Understanding Context, Pattern Infer Thinking
Lin et al. [118] Understanding Context Compare, Infer Thinking
Lind & Bruckner [119] Evaluation Context, Pattern Identify, Compare Sensing
Livingston & Decker [120] Evaluation Pattern Compare Sensing
Livingston et al. [121] Evaluation Pattern Determine Thinking
MacEachren et al. [122] Evaluation Pattern Identify Sensing
Marriott et al. [123] Both Pattern Learn Storing
Mazurek & Waldner [124] Evaluation Context Determine Thinking
Micallef et al. [125] Understanding Context, Pattern Retrieve, Infer Storing, Thinking
Mittelstädt & Keim [126] Understanding Context Retrieve Sensing
Morris et al. [127] Understanding Statistics Retrieve Sensing
Netzel et al. [128] Evaluation Pattern Compare, Determine Thinking
Netzel et al. [129] Evaluation Pattern Locate Thinking
Nowell et al. [130] Evaluation Pattern Identify Sensing
Ondov et al. [131] Evaluation Pattern Compare Storing, Sensing
Ottley et al. [132] Understanding Context, Statistics Infer Thinking
Padilla et al. [133] Evaluation Pattern, Statistics Locate, Identify, Compare Thinking
Pandey et al. [134] Understanding Pattern, Statistics Compare Sensing, Thinking
Pandey et al. [135] Understanding Pattern Compare, Cluster, Configure, Determine, Associate Thinking
Poupyrev et al. [136] Evaluation Pattern Locate Sensing, Externalizing
Ragan et al. [137] Evaluation Pattern Identify, Determine Thinking
Rensink & Baldridge [15] Understanding Statistics Retrieve Sensing
Ryan et al [138] Understanding Pattern, Statistics Determine Sensing
TABLE II: A collection of visualization-related empirical studies in this survey, categorized by Purposes of Studies, Scopes of Knowledge, Visualization Tasks, and Functional Categories of Behaviors.
Paper Purpose Knowledge Visualization Tasks Behaviors
Saket et al. [139] Understanding Context, Statistics Identify, Determine, Correlate, Retrieve, Compare, Filter, Rank Sensing, Thinking
Saket et al. [140] Evaluation Pattern Identify Sensing
Saket et al. [141] Evaluation Pattern Locate, Retrieve Thinking, Storing
Saket et al. [142] Both Pattern Locate Sensing, Feeling
Sarvghad et al. [143] Understanding Context Identify, Infer Thinking
Schloss et al. [144] Understanding Pattern, Statistics Retrieve Sensing
Sher et al. [145] Understanding Pattern, Statistics Retrieve Sensing
Skau et al. [146] Understanding Pattern, Statistics Retrieve Sensing
Skau & Kosara [147] Evaluation Pattern, Statistics Retrieve Sensing
Song & Szafir [148] Both Pattern, Statistics Determine Sensing
Srinivasan et al. [149] Evaluation Statistics Compare, Identify Sensing
Strobelt et al. [150] Evaluation Pattern Locate Sensing
Talbot et al. [151] Understanding Statistics Retrieve Sensing
Szafir [152] Understanding Pattern Retrieve Sensing
Szafir et al. [153] Understanding Pattern Compare Sensing
Talbot et al [154] Understanding Statistics Retrieve Sensing
Talbot et al. [155] Evaluation Pattern, Statistics Compare Thinking
Tanahashi et al. [156] Evaluation Context, Pattern Learn Learning
Tory [157] Evaluation Pattern Identify, Locate Thinking
Vande Moere et al. [158] Understanding Context, Pattern Identify Thinking
Volante et al. [159] Evaluation Pattern Determine Feeling, Thinking
Wagner Filho et al. [160] Evaluation Statistics, Pattern Locate, Compare, Retrieve, Identify, Filter Sensing, Externalizing
Walker et al. [161] Evaluation Context, Pattern Locate, Compare Thinking
Wang et al. [162] Evaluation Statistics, Pattern Compare Sensing
Ware [163] Understanding Statistics Retrieve Sensing
Wu et al. [164] Evaluation Context Infer, Determine Thinking
Wun et al. [165] Evaluation Pattern Configure Thinking
Xu et al. [166] Evaluation Pattern Identify Sensing
Yang et al. [167] Evaluation Context, Pattern, Statistics Locate, Determine Thinking
Yang et al. [168] Evaluation Statistics, Pattern Compare, Retrieve Sensing
Yost & North [169] Understanding Statistics, Pattern Identify, Compare, Determine Sensing
Zhao et al [170] Evaluation Statistics Retrieve, Compare Sensing
Zhao et al. [171] Evaluation Context Compare, Determine Thinking
Zhao et al. [172] Evaluation Statistics, Pattern Retrieve, Compare, Correlate Sensing, Thinking
Zheng et al. [173] Evaluation Pattern Determine Sensing
Ziemkiewicz et al. [174] Both Pattern Locate Thinking
TABLE III: A collection of visualization-related empirical studies in this survey, categorized by Purposes of Studies, Scopes of Knowledge, Visualization Tasks, and Functional Categories of Behaviors.

From these three tables, we can obtain the following statistics about the numbers of occurrences of different category labels (a sketch showing how such tallies can be computed follows the list):


  • Purposes of Studies: fundamental understanding (62), technical evaluation (79).

  • Scopes of Knowledge: context (29), pattern (93), statistics (61).

  • Visualization Tasks: configure (5), show (1); retrieve (48), identify (31), determine (24), locate (19), filter (2), cluster (3), categorize (0), compare (45), rank (3), sort (2), correlate (2), associate (2); reveal (0), emphasize (0), switch/replace (0), connect (5), infer (15), generalize (0), learn (2).

  • Functional Categories of Behaviors: sensing (72), storing (13), learning (2), thinking (59), motivating (1), feeling (2), externalizing (4), deviating (0).
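Counts such as those above can be reproduced mechanically from records like the hypothetical `StudyRecord` sketched in Section 4.1. The helper below tallies how often each label appears for one variable, counting a paper once per label it carries (which is why the totals can exceed 129).

```python
from collections import Counter

def tally_labels(records, variable):
    """Count label occurrences for one categorization variable,
    where a paper may carry several labels (e.g., 'Both')."""
    counts = Counter()
    for record in records:
        counts.update(getattr(record, variable))
    return counts

# For example, tally_labels(all_records, "tasks") would yield something like
# Counter({'retrieve': 48, 'compare': 45, 'identify': 31, ...})
# when applied to the 129 categorized papers.
```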

It is necessary to note that the above numbers of occurrences should be considered as crude approximations, because the assignment of various labels can be subjective. In particular, the differences between visualization tasks can be quite subtle, and their classification can thus be ambiguous and imprecise. For example, the actions of visually identifying, determining, or locating something can be quite similar; the actions of ranking and sorting can be highly related; and the action of inferring can easily involve many other actions such as filtering and associating. When we labeled each paper in the collection, we tried to focus on the tasks that were explicitly stated in the paper and to avoid introducing additional labels that were not intentionally investigated by the study concerned.

Fig. 3: A taxonomy for empirical studies in visualization, accommodating controlled, semi-controlled, and uncontrolled studies.

4.3 Examples of Classifying Empirical Studies

In this subsection, we describe several examples to illustrate how we labeled each paper in the collection using some of the variables described in Section 4.1. We adopted close reading to perform the labeling.

The main objective of the work by Borgo et al. [12] is to explore the effects of visual embellishments on memorization, visual search, and concept comprehension. The purpose of the study was therefore classified as understanding. The study required participants to perform two tasks in parallel. The first corresponded to the primary task, e.g., stimuli exploration. The primary task was subdivided into four main sections, each probing a different aspect of the exploratory process with respect to memorization (long-term and working memory), visual search, and concept grasping, hence retrieve and identify for visualization tasks. The stimuli used throughout the experiment were statistical representations of data as 2D histograms/bar charts and bubble charts, in both numerical and metaphorical form. Participants were asked to identify and remember pattern, statistics, and context, hence the three categories chosen for knowledge. The secondary task was performed in parallel with the primary, acting as a distractor to mimic real-life situations where the focus of attention is continually challenged by the surrounding environment. The study setting of two orthogonal tasks run in parallel explored user behavior with respect to thinking and storing information in stressful situations.

Chung et al. [17] propose two empirical studies exploring human perception of the orderability of the visual channels value, size, hue, texture, orientation, shape, and numerical representation. The purpose of the study is therefore understanding. The study analyzes orderability according to two main criteria: perceived orderedness (e.g., sorting), and value estimation, e.g., whether a target element has the smallest value, the largest value, or neither (e.g., ranking). The two criteria were translated into corresponding tasks, hence the choice of sort and rank for visualization tasks. Based on the nature of the tasks, the behavior was classified as thinking, while the knowledge measure was classified as statistics. The latter is due to the estimation of a proportion as the cognitive aspect of the tasks.

Dimara et al. [84] explore the “attraction effect” in visual layout. The attraction effect is defined as a cognitive bias in decision making whereby the choice between two alternatives is influenced by the presence of an irrelevant but stronger third alternative. The purpose of the study is again understanding, since the authors focus on understanding the nature of the bias. A decision-making task was at the core of the study: participants were asked to compare alternatives, which included both decoys and distractors, and to choose the most appropriate one, which led us to categorize the visualization task as compare. The behavior under study is motivating, since participants’ decisions were motivated by the diverse appealing qualities of the various alternatives. The authors controlled the appeal of decoys and distractors following pre-defined patterns from decision-making research, trying to determine whether such patterns would lead to similar bias effects in information visualization; we therefore categorized knowledge as pattern.

School of Thought Influential Figure Main Belief or Emphasis
Associationism J. R. Angell (1869–1949) Mental connections between events and ideas
Behaviourism I. Pavlov (1849–1936) Study of observable emitted behaviors
Cognitivism J. Piaget (1896–1980) Understanding how people think as state transitions
Constructivism J. Dewey (1859–1952) The mind actively gives meaning and order to the reality
Empiricism J. Locke (1632–1704) All knowledge is derived from experience
Functionalism H. Ebbinghaus (1850–1909) Mental operations and practical use of consciousness
Gestaltism M. Wertheimer (1880–1943) Study of holistic concepts, not merely as sums of parts
Nativism I. Kant (1724–1804) Certain skills/abilities are hard-wired into the brain at birth
Pragmatism W. James (1842–1910) Knowledge is validated by its usefulness
Structuralism E. Titchener (1867–1927) Analysis of consciousness into constituent components
TABLE IV: Commonly mentioned schools of thought in psychology [175, 176].

In Pandey et al. [135], the authors present a study exploring how humans judge scatter plot similarity when scatter plots are presented as sets of icons. The study objectives were twofold: to understand the most dominant features of a scatter plot that influence human perception of similarity across scatter plots, and to measure the correlation between perceived similarity and existing state-of-the-art measures such as graph-theoretic scagnostics. We therefore classified the purpose as understanding. The study’s main task asked participants to group scatter plots according to perceived similarity. Participants were presented with large collections of scatter plots and were free to attempt several grouping options before finalizing their answer. The participants’ degree of freedom in interacting with the visualization and the type of perceptual judgment required by the task led us to classify the visualization tasks as compare, cluster, configure, determine, and associate. The nature of the task also implied that the measured participant behavior was thinking. The data collected by the study reflected the patterns participants perceived within the scatter plot display; we therefore classified knowledge as pattern.

The work of Tanahashi et al. [156] sets itself apart. The authors designed a study aimed at evaluating different design options employed in the creation of online guides, and the effectiveness of such guides in educating novice users in the use of information visualizations. We therefore classified the study purpose as evaluation. The authors measured the knowledge acquired by participants via comprehension tests, hence learning as the measured behavior and learn as the visualization task. The comprehension tests required participants to answer multiple-choice questions, each contextual to the type of visualization and data shown. The questions required participants to interpret the visualization. We therefore categorized knowledge as context and pattern.

4.4 A Taxonomy of Empirical Studies in Visualization

In order to build a general taxonomy for all empirical studies in visualization, it would be desirable to make use of (Purposes of Studies) in a way similar to the high-level taxonomy of psychology in Figure 1. It would also be desirable to use some combination of (Study Platforms), (Types of Intervention), (Study Types), and (Types of Collected Data) to separate, for instance, laboratory-based studies from crowd-sourcing studies, and eye-tracking studies from transcripts of focus group discussions. It would also be highly desirable to include a sub-taxonomy for visualization, such as the one shown in Figure 2. Last but not least, it will be important to make connections with psychology by making use of a sub-taxonomy as in Figure 1.

Figure 3 shows one possible option for such a taxonomy. As discussed in Section 3, this is not necessarily the “correct” or “best” taxonomy; indeed, such labels are unhelpful. Because any appropriate selection of variables and reasonable ordering of the selected variables would likely result in a useful taxonomy, we believe that the taxonomy in Figure 3 is one such reasonable taxonomy.

5 Topic Developments in Psychology

In this section, we first give a brief overview of the history of the discipline, the major schools of thought, and the popular research methods. We then describe the technical process of conducting our survey of topic developments in psychology. Finally, we present the results of a computer-assisted survey of two major journals in psychology between 1978 and 2017.

5.1 Overview

Psychology is the scientific study of behavior and mental processes. The earliest interest in the human mind can be traced back to around 1500 years before the common era (BCE). The development of this interest has continued ever since.

The 16th century saw the beginning of western psychology, which attracted many great thinkers and practitioners at that time and in the following centuries, such as German philosopher Rudolf Göckel (1547–1628), French philosopher, mathematician, and scientist René Descartes (1596–1650), English doctor Thomas Willis (1621–1675), English philosopher and physician John Locke (1632–1704), Irish philosopher George Berkeley (1685–1753), Scottish philosopher, historian, and economist David Hume (1711–1776), and many more [177, 178].

The 19th century saw the emergence of psychology as an independent discipline rising from a branch of philosophy. Today, psychology is one of the most popular academic disciplines and is studied in the majority of universities around the world. Figure 1 outlines the broad landscape of contemporary psychology.

Perhaps because of the heritage of philosophy, there have been many schools of thought in psychology [175, 176]. The rise and fall of these schools of thought often signifies a paradigm shift in the discipline. Table IV summarizes some major schools of thought.

In psychology, there are different forms of psychological enquiry, such as controlled experiments, correlational studies, naturalistic observation, case studies, interviews, discourse analysis, and personal reflections [21]. There have been many proposed conceptual models (often referred to as theories), which were usually informed by empirical studies, and many empirical studies were designed to test such models. There have also been many attempts to describe the causal relations in human behaviors and cognition using computational models.

Fig. 4: Major keywords in Behavioural and Brain Sciences between 1978 and 2017. Keyword occurrences are counted in blocks of five years to increase the number of papers sampled at each point. The ThemeRiver software interpolates the thickness between each pair of consecutive data points.

5.2 The Process of Topic Analysis

In this work, we used three different types of visualization to study the topic developments in psychology, namely tag clouds, temporal tag clouds, and ThemeRiver. We used Voyant Tools (https://voyant-tools.org/) for the text analysis. Voyant Tools is a web-based application that allows us to upload, process, and generate data for text analysis (text mining) and visualization; files can be uploaded in PDF or plain-text format. Voyant Tools processes the text, automatically generates tag clouds and graphs, and converts data into downloadable CSV or JSON format. Within Voyant Tools, common stop words, such as “the”, “an”, etc., are automatically filtered. Voyant Tools is widely used in the digital humanities field and has a wide user base (http://hermeneuti.ca/VoyantFacts).

In the first phase, we identified a number of high-impact journals in psychology. We chose three journals, Annual Review of Psychology, Perception, and Vision Research, to test the effectiveness of different parts of the text in journal papers. For each of the three journals, we selected five articles from the same period. We carried out the text analysis for (i) full PDFs and (ii) Title+Abstract+Keywords (T-A-K), and generated a tag cloud for each journal (i.e., each group of five papers). The psychology-trained co-authors reviewed and discussed the generated visualizations, and identified the need for, and a method of, filtering and combining words. We also found that it was not easy to compare different tag clouds, though comparing words within a tag cloud was effective.

After implementing the suggestions from the psychology-trained co-authors in the text analysis process, we were able to generate informative tag clouds with minimal noise. We found that, in most cases, using text from the Title+Abstract+Keywords was better than using full PDFs for producing visualizations that depict the main topics in each group of five papers.

In the second phase, we focused on two journals, Behavioral and Brain Sciences and Psychological Review. For the former, we extracted only papers labeled as Target, Research, or Main articles. For the latter, we included all articles except errata. As Psychological Review does not have keywords for many articles, for consistency, we extracted Title+Abstract and Authors for articles in both journals. We processed all issues of the two journals in the 40-year period 1978–2017.

Using Voyant Tools, we generated a list of the most popular terms from the Title+Abstract texts. These were then downloaded and saved as spreadsheets. The psychology-trained co-authors inspected these spreadsheets carefully, highlighted the terms that are most relevant to the field of psychology, and suggested words to be grouped together.

We focused on the highlighted terms and grouped together words with similar semantics (e.g., plural and singular forms, British and American spellings, present participles, and so on). Using the refined data, we tested temporal tag cloud and ThemeRiver visualizations. We found the ThemeRiver visualization more effective in depicting topic development over a period. The temporal tag clouds and ThemeRiver were generated using D3.js.
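The grouping and binning just described can be expressed compactly. The sketch below assumes a hypothetical list of `(year, text)` pairs (one per article, holding the Title+Abstract text) and a hand-built `synonyms` table for merging variants; it counts normalized keyword occurrences in five-year blocks, producing the data points between which the ThemeRiver visualization interpolates.

```python
import re
from collections import Counter, defaultdict

# Hand-built merge table (illustrative): variant -> canonical term.
synonyms = {"models": "model", "modelling": "model",
            "behaviour": "behavior", "behaviors": "behavior"}
stop_words = {"the", "an", "a", "of", "and", "in", "to", "is"}  # abridged

def keyword_counts_by_block(records, start=1978, block=5):
    """records: iterable of (year, title_plus_abstract) pairs.
    Returns {block_start_year: Counter of canonical terms}."""
    blocks = defaultdict(Counter)
    for year, text in records:
        block_start = start + ((year - start) // block) * block
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in stop_words:
                blocks[block_start][synonyms.get(word, word)] += 1
    return blocks

# Example: counts = keyword_counts_by_block(corpus)
# counts[1993].most_common(25) gives the top terms for 1993-1997,
# i.e., one of the eight data points per keyword in Figures 4 and 5.
```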

Fig. 5: Major keywords in Psychological Review between 1978 and 2017. Keyword occurrences are counted in blocks of five years to increase the number of papers sampled at each point. The ThemeRiver software interpolates the thickness between each pair of consecutive data points.

5.3 Topics in Psychology between 1978 and 2017

Figure 4 shows a ThemeRiver visualization that depicts the changes of the top 25 keywords in Behavioural and Brain Sciences between 1978 and 2017. Keyword occurrences are counted in blocks of five years to increase the number of papers sampled at each point. This results in eight data points per keyword, for 1978–1982, 1983–1987, ..., 2008–2012, and 2013–2017. The ThemeRiver software interpolates the thickness between each pair of consecutive data points. From this visualization, we can make the following observations:


  • In this journal, many uses of the term behavior are associated with actual and specific behaviors, for which some behavioral outcomes are observed or detected under the conditions being studied. The use of the term does not necessarily suggest behaviorism (conditioning), though it may sometimes be used in that context. The steady decline in the use of this term may be due to a number of factors. For example, animal studies (e.g., rat studies) have been increasingly recognized as unethical, leading to fewer studies of animal behaviors. Another reason is that many authors have chosen to replace this term with more generally accepted terms, such as “cognition”, “decision making”, “judgments”, and so on, which place less emphasis on the behavior itself.

  • There is a decline in the use of the term theory. This is probably because the journal has gradually placed less emphasis on general theory building and more on general reviews.

  • The term model also exhibits a decline, likely for the same reason as the term theory.

  • For the term perception, there seems to be a spike in the period of 1998–2002, possibly due to the popularity of the subject area at that time, especially in areas of Applied Psychology. Note that there is a spike in the total number of papers in the journal during that period (about 25% more than the previous and following half decades). Nevertheless, the spike is still significant even after taking this fact into account.

  • The term memory sees a surge in the 15-year period 1993–2007. This is likely because of the new emphasis on cognition during that period, perhaps together with the rising popularity of cognitivism.

  • The terms neural and brain typically represent the emerging trend in Biological Psychology. The lack of an increasing pattern in both cases is somewhat unexpected.

  • The term learning is prominent in the period of 1993–1997. Its decreasing pattern after that period may be related to the decreasing trend in using the term behavior, since conceptually there is some overlap among “behavior”, “behaviorism”, and “learning”. At different times, these terms may be more or less popular in use.

Figure 5 shows a ThemeRiver visualization that depicts the changes of the top 25 keywords in Psychological Review between 1978 and 2017. In the same way as for Behavioural and Brain Sciences, keyword occurrences are counted in blocks of five years to increase the number of papers sampled at each point, and the ThemeRiver software interpolates the thickness between each pair of consecutive data points. From this visualization, we can make the following observations:

  • The journal published noticeably more papers during the period between 2004 and 2010. Although the total number of papers published during 2003–2007 (250) is slightly higher than that during 2008–2012 (242), interestingly, the spike occurs at the data point for 2008–2012. Most terms, such as memory, attention, social, and representation, exhibit a gradual increasing trend in line with the growth of published papers in the journal.

  • The term model is overwhelmingly the most popular term in the journal, because there has been a focus on theory development in this journal. There is also a large spike in the use of the term model for the period of 2008–2012, possibly reflecting the journal’s emphasis on model building during the period.

  • The terms decision and learn show slightly different trends from other terms, with more growth in recent years. This may be attributed to the fact that they are often used in places where “behavior” or “behavioral” would have been used previously. The term decision is often used in studies about memory, attention, categorization, and so on.

  • The term behavior does appear to have an increasing trend in recent years. It is likely being used to describe actual behaviors in various tasks, rather than to represent the paradigm of behaviorism.
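
To make the counting step concrete, here is a minimal sketch, in Python, of binning keyword occurrences into five-year blocks as described above for Figures 4 and 5. The `records` input and the function name are hypothetical illustrations, not the tooling actually used for this survey.

```python
from collections import Counter

# A minimal sketch (not the survey's actual pipeline) of counting keyword
# occurrences in five-year blocks to prepare a ThemeRiver input.
# `records` is a hypothetical list of (year, keyword) pairs.
def bin_keywords(records, start=1978, end=2017, block=5):
    bins = {}
    for year, keyword in records:
        if start <= year <= end:
            block_start = year - (year - start) % block
            bins.setdefault(block_start, Counter())[keyword] += 1
    return bins

# Example with three hypothetical keyword occurrences:
print(bin_keywords([(1979, "memory"), (1984, "model"), (1985, "model")]))
# {1978: Counter({'memory': 1}), 1983: Counter({'model': 2})}
```

The ThemeRiver software then interpolates the stream thickness between these five-year sampling points.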

Because the journal Behavioural and Brain Sciences publishes general review papers across many different subject areas in psychology, it does not feature a highly dominant theme. Meanwhile, the journal Psychological Review places a greater emphasis on developing models, which explains why the term model is the most prominent. The journal also gives emphasis to theory, as the development of a model is typically informed by the development of a theory, while the testing of a model can provide evidence to support or falsify a theory. The term process appears frequently in both figures because it is a general word for scenarios where many cognitive components work together to process information in a way that makes it usable, which is in line with model building. The term memory is also central to such processes, as many models have a memory component for processing information. This suggests that the journal focuses largely on the paradigm of cognitivism, which is the dominant school of thought in psychology today. The term cognition appears less frequently than behavior, largely because authors often use more specific words, such as “visual”, “recognition”, “decision”, and “response”, instead of “cognition”.

6 Juxtapositional Analysis of Taxonomies and Topics

Figure 6(a) shows two time series, representing the number of papers in Psychological Review between 1978 and 2018 and the number of papers on visualization-related empirical studies collected in this survey. Although these numbers are not directly comparable, we can make several inferred observations after taking some other factors into account.

  • Although the collection of papers in Tables I, II, and III may not be complete, it is a relatively comprehensive collection, and thus the time series for the number of visualization-related empirical studies is indicative. From the orange-colored time series in Figure 6(a), we can observe that there have been noticeably more papers since 2010 in comparison with the period before. Note that IEEE Visualization started in 1990 and EuroVis started in 1999. This suggests a healthy growth of empirical studies conducted in the context of visualization.

  • From Figures 4 and 5, we have already observed the different trends of keyword counts in Behavioural and Brain Sciences and Psychological Review. In fact, the two journals also have different trends in terms of numbers of papers. We thus do not treat the variations exhibited by the blue-colored time series as a representative trend in psychology. It is also necessary to point out that not all papers in Psychological Review report empirical studies. Nevertheless, most papers in this journal report empirical studies and/or models derived from the results of empirical studies. Hence it is reasonable to consider that this journal alone has published more empirical studies than the field of visualization. Considering that there are over 100 journals in the field of psychology, the number of empirical studies published in visualization venues is a drop in the ocean. From the discussions on taxonomies in Sections 3 and 4, the subject of visualization no doubt shares much common ground with the discipline of psychology. Meanwhile, through this survey, we have also observed that not many visualization-related empirical studies have been published in psychology journals. Hence, there is clearly a need and scope for more visualization-related empirical studies.

  • In addition, from Figure 6(b) we can observe that the average number of authors per paper in Psychological Review exhibits a gradual increase between 1978 and 2018. In comparison, the average number of authors per paper for visualization-related empirical studies is in general higher during the period between 2010 and 2018 (a minimal sketch of this per-year averaging follows this list). Note that the averages for the visualization series between 1978 and 2009 are statistically not meaningful and should not be considered in the comparison.
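
The per-year averaging behind Figure 6(b) is a straightforward aggregation. The sketch below illustrates it, assuming a hypothetical input format of one (year, author count) pair per paper; this is not the survey's actual data.

```python
from collections import defaultdict

# A minimal sketch (hypothetical input format) of computing the average
# number of authors per paper per year, as plotted in Figure 6(b).
def avg_authors_per_year(papers):
    totals = defaultdict(lambda: [0, 0])  # year -> [author sum, paper count]
    for year, n_authors in papers:
        totals[year][0] += n_authors
        totals[year][1] += 1
    return {year: s / n for year, (s, n) in sorted(totals.items())}

print(avg_authors_per_year([(2014, 4), (2014, 2), (2015, 3)]))
# {2014: 3.0, 2015: 3.0}
```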

With the high-level taxonomies developed in Sections 3 and 4, we can examine the relations between the topics in psychology and empirical studies in visualization. In particular, we can use the variable Functional Categories of Behaviors to categorize the collection of empirical studies, as shown in Tables I, II, and III.

Fig. 6: Time series between 1978 and 2018 showing (a) the number of papers in Psychological Review (PR) and the number of papers on visualization-related empirical studies (Vis) collected in this survey; (b) the average number of authors per paper in Psychological Review (PR) and the average number of authors per paper for visualization-related empirical studies (Vis).

We first notice that many papers are about the behaviors of sensing (72) and thinking (59). In many ways, this is expected. However, there are also 13 papers about storing, 4 about externalizing, 2 about learning, 2 about feeling, and 1 about motivating. There is none about deviating. In comparison, the terms memory and learning are relatively prominent in the ThemeRiver visualizations in Figures 4 and 5, suggesting that these may be areas of interest that demand the attention of future empirical studies in visualization. In particular, external memorization is an important merit of visualization [179], and there is a need to study how humans reduce their cognitive load for memorization through the use of visualization, in addition to the current focus on how humans remember what is shown in visualization. Furthermore, learning is an important aspect of visualization, as visualization can aid learning and learning can improve visualization skills. This line of enquiry may also open up new scope for studies related to Developmental Psychology.

While a small number of empirical studies in visualization were designed to test theories such as Weber’s law, the scale of using empirical studies to support theory and model development is far smaller than in psychology, as illustrated by the terms model and theory in Figures 4 and 5. Those figures also feature many other terms, such as language, information, representation, and social, that are relevant to visualization but have not yet featured much in its empirical studies.
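
For readers unfamiliar with Weber’s law, the sketch below illustrates its generic form: the just-noticeable difference (JND) in a stimulus is proportional to the stimulus magnitude. This is only an illustration of the principle, not the specific models fitted in the studies of Table V; studies such as [15, 16] report that the JND in perceived correlation varies approximately linearly with the correlation value, a Weber-law-like relationship.

```python
# A generic illustration of Weber's law (not the exact model fitted in
# [15, 16]): the just-noticeable difference is proportional to the
# stimulus magnitude, delta_I = k * I, where k is a fitted constant.
def jnd(stimulus_magnitude, k=0.2):
    return k * stimulus_magnitude

# Treating the distance from perfect correlation (1 - r) as a
# hypothetical stimulus magnitude: discrimination becomes finer
# as r approaches 1.
for r in (0.3, 0.6, 0.9):
    print(f"r = {r:.1f}: JND ~ {jnd(1.0 - r):.3f}")
```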

Year Venue #Authors #Experiment #Ind.Var. Variations #Repeats #Iterations #Stimuli/P #Participants Apparatus Main Task
Beach and Scopp [180] 1966 Psychology 2 1 1 A:5 4 20 32 card estimate
Erlick [181] 1966 Psychology 1 1 2 A:212 50 10 paper
Strahan and Hansen [182] 1978 Psychology 2 1 3 A:132;G:2 13 paper
Bobko and Karren [183] 1979 Psychology 2 1 3 A:8;A:2;A:3 13 89 questionnaire estimate
Lane et al. [184] 1985 Psychology 3 2
E1 5 G:2;A: 40 39 estimate
E2 5 G:2;A: 40 40 estimate
Lauer and Post [185] 1989 Psychology 2 1 5 A: 144 27 computer estimate
Collyer et al. [186] 1990 Psychology 3 1 1 A:4 4 16 50 computer estimate
Meyer and Shinar [187] 1992 Psychology 2 2
E1 4 A;G:2;G:2 54 19+10 paper estimate
E2 4 A:;G:2;G:2 36 49+49 paper estimate
Doherty et al. [188] 2007 Psychology 4 4
E1 1 A:2 2 20 paper estimate
E2 1 A:4 25 100 21 paper high/low
E3 1 A:4 25 100 20 paper high/low
E4 3 A: 1 58 paper estimate
Rensink [189] 2017 Psychology 1 4
E1 1 A:10 ~40 computer high/low
E2 2 A: ~40 computer high/low
E3 2 A: ~40 computer high/low
E4 2 A: ~40 computer high/low
Cleveland et al. [190] 1982 Science 3 3
E1 2 A: 1,4 19 74 paper estimate
E2 1 A:2 2 109 projector estimate
E3 1 A:2 2 32 projector estimate
Rensink and Baldridge [15] 2010 Visualization 2 1
E1 1 A/G:19 <50 20 computer high/low
E2 2 A/G: <50 20 computer high/low
Li et al. [116] 2010 Visualization 3 1 4 A: 2 168 25 computer estimate
Harrison et al. [16] 2014 Visualization 4 2
E1 2 G:6;A:2 <50 <200 88 crowd high/low
E2 3 G:9;G:6;A:2 <50 <200 1687 crowd high/low
Kanjanabose et al. [108] 2015 Visualization 3 1 3 A: 2 72 43 computer 4 tasks
Sher et al. [145] 2016 Visualization 4 1(6) 190 37
E1 1 A:21 >2 computer estimate
E2 2 A: >2 computer estimate
E3 2 A: >2 computer estimate
E4 1 A:6 >2 computer estimate
E5 1 A:7 >2 computer estimate
E6 2 A: computer estimate
Notes:
  • The letter “A” indicates that the variations were presented to all participants, while the letter “G” indicates that the variations were presented separately to different groups.
  • Each participant received only 1 stimulus.
  • The data resulting from the first experiment was used in the other three experiments as one of the two variations of the two-value variables.
  • The four tasks are value retrieval, clustering, outlier detection, and change detection.
  • The stimuli of the six experiments were presented to participants in an integrated experiment to facilitate the sharing of stimuli and to alleviate cross-experiment confounding effects.
TABLE V: Many empirical studies have been conducted by psychologists, statisticians, and visualization researchers to examine humans’ performance in estimating correlation using scatter plots and other visual representations.

7 Juxtaposed Case Studies

The previous section juxtaposed the topic developments in psychology with the empirical studies in visualization. Within this broad context, this section focuses on two specific topics, for which we juxtapose the empirical studies published in psychology venues with those published in visualization venues.

7.1 Visual Estimation of Correlation

Visually estimating correlation has long been an interesting topic for psychologists, statisticians, and visualization researchers. Table V lists a number of empirical studies published mainly in psychology and visualization journals, including some major attributes of these studies. The earlier studies on this topic in the 1960s, 1970s, 1980s, and 1990s typically involved apparatuses such as cards, papers and booklets, and projectors, while the use of computers started in the late 1980s and crowd-sourcing started in the current decade [191]. Noticeably, the introduction of computers as apparatuses enabled more complicated study designs, such as dynamic stimulus generation for iterative capture of participants’ responses [15, 16, 189] and an integrated multi-hypothesis experiment with stimulus sharing [145]. The crowd-based study conducted by Harrison et al. [16] also demonstrated the feasibility of recruiting many more participants through the internet than in any typical laboratory setting. Possibly because of the programming skills available in the visualization community, the studies by Li et al. [116], Harrison et al. [16], and Kanjanabose et al. [108] extended the investigation from visually estimating correlation using scatter plots to estimating correlation using other visual representations and to other visualization tasks using scatter plots.

The majority of the studies on this topic, in all publication venues, have identified that humans’ estimation of Pearson’s product-moment correlation coefficient (PPMCC) does not have the same numerical accuracy and consistency as the PPMCC itself. Some sounded an alarm about humans’ sub-optimal inferences (e.g., [180]), while others tried to model such displacement (e.g., [181, 190]). Some 40 years ago, the psychologist authors of [183] suggested that “examination of scatter plots may have many uses (cf. Tukey, 1977), although it is clear that calculation of r is not one of them.” Recently, visualization researchers built on the collective knowledge gained from the studies in Table V, and started to ask questions about the benefits of scatter plots (e.g., [145]) and, more broadly and deeply, the benefit of visualization in general [192]. All these enable us to appreciate the value of empirical studies such as those in Table V, and motivate us to use empirical studies to help answer some fundamental questions in the field of visualization.
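
As a point of reference for what participants in these studies were asked to estimate, below is a minimal sketch of computing the PPMCC; the sample data are purely illustrative.

```python
import math

# A minimal sketch of the Pearson product-moment correlation coefficient
# (PPMCC): r = cov(x, y) / (std(x) * std(y)).
def ppmcc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data with a strong positive relationship:
print(round(ppmcc([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 3))  # 0.991
```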

7.2 Color Perception and Colormapping

Color perception has been a pervasive topic in psychology and visualization. The discipline of psychology has accumulated a very large collection of research papers on empirical studies and theoretical discourses derived from empirical studies. We provide below several examples of color research in psychology.

Fig. 7: A photo, #theDress, which first appeared on the social media service Tumblr and attracted a huge amount of online discussion. People perceived the colors of the dress differently: many saw blue and black, while some saw white and gold.
  (a) Between the late 1960s and early 1980s, Treisman and her colleagues reported a number of experiments studying the interaction between colors and a few other visual channels (e.g., words [193] and shapes [194, 195]), which led to the proposal of a feature integration theory of attention [196]. Around that time, there were also many other publications in psychology reporting experiments on colors and other visual channels as well as their preattentive properties, interactions, and integration (e.g., [197, 198, 199, 200]).

  (b) Colors were used in a number of experiments to study the effects of language on cognition, e.g., color space in naming and memory [201] (1972), color categories [202], and preattentive color perception [203] (2009). The recent article by Zhong et al. mentioned some 20 references reporting empirical studies on this topic [204].

  (c) Color perception was also a common topic shared by many branches of psychology, ranging from the leftmost branch in Figure 1, Biological Psychology (e.g., [205]), to the rightmost branch, Comparative Psychology (e.g., [206]).

  (d) Color constancy is a perceptual phenomenon whereby the colors of a surface appear constant despite measurable variations in intensity and spectrum due to illumination and texture. Foster provided a substantial review on this topic, including many references [207]. In 2015, an image referred to online as “#theDress” (Figure 7) attracted a considerable amount of discussion on social media, as different viewers appeared to perceive different colors of the dress. #theDress also stimulated much scholarly discourse and some research activities among researchers in psychology. The recent review by Martín-Moro et al. included 17 references on this topic, including six empirical studies [208], while Witzel and Gegenfurtner summarized the discussion on #theDress in their review of the two closely related topics of color constancy and color categorization [209].

In visualization, understanding color perception is vital to design decisions on choosing visual channels and creating colormaps. Naturally, visualization researchers have conducted many empirical studies on color perception and colormapping. Below are a number of empirical studies (in chronological order) that were published in visualization venues and collected during this survey.

  1. Ware conducted three experiments on color sequencing in colormaps [163].

  2. Borgo et al. reported three experiments studying (i) the performance of five visualization tasks in spatio-temporal visualization using color pixel blocks, (ii) the effect of different numbers of color bands in a colormap, and (iii) humans’ capability of “averaging” in pixel-based visualization [12]. Their experiment (ii) confirmed Ware’s finding about the merit of multi-band colormaps [163].

  3. Haroz and Whitney reported three experiments to study the impact of visual feature type (color vs. motion), layout, and variety of visual elements on user performance [14].

  4. Griffin and Robinson compared the uses of colors and leader lines for highlighting visual objects depicted in coordinated views for geo-visualization [93].

  5. Lin et al. conducted two experiments comparing the use of standard colormaps in visualization with the use of expert- or algorithm-selected colormaps that have strong semantic associations between colors and words [118].

  6. Gramazio et al. reported an empirical study examining the performance of color-based visual search tasks in three types of pixel grid layouts [92].

  7. Demiralp et al. reported an empirical study investigating the interaction between colors and a few other visual channels [81].

  8. Mittelstädt and Keim conducted an experiment to study the impact of the contrast effect on visualization tasks relying on color perception, and the means for alleviating the effect using personalized perception models [126].

  9. Gramazio et al. reported an empirical study evaluating a web-based tool for creating discriminable and aesthetically preferable categorical color palettes [91].

  10. Szafir reported three experiments studying the impact of mark types and sizes upon the perception of color differences in visualization [152].

  11. Schloss et al. conducted an empirical study examining the relationship between the semantic meaning associated with a sequential colormap and the ordering of the colors in the colormap [144].

Comparing the above two lists, we can easily observe the synergy between:

  • (a) and (3), (4), (7) and (10);

  • (b) and (5) and (11);

  • (d) and (1), (2), (6), (8), and (9).

The phenomenon exhibited by #theDress in Figure 7 is directly related to many visualization tasks, especially those involving continuous colormaps and 3D visual objects. As in the case study of the previous section, if retrieving values from the colors of visual objects is not reliable, visualization researchers not only need to devise new guidelines, methods, and techniques to alleviate such problems, but also need to conduct more empirical studies that help to answer the fundamental question of what the benefit of visualization really is when value-retrieval tasks are unreliable.

8 Some Recent Developments in Psychology

Psychology is a continually evolving discipline, and new topics emerge frequently. In this section, we briefly describe three new developments and discuss their relevance to visualization.

8.1 Distributed Cognition Approaches in Cognitive Science

While we classify Behavioural and Brain Sciences as a psychology journal in Section 5, it is also thought of as a journal in the interdisciplinary field of cognitive science. Cognitive scientists seek converging evidence about the nature of cognition from multiple disciplines that speak to how information is processed, such as artificial intelligence, neuroscience, philosophy of mind, as well as psychology and the social sciences.

A foundational principle in cognitive science is the concept of a cognitive architecture: the structures of information processing that are architectural in the sense that they are invariant with regard to training and experience. These are studied empirically in humans and simulated in computational cognitive architectures. For the human cognitive architecture, we see well-known neuroscience constraints, such as trichromacy as determined by cone pigments, and visual resolution limitations as determined by retinal receptor density. Other architectural limitations are defined by consistent limitations in human performance across tasks. These include attentional limitations, e.g., the number of spatial tokens (i.e., FINSTs) that parse complex visual scenes [210]. These architectural constraints can be found in the experimental psychology literature; however, there are aspects of the cognitive architecture that relate specifically to interaction with dynamic and immersive visual environments and are not commonly studied by psychologists. Examples of these human/computer cognitive systems applications include:

  • Smart seeing and projecting [211]: The argument for visualization often stems from our ability to see patterns in information graphics. This depends upon our mental models of the processes associated with that information, as well as our ability to parse the artificial visual scene of the dashboard. A theory of the operating characteristics of smart seeing in visualization could be quite useful for creating and evaluating interactive methods. A related kind of expertise, projecting, is the ability of an expert to take into account what is represented in the visualization, to predict what will happen (or what should be done) next, and then to manipulate the information for a what-if analysis.

  • Enactive/complementary cognition [212]: Another way of integrating information technology and cognitive processes arises when a dynamic environment generates information based on computational processes or changes in streaming data. Analysts must adjust their thinking and respond to the updated information in real time. Studies in the human factors literature document how changes in the timing of the response to user actions can alter users’ task performance strategy [213]. For a theory to model human performance in dynamic environments, it must be able to take the temporal coordination of cognitive processes and external events into account. Methods for doing this are still being developed; see [214, 215, 216] for examples.

  • Multi-agent cognition and joint activity [217]: The third distributed cognition (D-Cog) method studies the coordination of action between multiple human and/or non-human agents, either through structured coordination protocols or as negotiated coordination. Two mechanisms can be used to enable negotiation: representation of the probable behaviours of an agent (e.g., a user behavioural model for an artificial intelligent agent) and cooperative signalling. Multi-human cognition is often studied using descriptive social science methods. A more focused approach comes from cognitive ethnography [218]. A cognitivist research approach to multi-agent coordination examines human-human coordination as a Joint Activity [219]. Joint activity models have been used to analyze coordinated activity in paired analysis studies [220]. To do this, an analysis task is proposed with roles given to two or more analysts, and the roles require them to cooperate in accomplishing the analysis task in an interface environment [217]. Sessions are video-captured and analyzed using Clark’s theory. This approach could possibly also be extended to the study of coordination with non-human agents.

8.2 Mindfulness

Third-wave therapies, such as mindfulness, are becoming more popular in psychology. Though these are typically used in a health psychology context, they also relate to cognition. Mindfulness can be defined as paying attention to the present moment in a non-judgmental way [221]. Mindfulness meditation is thought to promote cognitive flexibility, which may be useful in visualization tasks. For example, after mindfulness training, individuals encountered less Stroop task interference (a measure of automatic thinking) and performed better in a concentration and endurance test that required them to visually discriminate targets from visually similar non-targets. Hence, mindfulness may have increased the visual attention of participants [222].

In many visual analytics applications, analysts may encounter a variety of psychological conditions that may impact the performance of visual analytics tasks. Research on mindfulness in the context of visualization and visual analytics may provide new means to address such conditions.

8.3 Cognitive Neuroscience

Cognitive neuroscience [223, 224] is an interdisciplinary field connecting neuroscience with psychology. It is a topic category under the leftmost branch in Figure 1, Biological Psychology, which is concerned with the biological processes and aspects that underlie human behaviors and cognition. Cognitive neuroscience focuses on the neural connections in the brain, their formation and transformation, their functions and controls, and their impact on various cognitive processes (cf. the 7th variable in Figure 1).

Over the past four decades, the rapid advancement of new brain mapping technologies (e.g., fMRI and PET) has enabled cognitive scientists to observe brain activities at a more detailed spatiotemporal scale than ever before. These technologies have been used to study human vision systems (e.g., [225, 226, 227]) and visualization-related cognitive functions such as memory and reasoning (e.g., [228, 229]). The application of visualization and visual analytics to functional neuroimaging has not been at the same scale as for other imaging modalities (e.g., CT, MRI, and DTI), though there have been some reports of such applications (e.g., [230, 231]). There is huge potential for developing advanced visualization and visual analytics techniques to support functional neuroimaging and hence cognitive neuroscience.

Meanwhile, more advanced and effective analysis of functional neuroimaging data will provide the field of visualization with more opportunities to study visualization phenomena using functional neuroimaging.

9 Challenges and Opportunities

There are many branches of Applied Psychology, some of which are shown in Figure 1. One has to ask: “Is there room for Visualization Psychology?” The authors of this survey believe that the visualization community should work with colleagues in psychology to establish such a branch. We hope that this survey is an early step towards this long-term goal.

There will be many challenges along the route to the establishment of a new branch of Applied Psychology. These may include:

  • Many research students in visualization may need some persuasion to take on empirical studies as their thesis topics.

  • Many academic supervisors may feel uncomfortable starting a new line of scientific investigation.

  • The perceived relatively low acceptance rate for papers in the category of “Evaluation” or “Empirical Studies”.

Meanwhile, visualization provides a unique window on the human mind, while playing an indispensable role in data science. This survey shows that the visualization community is not only capable of carrying out empirical studies to test visual designs or visualization systems as part of a software engineering workflow, but is also capable of attempting the more ambitious goal of empirical studies: to make new discoveries about how and why visualization works in some conditions and not in others, and to inform and verify proposed theoretical advances.

Most of us agree that in some circumstances, visualization is more effective and/or efficient than viewing data in numerical, textual, or tabular forms, and than simply being informed by a computer about the decision. When visualization works in these circumstances, there must be some merits in perception and cognition. Hence any causal factors that make visualization work may potentially be the causal factors that make perception and cognition work. Therefore, visualization researchers are in the right place at the right time to look for these causal factors.

In summary, while the field of visualization can learn a huge amount from psychology in terms of research findings and research methodologies, there is a need to develop an interdisciplinary subject that brings the discipline of psychology and the field of visualization closer together. While visualization can be a significant application area of psychology, visualization researchers can also provide advanced computing technologies to support the design of empirical studies and the analysis of captured empirical data. While there is a continuing need to conduct usability studies for evaluating visual designs, visualization techniques, and visualization systems, there is a profound need to design innovative empirical studies for understanding complex phenomena in visualization and for informing the development of the foundations of visualization.

References

  • [1] R. Kosara, C. G. Healey, V. Interrante, D. H. Laidlaw, and C. Ware, “Thoughts on user studies: Why, how, and when,” IEEE Computer Graphics and Applications, vol. 23, no. 4, pp. 20–25, 2003.
  • [2] H. Lam, E. Bertini, P. Isenberg, C. Plaisant, and S. Carpendale, “Empirical studies in information visualization: Seven scenarios,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 9, pp. 1520–1536, Sept 2012.
  • [3] J. Fuchs, P. Isenberg, A. Bezerianos, and D. Keim, “A systematic review of experimental studies on data glyphs,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 7, pp. 1863–1879, July 2017.
  • [4] R. E. Roth, A. Çöltekin, L. Delazari, H. F. Filho, A. Griffin, A. Hall, J. Korpi, I. Lokka, A. Mendonça, K. Ooms, and C. P. J. M. van Elzakker, “User studies in cartography: Opportunities for empirical research on interactive maps and visualizations,” International Journal of Cartography, vol. 3, no. sup1, pp. 61–89, 2017.
  • [5] N. Kijmongkolchai, A. Abdul-Rahman, and M. Chen, “Empirically measuring soft knowledge in visualization,” Computer Graphics Forum, vol. 36, no. 3, pp. 73–85, 2017.
  • [6] L. McNabb and R. S. Laramee, “Survey of surveys SoS - mapping the landscape of survey papers in information visualization,” Computer Graphics Forum, vol. 36, no. 3, pp. 589–617, Jun. 2017.
  • [7] K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye tracking evaluation of visual analytics,” Information Visualization, vol. 15, no. 4, pp. 340–358, 2016.
  • [8] K. Blumenstein, C. Niederer, M. Wagner, G. Schmiedl, A. Rind, and W. Aigner, “Evaluating information visualization on mobile devices: Gaps and challenges in the empirical evaluation design space,” in Proc. of the Sixth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, ser. BELIV’ 16, 2016, pp. 125–132.
  • [9] J. Heinrich and D. Weiskopf, “State of the Art of Parallel Coordinates,” in Eurographics 2013 - State of the Art Reports, M. Sbert and L. Szirmay-Kalos, Eds.   The Eurographics Association, May 2013, pp. 95–116.
  • [10] J. Johansson and C. Forsell, “Evaluation of parallel coordinates: Overview, categorization and guidelines for future research,” IEEE Trans. on Visualization & Computer Graphics, vol. 22, no. 1, pp. 579–588, 2016.
  • [11] A. Dasgupta, D. L. Arendt, L. R. Franklin, P. C. Wong, and K. A. Cook, “Human factors in streaming data analysis: Challenges and opportunities for information visualization,” Computer Graphics Forum, 2017.
  • [12] R. Borgo, K. Proctor, M. Chen, H. Jänicke, T. Murray, and I. M. Thornton, “Evaluating the impact of task demands and block resolution on the effectiveness of pixel-based visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 16, no. 6, pp. 963–972, 2010.
  • [13] M. Correll, D. Albers, S. Franconeri, and M. Gleicher, “Comparing averages in time series data,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2012, pp. 1095–1104.
  • [14] S. Haroz and D. Whitney, “How capacity limits of attention influence information visualization effectiveness,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2402–2410, 2012.
  • [15] R. A. Rensink and G. Baldridge, “The perception of correlation in scatterplots,” Computer Graphics Forum, vol. 29, no. 3, pp. 1203–1210, 2010.
  • [16] L. Harrison, F. Yang, S. Franconeri, and R. Chang, “Ranking visualizations of correlation using Weber’s law,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 1943–1952, 2014.
  • [17] D. H. S. Chung, D. Archambault, R. Borgo, D. J. Edwards, R. S. Laramee, and M. Chen, “How ordered is it? On the perceptual orderability of visual channels,” Computer Graphics Forum, vol. 35, no. 3, pp. 131–140, 2016.
  • [18] M. Chen, G. Grinstein, C. R. Johnson, J. Kennedy, and M. Tory, “Pathways for theoretical advances in visualization,” IEEE Computer Graphics and Applications, vol. 37, no. 4, pp. 103–112, 2017.
  • [19] R. L. Atkinson, R. G. Atkinson, E. E. Smith, and D. J. Bem, Introduction to Psychology, 11th ed.   Thomson Learning, 1993.
  • [20] E. E. Smith, S. Nolen-Hoeksema, B. Fredrickson, and G. R. Loftus, Atkinson & Hilgard’s Introduction to Psychology, 14th ed.   Cengage Learning Emea, 2003.
  • [21] D. H. Hockenbury and S. E. Hockenbury, Discovering Psychology, 5th ed.   World Publishers, 2010.
  • [22] H. Gleitman, A. J. Fridlund, and D. Reisberg, Psychology, 6th ed.   W. W. Norton & Co., 2003.
  • [23] R. Cross, Psychology: The Science of Mind and Behaviour, 5th ed.   Hodder Education, 2005.
  • [24] K. R. Boff, L. Kaufman, and J. P. Thomas, Handbook of Perception and Human Performance, Volume I.   Wiley-Interscience, 1986.
  • [25] ——, Handbook of Perception and Human Performance, Volume II.   John Wiley & Sons, 1986.
  • [26] Wikipedia, “Category: Branches of psychology,” Accessed in January 2018. [Online]. Available: https://en.wikipedia.org/wiki/Category:Branches_of_psychology
  • [27] K. Cherry, “The major branches of psychology,” Accessed in January 2018. [Online]. Available: https://www.verywell.com/major-branches-of-psychology-4139786
  • [28] C. Nordqvist, “Psychology: What you need to know,” Accessed in January 2018. [Online]. Available: https://www.medicalnewstoday.com/articles/154874.php
  • [29] K. Cherry, “Main branches of psychology,” Accessed in January 2018. [Online]. Available: https://www.explorepsychology.com/branches-of-psychology/
  • [30] A. Sharma, “Branches of psychology,” Accessed in January 2018. [Online]. Available: http://www.psychologydiscussion.net/branch/branches-of-psychology-different-branches-of-psychology/544
  • [31] Revuu, “10 branches of psychology,” Accessed in January 2018.
  • [32] Netindustries, “Branches of psychology,” Accessed in January 2018. [Online]. Available: http://psychology.jrank.org/collection/24/Branches-Psychology.html
  • [33] Enigmatic_HourGlass, “Branches of psychology,” Accessed in January 2018. [Online]. Available: https://quizlet.com/42518115/branches-of-psychology-flash-cards/
  • [34] PsycholoGenie, “List of subdivisions in psychology,” Accessed in January 2018. [Online]. Available: https://psychologenie.com/branches-of-psychology
  • [35] P. L.-J. Ritchie and J. Grenier, “Branches of psychology,” in Psychology.   Eolss, 2009, vol. I.
  • [36] S. Wehrend and C. Lewis, “A problem-oriented classification of visualization techniques,” in Proc. of the 1st Conference on Visualization, 1990, pp. 139–143.
  • [37] M. X. Zhou and S. K. Feiner, “Visual task characterization for automated visual discourse synthesis,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 1998, pp. 392–399.
  • [38] R. A. Amar and J. T. Stasko, “Knowledge precepts for design and evaluation of information visualizations,” IEEE Trans. Visualization & Computer Graphics, vol. 11, no. 4, pp. 432–442, 2005.
  • [39] E. R. A. Valiati, M. S. Pimenta, and C. M. D. S. Freitas, “A taxonomy of tasks for guiding the evaluation of multidimensional visualizations,” in Proc. of the AVI Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization, ser. BELIV ’06, 2006, pp. 1–6.
  • [40] J. Pretorius, H. C. Purchase, and J. T. Stasko, “Tasks for multivariate network analysis,” in Multivariate Network Visualization: Dagstuhl Seminar #13201, Dagstuhl Castle, Germany, May 12-17, 2013, Revised Discussions, A. Kerren, H. C. Purchase, and M. O. Ward, Eds.   Cham: Springer International Publishing, 2014, pp. 77–95.
  • [41] P. Cohen, Empirical Methods for Artificial Intelligence.   The MIT Press, 1995.
  • [42] A. Buja, D. Cook, and D. F. Swayne, “Interactive high-dimensional data visualization,” Journal of Computational and Graphical Statistics, vol. 5, no. 1, pp. 78–99, March 1996.
  • [43] D. Pfitzner, V. Hobbs, and D. Powers, “A unified taxonomic framework for information visualization,” in Proc. of the Asia-Pacific Symposium on Information Visualisation - Volume 24, ser. APVis 03, 2003, pp. 57–66.
  • [44] T. Lammarsch, A. Rind, W. Aigner, and S. Miksch, “Developing an Extended Task Framework for Exploratory Data Analysis Along the Structure of Time,” in EuroVA 2012: Int. Workshop on Visual Analytics, K. Matkovic and G. Santucci, Eds.   The Eurographics Association, 2012.
  • [45] M. Brehmer and T. Munzner, “A multi-level typology of abstract visualization tasks,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2376–2385, Dec 2013.
  • [46] H. J. Schulz, T. Nocke, M. Heitzler, and H. Schumann, “A design space of visualization tasks,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2366–2375, Dec 2013.
  • [47] J. w. Ahn, C. Plaisant, and B. Shneiderman, “A task taxonomy for network evolution analysis,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 3, pp. 365–376, March 2014.
  • [48] N. Kerracher, J. Kennedy, and K. Chalmers, “A task taxonomy for temporal graph visualisation,” IEEE Trans. Visualization & Computer Graphics, vol. 21, no. 10, pp. 1160–1172, Oct 2015.
  • [49] A. Rind, W. Aigner, M. Wagner, S. Miksch, and T. Lammarsch, “Task cube: A three-dimensional conceptual space of user tasks in visualization design and evaluation,” Information Visualization, vol. 15, no. 4, pp. 288–300, 2016.
  • [50] P. Murray, F. Mcgee, and A. Forbes, “A taxonomy of visualization tasks for the analysis of biological pathway data,” in BMC Bioinformatics, vol. 18, no. sup2, 2017, p. 21.
  • [51] N. Andrienko and G. Andrienko, Exploratory Analysis of Spatial and Temporal Data: A Systematic Approach.   Springer, Berlin, Heidelberg, 2006.
  • [52] M. Adnan, M. Just, and L. Baillie, “Investigating time series visualisations to improve the user experience,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2016, pp. 5444–5455.
  • [53] W. Aigner, C. Kainz, R. Ma, and S. Miksch, “Bertin was Right: An Empirical Evaluation of Indexing to Compare Multivariate Time-Series Data Using Line Plots,” Computer Graphics Forum, 2011.
  • [54] W. Aigner, A. Rind, and S. Hoffmann, “Comparative evaluation of an interactive time-series visualization that combines quantitative data with qualitative abstractions,” Computer Graphics Forum, vol. 31, no. 3pt2, pp. 995–1004, 2012.
  • [55] D. Albers, M. Correll, and M. Gleicher, “Task-driven evaluation of aggregation in time series visualization,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2014, pp. 551–560.
  • [56] Y. Albo, J. Lanir, P. Bak, and S. Rafaeli, “Off the radar: Comparative evaluation of radial visualization solutions for composite indicators,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 569–578, 2016.
  • [57] E. Alexander, C. Chang, M. Shimabukuro, S. Franconeri, C. Collins, and M. Gleicher, “Perceptual biases in font size as a data encoding,” IEEE Trans. Visualization & Computer Graphics, vol. 24, no. 8, pp. 2397–2410, Aug 2018.
  • [58] E. W. Anderson, K. C. Potter, L. E. Matzen, J. F. Shepherd, G. A. Preston, and C. T. Silva, “A user study of visualization effectiveness using EEG and cognitive load,” Computer Graphics Forum, vol. 30, no. 3, pp. 791–800, 2011.
  • [59] J. Bae and B. Watson, “Reinforcing visual grouping cues to communicate complex informational structure,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 1973–1982, 2014.
  • [60] R. Beecham, J. Dykes, W. Meulemans, A. Slingsby, C. Turkay, and J. Wood, “Map LineUps: Effects of spatial structure on graphical inference,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 391–400, 2017.
  • [61] A. Bezerianos and P. Isenberg, “Perception of visual variables on tiled wall-sized displays for information visualization applications,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2516–2525, 2012.
  • [62] R. Borgo, A. Abdul-Rahman, F. Mohamed, P. W. Grant, I. Reppa, L. Floridi, and M. Chen, “An empirical study on using visual embellishments in visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2759–2768, 2012.
  • [63] R. Borgo, J. Dearden, and M. W. Jones, “Order of magnitude markers: An empirical study on large magnitude number detection,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2261–2270, 2014.
  • [64] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister, “What makes a visualization memorable?” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2306–2315, 2013.
  • [65] M. A. Borkin, Z. Bylinskii, N. W. Kim, C. M. Bainbridge, C. S. Yeh, D. Borkin, H. Pfister, and A. Oliva, “Beyond memorability: Visualization recognition and recall,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 519–528, 2016.
  • [66] N. Boukhelifa, A. Bezerianos, T. Isenberg, and J. D. Fekete, “Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2769–2778, 2012.
  • [67] J. Boy, L. Eveillard, F. Detienne, and J. D. Fekete, “Suggested interactivity: Seeking perceived affordances for information visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 639–648, 2016.
  • [68] I. Boyandin, E. Bertini, and D. Lalanne, “A qualitative study on the exploration of temporal changes in flow maps with animation and small-multiples,” Computer Graphics Forum, vol. 31, no. 3pt2, pp. 1005–1014, 2012.
  • [69] U. Brandes, B. Nick, B. Rockstroh, and A. Steffen, “Gestaltlines,” Computer Graphics Forum, vol. 32, no. 3pt2, pp. 171–180, 2013.
  • [70] S. Bresciani and M. J. Eppler, “The benefits of synchronous collaborative information visualization: Evidence from an experimental evaluation,” IEEE Trans. Visualization & Computer Graphics, vol. 15, no. 6, pp. 1073–1080, Nov 2009.
  • [71] M. Burch, N. Konevtsova, J. Heinrich, M. Hoeferlin, and D. Weiskopf, “Evaluation of traditional, orthogonal, and radial tree diagrams by an eye tracking study,” IEEE Trans. Visualization & Computer Graphics, vol. 17, no. 12, pp. 2440–2448, Dec 2011.
  • [72] X. Cai, K. Efstathiou, X. Xie, Y. Wu, Y. Shi, and L. Yu, “A study of the effect of doughnut chart parameters on proportion estimation accuracy,” Computer Graphics Forum, vol. 37, no. 6, pp. 300–312, 2018.
  • [73] M. Chen, R. P. Botchen, R. R. Hashim, D. Weiskopf, T. Ertl, and I. M. Thornton, “Visual signatures in video visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 12, no. 5, pp. 1093–1100, 2006.
  • [74] F. Chevalier, P. Dragicevic, and S. Franconeri, “The not-so-staggering effect of staggered animated transitions on visual tracking,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2241–2250, Dec 2014.
  • [75] W. S. Cleveland and R. McGill, “Graphical perception: Theory, experimentation, and application to the development of graphical methods,” Journal of the American Statistical Association, vol. 79, no. 387, pp. 531–554, 1984.
  • [76] M. A. Correll, E. C. Alexander, and M. Gleicher, “Quantity estimation in visualizations of tagged text,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2013, pp. 2697–2706.
  • [77] M. Correll and M. Gleicher, “Error bars considered harmful: Exploring alternate encodings for mean and error,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2142–2151, Dec 2014.
  • [78] M. Correll and J. Heer, “Regression by eye: Estimating trends in bivariate visualizations,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2017, pp. 1387–1396.
  • [79] M. Correll, M. Li, G. Kindlmann, and C. Scheidegger, “Looks good to me: Visualizations as sanity checks,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 830–839, Jan 2019.
  • [80] A. Dasgupta, J. Y. Lee, R. Wilson, R. A. Lafrance, N. Cramer, K. Cook, and S. Payne, “Familiarity Vs Trust: A comparative study of domain scientists’ trust in visual analytics and conventional analysis methods,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 271–280, 2017.
  • [81] C. Demiralp, M. S. Bernstein, and J. Heer, “Learning perceptual kernels for visualization design,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 1933–1942, Dec 2014.
  • [82] S. Diehl, F. Beck, and M. Burch, “Uncovering strengths and weaknesses of radial visualizations—an empirical approach,” IEEE Trans. Visualization & Computer Graphics, vol. 16, no. 6, pp. 935–942, Nov. 2010.
  • [83] E. Dimara, G. Bailly, A. Bezerianos, and S. Franconeri, “Mitigating the attraction effect with visualizations,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 850–860, Jan 2019.
  • [84] E. Dimara, A. Bezerianos, and P. Dragicevic, “The attraction effect in information visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 471–480, 2017.
  • [85] R. Etemadpour, R. Motta, J. G. d. S. Paiva, R. Minghim, M. C. F. de Oliveira, and L. Linsen, “Perception-based evaluation of projection methods for multidimensional data visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 21, no. 1, pp. 81–94, 2015.
  • [86] C. Felix, S. Franconeri, and E. Bertini, “Taking word clouds apart: An empirical investigation of the design space for keyword summaries,” IEEE Trans. Visualization & Computer Graphics, vol. 24, no. 1, pp. 657–666, Jan 2018.
  • [87] M. Fink, J. H. Haunert, J. Spoerhase, and A. Wolff, “Selecting the aspect ratio of a scatter plot based on its Delaunay triangulation,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2326–2335, 2013.
  • [88] J. Fuchs, P. Isenberg, A. Bezerianos, F. Fischer, and E. Bertini, “The influence of contour on similarity perception of star glyphs,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2251–2260, 2014.
  • [89] S. Ghani, N. Elmqvist, and J. S. Yi, “Perception of animated node-link diagrams for dynamic graphs,” Computer Graphics Forum, vol. 31, no. 3pt3, pp. 1205–1214, 2012.
  • [90] M. Gleicher, M. Correll, C. Nothelfer, and S. Franconeri, “Perception of average value in multiclass scatterplots,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2316–2325, 2013.
  • [91] C. C. Gramazio, D. H. Laidlaw, and K. B. Schloss, “Colorgorical: Creating discriminable and preferable color palettes for information visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 521–530, Jan 2017.
  • [92] C. C. Gramazio, K. B. Schloss, and D. H. Laidlaw, “The relation between visualization size, grouping, and user performance,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 1953–1962, 2014.
  • [93] A. L. Griffin and A. C. Robinson, “Comparing color and leader line highlighting strategies in coordinated view geovisualizations,” IEEE Trans. Visualization & Computer Graphics, vol. 21, no. 3, pp. 339–349, 2015.
  • [94] T. Gschwandtner, M. Bögl, P. Federico, and S. Miksch, “Visual encodings of temporal uncertainty: A comparative user study,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 539–548, 2016.
  • [95] H. Guo, J. Huang, and D. H. Laidlaw, “Representing uncertainty in graph edges: An evaluation of paired visual variables,” IEEE Trans. Visualization & Computer Graphics, vol. 21, no. 10, pp. 1173–1186, 2015.
  • [96] S. Haroz, R. Kosara, and S. L. Franconeri, “Isotype visualization: Working memory, performance, and engagement with pictographs,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2015, pp. 1191–1200.
  • [97] ——, “The connected scatterplot for presenting paired time series,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 9, pp. 2174–2186, 2016.
  • [98] J. Heer and M. Bostock, “Crowdsourcing graphical perception: Using mechanical turk to assess visualization design,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2010, pp. 203–212.
  • [99] J. Heer, N. Kong, and M. Agrawala, “Sizing the horizon: The effects of chart size and layering on the graphical perception of time series visualizations,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2009, pp. 1303–1312.
  • [100] M. Höferlin, K. Kurzhals, B. Höferlin, G. Heidemann, and D. Weiskopf, “Evaluation of fast-forward video visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2095–2103, 2012.
  • [101] H. Hofmann, L. Follett, M. Majumder, and D. Cook, “Graphical tests for power comparison of competing designs,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2441–2448, 2012.
  • [102] S. Huron, Y. Jansen, and S. Carpendale, “Constructing visual representations: Investigating the use of tangible tokens,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2102–2111, Dec 2014.
  • [103] P. Isenberg, A. Bezerianos, P. Dragicevic, and J. Fekete, “A study on dual-scale data charts,” IEEE Trans. Visualization & Computer Graphics, vol. 17, no. 12, pp. 2469–2478, Dec 2011.
  • [104] M. R. Jakobsen and K. Hornbaek, “Interactive visualizations on large and small displays: The interrelation of display size, information space, and scale,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2336–2345, 2013.
  • [105] M. R. Jakobsen, Y. S. Haile, S. Knudsen, and K. Hornbaek, “Information visualization and proxemics: Design opportunities and empirical findings,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 12, pp. 2386–2395, 2013.
  • [106] Y. Jansen and K. Hornbaek, “A psychophysical investigation of size as a physical variable,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 479–488, 2016.
  • [107] W. Javed, B. McDonnel, and N. Elmqvist, “Graphical perception of multiple time series,” IEEE Trans. Visualization & Computer Graphics, vol. 16, no. 6, pp. 927–934, Nov 2010.
  • [108] R. Kanjanabose, A. Abdul-Rahman, and M. Chen, “A multi-task comparative study on scatter plots and parallel coordinates plots,” Computer Graphics Forum, vol. 34, no. 3, pp. 261–270, 2015.
  • [109] M. Kersten-Oertel, S. J. Chen, and D. L. Collins, “An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 3, pp. 391–403, March 2014.
  • [110] S. H. Kim, Z. Dong, H. Xian, B. Upatising, and J. S. Yi, “Does an eye tracker tell the truth about visualizations?: Findings while investigating visualizations for decision making,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2421–2430, 2012.
  • [111] Y. Kim and J. Heer, “Assessing effects of task and data distribution on the effectiveness of visual encodings,” Computer Graphics Forum, 2018.
  • [112] X. Kuang, H. Zhang, S. Zhao, and M. J. McGuffin, “Tracing tuples across dimensions: A comparison of scatterplots and parallel coordinate plots,” Computer Graphics Forum, vol. 31, no. 3pt4, pp. 1365–1374, 2012.
  • [113] K. Kurzhals, M. Höferlin, and D. Weiskopf, “Evaluation of attention-guiding video visualization,” Computer Graphics Forum, vol. 32, no. 3pt1, pp. 51–60, 2013.
  • [114] O. H. Kwon, C. Muelder, K. Lee, and K. L. Ma, “A study of layout, rendering, and interaction methods for immersive graph visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 7, pp. 1802–1815, 2016.
  • [115] D. H. Laidlaw, J. S. Davidson, T. S. Miller, M. da Silva, R. M. Kirby, W. H. Warren, and M. Tarr, “Quantitative comparative evaluation of 2D vector field visualization methods,” in Proc. IEEE Visualization, 2001, pp. 143–150.
  • [116] J. Li, J.-B. Martens, and J. J. Van Wijk, “Judging correlation from scatterplots and parallel coordinate plots,” Information Visualization, vol. 9, no. 1, pp. 13–30, 2010.
  • [117] I. Liccardi, A. Abdul-Rahman, and M. Chen, “I know where you live: Inferring details of people’s lives by visualizing publicly shared location data,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2016, pp. 1–12.
  • [118] S. Lin, J. Fortuna, C. Kulkarni, M. Stone, and J. Heer, “Selecting semantically-resonant colors for data visualization,” Computer Graphics Forum, vol. 32, no. 3pt4, pp. 401–410, 2013.
  • [119] A. J. Lind and S. Bruckner, “Comparing cross-sections and 3D renderings for surface matching tasks using physical ground truths,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 781–790, Jan 2017.
  • [120] M. Livingston and J. Decker, “Evaluation of trend localization with multi-variate visualizations,” IEEE Trans. Visualization & Computer Graphics, vol. 17, no. 12, pp. 2053–2062, Dec 2011.
  • [121] M. A. Livingston, J. W. Decker, and Z. Ai, “Evaluation of multivariate visualization on a multivariate task,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2114–2121, 2012.
  • [122] A. M. MacEachren, R. E. Roth, J. O’Brien, B. Li, D. Swingley, and M. Gahegan, “Visual semiotics & uncertainty visualization: An empirical study,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2496–2505, 2012.
  • [123] K. Marriott, H. Purchase, M. Wybrow, and C. Goncu, “Memorability of visual features in network diagrams,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2477–2485, 2012.
  • [124] M. Mazurek and M. Waldner, “Visualizing expanded query results,” Computer Graphics Forum, vol. 37, no. 3, pp. 87–98, 2018.
  • [125] L. Micallef, P. Dragicevic, and J. D. Fekete, “Assessing the effect of visualizations on Bayesian reasoning through crowdsourcing,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2536–2545, 2012.
  • [126] S. Mittelstädt and D. A. Keim, “Efficient contrast effect compensation with personalized perception models,” Computer Graphics Forum, vol. 34, no. 3, pp. 211–220, 2015.
  • [127] C. J. Morris, D. S. Ebert, and P. L. Rheingans, “Experimental analysis of the effectiveness of features in Chernoff faces,” Proc. SPIE, vol. 3905, 2000.
  • [128] R. Netzel, M. Burch, and D. Weiskopf, “Comparative eye tracking study on node-link visualizations of trajectories,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2221–2230, 2014.
  • [129] R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An evaluation of visual search support in maps,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 421–430, 2017.
  • [130] L. Nowell, R. Schulman, and D. Hix, “Graphical encoding for information visualization: an empirical study,” in IEEE Symp. Information Visualization, Oct 2002, pp. 43–50.
  • [131] B. Ondov, N. Jardine, N. Elmqvist, and S. Franconeri, “Face to face: Evaluating visual comparison,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 861–871, Jan 2019.
  • [132] A. Ottley, E. M. Peck, L. T. Harrison, D. Afergan, C. Ziemkiewicz, H. A. Taylor, P. K. J. Han, and R. Chang, “Improving Bayesian reasoning: The effects of phrasing, visualization, and spatial ability,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 529–538, 2016.
  • [133] L. Padilla, P. S. Quinan, M. Meyer, and S. H. Creem-Regehr, “Evaluating the impact of binning 2D scalar fields,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 431–440, 2017.
  • [134] A. V. Pandey, K. Rall, M. L. Satterthwaite, O. Nov, and E. Bertini, “How deceptive are deceptive visualizations?: An empirical analysis of common distortion techniques,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2015, pp. 1469–1478.
  • [135] A. V. Pandey, J. Krause, C. Felix, J. Boy, and E. Bertini, “Towards understanding human similarity perception in the analysis of large sets of scatter plots,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2016, pp. 3659–3669.
  • [136] I. Poupyrev, T. Ichikawa, S. Weghorst, and M. Billinghurst, “Egocentric object manipulation in virtual environments: Empirical evaluation of interaction techniques,” Computer Graphics Forum, vol. 17, no. 3, pp. 41–52, 1998.
  • [137] E. D. Ragan, R. Kopper, P. Schuchardt, and D. A. Bowman, “Studying the effects of stereo, head tracking, and field of regard on a small-scale spatial judgment task,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 5, pp. 886–896, 2013.
  • [138] G. Ryan, A. Mosca, R. Chang, and E. Wu, “At a glance: Pixel approximate entropy as a measure of line chart complexity,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 872–881, Jan 2019.
  • [139] B. Saket, A. Endert, and C. Demiralp, “Task-based effectiveness of basic visualizations,” IEEE Trans. Visualization & Computer Graphics, 2018.
  • [140] B. Saket, P. Simonetto, S. Kobourov, and K. Börner, “Node, node-link, and node-link-group diagrams: An evaluation,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2231–2240, 2014.
  • [141] B. Saket, C. Scheidegger, S. G. Kobourov, and K. Börner, “Map-based visualizations increase recall accuracy of data,” Computer Graphics Forum, vol. 34, no. 3, pp. 441–450, 2015.
  • [142] B. Saket, C. Scheidegger, and S. Kobourov, “Comparing node-link and node-link-group visualizations from an enjoyment perspective,” Computer Graphics Forum, vol. 35, no. 3, pp. 41–50, 2016.
  • [143] A. Sarvghad, M. Tory, and N. Mahyar, “Visualizing dimension coverage to support exploratory analysis,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 21–30, 2017.
  • [144] K. B. Schloss, C. C. Gramazio, A. T. Silverman, M. L. Parker, and A. S. Wang, “Mapping color to meaning in colormap data visualizations,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 810–819, Jan 2019.
  • [145] V. Sher, K. G. Bemis, I. Liccardi, and M. Chen, “An empirical study on the reliability of perceiving correlation indices using scatterplots,” Computer Graphics Forum, vol. 36, no. 3, pp. 61–72, 2017.
  • [146] D. Skau, L. Harrison, and R. Kosara, “An evaluation of the impact of visual embellishments in bar charts,” Computer Graphics Forum, vol. 34, no. 3, pp. 221–230, 2015.
  • [147] D. Skau and R. Kosara, “Arcs, angles, or areas: Individual data encodings in pie and donut charts,” Computer Graphics Forum, vol. 35, no. 3, pp. 121–130, 2016.
  • [148] H. Song and D. A. Szafir, “Where’s my data? Evaluating visualizations with missing data,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 914–924, Jan 2019.
  • [149] A. Srinivasan, M. Brehmer, B. Lee, and S. M. Drucker, “What’s the difference?: Evaluating variations of multi-series bar charts for visual comparison tasks,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2018, pp. 304:1–304:12.
  • [150] H. Strobelt, D. Oelke, B. C. Kwon, T. Schreck, and H. Pfister, “Guidelines for effective usage of text highlighting techniques,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 489–498, 2016.
  • [151] J. Talbot, J. Gerth, and P. Hanrahan, “An empirical model of slope ratio comparisons,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2613–2620, 2012.
  • [152] D. A. Szafir, “Modeling color difference for visualization design,” IEEE Trans. Visualization & Computer Graphics, vol. 24, no. 1, pp. 392–401, Jan 2018.
  • [153] D. A. Szafir, A. Sarikaya, and M. Gleicher, “Lightness constancy in surface visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 9, pp. 2107–2121, Sep. 2016.
  • [154] J. Talbot, J. Gerth, and P. Hanrahan, “An empirical model of slope ratio comparisons,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2613–2620, 2012.
  • [155] J. Talbot, V. Setlur, and A. Anand, “Four experiments on the perception of bar charts,” IEEE Trans. Visualization & Computer Graphics, vol. 20, no. 12, pp. 2152–2160, 2014.
  • [156] Y. Tanahashi, N. Leaf, and K.-L. Ma, “A study on designing effective introductory materials for information visualization,” Computer Graphics Forum, vol. 35, no. 7, pp. 117–126, 2016.
  • [157] M. Tory, “Mental registration of 2D and 3D visualizations (An empirical study),” in IEEE Visualization, Oct 2003, pp. 371–378.
  • [158] A. Vande Moere, M. Tomitsch, C. Wimmer, B. Christoph, and T. Grechenig, “Evaluating the effect of style in information visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2739–2748, 2012.
  • [159] M. Volante, S. V. Babu, H. Chaturvedi, N. Newsome, E. Ebrahimi, T. Roy, S. B. Daily, and T. Fasolino, “Effects of virtual human appearance fidelity on emotion contagion in affective inter-personal simulations,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 4, pp. 1326–1335, 2016.
  • [160] J. A. Wagner Filho, C. M. Freitas, and L. Nedel, “VirtualDesk: A comfortable and efficient immersive information visualization approach,” Computer Graphics Forum, 2018.
  • [161] J. Walker, R. Borgo, and M. W. Jones, “TimeNotes: A study on effective chart visualization and interaction techniques for time-series data,” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 1, pp. 549–558, 2016.
  • [162] Y. Wang, F. Han, L. Zhu, O. Deussen, and B. Chen, “Line graph or scatter plot? Automatic selection of methods for visualizing trends in time series,” IEEE Trans. Visualization & Computer Graphics, vol. 24, no. 2, pp. 1141–1154, Feb 2018.
  • [163] C. Ware, “Color sequences for univariate maps: Theory, experiments, and principles,” IEEE Computer Graphics and Applications, vol. 8, no. 5, pp. 41–49, 1988.
  • [164] Y. Wu, N. Cao, D. Archambault, Q. Shen, H. Qu, and W. Cui, “Evaluation of graph sampling: A visualization perspective,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 401–410, 2017.
  • [165] T. Wun, J. Payne, S. Huron, and S. Carpendale, “Comparing bar chart authoring with Microsoft Excel and tangible tiles,” Computer Graphics Forum, vol. 35, no. 3, pp. 111–120, 2016.
  • [166] K. Xu, C. Rooney, P. Passmore, D. H. Ham, and P. H. Nguyen, “A user study on curved edges in graph visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 18, no. 12, pp. 2449–2456, 2012.
  • [167] Y. Yang, T. Dwyer, S. Goodwin, and K. Marriott, “Many-to-many geographically-embedded flow visualisation: An evaluation,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 1, pp. 411–420, 2017.
  • [168] Y. Yang, B. Jenny, T. Dwyer, K. Marriott, H. Chen, and M. Cordeil, “Maps and globes in virtual reality,” Computer Graphics Forum, vol. 37, no. 3, pp. 427–438, 2018.
  • [169] B. Yost and C. North, “The perceptual scalability of visualization,” IEEE Trans. Visualization & Computer Graphics, vol. 12, no. 5, pp. 837–844, Sep. 2006.
  • [170] H. Zhao, G. W. Bryant, W. Griffin, J. E. Terrill, and J. Chen, “Validation of SplitVectors encoding for quantitative visualization of large-magnitude-range vector fields,” IEEE Trans. Visualization & Computer Graphics, vol. 23, no. 6, pp. 1691–1705, June 2017.
  • [171] J. Zhao, Z. Liu, M. Dontcheva, A. Hertzmann, and A. Wilson, “MatrixWave: Visual comparison of event sequence data,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2015, pp. 259–268.
  • [172] Y. Zhao, F. Luo, M. Chen, Y. Wang, J. Xia, F. Zhou, Y. Wang, Y. Chen, and W. Chen, “Evaluating multi-dimensional visualizations for understanding fuzzy clusters,” IEEE Trans. Visualization & Computer Graphics, vol. 25, no. 1, pp. 12–21, Jan 2019.
  • [173] L. Zheng, Y. Wu, and K. L. Ma, “Perceptually-based depth-ordering enhancement for direct volume rendering,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 3, pp. 446–459, 2013.
  • [174] C. Ziemkiewicz, A. Ottley, R. J. Crouser, A. R. Yauilla, S. L. Su, W. Ribarsky, and R. Chang, “How visualization layout relates to locus of control and other personality factors,” IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 7, pp. 1109–1121, 2013.
  • [175] J. Jastrow, “Concepts and ‘isms’ in psychology,” The American Journal of Psychology, vol. 39, no. 1/4, pp. 1–6, 1927.
  • [176] J. P. Byrnes, “Categorizing and combining theories of cognitive development and learning,” Educational Psychology Review, vol. 4, no. 3, pp. 309–343, 1992.
  • [177] T. H. Leahey, A History of Psychology: From Antiquity to Modernity.   T&F India, 2017.
  • [178] E. Kardas, History of Psychology: The Making of a Science.   Wadsworth Publishing, 2013.
  • [179] M. Chen, L. Floridi, and R. Borgo, “What is visualization really for?” in The Philosophy of Information Quality, Springer Synthese Library, vol. 358, 2014, pp. 75–93.
  • [180] L. R. Beach and T. S. Scopp, “Inferences about correlations,” Psychonomic Science, vol. 6, pp. 253–254, 1966.
  • [181] D. E. Erlick, “Human estimates of statistical relatedness,” Psychonomic Science, vol. 5, pp. 365–366, 1966.
  • [182] R. F. Strahan and C. J. Hansen, “Underestimating correlation from scatterplots,” Applied Psychological Measurement, vol. 2, no. 4, pp. 543–550, 1978.
  • [183] P. Bobko and R. Karren, “The perception of Pearson product moment correlations from bivariate scatterplots,” Personnel Psychology, vol. 32, no. 2, pp. 313–325, 1979.
  • [184] D. M. Lane, C. A. Anderson, and K. L. Kellam, “Judging the relatedness of variables: The psychophysics of covariation detection.” Journal of Experimental Psychology: Human Perception and Performance, vol. 11, no. 5, p. 640, 1985.
  • [185] T. W. Lauer and G. V. Post, “Density in scatterplots and the estimation of correlation,” Behaviour & Information Technology, vol. 8, no. 3, pp. 235–244, 1989.
  • [186] C. E. Collyer, K. A. Stanley, and C. Bowater, “Psychology of the scientist: LXIII. Perceiving scattergrams: Is visual line fitting related to estimation of the correlation coefficient?” Perceptual and Motor Skills, vol. 71, no. 2, pp. 371–378, 1990.
  • [187] J. Meyer and D. Shinar, “Estimating correlations from scatterplots,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 34, no. 3, pp. 335–349, 1992.
  • [188] M. E. Doherty, R. B. Anderson, A. M. Angott, and D. S. Klopfer, “The perception of scatterplots,” Perception & Psychophysics, vol. 69, no. 7, pp. 1261–1272, 2007.
  • [189] R. A. Rensink, “The nature of correlation perception in scatterplots,” Psychonomic Bulletin & Review, pp. 1–22, 2016.
  • [190] W. S. Cleveland, P. Diaconis, and R. McGill, “Variables on scatterplots look more highly correlated when the scales are increased,” Science, vol. 216, pp. 1138–1141, 1982.
  • [191] R. Borgo, L. Micallef, B. Bach, F. Mcgee, and B. Lee, “Information visualization evaluation using crowdsourcing,” Computer Graphics Forum, vol. 37, pp. 573–595, 2018.
  • [192] M. Chen and A. Golan, “What may visualization processes optimize?” IEEE Trans. Visualization & Computer Graphics, vol. 22, no. 12, pp. 2619–2632, 2016.
  • [193] A. Treisman and S. Fearnley, “The Stroop test: Selective attention to colours and words,” Nature, vol. 222, pp. 437–439, 1969.
  • [194] A. Treisman, M. Sykes, and G. Gelade, “Selective attention and stimulus integration,” in Attention and Performance VI, S. Dornic, Ed.   Hillsdale, NJ: Erlbaum, 1977.
  • [195] A. Treisman, “Focused attention in the perception and retrieval of multidimensional stimuli,” Perception and Psychophysics, vol. 22, no. 1, pp. 1–11, 1977.
  • [196] A. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive Psychology, vol. 12, pp. 97–136, 1980.
  • [197] R. Shepard, “Attention and the metric structure of the stimulus space,” Journal of Mathematical Psychology, vol. 1, no. 1, pp. 54–87, 1964.
  • [198] L. Williams, “The effects of target specification on objects fixated during visual search,” Acta Psychologica, vol. 27, pp. 355–415, 1967.
  • [199] B. Burns, B. E. Shepp, D. McDonough, and W. K. Wiener-Ehrlich, “The relation between stimulus analyzability and perceived dimensional structure,” The Psychology of Learning and Motivation, vol. 12, pp. 77–115, 1978.
  • [200] P. Quinlan and G. Humphreys, “Visual search for targets defined by combinations of color, shape, and size: An examination of the task constraints on feature and conjunction searches,” Perception & Psychophysics, vol. 41, no. 5, pp. 455–527, 1987.
  • [201] E. R. Heider and D. C. Olivier, “The structure of the colour space in naming and memory for two languages,” Cognitive Psychology, vol. 3, pp. 337–354, 1972.
  • [202] J. Davidoff, I. Davies, and D. Roberson, “Colour categories in a stone-age tribe,” Nature, vol. 398, pp. 203–204, 1999.
  • [203] G. Thierry, P. Athanasopoulos, A. Wiggett, B. Dering, and J.-R. Kuipers, “Unconscious effects of language-specific terminology on preattentive color perception,” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 11, pp. 4567–4570, 2009.
  • [204] W. Zhong, Y. Li, Y. Huang, H. Li, and L. Mo, “Is the lateralized categorical perception of color a situational effect of language on color perception?” Cognitive Science, vol. 42, pp. 350–364, 2018.
  • [205] R. Shapley and M. Hawken, “Neural mechanisms for color perception in the primary visual cortex,” Current Opinion in Neurobiology, vol. 12, pp. 426–432, 2002.
  • [206] P. Gouras and J. Kruger, “Responses of cells in foveal visual cortex of the monkey to pure color contrast,” Journal of Neurophysiology, vol. 42, pp. 850–860, 1979.
  • [207] D. H. Foster, “Color constancy,” Vision Research, vol. 51, pp. 674–700, 2011.
  • [208] J. G. Martín-Moro, F. P. Garrido, F. G. Sanz, I. F. Vega, M. C. Rebollo, and P. M. Martína, “Which are the colors of the dress? Review of an atypical optic illusion,” Archivos de la Sociedad Española de Oftalmología, vol. 3, no. 4, pp. 186–192, 2018.
  • [209] C. Witzel and K. R. Gegenfurtner, “Color perception: Objects, constancy, and categories,” Annual Review of Vision Science, vol. 4, pp. 475–499, 2018.
  • [210] Z. W. Pylyshyn, “Visual indexes, preconceptual objects, and situated vision,” Cognition, vol. 80, no. 1-2, pp. 127–158, 2001.
  • [211] D. Kirsh, “Thinking with external representations,” AI & Society, vol. 25, no. 4, pp. 441–454, 2010.
  • [212] L. W. Barsalou, “Grounded cognition,” Annual Review of Psychology, vol. 59, pp. 617–645, 2008.
  • [213] W. D. Gray, “The cognitive science of intermediate interactive behaviour or why milliseconds matter for reality-based interfaces,” in Challenges in the Evaluation of Usability and User Experience in Reality Based Interaction (CHI 2009 Workshop), 2009, pp. 5–8.
  • [214] ——, Ed., Integrated Models of Cognitive Systems.   New York: Oxford University Press, 2007.
  • [215] D. Kirsh and P. Maglio, “On distinguishing epistemic from pragmatic action,” Cognitive Science, vol. 18, pp. 513–549, 1994.
  • [216] J. K. Lindstedt and W. D. Gray, “Distinguishing experts from novices by the mind’s hand and mind’s eye,” Cognitive Psychology, vol. 109, pp. 1–19, 2019.
  • [217] L. T. Kaastra and B. Fisher, “Field experiment methodology for pair analytics,” in Proc. of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, ser. BELIV ’14, 2014.
  • [218] E. Hutchins, Cognition in the Wild.   The MIT Press, 1995.
  • [219] A. Bangerter and H. H. Clark, “Navigating joint projects with dialogue,” Cognitive Science, vol. 27, no. 2, pp. 195–225, 2003.
  • [220] R. Arias-Hernandez, L. T. Kaastra, and B. Fisher, “Joint action theory and pair analytics: In-vivo studies of cognition and social interaction in collaborative visual analytics,” in Proc. 33rd Annual Conference of the Cognitive Science Society, L. Carlson, C. Hoelscher, and T. Shipley, Eds., 2011, pp. 3244–3249.
  • [221] J. Kabat-Zinn, “Mindfulness-based interventions in context: past, present, and future,” Clinical Psychology: Science and Practice, vol. 10, no. 2, pp. 144–156, 2003.
  • [222] A. Moore and P. Malinowski, “Meditation, mindfulness and cognitive flexibility,” Consciousness and Cognition, vol. 18, no. 1, pp. 176–186, 2009.
  • [223] R. Andersen and S. M. Kosslyn, Eds., Frontiers in Cognitive Neuroscience.   MIT Press, 1992.
  • [224] K. Ochsner and S. M. Kosslyn, Eds., The Oxford Handbook of Cognitive Neuroscience.   Oxford University Press, 2017.
  • [225] S. M. Courtney and L. G. Ungerleider, “What fMRI has taught us about human vision,” Current Opinion in Neurobiology, vol. 7, no. 4, pp. 554–561, 1997.
  • [226] C. Hickey and M. V. Peelen, “Neural mechanisms of incentive salience in naturalistic human vision,” Neuron, vol. 85, no. 3, pp. 512–518, 2015.
  • [227] S. Schwartz, P. Vuilleumier, C. Hutton, A. Maravita, R. J. Dolan, and J. Driver, “Attentional load and sensory competition in human vision: Modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field,” Cerebral Cortex, vol. 15, no. 6, pp. 770–786, 2005.
  • [228] P. A. Carpenter, M. A. Just, and E. D. Reichle, “Working memory and executive function: Evidence from neuroimaging,” Current Opinion in Neurobiology, vol. 10, no. 2, pp. 195–199, 2000.
  • [229] S. J. Durning, M. Costanzo, T. Beckman, A. Artino Jr., M. Roy, and C. Van Der Vleuten, “Functional neuroimaging correlates of thinking flexibility and knowledge structure in memory: Exploring the relationships between clinical reasoning and diagnostic thinking,” Medical Teacher, vol. 16, pp. 1–8, 2015.
  • [230] S. B. Katwal, J. C. Gore, R. Marois, and B. P. Rogers, “Unsupervised spatiotemporal analysis of fMRI data using graph-based visualizations of self-organizing maps,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 9, pp. 2472–2483, 2013.
  • [231] N. K. Kasabov, M. G. Doborjeh, and Z. G. Doborjeh, “Mapping, learning, visualization, classification, and understanding of fMRI data in the NeuCube evolving spatiotemporal data machine of spiking neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 4, pp. 887–899, 2017.