VizML: A Machine Learning Approach to Visualization Recommendation

08/14/2018 ∙ by Kevin Z. Hu, et al. ∙ MIT

Data visualization should be accessible for all analysts with data, not just the few with technical expertise. Visualization recommender systems aim to lower the barrier to exploring basic visualizations by automatically generating results for analysts to search and select, rather than manually specify. Here, we demonstrate a novel machine learning-based approach to visualization recommendation that learns visualization design choices from a large corpus of datasets and associated visualizations. First, we identify five key design choices made by analysts while creating visualizations, such as selecting a visualization type and choosing to encode a column along the X- or Y-axis. We train models to predict these design choices using one million dataset-visualization pairs collected from a popular online visualization platform. Neural networks predict these design choices with high accuracy compared to baseline models. We report and interpret feature importances from one of these baseline models. To evaluate the generalizability and uncertainty of our approach, we benchmark with a crowdsourced test set, and show that the performance of our model is comparable to human performance when predicting consensus visualization type, and exceeds that of other ML-based systems.


1 Problem Formulation

Data visualization communicates information by representing data with visual elements. These representations are specified using encodings that map from data to the retinal properties (e.g. position, length, or color) of graphical marks (e.g. points, lines, or rectangles) [5, 11].

Concretely, consider a dataset that describes 406 automobiles (rows) with eight attributes (columns) such as miles per gallon (MPG), horsepower (Hp), and weight in pounds (Wgt) [52]. To create a scatterplot showing the relationship between MPG and Hp, an analyst encodes each pair of data points with the position of a circle on a 2D plane, while also specifying many other properties like size and color.

To create bespoke visualizations, analysts may need to specify encodings in exhaustive detail using expressive tools. But a basic scatterplot is specified in the Vega-lite [57] grammar by selecting a mark type and the fields to be encoded along the x- and y-axes, and in Tableau [64] by placing the two columns onto the respective column and row shelves.
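A minimal Vega-Lite specification for this scatterplot, expressed here as a Python dict, might look like the following sketch; the inline data rows are illustrative, not the full 406-row dataset.

```python
# A minimal Vega-Lite-style specification for the MPG-vs-horsepower scatterplot.
# The mark type and the x/y field encodings are the only design choices stated
# explicitly; Vega-Lite fills in remaining lower-level encodings with defaults.
spec = {
    "data": {"values": [
        {"Hp": 130, "MPG": 18.0},  # illustrative rows, not the full
        {"Hp": 165, "MPG": 15.0},  # Automobile MPG dataset
        {"Hp": 150, "MPG": 18.0},
    ]},
    "mark": "point",
    "encoding": {
        "x": {"field": "Hp", "type": "quantitative"},
        "y": {"field": "MPG", "type": "quantitative"},
    },
}
```

Selecting the mark type and the two field encodings constrains the design space down to a single visualization; everything else is a default.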

That is, to create basic visualizations in many grammars or tools, an analyst specifies higher-level design choices, which we define as statements that compactly and uniquely specify a bundle of lower-level encodings. Equivalently, each grammar or tool affords a design space of visualizations, which a user constrains by making choices.

1.1 Visualization as Making Design Choices

We formulate basic visualization of a dataset d as a set of interrelated design choices C = {c}, each of which is selected from a possibility space c ∈ C_possible. However, not all design choices result in valid visualizations – some choices are incompatible with each other. For instance, encoding a categorical column with the Y position of a line mark is invalid. Therefore, the set of choices that result in valid visualizations C_valid is a subset of the space of all possible choices C_possible.

Figure 1: Creating visualizations is a process of making design choices, which can be recommended by a system or specified by an analyst.

The effectiveness of a visualization can be defined by informational measures such as efficiency, accuracy, and memorability [79, 6], or emotive measures like engagement [28, 20]. Prior research also shows that effectiveness is informed by low-level perceptual principles [15, 32, 23, 53] and dataset properties [56, 29], in addition to contextual factors such as task [55, 29, 3], aesthetics [13], domain [25], audience [62], and medium [37, 59]. In other words, an analyst makes design choices C_max that maximize visualization effectiveness Eff given a dataset d and contextual factors T:

C_max = argmax_{C ∈ C_valid} Eff(C | d, T)    (1)

But making design choices can be expensive. A goal of visualization recommendation is to reduce the cost of creating visualizations by automatically suggesting a subset of design choices C_rec ⊆ C.

1.2 Modeling Design Choice Recommendation

Consider a single design choice c ∈ C. Let C̃ = C \ {c} denote the set of all other design choices excluding c. Given C̃, a dataset d, and context T, there is an ideal design choice recommendation function F_c that outputs the design choice c_max ∈ C_max from Eqn. 1 that maximizes visualization effectiveness:

F_c(d | C̃, T) = c_max    (2)

Our goal is to approximate F_c with a function G_c ≈ F_c. Assume now a corpus of datasets D = {d} and corresponding visualizations V = {V_d}, each of which can be described by design choices C_d = {c_d}. Machine learning-based recommender systems consider G_c as a model with a set of parameters θ_c that can be trained on this corpus by a learning algorithm that maximizes an objective function Obj:

θ_c = argmax_θ Σ_{d ∈ D} Obj(c_d, G_c(d; θ))    (3)

Without loss of generality, say the objective function maximizes the likelihood of observing the training output c_d. Even if an analyst makes sub-optimal design choices, collectively optimizing the likelihood of all observed design choices can still be optimal [40]. This is precisely the case with our observed design choices. Therefore, given an unseen dataset d*, maximizing this objective function can plausibly lead to a recommendation that maximizes the effectiveness of a visualization:

G_c(d*; θ_c) = c_rec ≈ c_max    (4)

In this paper, our model G_c is a neural network and θ_c are its connection weights. We simplify the recommendation problem by optimizing each G_c independently, and without contextual factors: G_c(d | C̃, T) = G_c(d). We note that independent recommendations may not be compatible, nor do they necessarily maximize overall effectiveness. Generating a complete visualization output will require modeling the dependencies between each G_c, which we discuss in section 9.
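To make the per-choice setup concrete, it can be sketched as one independently trained model per design choice. The majority-class "model" below is only a stand-in for the paper's neural networks, and the choice names and corpus entries are illustrative.

```python
# Sketch: train one independent model G_c per design choice c, each mapping
# dataset features to a predicted choice. A majority-class predictor stands
# in for the paper's neural networks; names and data are illustrative.
from collections import Counter

DESIGN_CHOICES = ["visualization_type", "has_shared_axis"]  # illustrative subset

def train_majority_model(labels):
    """A trivial G_c: always predict the most common training label."""
    majority = Counter(labels).most_common(1)[0][0]
    return lambda features: majority

def train_all(corpus):
    """corpus: list of (features, {choice_name: label}) pairs."""
    models = {}
    for c in DESIGN_CHOICES:
        labels = [choices[c] for _, choices in corpus]
        models[c] = train_majority_model(labels)
    return models

def recommend(models, features):
    # Independent per-choice predictions; compatibility between the
    # recommended choices is deliberately not enforced here.
    return {c: g(features) for c, g in models.items()}

corpus = [
    ({"n_cols": 2}, {"visualization_type": "scatter", "has_shared_axis": False}),
    ({"n_cols": 3}, {"visualization_type": "scatter", "has_shared_axis": True}),
    ({"n_cols": 2}, {"visualization_type": "line", "has_shared_axis": False}),
]
models = train_all(corpus)
print(recommend(models, {"n_cols": 2}))
# → {'visualization_type': 'scatter', 'has_shared_axis': False}
```

As the text notes, such independent recommendations may conflict; resolving that requires modeling dependencies across choices.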

Figure 2: Basic setup of learning models to recommend design choices with a corpus of datasets and corresponding design choices.
| System | Source | N | Generation | Learning Task | Training Data | Features | Model |
|---|---|---|---|---|---|---|---|
| VizML | Public (Plotly) | ~1M | Human | Design Choice Recommendation | Dataset-Visualization Pairs | Single + Pairwise + Aggregated | Neural Network |
| DeepEye | Crowd | 1) 33.4K 2) 285K | Rules + Annotation | 1) Good-Bad Classif. 2) Ranking | 1) Good-Bad Labels 2) Pairwise Comparisons | Column Pair | 1) Decision Tree 2) RankNet |
| Data2Vis | Tool (Voyager) | 4,300 | Rules + Validation | End-to-End Viz. Generation | Dataset Subset-Visualization Pairs | Raw | Seq2Seq NN |
| Draco-Learn | Crowd | 1,100 + 10 | Rules + Annotation | Soft Constraint Weights | Pairwise Comparisons | Soft Constraint Violation Counts | RankSVM |
Table 1: Comparison of machine learning-based visualization recommendation systems. The major differences are those of Learning Task definition, and the quantity (N) and quality (Generation and Training Data) of training data.

2 Related Work

We relate and compare our work to existing Rule-based Visualization Recommender Systems, ML-based Visualization Recommender Systems, and prior Descriptions of Public Data and Visualizations.

2.1 Rule-based Visualization Recommender Systems

Visualization recommender systems either suggest data queries (selecting what data to visualize) or visual encodings (how to visualize selected data) [75]. Data query recommenders vary widely in their approaches [61, 73], with recent systems optimizing statistical “utility” functions [68, 19]. Though specifying data queries is crucial to visualization, it is distinct from design choice recommendation.

Most visual encoding recommenders implement guidelines informed by the seminal work of Bertin [5], Cleveland and McGill [15], and others. This approach is exemplified by Mackinlay’s APT [36] – the ur-recommender system – which enumerates, filters, and scores visualizations using expressiveness and perceptual effectiveness criteria. The closely related SAGE [54], BOZ [12], and Show Me [35] support more data, encoding, and task types. Recently, hybrid systems such as Voyager [76, 77, 75], Explore in Google Sheets [21, 69], VizDeck [45], and DIVE [24] combine visual encoding rules with the recommendation of visualizations that include non-selected columns.

Though effective for many use cases, these systems suffer from four major limitations. First, visualization is a complex process that may require encoding non-linear relationships that are difficult to capture with simple rules. Second, even crafting simple rule sets is a costly process that relies on expert judgment. Third, like rule-based systems in other domains, these systems face the cold-start problem of presenting non-trivial results for datasets or users about which they have not yet gathered sufficient information [1]. Lastly, as the dimension of input data increases, the combinatorial nature of rules results in an explosion of possible recommendations.

2.2 ML-based Visualization Recommender Systems

The guidelines encoded by rule-based systems often derive from experimental findings and expert experience. In an indirect manner, then, these heuristics distill best practices learned from other analysts’ experience creating and consuming visualizations. Instead of aggregating such best practices and representing them in a system with rules, ML-based systems propose to train models that learn directly from data, and that can be embedded into systems as-is.

DeepEye [34] combines rule-based visualization generation with models trained to 1) classify a visualization as “good” or “bad” and 2) rank lists of visualizations. The DeepEye corpus consists of 33,412 bivariate visualizations of columns drawn from 42 public datasets. 100 students annotated these visualizations as good/bad, and compared 285,236 pairs. These annotations, combined with 14 features for each column pair, train a decision tree for the classification task and a ranking neural network [10] for the “learning to rank” task.

Data2Vis [18] uses a neural machine translation approach to create a sequence-to-sequence model that maps JSON-encoded datasets to Vega-lite visualization specifications. This model is trained using 4,300 automatically generated Vega-Lite examples, consisting of 1-3 variables, generated from 11 distinct datasets. The model is qualitatively validated by examining the visualizations generated from 24 common datasets.

Draco-Learn [38] learns trade-offs between constraints in Draco, a formal model that represents 1) visualizations as logical facts and 2) design guidelines as hard and soft constraints. Draco-Learn uses a ranking support vector machine trained on ranked pairs of visualizations harvested from graphical perception studies [29, 55]. Draco can recommend visualizations that satisfy these constraints by solving a combinatorial optimization problem.

VizML differs from these systems in three major respects, as shown in Table 1. In terms of the learning task, DeepEye learns to classify and rank visualizations, Data2Vis learns an end-to-end generation model, and Draco-Learn learns soft constraint weights. By learning to predict design choices, VizML models are easier to quantitatively validate, provide interpretable measures of feature importance, and can be more easily integrated into visualization systems.

In terms of data quantity, the VizML training corpus is orders of magnitude larger than that of DeepEye and Data2Vis. The size of our corpus permits the use of 1) large feature sets that capture many aspects of a dataset and 2) high-capacity models like deep neural networks that can be evaluated against a large test set.

The third major difference is one of data quality. The datasets used to train VizML models are extremely diverse in shape, structure, and other properties, in contrast to the few datasets used to train the three existing systems. Furthermore, the visualizations used by other ML-based recommender systems are still generated by rule-based systems, and evaluated in controlled settings. The corpus used by VizML is the result of real visual analysis by analysts on their own datasets.

However, VizML faces two major limitations. First, these three ML-based systems recommend both data queries and visual encodings, while VizML only recommends the latter. Second, in this paper, we do not create an application that employs our visualization model. Design considerations for user-facing systems that productively and properly employ ML-based visualization recommendation are important, but beyond the scope of this paper.

2.3 Descriptions of Public Data and Visualizations

Beagle [4] is an automated system for scraping visualizations across five tools from the web. Beagle shows that a few visualization types represent a large portion of visualizations, and shows differences in visualization type usage between tools. However, Beagle does not collect the data used to generate these visualizations.

A 2013 study of ManyEyes and Tableau Public [39] analyzes hundreds of thousands of datasets and visualizations from two popular tools [70, 64]. The authors report usage patterns, distribution of dataset properties, and characteristics of visualizations. This study also relates dataset properties with visualization types, similar to predicting visualization type using dimension-based features in our approach.

3 Data

We describe our process for collecting and cleaning a corpus of 2.3 million dataset-visualization pairs, describing each dataset and column with features, and extracting design choices from each visualization. These are steps 1, 2, and 3 of the workflow shown in Fig. 8.

3.1 Collection and Cleaning

Plotly [46] is a software company that creates tools and software libraries for data visualization and analysis. For example, Plotly Chart Studio [47] is a web application that lets users upload datasets and manually create interactive D3.js and WebGL visualizations of over 20 visualization types. Users familiar with Python can use the Plotly Python library [49] to create those same visualizations with code.

Visualizations in Plotly are specified with a declarative schema. In this schema, each visualization is specified with two data structures. The first is a list of traces that specify how a collection of data is visualized. The second is a dictionary that specifies aesthetic aspects of a visualization untied from the data, such as axis labels and annotations. For example, the scatterplot from section 1 is specified with a single “scatter” trace with Hp as the x parameter and MPG as the y parameter.
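That two-part structure might be sketched as plain data structures; the values below are illustrative, and only the trace/layout split and the scatter `type` follow the Plotly schema described above.

```python
# Sketch of a Plotly-style figure: a list of traces plus a layout dictionary.
# Data values are illustrative, not drawn from the corpus.
figure = {
    "data": [  # one trace: the analyst's encoding-level design choices
        {
            "type": "scatter",
            "mode": "markers",
            "x": [130, 165, 150],     # Hp
            "y": [18.0, 15.0, 18.0],  # MPG
        }
    ],
    "layout": {  # aesthetics untied from the data
        "xaxis": {"title": "Hp"},
        "yaxis": {"title": "MPG"},
    },
}
```

The `data` list carries everything VizML later parses for design choices; the `layout` dictionary does not affect the encodings.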

The Plotly schema is similar to that of MATLAB and of the matplotlib Python library. The popular Vega [58] and Vega-lite [57] schemas are more opinionated, which “allows for complicated chart display with a concise JSON description, but leaves less control to the user” [51]. Despite these differences, it is straightforward to convert Plotly schemas into other schemas, and vice versa.

Plotly also supports sharing and collaboration. Starting in 2015, users could publish charts to the Plotly Community Feed [48], which provides an interface for searching, sorting, and filtering millions of visualizations, as shown in Fig. 3. The underlying /plots endpoint from the Plotly REST API [50] associates each visualization with three objects: data contains the source data, specification contains the traces, and layout defines display configuration.

Figure 3: Screenshot of the Plotly Community Feed [48].

3.2 Data Description

Using the Plotly API, we collected approximately 2.5 years of public visualizations from the feed, starting from 2015-07-17 and ending at 2018-01-06. We gathered 2,359,175 visualizations in total, 2,102,121 of which contained all three configuration objects, and 1,989,068 of which were parsed without error. To avoid confusion between user-uploaded datasets and our dataset of datasets, we refer to this collection of dataset-visualization pairs as the Plotly corpus.

The Plotly corpus contains visualizations created by unique users, who vary widely in their usage. The distribution of visualizations per user is shown in Fig. 4. Excluding the top of users with the most visualizations, many of whom are bots that programmatically generate visualizations, users created a mean of and a median of visualizations each.

Figure 4: Distribution of plots per user, visualized on a log-log scale.
(a) Distribution of columns per dataset, after removing the of datasets with more than 25 columns, visualized on a log-linear scale.
(b) Distribution of rows per dataset, visualized on a log-log scale.
Figure 5: Distribution of dataset dimensions in the Plotly corpus.

Datasets also vary widely in number of columns and rows. Though some datasets contain upwards of columns, contain less than or equal to columns. Excluding datasets with more than columns, the average dataset has columns, and the median dataset has columns. The distribution of columns per dataset is shown in Fig. 5(a). The distribution of rows per dataset is shown in Fig. 5(b), and has a mean of , median of , and maximum of . These heavy-tailed distributions are consistent with those of IBM ManyEyes and Tableau Public as reported by [39].

Though Plotly lets users generate visualizations using multiple datasets, the overwhelming majority of visualizations used only one source dataset. Therefore, we are only concerned with visualizations using a single dataset. Furthermore, most visualizations used all columns in the source dataset, so we are not able to address data query selection. Lastly, only a small fraction of traces have transformations or aggregations. Given this extreme class imbalance, we are not able to address column transformation or aggregation as learning tasks.

3.3 Feature Extraction

We describe each column with the 81 single-column features shown in Table 3(a) in the Appendix. These features fall into four categories. The Dimensions (D) feature is the number of rows in a column. Types (T) features capture whether a column is categorical, temporal, or quantitative. Values (V) features describe the statistical and structural properties of the values within a column. Names (N) features describe the column name.

We distinguish between these feature categories for three reasons. First, these categories let us organize how we create and interpret features. Second, we can observe the contribution of different types of features. Third, some categories of features may be less generalizable than others. We order these categories (D, T, V, N) by how biased we expect those features to be towards the Plotly corpus.

Nested within these categories are further groupings of features. For instance, within the Values category, the Sequence group includes measures of sortedness, while the features within the Unique group describe the uniqueness of values in a column.
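As a rough sketch of two Values-category features, the snippet below computes the entropy of a column's value distribution and a simple sortedness measure. These are our simplified definitions; the paper's exact feature definitions may differ.

```python
# Sketch of two single-column features from the Values category: entropy of
# the value distribution and sortedness. Definitions here are simplified and
# may differ from the paper's exact ones.
import math
from collections import Counter

def entropy(column):
    """Shannon entropy (bits) of the column's value distribution."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sortedness(column):
    """Fraction of adjacent pairs that are in non-decreasing order."""
    pairs = list(zip(column, column[1:]))
    if not pairs:
        return 1.0
    return sum(a <= b for a, b in pairs) / len(pairs)

col = [1, 2, 2, 3, 1]
print(round(entropy(col), 3), sortedness(col))
# → 1.522 0.75
```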

We describe each pair of columns with 30 pairwise-column features. These features fall into two categories: Values and Names, some of which are shown in Table 3(b). Note that many pairwise-column features depend on the individual column types determined through single-column feature extraction. For instance, the Pearson correlation coefficient requires two numeric columns, and the “number of shared values” feature requires two categorical columns.

We create 841 dataset-level features by aggregating these single- and pairwise-column features using the 16 aggregation functions shown in Table 3(c). These aggregation functions convert single-column features (across all columns) and pairwise-column features (across all pairs of columns) into scalar values. For example, given a dataset, we can count the number of columns, describe the percent of columns that are categorical, and compute the mean correlation between all pairs of quantitative columns. Two other approaches to incorporating single-column features are to train separate models per number of columns, or to include column features with padding. Neither approach yielded a significant improvement over the results in section 5.

Figure 6: Extracting features from the Automobile MPG dataset [52].

3.4 Design Choice Extraction

Each visualization in Plotly consists of traces that associate collections of data with visual elements. Therefore, we extract an analyst’s design choices by parsing these traces. Examples of encoding-level design choices include mark type, such as scatter, line, or bar; X- or Y-column encoding, which specifies which column is represented on which axis; and whether or not an X- or Y-column is the single column represented along that axis. For example, the visualization in Fig. 7 consists of two scatter traces, both of which have the same column encoded on the X-axis (Hp), and two distinct columns encoded on the Y-axis (MPG and Wgt).

By aggregating these encoding-level design choices, we can characterize visualization-level design choices of a chart. Within our corpus, over of the visualizations consist of homogeneous mark types. Therefore, we use visualization type to describe the type shared among all traces, and also determine whether the visualization has a shared axis. The example in Fig. 7 has a scatter visualization type and a single shared axis (X).
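This extraction logic can be sketched as follows, mirroring the Fig. 7 example; the trace fields are simplified to illustrative `x_column`/`y_column` names rather than the full Plotly schema.

```python
# Sketch of extracting visualization-level design choices from a list of
# Plotly-style traces, following the dual-axis example in Fig. 7.
def extract_design_choices(traces):
    mark_types = {t["type"] for t in traces}
    x_columns = {t["x_column"] for t in traces}
    y_columns = {t["y_column"] for t in traces}
    return {
        # visualization type is defined when all traces share one mark type
        "visualization_type": mark_types.pop() if len(mark_types) == 1 else "mixed",
        # a shared axis means every trace uses the same X (or same Y) column
        "has_shared_axis": len(x_columns) == 1 or len(y_columns) == 1,
    }

traces = [
    {"type": "scatter", "x_column": "Hp", "y_column": "MPG"},
    {"type": "scatter", "x_column": "Hp", "y_column": "Wgt"},
]
print(extract_design_choices(traces))
# → {'visualization_type': 'scatter', 'has_shared_axis': True}
```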

Figure 7: Extracting design choices from a dual-axis scatterplot.
Figure 8: Diagram of data processing and analysis flow, starting from (1) the original Plotly API endpoints, proceeding to (2) the deduplicated dataset-visualization pairs, (3a) the features describing each individual column, pair of columns, and dataset, (3b) the design choices extracted from each visualization, (4) the task-specific models trained on these features, and (5) predicted choices.

4 Methods

We describe our feature processing pipeline, the machine learning models we use, how we train those models, and how we evaluate performance. These are steps 4 and 5 of the workflow in Fig. 8.

4.1 Feature Processing

We converted raw features into a form suitable for modeling with a five-stage pipeline. First, we one-hot encoded categorical features. Second, we clipped numeric values above the 99th percentile or below the 1st percentile to those respective cut-offs. Third, we imputed missing categorical values with the mode of non-missing values, and missing numeric values with the mean of non-missing values. Fourth, we removed the mean of numeric fields and scaled them to unit variance.

Lastly, we randomly removed datasets that were exact duplicates of each other. However, many of the remaining datasets were slight modifications of each other, uploaded by the same user. Therefore, we kept only one randomly selected dataset per user, which also removed bias towards more prolific Plotly users. This aggressive deduplication resulted in a final corpus of 119,815 datasets and 287,416 columns. Using only exact deduplication results in significantly higher within-corpus test accuracies, while a soft threshold-based deduplication results in similar test accuracies.
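The per-feature stages of this pipeline (percentile clipping, mean imputation, standardization) can be sketched as below; the percentile indexing is simplified and the helper names are ours, not the paper's implementation.

```python
# Sketch of the numeric-feature stages of the preprocessing pipeline:
# percentile clipping, mean imputation, and standardization.
import statistics

def clip_to_percentiles(values, lo_q=0.01, hi_q=0.99):
    """Clip non-missing values to approximate 1st/99th percentile cut-offs."""
    present = sorted(v for v in values if v is not None)
    lo = present[int(lo_q * (len(present) - 1))]
    hi = present[int(hi_q * (len(present) - 1))]
    return [None if v is None else min(max(v, lo), hi) for v in values]

def impute_mean(values):
    """Replace missing numeric values with the mean of non-missing values."""
    present = [v for v in values if v is not None]
    mean = statistics.mean(present)
    return [mean if v is None else v for v in values]

def standardize(values):
    """Remove the mean and scale to unit variance."""
    mean, std = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / std for v in values]

raw = [1.0, None, 3.0, 1000.0, 2.0]  # 1000.0 is an outlier; None is missing
processed = standardize(impute_mean(clip_to_percentiles(raw)))
print([round(v, 3) for v in processed])
```

After these stages, every numeric feature is outlier-clipped, complete, and zero-mean with unit variance.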

4.2 Prediction Tasks

Our task is to train models that use the features described in section 3.3 to predict the design choices described in section 3.4. Two visualization-level prediction tasks use dataset-level features to predict visualization-level design choices:

  1. Visualization Type [VT]: 2-, 3-, and 6-class
     Given that all traces are the same type, what type is it?

     | Scatter | Line | Bar | Box | Histogram | Pie |
     |---|---|---|---|---|---|
     | 44829 | 26209 | 16002 | 4981 | 4091 | 3144 |

  2. Has Shared Axis [HSA]: 2-class
     Do the traces in the chart all share one axis (either X or Y)?

     | False | True |
     |---|---|
     | 95723 | 24092 |

The three encoding-level prediction tasks use features about individual columns to predict how they are visually encoded. That is, these prediction tasks consider each column independently, instead of alongside the other columns in the same dataset. This bag-of-columns approach disregards the effect of column order.

  1. Mark Type [MT]: 2-, 3-, and 6-class
     What mark type is used to represent this column?

     | Scatter | Line | Bar | Box | Histogram | Heatmap |
     |---|---|---|---|---|---|
     | 68931 | 64726 | 30023 | 13125 | 5163 | 1032 |

  2. Is Shared X-axis or Y-axis [ISA]: 2-class
     Is this column the only column encoded on its axis?

     | False | True |
     |---|---|
     | 275886 | 11530 |

  3. Is on X-axis or Y-axis [XY]: 2-class
     Is this column encoded on the X-axis or the Y-axis?

     | False | True |
     |---|---|
     | 144364 | 142814 |

For the Visualization Type and Mark Type tasks, the 2-class task predicts line vs. bar, and the 3-class task predicts scatter vs. line vs. bar. Though Plotly supports over twenty mark types, we limited prediction outcomes to the few types that comprise the majority of visualizations within our corpus. This skewed distribution of visualization types is consistent with the findings of [4, 39].

| Model | Features | d | VT C=2 | VT C=3 | VT C=6 | HSA C=2 |
|---|---|---|---|---|---|---|
| NN | D | 15 | 66.3 | 50.4 | 51.3 | 84.1 |
| NN | D+T | 52 | 75.7 | 59.6 | 60.8 | 86.7 |
| NN | D+T+V | 717 | 84.5 | 77.2 | 87.7 | 95.4 |
| NN | All | 841 | 86.0 | 79.4 | 89.4 | 97.3 |
| NB | All | 841 | 63.4 | 49.5 | 46.2 | 72.9 |
| KNN | All | 841 | 76.5 | 59.9 | 53.8 | 81.5 |
| LR | All | 841 | 81.8 | 64.9 | 69.0 | 90.2 |
| RF | All | 841 | 81.2 | 65.1 | 66.6 | 90.4 |
| N (in 1000s) | | | 42.2 | 87.0 | 99.3 | 119 |
(a) Prediction accuracies for two visualization-level tasks.
| Model | Features | d | MT C=2 | MT C=3 | MT C=6 | ISA C=2 | XY C=2 |
|---|---|---|---|---|---|---|---|
| NN | D | 1 | 65.2 | 44.3 | 30.5 | 52.1 | 49.9 |
| NN | D+T | 9 | 68.5 | 46.8 | 35.0 | 70.3 | 57.3 |
| NN | D+T+V | 66 | 79.4 | 59.4 | 76.0 | 95.5 | 67.4 |
| NN | All | 81 | 84.9 | 67.8 | 82.9 | 98.3 | 83.1 |
| NB | All | 81 | 57.6 | 41.1 | 27.4 | 81.2 | 70.0 |
| KNN | All | 81 | 72.4 | 51.9 | 37.8 | 72.0 | 65.6 |
| LR | All | 81 | 73.6 | 52.6 | 43.7 | 84.8 | 79.1 |
| RF | All | 81 | 78.3 | 60.1 | 46.7 | 74.2 | 83.4 |
| N (in 1000s) | | | 94.7 | 163 | 183 | 287 | 287 |
(b) Prediction accuracies for three encoding-level tasks.
Table 2: Design choice prediction accuracies for five models, averaged over 5-fold cross-validation. The standard error of the mean was negligible for all results. Results are reported for a neural network (NN), naive Bayes (NB), K-nearest neighbors (KNN), logistic regression (LR), and random forest (RF). Features are separated into four categories: dimensions (D), types (T), values (V), and names (N). N is the size of the training set before resampling, d is the number of features, and C is the number of outcome classes. HSA = Has Shared Axis, ISA = Is Shared X-axis or Y-axis, and XY = Is on X-axis or Y-axis.

4.3 Neural Network and Baseline Models

Our primary model is a fully-connected feedforward neural network (NN), which consists of non-linear functions connected as nodes in a network. Our network had fully-connected hidden layers with ReLU activation functions.

We chose four simpler models as baselines: naive Bayes (NB), which makes predictions based on conditional probabilities determined by applying Bayes’ theorem while assuming independent features; K-nearest neighbors (KNN), which predicts based on the majority vote of the most similar points; logistic regression (LR), a generalized linear model that predicts the probability of a binary event with a logistic function; and random forests (RF), an ensemble of decision trees that recursively split the input by individual features.

We implemented the NN using PyTorch [43], and the baseline models using scikit-learn [44] with default parameters. Specifically, KNN used 5 neighbors with a Euclidean distance metric; LR used an L1 penalty with scikit-learn’s default regularization strength; and RF had no maximum depth, used the Gini impurity criterion, and considered a random subset of features when looking for a split. Randomized parameter search did not result in a significant performance increase over the results reported in the next section.

4.4 Training and Testing Models

The neural network was trained with the Adam optimizer and mini-batches. The learning rate followed a schedule that reduces the learning rate by a constant factor upon encountering a plateau, defined as a run of epochs in which validation accuracy does not vary beyond a small threshold. Training ended after the third decrease in the learning rate, or after a fixed maximum number of epochs. We found that weight decay and dropout did not significantly improve performance.

For the neural network, we split the data into 60/20/20 train/validation/test sets. That is, we train the NN on 60% of the data to optimize performance on a separate 20% validation set. Then, we evaluate performance at predicting the remaining 20% test set. For the baseline models, which do not require a validation set, we used a 60/20 train/test split.

We oversample the minority classes in the train, validation, and test sets to the size of the majority class, and ensure no overlap between the three sets. We oversample for two reasons. First, because of the imbalanced outcomes, naive classifiers that simply guess the base rates would achieve high accuracies. Second, for ease of interpretation, balanced classes allow us to report standard accuracies, which are comparable across tasks regardless of the number of outcome classes.
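A minimal sketch of oversampling each class to the majority-class size; the random seeding and sample data are illustrative, and sampling within a single split (never across splits) preserves the no-overlap property.

```python
# Sketch: oversample each class within one data split to the size of the
# majority class, so that accuracies are comparable across imbalanced tasks.
import random

def oversample_to_majority(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        # duplicate randomly chosen samples until each class reaches `target`
        resampled = group + [rng.choice(group) for _ in range(target - len(group))]
        out.extend((s, y) for s in resampled)
    return out

data = ["a1", "a2", "a3", "b1"]
labels = ["bar", "bar", "bar", "line"]
balanced = oversample_to_majority(data, labels)
print(sorted(y for _, y in balanced))
# → ['bar', 'bar', 'bar', 'line', 'line', 'line']
```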

We train and test each model five times (5-fold cross-validation), such that each sample in the corpus was included in exactly one test set, and report the average performance across these folds.

In terms of features, we constructed four different feature sets by incrementally adding the Dimensions (D), Types (T), Values (V), and Names (N) categories of features, in that order. We refer to these feature sets as D, D+T, D+T+V, and D+T+V+N=All. The neural network was trained and tested using all four feature sets. The four baseline models only used the full feature set (D+T+V+N=All).

Lastly, we use accuracy (the fraction of correct predictions) instead of other measures of performance, such as the F1 score or AUROC, because it generalizes easily to multi-class cases, has a straightforward interpretation, and because we weigh the outcomes of our tasks equally.

5 Evaluating Prediction Performance

We report performance of each model on the seven prediction tasks in Table 2. The highest achieved mean accuracies for both the neural network and the baseline models are highlighted in bold. The top accuracies are achieved by the neural network. Across the board, each model achieved accuracies above the random guessing baseline of 1/C (e.g. 50% accuracy on the two-class visualization type prediction task). Model performance generally progressed as NB < KNN < LR < RF < NN. That said, the performance of both RF and LR is not significantly lower than that of the NN in most cases. Simpler classifiers may be desirable, depending on the need for optimized accuracy, and the trade-off with other factors such as interpretability and training cost.

(a) Marginal accuracies by feature set for visualization-level prediction tasks.
(b) Marginal accuracies by feature set for encoding-level prediction tasks.
Figure 9: Marginal contribution to NN accuracy by feature set, for each task. Baseline accuracies are shown as solid and dashed lines.
| # | Visualization Type (C=2) | Visualization Type (C=3) | Visualization Type (C=6) | Has Shared Axis (C=2) |
|---|---|---|---|---|
| 1 | % of Values are Mode (std) | Entropy (std) | Is Monotonic (%) | Number of Columns |
| 2 | Min Value Length (max) | Entropy (var) | Number of Columns | Is Monotonic (%) |
| 3 | Entropy (var) | String Type (%) | Sortedness (max) | Field Name Length (AAD) |
| 4 | Entropy (std) | Mean Value Length (var) | Y In Name (#) | # Words In Name (NR) |
| 5 | String Type (has) | Min Value Length (var) | Y In Name (%) | X In Name (#) |
| 6 | Median Value Length (max) | String Type (has) | # Shared Unique Vals (std) | # Words In Name (range) |
| 7 | Mean Value Length (AAD) | Percentage Of Mode (std) | # Shared Values (MAD) | Edit Distance (mean) |
| 8 | Entropy (mean) | Median Value Length (max) | Entropy (std) | Edit Distance (max) |
| 9 | Entropy (max) | Entropy (mean) | Entropy (range) | Length (std) |
| 10 | Min Value Length (AAD) | Length (mean) | % of Values are Mode (std) | Edit Distance (NR) |
(a) Feature importances for two visualization-level prediction tasks. The second column describes how each feature was aggregated, using the abbreviations in Table 3(c).
# Mark Type (C=2) Mark Type (C=3) Mark Type (C=6) Is Shared Axis (C=2) Is X or Y Axis (C=2)
1 Entropy Length Length # Words In Name Y In Name
2 Length Entropy Field Name Length Unique Percent X In Name
3 Sortedness Field Name Length Entropy Field Name Length Field Name Length
4 % Outliers (1.5IQR) Sortedness Sortedness Is Sorted Sortedness
5 Field Name Length Lin Space Seq Coeff Lin Space Seq Coeff Sortedness Length
6 Lin Space Seq Coeff % Outliers (1.5IQR) Kurtosis X In Name Entropy
7 % Outliers (3IQR) Gini Gini Y In Name Lin Space Seq Coeff
8 Norm. Mean Skewness Normality Statistic Lin Space Seq Coeff Kurtosis
9 Skewness Norm. Range Norm Range Min # Uppercase Chars
10 Norm. Range Norm. Mean Skewness Length Skewness
(b) Feature importances for four encoding-level prediction tasks.
Table 3: Top-10 feature importances for chart- and encoding-level prediction tasks. Feature importance is determined by mean decrease impurity for the top performing random forest models. Colors represent different feature groupings: dimensions, type, statistical [Q], statistical [C], sequence, scale of variation, outlier, unique, name, and pairwise-relationship.

Because the four feature sets form a sequence of supersets (D ⊂ D+T ⊂ D+T+V ⊂ D+T+V+N), we consider the accuracy of each feature set above and beyond the previous. For instance, the increase in accuracy of a model trained on D+T+V over a model trained on D+T is a measure of the contribution of value-based (V) features. These marginal accuracies are visualized alongside baseline model accuracies in Fig. 9(a).

We note that the value-based features (e.g. the statistical properties of a column) contribute more to performance than the type-based features (e.g. whether a column is categorical), potentially because there are many more value-based features than type-based features. Alternatively, because many value-based features depend on column type, the two feature sets may carry overlapping information.

6 Interpreting Feature Importances

We calculate feature importances to interpret our models, justify our feature extraction pipeline, and relate our features to prior literature. Feature importances can also be used to inform visualization design guidelines, derive rules for rule-based systems, and perform feature selection for more parsimonious models.

Here, we determine feature importances for our top performing random forest models using the standard mean decrease impurity (MDI) measure [33, 8]. The top ten features by MDI are shown in Table 3. We choose this method for its interpretability and its stability across runs. The reported features are generally consistent with those calculated through filter-based methods such as mutual information, or wrapper-based methods like recursive feature elimination.
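MDI importances like those in Table 3 can be read directly from a trained random forest. Below is a minimal scikit-learn sketch on synthetic data; the feature names are hypothetical stand-ins echoing the kinds of features in Table 3, not the actual extraction pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for dataset-level features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["entropy_mean", "length", "sortedness",
                 "string_type_has", "y_in_name", "num_columns"]

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# feature_importances_ is the mean decrease in impurity (MDI),
# normalized so that importances sum to 1 across all features.
ranked = sorted(zip(feature_names, rf.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

Stability across runs can be checked by refitting with different random seeds and comparing the resulting rankings.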

We first note the importance of dimensionality features, like the length of columns (i.e. the number of rows) or the number of columns. For example, the length of a column is the second most important feature for predicting whether that column is visualized as a line or a bar trace. The dependence of mark type on the number of visual elements is consistent with heuristics like “keep the total number of bars under 12” for showing individual differences in a bar chart [63], and not creating pie charts with “more than five to seven” slices [31]. The dependence on the number of columns is related to the heuristics described by Bertin [5] and encoded in Show Me [35].

Features related to column type are consistently important for each prediction task. For example, whether a dataset has a string column is the fifth most important feature for determining whether that dataset is visualized as a bar or a line chart. The dependence of visualization type choice on column data type is consistent with the type-dependency of the perceptual properties of visual encodings described by Mackinlay [36] and by Cleveland and McGill [15].

Statistical features (quantitative and categorical) such as Gini, entropy, skewness and kurtosis are important across the board. The presence of these higher-order moments is striking because lower-order moments such as mean and variance are low in importance. The importance of these moments highlights the potential value of capturing high-level characteristics of distributional shape. These observations support the use of statistical properties in visualization recommendation, as in [61, 74], and also the use of higher-order properties such as skewness, kurtosis, and entropy in systems such as Foresight [16], VizDeck [45], and Draco [38].

Measures of orderedness, specifically sortedness and monotonicity, are important for many tasks as well. Sortedness is defined as the element-wise correlation between the sorted and unsorted values of a column x, that is, corr(x, sort(x)), which lies in the range [-1, 1]. Monotonicity is determined by strictly increasing or decreasing values in x. The importance of these features could be due to pre-sorting of a dataset by the user, which may reveal which column is considered to be the independent or explanatory column, typically visualized along the X-axis. While intuitive, we have not seen orderedness factor into existing systems.
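Both orderedness measures are cheap to compute per column. A sketch of one plausible implementation (the handling of constant columns is our own convention, not from the paper):

```python
import numpy as np

def sortedness(column):
    """Element-wise correlation between a column and its sorted self:
    1 for ascending data, -1 for descending, near 0 for shuffled."""
    x = np.asarray(column, dtype=float)
    if x.std() == 0:  # constant column: correlation is undefined
        return 1.0    # our convention: treat as trivially sorted
    return float(np.corrcoef(x, np.sort(x))[0, 1])

def is_monotonic(column):
    """True if values are strictly increasing or strictly decreasing."""
    d = np.diff(np.asarray(column, dtype=float))
    return bool(np.all(d > 0) or np.all(d < 0))

print(sortedness([1, 2, 3, 4]))    # 1.0
print(sortedness([4, 3, 2, 1]))    # -1.0
print(is_monotonic([1, 2, 2, 3]))  # False: repeated value breaks strictness
```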

We also note the importance of the linear or logarithmic space sequence coefficients, which are heuristic-based features that roughly capture the scale of variation. Specifically, each coefficient is the coefficient of variation std(Y)/mean(Y) of a derived sequence Y, where Y consists of the successive differences of a column’s values for the linear space sequence coefficient, and of the successive ratios for the logarithmic space sequence coefficient. A column “is” linear or logarithmic if its coefficient falls below a small threshold. Both coefficients are important in all four selected encoding-level prediction tasks. We have not seen similar measures of scale used in prior systems.
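Under this reading, both coefficients can be sketched in a few lines; the cutoff below which a column “is” linear or logarithmic is treated as a tunable parameter, since we do not reproduce the paper’s exact threshold:

```python
import numpy as np

def space_sequence_coeff(column, log=False):
    """Coefficient of variation of successive differences (linear) or
    successive ratios (log). Values near zero indicate the column is
    evenly spaced on that scale."""
    x = np.asarray(column, dtype=float)
    y = x[1:] / x[:-1] if log else np.diff(x)
    mean = y.mean()
    return float(abs(y.std() / mean)) if mean != 0 else float("inf")

print(space_sequence_coeff([2, 4, 6, 8]))            # 0.0: evenly spaced
print(space_sequence_coeff([1, 10, 100], log=True))  # 0.0: log-spaced
```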

In sum, the contribution of these features to determining an outcome can be intuitive. In this way, these feature importances are perhaps unremarkable. However, the ability to quantitatively interpret these feature importances could serve as validation for visualization heuristics. Furthermore, the diversity of features in this list suggests that rule-based recommender systems, many of which incorporate only type information (e.g. [35, 77]), should expand the set of considered features. This is computationally feasible because most features extracted by our system can be determined by inexpensive linear operations. That said, it would still be difficult in rule-based systems to capture the non-linear dependencies of task outcomes on features, and the complex relationships between features.

7 Benchmarking with Crowdsourced Effectiveness

We expand our definition of effectiveness from a binary to a continuous function that can be determined through crowdsourced consensus. Then, we describe our experimental procedure for gathering visualization type evaluations from Mechanical Turk workers. We compare different predictors at predicting these evaluations using a consensus-based effectiveness score.

7.1 Modeling and Measuring Effectiveness

As discussed in section 1, we model data visualization as a process of making a set of design choices that maximize an effectiveness criterion Eff that depends on the dataset, task, and context. In section 5, we predict these design choices by training a machine learning model on a corpus of dataset-design choice pairs. But because each dataset was visualized only once by each user, we consider the user’s choices to be effective and every other choice ineffective. That is, we consider effectiveness to be binary.

But prior research suggests that effectiveness is continuous. For example, Saket et al. use time and accuracy preference to measure task performance [55], Borkin et al. use a normalized memorability score [6], and Cleveland and McGill use absolute error rates to measure performance on elementary perceptual tasks [15]. Discussions by visualization experts [30, 26] also suggest that multiple visualizations can be equally effective at displaying the same data.

Our effectiveness metric should be continuous and reflect the ambiguous nature of data visualization, which leads to multiple choices receiving a non-zero or even maximal score for the same dataset. This is in agreement with measures of performance for other machine learning tasks, such as the BLEU score in language translation [42] and the ROUGE metric in text summarization [14], where multiple results can be partly correct.

To estimate this effectiveness function, we need to observe each dataset visualized by multiple potential users. Assume that a design choice C can take on multiple discrete values {c_1, ..., c_K}. For instance, we consider the choice of Visualization Type, which can take on the values {bar, line, scatter}. Using n_j to denote the number of times c_j was chosen, we compute the probability of making choice c_j as p_j = n_j / Σ_k n_k, and use P to denote the collection of probabilities across all choices. We normalize the probability of a choice by the maximum probability to define an effectiveness score:

Eff(c_j) = p_j / max_k p_k    (5)

Now, if all users make the same choice c_j, only c_j will get the maximum score while every other choice will receive a zero score. However, if two choices are chosen with equal probability and are thus both equally effective, the normalization will ensure that both receive a maximum score.
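Given the list of crowd votes for a single dataset, this normalized effectiveness score reduces to a few lines (the function name is ours):

```python
from collections import Counter

def effectiveness_scores(votes):
    """Each choice's vote share divided by the maximum share, so the
    consensus choice(s) score 1.0; unchosen choices are simply absent."""
    counts = Counter(votes)
    total = sum(counts.values())
    shares = {choice: n / total for choice, n in counts.items()}
    max_share = max(shares.values())
    return {choice: s / max_share for choice, s in shares.items()}

# A 5-5 split: both types are scored as equally (maximally) effective.
print(effectiveness_scores(["bar"] * 5 + ["line"] * 5))
# An 8-2 split: the minority choice receives a partial score.
print(effectiveness_scores(["bar"] * 8 + ["line"] * 2))
```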

Developing this crowdsourced score that reflects the ambiguous nature of making data visualization choices serves three main purposes. First, it lets us establish uncertainty around our models, in this case by bootstrapping. Second, it lets us test whether models trained on the Plotly corpus can generalize, and whether Plotly users actually make optimal choices. Lastly, it lets us benchmark against the performance of the Plotly users as well as other predictors.

7.2 Data Preparation

To select the datasets in our benchmarking test set, we first randomly surfaced a set of candidate datasets that were visualized as either a bar, line, or scatter chart. Then, we removed obviously incomplete visualizations (e.g. blank visualizations). Finally, we removed datasets that could not be visually encoded in all three visualization types without losing information. From the remaining set of candidates, we randomly selected 33 bar charts, 33 line charts, and 33 scatter charts.

As we cleaned the data, we adhered to four principles: modify the user’s selections as little as possible, apply changes consistently to every dataset, rely on Plotly defaults, and don’t make any change that is not obvious. For each of these datasets, we modified the raw column names to remove Plotly-specific biases (e.g. removing “,x” or “,y” that was automatically appended to column names). We also wanted to make the user evaluation experience as close to the original chart creation experience as possible. Therefore, we replaced machine-generated column names when the proper names were obvious from the user’s axis labels or legend (e.g. the first column is unlabeled but visualized as Sepal Width on the X-axis). Because of these modifications, both the Plotly users and the Mechanical Turkers had access to more information than our model.

We visualized each of these 99 datasets as a bar, line, and scatter chart. We created these visualizations by forking the original Plotly visualization then modifying Mark Types using Plotly Chart Studio. We ensured that color choices and axis ranges were consistent between all visualization types. The rest of the layout was held constant to the user’s original specification, or the defaults provided by Plotly.

7.3 Crowdsourced Evaluation Procedure

Figure 10:

Experiment flow. The original user-generated visualizations are highlighted in blue, while we generated the visualizations of the remaining types. After crowdsourced evaluation, we have a set of votes for the best visualization type of that dataset. We calculate confidence intervals for model scores through bootstrapping.

We recruited participants through Amazon Mechanical Turk. To participate in the experiment, workers had to hold a U.S. bachelor’s degree, be at least 18 years of age, and be completing the survey on a phone. Workers also had to successfully answer three prescreen questions: 1) Have you ever seen a data visualization? [Yes or No], 2) Does the x-axis of a two-dimensional plot run horizontally or vertically? [Horizontally, Vertically, Both, Neither], 3) Which of the following visualizations is a bar chart? [Picture of Bar Chart, Picture of Line Chart, Picture of Scatter Chart]. 150 workers successfully completed the two-class experiment, while 150 separate workers completed the three-class experiment.

After successfully completing the pre-screen, workers evaluated the visualization type of 30 randomly selected datasets from our test set. Each evaluation had two stages. First, the worker was presented the first 10 rows of the dataset and told to “Please take a moment to examine the following dataset. (Showing first 10 out of X rows).” Then, after five seconds, the “next” button appeared. At the next stage, the worker was asked “Which visualization best represents this dataset? (Showing first 10 out of X rows).” At this stage, the worker was shown both the dataset and the corresponding bar, line, and scatter charts representing it. A worker could submit this question after a minimum of ten seconds. The evaluations were split into two groups of 15 by an attention check question. Therefore, each of the 66 datasets was evaluated times on average, while each of the ground truth datasets was evaluated times on average.

7.4 Benchmarking Procedure

We use three types of predictors in our benchmark: human, model, and baseline. The two human predictors are the Plotly predictor, which is the visualization type of the original plot created by the Plotly user, and the MTurk predictor, which is the choice of a single random Mechanical Turk participant. When evaluating the performance of an individual Mechanical Turker, that individual’s vote was excluded from the set of votes used in the mode estimation.

The two learning-based predictors are DeepEye and Data2Vis. In both cases, we tried to make choices that maximize their CARS, within reason. We uploaded datasets to DeepEye as comma-separated values (CSV) files, and to Data2Vis as JSON objects. Unlike VizML and Data2Vis, DeepEye supports pie, bar, and scatter visualization types. We marked both pie and bar recommendations as bar predictions, and scatter recommendations as line predictions in the two-type case. For both tools, we modified the data within reason to maximize the number of valid results. For the remaining errors (4 for Data2Vis and 14 for DeepEye), and cases without returned results (12 for DeepEye), we assigned a random chart prediction.

We evaluate the performance of a predictor using a score that assigns points to estimators based on the normalized effectiveness of a predicted value, from Equation 5. This Consensus-Adjusted Recommendation Score (CARS) of a predictor is defined as:


CARS(predictor) = (1/N) Σ_d [ G_d(ĉ_d) / max_c G_d(c) ] × 100%    (6)

where N is the number of datasets (66 for two-class and 99 for three-class), ĉ_d is the predicted visualization type for dataset d, and G_d returns the fraction of Mechanical Turker votes for a given visualization type. Note that the minimum CARS is greater than 0%. We establish 95% confidence intervals around these scores by comparing against bootstrap samples of the votes, which can be thought of as synthetic votes drawn from the observed probability distribution.
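A sketch of the CARS computation and a bootstrap interval, assuming per-dataset vote lists (the function names and the per-dataset resampling scheme are our own assumptions; the paper does not spell out its bootstrap procedure in this detail):

```python
import random
from collections import Counter

def cars(predictions, votes_per_dataset):
    """Mean over datasets of the predicted type's vote count,
    normalized by the most-voted type's count, as a percentage."""
    total = 0.0
    for pred, votes in zip(predictions, votes_per_dataset):
        counts = Counter(votes)
        total += counts.get(pred, 0) / max(counts.values())
    return 100.0 * total / len(predictions)

def bootstrap_ci(predictions, votes_per_dataset, n_boot=1000, seed=0):
    """95% interval from resampling each dataset's votes with replacement."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        resampled = [rng.choices(v, k=len(v)) for v in votes_per_dataset]
        scores.append(cars(predictions, resampled))
    scores.sort()
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]

votes = [["bar"] * 9 + ["line"], ["line"] * 7 + ["bar"] * 3]
print(cars(["bar", "line"], votes))  # 100.0: both predictions match consensus
print(bootstrap_ci(["bar", "line"], votes, n_boot=200))
```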

7.5 Benchmarking Results

We first measure the degree of consensus using the Gini coefficient, the distribution of which is shown in Fig. 11. If a strong consensus was reached for all visualizations, then the Gini distributions would be strongly skewed towards the maximum, which is 1/2 for the two-class case and 2/3 for the three-class case. Conversely, a lower Gini implies a weaker consensus, indicating an ambiguous ideal visualization type. The Gini distributions are not skewed towards either extreme, which supports the use of a soft scoring metric such as CARS over a hard measure like accuracy.
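The Gini coefficient of a vote distribution over C types ranges from 0 (even split) to (C−1)/C (total consensus). A minimal sketch:

```python
def gini(shares):
    """Gini coefficient of a vote distribution (shares sum to ~1).
    0 = perfectly even split; (C-1)/C = all votes on one of C types."""
    x = sorted(shares)
    n = len(x)
    # Equivalent to mean absolute difference / (2 * mean).
    weighted = sum((2 * i - n + 1) * s for i, s in enumerate(x))
    return weighted / (n * sum(x))

print(gini([0.5, 0.5]))        # 0.0: no consensus between two types
print(gini([1.0, 0.0]))        # 0.5: full two-class consensus
print(gini([1.0, 0.0, 0.0]))   # ~0.667: full three-class consensus
```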

Figure 11: Distribution of Gini coefficients
(a) Two-type (bar vs. line) visualization type CARS.
(b) Three-type (bar vs. line vs. scatter) visualization type CARS.
Figure 12: Consensus-Adjusted Recommendation Score of three ML-based and two human predictors when predicting consensus visualization type. Error bars show 95% bootstrapped confidence intervals, with bootstraps. The mean minimum achievable score is the lower dashed line, while the highest achieved CARS is the upper dotted line.

The Consensus-Adjusted Recommendation Scores for each model and task are visualized as a bar chart in Fig. 12. We first compare the CARS of VizML against that of the Mechanical Turkers and the Plotly users for the two-class case, as shown in Fig. 12(a). It is surprising that VizML performs comparably to the original Plotly users, who possess domain knowledge and invested time into visualizing their own data. VizML significantly outperforms Data2Vis and DeepEye. While neither Data2Vis nor DeepEye was trained to perform visualization type prediction, it is promising for ML-based recommender systems that both perform slightly better than the random classifier. The absolute minimum score for this task is marked by the lower dashed line in Fig. 12(a).

The same pattern holds for the three-class case shown in Fig. 12(b), in which the CARS of VizML is slightly higher than, but within error bars of, that of the Mechanical Turkers and the Plotly users. Data2Vis and DeepEye outperform the random classifier by a larger margin, but still within error. The minimum score is again marked by the lower dashed line.

8 Discussion

In this paper, we introduce VizML, a machine learning approach to visualization recommendation using a large corpus of datasets and corresponding visualizations. We identify five key prediction tasks and show that neural network classifiers attain high test accuracies on these tasks, relative to both random guessing and simpler classifiers. We also benchmark with a test set established through crowdsourced consensus, and show that the performance of neural networks is comparable to that of individual humans.

We acknowledge the limitations of this corpus and our approach. First, despite aggressive deduplication, our model is certainly biased towards the Plotly dataset. This bias could manifest on the user level (Plotly draws certain types of analysts), the system level (Plotly encourages or discourages certain types of plots, either by interface design or defaults), or the dataset level (Plotly is appropriate only for smaller datasets). We discuss approaches to improving the generalizability of VizML in the next section.

Second, neither the Plotly user nor the Mechanical Turker is an expert in data visualization. However, if we consider laypeople the target audience of visualizations, the consensus opinion of crowdsourced agents may be a good measure of visualization quality. Third, we acknowledge that this paper focused only on a subset of the tasks usually considered in a visualization recommendation pipeline. An ideal user-facing tool would include functionality that supports all tasks in the pipeline.

Yet, the high within-corpus test accuracies, and performance on the consensus dataset comparable to that of humans, lead us to claim that the structural and statistical properties of datasets influence how they are visualized. Furthermore, machine learning, by virtue of the ability to use or learn complex features for many datasets, can take advantage of these properties to augment the data visualization process.

Machine learning tasks like image annotation or medical diagnosis are often objective, in that there exists a clear human-annotated ground truth. Other tasks are subjective, like language translation or text summarization tasks, which are benchmarked by human evaluation or against human-generated results. The question remains: is data visualization an objective or subjective process? Because of the high accuracies, we claim that there are definite regularities in how humans choose to visualize data that can be captured and leveraged by machine learning models. However, because crowd-sourced agents themselves do not agree with the consensus all of the time, there is an element of subjectivity in making visualization design choices.

9 Future Research Directions

To close, we discuss promising directions towards improving the data, methods, and tasks of machine learning-based recommender systems.

Public Training and Benchmarking Corpuses

Despite the increasing prevalence of recommendation features within visualization tools, research progress in visualization recommendation is impeded by the lack of a standard benchmark. Without a benchmark, it is difficult to bootstrap a recommender system or compare different approaches to this problem. Just as large repositories like ImageNet [17] and CIFAR-10 played a significant role in shaping computer vision research and serve as useful benchmarks, the same should exist for visualization recommendation.

Diverse Data Sources

By using Plotly data, we constrain ourselves to the final step of data visualization by assuming that the datasets are clean, and that a visualization encodes all columns of data. Yet, “upstream” tasks like feature selection and data transformation are some of the most time-consuming tasks in data analysis. Tools like Tableau and Excel, which support selective visualization of columns and data transformation, could potentially provide the data needed to train models to augment these tasks.

Transfer Learning

One explanation for the lack of prior ML-based visualization recommendation systems is the lack of available training data. Though our approach of using public data increases the size of the training set by an order of magnitude relative to that used by other systems, the monotonically increasing (unsaturated) learning curves of our models suggest that there is still room for more data. A common approach in other machine learning applications is to employ transfer learning [41], which uses models trained on one task to scaffold a model on another task. For example, just as many neural networks in computer vision are pretrained on ImageNet, visualization recommendation models could be pretrained on the Plotly corpus and then transferred to domains with smaller training corpora.

Representation Learning

An approach trained on features extracted from raw data lends itself to straightforward interpretation and the use of standard machine learning models. But a representation learning approach trained on the raw data, instead of extracted features, has two advantages. First, it bypasses the laborious process of feature engineering. Second, via the universal approximation theorem for neural networks, it would be able to derive all hand-engineered features, and more, if important for predicting the outcome.

Unsupervised Learning

Another approach to end-to-end visualization recommendation is to use semantic measures between datasets in a “dataset space” with a traditional recommendation system (e.g. model-based collaborative filtering). Initial explorations of unsupervised clustering techniques like t-distributed stochastic neighbor embedding (t-SNE) [66] and UMAP suggest non-trivial structure in the dataset space.
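As a sketch of what such an exploration might look like, assuming each dataset is represented by its extracted feature vector (the synthetic data and reduced dimensionality here are purely illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for the "dataset space": each row is one
# dataset's feature vector (VizML extracts 841 features; we use 10).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 1.0, size=(20, 10)),  # one cluster of datasets
    rng.normal(5.0, 1.0, size=(20, 10)),  # a second, distant cluster
])

# Project to 2D; visible clusters suggest non-trivial structure.
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(features)
print(embedding.shape)
```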

Addressing the Multiple Comparisons Problem

Analysts continually using visualization to both explore and confirm hypotheses are at risk of arriving at spurious insights, via the multiple comparisons problem (MCP) [78]. But if visual analytics tools are fishing rods for spurious insights, then visualizations recommender systems are deep ocean bottom trawlers. The MCP is exacerbated by opaque ML-based recommender systems, in which the number of implicit comparisons is difficult to track.

Integrating Prediction Tasks into Pipeline Model

The “holy grail” of visualization recommendation remains an end-to-end model which accepts a dataset as input and produces visualizations as output, which can then be evaluated in a user-facing system. An end-to-end model based on our approach of recommending design choices would combine the outcomes of each prediction task into a ranked list of recommendations. However, the predicted outcomes are sometimes inconsistent. The simplest approach is combining outcomes with heuristics. Two other approaches are training a multi-task learning model that outputs all design choices, or developing a pipeline model that predicts outcomes in sequence.

Acknowledgements.
The authors thank Owais Khan, Çaǧatay Demiralp, Sharon Zhang, Diana Orghian, Madelon Hulsebos, Laura Pang, David Alvarez-Melis, and Tommi Jaakkola for their feedback. We also thank Alex Johnson and Plotly for making the Community Feed data available. This work was supported in part by the MIT Media Lab consortium.

Appendix A Features and Aggregations

Dimensions (1)
Length (1)
Number of values
Types (8)
General (3)
Categorical (C), quantitative (Q), temporal (T)
Specific (5) String, boolean, integer, decimal, datetime
Values (58)
Statistical [Q, T]
(16)
Mean, median, range (raw/normalized by max), variance, standard deviation, coefficient of variance, minimum, maximum, (25th/75th) percentile, median absolute deviation, average absolute deviation, quantitative coefficient of dispersion
Distribution [Q]
(14)
Entropy, Gini, skewness, kurtosis, moments (5-10), normality (statistic, p-value), is normal at (p < 0.05, p < 0.01)
Outliers (8)
(Has/%) outliers at (1.5 IQR, 3 IQR, 99%ile, 3σ)
Statistical [C] (7)
Entropy, (mean/median) value length, (min, std,
max) length of values, % of mode
Sequence (7)
Is sorted, is monotonic, sortedness, (linear/log)
space sequence coefficient, is (linear/log) space
Unique (3) (Is/#/%) unique
Missing (3) (Has/#/%) missing values
Names (14)
Properties (4)
Name length, # words, # uppercase characters,
starts with uppercase letter
Value (10)
(“x”, “y”, “id”, “time”, digit, whitespace, “$”,
“€”, “£”, “¥”) in name
(a) 81 single-column features describing the dimensions, types, values, and names of individual columns.
Values (25)
[Q-Q] (8)
Correlation (value, , ),
Kolmogorov-Smirnov (value, , ),
(has, %) overlapping range
[C-C] (6)
(value, , ),
nestedness (value, , )
[C-Q] (3)
One-Way ANOVA (value, , )
Shared values (8)
is identical, (has/#/%) shared values, unique values
are identical, (has/#/%) shared unique values
Names (5)
Character (2)
Edit distance (raw/normalized)
Word (3)
(Has, #, %) shared words
(b) 30 pairwise-column features describing the relationship between values and names of pairs of columns.
Categorical (5)
Number (#), percent (%), has, only one (#=1), all
Quantitative (10)
Mean, variance, standard deviation, coefficient
of variance (CV), min, max, range, normalized
range (NR), average absolute deviation (AAD)
median absolute deviation (MAD)
Special (1)
Entropy of data types
(c) 16 Aggregation functions used to aggregate single- and pairwise-column features into 841 dataset-level features.
Table 4: Features and aggregation functions.

References

  • [1] C. C. Aggarwal. Recommender Systems: The Textbook. Springer Publishing Company, Incorporated, 1st edition, 2016.
  • [2] C. Ahlberg. Spotfire: An Information Exploration Environment. SIGMOD Rec., 25(4):25–29, Dec. 1996.
  • [3] R. Amar, J. Eagan, and J. Stasko. Low-Level Components of Analytic Activity in Information Visualization. In Proceedings of the Proceedings of the 2005 IEEE Symposium on Information Visualization, INFOVIS ’05, pages 15–, Washington, DC, USA, 2005. IEEE Computer Society.
  • [4] L. Battle, P. Duan, Z. Miranda, D. Mukusheva, R. Chang, and M. Stonebraker. Beagle: Automated Extraction and Interpretation of Visualizations from the Web. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 594:1–594:8, New York, NY, USA, 2018. ACM.
  • [5] J. Bertin. Semiology of Graphics. University of Wisconsin Press, 1983.
  • [6] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister. What Makes a Visualization Memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12):2306–2315, Dec 2013.
  • [7] M. Bostock, V. Ogievetsky, and J. Heer. D3 Data-Driven Documents. IEEE Transactions on Visualization and Computer Graphics, 17(12):2301–2309, Dec. 2011.
  • [8] L. Breiman, J. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman and Hall/CRC, 1984.
  • [9] E. Brynjolfsson and K. McElheran. The Rapid Adoption of Data-Driven Decision-Making. American Economic Review, 106(5):133–39, May 2016.
  • [10] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to Rank Using Gradient Descent. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 89–96, New York, NY, USA, 2005. ACM.
  • [11] S. K. Card, J. D. Mackinlay, and B. Shneiderman, editors. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999.
  • [12] S. M. Casner. Task-analytic Approach to the Automated Design of Graphic Presentations. ACM Trans. Graph., 10(2):111–151, Apr. 1991.
  • [13] N. Cawthon and A. V. Moere. The Effect of Aesthetic on the Usability of Data Visualization. In Information Visualization, 2007. IV ’07. 11th International Conference, pages 637–648, July 2007.
  • [14] C.-Y. Lin. ROUGE: A Package for Automatic Evaluation of Summaries. pages 25–26, 2004.
  • [15] W. S. Cleveland and R. McGill. Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association, 79(387):531–554, 1984.
  • [16] Ç. Demiralp, P. J. Haas, S. Parthasarathy, and T. Pedapati. Foresight: Rapid Data Exploration Through Guideposts. CoRR, abs/1709.10513, 2017.
  • [17] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
  • [18] V. Dibia and Ç. Demiralp. Data2Vis: Automatic Generation of Data Visualizations Using Sequence to Sequence Recurrent Neural Networks. CoRR, abs/1804.03126, 2018.
  • [19] H. Ehsan, M. A. Sharaf, and P. K. Chrysanthis. MuVE: Efficient Multi-Objective View Recommendation for Visual Data Exploration. 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pages 731–742, 2016.
  • [20] S. Few. Data Visualization Effectiveness Profile. https://www.perceptualedge.com/articles/visual_business_intelligence/data_visualization_effectiveness_profile.pdf, 2017.
  • [21] Google. Explore in Google Sheets. https://www.youtube.com/watch?v=9TiXR5wwqPs, 2015.
  • [22] F. Hayes-Roth. Rule-based Systems. Commun. ACM, 28(9):921–932, Sept. 1985.
  • [23] J. Heer, N. Kong, and M. Agrawala. Sizing the Horizon: The Effects of Chart Size and Layering on the Graphical Perception of Time Series Visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, pages 1303–1312, New York, NY, USA, 2009. ACM.
  • [24] K. Hu, D. Orghian, and C. Hidalgo. DIVE: A Mixed-Initiative System Supporting Integrated Data Exploration Workflows. In ACM SIGMOD Workshop on Human-in-the-Loop Data Analytics (HILDA). ACM, 2018.
  • [25] E. M. Jonathan Meddes. Improving visualization by capturing domain knowledge. volume 3960, pages 3960 – 3960 – 10, 2000.
  • [26] B. Jones. Data Dialogues: To Optimize or to Satisfice When Visualizing Data? https://www.tableau.com/about/blog/2016/1/data-dialogues-optimize-or-satisfice-data-visualization-48685, 2016.
  • [27] S. Kandel, A. Paepcke, J. M. Hellerstein, and J. Heer. Enterprise Data Analysis and Visualization: An Interview Study. IEEE Transactions on Visualization and Computer Graphics, 18(12):2917–2926, Dec. 2012.
  • [28] H. Kennedy, R. L. Hill, W. Allen, and A. Kirk. Engaging with (big) data visualizations: Factors that affect engagement and resulting new definitions of effectiveness. First Monday, 21, 2016.
  • [29] Y. Kim and J. Heer. Assessing Effects of Task and Data Distribution on the Effectiveness of Visual Encodings. Computer Graphics Forum (Proc. EuroVis), 2018.
  • [30] C. N. Knaflic. Is there a single right answer? http://www.storytellingwithdata.com/blog/2016/1/12/is-there-a-single-right-answer, 2016.
  • [31] R. Kosara. Understanding Pie Charts. https://eagereyes.org/techniques/pie-charts, 2010.
  • [32] Y. Liu and J. Heer. Somewhere Over the Rainbow: An Empirical Assessment of Quantitative Colormaps. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 598:1–598:12, New York, NY, USA, 2018. ACM.
  • [33] G. Louppe, L. Wehenkel, A. Sutera, and P. Geurts. Understanding Variable Importances in Forests of Randomized Trees. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS’13, pages 431–439, USA, 2013. Curran Associates Inc.
  • [34] Y. Luo, X. Qin, N. Tang, and G. Li. DeepEye: Towards Automatic Data Visualization. The 34th IEEE International Conference on Data Engineering (ICDE), 2018.
  • [35] J. Mackinlay, P. Hanrahan, and C. Stolte. Show Me: Automatic Presentation for Visual Analysis. IEEE Transactions on Visualization and Computer Graphics, 13(6):1137–1144, Nov. 2007.
  • [36] J. D. Mackinlay. Automating the Design of Graphical Presentations of Relational Information. ACM Trans. Graphics, 5(2):110–141, 1986.
  • [37] P. Millais, S. L. Jones, and R. Kelly. Exploring Data in Virtual Reality: Comparisons with 2D Data Visualizations. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA ’18, pages LBW007:1–LBW007:6, New York, NY, USA, 2018. ACM.
  • [38] D. Moritz, C. Wang, G. L. Nelson, H. Lin, A. M. Smith, B. Howe, and J. Heer. Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2018.
  • [39] K. Morton, M. Balazinska, D. Grossman, R. Kosara, and J. Mackinlay. Public data and visualizations: How are many eyes and tableau public used for collaborative analytics? SIGMOD Record, 43(2):17–22, June 2014.
  • [40] N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari. Learning with Noisy Labels. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS’13, pages 1196–1204, USA, 2013. Curran Associates Inc.
  • [41] S. J. Pan and Q. Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, Oct 2010.
  • [42] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics.
  • [43] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
  • [44] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, Nov. 2011.
  • [45] D. B. Perry, B. Howe, A. M. Key, and C. Aragon. VizDeck: Streamlining exploratory visual analytics of scientific data. In iConference, 2013.
  • [46] Plotly. Plotly. https://plot.ly, 2018.
  • [47] Plotly. Plot.ly Chart Studio. https://plot.ly/online-chart-maker/, 2018.
  • [48] Plotly. Plotly Community Feed. https://plot.ly/feed, 2018.
  • [49] Plotly. Plotly for Python. https://plot.ly/d3-js-for-python-and-pandas-charts/, 2018.
  • [50] Plotly. Plotly REST API. https://api.plot.ly/v2, 2018.
  • [51] Plotly. Plotly.js Open-Source Announcement. https://plot.ly/javascript/open-source-announcement, 2018.
  • [52] E. Ramos and D. Donoho. ASA Data Exposition Dataset. http://stat-computing.org/dataexpo/1983.html, 1983.
  • [53] K. Reda, P. Nalawade, and K. Ansah-Koi. Graphical Perception of Continuous Quantitative Maps: The Effects of Spatial Frequency and Colormap Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 272:1–272:12, New York, NY, USA, 2018. ACM.
  • [54] S. F. Roth, J. Kolojejchick, J. Mattis, and J. Goldstein. Interactive Graphic Design Using Automatic Presentation Knowledge. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’94, pages 112–117, New York, NY, USA, 1994. ACM.
  • [55] B. Saket, A. Endert, and C. Demiralp. Task-Based Effectiveness of Basic Visualizations. IEEE Transactions on Visualization and Computer Graphics, pages 1–1, 2018.
  • [56] B. Santos. Evaluating visualization techniques and tools: What are the main issues? In The AVI Workshop on Beyond Time and Errors: Novel Evaluation Methods For Information Visualization (BELIV ’08), 2008.
  • [57] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite: A Grammar of Interactive Graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341–350, Jan. 2017.
  • [58] A. Satyanarayan, K. Wongsuphasawat, and J. Heer. Declarative Interaction Design for Data Visualization. In ACM User Interface Software & Technology (UIST), 2014.
  • [59] M. M. Sebrechts, J. V. Cugini, S. J. Laskowski, J. Vasilakis, and M. S. Miller. Visualization of Search Results: A Comparative Evaluation of Text, 2D, and 3D Interfaces. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’99, pages 3–10, New York, NY, USA, 1999. ACM.
  • [60] E. Segel and J. Heer. Narrative Visualization: Telling Stories with Data. IEEE Transactions on Visualization and Computer Graphics, 16(6):1139–1148, Nov. 2010.
  • [61] J. Seo and B. Shneiderman. A Rank-by-Feature Framework for Interactive Exploration of Multidimensional Data. Information Visualization, 4:96–113, 2005.
  • [62] S. Silva, B. S. Santos, and J. Madeira. Using color in visualization: A survey. Computers & Graphics, 35(2):320–333, 2011.
  • [63] D. Skau. Best Practices: Maximum Elements For Different Visualization Types. https://visual.ly/blog/maximum-elements-for-visualization-types/, 2012.
  • [64] C. Stolte, D. Tang, and P. Hanrahan. Polaris: a system for query, analysis, and visualization of multidimensional databases. Commun. ACM, 51(11):75–84, 2008.
  • [65] J. Tukey. Exploratory Data Analysis. Addison-Wesley series in behavioral science. Addison-Wesley Publishing Company, 1977.
  • [66] L. van der Maaten and G. Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • [67] M. Vartak, S. Huang, T. Siddiqui, S. Madden, and A. Parameswaran. Towards Visualization Recommendation Systems. SIGMOD Rec., 45(4):34–39, May 2017.
  • [68] M. Vartak, S. Madden, A. Parameswaran, and N. Polyzotis. SeeDB: Automatically Generating Query Visualizations. Proceedings of the VLDB Endowment, 7(13):1581–1584, 2014.
  • [69] F. Viégas, M. Wattenberg, D. Smilkov, J. Wexler, and D. Gundrum. Generating charts from data in a data table. US 20180088753 A1., 2018.
  • [70] F. B. Viégas, M. Wattenberg, F. van Ham, J. Kriss, and M. McKeon. ManyEyes: A Site for Visualization at Internet Scale. IEEE Transactions on Visualization and Computer Graphics, 13(6):1121–1128, Nov. 2007.
  • [71] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2004.
  • [72] H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer Publishing Company, Incorporated, 2nd edition, 2009.
  • [73] L. Wilkinson, A. Anand, and R. Grossman. Graph-Theoretic Scagnostics. In Proceedings of the 2005 IEEE Symposium on Information Visualization, INFOVIS ’05, Washington, DC, USA, 2005. IEEE Computer Society.
  • [74] G. Wills and L. Wilkinson. AutoVis: Automatic Visualization. Information Visualization, 9:47–69, 2010.
  • [75] K. Wongsuphasawat, D. Moritz, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Towards A General-Purpose Query Language for Visualization Recommendation. In ACM SIGMOD Workshop on Human-in-the-Loop Data Analytics (HILDA), 2016.
  • [76] K. Wongsuphasawat, D. Moritz, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2016.
  • [77] K. Wongsuphasawat, Z. Qu, D. Moritz, R. Chang, F. Ouk, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Voyager 2: Augmenting Visual Analysis with Partial View Specifications. In ACM Human Factors in Computing Systems (CHI), 2017.
  • [78] E. Zgraggen, Z. Zhao, R. Zeleznik, and T. Kraska. Investigating the Effect of the Multiple Comparisons Problem in Visual Analysis . In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 479:1–479:12, New York, NY, USA, 2018. ACM.
  • [79] Y. Zhu. Measuring Effective Data Visualization . In G. Bebis, R. Boyle, B. Parvin, D. Koracin, N. Paragios, S.-M. Tanveer, T. Ju, Z. Liu, S. Coquillart, C. Cruz-Neira, T. Müller, and T. Malzbender, editors, Advances in Visual Computing, pages 652–661, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg.