Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels

10/24/2021
by Jochen Görtler, et al.
University of Konstanz

The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at a large technology company and find that conventional confusion matrices do not support more complex data structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three case studies that help people better understand model performance and reveal hidden confusions.



1 Related Work

Model evaluation is a key step to successfully applying machine learning. However, what it means for a model to perform well greatly depends on the task. A variety of metrics have been developed to evaluate classifiers [16]; common examples include accuracy, precision, and recall. However, there is no one-size-fits-all metric (there is no better illustration of this than the overwhelming number of different metrics that can be computed from a confusion matrix: https://en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion), and the utility of a metric depends on the modeling task.
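For illustration, the most common of these metrics reduce to simple row, column, and diagonal sums over the matrix. The following is a minimal sketch with toy numbers, not data from any cited work:

```python
import numpy as np

# Toy 3-class confusion matrix: rows are actual classes, columns are predicted.
cm = np.array([
    [50,  3,  2],   # actual: class 0
    [ 4, 40,  6],   # actual: class 1
    [ 1,  5, 39],   # actual: class 2
])

accuracy  = np.trace(cm) / cm.sum()        # correct predictions over all instances
recall    = np.diag(cm) / cm.sum(axis=1)   # per class: diagonal over row sums
precision = np.diag(cm) / cm.sum(axis=0)   # per class: diagonal over column sums

print(accuracy, recall, precision)
```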

Model Performance Visualizations

The visualization community has developed novel visual encodings to help practitioners better understand their model's performance. These techniques can be categorized as either class-based or instance-based. Among class-based techniques, Alsallakh et al. [1] use a radial graph layout where links represent confusion between classes. Seifert et al. [27] embed all test and training samples into a radial coordinate system whose units are classes. Regarding instance-based visualization, Amershi et al. [5] propose a unit visualization that shows how each instance is classified and how close instances are in the feature space. Squares [25] extends this visualization and shows, per class, how instances are classified within a multi-class classifier. Similarly, ActiVis [18] uses instance-based prediction results to explore neuron activations in deep neural networks. While we focus on practitioners, previous work has also studied confusion matrix literacy and designed alternative representations to help non-experts and the public understand algorithmic performance [28]. Common to these works is that they introduce new visualization concepts that may not be familiar to machine learning practitioners and therefore require training and adaptation time.

Confusion Matrix Visualizations

Instead of introducing alternative visual encodings, our approach enhances confusion matrices directly, a ubiquitous visualization already familiar to the machine learning community [37], and adapts them to the types of data encountered in practice today. Some prior work enhances conventional confusion matrices. For example, individual instances have been shown directly in the cells of a confusion matrix [36, 7]. Alsallakh et al. [2] investigate hierarchical structure in neural networks using confusion matrices. In their work, hierarchies can be constructed interactively based on blocks in the confusion matrix, which are then shown using icicle plots. They also provide group-level statistics for the elements of the hierarchy. However, in contrast to our work, their system does not consider multi-output labels.

Confusion matrices are also used in iterative model improvement. Hinterreiter et al. [13] propose a system to track confusions and model performance over time by juxtaposing confusion matrices of different modeling runs. Their system also provides an interactive shelf to specify the individual runs. Our work also features an interactive shelf; however, its purpose in our work is to drill down into sub-hierarchies of a larger confusion matrix. Furthermore, confusion matrices have been used to directly interact with machine learning models. For example, they can be used to interactively adapt decision tree classifiers [35] by augmenting them with information about the splits performed by each node. For models based on boosting, Talbot et al. [33] propose a system to adjust the weights of weak classifiers in an ensemble to achieve better performance. Furthermore, Kapoor et al. [20] propose a technique to interactively steer the optimization of a machine learning classifier by directly interacting with confusion matrices. None of these systems consider hierarchical or multi-output labels.

There are also different approaches that generalize beyond conventional confusion matrices. Class-based similarities from prediction scores, instead of regular confusions, have been proposed to generalize better to hierarchical and multi-output labels [3]. Zhang et al. [40] embed pairwise class prediction scores into a Cartesian coordinate system to compare the performance of different models. Furthermore, multi-dimensional scaling has been used to embed confusion matrices into 2D [32]. In contrast to our work, these adaptations stray further from the traditional concept of a confusion matrix.

Model Confusions as Probability Distributions

Caelen [8] frames the confusion matrix as a probability distribution to investigate the variability of a classifier. In addition, Tötsch and Hoffmann [34] show how a probabilistic view of the confusion matrix can be used to quantify the uncertainty of a classifier. However, both of these works only consider binary classification. Preliminary work on generalizing to multi-label problems [22] computes the contribution of an instance to a cell, but only when the prediction was partially correct. Our work builds upon these views and produces a unified language for generalizing confusion matrices to hierarchical and multi-output labels.

Table Algebra

Our work is inspired by relational algebra theory [10] and the table algebra in Polaris [29], now the popular software Tableau, and its work on visualizing hierarchically structured data [30]. In Polaris, a user can visually explore the contents of a database by dragging variables of interest onto "shelves". The contents of a shelf are then transformed into queries to a relational database or OLAP cube, which retrieves the data for visualization. Our approach differs in that it supports operations on matrices and is based on probability distributions rather than a relational database model.

2 Formative Research: Survey, Challenges, & Tasks

To understand how practitioners use confusion matrices in their own work, we conducted a survey that resulted in 20 responses from machine learning researchers, engineers, and software developers focusing on classification tasks at a large technology company. Respondents were recruited using an internal mailing list about machine learning tooling and targeted practitioners who regularly use confusion matrices. We take inspiration from the methods used in previous visualization literature on multi-class model visualization [25], bootstrapping our survey questions from their work. The survey consists of eight questions centered around machine learning model evaluation and confusion matrix utility. The first two questions (Q1 and Q2) are multiple choice, while the remaining questions (Q3–Q8) are open response.

  • Q1: Which stages of machine learning do you typically work on?

  • Q2: How many classes does your data typically have?

  • Q3: When do you use confusion matrices in your ML workflow?

  • Q4: Which insights do you gain from using confusion matrices?

  • Q5: Which insights are missing, or do you wish you would also gain, from using a confusion matrix?

  • Q6: How often are your labels structured hierarchically (for example, apple could be in the category fruit, which in turn is part of the category food)? How do you work with hierarchical confusions? How deep are the hierarchies?

  • Q7: When do you encounter data where one instance has multiple labels (for example, an instance that is apple and ripe)? How many labels are typically associated with an instance?

  • Q8: How else do you visualize your data and errors besides confusion matrices? What advantages do these visualizations have?

From the raw survey data, we used a thematic analysis method to group common purposes, practices, and challenges of model evaluation into categories [11]. Throughout the discussion, we use representative quotes from the respondents to illustrate the main findings.

Figure 1: Survey responses from machine learning practitioners (multiple choice questions). Left: respondents cover every stage of the machine learning process; many of them work on "data processing" and "model training," with the majority of respondents indicating "model evaluation," the specific machine learning stage we focus on in this work. Right: respondents work on classification models of a variety of sizes, ranging from binary classifiers to models with over 1,000 classes.

2.1 Respondents’ Machine Learning Backgrounds

We asked practitioners what stages of the machine learning process they typically work on (Q1, multiple choice) to establish context about the respondents' backgrounds. The left-hand side of Figure 1 shows a histogram of the stages our respondents have experience in, sorted by the chronological order of stages in the machine learning development process [4]. With some expertise represented at every stage of machine learning development, most respondents indicate their work spans "data processing," "model training," and "model evaluation." This diverse experience and concentration on model evaluation gives us, the researchers, confidence that the population of practitioners surveyed has the relevant experience and knowledge to speak about the intricacies of model evaluation with confusion matrices.

To gain insight into the scale of the respondents' modeling work, we also asked about the typical number of classes modeled from their datasets (Q2, multiple choice). The right-hand side of Figure 1 shows a histogram of these responses, sorted from the fewest number of classes (binary classification) to the largest (101+). Results show an emphasis on binary classification and a majority skew towards models with fewer than 50 classes, but also representation from larger-scale models with over 100 classes. These results establish that our respondents have worked with small-scale datasets, large-scale datasets, and everything in between, strengthening our confidence that many different machine learning applications are represented in our formative research.

2.2 Why Use Confusion Matrices?

We first categorize and describe the reasons why respondents use confusion matrices (Q3). While we expected certain use cases to be reported, we were surprised by the number of roles and responsibilities confusion matrices satisfy in practice, such as performance analysis, model and data debugging, reporting and sharing, and data annotation efforts (Q4).

2.2.1 Model Evaluation and Performance Analysis

Confusion matrices are constructed to evaluate, test, and inspect class performance in models; it is therefore unsurprising that the majority of responses (14/20) indicate model evaluation as the main motivation for using confusion matrices. Respondents explain that detailed model evaluation is critical to ensure machine learning systems and products produce high-accuracy predictions or a good user experience. According to one respondent, confusion matrices allow a practitioner to see "performance at a glance." One frequent and primary example reported was checking for the presence of a strong diagonal; diagonal cells indicate correctly predicted data instances (whereas cells outside the diagonal represent confusions), therefore a strong diagonal is found in well-performing models.

2.2.2 Debugging Model Behavior by Finding Error Patterns

Besides seeing performance at a glance, 7/20 respondents indicated that confusion matrices are also useful for identifying error patterns to help debug a model. Regarding pattern identification, a respondent said confusion matrices “allow me to see how a certain class is being misclassified, or if there is a pattern in misclassifications that can reveal something about the behavior of my model.”

Respondents described multiple common patterns practitioners look for, including checking the aforementioned strong diagonal of the matrix, finding classes with the most confusions, and finding classes that are over-predicted. Another interesting pattern, reported by a natural-language processing practitioner, was determining the directionality of confusions for a pair of classes: for example, in a bidirectional language translation model, does a particular sentence correctly translate from the source to the target language, but not the reverse? These patterns can be "…much more revealing than a simple number," and help practitioners find shared similarity between two confused classes.

Figure 2: A visual representation of class labels for conventional confusion matrices (left) compared to our work (in blue) that supports hierarchical labels and multi-output labels. To build a confusion matrix from any of these label structures, compute every combination of the actual label against the predicted label for all classes.

2.2.3 Communication, Reporting, and Sharing Performance

While confusion matrices help an individual practitioner understand their own model's behavior, they are also used in larger machine learning projects with many invested stakeholders. Here, it is critical that team members are aware of the latest performance of a model during development, or can monitor the status of a previously deployed model that is evaluated on new data. One respondent reported that "exporting the matrix is more useful," since confusion matrices are commonly shared in communication reports with other individuals.

2.2.4 High-quality Data Annotation

Beyond model evaluation, respondents said confusion matrices are also useful for data labelling/annotation work. Machine learning models require large datasets to better generalize, which results in substantial efforts to obtain high-quality data labels. In this use case, a practitioner wants to understand annotation performance instead of model performance; in some scenarios the same practitioner fulfills both roles.

Some newly labelled datasets undergo quality assurance, where a subset of the newly labelled data is scrutinized, adjusted, and corrected if any labels were incorrectly applied. These labels are then compared against the original labelled dataset using a confusion matrix. These data-label confusion matrices visualize the performance of a data annotator (who could be human or computational) instead of a model (which can also be thought of as an annotator). This process allows practitioners to find data label discrepancies between different teams. For example, one respondent reported they "get to understand if there are certain labels or prompts that are causing confusion between the production [label] team and the quality assurance [label] team." The quality assurance team often shares these visualizations with the production labeling teams to "improve the next [labeling] iteration," guiding annotation efforts through rich and iterative feedback.

2.3 Challenges with Confusion Matrices

When prompted about where confusion matrices may not be sufficient (Q5), respondents voiced that they have experienced challenges (C1–C4) due to limitations in their representation and a lack of support for more complex dataset structures. Visualizations for these datasets either did not exist, or data were shoehorned into existing confusion matrices by neglecting or abusing label names and structure (Q8).

2.3.1 Hidden Performance Metrics (C1)

The most common limitation of conventional confusion matrices discovered in our survey is their inability to show performance metrics for analysis context. Over half of respondents (11/20) said that it was important to see other metrics alongside confusions (Q8). Even accuracy is not explicitly listed in a confusion matrix but must be computed from specific cells for each class, which can be taxing when performing the mental math over and over. While respondents listed other important performance metrics such as precision, recall, and true/false positive/negative rates, deciding which metrics are important is specific to the modeling task and domain. Lastly, when sharing confusion matrices with others, respondents said it is important to provide textual descriptions of performance to help focus attention on specific errors.

2.3.2 Complex Dataset Structure: Hierarchical Labels (C2)

Another big challenge for confusion matrices is capturing and visualizing complex data structures that are now common in machine learning applications. Conventional confusion matrices assume a flat, one-dimensional structure, but many datasets today, across data types, have hierarchical structure. When asked specifically about dataset structure, 9/20 respondents said they work with hierarchical data and that typical model evaluation tools, like confusion matrices, do not suffice (Q6). For example, an apple class could be considered a subset of fruit, which is a subset of food. One respondent indicated that their team works almost exclusively with hierarchical data. In applications with hierarchical data, respondents indicated that the hierarchies were on average 2–4 levels deep (i.e., from the root node to the leaf nodes). Handling hierarchical classification data and the subgroups inherent to its structure is currently not supported in confusion matrix representations.

2.3.3 Complex Dataset Structure: Multi-output Labels (C3)

Another type of dataset structure complexity is also well represented in our survey, namely datasets whose instances have multi-output labels (Q7). For example, an apple could be red and ripe. Over half (11/20) of the respondents indicated that they work with multi-output label datasets that conventional confusion matrices do not support. In such datasets, respondents said that data instances have on average 1–3 labels each, but one respondent described an application where instances had 20+ labels. It is important to note the distinction between labels and metadata: metadata is any auxiliary data describing an instance, whereas a label denotes a specific model output for prediction. In short, all labels are metadata, but not all metadata are labels.

2.3.4 Communicating Confusions while Collaborating (C4)

We have already identified and discussed the need for communicating model performance and common confusions in collaborative machine learning projects. However, there remains friction when sharing new model results with confusion matrices, for example, a loss of quality and project context (e.g., copying and pasting charts as images into a report). It can be time consuming for a practitioner to prepare and polish a visualization to include in a report, yet it is important to ensure model evaluation is accessible to others. Some respondents said it would be convenient if their confusion matrices could be easily exported. This challenge is twofold: what are better and sensible defaults for confusion matrix visualization, and how can systems reduce the friction for practitioners sharing their latest model evaluations?

2.4 Motivation and Task Analysis

From our formative research, there is clear opportunity to improve confusion matrix visualization. Practitioners reported that conventional confusion matrices, while useful, are insufficient for many of the recent advancements and applications of machine learning, and expressed enthusiasm for visualization to better help understand model confusions. This research also yielded several key ideas that inspired us to rethink authoring and interacting with confusion matrices. To inform our design, we distill tasks that practitioners perform to understand model confusions. Tasks (T1–T4) map one-to-one to challenges (C1–C4):

  • T1: Visualize derived performance metrics while enabling flexible data analysis, such as scaling and normalization (C1).

  • T2: Traverse and visualize hierarchical labels (C2).

  • T3: Transform and visualize multi-output labels (C3).

  • T4: Share confusion matrix analysis and configurations (C4).

3 Confusion Matrix Algebra

From our formative research, we aim to generalize confusion matrices to include hierarchical and multi-output labels. For these types of data, analysis usually requires data wrangling as a preprocessing step, for which practitioners develop one-off scripts. Our work takes a different approach: we provide a unified view of the different operations and analysis tasks for confusion matrices in the form of a specification language (T4) that is based on a key insight: confusion matrices can be understood as probability distributions. While this way of viewing confusion matrices may seem unwieldy at first, its expressiveness becomes clear when we think about how practitioners interact with hierarchical and multi-output labels (Figure 2).

Confusion matrices show the number of occurrences for each combination of an actual class versus a predicted class. Rows in a confusion matrix represent actual classes, columns represent predicted classes, and the cells represent the frequencies of all combinations of actual and predicted classes. Our algebra leverages the fact that the actual class and the predicted class can be viewed as variables in a multivariate probability distribution $P(\text{actual}, \text{pred})$. The probability mass function of this distribution is given by the relative frequencies of occurrences, which we obtain by dividing the absolute frequencies by the number of instances in the dataset. For an introduction to multivariate probabilities, we recommend the book by Hogg [14]. Here we explain the concepts of our algebra using fruit labels (e.g., apple, orange, lemon) as a running example. In this setting, the following describes a cell in the confusion matrix, specifically apples that are mistaken for oranges:

$$P(\text{actual} = \text{apple},\ \text{pred} = \text{orange})$$
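To make this concrete, the following is a minimal sketch of the probabilistic view with hypothetical instance labels; the construction simply counts cells and divides by the dataset size:

```python
from collections import Counter

# Hypothetical (actual, predicted) label pairs, one per instance.
pairs = [("apple", "apple"), ("apple", "orange"), ("orange", "orange"),
         ("apple", "apple"), ("lemon", "lemon"), ("orange", "lemon")]

n = len(pairs)
# Probability mass function: relative frequency of every (actual, pred) cell.
P = {cell: count / n for cell, count in Counter(pairs).items()}

# The cell for apples mistaken for oranges: P(actual=apple, pred=orange).
print(P.get(("apple", "orange"), 0.0))  # 1/6 with this toy data
```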

This probabilistic framing allows us to use the standard operations of multivariate probability distributions to transform our data. In particular, we use the following operations, which we also illustrate in Figure 3: Conditioning fixes given variables of a probability distribution to specific values. We can use this operation to extract sub-views of a larger confusion matrix. Marginalization allows us to discard variables of multivariate distributions that we are currently not interested in by summing over all values of such variables. These operations have the algebraic property that their results are again probability distributions; mathematically, this is called closedness. This property is not purely theoretical but has practical implications: it allows us to chain multiple operations together to form complex queries. Moreover, the algebra automatically ensures correct normalization after every step. In addition to the two operations above, we also propose a nesting operation, which is useful to investigate multiple labels simultaneously.
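A sketch of how conditioning and marginalization could be implemented over such a distribution follows; the helper names are our own, and both return proper distributions, which is exactly the closedness that makes chaining work:

```python
def condition(P, names, var, value):
    """Fix `var` to `value`, drop it, and renormalize: P(rest | var=value)."""
    i = names.index(var)
    kept = {k[:i] + k[i+1:]: p for k, p in P.items() if k[i] == value}
    z = sum(kept.values())
    return {k: p / z for k, p in kept.items()}, names[:i] + names[i+1:]

def marginalize(P, names, var):
    """Sum `var` out; totals are preserved, so the result stays normalized."""
    i = names.index(var)
    out = {}
    for k, p in P.items():
        r = k[:i] + k[i+1:]
        out[r] = out.get(r, 0.0) + p
    return out, names[:i] + names[i+1:]

# Because both results are again distributions (closedness), queries chain:
# e.g., condition on one variable, then marginalize another.
```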

Figure 3: A visual representation of the three techniques for transforming high-dimensional multi-output labels. First, we can condition the confusion matrix based on the value of another label. To focus on a single label, we can use marginalization to sum across ignored labels. We can also nest multiple labels to form hierarchical labels.

3.1 Normalization

Normalization is essential for confusion matrices as it determines how the data is visualized (T1). Our probabilistic framework guarantees normalization implicitly, as all objects are probability distributions. Depending on the task, it might make sense to normalize a confusion matrix by rows or columns. Choosing a normalization scheme can emphasize patterns that large matrix entries might otherwise hide (example shown in subsection 5.1). Normalizing by rows or by columns also produces recall and precision, two widely used performance metrics echoed from our formative research. The recall for a label $c$ is the value on the diagonal of a matrix normalized by rows: $P(\text{pred} = c \mid \text{actual} = c)$. Similarly, the precision for a label $c$ is the value on the diagonal of a matrix normalized by columns: $P(\text{actual} = c \mid \text{pred} = c)$. Both cases can be computed using Bayes' rule.
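In matrix form, these normalizations are one-line operations; a minimal sketch with toy numbers:

```python
import numpy as np

cm = np.array([[50, 3, 2], [4, 40, 6], [1, 5, 39]], dtype=float)

# Row normalization: each row becomes P(pred | actual = row class),
# so the diagonal holds per-class recall.
recall = np.diag(cm / cm.sum(axis=1, keepdims=True))

# Column normalization: each column becomes P(actual | pred = column class),
# so the diagonal holds per-class precision.
precision = np.diag(cm / cm.sum(axis=0, keepdims=True))
```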

3.2 Hierarchical Labels

With our algebra, practitioners can understand how confusions relate to hierarchical labels by drilling down into specific sub-hierarchies (T2). In addition, we can use the hierarchical structure to improve the visual representation of large confusion matrices by collapsing sub-hierarchies. Collapsing sub-hierarchies is equivalent to summarizing multiple entries. First, we collect all the rows/columns that belong to the category to be collapsed. In terms of probability distributions, we create a compound probability (here for Citrus) for these items:

$$P(\text{actual} = \text{citrus}) = P(\text{actual} = \text{lemon} \lor \text{actual} = \text{orange})$$

This rewrite is possible because, for visualization, the individual rows/columns of a confusion matrix are not affected by one another; the classes are mutually exclusive. Therefore, we can conclude

$$P(\text{actual} = \text{citrus}) = P(\text{actual} = \text{lemon}) + P(\text{actual} = \text{orange}).$$

The other type of analysis that our algebra supports is drilling down into a sub-hierarchy. For this, we condition the multivariate distribution on the rows/columns that we want to consider:

$$P(\text{actual}, \text{pred} \mid \text{actual} \in \text{Citrus},\ \text{pred} \in \text{Citrus})$$

This operation results in a new confusion matrix that only contains the specified rows and columns, as shown in Figure 3.
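Both hierarchy operations can be sketched over the two-variable distribution from above; the Citrus grouping and helper names are our own running illustration:

```python
CITRUS = {"lemon", "orange"}

P = {("lemon", "orange"): 0.1, ("orange", "orange"): 0.3,
     ("apple", "apple"): 0.5, ("apple", "lemon"): 0.1}

def collapse(P, group, name="citrus"):
    """Merge all classes in `group` into one compound class on both axes.
    Since classes are mutually exclusive, their probabilities simply add."""
    out = {}
    for (a, p), prob in P.items():
        key = (name if a in group else a, name if p in group else p)
        out[key] = out.get(key, 0.0) + prob
    return out

def drill_down(P, group):
    """Keep only rows/columns inside `group` and renormalize (conditioning)."""
    kept = {k: v for k, v in P.items() if k[0] in group and k[1] in group}
    z = sum(kept.values())
    return {k: v / z for k, v in kept.items()}

print(collapse(P, CITRUS))    # 2x2 matrix over {citrus, apple}
print(drill_down(P, CITRUS))  # confusions among citrus classes only
```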

3.3 Multi-output Labels

Multi-output labels make it significantly harder to evaluate a model's performance. The number of cells in a confusion matrix grows exponentially with the number of labels in datasets with multi-output labels. Adding an additional Taste label to the fruit dataset results in a joint distribution over four variables, $P(\text{actual}_{\text{fruit}}, \text{pred}_{\text{fruit}}, \text{actual}_{\text{taste}}, \text{pred}_{\text{taste}})$, with one cell for every combination of actual and predicted states.

Our algebra provides multiple techniques to transform high-dimensional confusions into 2D for different analyses (T3), illustrated in Figure 3. In the following discussion, we use example analysis questions to ground the explanation of each technique.

Initially, an analyst might ask What are the confusions for "Taste", if the predicted label was "apple"?, i.e., we consider confusions for one label given a class of a different label. We achieve this in our algebra by conditioning the multivariate distribution on this class. The following example shows the confusion matrix only for apples:

$$P(\text{actual}_{\text{taste}}, \text{pred}_{\text{taste}} \mid \text{pred}_{\text{fruit}} = \text{apple})$$

This operation usually changes the number of columns and rows of the resulting confusion matrix because not all labels necessarily occur together with the fixed label (Figure 3, left).

Furthermore, an analyst may currently not be interested in one of the variables and ask: What are the confusions for "Fruits" without considering their "Taste"? In this case, we can discard the needless variable in our probabilistic framework using marginalization. Here, we discard Taste:

$$P(\text{actual}_{\text{fruit}}, \text{pred}_{\text{fruit}}) = \sum_{a,\,p} P(\text{actual}_{\text{fruit}}, \text{pred}_{\text{fruit}}, \text{actual}_{\text{taste}} = a, \text{pred}_{\text{taste}} = p)$$

Note that this operation does not change the dimensionality of the variables that we are interested in but instead sums over the frequencies of the discarded entries accordingly (Figure 3, middle).

Finally, analysts who need to understand the relationship between two different variables may ask: What are the confusions for the "Taste" of every "Fruit"? To inspect multiple dimensions simultaneously, our algebra can nest one label below another. Multiple labels in a dataset form a high-dimensional confusion matrix, which cannot be readily visualized using a 2D matrix representation. The nesting operation solves this problem by realizing all possible combinations of labels in a structured manner (the power set of the variables) and induces a hierarchical structure: the relationship between parent and child is given by the ordering of the nesting (Figure 3, right). This is a useful technique for visualizing joint distributions.

4 Neo: Interactive Confusion Matrix Visualization

To put our confusion matrix algebra into practice, we design and develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with confusion matrices for model evaluation. Our visualization system is agnostic to the model architecture and data. As long as the classification problem (or data annotation task) can record instance labels and predictions, Neo can ingest the results. Throughout the following section, we link relevant views and features to the tasks (T1–T4) identified from our formative research (subsection 2.4).

4.1 Design Goal: Preserve Familiar Representation

Whereas many machine learning visualizations do not have an established form, confusion matrices have an expected and borderline “standardized” representation. Instead of reinventing the confusion matrix visualization, our primary design goal for Neo was to leverage the familiarity of confusion matrices and improve upon their functionality with complementary views and interaction. For example, in the simplest case where a practitioner has a classification model with a dataset whose instances have no hierarchy and only one class label, Neo shows a conventional confusion matrix. However, even in these cases there is still opportunity for improving model evaluation through interaction.

Figure 4: Neo’s JSON specification based on our confusion matrix algebra. The specification configures the confusion matrix based on the selected normalization scheme, visualization encoding, and desired measures, but also saves the state of the shown hierarchy (collapsed, filter) and how multi-output labels are shown using either marginalization (classes), nesting (order of classes array), or conditioning (where).

4.2 Specification for Matrix Configuration

Neo is built upon a powerful domain-specific language (DSL) for specifying a confusion matrix configuration. Implemented in Neo as JSON, this paradigm provides similar benefits to other declarative specification visualizations [26]: automated analysis, reproducibility, and portability. Figure 4 shows an example “spec” and its different fields. In this section we describe every field of the spec.

Neo is a reactive system: configuring the spec updates the visualization, and interacting with the visualization updates the spec. This is a powerful interaction paradigm where a practitioner can tailor their desired view using either code or the interface while remaining in sync [21]. Once a practitioner is satisfied with their visualization, they can easily share their spec with others since their view is represented as a JSON string (T4). In Neo, the spec is hidden by default, but is exposed through a single button click.
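Because Figure 4 appears here only as an image, the following hypothetical spec illustrates the idea; the field names follow the Figure 4 caption (normalization, encoding, measures, collapsed, filter, classes, where), while the values and exact structure are our own assumptions rather than Neo's verbatim format:

```json
{
  "normalization": "row",
  "encoding": "color",
  "measures": ["accuracy", "recall", "precision"],
  "collapsed": ["citrus"],
  "filter": [],
  "classes": ["fruit", "taste"],
  "where": { "pred_taste": "sweet" }
}
```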

4.3 Interacting with Confusion Matrices

The primary view of Neo is the confusion matrix itself (multiple examples are shown in the teaser figure). Rows represent actual classes and columns represent predicted classes. A cell contains the number of data instances from the row (actual) class that were predicted as the column class; the exceptions are cells along the diagonal, which indicate the number of correctly predicted instances for a particular class. To see how many instances are in each cell, one can hover over any cell to display a textual description of the confusion count, a feature requested in our formative research. Moreover, hovering over a cell highlights its row and column in the matrix using a light amber background color (Figure 5A) to ease a user's eye-tracking when reading the axis labels.

Figure 5: (A) Brushing a cell in Neo displays the confusion information in a natural language caption. (B) Confusion matrix cells with a 0 value are excluded from the encoding scale. (C) Users can choose between a color and size encoding for the confusion matrix cells.
Figure 6: Neo's interactive shelf lets practitioners specify how to transform multi-output labels for visualization. Non-activated (gray) multi-output labels are marginalized. Activated (blue) multi-output labels define a nesting order. The confusion matrix can also be conditioned on the value of a particular label.

4.3.1 Visualization Encodings and Confusion Normalization

The default encoding is color (arguably the default in practice). Users can toggle between a color encoding and a size encoding where inner squares are scaled to support comparison of absolute values (Figure 5C); this is set in the specification in the encoding field (Figure 4).

Regardless of encoding, this common representation already presents a problem with confusion matrices: the diagonal contains many more instances than off-diagonal entries (e.g., by orders of magnitude), which hides important confusions in the matrix. As practitioners improve a model over time, the net outcome moves instances from off-diagonal entries to the diagonal, further exacerbating this problem. Ironically, the better the model optimization, the harder it is to see confusions.

Neo addresses this issue in multiple ways. First, Neo leverages a color discontinuity for the value 0 [19]. Cells with 0 instances are not colored and instead contain a small light-gray dash, which makes it immediately clear which cells have confusions and which do not (Figure 5B). Second, Neo can scale the color of the matrix visualization by everything except the diagonal, giving the full color range exclusively to the confusions (the diagonal is removed from the visualization in this case). Third, practitioners can choose from different normalization schemes, presented in detail in subsection 3.1, to see different views of the confusions (T1). The default normalization scales cells by the number of instances in each cell, but Neo supports normalizing by rows or columns. We can read recall and precision, respectively, from the diagonal of the normalized matrix. Normalization is set in the spec in the normalization field (Figure 4).

Figure 7: (A) When a dataset has class count imbalance (e.g., some classes have many more data instances than others, as seen by the "Count Obs." metric), confusions off the diagonal are hidden and the "Accuracy" metric is misleading. (B) Normalizing by row and/or column probabilities reveals hidden confusions and has direct connections to other, more appropriate model evaluation metrics such as precision and recall.

4.3.2 Performance Metrics Per Class

Related to choosing different normalization schemes, respondents from our formative research indicated that confusion matrices lack analysis context for viewing other metrics alongside the visualization. Performance metrics such as accuracy, precision, and recall are not readily accessible from a confusion matrix. Aggregate metrics such as these can also be broken down from the model level to the class level to support better class-by-class analysis. Neo solves this problem by visualizing both aggregate and per-class metrics on the right-hand side of the confusion matrix as an additional column per metric (teaser figure, A, and Figure 7), where the top number corresponds to the aggregate metric and the numbers aligned with each row correspond to each class. Besides the metrics listed above, Neo also includes metrics such as the count of actual and predicted instances, true/false positives, and true/false negatives. These are all set in the spec in the measures field (Figure 4). While this addition may seem small, it addresses one of the most common limitations of conventional confusion matrices and was continuously requested by respondents in our formative research.

4.4 Visualizing Hierarchical Labels

Hierarchical datasets are one of the more complex structures discovered in our formative research that conventional confusion matrices do not support (see subsection 3.2). Following our design goal to preserve the confusion matrix representation, Neo supports hierarchical labels (see Figure 2) through multiple design improvements (T2). First, the class labels on the axes are nested according to the hierarchy, where classes deeper in the hierarchy are indented (see the teaser figure, B, and Figure 8). Second, the matrix is partitioned into blocks based on the lowest hierarchy level. Hovering over any cell in the matrix highlights its parent hierarchy indicators (vertical gray bars) in black for easier tracking (Figure 8A). Together, these two improvements help users understand model performance with the hierarchy directly represented in the visualization.

Neo can interactively collapse sub-hierarchies in two ways (see Figure 8). First, selecting a parent class on either axis toggles between showing or hiding the children classes. The interaction collapses (or expands) the parent class and recomputes the confusion data to accurately represent the new aggregate class category in the matrix (T2). This implements a Focus+Context paradigm, by expanding class categories of interest while keeping surrounding categories available nearby [9]. Neo models hierarchies as virtual category trees [31], and expands and collapses sub-matrices symmetrically, since the nonsymmetric case makes confusion much harder to reason about. Alternatively, selecting the magnifying glass icon triggers a drill-down, replacing the entire visualization with only the selected sub-hierarchy and remaps the color (or size) encoding. These techniques allow practitioners to explore larger confusion matrices by reducing the number of visible classes shown and comparing class categories against one another. Regardless of technique, the spec is also updated to record which sub-hierarchies are collapsed or zoomed, set in the collapsed and filter fields respectively (Figure 4), ensuring that when returning to Neo in the future, or sharing the current view, a user picks up where they left off (T4).

4.5 Visualizing Multi-output Labels

Multi-output labels are another complex structure discovered in our formative research that conventional confusion matrices do not support (see subsection 3.3). Analyzing multi-output models is difficult since confusions are represented in an unbounded, high-dimensional space (see Figure 2), which inhibits directly applying conventional matrix visualization. To preserve the familiarity of the confusion matrix representation, Neo supports three mechanisms to transform high-dimensional confusions into 2D (T3): conditioning, marginalization, and nesting (for details of each, see subsection 3.3). Inspired by previous work in exploratory visualization [29, 38], Neo visualizes multi-output labels using an interactive shelf to specify label transformations (Figure 6). The interactive shelf contains all multi-output labels for a given dataset. Multi-output labels are either activated or not; activating a multi-output label toggles its color from gray to blue and displays the label in the confusion matrix for analysis.

Figure 8: (A) In a deep learning model trained on ImageNet, Neo reveals that the geological-form sub-hierarchy contains confusions between semantically related classes across sub-hierarchies. (B) Another high-level sub-hierarchy for animal-animate expands (C) to show detailed confusion comparisons within and between sub-hierarchies of animal classes.
Conditioning

The first technique to transform multi-output confusions is conditioning, i.e., analyzing confusions for one label given a class of a different label. In these scenarios, Neo conditions the confusion matrix based on the value of a specified label. A practitioner can select to condition the matrix on an actual or predicted class from the conditioning label in the interactive shelf. Note that when a multi-output label is used for conditioning, it can no longer be used for nesting. Similar to the other techniques, these options are reflected in the spec in the where field (Figure 4).

Marginalization

The next technique to visualize high-dimensional confusions uses marginalization to sum over all other multi-output labels that a practitioner is not interested in. Therefore, in the interactive shelf in Neo, multi-output labels that are not activated, i.e., grayed out instead of blue, are marginalized automatically. In the spec, activated classes are kept in-sync and saved in the classes field (Figure 4).

Nesting

Oftentimes a practitioner wants to inspect several multi-output labels at once. To address this issue, Neo nests multi-output labels under one another. Nesting multi-output labels creates a hierarchical label structure, which Neo already supports, where each class of the child label is replicated across all classes of the parent label. Neo automatically nests multi-output labels when more than one label is activated in the interactive shelf. Reordering the labels in the shelf changes the nesting order. This order is also reflected in the spec as the order of the activated classes in classes field (Figure 4).

4.6 System Design and Implementation

Neo is a modern web-based system built with Svelte (https://svelte.dev), TypeScript (https://www.typescriptlang.org), and D3 (https://d3js.org). The spec is implemented as a portable JSON format to easily share confusion matrix configurations with other stakeholders (T4).

5 Model Evaluation Case Studies

The following three case studies showcase how Neo helps practitioners evaluate models across different domains, including object detection, large-scale image classification, and online toxicity detection.

5.1 Finding Hidden Confusions

Recent work on screen recognition showed how machine learning can create accessibility metadata for mobile applications directly from pixels [41]. An object detection model trained on 77,637 screens extracts user-interface elements from screenshots on-device. The publication includes a confusion matrix for a 13-class classifier that reports and summarizes model performance (test set contains 5,002 instances). With Neo, we can further analyze this confusion matrix and find hidden confusions to help improve the end-user experience of the model.

First, Neo loads the confusion matrix with the default "Accuracy" metric appended on the right-hand side, as seen in Figure 7A. By excluding cells with 0 confusions from the visualization, we can quickly see which class pairs have confusions and which do not. Looking at the accuracies, we see good performance across classes, but notice in the visualization that a few cells dominate the color encoding. When a dataset has strong class count imbalance, i.e., the class distribution is not equal, "Accuracy" is a misleading metric for evaluating a multi-class model. We confirm this by adding the "Count Observed" metric in the specification to see that the Text and Icon classes contain many more instances, 42k and 18k respectively (Figure 7A).

With Neo, we normalize the confusion matrix by the row or column probabilities, seen in Figure 7B, which automatically remaps the color encoding to reveal hidden confusions. These normalizations are closely related to precision and recall, two other metrics practitioners use to better inspect performance per class. After adding these metrics to the spec, we see low recall (with row normalization) and low precision (with column normalization) for the Checkbox (selected) class; digging into the confusion matrix shows errors with the Icon class (Figure 7B, right). We also see confusions between the Container class and SegmentedControl and TextField that were previously hidden in Figure 7A. Neo's design, metrics, and normalization features make error analysis actionable by surfacing hidden error patterns to model builders.

5.2 Traversing Large Hierarchical Image Classifications

Achieving high accuracy on ImageNet, with its 1.2M+ data instances spread across 1,000 classes, is a standard large-scale benchmark for image classification. Most work considers ImageNet classes as a flat array, but the classes originate from the WordNet [23] hierarchy. To test Neo's scalability, we analyze the results of a ResNet152-V2 [12] deep learning model trained on ImageNet, including its hierarchical structure. The validation set contains 50,000 images.

Figure 9: (A) Conditioning a confusion matrix for a toxicity classification model on identity hate comments filters all confusions according to the value of the label. (B) Nesting the confusion matrix by toxic mild and toxic severe allows us to visualize both labels simultaneously.

When loading a large hierarchical confusion matrix, Neo defaults to collapsing all sub-hierarchies and starting at the root. In this configuration, the metrics show the aggregate performance of the entire model, but as we expand into sub-hierarchies, these metrics are recomputed per sub-hierarchy and class. Beginning at the root node of the hierarchy, we expand to an early sub-hierarchy titled object-physical that contains three sub-categories, each of which we filter for analysis. The first category, part-portion, expands fully to contain classes of cloth and towels. The performance on this sub-hierarchy is rather good (strong diagonal), so we continue. Second, the geological-form category expands fully to contain classes of natural landscapes (Figure 8A). While accuracy is high on most classes (91–99%), one sub-hierarchy, shore, is lower than the others (88%). Shore contains two classes, lakeside and seaside. There are a few confusions between the two, which is expected given their semantic similarity, but there exists another set of confusions between these classes and sandbar and promontory (a point of high land that juts out into a large body of water), which both belong to a different sub-hierarchy (Figure 8A). Neo enables practitioners to discover these confusions across different sub-hierarchies.

The third and final category, whole-unit, in our original sub-hierarchy contains hundreds of classes. We are now interested in inspecting the performance of living things in our model, i.e., the organism-being category, which contains four sub-hierarchies: person-individual, plant-flora, and fungus all perform well, but animal-animate contains many classes with confusions. Expanding animal-animate shows two classes of interest: chordate-vertebrate and invertebrate, a biological distinction between groups of animals that do or do not have a backbone. This 2x2 confusion matrix is useful for comparing this meaningful high-level sub-hierarchy (Figure 8B), but expanding both categories only one level deeper presents multiple directions for deeper analysis by comparing confusions from familiar animals within and between one another (Figure 8C). Lastly, throughout this analysis Neo’s specification has been automatically updating with the exact configuration, so the exact view can be saved and shared with any other project stakeholder.

5.3 Detecting Multi-class and Multi-label Online Toxicity

To make online discussion more productive and respectful, an open challenge tasked developers with building toxicity classifiers [17]. Based on 159,571 Wikipedia comments labeled by people for toxic behavior, this is a multi-class, multi-label classification problem containing 6 types of non-mutually exclusive behavior: mild toxicity, severe toxicity, obscene, threat, insult, and identity hate; e.g., a comment can be both mildly toxic and a threat. We analyze the results of a naive one-vs-rest logistic regression classifier on a test set of 47,872 comments.

Neo defaults to loading mild toxic comments as the first label to consider. Because this is a multi-output label confusion matrix, Neo visualizes the 2x2 matrix of mild toxic comments against none, i.e., everything else. The interactive shelf tells us that the other 5 classes are currently marginalized. In Figure 9, left, the “Count Obs.” metric tells us this dataset has a large class imbalance, i.e., there are many more non-toxic comments than there are toxic comments. This means our model could struggle with false negatives. Checking this metric indicates that indeed, of the approximately 5k mild toxic comments, our naive model only correctly predicts around 2k of them, leaving nearly 3k false negatives. This is an early indication that our model architecture may not suffice for this dataset.

We are interested in other hurtful online discussion that could cause emotional harm, therefore we only want to consider identity hate comments. To do this, we condition the confusion matrix on identity hate (Figure 9A). Figure 9, middle, shows that the model does a much better job identifying mild toxic comments given the instances are also identity hate, but there are still mild toxic false negatives present.

Beyond mild toxic, we now want to inspect more serious comments within severe toxic. In Neo, we activate and nest severe toxic comments under mild toxic comments to consider the occurrence of both multi-output labels simultaneously (Figure 9B). From Figure 9, right, we see the model correctly identifies some of these comments but suffers from a similar problem as with mild toxic comments in that it has many false negatives. We could consider other confusion matrix configurations, such as nesting obscene under mild toxic comments to form a larger hierarchy (as shown in the teaser figure, C), but already we can confidently conclude that our first model cannot distinguish between mild toxic comments and benign comments. To improve this model, the next step is likely choosing a different architecture, such as a long short-term memory model, that can learn richer features from the raw text data.

6 Discussion and Future Work

We have demonstrated how our confusion matrix algebra and its implementation in Neo can help practitioners evaluate their models. Our work opens up many research directions that envision a future where confusion matrices can be further transformed into a powerful and flexible performance analysis tool.

Confusion Matrix Visualization Scalability

Scaling confusion matrix visualization remains an important challenge. In Neo, hierarchical data can be collapsed to focus on smaller submatrices, since comparing classes within nearby categories is usually more meaningful, e.g., comparing "apple" to "orange" instead of "apple" to "airplane." However, in the case of a one-dimensional, large confusion matrix (e.g., more than 100 classes), the conventional representation suffers from scalability problems. Beyond scrolling and zooming, we envision richer interactions and extensions to our algebra to handle larger confusion matrices, for example, a "NOT" operation that ignores columns to produce a better color mapping for finding smaller matrix cells, or leveraging classic table seriation techniques [6].

Automatic Submatrix Discovery

Related to scalability, further algorithmic advancements could help automatically find interesting submatrices of a confusion matrix. From our formative research, this was briefly discussed in the context of a large confusion matrix. Automatically finding groups of cells in the matrix based on some metric, e.g., low-performing classes, could help guide a practitioner towards important confusions to fix and reduce the number of cells on screen.

Interactive Analysis with Metadata

In large data annotation efforts, other metadata is collected besides the raw data features and label(s). How can we use this other metadata to explore model confusions? For example, in an image labelling task, annotators may be asked to draw a bounding box around an object. We could then ask questions about patterns of confusions when metadata such as "bounding box" was small, represented in our confusion matrix algebra as $P(\text{actual}, \text{pred} \mid \text{bounding box} = \text{small})$. We could also compute new metrics like the percentage of small bounding boxes when the "apple" class was confused with "orange," represented in our confusion matrix algebra as $P(\text{bounding box} = \text{small} \mid \text{actual} = \text{apple}, \text{pred} = \text{orange})$. Interactive query interfaces that support these types of questions could help practitioners attribute confusions to specific data features or other metadata, saving time when searching for error patterns.
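A sketch of what such metadata-aware queries could look like over raw instance records; the record layout, the bbox_area field, and the smallness threshold are all hypothetical:

```python
# Each record: (actual, predicted, metadata). Toy data for illustration.
records = [
    ("apple", "orange", {"bbox_area": 400}),
    ("apple", "apple",  {"bbox_area": 5000}),
    ("apple", "orange", {"bbox_area": 900}),
    ("lemon", "lemon",  {"bbox_area": 3000}),
]

SMALL = 1000  # assumed pixel-area threshold for a "small" bounding box

# P(actual, pred | bounding box = small): confusions among small boxes only.
small_pairs = [(a, p) for a, p, m in records if m["bbox_area"] < SMALL]

# P(bounding box = small | actual = apple, pred = orange).
confused = [m["bbox_area"] < SMALL for a, p, m in records
            if a == "apple" and p == "orange"]
frac_small = sum(confused) / len(confused) if confused else 0.0
print(small_pairs, frac_small)  # here, both apple/orange confusions are small
```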

Comparing Model Confusions Over Time

Machine learning development is an inherently iterative process [15, 4, 24, 13], where multiple models are often compared against each other. Two common comparison scenarios include (1) training multiple models at once, and (2) retraining a model after debugging. In the first scenario, it would be useful to interactively compare confusion matrices against one another to select the best performing model. In the second scenario, using a confusion matrix to compare an improved model against its original version could help practitioners test whether or not their improvements worked. One potential comparison technique could be to take the difference between confusion matrices to find the biggest changes.
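One way that difference could be computed, as a minimal sketch rather than an implemented Neo feature:

```python
import numpy as np

# Confusion matrices from two runs with identical class order (toy numbers).
cm_before = np.array([[50, 10], [8, 32]], dtype=float)
cm_after  = np.array([[55,  5], [6, 34]], dtype=float)

# Normalize each run so differing test-set sizes stay comparable, then diff.
diff = cm_after / cm_after.sum() - cm_before / cm_before.sum()

# The largest-magnitude cells show where model behavior changed most.
i, j = np.unravel_index(np.abs(diff).argmax(), diff.shape)
print(f"largest change at cell ({i}, {j}): {diff[i, j]:+.3f}")
```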

Creating Datasets from Confusions

While visualizing confusion matrices and aggregate errors helps practitioners debug their models, it can be useful to inspect individual data instances. From our formative research, practitioners expressed interest in extracting instances from confusion matrix cells. Future interactions such as filtering by confusion type and previewing instances within each cell could support extracting and creating new subsets of data for future error analysis.

7 Conclusion

From our formative research, one respondent reported that “confusion matrices are one type of analysis when analyzing performance… doing thorough analysis requires looking at lots of different distributions of the data.” This quote raises a keen point that while confusion matrices remain a ubiquitous visualization for model evaluation, they are only one view into model behavior. Regardless, confusion matrices continue to be an excellent tool to teach modeling fundamentals to novices and an invaluable tool for practitioners building industry-scale systems.

Acknowledgements.
We thank our colleagues at Apple for their time, effort, and help integrating our research with their work. We especially thank Lorenz Kern who helped us with obtaining initial datasets. Jochen Görtler is supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161.

References

  • [1] B. Alsallakh, A. Hanbury, H. Hauser, S. Miksch, and A. Rauber (2014) Visual methods for analyzing probabilistic classification data. IEEE Transactions on Visualization and Computer Graphics.
  • [2] B. Alsallakh, A. Jourabloo, M. Ye, X. Liu, and L. Ren (2017) Do convolutional neural networks learn class hierarchy? IEEE Transactions on Visualization and Computer Graphics.
  • [3] B. Alsallakh, Z. Yan, S. Ghaffarzadegan, Z. Dai, and L. Ren (2020) Visualizing classification structure of large-scale classifiers. In ICML Workshop on Human Interpretability in Machine Learning.
  • [4] S. Amershi, A. Begel, C. Bird, R. DeLine, H. Gall, E. Kamar, N. Nagappan, B. Nushi, and T. Zimmermann (2019) Software engineering for machine learning: a case study. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP).
  • [5] S. Amershi, M. Chickering, S. M. Drucker, B. Lee, P. Simard, and J. Suh (2015) ModelTracker: redesigning performance analysis tools for machine learning. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems.
  • [6] J. Bertin (1983) Semiology of graphics. University of Wisconsin Press.
  • [7] D. Bruckner (2014) ML-o-scope: a diagnostic visualization system for deep machine learning pipelines. Technical report, Defense Technical Information Center.
  • [8] O. Caelen (2017) A Bayesian interpretation of the confusion matrix. Annals of Mathematics and Artificial Intelligence.
  • [9] S. K. Card, J. D. Mackinlay, and B. Shneiderman (1999) Readings in information visualization: using vision to think. Morgan Kaufmann Publishers Inc.
  • [10] E. F. Codd (1970) A relational model of data for large shared data banks. Communications of the ACM.
  • [11] G. R. Gibbs (2007) Thematic coding and categorizing. Analyzing Qualitative Data.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Identity mappings in deep residual networks. In European Conference on Computer Vision.
  • [13] A. Hinterreiter, P. Ruch, H. Stitz, M. Ennemoser, J. Bernard, H. Strobelt, and M. Streit (2020) ConfusionFlow: a model-agnostic visualization for temporal analysis of classifier confusion. IEEE Transactions on Visualization and Computer Graphics.
  • [14] R. Hogg and E. Tanis (2020) Probability and statistical inference. Pearson. ISBN 013518939X.
  • [15] F. Hohman, K. Wongsuphasawat, M. B. Kery, and K. Patel (2020) Understanding and visualizing data iteration in machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
  • [16] M. Hossin and M. N. Sulaiman (2015) A review on evaluation metrics for data classification evaluations. International Journal of Data Mining & Knowledge Management Process.
  • [17] Jigsaw (2017) Toxic comment classification challenge. Kaggle.
  • [18] M. Kahng, P. Y. Andrews, A. Kalro, and D. H. Chau (2017) ActiVis: visual exploration of industry-scale deep neural network models. IEEE Transactions on Visualization and Computer Graphics.
  • [19] S. Kandel, R. Parikh, A. Paepcke, J. M. Hellerstein, and J. Heer (2012) Profiler: integrated statistical analysis and visualization for data quality assessment. In Proceedings of the International Working Conference on Advanced Visual Interfaces.
  • [20] A. Kapoor, B. Lee, D. Tan, and E. Horvitz (2010) Interactive optimization for steering machine classification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ISBN 9781605589299.
  • [21] M. B. Kery, D. Ren, F. Hohman, D. Moritz, K. Wongsuphasawat, and K. Patel (2020) Mage: fluid moves between code and graphical work in computational notebooks. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology.
  • [22] D. Krstinić, M. Braović, L. Šerić, and D. Božić-Štulić (2020) Multi-label classifier performance evaluation with confusion matrix. In International Conference on Soft Computing, Artificial Intelligence and Machine Learning (SAIM 2020).
  • [23] G. A. Miller (1995) WordNet: a lexical database for English. Communications of the ACM.
  • [24] K. Patel, N. Bancroft, S. M. Drucker, J. Fogarty, A. J. Ko, and J. Landay (2010) Gestalt: integrated support for implementation and analysis in machine learning. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.
  • [25] D. Ren, S. Amershi, B. Lee, J. Suh, and J. D. Williams (2016) Squares: supporting interactive performance analysis for multiclass classifiers. IEEE Transactions on Visualization and Computer Graphics.
  • [26] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer (2016) Vega-Lite: a grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics.
  • [27] C. Seifert and E. Lex (2009) A novel visualization approach for data-mining-related classification. In 2009 13th International Conference Information Visualisation.
  • [28] H. Shen, H. Jin, Á. A. Cabrera, A. Perer, H. Zhu, and J. I. Hong (2020) Designing alternative representations of confusion matrices to support non-expert public understanding of algorithm performance. Proceedings of the ACM on Human-Computer Interaction.
  • [29] C. Stolte, D. Tang, and P. Hanrahan (2002) Polaris: a system for query, analysis, and visualization of multidimensional relational databases. IEEE Transactions on Visualization and Computer Graphics.
  • [30] C. Stolte, D. Tang, and P. Hanrahan (2002) Query, analysis, and visualization of hierarchically structured data using Polaris. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  • [31] A. Sun and E. Lim (2001) Hierarchical text classification and evaluation. In Proceedings of the 2001 IEEE International Conference on Data Mining.
  • [32] R. Susmaga (2004) Confusion matrix visualization. In Intelligent Information Processing and Web Mining.
  • [33] J. Talbot, B. Lee, A. Kapoor, and D. S. Tan (2009) EnsembleMatrix: interactive visualization to support machine learning with multiple classifiers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ISBN 9781605582467.
  • [34] N. Tötsch and D. Hoffmann (2021) Classifier uncertainty: evidence, potential impact, and probabilistic treatment. PeerJ Computer Science.
  • [35] S. van den Elzen and J. J. van Wijk (2011) BaobabView: interactive construction and analysis of decision trees. In 2011 IEEE Conference on Visual Analytics Science and Technology.
  • [36] J. Wexler (2017) Facets: an open source visualization tool for machine learning training data. http://ai.googleblog.com/2017/07/facets-open-source-visualization-tool.html
  • [37] L. Wilkinson and M. Friendly (2009) The history of the cluster heat map. The American Statistician.
  • [38] K. Wongsuphasawat, Z. Qu, D. Moritz, R. Chang, F. Ouk, A. Anand, J. Mackinlay, B. Howe, and J. Heer (2017) Voyager 2: augmenting visual analysis with partial view specifications. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
  • [39] Q. Yang, J. Suh, N. Chen, and G. Ramos (2018) Grounding interactive machine learning tool design in how non-experts actually build models. In Proceedings of the 2018 Designing Interactive Systems Conference.
  • [40] J. Zhang, Y. Wang, P. Molino, L. Li, and D. S. Ebert (2019) Manifold: a model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE Transactions on Visualization and Computer Graphics.
  • [41] X. Zhang, L. de Greef, A. Swearngin, S. White, K. Murray, L. Yu, Q. Shan, J. Nichols, J. Wu, C. Fleizach, A. Everitt, and J. P. Bigham (2021) Screen recognition: creating accessibility metadata for mobile applications from pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.
  • [41] X. Zhang, L. de Greef, A. Swearngin, S. White, K. Murray, L. Yu, Q. Shan, J. Nichols, J. Wu, C. Fleizach, A. Everitt, and J. P. Bigham (2021) Screen recognition: creating accessibility metadata for mobile applications from pixels. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. External Links: Document Cited by: §5.1.