Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory

Understanding how ML models work is a prerequisite for responsibly designing, deploying, and using ML-based systems. With interpretability approaches, ML can now offer explanations for its outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately solve for improving the artifact – an explanation. In this paper, we propose an alternate framework for interpretability grounded in Weick's sensemaking theory, which focuses on who the explanation is intended for. Recent work has advocated for the importance of understanding stakeholders' needs – we build on this by providing concrete properties (e.g., identity, social context, environmental cues, etc.) that shape human understanding. We use an application of sensemaking in organizations as a template for discussing design guidelines for Sensible AI, AI that factors in the nuances of human cognition when trying to explain itself.


1. Introduction

With ML-based systems being deployed in the wild, it is imperative that all stakeholders of these systems have some understanding of how the underlying ML model works. From the experts who develop algorithms, to the practitioners who design and deploy ML-based systems, to the end-users who ultimately interact with these systems, stakeholders require varying levels of understanding of ML to ensure that these systems are used responsibly. Approaches like interpretability and explainability have been proposed as a way to bridge the gap between ML models and human understanding. These include models that are inherently interpretable (e.g., decision trees (Quinlan, 1986), simple point systems (Zeng et al., 2017; Jung et al., 2017), or generalized additive models (Caruana et al., 2015; Hastie and Tibshirani, 1990)) and post-hoc explanations for the predictions made by complex models (e.g., LIME (Ribeiro et al., 2016), SHAP (Lundberg and Lee, 2017)). Tools that implement interpretability and explainability approaches have also been made available for public use. In light of this, recent work in HCI has evaluated the efficacy of these tools in helping people understand ML models. These findings suggest that ML practitioners (Kaur et al., 2020) and end-users (Bansal et al., 2021; Kocielnik et al., 2019) are not always able to make accurate judgments about the model, even with the help of explanations. In fact, having access to these tools often leads to over-trust in the ML models. Ultimately, noting that interpretability and explainability are meant for the stakeholders, recent work has proposed design guidelines for explanations based on research in the social sciences about how people explain things to each other (Miller et al., 2017; Miller, 2019). Taking a human-centered or a model-centered approach, this prior work seeks to answer: what are the characteristics of an explanation that can help people understand ML models?

Let us consider a real-world setting. Imagine you are a doctor in a healthcare organization that has decided to use an ML-based decision-support software to help with medical diagnosis. The system takes as input information about patients’ symptoms, demographics, family history, etc., and returns a predicted diagnosis. Naturally, you want to be able to overview why the software predicted a certain diagnosis before you suggest treatment based on its prediction. Further, you want to be able to explain to the patient why you (did not) trust and follow the predicted diagnosis. To aid with this, the software provider gives you access to an explanation system (e.g., LIME (Ribeiro et al., 2016), SHAP (Lundberg and Lee, 2017)) which shows: (1) a local explanation (e.g., a bar chart) of the input features that were most important for the diagnosis made for a specific patient, (2) a global explanation for the features that are usually important to the model when making a prediction, and (3) an overview of each feature’s relationship with the output classes. The explanation system also includes interactive elements so you can ask “what if” questions based on different combinations of input features.
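As a rough, hedged illustration of these three views (not the cited tools themselves), the sketch below uses scikit-learn on synthetic data. The feature names are invented, permutation importance stands in for the global explanation, and a crude mean-substitution occlusion stands in for a LIME/SHAP-style local explanation.

```python
# Illustrative only: mimics the local / global / what-if views on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["age", "fever_days", "family_history", "blood_pressure", "bmi"]  # invented
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# (1) Local explanation: how much does each feature move this patient's score?
patient = X[0]
base = model.predict_proba([patient])[0, 1]
for i, name in enumerate(feature_names):
    occluded = patient.copy()
    occluded[i] = X[:, i].mean()           # crudely "remove" the feature's information
    delta = base - model.predict_proba([occluded])[0, 1]
    print(f"local  {name:15s} contribution ~ {delta:+.3f}")

# (2) Global explanation: which features matter to the model on average?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, name in enumerate(feature_names):
    print(f"global {name:15s} importance   ~ {imp.importances_mean[i]:.3f}")

# (3) Interactive "what if": change one input and re-query the model.
what_if = patient.copy()
what_if[1] += 3.0                          # e.g., three more days of fever
print("what-if predicted risk:", model.predict_proba([what_if])[0, 1])
```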

Is this enough to ensure that the ML-based decision-support software can be reliably used by the doctor? We claim that the answer to this question is no. This paper makes the argument that current interpretability and explainability solutions will always fall short of helping people reliably use ML-based systems for decision-making because of their focus on designing better explanations—in other words, improving an artifact. For example, while the explanation shows the symptoms that were important to the model’s prediction (i.e., a local explanation), it does not tell the doctor to be cautious that the patient’s other symptoms are fluctuating, that the patient belongs to a sub-group for which the model has limited training data, or that the nurses have noticed other relevant symptoms in the visiting family. From the patient’s perspective, the explanation does not convey why, for example, their fear of having a particular disease (after an online symptom search or from family history) is unwarranted in this instance. These factors, which have little to do with the particular explanation, can alter the stakeholders’ decision-making in significant ways. Here, we propose a specific theoretical framework to shift from improving the artifact (e.g., an explanation or explanation system) to understanding how humans make sense of complex, and sometimes conflicting, information. Recent work supports this shift from what an explanation should look like to who it is intended for. Properties of the who, such as prior experience with AI and ML (Ehsan et al., 2021b), attitude towards AI (e.g., algorithmic aversion (Burton et al., 2020; Dietvorst et al., 2015)), and the socio-organizational context (Ehsan et al., 2021a), have been observed to be critical to understanding AI and ML outputs. We extend this work by providing a framework for how to incorporate human-centered principles into interpretability and explainability.

In this paper, we present Weick’s sensemaking as a framework for envisioning the needs of people in the human-machine context. Weick describes sensemaking as, quite literally, “the making of sense,” or “a developing set of ideas with explanatory possibilities” (Weick, 1995). Although Weick’s definition is similar to that of prior work in HCI and information retrieval, the two deviate in their goals; the latter defines sensemaking as finding representations that enable information foraging and question-answering (Pirolli and Card, 2005; Russell et al., 1993). Weick’s sensemaking is more procedural: “placement of items into frameworks, comprehending, redressing surprise, constructing meaning, interacting in pursuit of mutual understanding, and patterning” (Weick, 1995, p.6). These processes are influenced by one’s identity, environment, social, and organizational context—Weick expands these into the seven properties of sensemaking (Figure 1, Right). For example, for the doctor trying to diagnose a patient with the help of an ML-based system (with explanations), their understanding of the predicted diagnosis can be influenced by questions such as, have they recently diagnosed another patient with similar symptoms; is the patient’s care team in agreement on a diagnosis; is the predicted diagnosis plausible; and, which symptoms are more visible and does the explanation present these as important to the prediction. The seven properties of sensemaking are a framework for identifying and understanding these contextual factors.

Figure 1. Left: DARPA’s conceptualization of Explainable AI, adapted from (Gunning and Aha, 2019). Right: Weick’s sensemaking properties (1–7) categorized using the high-level Enactment-Selection-Retention organizational model, adapted from (Jennings and Greenwood, 2003). Enactment includes properties about perceiving and acting on environmental changes; Selection, properties related to interpreting what the changes mean; and Retention, properties that describe storing and using prior experiences (Kudesia, 2017). Our Sensible AI framework extends the existing definition of interpretability and explainability to include Weick’s sensemaking properties.

What does this knowledge of sensemaking offer to interpretability and explainability researchers and tool designers? A sensemaking perspective tells us how things beyond the individual (i.e., the environmental, social, and organizational contexts) shape individual cognition. It gives us a path forward. Prior work in organizational studies has used sensemaking to identify ways in which teams and organizations can be made more reliable. These high-reliability organizations (HROs) can serve as a template for designing Sensible AI, AI that accounts for the nuances of human cognition when explaining itself to people. We extend the principles that make HROs reliable (e.g., a preoccupation with failure, a sensitivity to low-level operations, a reluctance to simplify anomalous situations) as guidelines for designing Sensible AI. Within our healthcare example, Sensible AI might take the form of a system that highlights the most significant ways in which a change in input features would change the predicted diagnosis; shows cases with similar input features but different diagnoses; presents input features that were considered less important by the model; asks all members of the patient care team to review the diagnosis individually first, allowing for a diversity of opinions and discussion opportunities; and asks for further explanation for cases in which the predicted diagnosis was disregarded, to inform future test cases. Our hope is that researchers and designers can translate our Sensible AI design guidelines into technical and social checks and balances in their tools, to better support human cognition as described by sensemaking.
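As a hedged sketch of the first of these features, the function below searches for the smallest single-feature change that would flip the model’s predicted diagnosis. Here, `model` is assumed to be any fitted classifier with a predict() method and `X_train` a NumPy array of training inputs; this is an illustration, not a full counterfactual-explanation method.

```python
import numpy as np

def single_feature_counterfactuals(model, x, X_train, steps=20):
    """For each feature, find the smallest tested change that flips the prediction."""
    original = model.predict([x])[0]
    flips = []
    for i in range(len(x)):
        grid = np.linspace(X_train[:, i].min(), X_train[:, i].max(), steps)
        # try candidate values closest to the patient's current value first
        for value in sorted(grid, key=lambda v: abs(v - x[i])):
            candidate = x.copy()
            candidate[i] = value
            if model.predict([candidate])[0] != original:
                flips.append((i, value, abs(value - x[i])))
                break
    # smallest changes first: the most significant "levers" on the diagnosis
    return sorted(flips, key=lambda f: f[2])
```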

2. Interpretability and Explainability

2.1. What are interpretability and explainability?

Interpretability is defined from a model’s perspective as the “ability to explain or to present in understandable terms to a human” (Doshi-Velez and Kim, 2017, p.2). It serves as a proxy for other desiderata for ML-based systems such as reliability, robustness, transferability, and informativeness. These properties in turn promote trustworthiness, accountability, and fair and ethical decision-making (Doshi-Velez and Kim, 2017; Lipton, 2018). At a high level, interpretability approaches can be categorized into glassbox models (e.g., (Quinlan, 1986; Jung et al., 2017; Zeng et al., 2017; Caruana et al., 2015; Lakkaraju et al., 2016; Hastie and Tibshirani, 1990)) and post-hoc explanations for blackbox models (e.g., (Ribeiro et al., 2016; Lundberg and Lee, 2017; Alvarez-Melis and Jaakkola, 2017; Selvaraju et al., 2017; Simonyan et al., 2013)). As these approaches have been instantiated in user-facing tools, the static explanations output by mathematical representations of interpretability have been joined by the interactive visuals output by explainable AI. Although similarly defined, this idea of explainability is more human-centered and is “associated with the notion of an explanation as an interface between humans and a decision maker that is, at the same time, both an accurate proxy of the decision maker and comprehensible to humans” (Arrieta et al., 2020, p.85) (Figure 1, Left). Scholars have incorporated prior work from philosophy (e.g., (Hempel and Oppenheim, 1948; Peirce, 1878; van Fraassen, 1988; Lipton, 1990; Pitt, 1988; Grice, 1975)), the social sciences (e.g., (Lombrozo, 2006, 2012; Miller et al., 2017; Miller, 2019; Leake, 1991; Slugoski et al., 1993; Malle, 2006; Hilton, 1996; Lomborg and Kapsch, 2020; Nisbett and Wilson, 1977)), and HCI (e.g., (Bellotti and Edwards, 2001; Norman, 2014; Weld and Bansal, 2019; Zhu et al., 2018; Gillies et al., 2016; Passi and Jackson, 2018; Dourish, 2016)) with the motivation that, by translating ideas from how people explain things to each other, we can design better solutions for how ML models can be explained to people. As a result, interpretability and explainability tools increasingly include characteristics such as interactivity (Amershi et al., 2014; Hohman et al., 2019), counterfactual “what-if” outputs (Miller, 2021; Wachter et al., 2017), and modular and sequential explanations (Melis et al., 2021).
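As a small illustration of the glassbox category (ours, not drawn from the cited works), a shallow decision tree’s learned rules can be printed and read directly, whereas a blackbox model would need a post-hoc explainer such as LIME or SHAP.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
# Prints the learned if/then rules (threshold splits on petal measurements),
# which are themselves the explanation; no separate explainer is needed.
```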

Several comprehensive reviews (e.g., (Abdul et al., 2018; Arrieta et al., 2020; Liao et al., 2020; Wang et al., 2019; Zhang and Lim, 2022)) synthesize and describe design considerations for the field. Based on a review of 289 core papers and 12,412 citing papers, Abdul et al. highlight two trends: (1) a move from early AI work (e.g., in Expert Systems (Swartout, 1983; Davis et al., 1977)) to FAccT-centric ways of providing explanations; and (2) addressing macroscopic societal accountability in addition to helping individual users understand ML outputs (Abdul et al., 2018). Arrieta et al. taxonomize 409 papers to clarify terminology (e.g., interpretability, understandability, comprehensibility, etc.); describe interpretability approaches for shallow and deep learning models; and highlight the challenges for responsible AI (Arrieta et al., 2020).

2.2. Understanding the “who” in interpretability and explainability

Scholars in ML, HCI, and social science communities have advocated for the importance of understanding who the explanations are intended for. Their work identifies principles about stakeholders that are relevant in the human-machine context. Cognitive factors (e.g., mental models, type of reasoning) have been shown to be important. For example, accurate mental models and deliberative reasoning can help avoid ML practitioners’ misuse of, and over-reliance on, interpretability outputs (Kaur et al., 2020). This also applies to end-users without ML expertise (Buçinca et al., 2021); otherwise, explanations increase the likelihood that an end-user will accept an AI’s output, regardless of its correctness (Bansal et al., 2021). For end-users, completeness (rather than soundness) of explanations helps people form accurate mental models (Kulesza et al., 2013). Accuracy and example-based explanations can similarly shape people’s mental models and expectations, albeit in different ways (Kocielnik et al., 2019).

Prior experience and background in ML are also important. Variance in these can result in preset expectations, which can lead to over- or under-use of explanations (Ehsan et al., 2021b). Job- and task-dependent information needs also shape how (much) people internalize explanations. Explanation interfaces that are interactive and collaborative can improve overall accuracy (Stumpf et al., 2009). Additionally, explanations from glassbox models with fewer features are easier for end-users to understand (Poursabzi-Sangdeh et al., 2021). For ML practitioners, specific types of explanation visuals (e.g., local vs. global, sequential vs. collective) differ in how much they help them understand and debug models, and explain them to customers (Hohman et al., 2019; Melis et al., 2021). Finally, social, organizational, and socio-organizational context is important. For example, (Hong et al., 2020; Veale et al., 2018; Madaio et al., 2020; Holstein et al., 2019; Zhu et al., 2018) all highlight the challenges of operating within an organization that either develops or employs an AI-based system. Stakeholders within and outside the organization can have conflicting needs from the system—technical interpretability and explainability approaches are unable to account for these.

These studies from the ML, HCI, and social science communities have all highlighted relevant factors about the “who” in interpretability and explainability. Our proposed framework complements these evaluations: it unifies them based on sensemaking theory translated from organizational studies. We explain how individual, social, and organizational factors can affect the human-machine context, and provide a path forward that accounts for these who-centered factors.

3. Sensemaking

Property: Identity Construction. Human-Human Context: Sensemaking is a question about who I am as indicated by the discovery of how and what I think. Human-Machine Context: Given multiple explanations, people will internalize the one(s) that support their identity in positive ways.
Property: Social. Human-Human Context: What I say and single out and conclude are determined by who socialized me and how I was socialized, and by the audience I anticipate will audit the conclusions I reach. Human-Machine Context: Differences in micro- and macro-social contexts affect the effectiveness of explanations.
Property: Retrospective. Human-Human Context: To learn what I think, I look back over what I said earlier. Human-Machine Context: Providing explanations before people can reflect on the model and its predictions negatively affects sensemaking.
Property: Enactive. Human-Human Context: I create the object to be seen and inspected when I say or do something. Human-Machine Context: The order in which explanations are seen affects how people understand a model and its predictions.
Property: Ongoing. Human-Human Context: Understanding is spread across time and competes for attention with other ongoing projects, by which time my interests may already have changed. Human-Machine Context: The valence and magnitude of emotion caused by an interruption during the process of understanding explanations from interpretability tools change what is understood.
Property: Focused on Extracted Cues. Human-Human Context: The ‘what’ that I single out and embellish is only a small portion of the original utterance, that becomes salient because of context and personal dispositions. Human-Machine Context: Highlighting different parts of explanations can lead to varying understanding of the underlying data and model.
Property: Plausibility over Accuracy. Human-Human Context: I need to know enough about what I think to get on with my projects, but no more, which means sufficiency and plausibility take precedence over accuracy. Human-Machine Context: Given plausible explanations for a prediction, people are not inclined to search for the accurate one amongst these.
Table 1. An overview of the seven properties of sensemaking, their description in the human-human context, and our proposed claims for the human-machine context grounded in each property.

Sensemaking describes a framework for the factors that influence human understanding; “the sensemaking perspective is a frame of mind about frames of mind” (Weick, 1995, p.xii). It is most prominent in discrepant or surprising events. People try to put stimuli into frameworks, particularly when predictions or expectations break down. That is, when people come across new or unexpected information, they like to add structure to this unknown. The process by which they do this, why they do it, and how it affects them and their understanding of the world are all central to sensemaking.

Sensemaking subsumes interpretability. (Although interpretability is defined as model-centric and explainability as human-centric, there is not yet consensus on how these terms differ from an implementation point of view. Since “interpretability” is commonly used in describing tools that output explanations, we use this term for the rest of the paper. We follow similar terminology choices with ML- (rather than AI-) based systems, since interpretability is attributed to ML models.) The two share the same goal: understanding an outcome or experience. If an ML-based system could explain itself, we could verify whether its reasoning is sound based on auxiliary criteria (e.g., safety, nondiscrimination), and determine whether the system meets other desiderata such as fairness, reliability, causality, and trust (Doshi-Velez and Kim, 2017; Lipton, 2018). Sensemaking includes all of this and more. Sensemaking not only considers the information being presented to the person doing the meaning-making, but also additional contextual nuances that affect whether and how this information is internalized. These include factors such as the enacted environment, the individual’s identity, their social and organizational networks, and prior experiences with similar information.

In the subsections that follow, we describe Weick’s seven properties of sensemaking in the human-human context and translate them for the human-machine context (see Table 1 for an overview). To concretize how these properties might affect stakeholders of ML-based systems, we present an example user vignette for each property. Prior work has applied a similar methodology when translating theory (Miles and Huberman, 1994; Alkhatib, 2021). While the examples are crafted based on popular press articles and research papers, they are not intended to be representative of these cases. We use them to highlight a sensemaking property, but we do not claim that the property has a causal relationship with the example; there could be other reasons why the ML-based systems functioned the way they are described in these articles.

3.1. Grounded in Identity Construction

Identity is critical for AI/ML sensemaking because people understand these systems only in ways that are congruent with their existing beliefs, or they update their beliefs in ways that shed a positive light on themselves. For interpretability, this suggests that, given multiple explanations, people will internalize the one(s) that support their identity in positive ways.

3.1.1. Identity Construction in the Human-Human Context

Sensemaking begins with the sensemaker. In this way, sensemaking is innately human-centered: “how can I know what I think until I see what I say?” (Weick, 1995, p.18). It is grounded in the individual’s need to have a clear sense of identity. People make sense of something to either support their existing beliefs or update them when applying their beliefs leads to a breakdown in their understanding. Weick notes five things of importance for identity and sensemaking (Weick, 1995, pp.23-24): (1) controlled, intentional sensemaking is triggered by a failure to confirm one’s self; (2) sensemaking is grounded in the desire to maintain a consistent, positive self-conception; (3) people learn about their identities by projecting them into an environment—which includes their social, organizational, and cultural contexts—and observing the consequences; (4) sensemaking via identity construction is a mix of proaction and reaction; and (5) sensemaking is self-referential in that the self is what ultimately needs interpreting—what a given situation means is defined by the identity that an individual relies on while understanding it.

The relationship between identity and sensemaking is not limited to the individual sensemaker. The influence of social context can be seen in how identity is constructed. Weick describes this influence using three definitions of identity. First, Mead’s claim that the mind and self are developed through the communicative processes among people (i.e., social behaviorism); individuals are composed of “a parliament of selves” which reflect their various social contexts (Mead, 1934). Second, Knorr-Cetina’s inclusion of social contexts based on the larger tapestry of social, organizational, and cultural norms, i.e., the macro-social (Knorr-Cetina, 1981). Finally, Erez and Earley’s three self-derived needs that shape identity, which include intrapersonal and interpersonal dynamics: (1) the need for self-enhancement, seeking and maintaining a positive cognitive and affective state about the self; (2) the self-efficacy motive, the desire to perceive oneself as competent and efficacious; and (3) the need for self-consistency, the desire to sense and experience coherence and continuity (Erez et al., 1993).

Sensemaking is made challenging by identity because the more identities an individual has, the more ways they can assign meaning to something. Given the fluidity of identity construction, people have to grapple with several, sometimes contradicting, ways of understanding. Sometimes, this flexibility and adaptability in one’s identity can be good. In most cases, however, this identity-based equivocality can cause confusion and cognitive burden and, in turn, lead people towards heuristics-based understanding (Reason, 1990).

3.1.2. Identity Construction in the Human-Machine Context

Consider Platform X, a popular social media site which uses an ML model for content moderation, with two stakeholders in mind. First, Sharon, a 42-year-old conservative in the U.S. who is against vaccination for COVID-19. Her recent posts include graphic descriptions and images of what she claims are the potential side-effects of getting vaccinated. Second, Avery, a 37-year-old doctor who believes it is their responsibility to share unfiltered information about the COVID-19 pandemic. Several of their posts highlight the positives of getting vaccinated, and some of them present the rare potential side-effects that have been noted by medical professionals. For both Sharon and Avery, some posts have been removed by Platform X’s content moderation model.

Social media platforms usually offer an explanation for post removal to maintain their user base and help people share content in line with their policies. With interpretability tools, these platforms can support richer explanations. Based on the local explanation from an interpretability tool, Sharon is told that her post was removed due to its content type, the number of her previously flagged posts, her predicted political affiliation based on her posting history, and the topic being COVID-19. She might immediately latch on to the predicted political affiliation as the explanation, and not try to understand the removal any further (i.e., sensemaking is not triggered because her identity remains intact). For Avery, who simply wants to share all relevant information given their identity as a doctor, the post removal might attack their needs for self-enhancement, self-efficacy, and self-consistency. As such, they might assume that the content type being graphic is the main reason for post removal—this would support their positive self-conception, and not require them to understand the model’s reasoning any further.

Interpretability tools are designed to present information in a context-free, unbiased way. But, people rarely internalize information in this static way. Weick argues that whether or how people internalize an explanation is dependent on their identity as an individual and as a part of their varying social contexts.

Claim: Given multiple explanations, people will internalize the one(s) that support their identity in positive ways.

3.2. Social

AI/ML sensemaking is modified by social context because it represents the audience-oriented external factors that influence people as they try to understand the outputs of these systems. For interpretability, this suggests that explanations are internalized differently by people with different micro- and macro-social contexts.

3.2.1. Social Elements of Sensemaking in the Human-Human Context

Sensemaking describes human cognition. This might give it the appearance of being about the individual, but it is not. Weick notes the work on socially shared cognition (e.g., (Resnick et al., 1991; Levine et al., 1993)) which shows that human cognition and social functioning are essential to each other. Specifically, an individual’s conduct is dependent on their audience, whether this is an imagined, implied, or a physically present one (Allport, 1985; Bruns and Stalker, 1961). Regarding the lack of a need for a physically present audience, recall Weick’s reference to Mead’s work on the individual being “a parliament of selves” (Mead, 1934) (see Section 3.1 for details on socially-grounded identity construction).

A focus on social aspects of sensemaking naturally implies that modes of communication (e.g., speech, discourse) and tools that support these also get attention, since these represent the ways in which social contact is mediated. Weick describes their importance on three levels, which exist beyond the individual: (1) inter-subjective, the conversations with others that can lead to alignment; (2) generic subjective, the socially-established norms when alignment has been achieved; and (3) extra-subjective, the culturally-established norms that do not necessarily require communication anymore. As we go from inter- to extra-subjective, the role of the implied and invisible audience becomes increasingly prominent. This, in turn, shapes the modes and tools of communication necessary for sensemaking.

3.2.2. Social Elements of Sensemaking in the Human-Machine Context

Consider the model developed for predicting diabetic retinopathy (DR) based on healthcare data (predominantly eye fundus photos) collected in the U.S. (Beede et al., 2020). The U.S. healthcare system is consistent across organizations—there is low variability in how eye fundus photos are captured, how the medical records are stored, and who (a generalist or specialist doctor) makes a diagnosis. However, when the same model was applied to a different social and cultural context—in Thailand, where healthcare is dependent on individual providers and patient needs in different regions—it failed in unanticipated ways.

First, there is an issue with the data itself. Several countries in Southeast Asia, including Thailand, do not have dedicated rooms for capturing fundus photos, making the photos inconsistent in opacity and leading to potentially inaccurate predictions. Second, there are established norms around the results of a DR screening test. While patients in the U.S. healthcare system often expect to receive results immediately, this is less common in Thailand, which has fewer technicians, doctors, and specialists. Patients living in smaller towns have to travel to larger cities for appointments with specialists. A patient who anticipates receiving their DR result 4-5 weeks later might not have budgeted enough time for travel if they are given a specialist referral on the same day as the DR screening test visit.

While interpretability tools may offer an explanation, these explanations are limited to the model and the training dataset. Weick’s perspective suggests that it might not be enough to explain the prediction, due to the variability in people’s social contexts when using predictions in real-world settings; recent work on domain and distributional shifts in ML datasets supports this perspective (Koh et al., 2021).

Claim: Differences in micro- and macro-social contexts affect the effectiveness of explanations.

3.3. Retrospective

Retrospection, or reflective thinking, influences AI/ML sensemaking by engaging people in deliberately thinking about the diverse interpretations of outputs when trying to understand these systems, instead of following the more automated, heuristics-based reasoning pathways. For interpretability, this suggests that providing explanations before people can reflect on the model and its predictions negatively affects sensemaking.

3.3.1. Retrospective Sensemaking in the Human-Human Context

Sensemaking is retrospective because the object of sensemaking is a lived experience. Weick describes the retrospective nature of sensemaking as the most important, but perhaps the least noticeable, property. The reason it so frequently goes unnoticed is because of how embedded retrospection is in the sensemaking process. Retrospective sensemaking is derived from the work of Schutz, who believes that meaning is “merely an operation of intentionality, which…only becomes visible to the reflective glance” (Schutz and Kersten, 1976; Schutz, 1972). The lived timeframe being considered for reflection can be the short- or long-term past, ranging from minutes, days, and years to “as I begin the latter portion of a long word, my utterance of the first part is already in the past” (Hartshorne, 1962, p.44).

The retrospective process starts with an individual’s present circumstances, and those shape the past experiences selected for sensemaking. Reflection happens in the form of a cone of light that starts with the present and spreads backwards. In this way, the cues of the past lived experience that are paid attention to for sensemaking depend on how the present is shaped. The challenge lies in which present to consider. People typically have several things on their mind at the same time, be it multiple projects at work or personal goals. With these, they have a multitude of lenses that they could apply for the reflective sensemaking process—the object of their sensemaking thus becomes equivocal. When dealing with equivocality, people are already overwhelmed with information and providing more details is often not helpful. “Instead, they need values, priorities, and clarity about preferences to help them be clear about which projects matter” (Weick, 1995, p.28). In looking for clarity on which meaning to select, people are prone to a hindsight bias (Staw, 1975). They select the most plausible story of causality for the outcome that they are trying to explain (Section 3.7 describes this property of sensemaking: being driven by plausibility over accuracy).

3.3.2. Retrospective Sensemaking in the Human-Machine Context

For ML-based systems, the model and its predictions are the “lived experiences.” Consider a radiologist tasked with reading chest radiographs to determine if a patient has COVID-19. The hospital has purchased an ML-based image classification system. To help determine if the predictions make sense, the software also provides saliency maps (an interpretability approach).
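For readers unfamiliar with the technique, the following is a generic, hedged sketch of a gradient-based saliency map, one common way to produce heatmaps like those in Figure 2. DeGrave et al. use their own attribution pipeline, and the model and input below are placeholders.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # placeholder for a trained COVID-19 classifier
model.eval()

radiograph = torch.randn(1, 3, 224, 224, requires_grad=True)  # dummy chest radiograph
scores = model(radiograph)
class_idx = scores.argmax(dim=1).item()

# Gradient of the predicted class score with respect to the input pixels
scores[0, class_idx].backward()
saliency = radiograph.grad.abs().max(dim=1).values   # one heatmap of shape (1, 224, 224)

# Pixels with large gradient magnitude most affect the prediction; if they sit on
# laterality markers rather than lung tissue, the model may be using a spurious cue.
```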

By immediately providing an explanation, the interpretability tool effectively disengages the retrospective process that helps with sensemaking. Figure 2 shows example explanations provided to the radiologist. As described in the accompanying research paper (DeGrave et al., 2021), these explanations show that the ML model sometimes relies on laterality markers to make the prediction. For example, in Figure 2, the saliency maps highlight not only the relevant regions in the lungs as being predictive, but also some areas (see pointers) that differ based on how the radiograph was taken. These, coincidentally, are also predictive of COVID-19 positive vs. negative results, leading to a spurious correlation.

Ideally, the radiologist evaluating the saliency map would be able to reach the same conclusion regarding these spurious correlations. However, the retrospective property would suggest that by providing this explanation without asking the radiologist to first think about what the explanation could be, the interpretability tool disengages their retrospective sensemaking process. This makes it easier for the radiologist to craft a plausible narrative that agrees with the model’s prediction instead of analyzing the radiograph in detail and accurately understanding the model. When they immediately have the explanation, there is no cognitive need for the radiologist to understand the intricacies of the model, which increases the likelihood of them missing the issues with the model. Prior work on stakeholders’ use of interpretability tools corroborates this perspective: people expect far more from interpretability tools than their actual capabilities and, in doing so, often end up over-trusting and misusing them (Bansal et al., 2021; Kaur et al., 2020).

Figure 2. Saliency maps for chest radiographs, adapted from (DeGrave et al., 2021).

Claim: Providing explanations before people can reflect on the model and its predictions negatively affects sensemaking.

3.4. Enactive of Sensible Environments

Enactment is critical for AI/ML sensemaking because it represents how (much) people understand these systems—it reflects the parts of these systems that people understand, and then build on, over time. For interpretability, this suggests that the order in which explanations are seen affects how people understand a model and its predictions.

3.4.1. Enactment in the Human-Human Context

When we are tasked with making sense of something, it might appear to belong to an external environment that we must observe and understand. Weick argues that this is not the case, that sensemaking works such that “people often produce part of the environment they face” (Weick, 1995, p.30). It is not just the person, rather, the person and their enacted environment that is the unit of analysis for sensemaking (Pondy and Mitroff, 1979).

This environment that provides the necessary context for sensemaking is not a monolithic, fixed environment that exists external to people. Rather, people act, and their actions shape the environmental context needed for sensemaking: “they act, and in doing so create the materials that become the constraints and opportunities they face” (Weick, 1995, p.31). Here, Weick is influenced by Follett, who claims that there is no subject or object in meaning-making. There is no meaning that one understands as the “result of the process;” there is just a “moment in process” (Follett, 1924, p.60). As such, this meaning is inherently contextual in that it is shaped by the cycle of action-enaction between the human and their environment.

Weick cautions against two things with regard to the enactive nature of sensemaking. First, we should not restrict our definition of the actions that shape our environment. Action here could mean creating, reflecting, or interpreting: “the idea that action can be inhibited, abandoned, checked, or redirected, as well as expressed, suggests that there are many ways in which action can affect meaning other than by producing visible consequences in the world” ((Blumer, 1969), described by Weick (Weick, 1995, p.37)). Second, enacted environments need not embody existing ones. People want to believe that the world is defined using pre-given features, i.e., that knowledge and meaning exist and we just need to find them. This is called Cartesian anxiety: “a dilemma: either we have a fixed and stable foundation for knowledge, a point where knowledge starts, is grounded, and rests, or we cannot escape some sort of darkness, chaos, and confusion” (Varela et al., 2016, p.140). When faced with equivocal meanings, people want to select ones that reduce Cartesian anxiety. But, in doing so, they also enable existing, socially constructed meanings to shape their sensemaking. This can be helpful in providing the clarity of values needed when faced with equivocality, or it can privilege some meanings over others, depending on agency and power (Ring and Van de Ven, 1989).

3.4.2. Enactment in the Human-Machine Context

Enactment is most apparent when ML-based systems are used in urgent or reactive situations, such as predictive policing. Consider PredPol, which uses location-based ML models that rely on connections between places and their historical crime rates to identify hot spots for police patrol (59).

Say a police officer is monitoring PredPol to allocate patrol units to various neighborhoods. The model’s predictions influence both the officer monitoring the software and those patrolling. Both will update their “environment” to be focused on certain neighborhoods. That is, they are primed to look for criminal activity in these neighborhoods. Additionally, when arrests are made using model predictions, they provide further evidence to the model that the patterns it has identified are accurate. In this way, the feedback loop causes the model to become increasingly biased (Heaven, 2020). If the police officers were also provided an explanation for the model’s predictions, the type of explanation and the order in which explanations are seen (e.g., global vs. local explanation first) would change the enacted environment for the officers. The sensemaking perspective offers several properties for how the environment could be shaped (e.g., people’s identity, social network).
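To make the feedback loop concrete, here is an illustrative simulation with invented numbers (it is not PredPol’s algorithm): two areas have identical true crime rates, patrols follow the model’s current belief, and only patrolled incidents are fed back to the model.

```python
import random

random.seed(0)
true_rate = [0.10, 0.10]        # both areas have the same underlying crime rate
arrests = [6, 5]                # historical arrest counts; area 0 starts slightly ahead
for day in range(60):
    belief = [a / sum(arrests) for a in arrests]
    # send 80 of 100 patrol units to whichever area the model currently favors
    patrols = [80, 20] if belief[0] >= belief[1] else [20, 80]
    for i in range(2):
        # only patrolled areas generate observed incidents, which feed the model
        arrests[i] += sum(random.random() < true_rate[i] for _ in range(patrols[i]))
print([round(a / sum(arrests), 2) for a in arrests])
# -> roughly [0.8, 0.2]: the model's belief ends up far more skewed than the
#    identical true rates warrant, because it only learns from where it patrols.
```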

Interpretability tools offer different types of information (e.g., feature importances, partial dependency plots, data distributions), but do not impose an order on how this information is explored. End-users can take different paths to reaching conclusions about the model. Because sensemaking is sensitive to enacted environments, it is important to remember that any information or explanation about the model is not treated by people as static or isolated.

Claim: The order in which explanations are seen affects how people understand a model and its predictions.

3.5. Ongoing

The ongoing nature of AI/ML sensemaking highlights how interruptions and emotions can influence what is understood about these systems. For interpretability, this suggests that, if interrupted when viewing an explanation, the valence and magnitude of the resulting emotion can change what people understand about the model and its predictions.

3.5.1. Sensemaking as an Ongoing Activity in the Human-Human Context

Sensemaking never starts or stops; people are always in the middle of something. To think otherwise would suggest that people are able to chop meaningful moments from the flow of time, but that would be counter-intuitive because to determine whether something is “meaningful” would require sensemaking in the first place (Rickman, 1979; Dilthey and Jameson, 1972). Sensemaking is akin to being in situations of thrownness. Winograd and Flores describe these situations as having the following properties: (1) you cannot avoid acting; (2) you cannot step back and reflect on your actions, i.e., you have to rely on your intuitions; (3) the effects of action cannot be predicted; (4) you do not have a stable representation of the situation; (5) every representation is an interpretation, i.e., no objective analysis can be performed in the moment; and (6) language is action, i.e., people enact the situation via their descriptions of their environment, making it impossible to stay detached from it (Winograd et al., 1986).

Emotion is embedded in sensemaking via the following process. Interruptions trigger arousal, i.e., a discharge in the autonomic nervous system, which convinces the individual that something in the environment has changed, that they must understand it and take appropriate action to get back to a state of flow (Berscheid, 1983; Mandler, 1984). The higher the arousal post-interruption, the stronger the emotional response and, in turn, the stronger the effect of emotion on sensemaking. Why does it matter if there is an emotional response during an ongoing sensemaking process? Emotions affect sensemaking in that recall and retrospection are dependent on one’s mood (Snyder and White, 1982). Specifically, people recall events that are congruent with their current emotional valence. Of all the past events that might be relevant to sensemaking in a current situation, the ones we recall are not those that look the same, but those that feel the same.

3.5.2. Sensemaking as an Ongoing Activity in the Human-Machine Context

Consider the PredPol example again. Let’s assume the arrest record shows that the likelihood of a legitimate arrest in an area predicted as a hot spot by the model is 40%. The officer monitoring the model outputs is made aware of this number every time they log into the system. Imagine this happens one day: the patrol officers allocated to one of the hot spots make a legitimate arrest. The monitoring officer is commended for their role in anticipating the situation. This happens several times during the day. Thus, the monitoring officer associates positive feedback with arrests based on the model’s predictions. When writing their report about the incidents, they use the explanations provided by the software to further justify their choices.

The next day, the patrol officers make another arrest in the same predicted hot spot. The monitoring officer is once again asked to record an explanation for selecting that area for patrol. Before they do so, they happen to look at social media and notice several posts showing outrage with regards to that arrest. This is an interruption, as described by the ongoing property of sensemaking. This time, when writing up their explanation, the monitoring officer might mention that the model’s predictions are not always right and highlight some other failure cases.

As we have noted before, information presented in explanations is rarely used in context-free settings. Despite being shown the same explanation, the monitoring officer could notice different aspects of it depending on whether they were interrupted, whether the interruption led to positive or negative emotional states, and the magnitude of those emotions.

Claim: The valence and magnitude of the emotion caused by an interruption during the process of understanding explanations from interpretability tools change what is understood.

3.6. Focused on and by Extracted Cues

Extracted cues modify AI/ML sensemaking because they represent the (incomplete) bits of information that people rely on when trying to understand these systems. For interpretability, this suggests that highlighting different parts of explanations can lead to varying understanding of the underlying data and model.

3.6.1. Extracting Cues in the Human-Human Context

Weick describes extracted cues as “simple, familiar structures that are seeds from which people develop a larger sense of what may be occurring” (Weick, 1995, p.50). These extracted cues are important for sensemaking because they are taken as “equivalent to the entire datum from which they come” and in being taken as such, they “suggest a certain consequence more obviously than it was suggested by the total datum as it originally came” (James, 2007, p.340). Sensemaking uses extracted cues like a partially completed sentence. The completed first half of the sentence constrains what the incomplete second half could be (Shotter, 1983).

Extracting cues involves two processes—noticing and bracketing—which are both affected by context. First, context affects which cues are extracted based on what is noticed by the sensemaker. Noticing is an informal, even involuntary, observation of the environment that begins the process of sensemaking (Starbuck and Milliken, 1988). Cues that are noticed are either novel, unusual, or unexpected, or those that we are situationally or personally primed to focus on (e.g., recently or frequently encountered cues) (Taylor, 1991). Second, context affects how the extracted (noticed) cues are interpreted. Without context, any cues that are extracted lead to equivocal meanings (Leiter, 1980). These situations of equivocality need a clarity of values instead of more information for sensemaking (Section 3.3). Context can provide this clarity in the form of, for example, the social and cultural norms of the setting where sensemaking is happening. During the process of extracting cues, people are trying to form a cognitive reference map that presumes a connection between the situation/outcome and the cue. However, important cues can be missed when people do not have any prior experience with the situation.

3.6.2. Extracting Cues in the Human-Machine Context

Consider the example where a company provides ML-based software to organizations to help them with hiring decisions. A marketing company uses this software to shortlist candidates by sending some questions in advance. The candidates answer these questions in a video format, and the ML-based software analyzes these videos and provides a hiring score along with an explanation. The kind of input data used by the model includes demographic information; prior experience from the candidate’s resume; and tone of voice, perceived enthusiasm, and other emotion data coded by the software after analyzing the recorded video (Kahn, 2021).

Let’s say that the marketing company is using this software to shortlist candidates for the position of a sales representative. The software shows that A is a better candidate than B and explains its ratings (based on local explanations from interpretability tools). The HR team sees that A’s rating is based on their facial expressions during the interview (they were smiling, not visibly nervous, and seemed enthusiastic). They consider these to be good attributes for a sales representative and hire A even though B is more qualified. Additional information about A’s and B’s qualifications is also noted in the local explanations but might not be among the cues that are extracted or focused on in this instance.

Current interpretability tools present all types of information and let the user decide how to explore. Weick cautions against this unstructured exploration because it leads to equivocal alternatives for understanding an ML-based system. Which one of these alternatives is ultimately selected can be a reasonable, reflective process or entirely arbitrary.

Claim: Highlighting different parts of explanations can lead to varying understanding of the underlying data and model.

3.7. Driven by Plausibility rather than Accuracy

Recognizing that people are driven by plausibility rather than accuracy is critical for AI/ML sensemaking because we must account for people’s inclination to only have a “good enough” understanding of these systems. For interpretability, this suggests that, given plausible explanations, people are not inclined to search for the accurate one amongst these.

3.7.1. Plausibility over Accuracy in the Human-Human Context

Weick argues that accuracy is nice but not necessary for sensemaking. Even when it is necessary, people rarely achieve it. Instead, people rely on plausible reasoning which is: (1) not necessarily correct but fits the facts, and (2) based on incomplete information (Isenberg, 1986). When sensemaking, people can be influenced by what is “interesting, attractive, emotionally appealing, and goal relevant” (Fiske, 1992).

Weick notes eight reasons why accuracy is secondary in sensemaking. The most important among these are the following. First, it is impossible to internalize the overwhelming amount of information available for sensemaking; to cope with this, people apply relevance filters to the information (Gigerenzer, 1991; Smith and Kida, 1991). Second, when people filter what they notice, this biased noticing can be good for action, though not for deliberation. But deliberation is not the goal; it is “futile in a changing world where perceptions, by definition, can never be accurate” (Weick, 1995, p.60). Third, at the time of sensemaking, it is impossible to tell whether the sensemaker’s perceptions will be accurate. It is only in retrospect—after the sensemaker has taken action based on their understanding—that they evaluate their perceptions for accuracy.

With accuracy not being necessary for sensemaking, it is only natural to ask: what is? Weick claims that what is necessary for sensemaking is a good story, “something that preserves plausibility and coherence, something that is reasonable and memorable, something that embodies past experiences and expectations, something that resonates with other people, something that can be constructed retrospectively but also can be used prospectively, something that captures both feeling and thought, something that allows for embellishment to fit current oddities, something that is fun to construct” (Weick, 1995, pp.60-61). Stories help with sensemaking because they are templates from previous attempts at making sense of similar situations. Overall, this property is often amplified by the others in that the plausible narratives could depend on people’s identity, implied or actual audience, extracted cues, emotional state, etc.

3.7.2. Plausibility over Accuracy in the Human-Machine Context

Interpretability outputs, such as text or visual explanations, inherently present a story. As long as this explanation, or story, is plausible, there is no reason for an individual to evaluate it for accuracy. Consider the example with the radiologist again, where they are tasked with deciding whether a chest radiograph shows that the patient has COVID-19. Their decision-making is supported by an ML-based software that has been trained on publicly available chest radiograph datasets. To help them understand the model’s reasoning for a prediction, the radiologist has access to saliency maps as interpretability outputs (Figure 2).

According to Weick, when using the saliency map to determine whether the model’s prediction makes sense, the radiologist is essentially searching for a plausible story that explains the prediction. The explanations in Figure 2 show some areas inside the lungs as relevant, a plausible reason for predicting COVID-19. The radiologist could believe this plausible explanation and choose to follow it. Human evaluations of interpretability tools show that this confirmatory use of explanations is often the case, even when explanations reveal issues with the underlying model (Bansal et al., 2021; Buçinca et al., 2021; Kaur et al., 2020).

Let’s say that the radiologist was not immediately convinced that the prediction was accurate after seeing the saliency maps. Maybe they looked at one of them (e.g., Figure 2-Middle) and noticed that the radiograph’s edges (by the person’s shoulders and diaphragm) were also salient for the prediction. Even with this observation, the radiologist is looking for a plausible story. Perhaps the patient was coughing and could not stay still when the radiograph was being captured? That could explain the laterality markers for a COVID-19 positive patient. The model is relying on spurious correlations but, given the role of plausibility in sensemaking, the radiologist might not try to accurately interpret the saliency map.

Claim: Given plausible explanations for a prediction, people are not inclined to search for the accurate one amongst these.

3.8. Summary

When designing solutions for promoting human understanding of ML models, we must consider the nuances of human cognition in addition to the technical solutions which explain ML models. Sensemaking provides a set of properties that describe these nuances—each of these can be seen as a self-contained set of research questions and hypotheses that relates to the other six. As the human-machine examples show, sensemaking properties could explain external factors that shape the information that is ultimately internalized by people when they use interpretability tools.

4. Discussion

We propose a framework for Sensible AI to account for the properties of human cognition described by sensemaking. This has the potential to refine the explanations from interpretability tools for human consumption and to better support the human-centered desiderata of ML-based systems. How do we do this? Once again, Weick (along with his colleagues) proposes a solution: to explicitly promote or amplify sensemaking, we can follow the model of mindful organizing (Weick and Sutcliffe, 2015). Sensemaking and organizing are inextricably intertwined. While sensemaking describes the meaning-making process of understanding, organizing describes the final outcome (e.g., a map or frame of reference) that represents the understanding. They belong to the same mutually interdependent, cyclical, recursive process—sensemaking is the process by which organizing is achieved (Ann Glynn and Watkiss, 2020; Weick et al., 2005). Mindfulness is expressed by actively refining the existing categories that we use to assign meaning, and creating new categories as needed for events that have not been seen before (Langer, 1989; Weick and Sutcliffe, 2015; Vogus and Sutcliffe, 2012).

Mindful organizing was proposed after observing high-reliability organizations (HROs). HROs are organizations that have successfully avoided catastrophes despite operating in high-risk environments (Roberts, 1990; Weick et al., 1999). Examples of these include healthcare organizations, air traffic control systems, naval aircraft carriers, and nuclear power plants. Mindful organizing embodies five principles consistently observed in HROs: (1) preoccupation with failure, anticipating potential risks by always being on the lookout for failures, being sensitive to even the smallest ones; (2) reluctance to simplify, wherein each failure is treated as unique because routines, labels, and cliches can stop us from looking into details of an event; (3) sensitivity to operations, a heightened awareness of the state of relevant systems and processes because systems are not static or linear, and expecting uncertainty in anticipating how different systems will interact in the event of a crisis; (4) commitment to resilience, prioritizing training for emergency situations by incorporating diverse testing pathways and team structures, and when a failure occurs, trying to absorb strain and preserve function; and (5) deference to expertise, assuming that people who are in the weeds—often lower-ranking individuals—have more knowledge about the situation, and valuing their opinions. Our proposal for Sensible AI encompasses designing, deploying, and maintaining systems that are reliable by learning from properties of HROs. Table 2 presents the corresponding principles of HROs that serve as inspiration for each idea.

Table 2. Principles of high-reliability organizations (columns) that inspired our design ideas (rows) for Sensible AI.

4.1. Seamful Design

We can help people understand AI and ML by giving them the agency to do so. Often, ML-based systems and interpretability tools are designed with seamless interaction and effortless usability in mind. However, this can engage people’s automatic reasoning mode, leading them to use ML outputs without adequate deliberation (Buçinca et al., 2021; Kaur et al., 2020; Bansal et al., 2021). Highlighting complex details of ML outputs and processes—seamful design (Inman and Ribes, 2019)—can promote the reluctance to simplify that has helped HROs. It can also add a sensitivity to operations when changes to inputs for models can be clearly seen in the outputs. Enhancing reconfigurability of ML models and training people to understand their complexity can reduce automatic, superficial evaluations. Increasing user control in the form of seamful design has the added benefit of introducing opportunities for informational interruptions, which are helpful for the commitment to resilience seen in HROs. While current interpretability tools have interactive features that provide additional information as needed, contextualizing this information using narratives can help people maintain overall situational awareness and avoid dysfunctional momentum when using ML-based systems. For example, when a doctor is viewing a predicted diagnosis, a Sensible AI system could prompt them to view cases with similar inputs but different diagnoses. Next, we discuss ways to design these systems without overloading the end-user with features, interactivity, and information.
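Before turning to that question, consider a minimal sketch of the contrastive prompt just described: retrieve reference cases that are close to the current input but carry a different recorded outcome. This is an illustration under our own assumptions, not a component of an existing system; the names find_contrastive_cases, X_reference, and y_reference are hypothetical.

```python
# Minimal sketch (illustrative, not from an existing tool) of a seamful prompt:
# alongside a prediction, surface cases that are similar in input space but
# received a different outcome, inviting deliberation rather than acceptance.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def find_contrastive_cases(x_query, X_reference, y_reference, y_predicted, k=25, n_show=3):
    """Return up to n_show reference cases near x_query whose recorded outcome
    differs from the model's prediction for x_query."""
    k = min(k, len(X_reference))
    nn = NearestNeighbors(n_neighbors=k).fit(X_reference)
    _, idx = nn.kneighbors(np.asarray(x_query).reshape(1, -1))
    # Keep only neighbors whose recorded outcome disagrees with the prediction.
    return [i for i in idx[0] if y_reference[i] != y_predicted][:n_show]

# Usage (hypothetical): show a clinician "similar patients, different diagnosis"
# next to the model's output, rather than the output alone.
# cases = find_contrastive_cases(x_patient, X_train, y_train, y_predicted=1)
```

The nearest-neighbor search here is only one plausible retrieval strategy; the design point is that the contrast itself is made visible to the person using the system.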

4.2. Inducing Skepticism

One way to reduce over-reliance on generalizations and known information—both common outcomes of sensemaking—is to create situations in which people would ask questions. We call this inducing skepticism, an idea suggested in prior work as a strategy for promoting reflective design (Sengers et al., 2005). Inducing skepticism can foster a preoccupation with failure, an HRO principle that encourages cultivating a doubt mindset in employees: HRO employees are always on the lookout for anomalies, they interpret any new cues from their systems in failure-centric ways, and they collectively promote wariness. This can be incorporated into ML-based systems, for example, by suggesting that end-users ask how a particular prediction is unique or similar to other data points, by occasionally questioning the outputs of interpretability tools (e.g., “does this feature importance value make sense?”), by presenting bottom-n feature importances in an explanation instead of top-n, or by highlighting cases for which the model is unsure of its predictions. Inducing skepticism can also be accomplished in social ways, by promoting diversity in teams in terms of both skillsets and experience. For example, novices can prompt experts to view an AI output in more detail when they ask questions about it. This diversity is a common way in which HROs maintain their commitment to resilience. These technical and social ways of inducing skepticism share a common goal: a reluctance to simplify, achieved by adding complexity and diversity to a situation.
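As a minimal sketch of two of the technical prompts above (bottom-n feature importances and flagging uncertain predictions), the following illustrative code assumes that attributions is a per-feature importance vector produced by any explainer (e.g., LIME or SHAP) and that proba holds the model's predicted class probabilities; the function names and the 0.6 threshold are our own choices.

```python
# Minimal sketch (our illustration) of two skepticism-inducing prompts:
# show the features that contributed *least* to a prediction, and flag
# predictions the model is least sure about.
import numpy as np

def bottom_n_features(attributions, feature_names, n=3):
    """Return the n features with the smallest |importance| for this prediction,
    prompting the question: does it make sense that these mattered so little?"""
    order = np.argsort(np.abs(attributions))  # ascending by magnitude
    return [(feature_names[i], float(attributions[i])) for i in order[:n]]

def flag_if_uncertain(proba, threshold=0.6):
    """Flag a prediction whose top-class probability falls below the threshold."""
    confidence = float(np.max(proba))
    return confidence < threshold, confidence
```

A system could attach these prompts to the explanation it already shows, so that the doubt mindset is cued by the interface rather than left entirely to the user.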

4.3. Adversarial Design

No one person can successfully anticipate all failures, even when the system induces skepticism. Adversarial design suggests relying on social and organizational networks for this task. Adversarial design is a form of political design rooted in the theory of agonism: promoting productive contestation and dissensus (DiSalvo, 2015; Mouffe, 2013; Wenman, 2013). By designing Sensible AI systems with dissensus-centric features, we can increase the likelihood that someone raises a red flag given early signals of a failure situation. Prior work has implemented adversarial design in the form of red teaming in technical and social ways (e.g., adversarial attacks for testing and promoting cybersecurity (Abbass et al., 2011), and forming teams with collective diversity and supporting deliberation (Hong and Page, 2004, 2020; Gordon et al., 2022), respectively). Here, HRO principles of reluctance to simplify, commitment to resilience, and deference to expertise can be observed in practice. We propose technical redundancies and social diversity to reduce unanticipated failures in understanding AI outputs, as one way of operationalizing adversarial design. Technical redundancies can be implemented as system features wherein multiple people view the same output in different contexts, giving the team a better chance of finding potential issues. Social or organizational diversity can be expanded by including people with different roles, skillsets, and opinions. The more diversity in people viewing the outputs, the higher the likelihood that they collectively discover an issue, as long as deliberation is made easy (Hong and Page, 2020).
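One way to operationalize the technical-redundancy idea is sketched below: route the same model output to a small panel of reviewers drawn from as many distinct roles as possible. The reviewer records and the assemble_review_panel helper are hypothetical, and the role names are placeholders.

```python
# Minimal sketch (hypothetical) of technical redundancy with social diversity:
# the same output is reviewed by several people from different roles, so
# disagreement has a chance to surface before the output is acted on.
import random
from collections import defaultdict

def assemble_review_panel(reviewers, panel_size=3, seed=None):
    """Pick a panel that covers as many distinct roles as possible."""
    rng = random.Random(seed)
    by_role = defaultdict(list)
    for reviewer in reviewers:
        by_role[reviewer["role"]].append(reviewer)

    roles = list(by_role)
    rng.shuffle(roles)

    panel = []
    for role in roles:  # one reviewer per role first
        if len(panel) >= panel_size:
            break
        panel.append(rng.choice(by_role[role]))

    remaining = [r for r in reviewers if r not in panel]
    rng.shuffle(remaining)
    panel.extend(remaining[: panel_size - len(panel)])
    return panel

# reviewers = [{"name": "A", "role": "clinician"}, {"name": "B", "role": "data scientist"},
#              {"name": "C", "role": "nurse"}, {"name": "D", "role": "clinician"}]
# panel = assemble_review_panel(reviewers, panel_size=3, seed=0)
```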

4.4. Continuous Monitoring and Feedback

When ML-based systems are deployed in real-world settings, changes in data collection and distributional drift are a given (Koh et al., 2021). To manage these, researchers and practitioners have proposed MLOps—an extension of DevOps practices from software to ML-based settings—which adds continuous testing, integration, monitoring, and feedback loops to maintain the operation of ML-based systems in the wild (Mäkinen et al., 2021). We propose incorporating social features into this pipeline by designing for HRO principles such as preoccupation with failure, sensitivity to operations, and deference to expertise. For example, this could include (1) continuous failure monitoring that effectively serves as a distributed fire alarm, one that can be pulled by people at varying levels of an organization, and (2) model maintenance that relies on people on the ground for a detailed understanding of failure cases, as seen in organizations that conduct failure panels, audits, etc.
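As a minimal sketch of these two ideas (assumptions ours, not a prescribed MLOps configuration), the code below pairs a distributed fire alarm that anyone in the organization can pull with a simple per-feature drift check based on a two-sample Kolmogorov–Smirnov test; the class names, report fields, and significance threshold are illustrative.

```python
# Minimal sketch (illustrative) of continuous monitoring with social features:
# a failure-report channel open to everyone, plus a basic drift check that
# compares live feature distributions against the training data.
from dataclasses import dataclass, field
from scipy.stats import ks_2samp

@dataclass
class FailureReport:
    reporter_role: str   # e.g., "nurse", "call-center agent", "data scientist"
    description: str
    case_id: str

@dataclass
class FireAlarm:
    reports: list = field(default_factory=list)

    def pull(self, report: FailureReport):
        """Anyone, at any level of the organization, can pull the alarm."""
        self.reports.append(report)

def drifted_features(X_train, X_live, feature_names, alpha=0.01):
    """Return features whose live distribution differs from training (KS test)."""
    flagged = []
    for j, name in enumerate(feature_names):
        statistic, p_value = ks_2samp(X_train[:, j], X_live[:, j])
        if p_value < alpha:
            flagged.append((name, statistic))
    return flagged
```

The point of the pairing is that automated drift signals and human reports feed the same maintenance loop, rather than living in separate technical and organizational silos.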

5. Conclusion

Interpretability and explainability approaches are designed to help stakeholders adequately understand the predictions and reasoning of an ML-based system. Although these approaches represent complex models in simpler formats, they do not account for the contextual factors that affect whether and how people internalize information. We have presented an alternate framework for helping people understand ML models grounded in Weick’s sensemaking theory from organizational studies. Via its seven properties, sensemaking describes the individual, environmental, social, and organizational context that affects human understanding. We translated these for the human-machine context and presented a research agenda based on each property. We also proposed a new framework—Sensible AI—that accounts for these nuances of human cognition and presented initial design ideas as a concrete path forward. We hope that by accounting for these nuances, Sensible AI can support the desiderata (e.g., reliability, robustness, trustworthiness, accountability, fair and ethical decision-making, etc.) that interpretability and explainability are intended for.

Acknowledgements.
We thank our reviewers for their helpful comments. We are also grateful to Mitchell Gordon, Stevie Chancellor, and Michael Madaio for their feedback and support. Harmanpreet Kaur was supported by the Google PhD fellowship.

References

  • H. Abbass, A. Bender, S. Gaidow, and P. Whitbread (2011) Computational red teaming: past, present and future. IEEE Computational Intelligence Magazine 6 (1), pp. 30–42. External Links: Document Cited by: §4.3.
  • A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli (2018) Trends and trajectories for explainable, accountable and intelligible systems: an hci research agenda. CHI ’18, New York, NY, USA, pp. 1–18. External Links: ISBN 9781450356206, Link, Document Cited by: §2.1.
  • A. Alkhatib (2021) To live in their utopia: why algorithmic systems create absurd outcomes. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. External Links: ISBN 9781450380966, Link, Document Cited by: §3.
  • G. W. Allport (1985) The historical background of social psychology (vol. 1). The handbook of social psychology. Cited by: §3.2.1.
  • D. Alvarez-Melis and T. Jaakkola (2017) A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 412–421. External Links: Link, Document Cited by: §2.1.
  • S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza (2014) Power to the people: the role of humans in interactive machine learning. AI Magazine 35 (4), pp. 105–120. External Links: Link, Document Cited by: §2.1.
  • M. Ann Glynn and L. Watkiss (2020) Of organizing and sensemaking: from action to meaning and back again in a half-century of weick’s theorizing. Journal of Management Studies 57 (7), pp. 1331–1354. External Links: Document Cited by: §4.
  • A. B. Arrieta, N. Díaz-Rodríguez, J. D. Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, and F. Herrera (2020) Explainable artificial intelligence (xai): concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion 58, pp. 82–115. External Links: Document, ISSN 1566-2535 Cited by: §2.1, §2.1.
  • G. Bansal, T. Wu, J. Zhou, R. Fok, B. Nushi, E. Kamar, M. T. Ribeiro, and D. Weld (2021) Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. External Links: ISBN 9781450380966, Link, Document Cited by: §1, §2.2, §3.3.2, §3.7.2, §4.1.
  • E. Beede, E. Baylor, F. Hersch, A. Iurchenko, L. Wilcox, P. Ruamviboonsuk, and L. M. Vardoulakis (2020) A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–12. External Links: ISBN 9781450367080, Link Cited by: §3.2.2.
  • V. Bellotti and K. Edwards (2001) Intelligibility and accountability: human considerations in context-aware systems. Human–Computer Interaction 16 (2-4), pp. 193–212. External Links: Document Cited by: §2.1.
  • E. Berscheid (1983) Emotion. In Close Relationships, H.H. Kelley, E. Berscheid, A. Christensen, J. Harvey, T. Huston, G. Levinger, E. McClintock, A. Peplau, and D.R. Peterson (Eds.), pp. 110–168. Cited by: §3.5.1.
  • H. Blumer (1969) Symbolic interactionism. Vol. 50, Englewood Cliffs, NJ: Prentice-Hall. Cited by: §3.4.1.
  • T. Burns and G. Stalker (1961) The management of innovation. Tavistock, London, pp. 120–122. Cited by: §3.2.1.
  • Z. Buçinca, M. B. Malaya, and K. Z. Gajos (2021) To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making. Proc. ACM Hum.-Comput. Interact. 5 (CSCW1). External Links: Link, Document Cited by: §2.2, §3.7.2, §4.1.
  • J. W. Burton, M. Stein, and T. B. Jensen (2020) A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making 33 (2), pp. 220–239. External Links: Document Cited by: §1.
  • R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad (2015) Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, New York, NY, USA, pp. 1721–1730. External Links: ISBN 978-1-4503-3664-2, Link, Document Cited by: §1, §2.1.
  • R. Davis, B. Buchanan, and E. Shortliffe (1977) Production rules as a representation for a knowledge-based consultation program. Artificial intelligence 8 (1), pp. 15–45. External Links: Document Cited by: §2.1.
  • A. J. DeGrave, J. D. Janizek, and S. Lee (2021) AI for radiographic covid-19 detection selects shortcuts over signal. Nature Machine Intelligence, pp. 1–10. External Links: Document Cited by: Figure 2, §3.3.2.
  • B. J. Dietvorst, J. P. Simmons, and C. Massey (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err.. Journal of Experimental Psychology: General 144 (1), pp. 114–126. External Links: Document Cited by: §1.
  • W. Dilthey and F. Jameson (1972) The rise of hermeneutics. New literary history 3 (2), pp. 229–244. Cited by: §3.5.1.
  • C. DiSalvo (2015) Adversarial design. Design Thinking, Design Theory. Cited by: §4.3.
  • F. Doshi-Velez and B. Kim (2017) Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Cited by: §2.1, §3.
  • P. Dourish (2016) Algorithms and their others: algorithmic culture in context. Big Data & Society 3 (2), pp. 1–11. External Links: Document Cited by: §2.1.
  • U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl, and J. D. Weisz (2021a) Expanding explainability: towards social transparency in ai systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–19. External Links: Document Cited by: §1.
  • U. Ehsan, S. Passi, Q. V. Liao, L. Chan, I. Lee, M. Muller, M. O. Riedl, et al. (2021b) The who in explainable ai: how ai background shapes perceptions of ai explanations. arXiv preprint arXiv:2107.13509. Cited by: §1, §2.2.
  • M. Erez, P. C. Earley, et al. (1993) Culture, self-identity, and work. Oxford University Press on Demand. Cited by: §3.1.1.
  • S. T. Fiske (1992) Thinking is for doing: portraits of social cognition from daguerreotype to laserphoto.. Journal of personality and social psychology 63 (6), pp. 877–889. External Links: Document Cited by: §3.7.1.
  • M. P. Follett (1924) Creative experience. Longmans, Green and company. Cited by: §3.4.1.
  • G. Gigerenzer (1991) How to make cognitive illusions disappear: beyond “heuristics and biases”. European review of social psychology 2 (1), pp. 83–115. External Links: Document Cited by: §3.7.1.
  • M. Gillies, R. Fiebrink, A. Tanaka, J. Garcia, F. Bevilacqua, A. Heloir, F. Nunnari, W. Mackay, S. Amershi, B. Lee, et al. (2016) Human-centred machine learning. In Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems, pp. 3558–3565. Cited by: §2.1.
  • M. L. Gordon, M. S. Lam, J. S. Park, K. Patel, J. Hancock, T. Hashimoto, and M. S. Bernstein (2022) Jury learning: integrating dissenting voices into machine learning models. In CHI Conference on Human Factors in Computing Systems, CHI ’22, New York, NY, USA. External Links: ISBN 9781450391573, Link, Document Cited by: §4.3.
  • H. P. Grice (1975) Logic and conversation. In Speech acts, pp. 41–58. External Links: Document Cited by: §2.1.
  • D. Gunning and D. Aha (2019) DARPA’s explainable artificial intelligence (xai) program. AI Magazine 40 (2), pp. 44–58. External Links: Document Cited by: Figure 1.
  • C. Hartshorne (1962) Mind as memory and creative love. In Theories of the Mind, J. M. Scher (Ed.), pp. 440–463. Cited by: §3.3.1.
  • T. J. Hastie and R. J. Tibshirani (1990) Generalized additive models. CRC Press (en). Cited by: §1, §2.1.
  • W. D. Heaven (2020) Predictive policing algorithms are racist. they need to be dismantled. MIT Technology Review 17, pp. 2020. Cited by: §3.4.2.
  • C. G. Hempel and P. Oppenheim (1948) Studies in the logic of explanation. Philos. Sci. 15 (2), pp. 135–175. External Links: Document Cited by: §2.1.
  • D. J. Hilton (1996) Mental models and causal explanation: judgements of probable cause and explanatory relevance. Thinking & Reasoning 2 (4), pp. 273–308. External Links: Document Cited by: §2.1.
  • F. Hohman, A. Head, R. Caruana, R. DeLine, and S. M. Drucker (2019) Gamut: a design probe to understand how data scientists understand machine learning models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 579:1–579:13. External Links: ISBN 978-1-4503-5970-2, Link, Document Cited by: §2.1, §2.2.
  • K. Holstein, J. Wortman Vaughan, H. Daumé, M. Dudik, and H. Wallach (2019) Improving fairness in machine learning systems: what do industry practitioners need?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 1–16. External Links: ISBN 9781450359702, Link, Document Cited by: §2.2.
  • L. Hong and S. E. Page (2004) Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences 101 (46), pp. 16385–16389. External Links: Document Cited by: §4.3.
  • L. Hong and S. Page (2020) The contributions of diversity, accuracy, and group size on collective accuracy (October 15, 2020). External Links: Document Cited by: §4.3.
  • S. R. Hong, J. Hullman, and E. Bertini (2020) Human factors in model interpretability: industry practices, challenges, and needs. Proceedings of the ACM on Human-Computer Interaction 4 (CSCW1), pp. 1–26. External Links: Document Cited by: §2.2.
  • S. Inman and D. Ribes (2019) ”Beautiful seams”: strategic revelations and concealments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 1–14. External Links: ISBN 9781450359702, Link, Document Cited by: §4.1.
  • D. J. Isenberg (1986) The structure and process of understanding: implications for managerial action. In The Thinking Organization, H.P. Sims Jr. and D.A. Gioia (Eds.), pp. 238–262. Cited by: §3.7.1.
  • W. James (2007) The principles of psychology. Vol. 1, Cosimo, Inc.. Cited by: §3.6.1.
  • P. D. Jennings and R. Greenwood (2003) Constructing the iron cage: institutional theory and enactment. Debating organization: point-counterpoint in organization studies 195. Cited by: Figure 1.
  • J. Jung, C. Concannon, R. Shroff, S. Goel, and D. G. Goldstein (2017) Simple rules for complex decisions. Available at SSRN 2919024. External Links: Link Cited by: §1, §2.1.
  • J. Kahn (2021) HireVue drops facial monitoring amid a.i. algorithm audit. Fortune. Cited by: §3.6.2.
  • H. Kaur, H. Nori, S. Jenkins, R. Caruana, H. Wallach, and J. Wortman Vaughan (2020) Interpreting interpretability: understanding data scientists’ use of interpretability tools for machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–14. External Links: ISBN 9781450367080, Link Cited by: §1, §2.2, §3.3.2, §3.7.2, §4.1.
  • K. D. Knorr-Cetina (1981) The micro-sociological challenge of macro-sociology : towards a reconstruction of social theory and methodology. In Advances in social theory and methodology: toward an integration of micro- and macro-sociologies, K. Knorr-Cetina and A. V. Cicourel (Eds.), pp. 1–47. Cited by: §3.1.1.
  • R. Kocielnik, S. Amershi, and P. N. Bennett (2019) Will you accept an imperfect ai? exploring designs for adjusting end-user expectations of ai systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–14. External Links: Document Cited by: §1, §2.2.
  • P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Yasunaga, R. L. Phillips, I. Gao, T. Lee, E. David, I. Stavness, W. Guo, B. Earnshaw, I. Haque, S. M. Beery, J. Leskovec, A. Kundaje, E. Pierson, S. Levine, C. Finn, and P. Liang (2021) WILDS: a benchmark of in-the-wild distribution shifts. In Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang (Eds.), Vol. 139, pp. 5637–5664. External Links: Link Cited by: §3.2.2, §4.4.
  • R. S. Kudesia (2017) Organizational sensemaking. In Oxford research encyclopedia of psychology, Cited by: Figure 1.
  • T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W. Wong (2013) Too much, too little, or just right? ways explanations impact end users’ mental models. In 2013 IEEE Symposium on visual languages and human centric computing, pp. 3–10. External Links: Document Cited by: §2.2.
  • H. Lakkaraju, S. H. Bach, and J. Leskovec (2016) Interpretable decision sets: a joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, External Links: Document Cited by: §2.1.
  • E. J. Langer (1989) Minding matters: the consequences of mindlessness–mindfulness. In Advances in experimental social psychology, Vol. 22, pp. 137–173. Cited by: §4.
  • [59] (2020-09) Law enforcement: predpol law enforcement intelligence led policing software. External Links: Link Cited by: §3.4.2.
  • D. B. Leake (1991) Goal-based explanation evaluation. Cognitive Science 15 (4), pp. 509–545. Cited by: §2.1.
  • K. Leiter (1980) A primer on ethnomethodology. Oxford University Press, USA. Cited by: §3.6.1.
  • J. M. Levine, L. B. Resnick, and E. T. Higgins (1993) Social foundations of cognition. Annual review of psychology 44 (1), pp. 585–612. Cited by: §3.2.1.
  • Q. V. Liao, D. Gruen, and S. Miller (2020) Questioning the ai: informing design practices for explainable ai user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–15. External Links: ISBN 9781450367080, Link Cited by: §2.1.
  • P. Lipton (1990) Contrastive explanation. Royal Institute of Philosophy Supplements 27, pp. 247–266. Cited by: §2.1.
  • Z. C. Lipton (2018) The mythos of model interpretability. Communications of the ACM 61, pp. 36–43. External Links: Document, ISSN 00010782, Link Cited by: §2.1, §3.
  • S. Lomborg and P. H. Kapsch (2020) Decoding algorithms. Media, Culture & Society 42 (5), pp. 745–761. External Links: Document Cited by: §2.1.
  • T. Lombrozo (2006) The structure and function of explanations. Trends in cognitive sciences 10 (10), pp. 464–470. External Links: Document Cited by: §2.1.
  • T. Lombrozo (2012) Explanation and abductive inference.. The Oxford Handbook of Thinking and Reasoning, pp. 260–276. Cited by: §2.1.
  • S. M. Lundberg and S. Lee (2017) A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 4765–4774. External Links: Link Cited by: §1, §1, §2.1.
  • M. A. Madaio, L. Stark, J. Wortman Vaughan, and H. Wallach (2020) Co-designing checklists to understand organizational challenges and opportunities around fairness in ai. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–14. External Links: ISBN 9781450367080, Link Cited by: §2.2.
  • S. Mäkinen, H. Skogström, E. Laaksonen, and T. Mikkonen (2021) Who needs mlops: what data scientists seek to accomplish and how can mlops help?. arXiv preprint arXiv:2103.08942. Cited by: §4.4.
  • B. F. Malle (2006) How the mind explains behavior: folk explanations, meaning, and social interaction. Mit Press. Cited by: §2.1.
  • G. Mandler (1984) Mind and body: psychology of emotion and stress. WW Norton & Company Incorporated. Cited by: §3.5.1.
  • G. H. Mead (1934) Mind, self and society. Vol. 111, University of Chicago Press, Chicago. Cited by: §3.1.1, §3.2.1.
  • D. A. Melis, H. Kaur, H. Daumé III, H. Wallach, and J. W. Vaughan (2021) From human explanation to model interpretability: a framework based on weight of evidence. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 9, pp. 35–47. Cited by: §2.1, §2.2.
  • M. B. Miles and A. M. Huberman (1994) Qualitative data analysis: an expanded sourcebook. sage. Cited by: §3.
  • T. Miller, P. Howe, and L. Sonenberg (2017) Explainable ai: beware of inmates running the asylum or: how i learnt to stop worrying and love the social and behavioural sciences. In IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), Cited by: §1, §2.1.
  • T. Miller (2019) Explanation in artificial intelligence: insights from the social sciences. Artificial Intelligence 267, pp. 1–38. External Links: ISSN 0004-3702, Document Cited by: §1, §2.1.
  • T. Miller (2021) Contrastive explanation: a structural-model approach. The Knowledge Engineering Review 36. External Links: Document Cited by: §2.1.
  • C. Mouffe (2013) Agonistics: thinking the world politically. Verso Books. Cited by: §4.3.
  • R. E. Nisbett and T. D. Wilson (1977) Telling more than we can know: verbal reports on mental processes.. Psychological review 84 (3), pp. 231–259. External Links: Document Cited by: §2.1.
  • D. A. Norman (2014) Some observations on mental models. In Mental models, pp. 15–22. Cited by: §2.1.
  • S. Passi and S. J. Jackson (2018) Trust in data science: collaboration, translation, and accountability in corporate data science projects. Proceedings of the ACM on Human-Computer Interaction 2 (CSCW), pp. 1–28. External Links: Document Cited by: §2.1.
  • C. S. Peirce (1878) Illustrations of the logic of science: IV the probability of induction. Popular Science Monthly 12, pp. 705–718. Cited by: §2.1.
  • P. Pirolli and S. Card (2005) The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of international conference on intelligence analysis, Vol. 5, pp. 2–4. Cited by: §1.
  • J. C. Pitt (1988) Theories of explanation. Oxford University Press. External Links: ISBN 9780195049701 Cited by: §2.1.
  • L. R. Pondy and I. I. Mitroff (1979) Beyond open system models of organization. Research in organizational behavior 1 (1), pp. 3–39. Cited by: §3.4.1.
  • F. Poursabzi-Sangdeh, D. G. Goldstein, J. M. Hofman, J. W. Wortman Vaughan, and H. Wallach (2021) Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. External Links: ISBN 9781450380966, Link, Document Cited by: §2.2.
  • J. R. Quinlan (1986) Induction of decision trees. Mach. Learn.. External Links: ISSN 0885-6125, 1573-0565, Document Cited by: §1, §2.1.
  • J. Reason (1990) Human error. Cambridge university press. Cited by: §3.1.1.
  • L. B. Resnick, J. M. Levine, and S. D. Teasley (1991) Perspectives on socially shared cognition. 1st ed. edition, American Psychological Association (English). External Links: ISBN 1557981213 Cited by: §3.2.1.
  • M. T. Ribeiro, S. Singh, and C. Guestrin (2016) “Why should I trust you?”: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135–1144. External Links: Document Cited by: §1, §1, §2.1.
  • H. P. Rickman (1979) Dilthey selected writings. Cited by: §3.5.1.
  • P. S. Ring and A. H. Van de Ven (1989) Formal and informal dimensions of transactions. Research on the management of innovation: The Minnesota studies 171, pp. 192. Cited by: §3.4.1.
  • K. H. Roberts (1990) Some characteristics of one type of high reliability organization. Organization Science 1 (2), pp. 160–176. External Links: Document Cited by: §4.
  • D. M. Russell, M. J. Stefik, P. Pirolli, and S. K. Card (1993) The cost structure of sensemaking. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, CHI ’93, New York, NY, USA, pp. 269–276. External Links: ISBN 0897915755, Link, Document Cited by: §1.
  • A. Schutz and F. Kersten (1976) Fragments on the phenomenology of music. Cited by: §3.3.1.
  • A. Schutz (1972) The phenomenology of the social world. Northwestern University Press. Cited by: §3.3.1.
  • R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra (2017) Grad-cam: why did you say that? visual explanations from deep networks via gradient-based localization. In ICCV, Cited by: §2.1.
  • P. Sengers, K. Boehner, S. David, and J. ’. Kaye (2005) Reflective design. In Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility, CC ’05, New York, NY, USA, pp. 49–58. External Links: ISBN 1595932038, Link, Document Cited by: §4.2.
  • J. Shotter (1983) Duality of “structure” and “intentionality” in an ecological psychology. Journal for the Theory of Social Behaviour 13 (1), pp. 19–44. Cited by: §3.6.1.
  • K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: §2.1.
  • B. R. Slugoski, M. Lalljee, R. Lamb, and G. P. Ginsburg (1993) Attribution in conversational context: effect of mutual knowledge on explanation-giving. European Journal of Social Psychology 23 (3), pp. 219–238. Cited by: §2.1.
  • J. F. Smith and T. Kida (1991) Heuristics and biases: expertise and task realism in auditing.. Psychological bulletin 109 (3), pp. 472. Cited by: §3.7.1.
  • M. Snyder and P. White (1982) Moods and memories: elation, depression, and the remembering of the events of one’s life. Journal of personality 50 (2), pp. 149–167. External Links: Document Cited by: §3.5.1.
  • W. H. Starbuck and F. J. Milliken (1988) Executives’ perceptual filters: what they notice and how they make sense. Cited by: §3.6.1.
  • B. M. Staw (1975) Attribution of the “causes” of performance: a general alternative interpretation of cross-sectional research on organizations. Organizational behavior and human performance 13 (3), pp. 414–432. External Links: Document Cited by: §3.3.1.
  • S. Stumpf, V. Rajaram, L. Li, W. Wong, M. Burnett, T. Dietterich, E. Sullivan, and J. Herlocker (2009) Interacting meaningfully with machine learning systems: three experiments. International journal of human-computer studies 67 (8), pp. 639–662. External Links: Document Cited by: §2.2.
  • W. R. Swartout (1983) XPLAIN: a system for creating and explaining expert consulting programs. Artificial intelligence 21 (3), pp. 285–325. External Links: Document Cited by: §2.1.
  • S. E. Taylor (1991) Asymmetrical effects of positive and negative events: the mobilization-minimization hypothesis.. Psychological bulletin 110 (1), pp. 67–85. Cited by: §3.6.1.
  • B. van Fraassen (1988) The pragmatic theory of explanation. In Theories of Explanation, J. C. Pitt (Ed.), Cited by: §2.1.
  • F. J. Varela, E. Thompson, and E. Rosch (2016) The embodied mind: cognitive science and human experience. MIT press. Cited by: §3.4.1.
  • M. Veale, M. Van Kleek, and R. Binns (2018) Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, New York, NY, USA, pp. 1–14. External Links: ISBN 9781450356206, Link, Document Cited by: §2.2.
  • T. J. Vogus and K. M. Sutcliffe (2012) Organizational mindfulness and mindful organizing: a reconciliation and path forward. Academy of Management Learning & Education 11 (4), pp. 722–735. External Links: Document Cited by: §4.
  • S. Wachter, B. Mittelstadt, and C. Russell (2017) Counterfactual explanations without opening the black box: automated decisions and the gdpr. Harv. JL & Tech. 31, pp. 841. Cited by: §2.1.
  • D. Wang, Q. Yang, A. Abdul, and B. Y. Lim (2019) Designing theory-driven user-centric explainable ai. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 1–15. External Links: ISBN 9781450359702, Link, Document Cited by: §2.1.
  • K. E. Weick, K. M. Sutcliffe, and D. Obstfeld (1999) Organizing for high reliability: processes of collective mindfulness. Research in Organizational Behavior 21, pp. 81–123. Cited by: §4.
  • K. E. Weick, K. M. Sutcliffe, and D. Obstfeld (2005) Organizing and the process of sensemaking. Organization science 16 (4), pp. 409–421. External Links: Document Cited by: §4.
  • K. E. Weick and K. M. Sutcliffe (2015) Managing the unexpected: sustained performance in a complex world. John Wiley & Sons. Cited by: §4.
  • K. E. Weick (1995) Sensemaking in organizations. Vol. 3, Sage. Cited by: §1, §3.1.1, §3.3.1, §3.4.1, §3.4.1, §3.4.1, §3.6.1, §3.7.1, §3.7.1, §3.
  • D. S. Weld and G. Bansal (2019) The challenge of crafting intelligible intelligence. Communications of the ACM 62 (6), pp. 70–79. External Links: Document Cited by: §2.1.
  • M. Wenman (2013) Agonistic democracy: constituent power in the era of globalisation. Cambridge University Press. Cited by: §4.3.
  • T. Winograd, F. Flores, and F. F. Flores (1986) Understanding computers and cognition: a new foundation for design. Intellect Books. Cited by: §3.5.1.
  • J. Zeng, B. Ustun, and C. Rudin (2017) Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180 (3), pp. 689–722. External Links: Document Cited by: §1, §2.1.
  • W. Zhang and B. Y. Lim (2022) Towards relatable explainable ai with the perceptual process. In CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–24. External Links: ISBN 9781450391573, Link, Document Cited by: §2.1.
  • J. Zhu, A. Liapis, S. Risi, R. Bidarra, and G. M. Youngblood (2018) Explainable ai for designers: a human-centered perspective on mixed-initiative co-creation. In 2018 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1–8. External Links: Document Cited by: §2.1, §2.2.