Technology developers, researchers, policymakers, and others have identified the design and development process of AI systems as a site for interventions to promote more ethical and just ends for AI systems (Holstein et al., 2019; Madaio et al., 2020; Rakova et al., 2020; Schiff et al., 2020). Recognizing this opportunity, researchers, practitioners, and activists have created a plethora of tools, resources, guides, and kits—of which the dominant paradigm is a “toolkit” (Lee and Singh, 2021; Richardson et al., 2021)—to promote ethics in AI design and development. Toolkits help technology practitioners and other stakeholders surface, discuss, or address ethical issues in their work. However, as the field appears to coalesce around this paradigm, it is critical to consider how these toolkits help to define and shape that work.
Creating toolkits alone will not be sufficient to address ethical issues. They must be adopted and used in practice within specific organizational contexts. Previous reviews of ethics and fairness toolkits focus on usability and functionality (Lee and Singh, 2021; Richardson et al., 2021); here, we take a more critical approach to understand how toolkits, like all tools, encode assumptions about what it means to do the work of ethics. In other words, how do toolkits envision the work of AI ethics? Specifically, we ask:
What are the discourses of ethics that ethical AI toolkits draw on to legitimize their use?
Who do the toolkits imagine as doing the work of addressing ethics in AI?
What do toolkits imagine to be the specific work practices of addressing ethics in AI?
To do this, we compiled and qualitatively coded a corpus of 27 AI ethics toolkits (broadly construed) to identify the discourses about ethics, the imagined users of the toolkits, and the work practices the toolkits envision and support. We found gaps between the types of stakeholders and work practices the toolkits call for and the support they provide. While framing ethics and fairness as sociotechnical issues that require diverse stakeholder involvement and engagement, many of the toolkits focused on technical approaches for individual technical practitioners to undertake. With few exceptions, toolkits lacked guidance on how to involve more diverse stakeholders or how to navigate organizational power dynamics when addressing AI ethics.
We provide recommendations for designers of AI ethics toolkits—both future and existing—to (1) embrace the non-technical dimensions of AI ethics work, (2) support the work of engaging with stakeholders¹ from non-technical backgrounds, and (3) structure the work of AI ethics as a problem for collective action. We contemplate how we, as a research community, can facilitate toolkits that achieve these goals, and how we might create metaphors and formats beyond toolkits that resist the solutionism prevalent in today’s resources.

¹Here, we use the term “stakeholder” expansively, to include potential users of the toolkits, others who may be part of the AI design, development, and deployment process, and other direct and indirect stakeholders who may be impacted by AI systems. We take this expansive approach following Lucy Suchman’s work complicating the notion of the user (Suchman, 2002), as well as Forlizzi and Zimmerman’s call for more attention to stakeholders beyond end users (Forlizzi and Zimmerman, 2013). In cases where we specifically mean the users of the toolkit, we use the term “user.”
2.1.1. As a genre
What sort of thing is a toolkit? At their core, toolkits are curated collections of tools and materials. Examples abound: do-it-yourself construction toolkits, first aid kits, traveling salesman kits, and research toolkits for, e.g., conducting participatory development efforts in rural communities (Mattern, 2021; Kelty, 2018). If we view them as a genre of communication (cf. Yates and Orlikowski, 1992), we can see how their design choices structure their users’ actions and interactions by conveying expectations for how they might be used. As Mattern has argued, toolkits make particular claims about the world through their design—they construct an imagined user, make an implicit argument about what forms of knowledge matter, and suggest visions for the way the world should be (Mattern, 2021). As a genre of communication, toolkits suggest a set of practices in a commonly recognized form; they formalize complex processes, but in so doing, they may flatten nuance and suggest that the tools to solve complex problems lie within the confines of the kit (Mattern, 2021; Kelty, 2018). While artifacts can make certain practices legible, understandable, and knowable across different contexts, they can also abstract away from local situated practices (Scott, 1998). Moreover, toolkits work to configure what Goodwin calls professional vision: “socially organized ways of seeing and understanding events that are answerable to the distinctive interests of a particular social group” (Goodwin, 2015, p. 606). This professional vision has political implications: in Goodwin’s analysis, U.S. policing creates “suspects” to whom “use of force” can be applied (Goodwin, 2015, p. 616); it is thus critical to examine how toolkits may configure the professional vision of AI practitioners working on ethics.
2.1.2. In AI ethics
In light of AI practitioners’ needs for support in addressing the ethical dimensions of AI (Holstein et al., 2019), technology companies, researchers at FAccT, AIES, CHI, and other venues, and others have developed numerous tools and resources to support that work, with many such resources taking the form of toolkits (Lee and Singh, 2021; Richardson et al., 2021; Morley et al., 2021; Krafft et al., 2021; Shen et al., 2021a; Gebru et al., 2021; Mitchell et al., 2019). Several papers have performed systematic meta-reviews and empirical analyses of AI ethics toolkits (Lee and Singh, 2021; Morley et al., 2021; Richardson et al., 2021; Ayling and Chapman, 2021). Ayling and Chapman (2021) perform a descriptive analysis of AI ethics toolkits, identifying stakeholder types common across toolkits and the stages in the organizational lifecycle at which various toolkits are applied. Lee and Singh (2021) take a normative look at six open source fairness toolkits, using surveys and interviews with practitioners to understand the strengths and weaknesses of these tools. Richardson et al. (2021) conducted a simulated ethics scenario with ML practitioners, observing their experience using various ethics toolkits to inform recommendations for their design. Morley et al. (2021) propose a typology of AI ethics approaches condensed from a variety of toolkits, also critiquing toolkits’ lack of usability and their focus on individuals rather than social groups. Crockett et al. (2021) analyze 77 AI ethics toolkits, finding that many lack instructions or training to facilitate adoption; they focus on the toolkits’ dominant principles and product lifecycle stages, and measure how well the toolkits may be adopted by small and medium enterprises.
In technology fields other than AI ethics, others have studied how design toolkits shape work practices. Chivukula et al. (2021) identify how toolkits operationalize ethics, who they address as their audience, and the specific theories of change they embody. Pierce et al. (2018)’s analysis of cybersecurity toolkits reveals a complex set of “differentially” vulnerable persons, all attempting to achieve security for their socially situated needs. Building on prior empirical work evaluating the functionality and usability of AI ethics toolkits, we take a critical approach to understand the work practices that toolkits envision for their imagined users, and how those work practices might be enacted in particular sites of technology production. In other words, we focus our analysis on how toolkits help configure the organizational practice of AI ethics.
2.2. AI Ethics in Organizational Practice
As the field of AI ethics has moved from developing high-level principles (Jobin et al., 2019) to operationalizing those principles in particular sets of practices (Mittelstadt, 2019; Schiff et al., 2020), prior research has identified the crucial role that social and organizational dynamics play in whether and how those practices are enacted in the organizational contexts where AI systems are developed (Metcalf et al., 2019; Madaio et al., 2020; Rakova et al., 2020). Substantial prior work has identified the crucial role of organizational dynamics (e.g., workplace politics, institutional norms, organizational culture) in shaping technology design practices more broadly (Suchman, 2002; Wong, 2021; Shilton, 2013; Neff, 2020). Prior ethnographic research on the work practices of data scientists has identified how technical decisions are never just technical—they are often contested and negotiated by multiple actors (e.g., data scientists, business team members, user researchers) within their situated contexts of work (Passi and Barocas, 2019; Passi and Jackson, 2018). Passi and Sengers (2020) discuss how such negotiations were shaped by the organizations’ business priorities, and how the culture and structure of those organizations legitimized technical knowledge over other types of knowledge and expertise, in ways that shaped how negotiations over technical design decisions were resolved. These dynamics are found across a range of technology practitioners, including user experience professionals (Wong, 2021; Chivukula et al., 2020), technical researchers (Shilton, 2013), and privacy professionals (Bamberger and Mulligan, 2015).
Prior research on AI ethics work practices has similarly identified how the organizational contexts of AI development shape practitioners’ practices for addressing ethical concerns. Metcalf et al. (2019) explored the recent institutionalization of ethics in tech companies by tracing the roles and responsibilities of so-called “ethics owners.” In contrast with ethics owners, who may have responsibility over the ethical implications of AI, Madaio et al. (2020) identified how social pressures on AI practitioners (e.g., data scientists, ML engineers, AI product managers) to ship products on rapid timelines disincentivized them from raising concerns about potential ethical issues. Taking a wider view, Rakova et al. (2020) discussed how AI development suffers from misaligned incentives and a lack of organizational accountability structures to support proactive anticipation of, and work to address, ethical AI issues. However, as resources to support AI ethics work have proliferated—including AI ethics toolkits—it is not clear to what extent the designers of those resources have learned the lessons of this research on how organizational dynamics may shape AI ethics work in practice.
3.1. Researchers’ positionality
The three authors share an interest in issues related to fairness and ethics in AI and ML systems, and have formal training in human-computer interaction and information studies, but also draw on interdisciplinary research fields studying the intersections of technology and society. All three authors are male, and live in the United States, where they work at academic and industry research institutions. One author’s prior research is situated in values in design, studying the practices used by user experience and other technology professionals to address ethical issues in their work, including the organizational power dynamics involved in these practices. Another author’s prior work has focused on how AI practitioners conceptualize fairness and address it in their work practices. He has conducted qualitative research with AI practitioners, has contributed to multiple resources for fairness in AI, and currently works at a large technology company as an AI fairness researcher. The third author has built course materials to teach undergraduate and graduate students how to identify and ameliorate bias in machine learning algorithms, and has reflected on the ways that students are not exposed to fairness in technical detail during their coursework. In the limitations (Section 5.3), we discuss ways in which our positionality may have shaped our approach to the research questions, data collection, and data analysis and interpretation.
3.2. Corpus development
We conducted a review of existing ethics toolkits, curated to explore the breadth of ways that ethical issues are portrayed in relation to developing AI systems. We began by conducting a broad search for such artifacts in May–June 2021. We searched in two ways. First, we looked at references from recent research papers from FAccT and CHI that survey ethical toolkits (e.g., Lee and Singh, 2021; Richardson et al., 2021). Second, following the approach in Lee and Singh (2021), we emulated the position of a practitioner looking for ethical toolkits and conducted a range of Google searches using the terms: “AI ethics toolkit,” “AI values toolkit,” “AI fairness toolkit,” “ethics design toolkit,” and “values design toolkit.” Several search results provided artifacts such as blog posts or lists of other toolkits, and many toolkits appeared in results from multiple search terms.² We shared and discussed these resources with each other to determine what might (not) be considered a toolkit (for instance, we decided to exclude ethical oaths and compilations of tools).³ From these search processes, we initially identified 58 unique candidate toolkits for analysis.

²While not all toolkits specifically focused on AI (some focused on “algorithms” or “design”), their content and their inclusion in search results made it reasonably likely that a practitioner would consult the resource when deciding how to enact AI ethics.

³Note that “toolkit” as used in this paper is an analytical category chosen by the researchers to search for and describe the artifacts being studied. Not all the artifacts we analyzed explicitly described themselves as toolkits. See the Appendix for more details about the toolkits.
Our goal was to identify a subset of toolkits for deeper qualitative analysis in order to sample for a variety of toolkits (rather than attempt to create an exhaustive or statistically representative sample). After reading through the toolkits, we discussed potential dimensions of variation, including: the source(s) of the toolkit (e.g., academia, industry, etc), the intended audience or user, form factor(s) of the toolkit and any guidance it provided (e.g., code, research papers, documentation, case studies, activity instructions, etc.), and its stated goal(s) or purpose(s). We also used the following criteria to narrow the corpus for deeper qualitative analysis:
The toolkit’s audience should be a stakeholder related to the design, deployment, or use of AI systems. This led us to exclude toolkits such as Shen et al.’s value cards (Shen et al., 2021a), designed primarily for use in a student or educational setting, but not to exclude toolkits such as Krafft et al. (2021), intended to be used by community advocates.
The toolkit should provide specific guidance or actionable items to its audience, which could be technical, organizational, or social actions. Artifacts that provided lists of other toolkits or only provided informational materials were excluded (e.g., a blog post advocating for greater use of value-sensitive design (Shonhiwa, 2020)).
Given our focus on practice, the toolkit should have some indication of use (by stakeholders either internal or external to companies). Although we are unable to validate the extent to which each toolkit has been adopted, we used a series of proxies to estimate which toolkits are likely to have been used by practitioners, including whether a toolkit appeared in practitioner-created lists of resources, its search result rankings, or (for open source code toolkits) indications of community use or contributions. One author also works in an industry institution and was able to provide further insight into toolkit usage by industry teams. This criterion excluded some toolkits that were created as part of academic papers but did not seem to be more broadly used by practitioners at the time of sampling, such as FairSight (Ahn and Lin, 2020).
We independently reviewed the toolkits for inclusion, exclusion, or discussion. As a group, we discussed toolkits that we either marked for discussion or rated differently. To resolve disagreements, we aimed for variation along multiple dimensions (a toolkit that substantially overlapped with an already included toolkit was less likely to be included). We did not seek to capture an exhaustive sample, nor do we claim that the corpus is statistically representative. The final corpus includes 27 toolkits, which are summarized in Section 3.4 and fully listed in Appendix A.
3.3. Corpus Analysis
In a first round of analysis, we conducted an initial coding of the 27 toolkits based on the following dimensions: the source(s) of the toolkit (e.g., academia or industry), the intended audience or user, its stated goal(s), and references to the ML pipeline (although many of these dimensions were explicitly stated in the toolkits’ documentation, some required interpretive coding). We resolved all disagreements through discussion amongst all three authors. We used the results of this initial coding to inform our discussions of which toolkits to include in the corpus, as well as to inform a second round of analysis. We then began a second round of more open-ended inductive qualitative analysis based on our research questions. From reading through the toolkits, the authors discussed potential emerging themes. Based on these themes, we decided to ask the following questions of each of the toolkits:
What language does the toolkit use to describe values and ethics?
What does the toolkit say about the users and other stakeholders of the AI systems to whom the toolkit aims its attention?
What type of work is needed to enact the toolkit’s guidance in practice?
What does the toolkit say about the organizational context in which workers must apply the toolkit?
Each author read closely through one third of the toolkits, found textual examples that addressed each of these questions, and posted those examples onto sticky notes in an online whiteboard. Collectively, all the authors conducted thematic analysis and affinity diagramming on the online whiteboard, inductively clustering examples into higher-level themes (following (Braun and Clarke, 2006)), which we report on in the findings section.
3.4. Corpus Description
We briefly describe our corpus of 27 toolkits based on our first round of analysis. A full listing is in Appendix A. The toolkit authors include: technology companies, university centers and academic researchers, non-profit organizations or institutes, open source communities, design agencies, a government agency, and an individual tech worker. The toolkits’ form factors vary greatly as well. Many were technical in nature, such as open source code, proprietary code, accompanying documentation, accompanying tutorials, a software product, or a web-based tool. Other common forms included exercise or activity instructions, worksheets, guides or manuals, frameworks or guidelines, checklists, or cards. Several included informational websites or reading materials. Considering the toolkits’ audiences, most are targeted towards technical audiences such as developers, data scientists, designers, technology professionals or builders, implementation or product teams, analysts, or UX teams. Some are aimed at different levels within organizations, including: managers or PMs, executive leadership, internal stakeholders, team members, or organizations broadly. Some toolkits’ audiences include people outside of technology companies, including: policymakers or government leaders, advocates, software clients or customers, vendors, civil society organizations, community groups, and users. We elaborate more on the toolkits’ intended audiences in Sec. 4.2.
We begin our findings with a description of the language toolkits use to describe and frame the work of AI ethics (RQ1). We then discuss the audiences envisioned to use the toolkits (RQ2); and close with what the toolkits envision to be the work of AI ethics (RQ3).
4.1. Language, framing, and discourses of ethics (RQ1)
4.1.1. Motivating Ethics: Harms, Risks, Opportunities, and Scale
We first look at how the toolkits motivate their use. Often, they articulate a problem that the toolkit will help address. One way of articulating a problem is identifying how AI systems can have effects that harm people. In such cases, toolkits motivate ethical problems by highlighting harms to people outside the design and development process—a group that Pfaffenberger terms the “impact constituency,” the “individuals, groups, and institutions who lose as a technology diffuses throughout society” (Pfaffenberger, 1992, p. 297). For instance, Fairlearn describes unfairness “in terms of its impact on people — i.e., in terms of harms — and not in terms of specific causes, such as societal biases, or in terms of intent, such as prejudice”. Other toolkits gesture towards the “impact” or “unintended consequences” of systems.
Conversely, other toolkits frame problems by articulating how AI systems can present risks to the organizations developing or deploying them. They highlight potential business, financial, or reputational risks, or relate AI ethics to issues of corporate risk management more broadly. The Ethics & Algorithms toolkit, aimed at governments and organizations who are procuring and deploying AI systems, describes itself as “A risk management framework for governments (and other people too!) to approach ethical issues.” Other toolkits suggest that they can help manage business risks, in part by generating governance and compliance reports. In contrast with the language of harms, which focuses on people who are affected by AI systems (often by acknowledging historical harms that different groups have experienced), the language of risk is more forward facing, focusing on the potential for something to go wrong and how it might affect the organization developing or deploying the AI system—leading the organization to try to prepare contingencies for the possible negative futures it can foresee for itself.
Not all toolkits frame AI ethics as avoiding negative outcomes, however. The integrate.ai guide uses the term “opportunity,” framing AI ethics in terms of pursuing positive opportunities or outcomes. The guide later argues that AI ethics can be part of initiatives “incentivizing risk professionals to act for quick business wins and showing business leaders why fairness and transparency are good for business”. The IDEO AI Ethics cards (which in some sections also frame AI ethics in terms of harms to people) also discuss capturing positive potential, writing: “In order to have a truly positive impact, AI-powered technologies must be grounded in human needs and work to extend and enhance our capabilities, not replace them”. Across these examples, AI ethics is framed as a way for businesses or the impact constituency to capture “upside” benefits of technology through design, development, use, and business practices.
Some toolkits imagine that the positive or negative impacts of AI technologies will occur at a global scale. This is evidenced by statements such as: “your [technology builders’] work is global. Designing AI to be trustworthy requires creating solutions that reflect ethical principles deeply rooted in important and timeless values”; or “Data systems and algorithms can be deployed at unprecedented scale and speed—and unintended consequences will affect people with that same scale and speed”. Framing ethics globally perhaps draws attention to potential non-obvious harms or risks that might occur, prompting toolkit users to consider broader and more diverse populations who interact with AI systems. At the same time, the language of AI ethics operating at a global scale—and thus addressable at a global scale—also suggests a shared universal definition of social values, or suggests that social values have universally shared or similar impacts. This view of values as a stable, universal phenomenon has been critiqued by a range of scholars who discuss how social values are experienced in different ways, and are situated in local contexts and practices (Le Dantec et al., 2009; Houston et al., 2016; JafariNaimi (Parvin) et al., 2015; Shilton et al., 2014; Sambasivan et al., 2021; Madaio et al., 2021).
4.1.2. Sources of Legitimacy for Ethical Action
Toolkits’ use of language also claims authority from existing discourses about what constitutes an ethical problem and how problems should be addressed. These claims help connect the toolkits’ practices to a broader set of practices or frameworks that may be more widely accepted or understood, helping to legitimize the toolkits’ perspectives and practices, and providing a useful tactical alignment between the toolkit and existing organizational practices and resources.⁵

⁵Perhaps surprisingly, almost none of the toolkits provide an explicit discussion of philosophical ethical frameworks. (While toolkits may implicitly draw on different ethical theories, our focus in this analysis is on the explicit theories, discourses, and frameworks that are referred to in the text of the toolkits and their supporting documentation.) One exception is the Design Ethically toolkit, which provides a brief overview of deontological ethics and consequentialism, calling them “duty-based” and “results-based”.
Several toolkits adopt the language of “responsible innovation.”
The Consequence Scanning toolkit was developed in the U.K. and calls itself “an Agile event for Responsible Innovators”. The integrate.ai toolkit is titled “Responsible AI in Consumer Enterprise”. Fairlearn notes that its community consists of “responsible AI enthusiasts”. Several toolkits in our corpus are listed as part of Microsoft’s “responsible AI” resources [24, 25, 26]. There seems to be rhetorical power in aligning these toolkits with practices of responsible innovation, although questions about which people or groups the companies or toolkit users are responsible to are not explicitly discussed. More broadly, what it means to align toolkits with responsible innovation is itself an open question.⁶

⁶With origins in the rise of science and technology as a vector of political power in the 20th century (Stilgoe et al., 2013), “responsible innovation” frames free enterprise as the agent of ethics, implicitly removing from the frame policymakers, regulation, and other forms of popular governance or oversight. Future work should investigate more deeply what discursive work “responsible innovation” does in the context of AI ethics, particularly as it concerns private enterprise.
Other toolkits look to external laws and standards as a legitimate basis for action; ethics is thus conceptualized as complying and acting in accordance with the law. Audit-AI, a tool that measures discriminatory patterns in data and machine learning predictions, explicitly cites U.S. labor regulations set by the Equal Employment Opportunity Commission (EEOC), writing that “According to the Uniform Guidelines on Employee Selection Procedures (UGESP; EEOC et al., 1978), all assessment tools should comply to fair standard of treatment for all protected groups”. Audit-AI similarly draws on EEOC practices when choosing a p-value for statistical significance and choosing other metrics to define bias. This aligns the toolkit with a regulatory authority’s practices as the basis for ethics; however, it does not explicitly question whether this particular definition of fairness is applicable beyond the U.S. cultural and legal employment context.
Several toolkits frame ethics as upholding human rights principles, drawing on the UN Declaration of Human Rights. In our corpus, this occurred most prominently in Microsoft’s Harms Modeling toolkit: “As a part of our company’s dedication to the protection of human rights, Microsoft forged a partnership with important stakeholders outside of our industry, including the United Nations (UN)”. Supported by the UN’s Guiding Principles on Business and Human Rights (United Nations Human Rights Office of the High Commissioner, 2011), many large technology companies have made commitments to upholding and promoting human rights.⁷ This corresponds with prior research showing that human rights discourses provide one source of values for AI ethics guidelines more broadly (Jobin et al., 2019). Many companies have existing resources or practices around human rights, such as human rights impact assessments (Metcalf et al., 2021; Kemp and Vanclay, 2013). Framing AI ethics as a human rights issue may help tactically align the toolkit with these pre-existing initiatives and practices.

⁷It has been argued that involving businesses in the human rights agenda can provide legitimacy and disseminate human rights norms more broadly than nation states could alone (Ruggie, 2017). However, more recent research and commentary has been critical of technology companies’ commitments to human rights (Greene et al., 2019), with a 2019 UN report stating that big technology companies “operate in an almost human rights-free zone” (Alston, 2019).
4.2. The envisioned users and other stakeholders for toolkits (RQ2)
This section asks, who is to do the work of AI ethics? The design and supporting documentation of toolkits presupposes a particular audience—or, as Mattern (2021)
describes it, they “summon” particular users through the types of shared understanding, background knowledge, and expertise they draw on and presume their users to have. The toolkits in our corpus mention several specific job categories internal to the organizations in question: software engineers; data scientists; cross-functional, cross-disciplinary teams; risk or internal governance teams; C-level executives; and board members. To a lesser extent, they mention designers. All of these categories of stakeholders pre-configure specific logics of labor and power in technology design. Toolkits that mention engineering and data science roles focus on ethics as the practical, humdrum work of creating engineering specifications and then meeting those specifications. (One toolkit, Deon, is a command-line utility for generating “ethics checklists.”) For C-level executives and board members, toolkits frame ethics as both a business risk and a strategic differentiator in a crowded market. As the integrate.ai Responsible AI guide states, “Sustainable innovation means incentivizing risk professionals to act for quick business wins and showing business leaders why fairness and transparency are good for business.”
Of course, stakeholders involved in AI design and development always already have their roles pre-configured by their job titles and organizational positionality; roles that the toolkits invoke and summon in their description of potential toolkit users and other relevant stakeholders. They (for example, “business leaders”) are sensitized toward particular facets of ethics, which are made relevant to them through legible terms (for example, “risk”). As such, the nature of these internal (i.e., internal to the institutions developing AI) stakeholders’ participation in the work of ethics is bound to vary. On what terms do these internal stakeholders get to participate? Borrowing from Hoffmann (2020) who in turn channels Ahmed (2012), what are the “terms of inclusion” for each of these internal stakeholders?
Technically-oriented tooling (like Google’s What-If Tool) envisions technical staff who contribute directly to production codebases. Although toolkits rarely address the organizational positioning of engineers (and their concerns) directly, they are specific about the mechanism of action and means of participation for these technical tools. One runs statistical tests, provides assurances around edge cases, and keeps track of statistical markers like disparate impact or the p% rule.
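To make concrete the kind of statistical marker these technical tools track, the following is a minimal sketch of a disparate-impact (p% rule) check. The function names, example data, and the 0.8 threshold (from the EEOC’s four-fifths rule) are illustrative assumptions, not drawn from any particular toolkit’s API.

```python
# Illustrative sketch of a disparate-impact ("p% rule") check, the sort of
# marker technical fairness toolkits compute. Assumes binary outcomes
# (1 = favorable, e.g., "hired") and two groups defined by one protected
# attribute; names and data are hypothetical.

def selection_rate(outcomes):
    """Fraction of favorable outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower group's selection rate to the higher group's.
    Under the EEOC four-fifths rule, a ratio below 0.8 is commonly
    treated as evidence of adverse impact."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    low, high = sorted([rate_a, rate_b])
    return low / high

# Example: group A selected 4 of 10 applicants; group B selected 7 of 10.
group_a = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
group_b = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3))  # 0.4 / 0.7 ≈ 0.571, below the 0.8 threshold
```

Notably, a check like this says nothing about who chooses the groups, the outcome definition, or the threshold—precisely the situated, organizational questions the toolkits leave unaddressed.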
For social and human-centered practices, the terms of participation are less clear. The rhetoric of these toolkits is one of participation—between cross-functional teams (composed of different roles), between C-suite executives and tech labor, and between stakeholders both internal and external to the organization. But no toolkit quite specifies how this engagement should be enacted. Methodological detail is scant, let alone acknowledgment of power differentials between workers and executives, or tech workers and external stakeholders. Even those rare toolkits that do acknowledge power as a factor—for example, what the Ethics & Algorithms toolkit lists as its “mitigation #1”—under-specify how this power should be dealt with.
“Mitigation 1. Effective community engagement is people-centered, partnerships-driven, and power-aware. Engagement with the community should be social (using existing social networks and connections), technical (skills, tools, and digital spaces), physical (commons), and on equal terms (aware of and accounting for power).” 
While this “mitigation” refers specifically to the need to be aware of power, to account for power, it offers no specific strategies to become aware, to do such “accounting.” Who does that work, and how?
This question brings us to stakeholders external to companies, described as “the community” above. This group variously includes clients, vendors, customers, users, civil society groups, journalists, advocacy groups, community members, and others impacted by AI systems. These stakeholders are imagined as outside the organization in question, sometimes by several degrees (although some, such as customers, clients, and vendors, may be variously entangled with the organization’s operations). For example, the Harms Modeling toolkit lists “non-customer stakeholders; direct and indirect stakeholders; marginalized populations.” The Community Jury mentions “direct and indirect stakeholders impacted by the technology, representative of the diverse community in which the technology will be deployed.” Google’s Model Cards describes its artifacts as being for “everyone… experts and non-experts alike.” None of those toolkits, however, provide guidance on how to identify specific stakeholders, or how to engage with them once they have been identified. Indeed, the work these external stakeholders are imagined to do in these circumstances is under-specified. Their specific roles are under-imagined, relegated to the vague “raising concerns” or “providing input” from “on-the-ground perspectives.” We return to this point in the following section.
4.3. Work practices envisioned by toolkits (RQ3)
Much of the work of ethics as imagined by the toolkits focuses on technical work with ML models, in specific workflows and tooling suites, despite claims that fairness is sociotechnical. Many toolkits aimed at design and development teams call for engagement with stakeholders external to the team or company—and for such stakeholders to inform the team about potential ethical impacts, or for the AI design team to inform and communicate about ethical risks to stakeholders. However, there is little guidance provided by the tools on how to do this; these imagined roles for stakeholders beyond the development team are framed as informants or as recipients of information (without the ability to shape systems’ designs) (cf. Delgado et al., 2021; Sloane et al., 2020); and the technical orientation of many toolkits may preclude meaningful participation by non-technical stakeholders. As framed by the toolkits, the work of ethics is often imagined to be done by individual data scientists or ML teams, both of whom are imagined to have the power to influence key design decisions, without considering how organizational power dynamics may shape those processes. The imagined work of ethics here is largely individual self-reflection or team discussions, but without a theory of change for how that self-reflection or those discussions might lead to meaningful organizational shifts.
4.3.1. Emphasis on technical work
Much of the work of ethics as imagined by the toolkits (and their designers) is focused on technical work with ML models, in particular workflows and tooling suites. This is in spite of the claims from some toolkits that “fairness is a sociotechnical problem” [5, 25]. In practice, this means that tools’ imagined (and suggested) uses are oriented around the ML lifecycle, often integrated into specific ML tool pipelines. For instance, Amazon’s SageMaker describes how it provides the ability to “measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).” Other toolkits go further, and are specifically designed to be implemented into particular ML programming tooling suites, such as Scala or Spark [18], TensorFlow, or Google Cloud AI platform [10, 20]. Some toolkits, albeit substantially fewer, provide recommendations for how toolkit users might make different choices about how to use the tool depending on where they are in their ML lifecycle.
However, this emphasis on technical functionality offered by the toolkits, as well as the fact that many are designed to fit into ML modeling workflows and tooling suites, suggests that non-technical stakeholders (whether they are non-technical workers involved in the design of AI systems, or stakeholders external to technology companies) may have difficulty using these toolkits to contribute to the work of ethical AI. At the very least, it implies that the intended users must have sufficient technical knowledge to understand how they would use the toolkit in their work—and further reinforces that the work of AI ethics is technical in nature, despite claims to the contrary [5, 25]. In this envisioned work, what role is there for designers and user researchers, for domain experts, or for people impacted by AI systems, in doing the work of AI ethics?
4.3.2. Calls to engage stakeholders, but little guidance on how
One of the key elements of AI ethics work suggested by toolkits involves engaging stakeholders external to the development team or their company (as discussed in Sec. 4.2). However, many toolkits lacked specific resources or approaches for how to do this engagement work. Toolkits often advocated for working with diverse groups of stakeholders to inform the development team about potential impacts of their systems, or to “seek more information from stakeholders that you identified as potentially experiencing harm.” For some toolkits, this was envisioned to take the form of user research, recommending that teams “bring on a neutral user researcher to ensure everyone is heard” (what it means for a researcher to be “neutral” is left to the imagination), or to “help teams think through how people may interact with a design.” Others envisioned this information gathering as workshop sessions or discussions, as in the consequence scanning guide or community jury approach.
Although some toolkits called for AI development teams to learn about the impacts of their systems from external stakeholders, a smaller subset were designed to support external stakeholders or groups in better understanding the impacts of AI. For instance, the Algorithmic Equity Toolkit was designed to help citizens and community groups “find out more about a specific automated decision system” by providing a set of questions for people to ask policymakers and technology vendors. In addition, some developer-facing tools such as Model Cards were designed to provide information to “help advocacy groups better understand the impact of AI on their communities.”
Despite these calls for engagement, toolkits lack concrete resources for precisely how to engage external stakeholders in either understanding the ethical impact of AI systems or involving them in the process of their design to support more ethical outcomes. Some toolkits explicitly name particular activities that would benefit from involving a wide range of stakeholders, such as the Harms Modeling toolkit: “You can complete this ideation activity individually, but ideally it is conducted as collaboration between developers, data scientists, designers, user researcher, business decision-makers, and other disciplines that are involved in building the technology.” The stakeholders named by the Harms Modeling toolkit, however, are still “disciplines involved in building the technology” and not, for instance, people who are harmed or otherwise impacted by the system outside of the company. Others, such as the Ethics & Algorithms toolkit, broaden the scope, recommending that “you will almost certainly need additional people to help - whether they are stakeholders, data analysts, information technology professionals, or representatives from a vendor that you are working with.” However, despite framing the activity as a “collaboration” or “help,” such toolkits provide little guidance for how to navigate the power dynamics or organizational politics involved in convening a diverse group to use the toolkit.
4.3.3. Theories of change
Ethical AI toolkits present different theories of change for how practitioners using the toolkits may effect change in the design, development, or deployment of AI/ML systems. For many toolkits, individuals within the organization are envisioned to be the catalysts for change via oaths or “an individual exercise” where individuals are prompted to “facilitat[e] your own reflective process.” This approach is aligned with what Boyd and others have referred to as developing ethical sensitivity (Boyd, 2020; Weaver et al., 2008). Some toolkits explicitly articulated the belief that individual practitioners who are aware of possible ethical issues may be able to change the direction of the design process. For instance, “The goal of Deon is to push that conversation forward and provide concrete, actionable reminders to the developers that have influence over how data science gets done” [12]. However, this belief that individual data scientists “have influence over how data science gets done” may be at odds with the realities of organizational power structures that determine how changes in AI design actually happen.
In other cases, the implicit theory of change involves product and development teams having conversations, which are then thought to lead to changes in design decisions towards more ethical design processes or outcomes. Some toolkits propose activities designed to “elicit conversation and encourage risk evaluation as a team.” Others start with individual ethical sensitivity, then move to team-level discussions, suggesting that the toolkit should “provoke discussion among good-faith actors who take their ethical responsibilities seriously.” Such group-level activities rely on having discussions with “good-faith actors,” presumably those who have developed some level of individual sensitivity to ethical issues. As one toolkit suggests for these group-level conversations, “There is a good chance someone else is having similar thoughts and these conversations will help align the team.” In this framing, the work of ethics involves finding like-minded individuals and getting to alignment within the team. However, this approach relies on the possibility of reaching alignment. As such, it may not provide sufficient support for individuals whose ethical views about AI differ from their team’s. Individuals may feel social pressure from others on their team to stay silent, or to not appear contrarian in the face of consensus from the rest of their team (cf. Madaio et al., 2020).
In fact, despite many toolkits’ claims to empower individual practitioners to raise issues, toolkits largely appeared not to address fundamental questions of worker power and collective action. For instance, the IDEO AI Ethics Cards state that “all team members should be empowered to trust their instincts and raise this Pause flag… at any point if a concept or feature does not feel human-centered,” and similarly the Design Ethically Toolkit advises that “Having a variety of different thinkers who are all empowered to speak in the brainstorm session makes a world of a difference.” However, the Design Ethically toolkit was the only example in our corpus that provided resources to support workplace organizing to meaningfully secure power for tech workers in driving change within their organizations.
Finally, other toolkits pose theories of change that suggest that pressure from external sources (i.e., media, public pressure or advocacy, or other civil society actors or organizations) may lead to changes in AI design and deployment (usually implied to be within corporate or government contexts). The Algorithmic Equity Toolkit, in particular, is explicitly designed to provide resources for “community groups involved in advocacy campaigns” to help support that advocacy work. Other toolkits, such as the Ethics & Algorithms Toolkit, focus on government agencies using AI that are “facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use.” As such, the toolkit offers resources for government agencies to respond to such pressure and provide more transparency and accountability in their algorithmic systems.
More generally, the toolkits enact some form of solutionism—the belief that ethical issues that may arise in AI design can be solved with the right tool or process (typically the approach they propose). Some tools [e.g., 2, 3, 10, 20] suggest that ethical values such as fairness can be achieved via technical tools alone: “If all fairness metrics are fair, The Bias Report will evaluate the current model as fair.” Some toolkits (albeit fewer) do note the limitations of purely technical solutions to fundamentally sociotechnical problems [3, 5, 10], as in AIF360’s documentation, which states that “the metrics and algorithms in AIF360… clearly do not capture the full scope of fairness in all situations” [3]. As the What-If tool documentation states, “There is no one right [definition of fairness], but we probably can agree that humans, not computers, are the ones who should answer this question.” However, even with these acknowledgements, the documentation goes on to note the important role that the toolkit plays in enabling humans to answer that question, as “What-If lets us play ‘what if’ with theories of fairness, see the trade-offs, and make the difficult decisions that only humans can make.”
These general framings suggest a particular flavor of solutionism, in which the work of ethics in AI design involves following a particular process (i.e., the one proposed by the toolkit). Toolkits propose ethical work practices that fit into existing development processes [e.g., 12], in ways that suggest that all that is needed is the addition of an activity or discussion prompt and not, for instance, fundamental changes to the corporate values systems or business models that may lead to harms from AI systems. Some toolkits were explicit that ethical AI work should not significantly disrupt existing corporate priorities, saying, “Business goals and ethics checks should guide technical choices; technical feasibility should influence scope and priorities; executives should set the right incentives and arbitrate stalemates.”
Throughout these toolkits, we observed a mismatch between the imagined roles and work practices for ethics in AI and the support the toolkits provided for achieving those roles and practices. Toolkits suggested multi-stakeholder approaches to addressing ethical issues in sociotechnical ways, but most toolkits provided little scaffolding for the social dimensions of ethics or for engaging stakeholders from multiple (non-technical) backgrounds. These technosolutionist approaches to AI ethics suggest that the toolkits may act as a “technology of de-politicization” (cf. Hitzig, 2020), sublimating sociopolitical considerations in favor of technical fixes. With few exceptions [e.g., 17], the toolkits took a decontextualized approach to ethics, largely divorced from the sociopolitical nuance of what ethics might mean in the contexts in which AI systems may be deployed, or how ethical work practices might be enacted within the organizational contexts of the sites of AI production (e.g., technology companies). In such a decontextualized view of ethics, toolkit designers envision individual users who have the agency to make decisions about the design of AI systems, and who are not subject to power dynamics within the workplace: organizational hierarchies, misaligned priorities, and incentives for ethical work practices—key considerations for the use of AI ethics toolkits, given the reality of business priorities and profit motives.
When toolkits did attend to how ethical work might fit within business processes, many of them strategically leveraged discourses of business risk and responsible innovation to help motivate adoption of ethics tools and processes. Tactically, this may allow toolkits to tap into existing institutional processes and resources (for example, mechanisms for managing legal liability). In so doing, companies sidestep questions of how logics of capital accumulation themselves shape the capacity for AI systems to exert harms and shape the sociotechnical imaginaries (cf. Jasanoff and Kim, 2015) for what ethics might mean—or foreclose alternative ways of conceptualizing ethics. As a result, ethical concerns may be sublimated to the interests of capital. In the following sections, we unpack implications of our findings for AI ethics toolkit designers and researchers.
5.1. Recommendations for toolkit design
Practitioners will continue to require support in enacting ethics in AI, and toolkits are one potential approach to provide such support, as evidenced by their ongoing popularity. Our findings suggest three concrete recommendations for improving toolkits’ potential to support the work of AI ethics. Toolkits should: (1) provide support for the non-technical dimensions of AI ethics work; (2) support the work of engaging with stakeholders from non-technical backgrounds; (3) structure the work of AI ethics as a problem for collective action.
5.1.1. Embrace the non-technical dimensions of ethics work
Despite emerging awareness that fairness is sociotechnical, the majority of toolkits provided resources to support technical work practices (despite calls for toolkit users to engage in other forms of work [e.g., 5]). This might entail resources to support understanding the theories and concepts of ethics in non-technical ways, as well as resources drawing from the social sciences for understanding stakeholders’ situated experiences and perceptions of AI systems and their impacts. (Note that Fairlearn has—since we conducted the data analysis for this paper—published resources in its user guide for understanding social science concepts such as construct validity for fairness (Jacobs and Wallach, 2021) and explanations of sociotechnical abstraction traps (Selbst et al., 2019).) For instance, toolkit designers might incorporate methods from qualitative research, user research, or value-sensitive design (e.g., Friedman et al., 2002), as some existing tools suggest. While some AI ethics education tools are beginning to be designed with these perspectives (e.g., value cards (Shen et al., 2021a)), fewer practitioner-oriented toolkits utilize them. As a precursor to this, practitioners may need support in identifying the stakeholders for their systems and use cases, in the contexts in which those systems are (or will be) deployed, including community members, data subjects, or others beyond the users, paying customers, or operators of a given AI system (Madaio et al., 2021). Approaches such as stakeholder mapping from fields like Human-Computer Interaction (e.g., Yoo, 2018) may be useful here, and such resources may be incorporated into AI ethics toolkits.
5.1.2. Support for engaging with stakeholders from non-technical backgrounds
Although many toolkits call for engaging stakeholders from different backgrounds and forms of expertise (internal stakeholders such as designers or business leaders; external stakeholders such as advocacy groups and policymakers), the toolkits themselves offer little support for how their users might bridge disciplinary divides. Toolkits should support this translational work. (Some emerging work is exploring the role of “boundary objects” (cf. Star, 1989) to help practitioners align on key concepts and develop a shared language, e.g., the PAIR Symposium 2020, although this work has not focused on ethics of AI specifically.) This might entail, for instance, asking what fairness means to the various stakeholders implicated in ethical AI, or communicating the output of algorithmic impact assessments (e.g., various fairness metrics) in ways that non-technical stakeholders can understand and work with (Cheng et al., 2021; Shen et al., 2020). The Algorithmic Equity Toolkit (whose design process is discussed in Krafft et al., 2021) tackles this challenge from the perspective of community members and groups, providing resources to these external stakeholders to support their advocacy work. Meanwhile, recent research has explored how to engage non-technical stakeholders in discussions about tradeoffs in model performance (e.g., Cheng et al., 2021; Shen et al., 2020, 2021a), or in participatory AI design processes more generally (Delgado et al., 2021; Sloane et al., 2020), although such approaches have not yet been incorporated into toolkits as far as we are aware. Moreover, approaches that stakeholders impacted by AI have taken to conduct “crowd audits” of algorithmic harms (e.g., Shen et al., 2021b) have not yet made their way into the toolkits we analyzed, where the results of such crowd audits might be used to shape AI practitioners’ development practices.
5.1.3. Structure the work of AI ethics as a problem for collective action
One question we found palpably missing in our discussions was: how do toolkits support stakeholders in grappling with the organizational dynamics involved in doing the work of ethics? Toolkits should structure ethical AI as a problem for collective action for multiple groups of stakeholders, rather than work for individual practitioners. Silbey has written about the “safety culture” promoted in other high-stakes industries (e.g., fossil fuel extraction), where the responsibility to avoid catastrophe is located in the behaviors and attitudes of individual actors (typically those with the least power in the organization), rather than systemic processes or organizational oversight (Silbey, 2009). This perspective may entail supporting collective action by workers within tech companies, fostering communities of practice of professionals working on ethical AI across institutions (to share knowledge and best practices, as well as shift professional norms and standards), or supporting collective efforts for ethical AI across industry professionals and communities impacted by AI. This might involve support for helping practitioners communicate with organizational leadership, advocating for the need to engage in ethical AI work practices, or advocating for additional time or resources to do this work. One form this might take is providing support for strategic alignment of ethics discourses with business priorities and discourses (e.g., business risk, responsible innovation, corporate social responsibility). However, given the risk that this approach might smuggle in business logics that subvert ethical aims (see Sec. 4.1), toolkit designers might instead consider how to support the users of their toolkits in becoming aware of the organizational power dynamics that may impact the work of ethics (e.g., power-mapping exercises (LittleSis, 2017)), including identifying institutional levers they can pull to shape organizational norms and practices from the bottom up.
This might also involve providing support for organizing collective action in the workplace, such as unions, tactical walkouts, or other uses of labor power based on workers’ role in technology production (Khovanskaya and Sengers, 2019; Wong, 2021; Stark et al., 2021; Ozoma, 2021). Prior research found that technology professionals pursuing design justice sought project- and institutional-level tools and interventions rather than individual-level ones (Spitzberg et al., 2020). Few toolkits we saw (with the Design Ethically toolkit as a notable exception) provide resources to inform and support practitioners about the role of collective action in ethical AI.
5.2. Reflections and implications for research
As the prior sections suggest, the content and guidance provided by toolkits constructs particular ways of seeing the world—what constitutes an ethical problem, who should be responsible for addressing those problems, and what are the legitimate practices for addressing them. These toolkits represent a form of “professional vision,” which, as Goodwin has argued, is how the discursive practices of professional cultures shape how we see the world in socially situated and historically constituted ways (Goodwin, 2015). Similarly, in Silbey’s work on industrial safety culture, she argues that disasters that are not spectacular or sudden—such as slow-acting oil leaks—are often ignored, “existing physically, but not in any organizationally cognizable form” (Silbey, 2009). For ethics of AI, the discursive practices instantiated in our tools shape how the field sees the ethical terrain for action—what are the objects of concern, how might they be made legible or amenable to action, what resources might be marshalled to address them, and by whom.
Moreover, the very choice of utilizing the metaphor and format of “toolkits” as a predominant way to address AI ethics suggests an orientation towards solutionism. While they provide a useful format for sharing information and practices across boundaries and contexts, an over-reliance on toolkits may risk decontextualizing (Kelty, 2018) or abstracting (Selbst et al., 2019) their information content away from the social and political contexts where AI systems are deployed and governed (Sambasivan et al., 2021; Stark et al., 2021), and from the organizational contexts in which those toolkits may be used (Suchman, 2002). (This pattern mirrors Scott’s concepts of legibility and simplification (Scott, 1998).) To be legible among communities of practice and organizational structures that seek to build systems at scale, toolkits render ethical practices in ways that are often simplified and do not account for the heterogeneity of contextual experiences and on-the-ground practices of doing AI ethics, requiring users who can do this difficult translation work.
What ways of “seeing” AI ethics do all toolkits miss? What are new ways of seeing that can produce new, practical interventions? New approaches might move beyond toolkits and look to other theories of change, such as political economy (Stark et al., 2021). However, we as authors note that our situatedness in particular debates in the West may occlude our sensitivity to alternative ethical frameworks. Indigenous notions of “making kin” (Lewis et al., 2018) could reveal radical new possibilities for what AI ethics could be, and by what processes it may be enacted. How can we, as a research community, make space for such alternatives? Following from this problem-posing orientation, we do not offer solutions here, but instead pose these as questions for researchers, practitioners, and communities to address through developing alternatives to the dominant paradigm of the toolkit. Some promising examples include the People’s Guide to AI zine (Onuoha and Nucera, 2018); J. Khadijah Abdurahman’s and We Be Imagining’s call for lighting “alternate beacons” to help “organize for different futures” for technology development (Abdurahman, 2021); and the AI Now Institute’s series on a new lexicon to offer narratives beyond those from the Global North to critically study AI (Raval and Kak, 2021). We call on the FAccT community to amplify and expand these efforts.
We examined a small subset of toolkits, which may not be representative of all AI ethics toolkits. Most of the toolkits we examined were from tech companies and academia, and we may thus miss out on toolkits developed by nonprofits, civil society, or government agencies. Furthermore, the toolkits we examined largely skewed towards industry practitioners as the envisioned users (with some exceptions), and were largely intended to fit into AI development processes (as suggested by the large proportion of toolkits that were open-source code). As such, future work should explore toolkits intended to be used by policymakers, civil society, or community stakeholders more generally. In addition, our corpus was built from search queries; searching for toolkits using terms we did not include here may identify toolkits absent from our corpus. More broadly, however, our positionality has shaped how we approach our research, including the research questions we chose, the toolkits we identified, and how we coded and interpreted our data. (Indeed, the corpus we developed may have been shaped by how search engines returned results to us—results likely fine-tuned to our relational identities as researchers in academia and industry living in the U.S.) As Sambasivan et al. (2021) (among others, such as Ding (2018)) have pointed out, AI ethics may mean different things in different cultural contexts, including relying on different legal frameworks, and aiming towards fundamentally different outcomes. Our corpus is necessarily partial and reflective of our positionality and cultural context.
This paper investigates how AI ethics toolkits frame and embed particular visions for what it means to do the work of addressing ethics. Based on our findings, we recommend that designers of AI ethics toolkits should better support the social dimensions of ethics work, provide support for engaging with diverse stakeholders, and frame AI ethics as a problem for collective action rather than individual practice. Toolkit development should be tied more closely to empirical research that studies the social, organizational, and technical work required to surface and address ethical issues. Creating tools or resources in a format that challenges the notions of the “toolkit” per se may open up the design space to foster new approaches to AI ethics. While no single artifact alone will solve all AI ethics problems, intentionally diversifying the forms of work that such artifacts envision and support may enable more effective ethical interventions in the work practices adopted by developers, designers, researchers, policymakers, and other stakeholders.
Acknowledgements. Withheld for blind review.
- A Body of Work That Cannot Be Ignored. Logic (15: Beacons).
- On being included. Duke University Press.
- FairSight: visual analytics for fairness in decision making. IEEE Transactions on Visualization and Computer Graphics 26 (1), pp. 1086–1095.
- Report of the Special Rapporteur on extreme poverty and human rights. Technical Report, Vol. 17564, United Nations, October.
- Putting AI ethics to work: are the tools fit for purpose? AI and Ethics, pp. 1–25.
- Privacy on the Ground: Driving Corporate Behavior in the United States and Europe. The MIT Press, Cambridge, Massachusetts.
- Ethical sensitivity in machine learning development. In Companion Publication of the 2020 Conference on Computer Supported Cooperative Work and Social Computing, pp. 87–92.
- Using thematic analysis in psychology. Qualitative Research in Psychology 3 (2), pp. 77–101.
- Soliciting stakeholders' fairness notions in child maltreatment predictive systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–17.
- Surveying the landscape of ethics-focused design methods. arXiv preprint arXiv:2102.08909.
- Dimensions of UX practice that shape ethical awareness. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–13.
- Building trustworthy AI solutions: a case for practical solutions for small businesses. IEEE Transactions on Artificial Intelligence, pp. 1–1.
- Stakeholder participation in AI: beyond "add diverse stakeholders and stir". arXiv preprint arXiv:2111.01122.
- Deciphering China's AI dream. Future of Humanity Institute Technical Report.
- Promoting service design as a core practice in interaction design. In Proceedings of the 5th International Congress of International Association of Societies of Design Research (IASDR), Vol. 13.
- Value sensitive design: theory and methods. University of Washington Technical Report (2-12).
- Datasheets for datasets. Communications of the ACM 64 (12), pp. 86–92.
- Professional vision. In Aufmerksamkeit, pp. 387–425.
- Better, Nicer, Clearer, Fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences.
- The normative gap: mechanism design and ideal theories of justice. Economics & Philosophy 36 (3), pp. 407–434.
- Terms of inclusion: data, discourse, violence. New Media & Society, 146144482095872.
- Improving fairness in machine learning systems: what do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–18.
- Values in repair. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16), New York, NY, USA, pp. 1403–1414.
- Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 375–385.
- Values as hypotheses: design, inquiry, and the service of values. Design Issues 31 (4), pp. 91–104.
- Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press.
- The global landscape of AI ethics guidelines. Nature Machine Intelligence, pp. 1–11.
- The participatory development toolkit.
- Human rights and impact assessment: clarifying the connections in practice. Impact Assessment and Project Appraisal 31 (2), pp. 86–96.
- Data rhetoric and uneasy alliances: data advocacy in US labor history. In Proceedings of the 2019 Designing Interactive Systems Conference, New York, NY, USA, pp. 1391–1403.
- An action-oriented AI policy toolkit for technology audits by community advocates and activists. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA, pp. 772–781.
- Values as lived experience: evolving value sensitive design in support of value discovery. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI '09), New York, NY, USA, p. 1141.
- The landscape and gaps in open source fairness toolkits. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–13.
- Making kin with the machines. Journal of Design and Science.
- Map the Power Toolkit.
- Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–14.
- Assessing the fairness of AI systems: AI practitioners' processes, challenges, and needs for support. arXiv preprint arXiv:2112.05675.
- Unboxing the toolkit.
- Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly 86 (2), pp. 449–476.
- Algorithmic impact assessments and accountability: the co-construction of impacts. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA, pp. 735–746.
- Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229.
- AI ethics—too principled to fail? arXiv preprint arXiv:1906.06668.
- From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. In Ethics, Governance, and Policies in Artificial Intelligence, pp. 153–183.
- From bad users and failed uses to responsible technologies: a call to expand the AI ethics toolkit. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 5–6.
- A People's Guide to AI. Allied Media Projects.
- The Tech Worker Handbook.
- Problem formulation and fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 39–48.
- Trust in data science: collaboration, translation, and accountability in corporate data science projects. Proceedings of the ACM on Human-Computer Interaction 2 (CSCW), pp. 1–28.
- Making data science systems work. Big Data & Society 7 (2), 2053951720939605.
- Technological dramas. Science, Technology, & Human Values 17 (3), pp. 282–312.
- Differential vulnerabilities and a diversity of tactics: what toolkits teach us about cybersecurity. Proceedings of the ACM on Human-Computer Interaction 2 (CSCW), pp. 1–24.
- Where responsible AI meets reality: practitioner perspectives on enablers for shifting organizational practices. arXiv preprint arXiv:2006.12358.
- A New AI Lexicon: responses and challenges to the critical AI discourse.
- Towards fairness in practice: a practitioner-oriented rubric for evaluating fair ML toolkits. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–13.
- The Social Construction of the UN Guiding Principles on Business & Human Rights.
- Re-imagining algorithmic fairness in India and beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA, pp. 315–328.
- Principles to practices for responsible AI: closing the gap. arXiv preprint arXiv:2006.04707.
- Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press, New Haven.
- Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68.
- Value Cards: an educational toolkit for teaching social impacts of machine learning through deliberation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA, pp. 850–861.
- Everyday algorithm auditing: understanding the power of everyday users in surfacing harmful algorithmic behaviors. arXiv preprint arXiv:2105.02980.
- Designing alternative representations of confusion matrices to support non-expert public understanding of algorithm performance. Proceedings of the ACM on Human-Computer Interaction 4 (CSCW2), pp. 1–22.
- How to see values in social computing: methods for studying values dimensions. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, New York, NY, USA, pp. 426–435.
- Values levers: building ethics into design. Science, Technology, & Human Values 38 (3), pp. 374–397.
- Human values matter: why value-sensitive design should be part of every UX designer's toolkit.
- Taming Prometheus: talk about safety and culture. Annual Review of Sociology 35, pp. 341–369.
- Participation is not a design fix for machine learning. arXiv preprint arXiv:2007.02423.
- Principles at Work: Applying "Design Justice" in Professionalized Workplaces. Technical report.
- The structure of ill-structured solutions: boundary objects and heterogeneous distributed problem solving. In Distributed Artificial Intelligence, pp. 37–54.
- Critical perspectives on governance mechanisms for AI/ML systems. In The Cultural Life of Machine Learning, pp. 257–280.
- Developing a framework for responsible innovation. Research Policy 42 (9), pp. 1568–1580.
- Located accountabilities in technology production. Scandinavian Journal of Information Systems 14 (2), p. 7.
- Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework. Technical report, United Nations.
- Ethical sensitivity in professional practice: concept analysis. Journal of Advanced Nursing 62 (5), pp. 607–618.
- Tactics of soft resistance in user experience professionals' values work. Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2), pp. 1–28.
- Genres of organizational communication: a structurational approach to studying communication and media. Academy of Management Review 17 (2), pp. 299–326.
- Stakeholder tokens: a constructive method for value sensitive design stakeholder analysis. Ethics and Information Technology, pp. 1–5.
Appendix A Toolkit Listing and Analysis
Ethics Kit, http://ethicskit.org/tools.html
Model Cards, https://modelcards.withgoogle.com/about
AI Fairness 360, https://aif360.mybluemix.net/
Ethics & Algorithms Toolkit, https://ethicstoolkit.ai/
Consequence Scanning Kit, https://www.doteveryone.org.uk/project/consequence-scanning/
AI Ethics Cards, https://www.ideo.com/post/ai-ethics-collaborative-activities-for-designers
What If Tool, https://pair-code.github.io/what-if-tool/
Digital Impact Toolkit, https://digitalimpact.io/toolkit/
Deon Ethics Checklist, http://deon.drivendata.org/
Design Ethically Toolkit, https://www.designethically.com/toolkit
Weights and Biases, https://wandb.ai/site
Responsible AI in Consumer Enterprise, https://static1.squarespace.com/static/5d387c126be524000116bbdb/t/5d77e37092c6df3a5151c866/1568138185862/Ethics-of-artificial-intelligence.pdf
Algorithmic Equity Toolkit (AEKit), https://www.aclu-wa.org/AEKit
LinkedIn Fairness Toolkit (LiFT), https://github.com/linkedin/LiFT, https://engineering.linkedin.com/blog/2020/lift-addressing-bias-in-large-scale-ai-applications
Audit AI, https://github.com/pymetrics/audit-ai
TensorFlow Fairness Indicators, https://github.com/tensorflow/fairness-indicators
Judgment Call, https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/judgmentcall
SageMaker Clarify, https://sagemaker-examples.readthedocs.io/en/latest/sagemaker_processing/fairness_and_explainability/fairness_and_explainability.html
NLP CheckList, https://github.com/marcotcr/checklist
HAX Workbook and Playbook, https://www.microsoft.com/en-us/haxtoolkit/workbook/
Community Jury, https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/community-jury/
Harms Modeling, https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/harms-modeling/
Algorithmic Accountability Policy Toolkit, https://ainowinstitute.org/aap-toolkit.pdf
| ID | Toolkit Name | Toolkit Author(s) | Author Types | Audience(s) | Form Factor |
|---|---|---|---|---|---|
| 1 | Ethics Kit | Open Data Institute, Common Good, Co-op Digital, Hyper Island, Plot | Non-Profit; Design Agency | Designers | Design Exercises, Worksheets |
| 2 | Model Cards | Google | Technology Company | Developers, Policymakers, Analysts, Advocates, Users | Examples, Webpage |
| 3 | AI Fairness 360 | IBM | Technology Company | Data Scientists | Open Source Code, Documentation, Code Examples, Tutorials |
| 4 | InterpretML | Microsoft | Technology Company | Data Scientists | Open Source Code, Documentation, Code Examples |
| 5 | Fairlearn | Miro Dudik (Microsoft Research), Microsoft Research, Open Source Community | Technology Company; Open Source Community | Data Scientists | Open Source Code, Documentation, User Guide, Code Examples |
| 6 | Aequitas | University of Chicago Center for Data Science and Public Policy | University | ML Developers, Analysts, Policymakers | Open Source Code, Web Audit Tool, Example, Documentation |
| 7 | Ethics & Algorithms Toolkit | Johns Hopkins Center for Government Excellence (GovEx), City and County of San Francisco, Harvard DataSmart, Data Community DC | University; Government Agency; Non-Profit | Government Leaders, Stakeholders, Data Analysts, Information Technology Professionals, Vendor Representatives | Guide, Worksheets |
| 8 | Consequence Scanning Kit | Doteveryone | Non-Profit | Team Members, User Advocates, Tech and Business Specialists, Business or External Stakeholders | Manual, Exercises |
| 9 | AI Ethics Cards | IDEO | Design Agency | Designers | Cards |
| 10 | What If Tool | People + AI Research Team (Google) | Technology Company | Data Scientists | Open Source Code, Tutorials, Documentation, Examples |
| 11 | Digital Impact Toolkit | Stanford Digital Civil Society Lab | University | Civil Society Organizations | Checklists, Worksheets, Reading Materials |
| 12 | Deon Ethics Checklist | DrivenData | Non-Profit | Developers | Checklist, Open Source Code, Documentation |
| 13 | Design Ethically Toolkit | Kat Zhou | Tech Worker | Designers | Exercises, Worksheets |
| 14 | Lime | Marco Ribeiro, Sameer Singh, Carlos Guestrin (University of Washington); Open Source Community | University; Open Source Community | Data Scientists | Open Source Code, Documentation |
| 15 | Weights and Biases | Weights and Biases | Technology Company | Developers | SaaS Product, Articles |
| 16 | Responsible AI in Consumer Enterprise | integrate.ai | Technology Company | Organizations, Executive Leadership, Implementation Teams | Guide, Framework |
| 17 | Algorithmic Equity Toolkit (AEKit) | ACLU of Washington, Critical Platform Studies Group, Tech Fairness Coalition | University; Non-Profit | Community Groups | Activities |
| 18 | LinkedIn Fairness Toolkit (LiFT) | LinkedIn | Technology Company | Machine Learning Developers | Open Source Code, Documentation, Blog |
| 19 | Audit AI | Pymetrics | Technology Company | Data Scientists | Open Source Code, Documentation, Examples |
| 20 | TensorFlow Fairness Indicators | Google | Technology Company | "Teams" | Open Source Code, Documentation, Examples |
| 21 | Judgment Call | Microsoft Research | Technology Company | Technology Builders, Managers, Designers | Cards, Activities |
| 22 | SageMaker Clarify | Amazon | Technology Company | "AWS customers" | Proprietary Code, Documentation, Example |
| 23 | NLP CheckList | Marco Tulio Ribeiro (Microsoft Research), Tongshuang Wu (University of Washington), Carlos Guestrin (University of Washington), Sameer Singh (UC Irvine) | University; Technology Company | Team | Open Source Code, Documentation, Examples |
| 24 | HAX Workbook and Playbook | Microsoft Research | Technology Company | UX, AI, Project Management, and Engineering Teams | Guide, Workbook/Worksheets, Examples, Guidelines |
| 25 | Community Jury | Microsoft | Technology Company | Product Team | Activity |
| 26 | Harms Modeling | Microsoft | Technology Company | Technology Builders | Activity |
| 27 | Algorithmic Accountability Policy Toolkit | AI Now Institute | Non-Profit | Legal and Policy Advocates | PDF Guide |