
Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.





1. Introduction

With the increased access to artificial intelligence (AI) development tools and Internet-sourced datasets, corporations, nonprofits and governments are deploying AI systems at an unprecedented pace, often in massive-scale production systems impacting millions if not billions of users (Al-Jarrah et al., 2015). In the midst of this widespread deployment, however, come valid concerns about the effectiveness of these automated systems for the full scope of users, and especially a critique of systems that have the propensity to replicate, reinforce or amplify harmful existing social biases (Buolamwini and Gebru, 2018; Raji and Buolamwini, 2019; Kiritchenko and Mohammad, 2018). External audits are designed to identify these risks from outside the system and serve as accountability measures for these deployed models. However, such audits tend to be conducted after model deployment, when the system has already negatively impacted users (Green and Chen, 2019; Moy, 2019).

Figure 1. High-level overview of the context of an internal algorithmic audit. The audit is conducted during product development and prior to launch. The audit team leads the product team, management and other stakeholders in contributing to the audit. Policies and principles, including internal and external ethical expectations, also feed into the audit to set the standard for performance.

In this paper, we present internal algorithmic audits as a mechanism to check that the engineering processes involved in AI system creation and deployment meet declared ethical expectations and standards, such as organizational AI principles. The audit process is necessarily boring, slow, meticulous and methodical—antithetical to the typical rapid development pace for AI technology. However, it is critical to slow down as algorithms continue to be deployed in increasingly high-stakes domains. By considering historical examples across industries, we make the case that such audits can be leveraged to anticipate potential negative consequences before they occur, in addition to providing decision support to design mitigations, more clearly defining and monitoring potentially adverse outcomes, and anticipating harmful feedback loops and system-level risks (Ensign et al., 2017). Executed by a dedicated team of organization employees, internal audits operate within the product development context and can inform the ultimate decision to abandon the development of AI technology when the risks outweigh the benefits (see Figure 1).

Inspired by the practices and artifacts of several disciplines, we go further to develop SMACTR, a defined internal audit framework meant to guide practical implementations. Our framework strives to establish interdisciplinarity as a default in audit and engineering processes while providing the much-needed structure to support the conscious development of AI systems.

2. Governance, Accountability and Audits

We use accountability to mean the state of being responsible or answerable for a system, its behavior and its potential impacts (Kohli et al., 2018). Although algorithms themselves cannot be held accountable as they are not moral or legal agents (Bryson et al., 2017), the organizations designing and deploying algorithms can through governance structures. Proposed standard ISO 37000 defines this structure as "the system by which the whole organization is directed, controlled and held accountable to achieve its core purpose over the long term." If the responsible development of artificial intelligence is a core purpose of organizations creating AI, then a governance system by which the whole organization is held accountable should be established.

In environmental studies, Lynch and Veland (Lynch and Veland, 2018) introduced the concept of urgent governance, distinguishing between auditing for system reliability vs societal harm. For example, a power plant can be consistently productive while causing harm to the environment through pollution (Leveson, 2011). Similarly, an AI system can be found technically reliable and functional through a traditional engineering quality assurance pipeline without meeting declared ethical expectations. A separate governance structure is necessary for the evaluation of these systems for ethical compliance. This evaluation can be embedded in the established quality assurance workflow but serves a different purpose, evaluating and optimizing for a different goal centered on social benefits and values rather than typical performance metrics such as accuracy or profit (Kroll et al., 2016). Although concerns about reliability are related, and although practices for testing production AI systems are established for industry practitioners (Breck et al., 2017), issues involving social impact, downstream effects in critical domains, and ethics and fairness concerns are not typically covered by concepts such as technical debt and reliability engineering.

2.1. What is an audit?

Audits are tools for interrogating complex processes, often to determine whether they comply with company policy, industry standards or regulations (Liu, 2012). The IEEE standard for software development defines an audit as an independent evaluation of conformance of software products and processes to applicable regulations, standards, guidelines, plans, specifications, and procedures (IEEE, 2008). Building from methods of external auditing in investigative journalism and research (Diakopoulos, 2014; Sandvig et al., 2014; Raji and Buolamwini, 2019), algorithmic auditing has started to become similar in spirit to the well-established practice of bug bounties, where external hackers are paid for finding vulnerabilities and bugs in released software (Maillart et al., 2017). These audits, modeled after intervention strategies in information security and finance (Raji and Buolamwini, 2019), have significantly increased public awareness of algorithmic accountability.

An external audit of automated facial analysis systems exposed high disparities in error rates among darker-skinned women and lighter-skinned men (Buolamwini and Gebru, 2018), showing how structural racism and sexism can be encoded and reinforced through AI systems. The same audit reveals interaction failures, in which the production and deployment of an AI system interacts with unjust social structures to contribute to biased predictions, as Safiya Noble has described (Noble, 2018). Such findings demonstrate the need for companies to understand the social and power dynamics of their deployed systems’ environments, and to record such insights to manage their products’ impact.

2.2. AI Principles as Customized Ethical Standards

According to Mittelstadt (Mittelstadt, 2019), at least 63 public-private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. Important values such as ensuring AI technologies are subject to human direction and control, and avoiding the creation or reinforcement of unfair bias, have been included in many organizations’ ethical charters. However, the AI industry lacks proven methods to translate principles into practice (Mittelstadt, 2019), and AI principles have been criticized for being vague and providing little to no means of accountability (Whittlestone et al., 2019; Greene et al., 2019). Nevertheless, such principles are becoming common methods to define the ethical priorities of an organization and thus the operational goals for which to aim (Zeng et al., 2018; Jobin et al., 2019). Thus, in the absence of more formalized and universal standards, they can be used as a North Star to guide the evaluation of the development lifecycle, and internal audits can investigate alignment with declared AI principles prior to model deployment. We propose a framing of risk analyses centered on the failure to achieve AI principle objectives, outlining an audit practice that can begin translating ethical principles into practice.

2.3. Audit Integrity and Procedural Justice

Audit results are at times approached with skepticism since they are reliant on and vulnerable to human judgment. To establish the integrity of the audit itself as an independently valid result, the audit must adhere to the proper execution of an established audit process. This is a repeatedly observed phenomenon in tax compliance auditing, where several international surveys of tax compliance demonstrate that a fixed and vetted tax audit methodology is one of the most effective strategies to convince companies to respect audit results and pay their full taxes (Faizal et al., 2017; Murphy, 2003).

Procedural justice implies the legitimacy of an outcome due to the administration of a fair and thorough process. Establishing procedural justice to increase compliance is thus a motivating factor for establishing common and robust frameworks through which independent audits can demonstrate adherence to standards. In addition, audit integrity is best established when auditors themselves live up to an ethical standard, vetted by adherence to an expected code of conduct or norm in how the audit is to be conducted. In finance, for example, it became clear that any sense of dishonesty or non-transparency in audit methodology would lead audit targets to dismiss rather than act on results (Satava et al., 2006).

2.4. The Internal Audit

External auditing, in which companies are accountable to a third party (Raji and Buolamwini, 2019), is fundamentally limited by lack of access to internal processes at the audited organizations. Although external audits conducted by credible experts are less affected by organization-internal considerations, external auditors can only access model outputs, for example by using an API (Sandvig et al., 2014). Auditors do not have access to intermediate models or training data, which are often protected as trade secrets (Burrell, 2016). Internal auditors’ direct access to systems can thus help extend traditional external auditing paradigms by incorporating additional information typically unavailable for external evaluations to reveal previously unidentifiable risks.

The goals of an internal audit are similar to quality assurance, with the objective to enrich, update or validate the risk analysis for product deployment. Internal audits aim to evaluate how well the product candidate, once in real-world operation, will fit the expected system behaviour encoded in standards.

A modification in objective from a post-deployment audit to pre-deployment audit applied throughout the development process enables proactive ethical intervention methods, rather than simply informing reactive measures only implementable after deployment, as is the case with a purely external approach. Because there is an increased level of system access in an internal audit, identified gaps in performance or processes can be mapped to sociotechnical considerations that should be addressed through joint efforts with product teams. As the audit results can lead to ambiguous conclusions, it is critical to identify key stakeholders and decision makers who can drive appropriate responses to audit outcomes.

Additionally, with an internal audit, because auditors are employees of the organization and communicate their findings primarily to an internal audience, there is opportunity to leverage these audit outcomes for recommendations of structural organizational changes needed to make the entire engineering development process auditable and aligned with ethical standards. Ultimately, internal audits complement external accountability, generating artifacts or transparent information (Shah, 2018) that third parties can use for external auditing, or even end-user communication. Internal audits can thus enable review and scrutiny from additional stakeholders, by enforcing transparency through stricter reporting requirements.

3. Lessons From Auditing Practices in Other Industries

Improving the governance of artificial intelligence development is intended to reduce the risks posed by new technology. While not without faults, safety-critical and regulated industries such as aerospace and medicine have long traditions of auditable processes and design controls that have dramatically improved safety (Teixeira et al., 2013; Verma et al., 2010).

3.1. Aerospace

Globally, there is one commercial airline accident per two million flights (Rodrigues and Cusick, 2011). This remarkable safety record is the result of a joint and concerted effort over many years by aircraft and engine manufacturers, airlines, governments, regulatory bodies, and other industry stakeholders (Rodrigues and Cusick, 2011). As modern avionic systems have increased in size and complexity (for example, the Boeing 787 software is estimated at 13 million lines of code (Judas and Prokop, 2011)), the standard 1-in-1,000,000,000 per use hour maximum failure probability for critical aerospace systems remains an underappreciated engineering marvel (Driscoll et al., 2003).

However, as the recent Boeing 737 MAX accidents indicate, safety is never finished, and the qualitative impact of failures cannot be ignored: even one accident can impact the lives of many and is rightfully acknowledged as a catastrophic tragedy. Complex systems tend to drift toward unsafe conditions unless constant vigilance is maintained (Leveson, 2011). In complex systems, it is the sum of the tiny probabilities of individual events that matters; if this sum grows without bound, the probability of catastrophe goes to one. The Borel-Cantelli Lemmas formalize this statistical phenomenon (Chung and Erdös, 1952), which implies that we can never be satisfied with existing safety standards. Additionally, standards can be compromised if competing business interests take precedence. Because the non-zero risk of failure grows over time, without continuous active measures being developed to mitigate risk, disaster becomes inevitable (Haigh, 2012).
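For reference, the statement at issue is the second Borel-Cantelli lemma, reproduced below in its standard textbook form rather than from the audit literature itself; the events A_n can be read, for example, as per-flight or per-use-hour failure events.

    % Second Borel-Cantelli lemma (standard statement): if independent events
    % A_1, A_2, ... have probabilities that sum to infinity, then with
    % probability one infinitely many of them occur.
    \[
      \sum_{n=1}^{\infty} \Pr(A_n) = \infty
      \quad \text{and} \quad A_1, A_2, \ldots \text{ independent}
      \;\Longrightarrow\;
      \Pr\!\left(\limsup_{n\to\infty} A_n\right) = 1 .
    \]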

3.1.1. Design checklists

Checklists are simple tools for assisting designers in having a more informed view of important questions, edge cases and failures (Hall and Driscoll, 2014). Checklists are widely used in aerospace for their proven ability to improve safety and designs. There are several cautions about using checklists during the development of complex software, such as the risk of blind application in which the broader context and nuanced, interrelated concerns are not considered. A checklist can nevertheless be beneficial. It is good practice to avoid yes/no questions to reduce the risk that the checklist becomes a box-ticking activity, for example by asking designers and engineers to describe their processes for assessing ethical risk. Checklist use should also be related to real-world failures and higher-level system hazards.

3.1.2. Traceability

Another key concept from aerospace and safety-critical software engineering is traceability, which concerns the relationships between product requirements, their sources and the system design. This practice is familiar to the software industry through requirements engineering (Bennaceur et al., 2019). However, in AI research, it can often be difficult to trace the provenance of large datasets or to interpret the meaning of model weights, to say nothing of the challenge of understanding how these relate to system requirements. Additionally, as the complexity of sociotechnical systems rapidly increases, and as the speed and complexity of large-scale artificial intelligence systems increase, new approaches are necessary to understand risk (Leveson, 2011).
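As a rough illustration of what machine-readable traceability could look like for an AI system, the sketch below links a requirement to its source and to the dataset, model and evidence artifacts associated with it; the record structure and all identifiers are hypothetical and meant only to make the concept concrete.

    from dataclasses import dataclass, field

    @dataclass
    class TraceRecord:
        """Hypothetical record tracing a requirement to its sources and artifacts."""
        requirement_id: str                                   # e.g. a requirement from the PRD
        source: str                                           # AI principle, regulation or product goal
        datasets: list[str] = field(default_factory=list)     # dataset versions used to satisfy it
        models: list[str] = field(default_factory=list)       # model checkpoints implementing it
        evidence: list[str] = field(default_factory=list)     # tests or reports verifying it

    # Illustrative example: a fairness requirement traced to a dataset, a model and a report.
    record = TraceRecord(
        requirement_id="REQ-042: comparable error rates across subgroups",
        source="AI principle: Justice, Fairness & Non-Discrimination",
        datasets=["celeba-v1.2"],
        models=["smile-detector-2020-01"],
        evidence=["subgroup-error-analysis-report"],
    )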

3.1.3. Failure Modes and Effects Analysis

Finally, a standard tool in safety engineering is the Failure Modes and Effects Analysis (FMEA), a methodical and systematic risk management approach that examines a proposed design or technology for foreseeable failures (Stamatis, 2003). The main purpose of an FMEA is to define, identify and eliminate potential failures or problems in different products, designs, systems and services. Prior to conducting an FMEA, known issues with a proposed technology should be thoroughly mapped through a literature review and by collecting and documenting the experiences of the product designers, engineers and managers. Further, the risk exercise is based on known issues with relevant datasets and models, information that can be gathered from interviews and from extant technical documentation.

FMEAs can help designers improve or upgrade their products to reduce risk of failure. They can also help decision makers formulate corresponding preventive measures or improve reactive strategies in the event of post-launch failure. FMEAs are widely used in many fields including aerospace, chemical engineering, design, mechanical engineering and medical devices. To our knowledge, however, the FMEA method has not been applied to examine ethical risks in production-scale artificial intelligence models or products.
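To make the mechanics concrete, the sketch below shows one conventional way an FMEA table is prioritized: each failure mode is scored for severity, occurrence and detectability, and the product of the three (the risk priority number) ranks what to address first. The failure modes and 1-10 scores are illustrative assumptions, not findings.

    # Minimal FMEA-style sketch. Scores use an assumed 1-10 scale, where higher means
    # more severe, more frequent, or harder to detect; all entries are illustrative.
    failure_modes = [
        {"mode": "Higher error rate for underrepresented subgroups", "severity": 8, "occurrence": 6, "detection": 5},
        {"mode": "Model applied outside its intended use",           "severity": 9, "occurrence": 4, "detection": 7},
        {"mode": "Training data drifts from deployment population",  "severity": 6, "occurrence": 7, "detection": 4},
    ]

    for fm in failure_modes:
        # Risk priority number: the classical FMEA prioritization heuristic.
        fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

    # Failure modes with the highest RPN are investigated and mitigated first.
    for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
        print(f'{fm["rpn"]:>4}  {fm["mode"]}')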

3.2. Medical devices

Internal and external quality assurance audits are a daily occurrence in the pharmaceutical and medical device industry. Audit document trails are as important as the drug products and devices themselves. The history of quality assurance audits in medical devices dates from several medical disasters in which devices, such as infusion pumps and autoinjectors, failed or were used improperly (Vanderveen, 2005).

3.2.1. Design Controls

For medical devices, the stages of product development are strictly defined. In fact, federal law (Code of Federal Regulations Title 21) mandates that medical-device makers establish and maintain design control procedures to ensure that design requirements are met and designs and development processes are auditable. Practically speaking, design controls are a documented method of ensuring that the end product matches the intended use, and that potential risks from using the technology have been anticipated and mitigated (Teixeira et al., 2013). The purpose is to ensure that anticipated risks related to the use of technology are driven down to the lowest degree that is reasonably practicable.

3.2.2. Intended Use

Medical-device makers must maintain procedures to ensure that design requirements meet the intended use of the device. The intended use of a device (or, increasingly in medicine, an algorithm—see (Price and Nicholson, 2017) for more) determines the level of design control required: for example, a tongue depressor (a simple piece of wood) is the lowest class of risk (Class I), while a deep brain implant would be the highest (Class III). The intended use of a tongue depressor could be to displace the tongue to facilitate examination of the surrounding organs and tissues, differentiating a tongue depressor from a Popsicle stick. This may be important when considering an algorithm that can be used to identify cats or to identify tumors; depending on its intended use, the same algorithm might have drastically different risk profiles, and additional risks arise from unintended uses of the technology.

3.2.3. Design History File

For products classified as medical devices, at every stage of the development process, device makers must document the design input, output, review, verification, validation, transfer and changes—the design control process (section 3.2.1). Evidence that medical device designers and manufacturers have followed design controls must be kept in a design history file (DHF), which must be an accurate representation and documentation of the product and its development process. Included in the DHF is an extensive risk assessment and hazard analysis, which must be continuously updated as new risks are discovered. Companies also proactively maintain post-market surveillance for any issues that may arise with safety of a medical device.

3.2.4. Structural Vulnerability

In medicine there is a deep acknowledgement of the social determinants of healthcare access and effectiveness, and an awareness of the social biases influencing the dynamics of prescriptions and treatments. This widespread acknowledgement led to the framework of operationalizing structural vulnerability in healthcare contexts, and effectively the design of an assessment tool to record the anticipated social conditions surrounding a particular remedy or medical recommendation (Quesada et al., 2011). Artificial intelligence models are equally subject to social influence and social impact, and subjecting them to such assessments on more holistic, population- or environment-based considerations is relevant to algorithmic auditing.

3.3. Finance

As automated accounting systems started to appear in the 1950s, corporate auditors continued to rely on manual procedures to audit around the computer. In the 1970s, the Equity Funding Corporation scandal and the passage of the Foreign Corrupt Practices Act spurred companies to more thoroughly integrate internal controls throughout their accounting systems. This heightened the need to audit these systems directly. The 2002 Sarbanes-Oxley Act introduced sweeping changes to the profession in demanding greater focus on financial reporting and fraud detection (Byrnes et al., 2018).

Financial auditing had to play catch-up as the complexity and automation of financial business practices became too unwieldy to manage manually. Stakeholders in large companies and government regulators desired a way to hold companies accountable. Concerns among regulators and shareholders that the managers in large financial firms would squander profits from newly created financial instruments prompted the development of financial audits (Styhre, 2015).

Additionally, as financial transactions and markets became more automated, abstract and opaque, threats to social and economic values were answered increasingly with audits. But financial auditing lagged behind the process of technology-enabled financialization of markets and firms.

3.3.1. Audit Infrastructure

In general, internal financial audits seek assurance that the organization has a formal governance process that is operating as intended: values and goals are established and communicated, the accomplishment of goals is monitored, accountability is ensured and values are preserved. Further, internal audits seek to find out whether significant risks within the organization are being managed and controlled to an acceptable level (Soh and Martinov-Bennie, 2011).

Internal financial auditors typically have unfettered access to necessary information, people, records and outsourced operations across the organization. IIA Performance Standard 2300, Performing the Engagement (Institute of Internal Auditors Research Foundation, 2007), states that internal auditors should identify, analyze, evaluate and record sufficient information to achieve the audit objectives. The head of internal audit determines how internal auditors carry out their work and the level of evidence required to support their conclusions.

3.4. Discussion and Challenges

The lessons from other industries above are a useful guide toward building internal accountability to society as a stakeholder. Yet, there are many novel and unique aspects of artificial intelligence development that present urgent research challenges to overcome.

Current software development practice in general, and artificial intelligence development in particular, does not typically follow the waterfall or verification-and-validation approach (Cusumano and Smith, 1995). These approaches are still used, in combination with agile methods, in the industries mentioned above because they are much more documentation-oriented, auditable and requirements-driven. Agile artificial intelligence development is much faster and more iterative, and thus presents a challenge to auditability. However, applying agile methodologies to internal audits themselves is a current topic of research in the internal audit profession.

Most internal audit functions outside of heavily regulated industries tend to take a risk-based approach. They work with product teams to ask "what could go wrong" at each step of a process and use that to build a risk register (Patterson and Neailey, 2002). This allows risks to rise to the surface in a way that is informed by the people who know these processes and systems the best. Internal audits can also leverage relevant experts from within the company to facilitate such discussions and provide additional insight on potential risks (Bing et al., 2005).

Large-scale production AI systems are extraordinarily complex, and a critical line of future research relates to addressing the interaction of highly complex coupled sociotechnical systems. Moreover, there is a dynamic complex interaction between users as sources of data, data collection, and model training and updating. Additionally, governance processes based solely on risk have been criticized for being unable to anticipate the most profound impacts from technological innovation, such as the financial crisis in 2008, in which big data and algorithms played a large role (Muniesa et al., 2013; Noble, 2018; O’neil, 2016).

With artificial intelligence systems it can be difficult to trace model output back to requirements because these may not be explicitly documented, and issues may only become apparent once systems are released. From an ethical and moral perspective, however, it is incumbent on producers of artificial intelligence systems to anticipate ethics-related failures before launch. As (Parker, 2012) and (Holstein et al., 2018) point out, the design, prototyping and maintenance of AI systems raise many unique challenges not commonly faced with other kinds of intelligent systems or computing systems more broadly. For example, data entanglement results from the fact that artificial intelligence is a tool that mixes data sources together. As Sculley et al. point out, artificial intelligence models create entanglement and make the isolation of improvements effectively impossible (Sculley et al., 2014), a property they call Changing Anything Changes Everything. We suggest that by having explicit documentation about the purpose, data and model space, potential hazards could be identified earlier in the development process.

Selbst and Barocas argue that one must seek explanations of the process behind a model‘s development, not just explanations of the model itself (Selbst and Barocas, 2018). As a relatively young community focused on fairness, accountability, and transparency in AI, we have some indication of the system culture requirements needed to normalize, for example, an adequately thorough documentation procedure and guidelines (Gebru et al., 2018; Mitchell et al., 2019). Still, we lack the formalization of a standard model development template or practice, or process guidelines for when and in which contexts it is appropriate to implement certain recommendations. In these cases, internal auditors can work with engineering teams to construct the missing documentation to assess practices against the scope of the audit. Improving documentation can then be a remediation for future work.

Also, as AI is at times considered a general purpose technology with multiple and dual uses (Trajtenberg, 2018), the lack of reliable standardization poses significant challenges to governance efforts. This challenge is compounded by increasing customization and variability of what an AI product development lifecycle looks like depending on the anticipated context of deployment or industry.

We thus combine learnings from prior practice in adjacent industries while recognizing the uniqueness of the commercial AI industry to identify key opportunities for internal auditing in our specific context. We do so in a way that is appropriate to the requirements of an AI system.

Figure 2. Overview of Internal Audit Framework. Gray indicates a process, and the colored sections represent documents. Documents in orange are produced by the auditors, blue documents are produced by the engineering and product teams and green outputs are jointly developed.

4. SMACTR: An internal audit framework

We now outline the components of an initial internal audit framework, which can be framed as encompassing five distinct stages—Scoping, Mapping, Artifact Collection, Testing and Reflection (SMACTR)—all of which have their own set of documentation requirements and account for a different level of the analysis of a system. Figure 2 illustrates the full set of artifacts recommended for each stage.
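As a compact summary of Figure 2, the mapping below lists the auditor-facing artifacts recommended at each stage, as described in the remainder of this section; it is a reading aid rather than an exhaustive or normative inventory.

    # SMACTR stages and the key artifacts described in Sections 4.2-4.6.
    SMACTR_ARTIFACTS = {
        "Scoping": ["ethical review of system use case", "social impact assessment"],
        "Mapping": ["stakeholder map", "ethnographic field study",
                    "engineering system overview", "design history file review"],
        "Artifact Collection": ["audit checklist", "model cards", "datasheets"],
        "Testing": ["adversarial testing results", "ethical risk analysis chart"],
        "Reflection": ["algorithmic use-related risk analysis (FMEA)",
                       "remediation and risk mitigation plan",
                       "algorithmic design history file"],
    }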

To illustrate the utility of this framework, we contextualize our descriptions with the hypothetical example of Company X Inc., a large multinational software engineering consulting firm specializing in developing custom AI solutions for a diverse range of clients. We imagine this company has designated five AI principles, paraphrased from the most commonly identified AI principles in a recent survey (Jobin et al., 2019): "Transparency", "Justice, Fairness & Non-Discrimination", "Safety & Non-Maleficence", "Responsibility & Accountability" and "Privacy". We also assume that the corporate structure of Company X is typical of any technical consultancy, and design our stakeholder map accordingly.

Company X has decided to pilot the SMACTR internal audit framework to fulfill a corporate mandate towards responsible innovation practice, accommodate external accountability and operationalize internal consistency with respect to its identified AI principles. The fictional company thus pilots the audit framework on two hypothetical client projects.

The first (hypothetical) client wishes to develop a child abuse screening tool similar to that of the real cases extensively studied and reported on (Chouldechova et al., 2018; Cuccaro-Alamin et al., 2017; Goldhaber-Fiebert and Prince, 2019; Keddell, 2019; Courtland, 2018; Eubanks, 2018). This complex case intersects heavily with applications in high-risk scenarios with dire consequences. This scenario demonstrates how, for algorithms interfacing with high-risk contexts, a structured framework can allow for the careful consideration of all the possibilities and risks with taking on the project, and the extent of its understood social impact.

The second invented client is Happy-Go-Lucky, Inc., an imagined photo service company looking for a smile detection algorithm to automatically trigger the cameras in their installed physical photo booths. In this scenario, the worst case is a lack of customer satisfaction—the stakes are low and the situation seems relatively straightforward. This scenario demonstrates how in even seemingly simple and benign cases, ethical consideration of system deployment can reveal underlying issues to be addressed prior to deployment, especially when we contextualize the model within the setting of the product and deployment environment.

An end-to-end worked example of the audit framework is available as supplementary material to this paper for the Happy-Go-Lucky, Inc. client case. This includes demonstrative templates of all recommended documentation, with the exception of specific process files such as any experimental results, interview transcripts, a design history file and the summary report. Workable templates can also be accessed as an online resource here.

4.1. The Governance Process

To design our audit procedure, we suggest complementing formal risk assessment methodologies with ideas from responsible innovation, which stresses four key dimensions: anticipation, reflexivity, inclusion and responsiveness (Stilgoe et al., 2013), as well as with system-theoretic concepts that help grapple with the increasing complexity and coupling of artificial intelligence systems with the external world (Leveson, 2011). Risk-based assessments can be limited in their ability to capture social and ethical stakes, and they should be complemented by anticipatory questions such as, what if…?. The aim is to increase ethical foresight through systematic thinking about the larger sociotechnical system in which a product will be deployed (Mittelstadt and Floridi, 2016). There are also intersections between this framework and effective product development theory (Brown and Eisenhardt, 1995), as many of the components of audit design refocus the product development process to prioritize the user and their ultimate well-being, resulting in a more effective product performance outcome.

At a minimum, the internal audit process should enable critical reflections on the potential impact of a system, serving as internal education and training on ethical awareness in addition to leaving what we refer to as a transparency trail of documentation at each step of the development cycle (see Figure 2). To shift the process into an actionable mechanism for accountability, we present a validated and transparently outlined procedure that auditors can commit to. The thoroughness of our described process will hopefully engage the trust of audit targets to act on and acknowledge post-audit recommendations for engineering practices in alignment with prescribed AI principles.

This process primarily addresses how to conduct internal audits, providing guidance for those that have already deemed an audit necessary but would like to further define the scope and execution details. Though not covered here, an equally important process is determining what systems to audit and why. Each industry has a way to judge what requires a full audit, but that process is discretionary and dependent on a range of contextual factors pertinent to the industry, the organization, audit team resourcing, and the case at hand. Risk prioritization and the necessary variance in scrutiny is a separately interesting and rich research topic on its own. The process outlined below can be applied in full or in a lighter-weight formulation, depending on the level of assessment desired.

4.2. The Scoping Stage

For both clients, a product or request document is provided to specify the requirements and expectations of the product or feature. The goal of the scoping stage is to clarify the objective of the audit by reviewing the motivations and intended impact of the investigated system, and confirming the principles and values meant to guide product development. This is the stage in which the risk analysis begins by mapping out intended use cases and identifying analogous deployments either within the organization or from competitors or adjacent industries. The goal is to anticipate areas to investigate as potential sources of harm and social impact. At this stage, interaction with the system should be minimal.

In the case of the smile-triggered photo booth, a smile detection model is required. This is a simple product with a narrow scope of considerations, as the potential for harm does not go much beyond inconvenience, customer exclusion or dissatisfaction. For the child abuse detection product, there are many more approaches to solving the issue and many more options for how the model interacts with the broader system. The use case itself involves many ethical considerations, as an ineffective model may result in serious consequences such as death or family separation.

The key artifacts developed by the auditors from this stage include an ethical review of the system use case and a social impact assessment. Pre-requisite documents from the product and engineering team should be a declaration or confirmation statement of ethical objectives, standards and AI principles. The product team should also provide a Product Requirements Document (PRD), or project proposal from the initial planning of the audited product.

4.2.1. Artifact: Ethical Review of System Use Case

When a potential AI system is in the development pipeline, it should be reviewed with a series of questions that first and foremost check to see, at a high level, whether the technology aligns with a set of ethical values or principles. This can take the form of an ethical review that considers the technology from a responsible innovation perspective by asking who is likely to be impacted and how.

Importantly, we stress standpoint diversity in this process. Algorithm development implicitly encodes developer assumptions that they may not be aware of, including ethical and political values. Thus it is not always possible for individual technology workers to identify or assess their own biases or faulty assumptions (Intemann, 2010). For this reason, a critical range of viewpoints is included in the review process. The essential inclusion of independent domain experts and marginalized groups in the ethical review process "has the potential to lead to more rigorous critical reflection because their experiences will often be precisely those that are most needed in identifying problematic background assumptions and revealing limitations with research questions, models, or methodologies" (Intemann, 2010). Another method to elicit implicit biases or motivated cognition (Kruglanski, 1996) is to ask people to reflect on their preliminary assessment and then ask whether they might have reason to regret the action later on. This can shed light on how our position in society biases our assumptions and ways of knowing (Dobbe et al., 2018).

An internal ethics review board that includes a diversity of voices should review proposed projects and document its views. Internal ethics review boards are common in biomedical research, and the purpose of these boards is to ensure that the rights, safety, and well-being of all human subjects involved in medical research are protected (of the World Medical Association and others, 2014). Similarly, the purpose of an ethics review board for AI systems includes safeguarding human rights, safety, and well-being of those potentially impacted.

4.2.2. Artifact: Social Impact Assessment

A social impact assessment should inform the ethical review. Social impact assessments are commonly defined as a method to analyze and mitigate the unintended social consequences, both positive and negative, that occur when a new development, program, or policy engages with human populations and communities (Vanclay, 2003). In it, we describe how the use of an artificial intelligence system might change people’s ways of life, their culture, their community, their political systems, their environment, their health and well-being, their personal and property rights, and their experiences (positive or negative) (Vanclay, 2003).

The social impact assessment includes two primary steps: an identification of the relevant social, economic, and cultural impacts and harms that an artificial intelligence system applied in context may create, and an assessment of the severity of those risks. The severity assessment examines the specific context of the use case to determine the degree to which potential harms may be amplified. It proceeds from the analysis of impacts and harms to give a sense of the relative severity of those harms and impacts depending on the sensitivity, constraints, and context of the use case.

4.3. The Mapping Stage

The mapping stage is not a step in which testing is actively done, but rather a review of what is already in place and the perspectives involved in the audited system. This is also the time to map internal stakeholders, identify key collaborators for the execution of the audit, and orchestrate the appropriate stakeholder buy-in required for execution. At this stage, the FMEA (Section 3.1.3) should begin and risks should be prioritized for later testing.

As Company X is a consultancy, this stage mainly requires identifying the stakeholders across the product and engineering teams anchored to this particular client project, and recording the nature of their involvement and contribution. This creates an internal record of individual accountability with respect to participation towards the final outcome, and enables tracing relevant contacts for future inquiry.

For the child abuse detection algorithm, the initial identification of failure modes reveals the high stakes of the application, and immediate threats to the "Safety & Non-Maleficence" principle. False positives overwhelm staff and may lead to the separation of families that could have recovered. False negatives may result in a dead or injured child that could have been rescued. For the smile detector, failures disproportionately impact those with alternative emotional expressions, such as people with autism, different cultural norms on the formality of smiling, or different expectations for the photograph; these users are then excluded from the product by design.

The key artifacts from this stage include a stakeholder map and collaborator contact list, a system map of the product development lifecycle, and the engineering system overview, especially in cases where multiple models inform the end product. Additionally, this stage includes a design history file review of all existing documentation of the development process or historical artifacts on past versions of the product. Finally, it includes a report or interview transcripts on key findings from internal ethnographic fieldwork involving the stakeholders and engineers.

4.3.1. Artifact: Stakeholder Map

This artifact should outline who was involved in the development of the audited system and who collaborated in the execution of the audit. Clarifying participant dynamics ensures a more transparent representation of the provided information, giving further context to the intended interpretation of the final audit report.

4.3.2. Artifact: Ethnographic Field Study

As Leveson points out, bottom-up decentralized decision making can lead to failures in complex sociotechnical systems (Leveson, 2011). Each local decision may be correct in the limited context in which it was made, but can lead to problems when these decisions and organizational behaviors interact. With modern large-scale artificial intelligence projects and API development, it can be difficult to gain a shared understanding at the right level of system description to understand how local decisions, such as the choice of dataset or model architecture, will impact final system behavior.

Therefore, an ethnography-inspired fieldwork methodology, based on how audits are conducted in other industries such as finance (Styhre, 2015) and healthcare (Rodríguez et al., 2014), is useful for gaining a deeper, qualitative understanding of the engineering and product development process. As in internal financial auditing, access to key people in the organization is important. This access involves semi-structured interviews with a range of individuals close to the development process, and documentation gathering to gain an understanding of possible gaps that need to be examined more closely.

Traditional metrics for artificial intelligence like loss may conceal fairness concerns, social impact risks or abstraction errors (Selbst et al., 2019). A key challenge is to assess how the numerical metrics specified in the design of an artificial intelligence system reflect or conform with these values. Metrics and measurement are important parts of the auditing process, but should not become aims and ends in themselves when weighing whether an algorithmic system under audit is ethically acceptable for release. Taking metrics measured in isolation risks recapitulating the abstraction error that (Selbst et al., 2019) point out: "To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error." What we consider data is already an interpretation, highly subjective and contested (Furner, 2016). Metrics must be understood in relation to the engineering context in which they were developed and the social context into which they will be deployed. During the interviews, auditors should pay attention to what falls outside the measurements and metrics, and render explicit the assumptions and values the metrics apprehend (Styhre, 2018). For example, the decision about whether to prioritize the false positive rate over the false negative rate (precision/recall) is a question about values, and cannot be answered without stating the values of the organization, team or even engineer within the given development context.

4.4. The Artifact Collection Stage

In this stage, we identify and collect all the required documentation from the product development process in order to prioritize opportunities for testing. Often this means a record of data and model dynamics, though application-based systems can include other product development artifacts such as design documents and reviews, systems architecture diagrams, and other implementation planning documents and retrospectives. Note that the collection of these artifacts advances adherence to the organization's declared AI principles of "Responsibility & Accountability" and "Transparency".

At times documentation can be distributed across different teams and stakeholders, or is missing altogether. In certain cases, the auditor is in a position to enforce retroactive documentation requirements on the product team, or craft documents themselves.

The model card for the smile detection model is the template model card from the original paper (Mitchell et al., 2019). A hypothetical datasheet for this system is filled out using studies on the CelebA dataset, with which the smile detector is built (Liu et al., 2015; Merler et al., 2019). In the model card, we identify potential for misuse if smiling is confused for positive affect. From the datasheet for the CelebA dataset, we see that although the provided binary gender labels seem balanced for this dataset (58.1% female, 42% male), other demographic details are quite skewed (77.8% aged 0-45, 22.1% aged over 46, and 14.2% darker-skinned, 85.8% lighter-skinned) (Merler et al., 2019).

The key artifact from auditors during this stage is the audit checklist, one method of verifying that all documentation pre-requisites are provided in order to commence the audit. Those pre-requisites can include model and data transparency documentation.

4.4.1. Artifact: Design Checklist

This checklist is a method of taking inventory of all the documentation expected to have been generated during the product development cycle. It verifies that the full scope of expected product processes has been carried out, and that the corresponding documentation required before the audit review can begin has been completed. This is also a procedural evaluation of the development process for the system, ensuring that appropriate actions were pursued throughout system development ahead of the evaluation of the final system outcome.

4.4.2. Artifacts: Datasheets and Model Cards

Two recent standards can be leveraged to create auditable documentation: model cards and datasheets (Mitchell et al., 2019; Gebru et al., 2018). Both model cards and datasheets are important tools toward making algorithmic development and the algorithms themselves more auditable, with the aim of anticipating risks and harms of using artificial intelligence systems. Ideally, these artifacts should be developed and/or collected by product stakeholders during the course of system development.

To clarify the intended use cases of artificial intelligence models and minimize their usage in contexts for which they are not well suited, Mitchell et al. recommend that released models be accompanied by documentation detailing their performance characteristics (Mitchell et al., 2019), called a model card. This should include information about how the model was built, what assumptions were made during development, and what type of model behavior might be experienced by different cultural, demographic or phenotypic groups. A model card is also extremely useful for internal development purposes to make clear to stakeholders details about trained models that are included in larger software pipelines, which are parts of internal organizational dynamics, which are then parts of larger sociotechnical logics and processes. A robust model card is key to documenting the intended use of the model as well as information about the evaluation data, model scope and risks, and what might be affecting model performance.

Model cards are intended to complement "Datasheets for Datasets" (Gebru et al., 2018). Datasheets for machine learning datasets are derived by analogy from the electronics hardware industry, where a datasheet for an electronics component describes its operating characteristics, test results, and recommended uses. A critical part of the datasheet covers the data collection process. This set of questions is intended to provide consumers of the dataset with the information they need to make informed decisions about using the dataset: What mechanisms or procedures were used to collect the data? Was any ethical review process conducted? Does the dataset relate to people?
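To indicate roughly what such documentation contains, the abridged structures below paraphrase the reporting categories discussed in (Mitchell et al., 2019) and (Gebru et al., 2018) for the hypothetical smile detector; they are illustrative sketches, not the official templates.

    # Abridged, illustrative documentation for the hypothetical smile detector.
    model_card = {
        "model_details": {"name": "smile-detector", "version": "0.1", "type": "image classifier"},
        "intended_use": "Trigger a photo-booth camera when subjects smile; not an affect detector.",
        "factors": ["skin tone", "age", "cultural norms around smiling"],
        "evaluation_data": "Held-out evaluation split, disaggregated by subgroup.",
        "ethical_considerations": "Face data is biometric; disparate error rates exclude users.",
        "caveats": "Smiling is treated as a photo pose, not as a measure of positive affect.",
    }

    datasheet_questions = {
        "collection_process": "What mechanisms or procedures were used to collect the data?",
        "ethical_review": "Was any ethical review process conducted?",
        "people": "Does the dataset relate to people?",
        "composition": "What demographic groups are represented, and in what proportions?",
    }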

This documentation feeds into the auditors’ assessment process.

4.5. The Testing Stage

This stage is where the majority of the auditing team’s testing activity is done—when the auditors execute a series of tests to gauge the compliance of the system with the prioritized ethical values of the organization. Auditors engage with the system in various ways, and produce a series of artifacts to demonstrate the performance of the analyzed system at the time of the audit. Additionally, auditors review the documentation collected from the previous stage and begin to make assessments of the likelihood of system failures to comply with declared principles.

High variability in approach is likely during this stage, as the tests that need to be executed change dramatically depending on organizational and system context. Testing should be based on a risk prioritization from the FMEA.

For the smile detector, we might employ counterfactual adversarial examples designed to confuse the model and find problematic failure modes derived from the FMEA. For the child abuse prediction model, we test performance on a selection of diverse user profiles. These profiles can also be varied on attributes that correlate with vulnerable groups, to test whether the model has learned biased associations with race or socioeconomic status (SES).

For the ethical risk analysis chart, we look at the principles and realize that there are immediate risks to the "Privacy" principle, with one case involving juvenile data, which is sensitive, and the other involving face data, a biometric. This is also when it becomes clear that in the smiling booth case, there is disproportionate performance for certain underrepresented user subgroups, thus jeopardizing the "Justice, Fairness & Non-Discrimination" principle.

The main artifacts from this stage of the auditing process are the results of tests such as adversarial probing of the system and an ethical risk analysis chart.

4.5.1. Artifact: Adversarial Testing

Adversarial testing is a common approach to finding vulnerabilities in both pre-release and post-launch technology, for example in privacy and security testing (Brubaker et al., 2014). In general, adversarial testing attempts to simulate what a hostile actor might do to gain access to a system, or to push the limits of the system into edge case or unstable behavior to elicit very-low probability but high-severity failures.

In this process, direct non-statistical testing uses tailored inputs to the model to see if they result in undesirable outputs. These inputs can be motivated by an intersectional analysis, for example where an ML system might produce unfair outputs based on demographic and phenotypic groups that might combine in non-additive ways to produce harm, or over time recapitulate harmful stereotypes or reinforce unjust social dynamics (for example, in the form of opportunity denial). This is distinct from adversarially attacking a model with human-imperceptible pixel manipulations to trick the model into misidentifying previously learned outputs (Gu and Rigazio, 2014), but these approaches can be complementary. This approach is more generally defined—encompassing a range of input options to try in an active attempt to fool the system and incite identified failure modes from the FMEA.
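A minimal sketch of such direct, non-statistical testing is shown below for the smile-detector example: otherwise-identical inputs are varied along a single sensitive attribute, and any change in the prediction is flagged as a candidate failure mode. The functions smile_model.predict and render_face are hypothetical stand-ins for the system under audit and for a controlled image-generation or sampling utility.

    # Sketch: probe whether predictions flip when only a sensitive attribute changes.
    # `smile_model` and `render_face` are hypothetical stand-ins, not real APIs.
    def counterfactual_probe(smile_model, render_face, base_profiles, attribute, values):
        failures = []
        for profile in base_profiles:
            predictions = {}
            for value in values:
                variant = dict(profile, **{attribute: value})   # change one attribute only
                predictions[value] = smile_model.predict(render_face(variant))
            if len(set(predictions.values())) > 1:
                # The prediction changed under a variation that should be irrelevant.
                failures.append({"profile": profile, "attribute": attribute, "predictions": predictions})
        return failures

    # Illustrative call: probe sensitivity to skin tone across a set of base profiles.
    # failures = counterfactual_probe(smile_model, render_face, profiles,
    #                                 attribute="skin_tone", values=["lighter", "darker"])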

Internal adversarial testing prior to launch can reveal unexpected product failures before they can impact the real world. Additionally, proactive adversarial testing of already-launched products can be a best practice for lifecycle management of released systems. The FMEA should be updated with these results, and the relative changes to risks assessed.

4.5.2. Artifact: Ethical Risk Analysis Chart

The ethical risk analysis chart considers the combination of the likelihood of a failure and the severity of a failure to define the importance of the risk. Highly likely and dangerous risks are considered the most high-priority threats. Each risk is assigned a rating of "high", "mid" or "low" depending on its combination of these two factors.

Failure likelihood is estimated by considering the occurrence of certain failures during the adversarial testing of the system, while the severity of the risk is identified in earlier stages, from informative processes such as the social impact assessment and ethnographic interviews.
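One plausible encoding of this chart as a decision rule is sketched below; the specific mapping from likelihood and severity to a rating is an assumption an audit team would need to agree on, not a prescribed standard.

    # Combine qualitative likelihood and severity estimates into a priority rating.
    LEVELS = {"low": 0, "mid": 1, "high": 2}

    def risk_rating(likelihood: str, severity: str) -> str:
        score = LEVELS[likelihood] + LEVELS[severity]
        if severity == "high" or score >= 3:
            return "high"   # dangerous and/or likely failures are top-priority threats
        if score == 2:
            return "mid"
        return "low"

    assert risk_rating("high", "high") == "high"
    assert risk_rating("low", "mid") == "low"    # unlikely, moderate-severity risk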

4.6. The Reflection Stage

This phase of the audit is the most reflective stage, when the results of the tests from the testing stage are analyzed in juxtaposition with the ethical expectations clarified during audit scoping. Auditors update and formalize the final risk analysis in the context of test results, outlining specific principles that may be jeopardized by the AI system upon deployment. This phase also reflects on product decisions and design recommendations that could be made following the audit results.

Additionally, key artifacts at this stage may include a mitigation plan or action plan, jointly developed by the audit and engineering teams, that outlines prioritized risks and test failures that the engineering team is in a position to mitigate for future deployments or for a future version of the audited system.

For the smile detection algorithm, the decision could be to train a new version of the model on more diverse data before considering deployment, and add more samples of underrepresented populations in CelebA to the training data. It could be decided that the use case does not necessarily define affect, but treats smiling as a favourable photo pose. Design choices for other parts of the product outside the model should be considered—for instance, an opt-in functionality with user permissions required on the screen before applying the model-controlled function, and the default being that the model-controlled trigger is disabled. There could also be an included disclaimer on privacy, assuring users of safe practices for face data storage and consent. Once these conditions are met, Company X could be confident to greenlight developing this product for the client.

For the child abuse detection model, the decision is more complex. Given the ethical considerations involved, the project may be stalled or even cancelled, requiring further inquiry into the ethics of the use case and into the team's capability to complete the mitigation plan required to deploy an algorithm in such a high-risk scenario.

4.6.1. Artifact: Algorithmic Use-related Risk Analysis and FMEA

The risk analysis should be informed by the social impact assessment and by known issues with similar models. Following Leveson's work on safety engineering (Leveson, 2011), we stress that careful attention must be paid to the distinction between the designers' mental model of the artificial intelligence system and the users' mental model. The designers' mental model is an idealization of the system before it is released; significant differences exist between this ideal and how the actual system will behave or be used once deployed. These differences may arise from many factors, such as distributional drift (Lehman, 2019), where the training and test set distributions differ from the real-world distribution, or intentional or unintentional misuse of the model for purposes other than those for which it was designed. Designers should anticipate reasonable and foreseeable misuse of the model, and the users' mental model of the system should therefore also be anticipated and taken into consideration. Large gaps between the intended and actual uses of algorithms have been found in contexts such as criminal justice and web journalism (Christin, 2017).
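As a rough sketch of monitoring for this kind of gap after deployment, the code below compares the training-time distribution of a single summary feature with its production distribution using a population stability index; the binning scheme and the 0.2 alert threshold are common rules of thumb assumed here for illustration.

    # Illustrative distributional-drift check: compare a summary feature's
    # distribution at training time against production using a population
    # stability index (PSI). Binning and the 0.2 threshold are assumptions.
    import numpy as np

    def population_stability_index(train_values, prod_values, bins=10):
        edges = np.histogram_bin_edges(train_values, bins=bins)
        train_counts, _ = np.histogram(train_values, bins=edges)
        prod_counts, _ = np.histogram(prod_values, bins=edges)
        # Convert counts to proportions, flooring at a small value to avoid log(0).
        train_p = np.clip(train_counts / train_counts.sum(), 1e-6, None)
        prod_p = np.clip(prod_counts / prod_counts.sum(), 1e-6, None)
        return float(np.sum((prod_p - train_p) * np.log(prod_p / train_p)))

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)  # distribution seen during development
    prod = rng.normal(0.5, 1.2, 10_000)   # shifted distribution observed after launch
    if population_stability_index(train, prod) > 0.2:
        print("Significant drift detected: update the risk analysis and consider retraining.")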

This adds complexity to anticipating hazards and risks; nevertheless, these should be documented where possible. Christin points out the importance of studying the practices, uses, and implementations surrounding algorithmic technologies, which intellectually involves establishing new exchanges between literatures that do not usually interact, such as critical data studies, the sociology of work, and organizational analysis. We propose that known use-related issues with deployed systems be taken into account during the design stage. The format of the risk analysis can vary by context, and many valuable templates can be found in the Failure Modes and Effects Analysis framing (Section 3.1.3) and in other risk analysis tools from finance and medical deployments.
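For instance, a use-related risk register in the FMEA style might rate each anticipated failure mode for severity, occurrence, and detection and rank entries by the resulting risk priority number; the 1-10 rating scales follow common FMEA practice, while the example entries below are hypothetical.

    # Illustrative FMEA-style register of use-related risks. Severity, occurrence,
    # and detection are rated on 1-10 scales (higher is worse / harder to detect),
    # and entries are ranked by risk priority number (RPN = S x O x D).
    # The example rows are hypothetical, not findings from any real audit.
    from dataclasses import dataclass

    @dataclass
    class UseRelatedRisk:
        failure_mode: str
        effect: str
        severity: int
        occurrence: int
        detection: int

        @property
        def rpn(self) -> int:
            return self.severity * self.occurrence * self.detection

    register = [
        UseRelatedRisk("Model applied to a population outside its design scope",
                       "Opportunity denial for affected users",
                       severity=8, occurrence=4, detection=6),
        UseRelatedRisk("Operator over-trusts the score and skips manual review",
                       "Unjust case decisions go unchecked",
                       severity=9, occurrence=5, detection=7),
    ]
    for risk in sorted(register, key=lambda r: r.rpn, reverse=True):
        print(risk.rpn, risk.failure_mode)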

4.6.2. Artifact: Remediation and Risk Mitigation Plan

After the audit is completed and findings are presented to the leadership and product teams, it is important to develop a plan for remediating these problems. The goal is to drive down the risk of ethical concerns or potential negative social impacts to the extent reasonably practicable. This plan can be reviewed by the audit team and leadership to better inform deployment decisions.

For concerns raised by an audit against ethical values, a technical team will want to know what the threshold for acceptable performance is. If auditors discover, for example, unequal classifier performance across subgroups, how close to parity is necessary for the classifier to be considered acceptable? In safety engineering, a risk threshold is usually defined below which the risk is considered tolerable. Though this is a challenging problem, similar standards could be established and developed in the ethics space as well.
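To illustrate how an agreed threshold might be operationalized, the sketch below flags subgroups whose accuracy trails the best-performing subgroup by more than a chosen tolerance; the per-subgroup accuracy metric, the example figures, and the two-percentage-point tolerance are assumptions for illustration only.

    # Illustrative parity-threshold check across subgroups. The metric and the
    # 0.02 tolerance are assumed; an organization would set these against its
    # own ethical objectives and engineering requirements.
    def subgroups_outside_tolerance(subgroup_accuracy, tolerance=0.02):
        """Return subgroups whose accuracy trails the best subgroup by more than `tolerance`."""
        best = max(subgroup_accuracy.values())
        return {group: round(best - acc, 4)
                for group, acc in subgroup_accuracy.items()
                if best - acc > tolerance}

    accuracy = {"darker_female": 0.88, "darker_male": 0.93,
                "lighter_female": 0.95, "lighter_male": 0.96}
    print(subgroups_outside_tolerance(accuracy))  # -> {'darker_female': 0.08, 'darker_male': 0.03}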

4.6.3. Artifact: Algorithmic Design History File

Inspired by the concept of the design history file from the medical device industry (Teixeira et al., 2013), we propose an algorithmic design history file (ADHF) which would collect all the documentation from the activities outlined above related to the development of the algorithm. It should point to the documents necessary to demonstrate that the product or model was developed in accordance with an organization’s ethical values, and that the benefits of the product outweigh any risks identified in the risk analysis process.

This design history file would form the basis of the final audit report, which is a written evaluation by the organization’s audit team. The ADHF should assist with an audit trail, enabling the reconstruction of key decisions and events during the development of the product. The algorithmic report would then be a distillation and summary of the ADHF.
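As one sketch of how the ADHF could support that audit trail, the structure below indexes each artifact with its stage, owner, location, and timestamp so that key decisions and events can be reconstructed later; the field names, stage labels, and example entry are hypothetical.

    # Illustrative Algorithmic Design History File (ADHF) manifest: an append-only
    # index of audit artifacts from which the summary report can later be distilled.
    # Field names, stage labels, and the example entry are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AdhfEntry:
        stage: str        # e.g. "scoping", "mapping", "testing", "reflection"
        artifact: str     # e.g. "social impact assessment", "model card", "FMEA"
        location: str     # pointer to the document, not the document itself
        owner: str
        recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    adhf = [AdhfEntry("testing", "ethical risk analysis chart",
                      "reports/risk_chart_v2.pdf", "audit-team")]
    # Reconstructing the trail for one stage of the audit:
    print([entry.artifact for entry in adhf if entry.stage == "testing"])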

4.6.4. Artifact: Algorithmic Audit Summary Report

The report aggregates all key audit artifacts, technical analyses, and documentation into a single accessible location for review. It should be compared, qualitatively and quantitatively, against the expectations outlined in the stated ethical objectives and any corresponding engineering requirements.

5. Limitations of Internal Audits

Internal auditors necessarily share an organizational interest with the target of the audit. While it is important to maintain an independent and objective viewpoint during the execution of an audit, we acknowledge that this is challenging. The audit is never isolated from the practices and people conducting it, just as artificial intelligence systems are not independent of their developers or of the larger sociotechnical system. Audits are not unified or monolithic processes with an objective "view from nowhere", but must be understood as a "patchwork of coupled procedures, tools and calculative processes" (Styhre, 2015). To avoid audits becoming mere acts of reputation management for an organization, auditors should be mindful of their own and the organization's biases and viewpoints. Although long-standing internal auditing practices for quality assurance in the financial, aviation, chemical, food, and pharmaceutical industries have proven an effective means of controlling risk (Taylor, 2018), the regulatory dynamics in those industries suggest that internal audits are only one important aspect of a broader system of required quality checks and balances.

6. Conclusion

AI has the potential to benefit the whole of society; however, there is currently an inequitable distribution of risk, such that those who already face patterns of structural vulnerability or bias disproportionately bear the costs and harms of many of these systems. Fairness, justice, and ethics require that those bearing these risks are given due attention, and that organizations that build and deploy artificial intelligence systems internalize and proactively address these social risks, being seriously held to account for their systems' compliance with declared ethical principles.

References
  • O. Y. Al-Jarrah, P. D. Yoo, S. Muhaidat, G. K. Karagiannidis, and K. Taha (2015) Efficient machine learning for big data: a review. Big Data Research 2 (3), pp. 87–93. Cited by: §1.
  • A. Bennaceur, T. T. Tun, Y. Yu, and B. Nuseibeh (2019) Requirements engineering. In Handbook of Software Engineering, pp. 51–92. Cited by: §3.1.2.
  • L. Bing, A. Akintoye, P. J. Edwards, and C. Hardcastle (2005) The allocation of risk in PPP/PFI construction projects in the UK. International Journal of Project Management 23 (1), pp. 25–35. Cited by: §3.4.
  • E. Breck, S. Cai, E. Nielsen, M. Salib, and D. Sculley (2017) The ml test score: a rubric for ml production readiness and technical debt reduction. In 2017 IEEE International Conference on Big Data (Big Data), pp. 1123–1132. Cited by: §2.
  • S. L. Brown and K. M. Eisenhardt (1995) Product development: past research, present findings, and future directions. Academy of management review 20 (2), pp. 343–378. Cited by: §4.1.
  • C. Brubaker, S. Jana, B. Ray, S. Khurshid, and V. Shmatikov (2014) Using frankencerts for automated adversarial testing of certificate validation in SSL/TLS implementations. In IEEE Symposium on Security and Privacy. Cited by: §4.5.1.
  • J. J. Bryson, M. E. Diamantis, and T. D. Grant (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25 (3), pp. 273–291. Cited by: §2.
  • J. Buolamwini and T. Gebru (2018) Gender shades: intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pp. 77–91. Cited by: §1, §2.1.
  • J. Burrell (2016) How the machine "thinks": understanding opacity in machine learning algorithms. Big Data & Society 3 (1), pp. 2053951715622512. Cited by: §2.4.
  • P. E. Byrnes, A. Al-Awadhi, B. Gullvist, H. Brown-Liburd, R. Teeter, J. D. Warren Jr, and M. Vasarhelyi (2018) Evolution of auditing: from the traditional approach to the future audit 1. In Continuous Auditing: Theory and Application, pp. 285–297. Cited by: §3.3.
  • A. Chouldechova, D. Benavides-Prado, O. Fialko, and R. Vaithianathan (2018) A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Conference on Fairness, Accountability and Transparency, pp. 134–148. Cited by: §4.
  • A. Christin (2017) Algorithms in practice: comparing web journalism and criminal justice. Big Data & Society 4 (2), pp. 2053951717718855. Cited by: §4.6.1.
  • K. L. Chung and P. Erdös (1952) On the application of the borel-cantelli lemma. Transactions of the American Mathematical Society 72 (1), pp. 179–186. Cited by: §3.1.
  • R. Courtland (2018) Bias detectives: the researchers striving to make algorithms fair. Nature 558 (7710), pp. 357–357. Cited by: §4.
  • S. Cuccaro-Alamin, R. Foust, R. Vaithianathan, and E. Putnam-Hornstein (2017) Risk assessment and decision making in child protective services: predictive risk modeling in context. Children and Youth Services Review 79, pp. 291–298. Cited by: §4.
  • M. A. Cusumano and S. A. Smith (1995) Beyond the waterfall: software development at Microsoft. Cited by: §3.4.
  • N. Diakopoulos (2014) Algorithmic accountability reporting: on the investigation of black boxes. Cited by: §2.1.
  • R. Dobbe, S. Dean, T. Gilbert, and N. Kohli (2018) A broader view on bias in automated decision-making: reflecting on epistemology and dynamics. arXiv preprint arXiv:1807.00553. Cited by: §4.2.1.
  • K. Driscoll, B. Hall, H. Sivencrona, and P. Zumsteg (2003) Byzantine fault tolerance, from theory to reality. In International Conference on Computer Safety, Reliability, and Security, pp. 235–248. Cited by: §3.1.
  • D. Ensign, S. A. Friedler, S. Neville, C. Scheidegger, and S. Venkatasubramanian (2017) Runaway feedback loops in predictive policing. arXiv preprint arXiv:1706.09847. Cited by: §1.
  • V. Eubanks (2018) A child abuse prediction model fails poor families. Wired Magazine. Cited by: §4.
  • S. M. Faizal, M. R. Palil, R. Maelah, and R. Ramli (2017) Perception on justice, trust and tax compliance behavior in malaysia. Kasetsart Journal of Social Sciences 38 (3), pp. 226–232. Cited by: §2.3.
  • J. Furner (2016) "Data": the data. In Information Cultures in the Digital Age, pp. 287–306. Cited by: §4.3.2.
  • T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford (2018) Datasheets for datasets. arXiv preprint arXiv:1803.09010. Cited by: §3.4, §4.4.2.
  • J. Goldhaber-Fiebert and L. Prince (2019) Impact evaluation of a predictive risk modeling tool for Allegheny County's child welfare office. Pittsburgh: Allegheny County. Cited by: §4.
  • B. Green and Y. Chen (2019) Disparate interactions: an algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 90–99. Cited by: §1.
  • D. Greene, A. L. Hoffmann, and L. Stark (2019) Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Cited by: §2.2.
  • S. Gu and L. Rigazio (2014) Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068. Cited by: §4.5.1.
  • J. Haigh (2012) Probability: a very short introduction. Vol. 310, Oxford University Press. Cited by: §3.1.
  • B. Hall and K. Driscoll (2014) Distributed system design checklist. Cited by: §3.1.1.
  • K. Holstein, J. W. Vaughan, H. Daumé III, M. Dudík, and H. Wallach (2018) Improving fairness in machine learning systems: what do industry practitioners need?. arXiv preprint arXiv:1812.05239. Cited by: §3.4.
  • IEEE (2008) IEEE standard for software reviews and audits. IEEE Std 1028-2008, pp. 1–53. Cited by: §2.1.
  • K. Intemann (2010) 25 years of feminist empiricism and standpoint theory: where are we now?. Hypatia 25 (4), pp. 778–796. Cited by: §4.2.1.
  • A. Jobin, M. Ienca, and E. Vayena (2019) Artificial intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. Cited by: §2.2, §4.
  • P. A. Judas and L. E. Prokop (2011) A historical compilation of software metrics with applicability to NASA's Orion spacecraft flight software sizing. Innovations in Systems and Software Engineering 7 (3), pp. 161–170. Cited by: §3.1.
  • E. Keddell (2019) Algorithmic justice in child protection: statistical fairness, social justice and the implications for practice. Social Sciences 8 (10), pp. 281. Cited by: §4.
  • S. Kiritchenko and S. M. Mohammad (2018) Examining gender and race bias in two hundred sentiment analysis systems. arXiv preprint arXiv:1805.04508. Cited by: §1.
  • N. Kohli, R. Barreto, and J. A. Kroll (2018) Translation tutorial: a shared lexicon for research and practice in human-centered software systems. In 1st Conference on Fairness, Accountability, and Transparency, New York, NY, USA, pp. 7. Cited by: §2.
  • J. A. Kroll, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu (2016) Accountable algorithms. U. Pa. L. Rev. 165, pp. 633. Cited by: §2.
  • A. W. Kruglanski (1996) Motivated social cognition: principles of the interface.. Cited by: §4.2.1.
  • J. Lehman (2019) Evolutionary computation and ai safety: research problems impeding routine and safe real-world application of evolution. arXiv preprint arXiv:1906.10189. Cited by: §4.6.1.
  • N. Leveson (2011) Engineering a safer world: systems thinking applied to safety. MIT press. Cited by: §2, §3.1.2, §3.1, §4.1, §4.3.2, §4.6.1.
  • J. Liu (2012) The enterprise risk management and the risk oriented internal audit. Ibusiness 4 (03), pp. 287. Cited by: §2.1.
  • Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738. Cited by: §4.4.
  • A. H. Lynch and S. Veland (2018) Urgency in the anthropocene. MIT Press. Cited by: §2.
  • T. Maillart, M. Zhao, J. Grossklags, and J. Chuang (2017) Given enough eyeballs, all bugs are shallow? revisiting eric raymond with bug bounty programs. Journal of Cybersecurity 3 (2), pp. 81–90. Cited by: §2.1.
  • M. Merler, N. Ratha, R. S. Feris, and J. R. Smith (2019) Diversity in faces. arXiv preprint arXiv:1901.10436. Cited by: §4.4.
  • M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru (2019) Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229. Cited by: §3.4, §4.4.2, §4.4.2, §4.4.
  • B. D. Mittelstadt and L. Floridi (2016) The ethics of big data: current and foreseeable issues in biomedical contexts. Science and engineering ethics 22 (2), pp. 303–341. Cited by: §4.1.
  • B. Mittelstadt (2019) AI ethics: too principled to fail?. SSRN. Cited by: §2.2.
  • L. Moy (2019) How police technology aggravates racial inequity: a taxonomy of problems and a path forward. Available at SSRN 3340898. Cited by: §1.
  • F. Muniesa, M. Lenglet, et al. (2013) Responsible innovation in finance: directions and implications. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, Wiley, London, pp. 185–198. Cited by: §3.4.
  • K. Murphy (2003) Procedural justice and tax compliance.. Australian Journal of Social Issues (Australian Council of Social Service) 38 (3). Cited by: §2.3.
  • S. U. Noble (2018) Algorithms of oppression: how search engines reinforce racism. NYU Press. Cited by: §2.1, §3.4.
  • C. O'Neil (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Broadway Books. Cited by: §3.4.
  • Institute of Internal Auditors Research Foundation and Institute of Internal Auditors (2007) The professional practices framework. Institute of Internal Auditors. Cited by: §3.3.1.
  • General Assembly of the World Medical Association (2014) World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. The Journal of the American College of Dentists 81 (3), pp. 14. Cited by: §4.2.1.
  • C. Parker (2012) Unexpected challenges in large scale machine learning. In Proceedings of the 1st International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications, pp. 1–6. Cited by: §3.4.
  • F. D. Patterson and K. Neailey (2002) A risk register database system to aid the management of project risk. International Journal of Project Management 20 (5), pp. 365–374. Cited by: §3.4.
  • W. N. Price II (2017) Regulating black-box medicine. Mich. L. Rev. 116, pp. 421. Cited by: §3.2.2.
  • J. Quesada, L. K. Hart, and P. Bourgois (2011) Structural vulnerability and health: latino migrant laborers in the united states. Medical anthropology 30 (4), pp. 339–362. Cited by: §3.2.4.
  • I. D. Raji and J. Buolamwini (2019) Actionable auditing: investigating the impact of publicly naming biased performance results of commercial ai products. In AAAI/ACM Conf. on AI Ethics and Society, Cited by: §1, §2.1, §2.4.
  • C. Rodrigues and S. Cusick (2011) Commercial aviation safety 5/e. McGraw Hill Professional. Cited by: §3.1.
  • G. S. Rodríguez, M. O. Cabases, M. M. Delgado, F. E. Reboll, A. P. Peris, M. B. Saera, et al. (2014) Audits in real time for safety in critical care: definition and pilot study. Medicina intensiva 38 (8), pp. 473–482. Cited by: §4.3.2.
  • C. Sandvig, K. Hamilton, K. Karahalios, and C. Langbort (2014) Auditing algorithms: research methods for detecting discrimination on internet platforms. Data and discrimination: converting critical concerns into productive inquiry 22. Cited by: §2.1, §2.4.
  • D. Satava, C. Caldwell, and L. Richards (2006) Ethics and the auditing culture: rethinking the foundation of accounting and auditing. Journal of Business Ethics 64 (3), pp. 271–284. Cited by: §2.3.
  • D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, and M. Young (2014) Machine learning: the high interest credit card of technical debt. Cited by: §3.4.
  • A. D. Selbst and S. Barocas (2018) The intuitive appeal of explainable machines. Fordham L. Rev. 87, pp. 1085. Cited by: §3.4.
  • A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi (2019) Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68. Cited by: §4.3.2.
  • H. Shah (2018) Algorithmic accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376 (2128), pp. 20170362. Cited by: §2.4.
  • D. S. Soh and N. Martinov-Bennie (2011) The internal audit function: perceptions of internal audit roles, effectiveness and evaluation. Managerial Auditing Journal 26 (7), pp. 605–622. Cited by: §3.3.1.
  • D. H. Stamatis (2003) Failure mode and effect analysis: fmea from theory to execution. ASQ Quality press. Cited by: §3.1.3.
  • J. Stilgoe, R. Owen, and P. Macnaghten (2013) Developing a framework for responsible innovation. Research Policy 42 (9), pp. 1568–1580. Cited by: §4.1.
  • A. Styhre (2015) The financialization of the firm: managerial and social implications. Edward Elgar Publishing. Cited by: §3.3, §4.3.2, §5.
  • A. Styhre (2018) The unfinished business of governance: towards new governance regimes. In The Unfinished Business of Governance, Cited by: §4.3.2.
  • J. Taylor (2018) Quality assurance of chemical measurements. Routledge. Cited by: §5.
  • M. B. Teixeira, M. Teixeira, and R. Bradley (2013) Design controls for the medical device industry. CRC press. Cited by: §3.2.1, §3, §4.6.3.
  • M. Trajtenberg (2018) AI as the next gpt: a political-economy perspective. Technical report National Bureau of Economic Research. Cited by: §3.4.
  • F. Vanclay (2003) International principles for social impact assessment. Impact assessment and project appraisal 21 (1), pp. 5–12. Cited by: §4.2.2.
  • T. Vanderveen (2005) Averting highest-risk errors is first priority. Patient Safety and Quality Healthcare 2, pp. 16–21. Cited by: §3.2.
  • A. K. Verma, S. Ajit, D. R. Karanki, et al. (2010) Reliability and safety engineering. Vol. 43, Springer. Cited by: §3.
  • J. Whittlestone, R. Nyrup, A. Alexandrova, and S. Cave (2019) The role and limits of principles in ai ethics: towards a focus on tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA, pp. 27–28. Cited by: §2.2.
  • Y. Zeng, E. Lu, and C. Huangfu (2018) Linking artificial intelligence principles. arXiv preprint arXiv:1812.04814. Cited by: §2.2.