Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives

08/05/2021 · Markus Langer, et al. · Saarland University · University of Bonn

National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of explainability of systems in applied contexts and can be the basis for certification as a means to communicate whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.


I Introduction

National and international guidelines for trustworthy artificial intelligence (AI) consider explainability (as well as related concepts such as transparency and interpretability) to be a central facet of trustworthy systems (i.e., systems that can be trusted) [8, 16]. In fact, explainability seems to be the most commonly featured concept in regulatory guidelines on AI around the globe [20, 17]. Thus, ensuring system explainability seems to be a central step towards trustworthy AI.

As it is considered to be a central enabler of many crucial desiderata associated with intelligent systems [11, 3, 26], the importance of explainability for overall system trustworthiness becomes even more apparent. These desiderata can take the form of goals, interests, needs, and demands of the multiple stakeholders involved in the development, deployment, and actual use of intelligent systems [26]. For example, such desiderata could be to have usable, robust, or accountable systems [3, 9, 26]. With regard to such desiderata, explainability aims to ensure an improved understanding of system processes and outputs to a) enable stakeholders to more easily decide whether a specific desideratum is fulfilled, and b) facilitate improvement of a system with respect to a given desideratum [26]. For instance, better understanding of system processes and outputs can help to improve system usability for users [12], can aid developers to increase system robustness [10], can help regulating bodies to clarify legal and ethical accountability in case of unfavorable outcomes [6], and can lead to more warranted trust in intelligent systems [26, 19].

However, ensuring and assessing system explainability in applied contexts is challenging. For developers, it is demanding to design systems that provide insights into their decision processes and that enable other stakeholders to better understand these processes [15]. Users may realize only after some experience with a system whether they can understand its decision processes and outputs [22]. Similarly, regulating bodies might only realize after an unfavorable outcome has occurred that they are unable to understand what kind of system malfunction led to it. Adding to this complexity, explainability is a multi-disciplinary area of research and practice [2]. Consequently, a comprehensive picture of explainability in applied contexts requires analysis from the perspectives of multiple disciplines [26]. For instance, focusing solely on the technical perspective of explainability (e.g., how to design systems that, in principle, can provide insights into their decision-making processes) will lack a human-centered analysis of how explainability can fulfill societal desiderata [15]. For this, we additionally need a psychological perspective that empirically investigates whether a specific explainability approach has the intended effects on human-system interaction. In addition, explainability is associated with ethical and legal questions (e.g., concerning the allocation of responsibility, or the General Data Protection Regulation of the European Union); thus, these perspectives complement the technical and psychological perspectives.

In this vision paper, we highlight that explainability auditing and certification, as a way to ensure and assess system explainability, needs to take multi-disciplinary perspectives to enable a comprehensive analysis of explainability in applied contexts. During audits, auditors investigate whether products or processes meet certain quality standards and requirements. Certification is a way to communicate that a product or process has undergone quality control or satisfies certain requirements, and can thus be an outcome of auditing processes. Both auditing and certification can help stakeholders to quickly assess whether products or processes follow certain quality standards and, in consequence, adequately evaluate their alignment with respect to these standards. However, explainability auditing has received only brief mention in previous work (e.g., in [31]). In this paper we reinforce this idea and, moreover, highlight that explainability auditing is only possible by taking a multi-disciplinary perspective on the auditing process, as no single discipline can succeed in fully capturing the complexity of defining requirements on explainability in systems.

This paper is structured along four different perspectives for explainability auditing (i.e., technical, psychological, legal, and ethical). For each of these perspectives, we present dimensions that might be necessary to consider in auditing processes. Additionally, we present possible benefits that may result from ensuring that explainability meets these dimensions.

II Why Explainability Auditing?

In what follows, we emphasize different perspectives on explainability auditing that provide a rationale regarding a) what dimensions we need to investigate in an explainability auditing process, and b) what benefits we can expect when these dimensions are met. Clearly, this is not a comprehensive list of perspectives on explainability auditing, as we could imagine further perspectives to complement our multi-disciplinary approach (e.g., a sociological perspective). However, this list is intended to envision how different perspectives may come together in order to ensure system explainability.

II-A Technical Perspective

II-A1 Technical Dimensions

From a technical perspective, explainability auditing needs to assess the current status of a system’s explainability. In the following, we present sample auditing dimensions from a technical perspective (for a more extensive list of technical auditing dimensions, see [38]).

Functional explainability

Is a system designed in such a way that it allows for human insight, or does it provide additional methods that shed light on its decision processes? In the first case, we call a system ante-hoc explainable; in the second case, post-hoc explainable [3]. Testing whether a system is capable of providing intelligibility to humans is a basic requirement for system explainability. This may happen on two levels: either the system is explainable with regard to specific outputs (local explainability), or the system’s decision-making process is explainable as a whole (global explainability) [38, 18].
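
The local/global distinction can be illustrated with a minimal sketch (not from the paper; the model, feature names, and values are hypothetical). For a transparent linear scoring model, the weights describe the decision process as a whole (global), while per-feature contributions explain one specific output (local):

```python
# Toy ante-hoc explainable model: a weighted sum over hypothetical features.
WEIGHTS = {"income": 0.6, "age": -0.2, "debt": -0.4}

def predict(x):
    """Global decision process: the weighted sum itself is the explanation."""
    return sum(WEIGHTS[f] * v for f, v in x.items())

def local_explanation(x):
    """Local explainability: each feature's contribution to this one output."""
    return {f: WEIGHTS[f] * v for f, v in x.items()}

applicant = {"income": 2.0, "age": 1.0, "debt": 3.0}
score = predict(applicant)
contributions = local_explanation(applicant)
# The local contributions sum exactly to the prediction, so the
# explanation is complete with respect to this output.
```

For a model of this form, the local explanation is exact; for opaque models, post-hoc methods can only approximate such contributions.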

Faithful explainability

Does the system provide information that describes its decision-making process accurately and truthfully? Ensuring that a system provides faithful insights into its decision-making processes is essential for its trustworthiness and the reliability of the provided information [36]. For instance, faithful explainability is likely present in systems that reveal actual causal chains of their decision processes [30].
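
One way such faithfulness could be probed in an audit is a perturbation test: if an explanation claims feature f contributed c to the output, removing f should change the output by roughly c. The following is a hedged sketch under that assumption; the model and all names are hypothetical, not a method proposed by the paper:

```python
def model(x, weights):
    # Hypothetical transparent model used as the system under audit.
    return sum(weights[f] * x[f] for f in x)

def faithfulness_gap(x, weights, explanation):
    """Largest deviation between a claimed per-feature contribution and the
    actual output change when that feature is zeroed out."""
    base = model(x, weights)
    gap = 0.0
    for f, claimed in explanation.items():
        perturbed = dict(x, **{f: 0.0})
        actual = base - model(perturbed, weights)
        gap = max(gap, abs(actual - claimed))
    return gap

w = {"a": 1.0, "b": -2.0}
x = {"a": 3.0, "b": 1.0}
honest = {"a": 3.0, "b": -2.0}    # matches the model's causal structure
deceptive = {"a": 5.0, "b": 0.0}  # misreports the decision process
# An auditor would flag explanations whose gap exceeds some tolerance.
```

A small gap indicates the explanation tracks the model's actual behavior; a large gap signals unfaithful (possibly deceptive) explanations.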

Interactive explainability

Can the system’s explanations adapt or be adapted to respective stakeholder needs? Most contemporary explainability approaches rely on a one-size-fits-all solution when delivering explanations [38]. However, to provide useful information to stakeholders, explainability approaches need to integrate stakeholders’ background knowledge and their explanatory needs given a particular system. Interactive explainability seems to be key with respect to these requirements [38, 37].

Explainability trade-off

Is it more important to have an explainable system, or should the system rather be efficient or accurate? Some intelligent systems are or can be made explainable (at least to some degree) without a loss in efficiency or accuracy, but for others this is not technically feasible [9, 38]. Whether to trade off accuracy for explainability is a decision that must not only be made but also justified, and it should therefore be included in an auditing process.

II-A2 Technical Benefits

Explainability can be an important building block for better systems. Primarily, explainability can help developers to detect errors and can thus increase system debuggability, facilitating system safety and robustness [10]. Further benefits are verification and validation, which become easier through explainability [14]. Overall, the technical perspective is a foundation for all other perspectives on explainability because, without it, requirements from the other perspectives cannot be met.

II-B Psychological Perspective

II-B1 Psychological Dimensions

The psychological perspective on system explainability mostly reflects user needs in the respective application context of an intelligent system. In the following, we present sample auditing dimensions from a psychological perspective.

Understandability

Does the provided information help people to better understand system decision-making processes? This may be the primary psychological dimension to investigate in explainability audits [26, 23], as much of the previous work has resulted in explainability approaches that help developers understand system decision processes, but not other stakeholders [9, 33].

Context-Dependency

Does the system provide context-related intelligibility of its decision-making processes or of its outputs? In this case, context-related means that people need insights that depend on their goals and needs relevant to the context [26, 38]. For instance, if medical doctors want to learn why a system produced a particular diagnosis, they may want detailed information that helps them understand the output. In contrast, if they are under time pressure and want to quickly decide on the best patient treatment, a different kind of information might be more helpful [1].

Usability

Does the system provide easy-to-access information, and are explainability functionalities easy to use [41, 13]? System explainability needs to be usable, meaning that people can actually use the system in a way that lets them access the information they need to better understand the system’s decision-making processes. Ensuring usable explainability can mean optimizing user interfaces or the interactivity between user and system [30].

Honesty

Does the system provide non-deceptive information? There are emerging discussions on possible “dark patterns” of explainability [13]. Explainability auditing needs to explore whether system explainability contributes to the goals of respective stakeholders or whether the system is designed to nudge or persuade people instead of providing actually relevant information. In the case of honest explainability, stakeholder interests might diverge as system deployers might intentionally want systems to be designed in a way that ensures certain user behavior whereas users might be less happy about systems that try to influence their behavior [9].

II-B2 Psychological Benefits

First, ensuring the psychological dimensions may help to develop warranted trust in systems [19, 27]. If we ensure that system explainability meets the psychological dimensions outlined above, people will be better able to assess when and under what conditions to follow system recommendations. Second, ensuring psychological dimensions of explainability can increase system acceptance and thus actual use of systems in applied contexts [41]. Although perhaps obvious, designing explainability qualities of systems so that they are relevant and usable will help ensure that systems actually end up being used instead of being ignored. Third, joint human-system performance can improve [25]. If systems provide understandable, context-dependent, usable, and honest information, this might enable users to make more informed and more accurate decisions, in contrast to situations where they would receive overly complex, irrelevant, or deceptive information for the task at hand.

II-C Legal Perspective

II-C1 Legal Dimensions

From a legal perspective, the auditing process should take into account all legal requirements for the use of intelligent systems, in order to be able to prove to the user that the intelligent system operates in compliance with the law (especially with regard to data protection and cybersecurity) and, in particular, that no fundamental rights of the individual are violated. Intelligent systems that are not able to prove their compliance with the legal system (as a minimum requirement) cannot be considered trustworthy and explainable. In the future, the auditing process will also have to take into account regulatory requirements for the use of intelligent systems, such as those set out in the EU Commission’s Proposal for a Regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final).

General Data Protection Regulation (GDPR)

When is the processing of data permissible under data protection law when using intelligent systems? Insofar as personal data is processed via an intelligent system, the requirements of the GDPR must be observed. These oblige the controller (Art. 4 No. 7 GDPR) to comply with the principles relating to processing of personal data (Art. 5 GDPR) as well as more detailed requirements, such as proof of a sufficient legal basis for data processing (Art. 6 GDPR) as well as measures for security (Art. 32 (1) GDPR). In addition to the general data protection requirements, Art. 22 GDPR may also need to be observed, which sets out requirements for automated decision-making (ADM). However, Art. 22 GDPR only applies if the ADM decision is not reviewed again by a human but directly translated into a decision of its own [42]. Thus, if a human correctness and plausibility check of the decision takes place, Art. 22 GDPR does not make any additional specifications [7]. However, if it is a decision based solely on automated processing that produces legal effects concerning the data subject or significantly affects them, it is only permissible if the requirements of Art. 22 (2) GDPR are fulfilled. This is the case if the decision is necessary for a contract between the data subject and the controller, if it is required by law, or if it is based on the data subject’s explicit consent.
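
The applicability test described above can be summarized schematically. The following is an illustrative sketch only, not legal advice; it merely encodes the conditions stated in the paragraph, and all parameter names are our own:

```python
def art_22_applies(solely_automated, legal_or_significant_effect, human_review):
    """Art. 22 GDPR covers decisions based solely on automated processing that
    produce legal effects or similarly significantly affect the data subject;
    a substantive human correctness and plausibility check takes the decision
    out of its scope."""
    return solely_automated and legal_or_significant_effect and not human_review

def art_22_permissible(necessary_for_contract, required_by_law, explicit_consent):
    """Under Art. 22(2) GDPR, such a decision is permissible only on one of
    three grounds: contractual necessity, legal authorization, or the data
    subject's explicit consent."""
    return necessary_for_contract or required_by_law or explicit_consent
```

An auditor could use such a schematic test as a first triage step before the detailed legal assessment, which of course cannot be reduced to boolean flags.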

Cybersecurity Act

What are the legal requirements for cybersecurity? In addition to the objectives, tasks and organizational matters of the European Union Agency for Cybersecurity (ENISA), the Cybersecurity Act also contains a framework for the establishment of a European cybersecurity certification for information and communications technology (ICT) products, ICT services and ICT processes, according to Art. 1 (1) of the Cybersecurity Act. According to Art. 2 No. 13 Cybersecurity Act, ICT services refer to services that consist entirely or predominantly of the transmission, storage, retrieval or processing of information by means of network and information systems. According to Art. 2 No. 14 Cybersecurity Act, the term ICT process includes all activities to design, develop, provide or maintain an ICT product or service. Intelligent systems, since they administer information by means of network and information systems, fulfill the requirements of an ICT service. In addition, activities related to intelligent systems may also meet the requirements for an ICT process if the aforementioned requirements are met.

However, the Cybersecurity Act does not impose any mandatory cybersecurity requirements on intelligent systems. Art. 46 et seq. of the Cybersecurity Act merely provide for a voluntary certification framework. Therefore, there is no obligation for manufacturers or operators to carry out certification, meaning that no binding requirements for the cybersecurity of intelligent systems have yet resulted from the Cybersecurity Act. At the same time, the lack of specific cybersecurity requirements on intelligent systems shows that current regulation does not sufficiently address new technologies and their particular threats. A legal framework that proactively shapes the use of intelligent systems would therefore be desirable.

Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

Are there legal requirements for the use of intelligent systems? On April 21, 2021, the EU Commission published its proposal for a regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final). The planned regulation has clear parallels to the GDPR in several places, e.g. the risk-based approach and the scope of application. For example, the regulation applies not only to providers in the EU, but also to providers who offer their AI systems on the European market (market location principle). Art. 5 of the planned regulation prohibits certain use-case scenarios for AI, e.g. discrimination. In addition, Art. 6 et seq. of the planned regulation specifies high-risk areas of application for which stricter requirements apply than for ordinary AI applications. The Artificial Intelligence Act is expected to have an enormous impact on the legal framework for the use of intelligent systems in Europe, but also worldwide. As soon as the regulation is available in its final version, it should therefore be included in explainability audits and later be a mandatory part of the auditing process.

II-C2 Legal Benefits

From a legal point of view, an auditing process promises the advantage that compliance with legal obligations can be achieved and demonstrated to third parties. In particular, if auditing is carried out by an independent third party, the process creates the opportunity to actively enforce the law by making the auditing itself a legal obligation or by attaching legal benefits to it. This can also increase the incentive to audit an intelligent system.

II-D Ethical Perspective

II-D1 Ethical Dimensions

The ethical perspective on system explainability is built on two fundamental considerations. First, it is about upholding the moral rights (e.g., as given by certain normative theories) of the involved stakeholders and fulfilling norms with respect to them. Second, it is about enabling stakeholders to live up to their obligations. In the following, we present sample auditing dimensions from an ethical perspective.

Responsibility

Does the provided information enable responsible decision-making? And: Does the information make the allocation of (moral) responsibility possible? Responsibility is about identifying who is blameworthy or answerable for a certain decision of a system [39]. Especially in cases where a human is in the loop and has to make a decision based on a system’s output, it is important to consider both of the above facets when incorporating explainability into a system. First, the provision of certain pieces of information may enable a human in the loop to become a responsible decision maker [35]. Subsequently, it becomes possible to allocate responsibility to this person [34]. However, not all types of explainability may help to achieve responsibility [26].

Non-Discrimination

Does the system provide information that makes it possible to check for potential discrimination and to assess actual discrimination? Especially for people affected by decisions, the decisions of intelligent systems can have significant implications. Be it applying for a loan, a job, or a visa: in all these cases, intelligent systems are increasingly used. However, system outputs that lead to decisions about people may involve unfair biases (see, e.g., [28]). The explanation of a rejection made by an intelligent system should make it possible to identify whether protected attributes like race or gender (also indirectly) affected the decision. Furthermore, where the influence of protected attributes cannot be prevented, explanations should at least enable parties affected by automated system-based decisions to understand why system outputs are biased and whether this bias is tolerable or even justified [24].

(Moral) Right to Explanation

Does the information provision comply with moral rights to explanation? Arguably, there is a moral right to explanation that requires intelligent systems to be able to provide certain types of explanations [21]. More precisely, people should receive explanations that enable them to contest decisions that are based on the recommendations of intelligent systems [40]. In general, the advance of intelligent systems to ever more areas of human lives often precludes people affected by system-based decisions from tracing how certain decisions came about and affected them [32]. This lack of traceability is morally problematic [4, 5]. Some types of explanations promise to empower people to contest and check decisions of intelligent systems [43].

II-D2 Ethical Benefits

The moral integrity of a system is a significant building block of its trustworthiness. As such, the possibility to check for this integrity, for instance, by checking whether the system unduly discriminates, is essential for stakeholders to develop appropriate trust in systems. Furthermore, the possibility to check for such integrity can also contribute to system trustworthiness (at least when viewing trustworthiness from a philosophical point of view [29]). Lastly, systems that allow for 1) responsible decision-making, 2) an adequate allocation of responsibility, and 3) general contesting of decisions (as given by a moral right to explanations) are important in ensuring acceptance of decisions of intelligent systems [4, 36].

III Outlook and Conclusion

The main contribution of this vision paper is to propose multi-disciplinary dimensions for explainability auditing. While explainability auditing based on only one of the aforementioned perspectives may provide valuable quality control for this perspective (e.g., with respect to regulatory requirements), it will ultimately fall short with respect to the multi-disciplinary challenges associated with explainability.

The dimensions we have proposed in this paper are just a rough description of what could be the basis for explainability auditing. Future work would need to outline concrete auditing and certification processes and investigate what might be the most practical implementation of explainability auditing. In this sense, the proposed dimensions could serve as a starting point for creating an auditing checklist. To this end, future research might need to develop comprehensive lists of dimensions that are relevant in explainability auditing processes [26].
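
To sketch what such a checklist might look like in practice, the following minimal example organizes the dimensions discussed above by perspective. The dimension names come from the paper; the data structure and helper functions are an assumption on our part, not a proposed standard:

```python
from dataclasses import dataclass

@dataclass
class AuditDimension:
    perspective: str   # technical, psychological, legal, or ethical
    name: str          # dimension as discussed in Section II
    satisfied: bool = False
    notes: str = ""    # auditor's evidence and justification

def checklist():
    """Seed a multi-disciplinary explainability-auditing checklist with the
    dimensions proposed in this paper."""
    dims = [
        ("technical", ["functional", "faithful", "interactive", "trade-off"]),
        ("psychological", ["understandability", "context-dependency",
                           "usability", "honesty"]),
        ("legal", ["GDPR", "Cybersecurity Act", "AI Act"]),
        ("ethical", ["responsibility", "non-discrimination",
                     "right to explanation"]),
    ]
    return [AuditDimension(p, n) for p, names in dims for n in names]

def open_items(items):
    """Dimensions still to be assessed, grouped for the audit report."""
    return [(d.perspective, d.name) for d in items if not d.satisfied]
```

Concrete auditing processes would, of course, need operationalized criteria and evidence requirements behind each entry rather than a simple yes/no flag.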

Auditing processes can be a first step towards quality norms (e.g., DIN norms) as well as a way to ensure these norms as soon as they are implemented. We hope that this paper initiates a discussion on the ways in which multi-disciplinary explainability auditing processes can be realized in practice.

Acknowledgments

Work on this paper was funded by the Volkswagen Foundation grants AZ 98512, 98513, and 98514 “Explainable Intelligent Systems” (EIS) and by the DFG grant 389792660 as part of TRR 248. We thank three anonymous reviewers for their feedback.

References

  • [1] R. Ackerman and T. Lauterman (2012) Taking reading comprehension exams on screen or on paper? A metacognitive analysis of learning texts under time pressure. Computers in Human Behavior 28 (5), pp. 1816–1828. External Links: Document Cited by: §II-B1.
  • [2] A. Adadi and M. Berrada (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6 (), pp. 52138–52160. External Links: Document, ISSN 2169-3536 Cited by: §I.
  • [3] A. B. Arrieta, N. Díaz-Rodríguez, J. D. Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, and F. Herrera (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, pp. 82–115. External Links: ISSN 1566-2535, Document Cited by: §I, §II-A1.
  • [4] K. Baum, H. Hermanns, and T. Speith (2018) From machine ethics to machine explainability and back. In International Symposium on Artificial Intelligence and Mathematics (ISAIM), pp. 1–8. Cited by: §II-D1, §II-D2.
  • [5] K. Baum, H. Hermanns, and T. Speith (2018) Towards a framework combining machine ethics and machine explainability. In Proceedings 3rd Workshop on formal reasoning about Causation, Responsibility, and Explanations in Science and Technology, (CREST), EPTCS, Vol. 286, pp. 34–49. External Links: Document Cited by: §II-D1.
  • [6] R. Binns, M. Van Kleek, M. Veale, U. Lyngs, J. Zhao, and N. Shadbolt (2018) ’It’s reducing a human being to a percentage’: perceptions of justice in algorithmic decisions. In Proceedings of the 2018 Conference on Human Factors in Computing Systems (CHI), New York, NY, USA, pp. 1–14. External Links: Document, ISBN 978-1-4503-5620-6 Cited by: §I.
  • [7] B. Buchner, in J. Kühling and B. Buchner (Eds.) (2020) Datenschutz-Grundverordnung BDSG: Kommentar. 3. Auflage, C.H. Beck, München. Note: Art. 22 DSGVO, Rn. 15 External Links: ISBN 9783406749940 Cited by: §II-C1.
  • [8] Bundesministerium für Bildung und Forschung, Bundesministerium für Wirtschaft und Energie, and Bundesministerium für Arbeit und Soziales (2018) Strategie Künstliche Intelligenz der Bundesregierung. Federal Government of Germany. External Links: Link Cited by: §I.
  • [9] J. Burrell (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data & Society 3 (1), pp. 1–12. External Links: Document Cited by: §I, §II-A1, §II-B1, §II-B1.
  • [10] D. V. Carvalho, E. M. Pereira, and J. S. Cardoso (2019) Machine learning interpretability: a survey on methods and metrics. Electronics 8 (8). External Links: ISSN 2079-9292, Document Cited by: §I, §II-A2.
  • [11] L. Chazette, W. Brunotte, and T. Speith (2021) Exploring explainability: a definition, a model, and a knowledge catalogue. In IEEE 29th International Requirements Engineering Conference (RE), Cited by: §I.
  • [12] L. Chazette and K. Schneider (2020) Explainability as a non-functional requirement: challenges and recommendations. Requirements Engineering 25 (4), pp. 493–514. External Links: Document Cited by: §I.
  • [13] M. Chromik, M. Eiband, S. T. Völkel, and D. Buschek (2019) Dark patterns of explainability, transparency, and user control for intelligent systems. In Joint Proceedings of the ACM IUI 2019 Workshops, CEUR Workshop Proceedings, Vol. 2327. External Links: Link Cited by: §II-B1, §II-B1.
  • [14] K. Darlington (2013) Aspects of intelligent systems explanation. Universal Journal of Control and Automation 1 (2), pp. 40–51. External Links: Document Cited by: §II-A2.
  • [15] F. Doshi-Velez and B. Kim (2017) Towards a rigorous science of interpretable machine learning. CoRR abs/1702.08608. External Links: 1702.08608 Cited by: §I.
  • [16] EU High-Level Expert Group on Artificial Intelligence (2019) Ethics Guidelines for Trustworthy AI. External Links: Link Cited by: §I.
  • [17] T. Hagendorff (2020) The ethics of AI ethics: an evaluation of guidelines. Minds & Machines 30 (1), pp. 99–120. External Links: Document Cited by: §I.
  • [18] M. Hall, D. Harborne, R. Tomsett, V. Galetic, S. Quintana-Amate, A. Nottle, and A. Preece (2019) A systematic method to understand requirements for explainable AI (XAI) systems. In Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence, pp. 21–27. Cited by: §II-A1.
  • [19] A. Jacovi, A. Marasović, T. Miller, and Y. Goldberg (2021) Formalizing trust in artificial intelligence: prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), New York, NY, USA, pp. 624–635. External Links: Document Cited by: §I, §II-B2.
  • [20] A. Jobin, M. Ienca, and E. Vayena (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (9), pp. 389–399. External Links: Document Cited by: §I.
  • [21] T. W. Kim and B. R. Routledge (2021) Why a right to an explanation of algorithmic decision-making should exist: a trust-based approach. Business Ethics Quarterly, pp. 1–28. External Links: Document Cited by: §II-D1.
  • [22] D. Ko and A. R. Dennis (2011) Profiting from knowledge management: the impact of time and experience. Information Systems Research 22 (1), pp. 134–152. External Links: Document Cited by: §I.
  • [23] M. A. Köhl, K. Baum, M. Langer, D. Oster, T. Speith, and D. Bohlender (2019) Explainability as a non-functional requirement. In IEEE 27th International Requirements Engineering Conference (RE), pp. 363–368. External Links: ISSN 1090-705X, Document Cited by: §II-B1.
  • [24] M. Kusner, J. Loftus, C. Russell, and R. Silva (2017) Counterfactual fairness. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), Red Hook, NY, USA, pp. 4069–4079. Cited by: §II-D1.
  • [25] V. Lai and C. Tan (2019) On human predictions with explanations and predictions of machine learning models: A case study on deception detection. In Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAccT), New York, NY, USA, pp. 29–38. External Links: Document Cited by: §II-B2.
  • [26] M. Langer, D. Oster, T. Speith, H. Hermanns, L. Kästner, E. Schmidt, A. Sesing, and K. Baum (2021) What do we want from explainable artificial intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296. External Links: Document Cited by: §I, §I, §II-B1, §II-B1, §II-D1, §III.
  • [27] J. D. Lee and K. A. See (2004) Trust in automation: designing for appropriate reliance. Human Factors 46 (1), pp. 50–80. External Links: Document Cited by: §II-B2.
  • [28] B. Lepri, N. Oliver, E. Letouzé, A. Pentland, and P. Vinck (2018) Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology 31 (4), pp. 611–627. External Links: Document Cited by: §II-D1.
  • [29] C. McLeod (2020) Trust. In The Stanford Encyclopedia of Philosophy, E. N. Zalta (Ed.), Note: https://plato.stanford.edu/archives/fall2020/entries/trust/ Cited by: §II-D2.
  • [30] T. Miller (2019) Explanation in artificial intelligence: insights from the social sciences. Artificial Intelligence 267, pp. 1–38. External Links: ISSN 0004-3702, Document Cited by: §II-A1, §II-B1.
  • [31] J. Mökander and L. Floridi (2021) Ethics-based auditing to develop trustworthy AI. Minds & Machines, pp. 1–5. External Links: Document Cited by: §I.
  • [32] M. Mora-Cantallops, S. Sánchez-Alonso, E. García-Barriocanal, and M. Sicilia (2021) Traceability for trustworthy AI: a review of models and tools. Big Data and Cognitive Computing 5 (2). External Links: ISSN 2504-2289, Document Cited by: §II-D1.
  • [33] A. Páez (2019) The pragmatic turn in explainable artificial intelligence (XAI). Minds & Machines 29 (3), pp. 441–459. External Links: Document Cited by: §II-B1.
  • [34] W. Pieters (2011) Explanation and trust: what to tell the user in security and AI?. Ethics and Information Technology 13 (1), pp. 53–64. External Links: Document Cited by: §II-D1.
  • [35] S. Robbins (2019) A misdirected principle with a catch: explicability for AI. Minds and Machines 29 (4), pp. 495–514. External Links: Document Cited by: §II-D1.
  • [36] A. Rosenfeld and A. Richardson (2019) Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems 33 (6), pp. 673–705. External Links: Document Cited by: §II-A1, §II-D2.
  • [37] J. Schneider and J. Handali (2019) Personalized explanation for machine learning: a conceptualization. In Proceedings of the 27th European Conference on Information Systems (ECIS), pp. 1–17. Cited by: §II-A1.
  • [38] K. Sokol and P. A. Flach (2020) Explainability fact sheets: a framework for systematic assessment of explainable approaches. In Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAcct), New York, NY, USA, pp. 56–67. External Links: Document Cited by: §II-A1, §II-A1, §II-A1, §II-A1, §II-B1.
  • [39] P. Strawson (1962) Freedom and resentment. Proceedings of the British Academy 48, pp. 1–25. Cited by: §II-D1.
  • [40] A. A. Tubella, A. Theodorou, V. Dignum, and L. Michael (2020) Contestable black boxes. In 4th International Joint Conference on Rules and Reasoning (RuleML+RR), Lecture Notes in Computer Science, Vol. 12173, pp. 159–167. External Links: Document Cited by: §II-D1.
  • [41] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis (2003) User acceptance of information technology: toward a unified view. MIS Quarterly 27 (3), pp. 425–478. External Links: Document Cited by: §II-B1, §II-B2.
  • [42] K. von Lewinski, in S. Brink and H. A. Wolff (Eds.) (2021) BeckOK Datenschutzrecht. 35. Edition, Stand: 01.02.2021, C.H. Beck, München. Note: Art. 22 DSGVO, Rn. 14 ff. Cited by: §II-C1.
  • [43] S. Wachter, B. Mittelstadt, and C. Russell (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard Journal of Law & Technology 31 (2), pp. 841–887. External Links: Document Cited by: §II-D1.