Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK

04/20/2023
by Luca Nannini, et al.

Public attention towards the explainability of artificial intelligence (AI) systems has been rising in recent years as a means of enabling human oversight. This has translated into a proliferation of research outputs, such as those from Explainable AI, that aim to enhance transparency and control for system debugging and monitoring, as well as the intelligibility of system processes and outputs for user services. Yet such outputs are difficult to adopt in practice due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to address this exigency; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of the plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions of, and requirements for, explanations. This might be due to a willingness to frame explanations foremost as a risk-management tool for AI oversight, but also to the lack of consensus on what constitutes a valid algorithmic explanation and on how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we conduct a gap analysis of existing policies, leading us to formulate a set of recommendations on how to address explainability in regulations for AI systems, particularly regarding the definition, feasibility, and usability of explanations, as well as the allocation of accountability to explanation providers.


