Statutory Professions in AI governance and their consequences for explainable AI

06/15/2023
by Labhaoise NiFhaolain, et al.

Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals. While regulatory frameworks are being developed, there remains a lack of consensus on the methods necessary to deliver safe AI, and the potential for explainable AI (XAI) to contribute to the effectiveness of AI regulation is being increasingly examined. Regulation must include methods to ensure compliance on an ongoing basis, yet practical proposals for how to achieve this are largely absent. For XAI to be successfully incorporated into a regulatory system, the individuals who interpret/explain models to stakeholders should be sufficiently qualified for the role. Statutory professions are prevalent in domains in which harm can be done to the health, safety and rights of individuals; the most obvious examples are doctors, engineers and lawyers. Such professionals are required to exercise skill and judgement and to defend their decision-making process in the event of harm occurring. We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework for compliance and monitoring purposes. We will refer to this new statutory professional as an AI Architect (AIA). The AIA would be responsible for ensuring that the risk of harm is minimised and would be accountable in the event that harms occur. The AIA would also be relied upon to provide appropriate interpretations/explanations of XAI models to stakeholders. Further, in order to satisfy themselves that models have been developed in a satisfactory manner, AIAs would require those models to have appropriate transparency. The introduction of an AIA system is therefore likely to increase the use of XAI, as it would enable AIAs to discharge their professional obligations.

