Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool)

03/20/2021
by David Leslie, et al.

Over the last two years, The Alan Turing Institute and the Information Commissioner's Office (ICO) have been working together to discover ways to tackle the difficult issues surrounding explainable AI. The ultimate product of this joint endeavour, Explaining decisions made with AI, published in May 2020, is the most comprehensive practical guidance on AI explanation produced anywhere to date. We have put together this workbook to help support the uptake of that guidance. The goal of the workbook is to summarise some of the main themes from Explaining decisions made with AI and then to provide the materials for a workshop exercise built around a use case, created to give you a flavour of how to put the guidance into practice. In the first three sections, we run through the basics of Explaining decisions made with AI. We provide a précis of the four principles of AI explainability, the typology of AI explanations, and the tasks involved in the explanation-aware design, development, and use of AI/ML systems. We then provide some reflection questions, which are intended to be a launching pad for group discussion and a starting point for the case-study-based exercise included as Appendix B. In Appendix A, we offer more detailed suggestions on how to organise the workshop. These recommendations are based on two workshops we had the privilege of co-hosting with our colleagues from the ICO and Manchester Metropolitan University in January 2021. The participants of these workshops came from both the private and public sectors, and we are extremely grateful to them for their energy, enthusiasm, and tremendous insight. This workbook would simply not exist without the commitment and keenness of all our collaborators and workshop participants.
