
A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

03/27/2021
by Ana Lucic, et al.

Given that a variety of stakeholders are involved in, and affected by, decisions from machine learning (ML) models, it is important to recognize that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
