A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

by Ana Lucic, et al.

Given the variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to recognize that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
