A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

by Ana Lucic et al.

Given the variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to recognize that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
