AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

06/02/2023
by Q. Vera Liao, et al.

The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large. We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly. However, a central pillar of responsible AI – transparency – is largely missing from the current discourse around LLMs. It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts. In this new era of LLMs, we must develop and design approaches to transparency by considering the needs of stakeholders in the emerging LLM ecosystem, the novel types of LLM-infused applications being built, and the new usage patterns and challenges around LLMs, all while building on lessons learned about how people process, interact with, and make use of information. We reflect on the unique challenges that arise in providing transparency for LLMs, along with lessons learned from HCI and responsible AI research that has taken a human-centered perspective on AI transparency. We then lay out four common approaches that the community has taken to achieve transparency – model reporting, publishing evaluation results, providing explanations, and communicating uncertainty – and call out open questions around how these approaches may or may not be applied to LLMs. We hope this provides a starting point for discussion and a useful roadmap for future research.


