AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts

10/04/2021
by Tongshuang Wu, et al.

Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by "unit-testing" sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.
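
The chaining idea described above (each step's output becomes the next step's input, with intermediate results exposed for inspection and editing) can be illustrated with a short sketch. This is a minimal illustration only, not the AI Chains system or its primitive operations: `call_llm`, `make_step`, and `run_chain` are hypothetical helpers, and the three sub-tasks in the example are invented for the sketch.

```python
from typing import Callable, List

# A Chain step maps the previous step's text output to a new text output.
LLMStep = Callable[[str], str]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your own model or API client."""
    raise NotImplementedError("Plug in a completion API here.")


def make_step(instruction: str) -> LLMStep:
    """Wrap one natural-language instruction as a single step of a Chain."""
    def step(previous_output: str) -> str:
        prompt = f"{instruction}\n\nInput:\n{previous_output}"
        return call_llm(prompt)
    return step


def run_chain(steps: List[LLMStep], initial_input: str) -> List[str]:
    """Feed each step the previous step's output, keeping every intermediate
    result so sub-components can be inspected or edited individually."""
    outputs: List[str] = []
    current = initial_input
    for step in steps:
        current = step(current)
        outputs.append(current)
    return outputs


# Illustrative three-step chain (sub-tasks invented for this sketch):
chain = [
    make_step("List the distinct problems raised in this feedback, one per line."),
    make_step("For each listed problem, propose one concrete improvement."),
    make_step("Rewrite the improvements as a single friendly paragraph."),
]
# intermediate_results = run_chain(chain, "The talk ran long and the slides were dense...")
```

Keeping the list of intermediate outputs, rather than only the final result, is what enables the kind of per-step inspection and "unit-testing" of sub-components that the study participants used to debug unexpected model outputs.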


Related research

PromptChainer: Chaining Large Language Model Prompts through Visual Programming (03/13/2022)
While LLMs can effectively help prototype single ML functionalities, man...

Experiential AI (08/06/2019)
Experiential AI is proposed as a new research agenda in which artists an...

What Are You Hiding? Algorithmic Transparency and User Perceptions (12/07/2018)
Extensive recent media focus has been directed towards the dark side of ...

LayerZero: Trustless Omnichain Interoperability Protocol (10/26/2021)
The proliferation of blockchains has given developers a variety of platf...

Salience-Aware Event Chain Modeling for Narrative Understanding (09/22/2021)
Storytelling, whether via fables, news reports, documentaries, or memoir...

IGA: An Intent-Guided Authoring Assistant (04/14/2021)
While large-scale pretrained language models have significantly improved...

RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization (02/08/2021)
With the widespread use of toxic language online, platforms are increasi...