Human-AI Guidelines in Practice: Leaky Abstractions as an Enabler in Collaborative Software Teams

07/04/2022
by Hariharan Subramonyam, et al.

In conventional software development, user experience (UX) designers and engineers collaborate through separation of concerns (SoC): designers create human interface specifications, and engineers build to those specifications. However, we argue that Human-AI systems thwart SoC because human needs must shape the design of the AI interface, the underlying AI sub-components, and training data. How do designers and engineers currently collaborate on AI and UX design? To find out, we interviewed 21 industry professionals (UX researchers, AI engineers, data scientists, and managers) across 14 organizations about their collaborative work practices and associated challenges. We find that hidden information encapsulated by SoC challenges collaboration across design and engineering concerns. Practitioners describe inventing ad-hoc representations exposing low-level design and implementation details (which we characterize as leaky abstractions) to "puncture" SoC and share information across expertise boundaries. We identify how leaky abstractions are employed to collaborate at the AI-UX boundary and formalize a process of creating and using leaky abstractions.


