Progressive Disclosure: Designing for Effective Transparency

by Aaron Springer, et al.

As we increasingly delegate important decisions to intelligent systems, it is essential that users understand how algorithmic decisions are made. Prior work has often taken a technocentric approach to transparency. In contrast, we explore empirical, user-centric methods to better understand user reactions to transparent systems. We assess user reactions to global and incremental feedback in two studies. In Study 1, users anticipated that the more transparent incremental system would perform better, but retracted this evaluation after experience with the system. Qualitative data suggest this may arise because incremental feedback is distracting and undermines the simple heuristics users form about system operation. Study 2 explored these effects in depth, suggesting that users may benefit from initially simplified feedback that hides potential system errors and assists users in building working heuristics about system operation. We use these findings to motivate new progressive disclosure principles for transparency in intelligent systems.
