Progressive Disclosure: Designing for Effective Transparency

11/06/2018 · by Aaron Springer, et al.

As we increasingly delegate important decisions to intelligent systems, it is essential that users understand how algorithmic decisions are made. Prior work has often taken a technocentric approach to transparency. In contrast, we explore empirical user-centric methods to better understand user reactions to transparent systems. We assess user reactions to global and incremental feedback in two studies. In Study 1, users anticipated that the more transparent incremental system would perform better, but retracted this evaluation after experience with the system. Qualitative data suggest this may arise because incremental feedback is distracting and undermines simple heuristics users form about system operation. Study 2 explored these effects in depth, suggesting that users may benefit from initially simplified feedback that hides potential system errors and assists users in building working heuristics about system operation. We use these findings to motivate new progressive disclosure principles for transparency in intelligent systems.





