"I had a solid theory before but it's falling apart": Polarizing Effects of Algorithmic Transparency

11/06/2018 ∙ by Aaron Springer, et al.

The rise of machine learning has brought closer scrutiny to intelligent systems, leading to calls for greater transparency and explainable algorithms. We explore the effects of transparency on user perceptions of a working intelligent system for emotion detection. In exploratory Study 1, we observed paradoxical effects of transparency: it improved perceptions of system accuracy for some participants while reducing them for others. In Study 2, we test this observation using mixed methods, showing that the apparent transparency paradox can be explained by a mismatch between participant expectations and system predictions. We qualitatively examine this process, showing that transparency can undermine user confidence by causing users to fixate on flaws when they already hold a model of system operation; in contrast, transparency helps users who lack such a model. Finally, we revisit the notion of transparency and suggest design considerations for building safe and successful machine learning systems based on our insights.


