
Dice in the Black Box: User Experiences with an Inscrutable Algorithm

12/07/2018
by   Aaron Springer, et al.

We demonstrate that users may be prone to place an inordinate amount of trust in black box algorithms that are framed as intelligent. We deploy an algorithm that purportedly assesses the positivity and negativity of a user's emotional writing. In actuality, the algorithm responds in a random fashion. We qualitatively examine the paths to trust that users followed while testing the system. In light of the ease with which users may trust systems exhibiting "intelligent behavior," we recommend corrective approaches.
