Dice in the Black Box: User Experiences with an Inscrutable Algorithm

12/07/2018
by Aaron Springer, et al.

We demonstrate that users may be prone to place an inordinate amount of trust in black box algorithms that are framed as intelligent. We deploy an algorithm that purportedly assesses the positivity and negativity of a user's emotional writing. In actuality, the algorithm responds in a random fashion. We qualitatively examine the paths to trust that users followed while testing the system. In light of the ease with which users may trust systems exhibiting "intelligent behavior", we recommend corrective approaches.
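
For concreteness, here is a minimal sketch of the kind of sham classifier the abstract describes (an illustration only, not the authors' deployed system; the function name and output fields are hypothetical). It is framed as sentiment analysis but ignores the input and answers at random:

    import random

    def sham_sentiment_score(text: str) -> dict:
        """Purports to rate the emotional valence of `text`; in reality
        the scores are random and independent of the input."""
        # Flag a few random words as "evidence" to mimic the feedback an
        # intelligent system might give, without any actual analysis.
        words = text.split()
        flagged = random.sample(words, k=min(3, len(words))) if words else []
        return {
            "positivity": random.random(),  # uniform draw in [0, 1)
            "negativity": random.random(),  # unrelated to positivity or the text
            "highlighted_terms": flagged,
        }

    if __name__ == "__main__":
        print(sham_sentiment_score("I felt anxious before the exam but relieved afterward."))

Because each call draws fresh random scores, repeated submissions of the same text yield different "assessments", which is exactly the behavior the study relies on to probe how readily users trust an inscrutable system.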

Related research:

- QoS-based Trust Evaluation for Data Services as a Black Box (10/20/2021): This paper proposes a QoS-based trust evaluation model for black box dat...
- ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI (10/22/2021): Unexplainable black-box models create scenarios where anomalies cause de...
- Interpretable & Explorable Approximations of Black Box Models (07/04/2017): We propose Black Box Explanations through Transparent Approximations (BE...
- Bootstrap The Original Latent: Learning a Private Model from a Black-box Model (03/07/2023): In this paper, considering the balance of data/model privacy of model ow...
- "Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users (05/22/2018): Although interactive learning puts the user into the loop, the learner r...
- The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems (08/20/2020): Domain-specific intelligent systems are meant to help system users in th...
- A Conceptual Framework for Establishing Trust in Real World Intelligent Systems (04/12/2021): Intelligent information systems that contain emergent elements often enc...
