Personalized Detection of Cognitive Biases in Actions of Users from Their Logs: Anchoring and Recency Biases

06/30/2022
by   Atanu R Sinha, et al.

Cognitive biases are mental shortcuts humans use in dealing with information and the environment, and they result in biased actions and behaviors, often without the person's awareness. Biases take many forms, with cognitive biases occupying a central role that bears on fairness, accountability, transparency, ethics, law, medicine, and discrimination. Detecting biases is considered a necessary step toward mitigating them. Here, we focus on two cognitive biases: anchoring and recency. In computer science, cognitive bias has been studied largely in the domain of information retrieval, where bias is identified at an aggregate level with the help of annotated data. Proposing a different direction for bias detection, we offer a principled, machine learning-based approach to detect these two cognitive biases from Web logs of users' actions. Detection at the individual user level makes the approach truly personalized and removes the reliance on annotated data. Instead, we start from two basic principles established in cognitive psychology, apply a modified training of an attention network, and interpret the attention weights in a novel way according to those principles, to infer and distinguish between the two biases. The personalized approach allows detection for specific users who are susceptible to these biases when performing their tasks, and can help build awareness among them so that they can undertake bias mitigation.
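To make the idea of interpreting attention weights concrete, here is a minimal illustrative sketch, not the paper's actual method: given a normalized attention vector over a user's time-ordered actions, attention mass concentrated on the earliest action would be consistent with anchoring, while mass concentrated on the most recent actions would be consistent with recency. The function name, the window size `k`, and the threshold are hypothetical choices for illustration.

```python
# Illustrative sketch (not the paper's algorithm): classify an attention
# vector over a user's ordered actions by where its mass concentrates.
import numpy as np

def infer_bias(attn: np.ndarray, k: int = 2, threshold: float = 0.5) -> str:
    """Heuristically label a normalized attention vector.

    attn[0] corresponds to the earliest action (the candidate anchor);
    attn[-1] to the most recent. k and threshold are assumptions.
    """
    attn = attn / attn.sum()          # normalize so weights sum to 1
    anchor_mass = attn[0]             # weight on the first (anchor) action
    recency_mass = attn[-k:].sum()    # weight on the k most recent actions
    if anchor_mass >= threshold:
        return "anchoring"
    if recency_mass >= threshold:
        return "recency"
    return "none"

# Mass concentrated on the earliest action -> anchoring.
print(infer_bias(np.array([0.7, 0.1, 0.1, 0.05, 0.05])))  # anchoring
# Mass concentrated on the latest actions -> recency.
print(infer_bias(np.array([0.05, 0.05, 0.1, 0.3, 0.5])))  # recency
```

In the paper's framing, such per-user signals are derived from attention weights learned over each user's own Web log, which is what makes the detection personalized rather than aggregate.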

