Causal effect of racial bias in data and machine learning algorithms on user persuasiveness and discriminatory decision making: An Empirical Study

01/22/2022
by   Kinshuk Sengupta, et al.

Language data and models exhibit various types of bias: ethnic, religious, gender, or socioeconomic. When trained on racially biased datasets, AI/NLP models produce poorly explainable outcomes, shape user experience during decision making, and thereby further magnify societal biases, raising profound ethical implications for society. The motivation of the study is to investigate how AI systems absorb bias from data, produce unexplainable discriminatory outcomes, and influence an individual's interpretation of system outcomes when racial bias features are present in datasets. The experiment is designed to study the counterfactual impact of racial bias features present in language datasets and their effect on model outcomes. A mixed research methodology, using controlled lab experimentation, is adopted to investigate how biased model outcomes affect user experience and decision making. The findings provide foundational support for linking the bias carried over into an artificial intelligence model solving an NLP task to biased concepts present in the dataset. Further, the research outcomes demonstrate a negative influence on users' persuasiveness that alters an individual's decision making when relying on the model outcome to act. The paper bridges the gap between inequitable system design and the resulting erosion of customer trust, and provides strong support for researchers, policymakers, and data scientists building responsible AI frameworks within organizations.
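The counterfactual design the abstract describes can be illustrated with a simple probe: hold an input fixed, swap only the racial identity term, and measure how much the model's score moves. The sketch below is a hypothetical illustration, not the paper's actual code; the identity-term list and the `toy_score` classifier are stand-ins for a real sensitive-attribute lexicon and a trained NLP model.

```python
# Hypothetical counterfactual bias probe (illustrative only): swap racial
# identity terms in otherwise identical inputs and compare model scores.

IDENTITY_PAIRS = [("white", "black"), ("european", "african")]  # stand-in lexicon

def counterfactual_pairs(text):
    """Yield (original, counterfactual) pairs by swapping identity terms."""
    lowered = text.lower()
    for a, b in IDENTITY_PAIRS:
        if a in lowered:
            yield text, lowered.replace(a, b)
        elif b in lowered:
            yield text, lowered.replace(b, a)

def counterfactual_gap(score_fn, texts):
    """Mean absolute score difference across counterfactual pairs.

    A gap near zero suggests the model ignores the swapped attribute;
    a large gap signals attribute-sensitive (potentially biased) predictions.
    """
    gaps = [abs(score_fn(orig) - score_fn(cf))
            for t in texts for orig, cf in counterfactual_pairs(t)]
    return sum(gaps) / len(gaps) if gaps else 0.0

# Toy scorer standing in for a trained model, intentionally biased
# so the probe has something to detect.
def toy_score(text):
    return 0.9 if "black" in text.lower() else 0.5

texts = ["The white applicant was punctual.",
         "The black applicant was punctual."]
print(counterfactual_gap(toy_score, texts))  # prints 0.4
```

In a real study, `score_fn` would wrap the trained classifier under audit, and the identity lexicon would cover the full set of sensitive features present in the dataset.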

