Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles

08/08/2023
by Pranav Narayanan Venkit, et al.

We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods. Biased NLP models can perpetuate stereotypes and lead to algorithmic discrimination, posing a significant challenge to the fairness and justice of AI systems. Our study employs a two-step mixed-methods approach that includes both quantitative and qualitative analysis to identify and understand the impact of nationality bias in a text generation model. Through our human-centered quantitative analysis, we measure the extent of nationality bias in articles generated by AI sources. We then conduct open-ended interviews with participants, performing qualitative coding and thematic analysis to understand the implications of these biases on human readers. Our findings reveal that biased NLP models tend to replicate and amplify existing societal biases, which can translate to harm if used in a sociotechnical setting. The qualitative analysis from our interviews offers insights into the experience readers have when encountering such articles, highlighting the potential to shift a reader's perception of a country. These findings emphasize the critical role of public perception in shaping AI's impact on society and the need to correct biases in AI systems.
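The quantitative step described above measures bias by aggregating human ratings of AI-generated articles across nationalities. A minimal sketch of that kind of aggregation is below; the data, rating scale, country labels, and function names are entirely illustrative and are not taken from the study:

```python
# Hypothetical sketch: aggregating human sentiment ratings of
# AI-generated articles by nationality to surface disparities.
# All data and names are illustrative, not from the study.
from statistics import mean

# Each rating is a human-assigned sentiment score for one generated
# article, on an assumed scale from -2 (very negative) to +2 (very positive).
ratings = {
    "Country A": [1, 2, 1, 0, 2],
    "Country B": [-1, -2, 0, -1, -1],
}

def mean_ratings(ratings):
    """Mean perceived sentiment per nationality."""
    return {country: mean(scores) for country, scores in ratings.items()}

def max_disparity(ratings):
    """Gap between the best- and worst-rated nationality:
    one simple indicator that generated text treats some
    nationalities more negatively than others."""
    means = mean_ratings(ratings).values()
    return max(means) - min(means)
```

A large `max_disparity` value would flag the kind of uneven treatment the study's human evaluation is designed to detect, though the actual analysis also relies on qualitative coding of interviews rather than a single summary number.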

Related research

Towards a Holistic Approach: Understanding Sociodemographic Biases in NLP Models using an Interdisciplinary Lens (08/24/2023)
The rapid growth in the usage and applications of Natural Language Proce...

Power of Explanations: Towards automatic debiasing in hate speech detection (09/07/2022)
Hate speech detection is a common downstream application of natural lang...

Bias of AI-Generated Content: An Examination of News Produced by Large Language Models (09/18/2023)
Large language models (LLMs) have the potential to transform our lives a...

Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI (05/17/2023)
Generative AI models like DALL-E 2 can interpret textual prompts and gen...

Adding guardrails to advanced chatbots (06/13/2023)
Generative AI models continue to become more powerful. The launch of Cha...

Socio-economic landscape of digital transformation public NLP systems: A critical review (04/04/2023)
The current wave of digital transformation has spurred digitisation refo...

On Hate Scaling Laws For Data-Swamps (06/22/2023)
'Scale the model, scale the data, scale the GPU-farms' is the reigning s...
