Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks

10/18/2022
by Juniper Lovato, et al.

Social media users are not equally susceptible to all misinformation. We define “diverse misinformation” as the complex relationship between human biases and the demographics represented in misinformation; its impact on our susceptibility to misinformation is currently unknown. To investigate how users' biases affect susceptibility, we explore computer-generated videos called deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1.) their classification as misinformation is more objective; 2.) we can control the demographics of the persona presented; and 3.) deepfakes are a real-world concern with associated harms that need to be better understood. Our paper presents a survey (N=2,000) in which U.S.-based participants are exposed to videos and asked questions about the videos' attributes, without knowing that some of the videos might be deepfakes. Our analysis investigates which users are duped and by which perceived demographics of deepfake personas. First, we find that users not explicitly looking for deepfakes are not particularly accurate classifiers. Importantly, accuracy varies significantly across demographics, and participants are generally better at classifying videos whose personas match their own demographics (especially male, white, and young participants). We extrapolate from these results to understand the population-level impacts of these biases using an idealized mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that a diverse set of contacts might provide “herd correction,” where friends can protect each other's blind spots. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
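The abstract describes the population-level model only qualitatively. As a rough illustration of the “herd correction” idea, the toy simulation below compares how often agents are duped when their contacts share their demographic group versus when contacts are drawn from the whole population. It is a minimal sketch, not the authors' model: the group labels, detection probabilities, network size, and number of contacts are all invented for illustration, loosely inspired by the finding that viewers classify matching personas more accurately.

```python
import random

# Toy parameters -- invented for illustration, not taken from the paper's survey.
GROUPS = ["A", "B", "C"]        # stand-ins for demographic groups
P_DETECT_MATCH = 0.60           # chance a viewer flags a deepfake whose persona matches them
P_DETECT_MISMATCH = 0.40        # chance when the persona does not match
N_AGENTS = 3000
N_CONTACTS = 4
N_TRIALS = 200


def build_network(diverse, rng):
    """Assign each agent a group and N_CONTACTS contacts.

    diverse=False draws contacts from the agent's own group (homophilous);
    diverse=True draws contacts uniformly from the whole population.
    """
    groups = [rng.choice(GROUPS) for _ in range(N_AGENTS)]
    by_group = {g: [i for i, gi in enumerate(groups) if gi == g] for g in GROUPS}
    everyone = list(range(N_AGENTS))
    contacts = []
    for i in range(N_AGENTS):
        pool = everyone if diverse else by_group[groups[i]]
        chosen = set()
        while len(chosen) < N_CONTACTS:
            j = rng.choice(pool)
            if j != i:
                chosen.add(j)
        contacts.append(chosen)
    return groups, contacts


def duped_fraction(groups, contacts, rng):
    """One deepfake with a random persona group reaches every agent.

    An agent stays duped only if neither it nor any of its contacts
    detects the deepfake (contacts correct each other's blind spots).
    """
    persona = rng.choice(GROUPS)
    detected = [
        rng.random() < (P_DETECT_MATCH if groups[i] == persona else P_DETECT_MISMATCH)
        for i in range(N_AGENTS)
    ]
    duped = sum(
        1
        for i in range(N_AGENTS)
        if not detected[i] and not any(detected[j] for j in contacts[i])
    )
    return duped / N_AGENTS


def average_duped(diverse, rng):
    groups, contacts = build_network(diverse, rng)
    return sum(duped_fraction(groups, contacts, rng) for _ in range(N_TRIALS)) / N_TRIALS


if __name__ == "__main__":
    rng = random.Random(42)
    print(f"homophilous contacts: {average_duped(False, rng):.3f} duped on average")
    print(f"diverse contacts:     {average_duped(True, rng):.3f} duped on average")
```

Under these assumed parameters, the network with diverse contacts ends up with a lower average duped fraction: when contacts span several groups, at least one of them is likely to match the deepfake's persona and flag it, which is the intuition behind the herd-correction result.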


