A User-Driven Framework for Regulating and Auditing Social Media

04/20/2023
by Sarah H. Cen, et al.

People form judgments and make decisions based on the information that they observe. A growing portion of that information is not only provided but also carefully curated by social media platforms. Although lawmakers largely agree that platforms should not operate without any oversight, there is little consensus on how to regulate social media. There is consensus, however, that creating a strict, global standard of "acceptable" content is untenable (e.g., in the US, it is incompatible with Section 230 of the Communications Decency Act and the First Amendment). In this work, we propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline. We provide a concrete framework for regulating and auditing a social media platform according to such a baseline. In particular, we introduce the notion of a baseline feed: the content that a user would see without filtering (e.g., on Twitter, this could be the chronological timeline). We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds, and we design a principled way to measure similarity. This approach is motivated by related suggestions that regulations should increase user agency. We present an auditing procedure that checks whether a platform honors this requirement. Notably, the audit needs only black-box access to a platform's filtering algorithm, and it does not access or infer private user information. We provide theoretical guarantees on the strength of the audit. We further show that requiring closeness between filtered and baseline feeds does not impose a large performance cost, nor does it create echo chambers.
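To make the auditing idea concrete, below is a minimal sketch of how such a black-box audit could be structured. It assumes, purely for illustration, that each feed can be summarized as a distribution over topics and that "similar informational content" is approximated by total-variation distance under a tolerance epsilon; the names platform_filter, baseline_feed, and epsilon are hypothetical stand-ins, and the paper's actual similarity measure and statistical guarantees differ.

```python
# Minimal sketch of the black-box audit idea, NOT the authors' implementation.
# Assumptions (hypothetical, for illustration only): each feed is summarized
# as a topic distribution, and closeness between filtered and baseline feeds
# is approximated by total-variation distance with tolerance `epsilon`.
import numpy as np


def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """Total-variation distance between two topic distributions."""
    return 0.5 * float(np.abs(p - q).sum())


def audit(platform_filter, baseline_feed, users, epsilon: float = 0.2) -> bool:
    """Query the platform's filter as a black box for each sampled user and
    check that the filtered feed stays within `epsilon` of the baseline feed
    (e.g., the chronological timeline). No private user data is accessed."""
    for user in users:
        filtered = platform_filter(user)  # black-box query to the platform
        baseline = baseline_feed(user)    # unfiltered reference feed
        if total_variation(filtered, baseline) > epsilon:
            return False  # flag a violation of the closeness requirement
    return True


# Toy demonstration with random topic distributions standing in for feeds.
rng = np.random.default_rng(0)

def fake_filter(user):
    return rng.dirichlet(np.ones(5))

def fake_baseline(user):
    return rng.dirichlet(np.ones(5))

print(audit(fake_filter, fake_baseline, users=range(10)))
```

In this sketch, the tolerance epsilon plays the role of the regulator's closeness requirement; in the paper, the number of sampled users and queries would be chosen to achieve the stated theoretical guarantees on the strength of the audit.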

Related research

Capitol (Pat)riots: A comparative study of Twitter and Parler (01/18/2021)
On 6 January 2021, a mob of right-wing conservatives stormed the USA Cap...

Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent Analysis of Regulating Social Media Platforms (06/17/2021)
The spread of disinformation on social media platforms such as Facebook ...

Regulating algorithmic filtering on social media (06/17/2020)
Through the algorithmic filtering (AF) of content, social media platform...

Mathematical Framework for Online Social Media Regulation (09/12/2022)
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a m...

What is the Will of the People? Moderation Preferences for Misinformation (02/01/2022)
To reduce the spread of misinformation, social media platforms may take ...

Trust and Reliance in Consensus-Based Explanations from an Anti-Misinformation Agent (04/22/2023)
The illusion of consensus occurs when people believe there is consensus ...

When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction between Platforms and Creators (07/01/2021)
Ever since social activity on the Internet began migrating from the wild...
