Uncovering Latent Biases in Text: Method and Application to Peer Review

10/29/2020
by Emaad Manzoor, et al.

Quantifying systematic disparities in numerical quantities such as employment rates and wages between population subgroups provides compelling evidence of societal biases. However, biases in text written about members of different subgroups (such as recommendation letters for male and non-male candidates), though widely reported anecdotally, remain challenging to quantify. In this work, we introduce a novel framework to quantify bias in text caused by the visibility of subgroup membership indicators. We develop a nonparametric estimation and inference procedure to estimate this bias. We then formalize an identification strategy that causally links the estimated bias to the visibility of subgroup membership indicators, given observations from time periods both before and after an identity-hiding policy change. We identify an application in which "ground truth" bias can be inferred to evaluate our framework, rather than relying on synthetic or secondary data. Specifically, we apply our framework to quantify biases in the text of peer reviews from a reputable machine learning conference before and after the conference adopted a double-blind reviewing policy. We show evidence of biases in the review ratings that serve as "ground truth", and show that our proposed framework accurately detects these biases from the review text without access to the review ratings.
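The identification strategy described above resembles a difference-in-differences comparison: the subgroup gap in some text-derived score is measured before and after the identity-hiding policy change, and the change in that gap is attributed to indicator visibility. The sketch below illustrates this idea only; the scoring function, subgroup labels, and data are hypothetical and are not the paper's actual estimator.

```python
# Illustrative difference-in-differences sketch (not the paper's method).
# Scores stand in for any text-derived rating; "famous" is a hypothetical
# subgroup whose identity is visible under single-blind review.

def subgroup_gap(scores, groups, target="famous"):
    """Mean score difference between the target subgroup and everyone else."""
    in_group = [s for s, g in zip(scores, groups) if g == target]
    out_group = [s for s, g in zip(scores, groups) if g != target]
    return sum(in_group) / len(in_group) - sum(out_group) / len(out_group)

def did_estimate(pre, post):
    """Change in the subgroup gap across the policy change (post minus pre)."""
    return subgroup_gap(*post) - subgroup_gap(*pre)

# Toy data: (scores, subgroup labels) before and after going double-blind.
pre = ([4.0, 4.2, 3.0, 3.1], ["famous", "famous", "other", "other"])
post = ([3.6, 3.7, 3.4, 3.5], ["famous", "famous", "other", "other"])
print(did_estimate(pre, post))  # negative value: the gap shrank post-policy
```

A real analysis would replace the toy scores with model predictions from review text and add an inference procedure (e.g., a bootstrap) around the point estimate.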


Related research

Does double-blind peer-review reduce bias? Evidence from a top computer science conference (01/07/2021)
Peer review is widely regarded as essential for advancing scientific res...

Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation (05/19/2022)
Women are often perceived as junior to their male counterparts, even wit...

Surrogate Scoring Rules and a Dominant Truth Serum for Information Elicitation (02/26/2018)
We study information elicitation without verification (IEWV) and ask the...

On Testing for Biases in Peer Review (12/31/2019)
We consider the issue of biases in scholarly research, specifically, in ...

WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models (06/26/2023)
We present WinoQueer: a benchmark specifically designed to measure wheth...

The Identity Fragmentation Bias (08/28/2020)
Consumers interact with firms across multiple devices, browsers, and mac...

Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment (08/05/2019)
With the industry trend of shifting from a traditional hierarchical appr...
