Selection Collider Bias in Large Language Models
In this paper we motivate the causal mechanisms behind sample selection induced collider bias (selection collider bias) that can cause Large Language Models (LLMs) to learn unconditional dependence between entities that are unconditionally independent in the real world. We show that selection collider bias can be amplified in underspecified learning tasks, and that the magnitude of the resulting spurious correlations appears scale-agnostic. While selection collider bias can be difficult to overcome, we describe a method to exploit the resulting spurious correlations for determining when a model may be uncertain about its prediction, and demonstrate that it matches human uncertainty in tasks with gender pronoun underspecification on an extended version of the Winogender Schemas evaluation set.
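The core mechanism can be illustrated with a minimal simulation (not from the paper; variable names and thresholds are illustrative assumptions): two unconditionally independent variables become spuriously correlated once samples are selected on a common effect (a collider), mirroring how a training corpus selected on some shared outcome can induce dependence that does not exist in the real world.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two unconditionally independent variables (stand-ins for real-world entities).
x = rng.normal(size=n)
y = rng.normal(size=n)

# A collider: a variable causally influenced by both x and y.
c = x + y + rng.normal(scale=0.1, size=n)

# Sample selection conditioned on the collider, e.g. only text that
# made it into the training corpus (threshold chosen for illustration).
selected = c > 1.0

full_corr = np.corrcoef(x, y)[0, 1]
selected_corr = np.corrcoef(x[selected], y[selected])[0, 1]

print(f"correlation in full population:     {full_corr:+.3f}")
print(f"correlation after selecting on c:   {selected_corr:+.3f}")
```

In the full population the correlation is near zero, while in the selected subsample x and y are strongly (negatively) correlated, purely as an artifact of the selection step.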