
Diverse Adversaries for Mitigating Bias in Training

01/25/2021
by Xudong Han, et al.

Adversarial learning can learn fairer and less biased models of language than standard methods. However, current adversarial techniques only partially mitigate model bias, added to which their training procedures are often unstable. In this paper, we propose a novel approach to adversarial learning based on the use of multiple diverse discriminators, whereby discriminators are encouraged to learn orthogonal hidden representations from one another. Experimental results show that our method substantially improves over standard adversarial removal methods, in terms of reducing bias and the stability of training.
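The abstract only describes the approach at a high level. The following is a minimal PyTorch-style sketch of what adversarial bias removal with multiple, diversity-regularized discriminators can look like: a gradient-reversal layer trains the encoder to strip protected-attribute information, while a pairwise orthogonality penalty over the discriminators' hidden representations discourages the sub-discriminators from collapsing onto the same solution. All names, shapes, and hyper-parameters (encoder, classifier, adv_lambda, diff_weight, the exact form of the penalty) are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch only: module names and hyper-parameters are assumptions,
# not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward,
    so the encoder is pushed to *remove* protected-attribute information."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class Discriminator(nn.Module):
    """Predicts the protected attribute from the encoder's hidden state."""

    def __init__(self, hidden_dim, n_protected):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
        self.head = nn.Linear(hidden_dim, n_protected)

    def forward(self, h):
        z = self.body(h)  # hidden representation used by the diversity term
        return self.head(z), z


def orthogonality_loss(hidden_reps):
    """Squared Frobenius norm of Z_i^T Z_j for every discriminator pair i < j,
    encouraging the sub-discriminators to learn orthogonal representations."""
    loss = 0.0
    for i in range(len(hidden_reps)):
        for j in range(i + 1, len(hidden_reps)):
            loss = loss + (hidden_reps[i].T @ hidden_reps[j]).pow(2).sum()
    return loss


def training_loss(encoder, classifier, discriminators, batch,
                  adv_lambda=1.0, diff_weight=0.1):
    x, y, protected = batch
    h = encoder(x)

    # Main-task objective.
    task_loss = F.cross_entropy(classifier(h), y)

    # Adversarial branch: each discriminator tries to recover the protected
    # attribute; gradient reversal flips the sign of the gradient that
    # reaches the encoder, while the discriminators themselves train normally.
    h_rev = GradientReversal.apply(h, adv_lambda)
    adv_loss, hidden_reps = 0.0, []
    for disc in discriminators:
        logits, z = disc(h_rev)
        adv_loss = adv_loss + F.cross_entropy(logits, protected)
        hidden_reps.append(z)

    # Diversity term keeps the discriminator ensemble from becoming redundant.
    return task_loss + adv_loss + diff_weight * orthogonality_loss(hidden_reps)
```

How the diversity penalty is attached (hidden layers vs. outputs) and how the three loss terms are weighted are design choices; the configuration actually used in the paper should be taken from the full text.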

