
Diverse Adversaries for Mitigating Bias in Training

by Xudong Han et al.

Adversarial learning can produce fairer and less biased models of language than standard training methods. However, current adversarial techniques only partially mitigate model bias, and their training procedures are often unstable. In this paper, we propose a novel approach to adversarial learning based on multiple diverse discriminators, whereby each discriminator is encouraged to learn hidden representations orthogonal to those of the others. Experimental results show that our method substantially improves over standard adversarial removal methods, both in reducing bias and in the stability of training.
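The core idea in the abstract, encouraging multiple discriminators to learn mutually orthogonal hidden representations, can be sketched as a pairwise orthogonality penalty added to the adversarial objective. The following is a minimal illustrative sketch, not the paper's exact formulation: the function name, the squared-dot-product form of the penalty, and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def orthogonality_penalty(hidden_reps):
    """Penalise overlap between the hidden representations of different
    discriminators, pushing them toward orthogonality.

    hidden_reps: list of (batch_size, dim) arrays, one per discriminator,
                 holding each discriminator's hidden layer outputs for the
                 same batch of inputs.
    Returns a scalar penalty: the mean squared dot product, summed over
    all discriminator pairs (0.0 when all pairs are exactly orthogonal).
    """
    penalty = 0.0
    n = len(hidden_reps)
    for i in range(n):
        for j in range(i + 1, n):
            # Per-example dot product between discriminator i's and
            # discriminator j's hidden representations.
            dots = np.sum(hidden_reps[i] * hidden_reps[j], axis=1)
            penalty += float(np.mean(dots ** 2))
    return penalty

# Two discriminators with orthogonal hidden representations incur no
# penalty; identical representations are penalised.
h_orthogonal = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
h_identical = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])]
print(orthogonality_penalty(h_orthogonal))  # 0.0
print(orthogonality_penalty(h_identical))   # 1.0
```

In a full training setup this penalty would be combined with the task loss and the (gradient-reversed) discriminator losses, e.g. a weighted sum such as `task_loss - lambda_adv * disc_loss + lambda_diff * orthogonality_penalty(...)`, where the weights `lambda_adv` and `lambda_diff` are hypothetical names for illustration.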



