Towards Learning an Unbiased Classifier from Biased Data via Conditional Adversarial Debiasing

03/10/2021
by Christian Reimers, et al.

Bias in classifiers is a severe issue for modern deep learning methods, especially in safety- and security-critical applications. Often, the bias of a classifier is a direct consequence of a bias in the training dataset, frequently caused by the co-occurrence of relevant features and irrelevant ones. To mitigate this issue, we require learning algorithms that prevent the propagation of bias from the dataset into the classifier. We present a novel adversarial debiasing method that addresses a feature which is spuriously connected to the labels of training images but statistically independent of the labels of test images; in this setting, the automatic identification of relevant features during training is perturbed by irrelevant ones. This situation arises in a wide range of bias-related problems across many computer vision tasks, such as automatic skin cancer detection or driver assistance. We support our claim with a mathematical proof that our approach is superior to existing techniques for this type of bias. Our experiments show that our approach outperforms state-of-the-art techniques on a well-known benchmark dataset with real-world images of cats and dogs.
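To make the general idea concrete, the following is a minimal sketch of conditional adversarial debiasing in PyTorch. It is not the architecture or loss from the paper: the network sizes, the gradient-reversal formulation, the assumed 64x64 RGB inputs, and the way the class label is fed to the adversary are illustrative assumptions. The key ingredient shown is that the adversary tries to recover the bias variable from the learned features while being conditioned on the class label, and gradient reversal pushes the encoder to discard information about the bias.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Gradient reversal: identity in the forward pass, negated (scaled) gradient in the backward pass.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class ConditionalAdversarialDebiasing(nn.Module):
    """Illustrative model: an encoder, a class head, and an adversary that predicts
    the bias variable from the features *and* the one-hot class label (the conditioning)."""

    def __init__(self, feat_dim=128, n_classes=2, n_bias=2, in_dim=3 * 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.adversary = nn.Sequential(
            nn.Linear(feat_dim + n_classes, 64), nn.ReLU(), nn.Linear(64, n_bias)
        )

    def forward(self, x, y_onehot, lambd=1.0):
        z = self.encoder(x)
        logits = self.classifier(z)
        # Gradients from the adversary are reversed before reaching the encoder.
        z_rev = grad_reverse(z, lambd)
        bias_logits = self.adversary(torch.cat([z_rev, y_onehot], dim=1))
        return logits, bias_logits

def train_step(model, optimizer, x, y, b, n_classes=2):
    """One joint update: minimize classification loss, maximize adversary loss via gradient reversal."""
    y_onehot = F.one_hot(y, n_classes).float()
    logits, bias_logits = model(x, y_onehot)
    loss = F.cross_entropy(logits, y) + F.cross_entropy(bias_logits, b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Conditioning the adversary on the label is what distinguishes this sketch from plain adversarial debiasing: because the bias feature co-occurs with the label on the training data, removing it unconditionally would also remove label information, whereas the conditional setup only penalizes bias information beyond what the label already explains.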
