Addressing Missing Sources with Adversarial Support-Matching

03/24/2022
by Thomas Kehrenberg, et al.

When trained on diverse labeled data, machine learning models have proven to be a powerful tool in all facets of society. However, due to budget limitations, deliberate or inadvertent censorship, and other problems during data collection and curation, the labeled training set might exhibit a systematic shortage of data for certain groups. We investigate a scenario in which the absence of certain data is linked to the second level of a two-level hierarchy in the data. Inspired by the idea of protected groups from algorithmic fairness, we refer to the partitions carved by this second level as "subgroups"; we refer to combinations of subgroups and classes, or leaves of the hierarchy, as "sources". To characterize the problem, we introduce the concept of classes with incomplete subgroup support. The representational bias in the training set can give rise to spurious correlations between the classes and the subgroups, which prevent standard classification models from generalizing to unseen sources. To overcome this bias, we make use of an additional, diverse but unlabeled dataset, called the "deployment set", to learn a representation that is invariant to subgroup. This is done by adversarially matching the support of the training and deployment sets in representation space. In order to learn the desired invariance, it is paramount that the sets of samples observed by the discriminator be balanced by class; this is easily achieved for the training set, but requires semi-supervised clustering for the deployment set. We demonstrate the effectiveness of our method with experiments on several datasets and variants of the problem.
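
The adversarial mechanism described above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' implementation: an encoder maps inputs to a representation, a discriminator tries to tell training-set representations from deployment-set ones, and the encoder is updated to fool it, thereby matching the supports of the two sets. The names Encoder, Discriminator, and adversarial_support_matching_step are illustrative; both batches are assumed to have been balanced by class beforehand (exactly for the labeled training set, via semi-supervised clustering for the unlabeled deployment set), since the abstract notes this balancing is essential for learning the intended invariance.

# Illustrative sketch of adversarial support-matching (not the authors' code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps inputs to a representation intended to be invariant to subgroup."""
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Predicts whether a representation came from the training set or the deployment set."""
    def __init__(self, z_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)

def adversarial_support_matching_step(encoder, disc, enc_opt, disc_opt, x_train, x_deploy):
    """One adversarial step: the discriminator separates the two sets in representation
    space; the encoder is then updated to fool it, matching the supports of both sets.
    x_train and x_deploy are assumed to be (approximately) class-balanced batches."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: distinguish training from deployment representations.
    z_tr = encoder(x_train).detach()
    z_dep = encoder(x_deploy).detach()
    disc_loss = bce(disc(z_tr), torch.ones(len(z_tr), 1)) + \
                bce(disc(z_dep), torch.zeros(len(z_dep), 1))
    disc_opt.zero_grad()
    disc_loss.backward()
    disc_opt.step()

    # Encoder update: make the two sets indistinguishable in representation space.
    z_tr = encoder(x_train)
    z_dep = encoder(x_deploy)
    enc_loss = bce(disc(z_tr), torch.zeros(len(z_tr), 1)) + \
               bce(disc(z_dep), torch.ones(len(z_dep), 1))
    enc_opt.zero_grad()
    enc_loss.backward()
    enc_opt.step()
    return disc_loss.item(), enc_loss.item()

if __name__ == "__main__":
    enc = Encoder(in_dim=20, z_dim=8)
    disc = Discriminator(z_dim=8)
    enc_opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    disc_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    # Toy, class-balanced batches standing in for the training and deployment sets.
    x_train, x_deploy = torch.randn(32, 20), torch.randn(32, 20)
    print(adversarial_support_matching_step(enc, disc, enc_opt, disc_opt, x_train, x_deploy))

In a full pipeline this adversarial objective would be combined with the usual supervised classification loss on the labeled training set; the sketch isolates only the support-matching component.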


