Scalable Infomin Learning

02/21/2023
by   Yanzhi Chen, et al.

Infomin learning aims to learn a representation with high utility while remaining uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information (or a proxy for it) and is therefore slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach based on a novel proxy metric for mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need to construct neural network-based mutual information estimators. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can effectively remove unwanted information within a limited time budget.
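To make the core idea concrete, the following is a minimal sketch of a slicing-based dependence proxy in the spirit described above. It is an illustration under assumed details, not the paper's exact metric: it estimates dependence between a representation `z` and a target `t` as the largest absolute Pearson correlation over random one-dimensional projections ("slices") of each, which is analytically computable from the data with no neural estimator. The function name and all parameters are hypothetical.

```python
import numpy as np

def sliced_dependence(z, t, n_slices=200, seed=0):
    """Illustrative slicing-based proxy for mutual information.

    Projects z (n, d) and t (n, k) onto random unit directions and
    returns the maximum absolute Pearson correlation across slices.
    Near 0 suggests (linear) independence; near 1 suggests strong
    dependence along some direction. Hypothetical, for illustration.
    """
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_slices):
        # Draw one random unit-norm slicing direction per variable.
        u = rng.normal(size=z.shape[1])
        u /= np.linalg.norm(u)
        v = rng.normal(size=t.shape[1])
        v /= np.linalg.norm(v)
        # Correlation between the two 1-D projections.
        c = abs(np.corrcoef(z @ u, t @ v)[0, 1])
        best = max(best, c)
    return best
```

In an infomin setting one would add a term like this proxy (made differentiable, e.g. via the top slice) to the training loss of the encoder, penalising any slice along which the representation still predicts the target.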

research
08/06/2022

HSIC-InfoGAN: Learning Unsupervised Disentangled Representations by Maximising Approximated Mutual Information

Learning disentangled representations requires either supervision or the...
research
04/18/2019

Disentangled Representation Learning with Information Maximizing Autoencoder

Learning disentangled representation from any unlabelled data is a non-t...
research
02/14/2022

KNIFE: Kernelized-Neural Differential Entropy Estimation

Mutual Information (MI) has been widely used as a loss regularizer for t...
research
08/11/2021

Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization

Deep learning algorithms mine knowledge from the training data and thus ...
research
05/07/2022

Learning Disentangled Textual Representations via Statistical Measures of Similarity

When working with textual data, a natural application of disentangled re...
research
08/04/2022

Invariant Representations with Stochastically Quantized Neural Networks

Representation learning algorithms offer the opportunity to learn invari...
research
09/26/2022

Deep Fair Clustering via Maximizing and Minimizing Mutual Information

Fair clustering aims to divide data into distinct clusters, while preven...
