
On the Global Optima of Kernelized Adversarial Representation Learning

by Bashir Sadeghi et al.

Adversarial representation learning is a promising paradigm for obtaining data representations that are invariant to certain sensitive attributes while retaining the information necessary for predicting target attributes. Existing approaches solve this problem through iterative adversarial minimax optimization and lack theoretical guarantees. In this paper, we first study the "linear" form of this problem, i.e., the setting where all the players are linear functions. We show that the resulting optimization problem is both non-convex and non-differentiable. We obtain an exact closed-form expression for its global optima through spectral learning and provide performance guarantees in terms of analytical bounds on the achievable utility and invariance. We then extend this solution and analysis to non-linear functions through kernel representation. Numerical experiments on the UCI, Extended Yale B, and CIFAR-100 datasets indicate that (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation, and (b) empirically, the trade-off between utility and invariance provided by our solution is comparable to iterative minimax optimization of existing deep neural network based approaches. Code is available at
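To make the spectral idea concrete, here is a minimal, hypothetical sketch (not the paper's exact formulation or its closed-form solution): in the linear setting, one can look for an encoder direction whose projection aligns with the target attribute's covariance while penalizing alignment with the sensitive attribute's covariance, which reduces to an eigenvalue problem. The trade-off weight `lam` and the rank-one construction below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of spectral invariant representation learning
# (hypothetical, not the paper's exact solution): find a linear encoder
# direction w that trades off correlation with the target attribute y
# against correlation with the sensitive attribute s.

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)  # target attribute
s = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)  # sensitive attribute

Xc = X - X.mean(axis=0)
cy = Xc.T @ (y - y.mean()) / n   # cross-covariance of X with target
cs = Xc.T @ (s - s.mean()) / n   # cross-covariance of X with sensitive attr

# Maximize (w @ cy)^2 - lam * (w @ cs)^2 over unit vectors w; the maximizer
# is the top eigenvector of the symmetric matrix below.
lam = 1.0
A = np.outer(cy, cy) - lam * np.outer(cs, cs)
eigvals, eigvecs = np.linalg.eigh(A)
w = eigvecs[:, -1]               # top eigenvector = encoder direction

z = Xc @ w                       # one-dimensional "invariant" embedding
corr_y = abs(np.corrcoef(z, y)[0, 1])
corr_s = abs(np.corrcoef(z, s)[0, 1])
```

Increasing `lam` pushes the embedding toward invariance to `s` at the cost of utility for `y`; the kernelized version in the paper plays the same game after mapping `X` into a reproducing kernel Hilbert space.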



Adversarial Representation Learning With Closed-Form Solvers

Adversarial representation learning aims to learn data representations f...

Evading the Adversary in Invariant Representation

Representations of data that are invariant to changes in specified nuisa...

Non-linear ICA based on Cramer-Wold metric

Non-linear source separation is a challenging open problem with many app...

Unitary-Group Invariant Kernels and Features from Transformed Unlabeled Data

The study of representations invariant to common transformations of the ...

Resource-Efficient Invariant Networks: Exponential Gains by Unrolled Optimization

Achieving invariance to nuisance transformations is a fundamental challe...

Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space

Invariance (defined in a general sense) has been one of the most effecti...

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

This work attempts to provide a plausible theoretical framework that aim...