Learning Disentangled Textual Representations via Statistical Measures of Similarity

05/07/2022
by   Pierre Colombo, et al.

When working with textual data, a natural application of disentangled representations is fair classification, where the goal is to make predictions that are not biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender, or race). Dominant approaches to disentangling a sensitive attribute from textual representations rely on simultaneously learning a penalization term that involves either an adversarial loss (e.g., a discriminator) or an information measure (e.g., mutual information). However, these methods require training a deep neural network, with several parameter updates for each update of the representation model. The resulting nested optimization loop is time consuming, adds complexity to the optimization dynamics, and requires careful hyperparameter selection (e.g., learning rates, architecture). In this work, we introduce a family of regularizers for learning disentangled representations that do not require training. These regularizers are based on statistical measures of similarity between the conditional probability distributions of the representations with respect to the sensitive attributes. Because they require no additional training, our regularizers are faster and involve no extra tuning, while achieving better results when combined with both pretrained and randomly initialized text encoders.
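To make the idea concrete: a regularizer of this family penalizes the encoder whenever the distribution of representations differs across sensitive-attribute values, with no auxiliary network to train. The sketch below uses the kernel Maximum Mean Discrepancy (MMD) between the two groups' representations as one illustrative statistical measure of similarity; it is an assumption for illustration, not the paper's exact estimator, and the names `mmd_regularizer`, `rbf_kernel`, and `sigma` are our own.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd_regularizer(z, s, sigma=1.0):
    """Biased MMD^2 estimate between the representations of the two
    sensitive-attribute groups.

    z : (n, d) array of text representations
    s : (n,) binary array of sensitive-attribute values
    Returns 0 when the two conditional distributions match, and grows
    as they diverge -- no discriminator training required.
    """
    z0, z1 = z[s == 0], z[s == 1]
    k00 = rbf_kernel(z0, z0, sigma).mean()
    k11 = rbf_kernel(z1, z1, sigma).mean()
    k01 = rbf_kernel(z0, z1, sigma).mean()
    return k00 + k11 - 2 * k01

# Toy check: identical vs. shifted conditional distributions.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))
s = np.array([0] * 100 + [1] * 100)
mmd_same = mmd_regularizer(z, s)      # both groups from the same law
z_shift = z.copy()
z_shift[s == 1] += 3.0                # group 1 shifted away
mmd_diff = mmd_regularizer(z_shift, s)
```

In a training loop, such a term would simply be added to the task loss (e.g., `loss = task_loss + lam * mmd_regularizer(z, s)`), avoiding the nested optimization of adversarial approaches; the weight `lam` is a hypothetical hyperparameter.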


Related research

05/06/2021 · A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations
Learning disentangled representations of textual data is essential for m...

05/05/2022 · On Disentangled and Locally Fair Representations
We study the problem of performing classification in a manner that is fa...

08/06/2022 · HSIC-InfoGAN: Learning Unsupervised Disentangled Representations by Maximising Approximated Mutual Information
Learning disentangled representations requires either supervision or the...

02/21/2023 · Scalable Infomin Learning
The task of infomin learning aims to learn a representation with high ut...

05/18/2021 · rx-anon – A Novel Approach on the De-Identification of Heterogeneous Data based on a Modified Mondrian Algorithm
Traditional approaches for data anonymization consider relational data a...

08/28/2022 · Towards Disentangled Speech Representations
The careful construction of audio representations has become a dominant ...
