Information Removal at the bottleneck in Deep Neural Networks

09/30/2022
by   Enzo Tartaglione, et al.

Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. Commonly, leveraging the availability of "big data", deep neural networks are trained as black boxes, minimizing an objective function at the output. This, however, gives no control over the propagation of specific features through the model, such as gender or race, when solving an uncorrelated task. This raises issues both of privacy (unwanted information propagating through the model) and of bias (these features being potentially exploited to solve the given task). In this work we propose IRENE, a method to achieve information removal at the bottleneck of deep neural networks, which explicitly minimizes the estimated mutual information between the features to be kept "private" and the target. Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach and open the road towards approaches guaranteeing information removal in deep neural networks.
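To make the core idea concrete: the regularizer penalizes the mutual information I(Z; S) between the bottleneck representation Z and the private attribute S. The sketch below is a minimal illustration, not the paper's method — IRENE relies on a learned (neural) estimate of mutual information, whereas here a simple histogram-based plug-in estimate is computed on discrete toy data, and all names (`mutual_information`, `z_leaky`, `z_clean`) are hypothetical:

```python
import numpy as np

def mutual_information(z_bins, s, eps=1e-12):
    """Plug-in MI estimate (in nats) between discretized bottleneck
    codes z_bins and a discrete private attribute s."""
    z_vals, z_idx = np.unique(z_bins, return_inverse=True)
    s_vals, s_idx = np.unique(s, return_inverse=True)
    # Empirical joint distribution p(z, s)
    joint = np.zeros((len(z_vals), len(s_vals)))
    for zi, si in zip(z_idx, s_idx):
        joint[zi, si] += 1
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)   # marginal p(z)
    ps = joint.sum(axis=0, keepdims=True)   # marginal p(s)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pz @ ps)[mask] + eps)).sum())

# Toy check: a bottleneck that copies the private attribute has high MI
# (~log 2 nats for a balanced binary attribute); an independent bottleneck
# has near-zero MI, which is the state the regularizer drives towards.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10_000)        # private attribute (e.g. a binary label)
z_leaky = s                                 # bottleneck fully leaks s
z_clean = rng.integers(0, 2, size=10_000)   # bottleneck independent of s
mi_leaky = mutual_information(z_leaky, s)
mi_clean = mutual_information(z_clean, s)
```

In a training loop this term would be added to the task loss, e.g. `loss = task_loss + lam * mi_estimate`, so that gradient descent trades task accuracy against information leakage at the bottleneck.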

