Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization

02/26/2020 · by Sicheng Zhu, et al.

Training machine learning models to be robust against adversarial inputs poses seemingly insurmountable challenges. To better understand model robustness, we consider the underlying problem of learning robust representations. We develop a general definition of representation vulnerability that captures the maximum change of mutual information between the input and output distributions, under the worst-case input distribution perturbation. We prove a theorem that establishes a lower bound on the minimum adversarial risk that can be achieved for any downstream classifier based on this definition. We then propose an unsupervised learning method for obtaining intrinsically robust representations by maximizing the worst-case mutual information between input and output distributions. Experiments on downstream classification tasks and analyses of saliency maps support the robustness of the representations found using unsupervised learning with our training principle.
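The central quantity here, worst-case mutual information, can be illustrated with a toy numerical sketch: a Donsker-Varadhan lower bound on mutual information between an input and its representation is evaluated under each perturbation in a small set of input distribution shifts, and the minimum over that set is taken. The fixed quadratic critic and the additive Gaussian noise threat model below are hypothetical simplifications for illustration, not the paper's actual estimator or perturbation set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def dv_mi_lower_bound(x, z, critic):
    """Donsker-Varadhan bound: I(X;Z) >= E_joint[T(x,z)] - log E_marg[e^T(x,z')]."""
    joint = critic(x, z)
    z_shuffled = z[rng.permutation(n)]  # break the pairing to sample the product of marginals
    marginal = critic(x, z_shuffled)
    return joint.mean() - np.log(np.exp(marginal).mean())

# Toy representation: z is a noisy copy of the input x.
x = rng.normal(size=n)
z = x + 0.1 * rng.normal(size=n)

# Hypothetical fixed critic (in practice a trained network plays this role).
critic = lambda a, b: 0.5 * a * b

# Hypothetical threat model: additive Gaussian input noise at a few scales.
clean_mi = dv_mi_lower_bound(x, z, critic)
worst_case_mi = min(
    dv_mi_lower_bound(x + s * rng.normal(size=n), z, critic)
    for s in (0.5, 1.0)
)
print(clean_mi, worst_case_mi)  # perturbing the input lowers the MI bound
```

In the paper's training principle, an encoder would then be trained to maximize this worst-case lower bound (with the critic learned jointly), rather than the clean mutual information alone.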


Related Research

- 01/21/2022: Robust Unsupervised Graph Representation Learning via Mutual Information Maximization. Recent studies have shown that GNNs are vulnerable to adversarial attack...
- 03/09/2020: Matching Text with Deep Mutual Information Estimation. Text matching is a core natural language processing research problem. Ho...
- 02/04/2020: Graph Representation Learning via Graphical Mutual Information Maximization. The richness in the content of various information networks such as soci...
- 06/04/2020: Info3D: Representation Learning on 3D Objects using Mutual Information Maximization and Contrastive Learning. A major endeavor of computer vision is to represent, understand and extr...
- 04/27/2023: The Mutual Information In The Vicinity of Capacity-Achieving Input Distributions. The mutual information is analyzed as a function of the input distributi...
- 07/22/2020: Robust Machine Learning via Privacy/Rate-Distortion Theory. Robust machine learning formulations have emerged to address the prevale...
- 04/26/2019: Towards a Non-Stochastic Information Theory. The δ-mutual information between uncertain variables is introduced as a ...
