Monotonicity Regularization: Improved Penalties and Novel Applications to Disentangled Representation Learning and Robust Classification

05/17/2022
by   João Monteiro, et al.

We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors that satisfy different notions of monotonicity. Specifically, we present two sets of contributions. In the first part of the paper, we show that the choice of penalty determines the regions of the input space where the property is observed; as a consequence, previous methods yield models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty over a much larger region. As a second set of contributions, we introduce regularization strategies that enforce other notions of monotonicity in different settings. Here we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve certain aspects of the model. In particular, we show that inducing monotonicity can be beneficial for: (1) controllable data generation, (2) detecting anomalous data, and (3) generating explanations for predictions. Our proposed approaches introduce negligible computational overhead and yield efficient procedures that provide extra benefits over baseline models.
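The first contribution above can be sketched concretely. The snippet below is an illustrative NumPy toy (all names are our own, not from the paper): it estimates slopes by finite differences at convex mixtures of training instances and uniform random points, and applies a hinge penalty to negative slopes, so that monotonicity is encouraged over a larger region than the training data alone would cover.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotonicity_penalty(f, x_train, eps=1e-3):
    """Hinge penalty on negative finite-difference slopes of f.

    Evaluation points are convex mixtures of training instances and
    uniform random points, so the penalty is enforced in a much larger
    region of the input space than the training set alone.
    """
    x_rand = rng.uniform(x_train.min(), x_train.max(), size=x_train.shape)
    lam = rng.uniform(0.0, 1.0, size=(x_train.shape[0], 1))
    x_mix = lam * x_train + (1.0 - lam) * x_rand
    # Finite-difference slope estimate along the all-ones direction.
    grads = (f(x_mix + eps) - f(x_mix)) / eps
    # Penalize only violations of non-decreasing monotonicity.
    return float(np.mean(np.maximum(0.0, -grads)))

x = rng.normal(size=(128, 1))
mono = lambda z: z.sum(axis=1) ** 3              # monotone non-decreasing
nonmono = lambda z: np.sin(4.0 * z).sum(axis=1)  # has negative slopes

print(monotonicity_penalty(mono, x))     # ~0: no violations
print(monotonicity_penalty(nonmono, x))  # >0: violations detected
```

In a real training loop this scalar would be added to the task loss with a weighting coefficient; with a differentiable model one would use autograd gradients rather than finite differences.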


Related research

05/20/2022 · Leveraging Relational Information for Learning Weakly Disentangled Representations
"Disentanglement is a difficult property to enforce in neural representat..."

02/20/2020 · MaxUp: A Simple Way to Improve Generalization of Neural Network Training
"We propose MaxUp, an embarrassingly simple, highly effective technique f..."

12/06/2021 · Encouraging Disentangled and Convex Representation with Controllable Interpolation Regularization
"We focus on controllable disentangled representation learning (C-Dis-RL)..."

07/20/2018 · Explaining Image Classifiers by Adaptive Dropout and Generative In-filling
"Explanations of black-box classifiers often rely on saliency maps, which..."

03/11/2023 · Robust Learning from Explanations
"Machine learning from explanations (MLX) is an approach to learning that..."

12/17/2021 · AutoTransfer: Subject Transfer Learning with Censored Representations on Biosignals Data
"We provide a regularization framework for subject transfer learning in w..."

06/16/2023 · Vacant Holes for Unsupervised Detection of the Outliers in Compact Latent Representation
"Detection of the outliers is pivotal for any machine learning model depl..."
