Signed Input Regularization

11/16/2019
by Saeid Asgari Taghanaki, et al.

Over-parameterized deep models tend to overfit a given training distribution, which makes them sensitive to small perturbations and out-of-distribution samples at inference time, leading to poor generalization. To address this, model-based and randomized data-dependent regularization methods such as data augmentation are commonly applied to prevent a model from memorizing the training distribution. Instead of randomly transforming the input images, we propose SIGN, a new regularization method that modifies the input variables through a linear transformation, weighting each variable by its estimated contribution to the final prediction. The proposed technique maps the input data to a new manifold in which the less important variables are de-emphasized. To test the effectiveness of the proposed idea and compare it with competing methods, we design several test scenarios, including classification performance, uncertainty, out-of-distribution, and robustness analyses. We compare the methods on three datasets and four models. We find that SIGN encourages more compact class representations, which makes the model robust to random corruptions and out-of-distribution samples while simultaneously achieving superior performance on clean data compared to competing methods. Our experiments also demonstrate that SIGN samples transfer successfully from one model to another.
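The abstract describes SIGN only at a high level: estimate each input variable's contribution to the final prediction, then apply a linear transformation that de-emphasizes the low-contribution variables. The following PyTorch sketch is a rough illustration of that idea under stated assumptions, not the authors' implementation: it assumes input-gradient magnitude as the importance estimate, and the function name sign_transform and the floor coefficient alpha are hypothetical.

```python
import torch
import torch.nn.functional as F

def sign_transform(model, x, y, alpha=0.5):
    """Illustrative sketch of SIGN-style input regularization (assumption:
    gradient saliency as the per-variable importance estimate)."""
    # Estimate each input variable's contribution to the prediction
    # via the gradient of the loss with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    importance = grad.abs()

    # Normalize importance to [0, 1] per sample.
    flat = importance.flatten(start_dim=1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    weights = ((flat - lo) / (hi - lo + 1e-8)).view_as(x)

    # Linear transformation: high-importance variables stay near their
    # original scale; low-importance variables are shrunk toward alpha * x.
    return (x * (alpha + (1.0 - alpha) * weights)).detach()
```

In a training loop, each batch x would be replaced with sign_transform(model, x, y) before the usual forward and backward pass; the floor alpha keeps every variable partially present instead of zeroing it out entirely.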


Related research

11/20/2022 · Feature Weaken: Vicinal Data Augmentation for Classification
Deep learning usually relies on training large-scale data samples to ach...

09/15/2019 · Wasserstein Diffusion Tikhonov Regularization
We propose regularization strategies for learning discriminative models ...

10/10/2019 · First Order Ambisonics Domain Spatial Augmentation for DNN-based Direction of Arrival Estimation
In this paper, we propose a novel data augmentation method for training ...

04/01/2019 · Stratified Random Sampling for Dependent Inputs
A new approach of obtaining stratified random samples from statistically...

08/12/2023 · Semantic Equivariant Mixup
Mixup is a well-established data augmentation technique, which can exten...

03/23/2022 · Out of Distribution Detection, Generalization, and Robustness Triangle with Maximum Probability Theorem
Maximum Probability Framework, powered by Maximum Probability Theorem, i...

08/01/2022 · The Effect of Omitted Variables on the Sign of Regression Coefficients
Omitted variables are a common concern in empirical research. We show th...
