Robust Learning with Jacobian Regularization

08/07/2019
by   Judy Hoffman, et al.

Design of reliable systems must guarantee stability against input perturbations. In machine learning, such a guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. To maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks. The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization on clean data.
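The regularizer described above penalizes the Frobenius norm of the network's input-output Jacobian, which bounds how much small input perturbations can move the output. A minimal numpy sketch for a two-layer ReLU network, where the Jacobian is available in closed form (the function names, the toy architecture, and the weight `lam` are illustrative assumptions, not the paper's implementation, which uses random projections for efficiency):

```python
import numpy as np

def jacobian_frobenius_sq(W1, W2, x):
    """Squared Frobenius norm of the input-output Jacobian of the
    two-layer ReLU network f(x) = W2 @ relu(W1 @ x).

    For this architecture the Jacobian is exact:
        J = W2 @ diag(relu'(W1 @ x)) @ W1
    """
    pre = W1 @ x                       # pre-activations of the hidden layer
    mask = (pre > 0).astype(float)     # relu'(pre), elementwise
    J = W2 @ (mask[:, None] * W1)     # Jacobian of the output w.r.t. x
    return float(np.sum(J ** 2))

def regularized_loss(task_loss, W1, W2, x, lam=0.01):
    # Jacobian regularizer added to any task loss; lam is a hypothetical
    # trade-off weight. The paper approximates this term with random
    # projections instead of forming the full Jacobian.
    return task_loss + 0.5 * lam * jacobian_frobenius_sq(W1, W2, x)
```

For example, with `W1 = np.eye(2)`, `W2 = np.ones((1, 2))`, and `x = np.array([1.0, 1.0])`, the Jacobian is `[[1, 1]]` and the squared Frobenius norm is 2.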


research
04/21/2021

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

Universal Adversarial Perturbations (UAPs) are input perturbations that ...
research
03/09/2020

Manifold Regularization for Adversarial Robustness

Manifold regularization is a technique that penalizes the complexity of ...
research
05/01/2020

Evaluating Robustness to Input Perturbations for Neural Machine Translation

Neural Machine Translation (NMT) models are sensitive to small perturbat...
research
11/02/2022

Isometric Representations in Neural Networks Improve Robustness

Artificial and biological agents cannot learn given completely random an...
research
09/15/2019

Wasserstein Diffusion Tikhonov Regularization

We propose regularization strategies for learning discriminative models ...
research
01/30/2023

Bagging Provides Assumption-free Stability

Bagging is an important technique for stabilizing machine learning model...
research
08/07/2022

Adversarial Robustness Through the Lens of Convolutional Filters

Deep learning models are intrinsically sensitive to distribution shifts ...
