A case for new neural network smoothness constraints

12/14/2020
by Mihaela Rosca, et al.

How sensitive should machine learning models be to changes in their inputs? We tackle the question of model smoothness and show that it is a useful inductive bias that aids generalization, adversarial robustness, generative modeling, and reinforcement learning. We explore current methods of imposing smoothness constraints and observe that they lack the flexibility to adapt to new tasks, that they do not account for data modalities, and that they interact with losses, architectures, and optimization in ways not yet fully understood. We conclude that new advances in the field hinge on finding ways to incorporate data, tasks, and learning into our definitions of smoothness.
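
One common family of smoothness constraints discussed in this line of work bounds a network's Lipschitz constant, i.e., how fast its output can change with its input. As a concrete illustration (a minimal sketch, not code from the paper), below is a NumPy version of spectral normalization, which rescales a weight matrix by a power-iteration estimate of its largest singular value; the function name spectral_normalize and its parameters are hypothetical choices for this sketch.

```python
import numpy as np

def spectral_normalize(W, n_power_iters=5, eps=1e-12):
    """Return W divided by an estimate of its spectral norm.

    The linear map x -> W x then has Lipschitz constant close to 1,
    bounding how much the layer can stretch any input direction.
    """
    # Power iteration: alternate left/right matrix-vector products to
    # converge toward the top singular vectors of W.
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_power_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = float(u @ W @ v)  # estimated largest singular value of W
    return W / sigma

W = np.random.randn(64, 32)
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))  # approximately 1.0
```

Dividing by the top singular value caps the layer's worst-case sensitivity to input perturbations, which is one formalization of the smoothness the abstract refers to; applying it to every layer bounds the Lipschitz constant of the whole network.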

Related research

- Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization (02/14/2021). Large scale distributed optimization has become the default tool for the...
- MaxUp: A Simple Way to Improve Generalization of Neural Network Training (02/20/2020). We propose MaxUp, an embarrassingly simple, highly effective technique f...
- Kernel-convoluted Deep Neural Networks with Data Augmentation (12/04/2020). The Mixup method (Zhang et al. 2018), which uses linearly interpolated d...
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness (05/27/2019). The accuracy of deep learning, i.e., deep neural networks, can be charac...
- Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time (09/22/2020). From CNNs to attention mechanisms, encoding inductive biases into neural...
- Smoothness and continuity of cost functionals for ECG mismatch computation (01/12/2022). The field of cardiac electrophysiology tries to abstract, describe and f...
- Evaluations and Methods for Explanation through Robustness Analysis (05/31/2020). Among multiple ways of interpreting a machine learning model, measuring ...