Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning

05/18/2018
by Wei Zhu, et al.

Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force, even when conventional data-dependent regularizations that focus on the geometry of the features are imposed. We find that the reason for this is an inconsistency between the enforced geometry and the standard softmax cross-entropy loss. To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet). During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with that geometry. We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces. Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting that it learns only intrinsic patterns because the regularization reduces the memorizing capacity of the baseline DNN.
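The geometric regularizer in OLE-GRSVNet is the OLE loss (see the OLÉ paper under related research below): the sum of per-class nuclear norms, floored at a margin, minus the nuclear norm of the full feature matrix. Below is a minimal PyTorch sketch of that term; the function name, tensor shapes, and the clamp-based margin handling are illustrative, and the geometry-consistent validation loss that GRSVNet evaluates on a separate batch is not shown here.

```python
import torch

def ole_loss(features: torch.Tensor, labels: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    """Sketch of the Orthogonal Low-rank Embedding (OLE) loss.

    features: (N, d) feature matrix for a mini-batch.
    labels:   (N,) integer class labels.
    delta:    margin that floors each per-class nuclear norm.
    """
    # Intra-class term: a small nuclear norm means a class's features
    # span a low-rank subspace. Clamping at `delta` zeroes the gradient
    # for classes that have already collapsed below the margin.
    intra = sum(
        torch.clamp(
            torch.linalg.matrix_norm(features[labels == c], ord="nuc"),
            min=delta,
        )
        for c in labels.unique()
    )
    # Inter-class term: maximizing the nuclear norm of all features
    # together pushes the class subspaces toward mutual orthogonality.
    inter = torch.linalg.matrix_norm(features, ord="nuc")
    return intra - inter
```

Minimizing the first term concentrates each class in a low-dimensional subspace, while the subtracted term spreads the class subspaces apart; the loss is non-negative and vanishes when the class subspaces are exactly orthogonal.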

Related research

12/06/2018 · Trained Rank Pruning for Efficient Deep Neural Networks
The performance of Deep Neural Networks (DNNs) keeps elevating in recent...

03/28/2017 · Coordinating Filters for Faster Deep Neural Networks
Very large-scale Deep Neural Networks (DNNs) have achieved remarkable su...

12/05/2017 · OLÉ: Orthogonal Low-rank Embedding, A Plug and Play Geometric Loss for Deep Learning
Deep neural networks trained using a softmax layer at the top and the cr...

02/21/2018 · Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets
The roles played by learning and memorization represent an important top...

11/01/2018 · Improving Adversarial Robustness by Encouraging Discriminative Features
Deep neural networks (DNNs) have achieved state-of-the-art results in va...

10/02/2018 · Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Random Matrix Theory (RMT) is applied to analyze weight matrices of Deep...

11/16/2017 · LDMNet: Low Dimensional Manifold Regularized Neural Networks
Deep neural networks have proved very successful on archetypal tasks for...
