Neural Networks Regularization Through Class-wise Invariant Representation Learning

09/06/2017
by Soufiane Belharbi, et al.

Training deep neural networks is known to require a large number of training samples, yet in many applications only a few are available. In this work, we tackle the problem of training neural networks for classification when few training samples are available. We address it by proposing a new regularization term that constrains the hidden layers of a network to learn class-wise invariant representations. In our regularization framework, learning invariant representations is generalized to class membership: samples belonging to the same class should have the same hidden representation. Numerical experiments on MNIST and its variants show that our proposal helps improve the generalization of neural networks, particularly when trained with few samples. The source code of our framework is available at https://github.com/sbelharbi/learning-class-invariant-features .
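To make the idea concrete, here is a minimal, hedged sketch (written in PyTorch, not the authors' released code) of one way such a class-wise invariance penalty could look: within a mini-batch, the hidden representations of samples sharing a label are pulled toward their class mean, and the penalty is added to the cross-entropy loss with a weight `lam` (a hypothetical hyperparameter name).

```python
# Hedged sketch of a class-wise invariance penalty, assuming the penalty
# pulls same-class hidden representations toward their in-batch class mean.
# This is an illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    def __init__(self, in_dim=784, hid_dim=128, n_classes=10):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = torch.relu(self.hidden(x))   # hidden representation to regularize
        return self.out(h), h


def class_invariance_penalty(h, y):
    """Mean squared distance of each sample's hidden representation to the
    mean representation of its class, computed within the mini-batch."""
    penalty = h.new_zeros(())
    for c in y.unique():
        h_c = h[y == c]
        if h_c.shape[0] > 1:  # a class with one sample adds no constraint
            penalty = penalty + ((h_c - h_c.mean(dim=0, keepdim=True)) ** 2).sum()
    return penalty / h.shape[0]


def training_step(model, x, y, optimizer, lam=0.1):
    logits, h = model(x)
    loss = F.cross_entropy(logits, y) + lam * class_invariance_penalty(h, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, setting `lam` to zero recovers plain cross-entropy training, while larger values enforce the constraint that same-class samples share a representation more strongly.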
