Foothill: A Quasiconvex Regularization Function

01/18/2019
by Mouloud Belbahri, et al.

Deep neural networks (DNNs) have demonstrated success on many supervised learning tasks, ranging from voice recognition and object detection to image classification. However, their increasing complexity can lead to poor generalization. Adding noise to the input data or applying an explicit regularization function helps improve generalization. Here we introduce the foothill function, an infinitely differentiable quasiconvex function. This regularizer is flexible enough to deform towards the L_1 and L_2 penalties. Foothill can be used as a loss, as a regularizer, or as a binary quantizer.
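The abstract does not give the closed form of the regularizer, so the Python sketch below shows one smooth quasiconvex penalty with exactly the behaviour described, f(x) = alpha * x * tanh(beta * x / 2): for large beta it approaches an L_1 penalty alpha * |x|, and for small beta it approaches an L_2 penalty alpha * beta * x^2 / 2. The foothill function name, the parameterization, and the default values of alpha and beta here are illustrative assumptions and may differ from the paper's exact definition.

import numpy as np

def foothill(x, alpha=1.0, beta=1.0):
    """Smooth quasiconvex penalty alpha * x * tanh(beta * x / 2).

    Large beta  -> approximately alpha * |x|            (L1-like)
    Small beta  -> approximately alpha * beta * x**2 / 2 (L2-like)
    """
    return alpha * x * np.tanh(beta * x / 2.0)

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)

    # L1-like regime: large beta makes tanh saturate to sign(x)
    print(foothill(x, alpha=1.0, beta=50.0))   # close to |x|
    print(np.abs(x))

    # L2-like regime: small beta makes tanh nearly linear
    print(foothill(x, alpha=1.0, beta=0.01))   # close to 0.01 * x**2 / 2
    print(0.01 * x**2 / 2.0)

Because the penalty is even, zero at the origin, and monotonically increasing in |x|, its sublevel sets are intervals, which is what makes it quasiconvex (though not convex) while remaining infinitely differentiable.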


