Regularized deep learning with non-convex penalties

09/11/2019
by Sujit Vettam, et al.

Regularization methods are often employed in deep learning neural networks (DNNs) to prevent overfitting. For penalty-based DNN regularization, typically only convex penalties are considered because of their optimization guarantees. Recent theoretical work has shown that non-convex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and currently existing non-convex penalties for DNN regularization. We provide theoretical justifications for the new penalties and assess the performance of all penalties on DNN analysis of real datasets.
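To illustrate the general idea of penalty-based regularization with a non-convex penalty, the sketch below adds a log-sum penalty, lambda * sum(log(1 + |w| / eps)), to a standard training loss in PyTorch. This is a minimal, hypothetical example of the technique, not the specific penalties proposed in the paper; the model, hyperparameters, and the choice of log-sum penalty are illustrative assumptions.

```python
# Minimal sketch: training a DNN with a non-convex (log-sum) weight penalty.
# The penalty form and hyperparameters here are illustrative, not the paper's.
import torch
import torch.nn as nn

def log_sum_penalty(model, lam=1e-4, eps=1e-2):
    """Non-convex log-sum penalty over all weight matrices (biases unpenalized)."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if "weight" in name:
            penalty = penalty + torch.log1p(p.abs() / eps).sum()
    return lam * penalty

# Toy model and a single illustrative optimization step
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y) + log_sum_penalty(model)  # data loss + penalty
loss.backward()
optimizer.step()
```

Because the penalty is differentiable away from zero, it can be minimized with the same gradient-based optimizers used for the unregularized loss, which is the practical appeal of penalties that satisfy suitable regularity conditions.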
