On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation

12/14/2020
by   Katharina Bieker, et al.

We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature for ensuring robustness against noisy data, but also for finding models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the ℓ_1-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the ℓ_1-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions, one of which is the ℓ_1-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach.
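To make the trade-off concrete: the common weighted-penalty approach scalarizes the two objectives as f(x) + λ‖x‖_1 and sweeps λ, which for a convex loss f traces the regularization path (the linear-regression special case of the homotopy methods mentioned above). The sketch below is not the authors' continuation method; it is a minimal illustration, assuming a least-squares loss, that approximates the scalarized problem for several weights λ via ISTA (proximal gradient with soft-thresholding) and records the resulting (loss, ℓ_1-norm) compromises.

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=500):
    """Approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1
    via ISTA: a gradient step on the smooth part, then the prox
    (soft-thresholding) of the l1 term."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        z = x - g / L                      # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Synthetic sparse regression problem (illustrative data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)

# Sweep the penalty weight and record the (loss, l1-norm) compromises.
front = []
for lam in [0.01, 0.1, 1.0, 5.0]:
    x = ista_lasso(A, b, lam)
    front.append((0.5 * np.sum((A @ x - b) ** 2), np.sum(np.abs(x))))
```

Increasing λ shrinks the ℓ_1-norm at the cost of a higher data-fit loss, yielding points along the trade-off curve. For a convex loss this sweep recovers every optimal compromise; the paper's point is that for non-convex (e.g., neural network) objectives it does not, which is what motivates the tailored continuation method.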


Related research:

- 10/04/2010, Regularizers for Structured Sparsity: "We study the problem of learning a sparse linear regression vector under..."
- 06/04/2021, Adiabatic Quantum Feature Selection for Sparse Linear Regression: "Linear regression is a popular machine learning approach to learn and pr..."
- 03/02/2023, Penalising the biases in norm regularisation enforces sparsity: "Controlling the parameters' norm often yields good generalisation when t..."
- 08/23/2023, A multiobjective continuation method to compute the regularization path of deep neural networks: "Sparsity is a highly desired feature in deep neural networks (DNNs) sinc..."
- 10/05/2022, Optimization-Informed Neural Networks: "Solving constrained nonlinear optimization problems (CNLPs) is a longsta..."
- 02/10/2013, Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization: "Motivated by some applications in signal processing and machine learning..."
- 09/18/2016, Consistent Discretization and Minimization of the L1 Norm on Manifolds: "The L1 norm has been tremendously popular in signal and image processing..."
