Shape Parameter Estimation

06/06/2017
by Aleksandr Y. Aravkin, et al.

Performance of machine learning approaches depends strongly on the choice of misfit penalty, and on the correct choice of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, which are time consuming for large-scale applications. We present a principled, data-driven approach to simultaneously learn the model parameters and the misfit penalty parameters. We discuss theoretical properties of these joint inference problems, and develop algorithms for their solution. We show synthetic examples of automatic parameter tuning for piecewise linear-quadratic (PLQ) penalties, and use the approach to develop a self-tuning robust PCA formulation for background separation.
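To illustrate the data-driven idea in the abstract, here is a minimal sketch (not the paper's algorithm) of jointly estimating regression coefficients and the Huber threshold by maximum likelihood: the Huber penalty induces a density p(r) ∝ exp(-ρ_κ(r)), so the negative log-likelihood includes the log of the normalization constant Z(κ), which is what makes the threshold identifiable from data. All function names (e.g. `fit_self_tuning_huber`) and the alternating IRLS / 1-D search scheme below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def huber(r, k):
    """Huber penalty with threshold k, applied elementwise."""
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r**2, k * a - 0.5 * k**2)

def log_partition(k):
    """log Z(k), where Z(k) = integral of exp(-huber(r, k)) over the reals:
    a Gaussian core on [-k, k] plus two exponential tails."""
    return np.log(np.sqrt(2 * np.pi) * (2 * norm.cdf(k) - 1)
                  + 2 * np.exp(-0.5 * k**2) / k)

def joint_nll(A, b, x, k):
    """Joint negative log-likelihood in (x, k): penalty + normalization."""
    r = b - A @ x
    return huber(r, k).sum() + len(r) * log_partition(k)

def fit_self_tuning_huber(A, b, k0=1.0, iters=30):
    """Alternate an IRLS step in x with a bounded 1-D search over k.
    Hypothetical sketch; the paper develops more general PLQ machinery."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    k = k0
    for _ in range(iters):
        # IRLS weights for the Huber loss at the current threshold.
        r = b - A @ x
        w = np.where(np.abs(r) <= k, 1.0,
                     k / np.maximum(np.abs(r), 1e-12))
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
        # Update the threshold by minimizing the joint likelihood in k.
        k = minimize_scalar(lambda kk: joint_nll(A, b, x, kk),
                            bounds=(1e-2, 10.0), method="bounded").x
    return x, k

# Synthetic example: Gaussian noise plus 10% gross outliers.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.normal(size=200)
b[:20] += 8.0                      # contaminate 10% of the measurements
x_hat, k_hat = fit_self_tuning_huber(A, b)
```

Without the `log_partition` term, the likelihood would be trivially improved by shrinking the threshold toward zero; including the normalization constant is what turns the threshold into an estimable shape parameter, which is the core observation the abstract describes.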

