High-dimensional Penalty Selection via Minimum Description Length Principle

04/26/2018
by Kohei Miyaguchi, et al.

We tackle the problem of penalty selection for regularization on the basis of the minimum description length (MDL) principle. In particular, we consider the case in which the design space of the penalty function is high-dimensional. In this situation, the luckiness normalized maximum likelihood (LNML) minimization approach is favorable, because LNML quantifies the goodness of regularized models with any form of penalty function from the viewpoint of the MDL principle, and thus guides us to a good penalty function through the high-dimensional space. However, the minimization of LNML entails two major challenges: 1) the computation of the normalizing factor of LNML and 2) its minimization in high-dimensional spaces. In this paper, we present a novel regularization selection method (MDL-RS), in which a tight upper bound of LNML (uLNML) is minimized with a local convergence guarantee. Our main contribution is the derivation of uLNML, an upper bound of LNML with a uniform gap that admits an analytic expression. This addresses both challenges approximately, as it allows us to approximate LNML accurately and then minimize it efficiently. Experimental results show that MDL-RS improves the generalization performance of regularized estimates, especially when the model has redundant parameters.
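For reference, the LNML distribution whose code length the method minimizes is standardly defined as follows; the notation here (data $x^n$, model $p(x^n \mid \theta)$, penalty function $\lambda: \Theta \to \mathbb{R}$) is our gloss, not necessarily the paper's exact symbols:

$$
p_{\mathrm{LNML}}(x^n) \;=\; \frac{\sup_{\theta \in \Theta} p(x^n \mid \theta)\, e^{-\lambda(\theta)}}{\int \sup_{\theta \in \Theta} p(y^n \mid \theta)\, e^{-\lambda(\theta)}\, \mathrm{d}y^n},
\qquad
L_{\mathrm{LNML}}(x^n) \;=\; -\log p_{\mathrm{LNML}}(x^n).
$$

Penalty selection amounts to choosing $\lambda$ to minimize $L_{\mathrm{LNML}}(x^n)$; the denominator is the normalizing factor whose computation is challenge 1) above.

As a concrete toy, the Gaussian linear model with a ridge penalty is one of the few cases where this normalizing factor is available in closed form, so penalty selection by code-length minimization can be sketched directly. The following is a minimal sketch under our own assumptions (known noise variance, a single scalar penalty weight searched over a grid); it is not the paper's MDL-RS algorithm, which instead minimizes the analytic upper bound uLNML for general penalty designs:

```python
# Hypothetical sketch: exact LNML code length for ridge regression with
# known noise variance, and penalty selection by grid search.
import numpy as np

def lnml_code_length(X, y, lam, sigma2=1.0):
    """-log p_LNML(y) for the Gaussian linear model y ~ N(X theta, sigma2 I)
    with luckiness function w(theta) = exp(-lam * ||theta||^2 / (2 * sigma2))."""
    n, d = X.shape
    G = X.T @ X
    theta_hat = np.linalg.solve(G + lam * np.eye(d), X.T @ y)
    resid = y - X @ theta_hat
    # Penalized fit term at the maximizer: ||y - X theta||^2 + lam ||theta||^2.
    fit = (resid @ resid + lam * theta_hat @ theta_hat) / (2 * sigma2)
    # Log normalizing factor: (1/2) * sum_i log(1 + d_i / lam),
    # where d_i are the eigenvalues of X^T X (zero eigenvalues contribute 0).
    evals = np.linalg.eigvalsh(G)
    log_Z = 0.5 * np.sum(np.log1p(np.maximum(evals, 0.0) / lam))
    return 0.5 * n * np.log(2 * np.pi * sigma2) + fit + log_Z

# Penalty selection: pick the weight minimizing the LNML code length.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) * 0.3 + rng.standard_normal(50)
grid = np.logspace(-3, 3, 61)
best = min(grid, key=lambda lam: lnml_code_length(X, y, lam))
print("selected penalty weight:", best)
```

In this conjugate case the grid search over one scalar is trivial; the point of uLNML is precisely that such closed forms are unavailable for general high-dimensional penalty designs, where an analytic, uniformly tight upper bound makes efficient minimization possible.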


