Takeuchi's Information Criteria as a form of Regularization

03/13/2018
by Matthew Dixon, et al.

Takeuchi's Information Criteria (TIC) is a linearization of maximum likelihood estimator bias which shrinks the model parameters towards the maximum entropy distribution, even when the model is misspecified. In statistical machine learning, L_2 regularization (a.k.a. ridge regression) also introduces a parameterized bias term with the goal of minimizing out-of-sample entropy, but it generally requires a numerical solver to find the regularization parameter. This paper presents a novel regularization approach based on TIC; the approach does not assume a data-generation process and results in a higher-entropy distribution through more efficient suppression of sample noise. The resulting objective function can be directly minimized to estimate and select the best model, without the need to select a regularization parameter as in ridge regression. Numerical results on a synthetic high-dimensional dataset generated from a logistic regression model demonstrate superior model performance when using TIC-based regularization over an L_1 or an L_2 penalty term.
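To make the idea concrete, here is a minimal sketch (not the authors' exact formulation) of a TIC-penalized logistic regression. It uses the standard Takeuchi correction, which adds tr(J^{-1} K) to the in-sample negative log-likelihood, where J is the average Hessian of the per-sample losses and K is the empirical covariance of the per-sample score vectors. The function and variable names, the synthetic-data dimensions, and the choice of optimizer are all illustrative assumptions.

```python
# Sketch of a TIC-penalized objective for logistic regression:
# average NLL plus tr(J^{-1} K) / n, minimized directly, with no
# regularization parameter to tune (hypothetical implementation).
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tic_objective(beta, X, y):
    n = X.shape[0]
    p = sigmoid(X @ beta)
    # Average negative log-likelihood (small constant guards log(0))
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Per-sample score vectors of the NLL: g_i = (p_i - y_i) * x_i
    G = (p - y)[:, None] * X
    K = G.T @ G / n                      # empirical score covariance
    W = p * (1 - p)                      # logistic Hessian weights
    J = (X * W[:, None]).T @ X / n       # average Hessian of the NLL
    # Takeuchi bias correction tr(J^{-1} K); small ridge for stability
    penalty = np.trace(np.linalg.solve(J + 1e-8 * np.eye(J.shape[0]), K))
    return nll + penalty / n

# Usage on synthetic logistic-regression data (illustrative sizes)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
beta_true = rng.normal(size=10)
y = (rng.random(500) < sigmoid(X @ beta_true)).astype(float)
res = minimize(tic_objective, np.zeros(10), args=(X, y), method="BFGS")
print(res.x)
```

Because the penalty is computed from the data itself, nothing plays the role of ridge regression's tuning parameter; this is the property the abstract highlights, although the paper's exact objective may differ from this sketch.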
