Linear Models are Most Favorable among Generalized Linear Models

06/09/2020
by Kuan-Yun Lee, et al.

We establish a nonasymptotic lower bound on the L_2 minimax risk for a class of generalized linear models. It is further shown that the minimax risk for the canonical linear model matches this lower bound up to a universal constant. Therefore, the canonical linear model may be regarded as most favorable among the considered class of generalized linear models (in terms of minimax risk). The proof makes use of an information-theoretic Bayesian Cramér-Rao bound for log-concave priors, established by Aras et al. (2019).
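As a reminder of the quantities involved, the L_2 minimax risk and the Bayesian Cramér-Rao (van Trees) bound can be written as follows. The notation here is generic and chosen for illustration; it is not taken from the paper itself, and the scalar form of the van Trees inequality is shown for simplicity.

```latex
% L_2 minimax risk over a parameter set \Theta:
% the best worst-case mean-squared error over all estimators \hat\theta.
R^*(\Theta) \;=\; \inf_{\hat\theta}\,\sup_{\theta \in \Theta}\;
  \mathbb{E}_\theta\!\left[\lVert \hat\theta(Y) - \theta \rVert_2^2\right]

% Van Trees (Bayesian Cramér-Rao) inequality for a prior \pi on \Theta,
% scalar case: the Bayes risk is bounded below using the model's
% Fisher information I(\theta) and the prior's Fisher information I(\pi).
\inf_{\hat\theta}\,
  \mathbb{E}_{\theta \sim \pi}\,\mathbb{E}_\theta\!\left[(\hat\theta(Y) - \theta)^2\right]
\;\ge\; \frac{1}{\mathbb{E}_{\pi}\!\left[I(\theta)\right] + I(\pi)}
```

Since the minimax risk dominates the Bayes risk for any prior, a lower bound of this type for a well-chosen (e.g. log-concave) prior yields a minimax lower bound, which is the proof strategy the abstract attributes to Aras et al. (2019).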


