The Reflectron: Exploiting geometry for learning generalized linear models

06/15/2020
by Nicholas M. Boffi, et al.

Generalized linear models (GLMs) extend linear regression by generating the dependent variables through a nonlinear function of a predictor in a Reproducing Kernel Hilbert Space. Despite the nonconvexity of the underlying optimization problem, the GLM-tron algorithm of Kakade et al. (2011) provably learns GLMs with guarantees of computational and statistical efficiency. We present an extension of the GLM-tron to a mirror-descent or natural-gradient-like setting, which we call the Reflectron. The Reflectron enjoys the same statistical guarantees as the GLM-tron for any choice of the convex potential function ψ used to define mirror descent. Central to our algorithm is the potential ψ, which can be chosen to implicitly regularize the learned model when there are multiple hypotheses consistent with the data. Our results extend to the case of multiple outputs with or without weight sharing. We perform our analysis in continuous time, leading to simple and intuitive derivations, with discrete-time implementations obtained by discretizing the continuous-time dynamics. We supplement our theoretical analysis with simulations on real and synthetic datasets demonstrating the validity of our theoretical results.
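As a rough illustration of the idea described above (not the paper's exact algorithm), the following minimal numpy sketch shows a forward-Euler discretization of a mirror-descent GLM-tron-style update. The function names (`reflectron`, `grad_psi`, `grad_psi_inv`), the step size, and the fixed iteration count are illustrative assumptions; the paper works in continuous time and derives its discrete-time implementations from those dynamics.

```python
import numpy as np

def reflectron(X, y, u, grad_psi, grad_psi_inv, lr=0.1, n_steps=100):
    """Sketch of a mirror-descent analogue of the GLM-tron update.

    X: (n, d) inputs; y: (n,) targets; u: link function applied elementwise;
    grad_psi / grad_psi_inv: the mirror map (gradient of the potential psi)
    and its inverse. Choosing psi(w) = 0.5 * ||w||^2 makes grad_psi the
    identity, recovering a plain GLM-tron-style step.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        residual = u(X @ w) - y          # prediction error under the link u
        g = X.T @ residual / n           # GLM-tron-style pseudo-gradient
        theta = grad_psi(w) - lr * g     # step in the dual (mirror) space
        w = grad_psi_inv(theta)          # map back to the primal space
    return w

# Usage sketch with the Euclidean potential (identity mirror map):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = 1.0 / (1.0 + np.exp(-X @ w_true))      # sigmoid link
    w_hat = reflectron(
        X, y,
        u=lambda z: 1.0 / (1.0 + np.exp(-z)),
        grad_psi=lambda w: w,
        grad_psi_inv=lambda t: t,
    )
```

A different convex potential, such as a negative-entropy-like ψ, would change the geometry of the update and, per the abstract, act as an implicit regularizer selecting among hypotheses consistent with the data.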
