
DebiNet: Debiasing Linear Models with Nonlinear Overparameterized Neural Networks

11/01/2020
by Shiyun Xu, et al.

Recent years have witnessed the strong empirical performance of over-parameterized neural networks on various tasks, as well as many theoretical advances, e.g., universal approximation and provable convergence to a global minimum. In this paper, we incorporate over-parameterized neural networks into semi-parametric models to bridge the gap between inference and prediction, especially in high-dimensional linear problems. This allows us to exploit a wide class of networks to approximate the nuisance functions while estimating the parameters of interest consistently. We therefore offer the best of both worlds: the universal approximation ability of neural networks and the interpretability of the classic linear model, leading to valid inference and accurate prediction. We establish the theoretical foundations that make this possible and demonstrate them with numerical experiments. Furthermore, we propose a framework, DebiNet, in which arbitrary feature selection methods can be plugged into our semi-parametric neural network, and we illustrate that the framework debiases regularized estimators and performs well in terms of both post-selection inference and generalization error.
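The abstract's recipe, estimating the nonlinear nuisance with an over-parameterized network and then running an ordinary linear regression for the parameters of interest, can be illustrated with a minimal sketch. This is not DebiNet itself (the paper's architecture and feature-selection plug-in are not specified here); it is a generic partialling-out scheme on synthetic data, using a random-feature two-layer ReLU network as the over-parameterized nuisance learner and cross-fitting to avoid overfitting bias. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic partially linear model: y = X @ theta + g(Z) + noise,
# where g is the nonlinear nuisance to be absorbed by the network.
n = 1000
X = rng.normal(size=(n, 2))          # covariates of interest
Z = rng.normal(size=(n, 3))          # covariates entering nonlinearly
theta_true = np.array([1.5, -2.0])
g = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]
y = X @ theta_true + g + 0.1 * rng.normal(size=n)

def net_fit_predict(Z_tr, t_tr, Z_te, width=1000, ridge=10.0, seed=1):
    """Two-layer ReLU network with a frozen random first layer, wider than
    the training fold; only the linear output layer is trained (ridge)."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(Z_tr.shape[1], width)) / np.sqrt(Z_tr.shape[1])
    b = r.normal(size=width)
    Phi_tr = np.maximum(Z_tr @ W + b, 0.0)
    coef = np.linalg.solve(Phi_tr.T @ Phi_tr + ridge * np.eye(width),
                           Phi_tr.T @ t_tr)
    return np.maximum(Z_te @ W + b, 0.0) @ coef

# Cross-fitting: estimate E[y|Z] and E[X|Z] on one fold and predict on the
# other, so the nuisance fit never sees the samples it residualizes.
idx = rng.permutation(n)
half = n // 2
y_hat, X_hat = np.empty(n), np.empty_like(X)
for tr, te in [(idx[:half], idx[half:]), (idx[half:], idx[:half])]:
    y_hat[te] = net_fit_predict(Z[tr], y[tr], Z[te])
    X_hat[te] = net_fit_predict(Z[tr], X[tr], Z[te])

# OLS on the partialled-out residuals recovers theta without the bias a
# naive regularized fit of the full model would incur.
theta_hat, *_ = np.linalg.lstsq(X - X_hat, y - y_hat, rcond=None)
print(theta_hat)  # close to theta_true = [1.5, -2.0]
```

The design choice worth noting is that the network is only ever asked to fit the nuisance, while inference on `theta` reduces to an ordinary least-squares problem on residuals, which is what makes the "universal approximation plus linear-model interpretability" combination in the abstract concrete.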


Related research

06/08/2020 — Multicarving for high-dimensional post-selection inference
We consider post-selection inference for high-dimensional (generalized) ...

11/18/2022 — Universal Property of Convolutional Neural Networks
Universal approximation, whether a set of functions can approximate an a...

06/18/2021 — It's FLAN time! Summing feature-wise latent representations for interpretability
Interpretability has become a necessary feature for machine learning mod...

10/15/2020 — Deep Conditional Transformation Models
Learning the cumulative distribution function (CDF) of an outcome variab...

02/18/2020 — Learning Parities with Neural Networks
In recent years we see a rapidly growing line of research which shows le...

05/23/2019 — Parsimonious Deep Learning: A Differential Inclusion Approach with Global Convergence
Over-parameterization is ubiquitous nowadays in training neural networks...

05/24/2022 — Semi-Parametric Deep Neural Networks in Linear Time and Memory
Recent advances in deep learning have been driven by large-scale paramet...