
Learning a high-dimensional classification rule using auxiliary outcomes

by   Muxuan Liang, et al.

Correlated outcomes are common in many practical problems. Based on a decomposition of the estimation bias into two types, within-subspace and against-subspace, we develop a robust approach to estimating the classification rule for the outcome of interest in the presence of auxiliary outcomes in high-dimensional settings. The proposed method includes a pooled estimation step using all outcomes to gain efficiency, and a subsequent calibration step using only the outcome of interest to correct both types of bias. We show that when the pooled estimator has a low estimation error and a sparse against-subspace bias, the calibrated estimator can achieve a lower estimation error than an estimator that uses only the single outcome of interest. An inference procedure for the calibrated estimator is also provided. Simulations and a real data analysis demonstrate the advantages of the proposed method.
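The two-step structure described in the abstract, a pooled fit over all outcomes followed by a sparse calibration on the outcome of interest alone, can be illustrated with a small sketch. This is not the authors' estimator; it is a simplified illustration using L1-penalized logistic regression solved by proximal gradient descent, where the calibration step fits a sparse correction on top of the pooled fit via an offset. All function names, tuning values, and the simulated data are illustrative assumptions.

```python
import numpy as np

def lasso_logistic(X, y, lam, offset=None, n_iter=500, step=0.01):
    """L1-penalized logistic regression via proximal gradient (ISTA).
    `offset` carries a fixed contribution to the linear predictor, which
    lets the calibration step estimate only a sparse correction.
    (Illustrative solver, not the method proposed in the paper.)"""
    n, p = X.shape
    beta = np.zeros(p)
    off = np.zeros(n) if offset is None else offset
    for _ in range(n_iter):
        eta = off + X @ beta
        grad = X.T @ (1.0 / (1.0 + np.exp(-eta)) - y) / n  # logistic gradient
        beta = beta - step * grad
        # soft-thresholding = proximal operator of the L1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

# toy data: primary outcome y1 and a correlated auxiliary outcome y2
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 1.5
logits = X @ beta_true
y1 = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
# auxiliary outcome shares most of the signal but is slightly perturbed
y2 = rng.binomial(1, 1.0 / (1.0 + np.exp(-(logits + 0.3 * X[:, 3]))))

# step 1: pooled estimation using all outcomes (stacked) to gain efficiency
X_pool = np.vstack([X, X])
y_pool = np.concatenate([y1, y2])
beta_pool = lasso_logistic(X_pool, y_pool, lam=0.05)

# step 2: calibration using only the outcome of interest; the pooled fit
# enters as an offset and a sparse correction absorbs the remaining bias
delta = lasso_logistic(X, y1, lam=0.05, offset=X @ beta_pool)
beta_cal = beta_pool + delta
```

The offset trick is what makes the calibration cheap: because the pooled linear predictor is held fixed, only the sparse correction `delta` is re-estimated from the (smaller) sample with the outcome of interest, mirroring the pooled-then-calibrate logic of the abstract.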



