
Learning a high-dimensional classification rule using auxiliary outcomes

11/11/2020
by Muxuan Liang, et al.

Correlated outcomes are common in many practical problems. Based on a decomposition of the estimation bias into two types, within-subspace and against-subspace, we develop a robust approach to estimating the classification rule for the outcome of interest in the presence of auxiliary outcomes in high-dimensional settings. The proposed method consists of a pooled estimation step, which uses all outcomes to gain efficiency, and a subsequent calibration step, which uses only the outcome of interest to correct both types of bias. We show that when the pooled estimator has a low estimation error and a sparse against-subspace bias, the calibrated estimator achieves a lower estimation error than an estimator based on the single outcome of interest alone. An inference procedure for the calibrated estimator is also provided. Simulations and a real data analysis demonstrate the advantages of the proposed method.
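The abstract does not specify the estimators, so the following is only a minimal sketch of the pool-then-calibrate idea under assumed choices: lasso-penalized logistic regression in both steps, a single auxiliary outcome stacked with the outcome of interest in the pooled step, and a calibration step that re-fits the outcome of interest on the pooled linear score plus a sparse correction. Every modeling choice below is an illustrative assumption, not the authors' exact method.

```python
# Hedged sketch of a pool-then-calibrate estimator for a sparse classification rule.
# Assumed choices: lasso logistic regression, one auxiliary outcome, offset-based calibration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                                # sparse signal for the outcome of interest
gamma = beta + 0.1 * rng.standard_normal(p)   # auxiliary outcome shares a similar direction
y_main = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))
y_aux = rng.binomial(1, 1 / (1 + np.exp(-X @ gamma)))

# Step 1 (pooled estimation): stack the outcome of interest with the auxiliary outcome
# and fit a single sparse classifier to gain efficiency.
X_pool = np.vstack([X, X])
y_pool = np.concatenate([y_main, y_aux])
pooled = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_pool, y_pool)
b_pool = pooled.coef_.ravel()

# Step 2 (calibration): using only the outcome of interest, re-fit on the pooled linear
# score plus a lasso-penalized correction over the original features.
score = X @ b_pool
X_cal = np.column_stack([score, X])           # first column rescales the pooled direction
cal = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_cal, y_main)
b_cal = cal.coef_.ravel()[0] * b_pool + cal.coef_.ravel()[1:]  # calibrated coefficients
```

In this sketch, the rescaled pooled direction loosely plays the role of correcting within-subspace bias and the sparse adjustment that of correcting against-subspace bias, mirroring the decomposition described in the abstract.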


Related research

07/16/2020 · Doubly Distributed Supervised Learning and Inference with High-Dimensional Correlated Outcomes
This paper presents a unified framework for supervised learning and infe...

11/29/2018 · Adaptive Sparse Estimation with Side Information
The article considers the problem of estimating a high-dimensional spars...

06/14/2021 · Machine Learning for Variance Reduction in Online Experiments
We consider the problem of variance reduction in randomized controlled t...

09/25/2020 · Doubly Robust Semiparametric Inference Using Regularized Calibrated Estimation with High-dimensional Data
Consider semiparametric estimation where a doubly robust estimating func...

03/05/2023 · Estimating Racial Disparities When Race is Not Observed
The estimation of racial disparities in health care, financial services,...