Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation

09/05/2018
by   Tomoya Murata, et al.

We develop new stochastic gradient methods for efficiently solving sparse linear regression in a partial attribute observation setting, where learners are only allowed to observe a fixed number of actively chosen attributes per example at training and prediction times. It is shown that the methods achieve a sample complexity of essentially O(1/ε) to attain an error of ε under a variant of the restricted eigenvalue condition, and the rate has better dependency on the problem dimension than existing methods. In particular, if the smallest magnitude of the non-zero components of the optimal solution is not too small, the rate of our proposed Hybrid algorithm can be boosted to near the minimax optimal sample complexity of full-information algorithms. The core ideas are (i) efficient construction of an unbiased gradient estimator through iterative use of the hard thresholding operator, which yields an exploration algorithm; and (ii) an adaptive combination of exploration and exploitation algorithms that quickly identifies the support of the optimum and efficiently searches for the optimal parameter within that support. Experimental results are presented to validate our theoretical findings and the superiority of our proposed methods.
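To make the setting concrete, the following is a minimal illustrative sketch of stochastic gradient descent with iterative hard thresholding under partial attribute observation. It is not the paper's algorithm: all function names and parameters here are hypothetical, and the attribute budget is simply split into two independently sampled halves so that the product term in the squared-loss gradient stays unbiased under inverse-probability weighting.

```python
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude entries of w and zero out the rest."""
    out = w.copy()
    out[np.argsort(np.abs(w))[:-s]] = 0.0
    return out

def sgd_iht_partial(X, y, s, k, lr=0.005, n_iter=20000, seed=0):
    """Sketch: SGD + hard thresholding, observing only k attributes per example.

    The attribute budget k is split into two independent uniform samples:
    one estimates the prediction <x, w>, the other the gradient direction x.
    Because the samples are independent, the product of the two
    inverse-probability-weighted estimates is unbiased for (<x, w> - y) * x.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        # two independent attribute samples (hypothetical budget split)
        S1 = rng.choice(d, size=k // 2, replace=False)
        S2 = rng.choice(d, size=k - k // 2, replace=False)
        # inverse-probability-weighted estimate of the prediction <x_i, w>
        pred_est = (d / len(S1)) * X[i, S1] @ w[S1]
        # inverse-probability-weighted estimate of the direction x_i
        dir_est = np.zeros(d)
        dir_est[S2] = (d / len(S2)) * X[i, S2]
        # unbiased stochastic gradient of the squared loss, then hard threshold
        w = hard_threshold(w - lr * (pred_est - y[i]) * dir_est, s)
    return w
```

In this sketch the hard thresholding step both enforces s-sparsity of the iterate and, as in the paper's exploration idea, concentrates subsequent updates on a small candidate support.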


Related research

01/30/2023  Near Optimal Private and Robust Linear Regression
We study the canonical statistical estimation problem of linear regressi...

10/31/2022  Private optimization in the interpolation regime: faster rates and hardness results
In non-private stochastic convex optimization, stochastic gradient metho...

03/03/2023  Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression
We consider the problem of mixed sparse linear regression with two compo...

03/19/2019  Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression
We study the problem of robust linear regression with response variable ...

06/11/2020  Sparse recovery by reduced variance stochastic approximation
In this paper, we discuss application of iterative Stochastic Optimizati...

04/20/2023  Linear Convergence of Reshuffling Kaczmarz Methods With Sparse Constraints
The Kaczmarz method (KZ) and its variants, which are types of stochastic...

10/23/2014  Attribute Efficient Linear Regression with Data-Dependent Sampling
In this paper we analyze a budgeted learning setting, in which the learn...
