Slow Kill for Big Data Learning

05/02/2023
by Yiyuan She et al.

Big-data applications often involve a vast number of observations and features, creating new challenges for variable selection and parameter estimation. This paper presents a novel technique called "slow kill," which combines nonconvex constrained optimization, adaptive ℓ_2-shrinkage, and increasing learning rates. Because the problem size can decrease during the slow kill iterations, the method is particularly effective for large-scale variable screening. The interplay between statistics and optimization yields insights into how to control the quantile, stepsize, and shrinkage parameters so as to relax the regularity conditions required for the desired level of statistical accuracy. Experimental results on real and synthetic data show that slow kill outperforms state-of-the-art algorithms in various situations while remaining computationally efficient for large-scale data.
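The abstract describes an iterative screening scheme in which coordinates are progressively "killed" (quantile control), survivors receive ℓ_2-shrunk gradient updates, and the shrinking active set allows the effective stepsize to grow. The sketch below illustrates that general shape on a least-squares problem; it is a hypothetical simplification, not the paper's actual algorithm, and all function names, the `decay` schedule, and the shrinkage parameter `alpha` are assumptions made for illustration.

```python
import numpy as np

def slow_kill_sketch(X, y, k_final, n_iter=100, alpha=0.01, decay=0.9):
    """Quantile-thresholded gradient sketch (illustrative only).

    Each iteration takes a ridge-shrunk gradient step on the squared
    loss over the currently active columns, then keeps only the largest
    coordinates in magnitude; the active set shrinks toward k_final.
    """
    n, p = X.shape
    active = np.arange(p)          # indices of columns still "alive"
    beta = np.zeros(p)
    for t in range(n_iter):
        Xa = X[:, active]
        # stepsize from the current (smaller) design; as columns are
        # killed the spectral norm drops, so the learning rate increases
        eta = n / (np.linalg.norm(Xa, 2) ** 2)
        grad = Xa.T @ (Xa @ beta[active] - y) / n
        # gradient step followed by ell_2 (ridge-style) shrinkage
        b = (beta[active] - eta * grad) / (1.0 + eta * alpha)
        # quantile control: slowly reduce the number of survivors
        q_t = max(k_final, int(decay * len(active)))
        keep = np.argsort(-np.abs(b))[:q_t]
        beta = np.zeros(p)
        beta[active[keep]] = b[keep]
        active = active[keep]      # problem size can only decrease
    return beta, np.sort(active)
```

On a well-conditioned sparse regression instance, the surviving index set concentrates on the truly relevant features while the per-iteration cost falls as columns are removed.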
