Approximate Newton-based statistical inference using only stochastic gradients

05/23/2018
by Tianyang Li, et al.

We present a novel inference framework for convex empirical risk minimization, using approximate stochastic Newton steps. The proposed algorithm is based on the notion of finite differences and allows the approximation of a Hessian-vector product from first-order information. In theory, our method efficiently computes the statistical error covariance in M-estimation, both for unregularized convex learning problems and high-dimensional LASSO regression, without using exact second-order information or resampling the entire data set. In practice, we demonstrate the effectiveness of our framework on large-scale machine learning problems that go even beyond convexity: as a highlight, our work can be used to detect certain adversarial attacks on neural networks.
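The abstract's key primitive is a Hessian-vector product recovered from first-order information via finite differences. The sketch below illustrates that idea only, not the paper's full inference procedure: it approximates H(theta) v from two stochastic-gradient evaluations, H v ~ (grad(theta + eps*v) - grad(theta)) / eps. The logistic-loss setup, the choice of eps, and all function names are assumptions made for the example.

```python
import numpy as np

def minibatch_grad(theta, X_b, y_b, lam=1e-3):
    """Stochastic gradient of an L2-regularized logistic loss on one mini-batch.
    (Illustrative loss; the paper's framework applies to general convex ERM.)"""
    p = 1.0 / (1.0 + np.exp(-X_b @ theta))
    return X_b.T @ (p - y_b) / len(y_b) + lam * theta

def hvp_finite_diff(grad_fn, theta, v, eps=1e-5):
    """Approximate the Hessian-vector product H(theta) @ v from two gradient
    calls only: (grad(theta + eps*v) - grad(theta)) / eps."""
    return (grad_fn(theta + eps * v) - grad_fn(theta)) / eps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 1000, 10
    X = rng.normal(size=(n, d))
    theta_star = rng.normal(size=d)
    y = (X @ theta_star + 0.1 * rng.normal(size=n) > 0).astype(float)

    theta = np.zeros(d)       # current iterate
    v = rng.normal(size=d)    # direction for the Hessian-vector product

    # Fix one mini-batch for both gradient evaluations, so the finite
    # difference reflects curvature rather than mini-batch sampling noise.
    idx = rng.choice(n, size=64, replace=False)
    grad_fn = lambda t: minibatch_grad(t, X[idx], y[idx])

    print("approximate H v:", hvp_finite_diff(grad_fn, theta, v))
```

Evaluating both gradients on the same mini-batch is a deliberate choice in this sketch: with independent batches, sampling noise would dominate the O(eps) curvature signal as eps shrinks.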


Related research

12/26/2018: Stochastic Trust Region Inexact Newton Method for Large-scale Machine Learning
  Nowadays stochastic approximation methods are one of the major research ...

12/14/2021: SC-Reg: Training Overparameterized Neural Networks under Self-Concordant Regularization
  In this paper we propose the SC-Reg (self-concordant regularization) fra...

06/06/2022: Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
  Stochastic variance reduction has proven effective at accelerating first...

05/22/2017: Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method
  We consider large scale empirical risk minimization (ERM) problems, wher...

11/28/2018: First-order Newton-type Estimator for Distributed Estimation and Inference
  This paper studies distributed estimation and inference for a general st...

03/08/2021: Constrained Learning with Non-Convex Losses
  Though learning has become a core technology of modern information proce...

10/26/2018: Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy
  In this paper, we propose a Distributed Accumulated Newton Conjugate gra...
