Agnostic Learning of a Single Neuron with Gradient Descent

05/29/2020
by Spencer Frei, et al.

We consider the problem of learning the best-fitting single neuron, as measured by the expected squared loss 𝔼_{(x,y)∼D}[(σ(w^⊤x) − y)^2] over an unknown joint distribution D of the features and labels, by using gradient descent on the empirical risk induced by a set of i.i.d. samples S ∼ D^n. The activation function σ is an arbitrary Lipschitz and non-decreasing function; this makes the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions as well as inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption is made on the relationship between the labels y and the features x, if the population risk minimizer v has risk OPT, we show that gradient descent achieves population risk O(OPT^{1/2}) + ϵ in polynomial time and sample complexity. When the labels take the form y = σ(v^⊤x) + ξ for zero-mean sub-Gaussian noise ξ, we show that gradient descent achieves population risk OPT + ϵ. Our sample complexity and runtime guarantees are (almost) dimension independent, and when σ is strictly increasing and Lipschitz, they require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption on the marginal distribution of the features. To the best of our knowledge, this is the first result for agnostic learning of a single neuron using gradient descent.
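The sketch below illustrates the procedure the abstract analyzes: plain gradient descent on the empirical squared risk L_S(w) = (1/n) Σ_i (σ(w^⊤x_i) − y_i)^2 for a single neuron, here with σ = ReLU. This is not the authors' code; the step size, iteration count, and the convention of taking the ReLU subgradient at 0 to be 1 are illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_subgrad(z):
    # sigma is nonsmooth at 0; we take the subgradient there to be 1
    # (an illustrative convention, so zero initialization still moves).
    return (z >= 0).astype(float)

def gd_single_neuron(X, y, step=0.1, iters=1000):
    """Gradient descent on the empirical squared risk of a single neuron."""
    n, d = X.shape
    w = np.zeros(d)  # zero initialization
    for _ in range(iters):
        z = X @ w
        residual = relu(z) - y
        # Chain rule on (1/n) sum_i (sigma(w.x_i) - y_i)^2.
        grad = (2.0 / n) * (X.T @ (residual * relu_subgrad(z)))
        w -= step * grad
    return w

# Illustrative usage on synthetic noisy-teacher data, y = sigma(v.x) + xi,
# matching the sub-Gaussian-noise setting described in the abstract.
rng = np.random.default_rng(0)
n, d = 2000, 10
v = rng.normal(size=d)
v /= np.linalg.norm(v)
X = rng.normal(size=(n, d))
y = relu(X @ v) + 0.1 * rng.normal(size=n)

w_hat = gd_single_neuron(X, y)
print("empirical risk:", np.mean((relu(X @ w_hat) - y) ** 2))
```

With this noisy-teacher data, the empirical risk should approach the noise level (≈ 0.01), consistent with the OPT + ϵ guarantee stated above.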


