Robustly Learning a Single Neuron via Sharpness

06/13/2023
by Puqian Wang, et al.

We study the problem of learning a single neuron with respect to the L_2^2-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal L_2^2-error within a constant factor. Our algorithm applies under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory.
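For context, here is the standard formulation behind this abstract, written in our own notation (a sketch; none of these symbols appear on the page). Given examples (x, y) drawn from a distribution D and an activation sigma (e.g., the ReLU), the squared error of a weight vector w and the optimal benchmark are

    \mathcal{L}(w) = \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[ (\sigma(w \cdot x) - y)^2 \right], \qquad \mathrm{OPT} = \min_{w} \mathcal{L}(w),

and a constant-factor approximation means outputting \widehat{w} with \mathcal{L}(\widehat{w}) \le C \cdot \mathrm{OPT} + \epsilon for a universal constant C and accuracy parameter \epsilon. A local error bound (sharpness) condition, in its generic form from optimization theory, lower-bounds suboptimality by the distance to the set of minimizers W^*:

    \mathcal{L}(w) - \mathrm{OPT} \ge \lambda \cdot \mathrm{dist}(w, W^*)^{\alpha} \quad \text{for all } w \text{ in a neighborhood of } W^*,

for some \lambda > 0 and exponent \alpha. How such a bound is established for this loss and exploited algorithmically is the subject of the full paper.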


Related research

06/17/2022 · Learning a Single Neuron with Adversarial Label Noise via Gradient Descent
We study the fundamental problem of learning a single neuron, i.e., a fu...

05/15/2020 · Efficiently Learning Adversarially Robust Halfspaces with Noise
We study the problem of learning adversarially robust halfspaces in the ...

10/18/2022 · SQ Lower Bounds for Learning Single Neurons with Massart Noise
We study the problem of PAC learning a single neuron in the presence of ...

02/13/2020 · Learning Halfspaces with Massart Noise Under Structured Distributions
We study the problem of learning halfspaces with Massart noise in the di...

06/18/2023 · Agnostically Learning Single-Index Models using Omnipredictors
We give the first result for agnostically learning Single-Index Models (...

01/15/2020 · Learning a Single Neuron with Gradient Methods
We consider the fundamental problem of learning a single neuron x ↦ σ(w^⊤ x...

06/08/2020 · Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability
In this paper we revisit some classic problems on classification under m...
