Cryptographic Hardness of Learning Halfspaces with Massart Noise

07/28/2022
by Ilias Diakonikolas, et al.

We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples (𝐱, y) ∈ ℝ^N × {±1}, where the distribution of 𝐱 is arbitrary and the label y is a Massart corruption of f(𝐱), for an unknown halfspace f: ℝ^N → {±1}, with flipping probability η(𝐱) ≤ η < 1/2. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than Ω(η), even if the optimal 0-1 error is small, namely OPT = 2^(-log^c N) for any universal constant c ∈ (0, 1). Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible.
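For concreteness, here is a minimal NumPy sketch of the noise model described in the abstract: draw examples from some marginal, label them with a halfspace, and flip each label independently with an instance-dependent probability η(𝐱) ≤ η < 1/2. The Gaussian marginal and the particular flip function below are illustrative assumptions only; the Massart model places no restriction on either beyond the pointwise bound η(𝐱) ≤ η.

```python
import numpy as np

def sample_massart_halfspace(n, w, eta_max, rng=None):
    """Draw n examples (x, y) where y is a Massart corruption of the
    halfspace f(x) = sign(<w, x>).

    Hypothetical choices for illustration: the model allows an arbitrary
    x-marginal and an arbitrary flip function eta(x); here we use a
    standard Gaussian marginal and a margin-dependent flip rate, both
    respecting the Massart condition eta(x) <= eta_max < 1/2.
    """
    assert 0 <= eta_max < 0.5, "Massart condition requires eta < 1/2"
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n, len(w)))   # illustrative marginal; arbitrary in the model
    clean = np.where(X @ w >= 0, 1, -1)    # clean halfspace labels f(x) in {+1, -1}
    # eta(x) may be any function bounded by eta_max; for concreteness,
    # flip less often far from the decision boundary.
    margin = np.abs(X @ w) / np.linalg.norm(w)
    eta_x = eta_max * np.exp(-margin)      # pointwise eta(x) <= eta_max
    flip = rng.random(n) < eta_x           # independent flip with probability eta(x)
    y = np.where(flip, -clean, clean)
    return X, y

# Example: 5,000 points in R^10 with flip probability capped at 0.3.
X, y = sample_massart_halfspace(5000, w=np.ones(10), eta_max=0.3, rng=0)
```

Note that, unlike agnostic noise, every label here remains correct with probability strictly greater than 1/2 conditioned on 𝐱, which is exactly what makes the Ω(η) lower bound of the paper nontrivial.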


Related research

12/17/2020 · Hardness of Learning Halfspaces with Massart Noise
We study the complexity of PAC learning halfspaces in the presence of Ma...

06/14/2021 · Boosting in the Presence of Massart Noise
We study the problem of boosting the accuracy of a weak learner in the (...

08/29/2019 · Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
We study the problem of properly learning large margin halfspaces in th...

10/18/2022 · SQ Lower Bounds for Learning Single Neurons with Massart Noise
We study the problem of PAC learning a single neuron in the presence of ...

02/13/2023 · Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals
We study the task of agnostically learning halfspaces under the Gaussian...

12/06/2022 · A Strongly Polynomial Algorithm for Approximate Forster Transforms and its Application to Halfspace Learning
The Forster transform is a method of regularizing a dataset by placing i...

03/09/2023 · Efficient Testable Learning of Halfspaces with Adversarial Label Noise
We give the first polynomial-time algorithm for the testable learning of...
