Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

07/28/2022
by   Stefan Tiegel, et al.

We show hardness of improperly learning halfspaces in the agnostic model based on worst-case lattice problems, e.g., approximating shortest vectors within polynomial factors. In particular, we show that under this assumption there is no efficient algorithm that outputs any binary hypothesis, not necessarily a halfspace, achieving misclassification error better than 1/2 - ϵ, even if the optimal misclassification error is as small as δ. Here, ϵ can be smaller than the inverse of any polynomial in the dimension and δ can be as small as exp(-Ω(log^{1-c}(d))), where 0 < c < 1 is an arbitrary constant and d is the dimension. Previous hardness results [Daniely16] for this problem were based on average-case complexity assumptions, specifically, variants of Feige's random 3SAT hypothesis. Our work gives the first hardness result for this problem based on a worst-case complexity assumption. It is inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.


