Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

07/28/2022
by   Stefan Tiegel, et al.

We show hardness of improperly learning halfspaces in the agnostic model, based on worst-case lattice problems, e.g., approximating shortest vectors within polynomial factors. In particular, we show that under this assumption there is no efficient algorithm that outputs any binary hypothesis, not necessarily a halfspace, achieving misclassification error better than 1/2 - ϵ, even if the optimal misclassification error is as small as δ. Here, ϵ can be smaller than the inverse of any polynomial in the dimension and δ can be as small as exp(-Ω(log^{1-c}(d))), where 0 < c < 1 is an arbitrary constant and d is the dimension. Previous hardness results [Daniely16] for this problem were based on average-case complexity assumptions, specifically, variants of Feige's random 3SAT hypothesis. Our work gives the first hardness result for this problem based on a worst-case complexity assumption. It is inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.
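For concreteness, the parameter regime above can be restated as follows. This is a paraphrase of the abstract, not a verbatim theorem from the paper; the notation err_D(h) for the misclassification error of a hypothesis h under the example distribution D is introduced here for readability:

\[
\mathrm{err}_D(h) \;=\; \Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr].
\]
Assuming worst-case hardness of approximating shortest lattice vectors within polynomial factors, for every constant $0 < c < 1$ there is no polynomial-time algorithm that, given samples from a distribution $D$ over $\mathbb{R}^d \times \{\pm 1\}$ satisfying
\[
\min_{\text{halfspaces } f} \mathrm{err}_D(f) \;\le\; \delta \;=\; \exp\bigl(-\Omega(\log^{1-c} d)\bigr),
\]
outputs a (not necessarily halfspace) hypothesis $h$ with
\[
\mathrm{err}_D(h) \;\le\; \tfrac{1}{2} - \epsilon,
\]
even when $\epsilon = d^{-\omega(1)}$, i.e., smaller than any inverse polynomial in $d$.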
