Hardness of Learning DNFs using Halfspaces

11/14/2019
by Suprovat Ghoshal et al.

The problem of learning t-term DNF formulas (for t = O(1)) has been studied extensively in the PAC model since its introduction by Valiant (STOC 1984). A t-term DNF can be efficiently learned using a t-term DNF only when t = 1, i.e., when it is an AND, while even weakly learning a 2-term DNF using a constant-term DNF was shown to be NP-hard by Khot and Saket (FOCS 2008). On the other hand, Feldman et al. (FOCS 2009) showed the hardness of weakly learning a noisy AND using a halfspace (the latter being a generalization of an AND), while Khot and Saket (STOC 2008) showed that an intersection of two halfspaces is hard to weakly learn using any function of constantly many halfspaces. Whether a 2-term DNF is efficiently learnable using 2, or constantly many, halfspaces remained open. In this work we answer this question in the negative by showing the hardness of weakly learning a 2-term DNF, as well as a noisy AND, using any function of a constant number of halfspaces. In particular, we prove the following: for any constants ν, ζ > 0 and ℓ ∈ ℕ, given a distribution over point-value pairs in {0,1}^n × {0,1}, it is NP-hard to decide between the following two cases.

YES Case: There is a 2-term DNF that classifies all the points of the distribution correctly, and an AND that correctly classifies at least a (1 − ζ) fraction of the points.

NO Case: Any Boolean function depending on at most ℓ halfspaces correctly classifies at most a (1/2 + ν) fraction of the points of the distribution.

Our result generalizes and strengthens the previous best results mentioned above on the hardness of learning a 2-term DNF, learning an intersection of two halfspaces, and learning a noisy AND.
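The objects in the theorem are concrete enough to illustrate in code. The following is a minimal Python sketch, not from the paper: hypothetical definitions of a 2-term DNF, a single AND, a halfspace, and a Boolean function of ℓ = 2 halfspaces, each scored by the fraction of a labeled sample it classifies correctly, which is the quantity the YES/NO cases bound. The specific formulas, weights, and the uniform toy distribution are all illustrative assumptions.

    # Minimal sketch (not from the paper) of the hypothesis classes in the
    # theorem statement. All formulas and weights below are illustrative.
    from itertools import product

    n = 4  # dimension of the Boolean cube {0,1}^n

    def two_term_dnf(x):
        # (x1 AND x2) OR (x3 AND x4): a 2-term DNF over {0,1}^4
        return (x[0] and x[1]) or (x[2] and x[3])

    def and_clause(x):
        # a single AND (1-term DNF): x1 AND x2
        return x[0] and x[1]

    def halfspace(w, theta):
        # indicator of <w, x> >= theta, as a 0/1 classifier
        return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

    def fn_of_halfspaces(x):
        # a Boolean function depending on ell = 2 halfspaces (here, their XOR)
        h1 = halfspace([1, 1, 0, 0], 2)
        h2 = halfspace([0, 0, 1, 1], 2)
        return h1(x) ^ h2(x)

    def accuracy(h, labeled_points):
        # fraction of point-value pairs the hypothesis classifies correctly
        return sum(int(h(x)) == y for x, y in labeled_points) / len(labeled_points)

    # Toy distribution: uniform over the cube, labeled by the 2-term DNF,
    # so the YES-case condition "a 2-term DNF classifies all points" holds.
    points = [(x, int(two_term_dnf(x))) for x in product([0, 1], repeat=n)]

    print("2-term DNF:        ", accuracy(two_term_dnf, points))     # 1.0
    print("AND:               ", accuracy(and_clause, points))       # 13/16
    print("XOR of halfspaces: ", accuracy(fn_of_halfspaces, points)) # 15/16

On this easy toy distribution even the function of two halfspaces does well; the theorem's NO case concerns the hard distributions produced by the NP-hardness reduction, on which every such function does essentially no better than random guessing.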


