When Hardness of Approximation Meets Hardness of Learning

08/18/2020
by Eran Malach, et al.

A supervised learning algorithm has access to a distribution of labeled examples and needs to return a function (hypothesis) that correctly labels the examples. The learner's hypothesis is taken from some fixed class of functions (e.g., linear classifiers, neural networks, etc.). A learning algorithm can fail for two possible reasons: a wrong choice of hypothesis class (hardness of approximation), or a failure to find the best function within the hypothesis class (hardness of learning). Although both approximation and learnability are important for the success of the algorithm, they are typically studied separately. In this work, we show a single hardness property that implies both hardness of approximation using linear classes and shallow networks, and hardness of learning using correlation queries and gradient descent. This allows us to obtain new results on hardness of approximation and learnability of parity functions, DNF formulas, and AC^0 circuits.
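To make the kind of hardness property at play concrete, the sketch below (ours, not from the paper; the subset S and all parameter values are illustrative) empirically checks a defining feature of parity functions under the uniform distribution: a parity over a subset S of more than one coordinate has zero correlation with every single input coordinate. A learner restricted to (approximate) correlation queries therefore receives almost no signal about S, and the same intuition applies to gradient descent, whose population gradients are themselves built from such correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 100_000          # input dimension and sample size (illustrative)
S = [2, 5, 11, 17]          # hypothetical target subset for the parity

# Uniform inputs over {-1, +1}^n and parity labels chi_S(x) = prod_{i in S} x_i.
X = rng.choice([-1, 1], size=(m, n))
y = X[:, S].prod(axis=1)

# Empirical correlation E[x_i * y] for each coordinate i. In expectation this
# is exactly 0 for every i (x_i * chi_S is itself a non-empty parity since
# |S| > 1), so all estimates concentrate around 0 at scale ~ 1/sqrt(m) and no
# single correlation query reveals anything about S.
corr = (X * y[:, None]).mean(axis=0)
print("max |correlation| over coordinates:", np.abs(corr).max())
```

The same computation also hints at hardness of approximation for linear classes: a linear function is a sum of coordinates, so its correlation with the parity is a sum of these (near-)zero terms. This is consistent with the classical statistical-query picture, in which detecting S requires either exponentially many correlation queries or inverse-exponential query precision.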

Related research

01/20/2021 · From Local Pseudorandom Generators to Hardness of Learning
We prove hardness-of-learning results under a well-studied assumption on...

01/31/2021 · The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Several recent works have shown separation results between deep neural n...

11/28/2017 · Backprop as Functor: A compositional perspective on supervised learning
A supervised learning algorithm searches over a set of functions A → B p...

06/05/2020 · Hardness of Learning Neural Networks with Natural Weights
Neural networks are nowadays highly successful despite strong hardness r...

07/11/2011 · Multi-Instance Learning with Any Hypothesis Class
In the supervised learning setting termed Multiple-Instance Learning (MI...

04/06/2021 · Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity Bounds for Neural Network Approximation Spaces
We study the computational complexity of (deterministic or randomized) a...

12/17/2018 · Multi Instance Learning For Unbalanced Data
In the context of Multi Instance Learning, we analyze the Single Instanc...
