Learning Depth-Three Neural Networks in Polynomial Time
We give a polynomial-time algorithm for learning neural networks with one hidden layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm succeeds with respect to any distribution on the unit ball in n dimensions (hidden weight vectors also have unit norm). This is the first assumption-free, provably efficient algorithm for learning neural networks with more than one nonlinear layer. Our algorithm, Alphatron, is a simple, iterative update rule that combines isotonic regression with kernel methods. It outputs a hypothesis that yields efficient oracle access to interpretable features. It also suggests a new approach to Boolean function learning via smooth relaxations of hard thresholds, sidestepping traditional hardness results from computational learning theory. Along these lines, we give improved results for a number of longstanding problems related to Boolean concept learning, unifying a variety of different techniques. For example, we give the first polynomial-time algorithm for learning intersections of halfspaces with a margin (distribution-free) and the first generalization of DNF learning to the setting of probabilistic concepts (queries; uniform distribution). Finally, we give the first provably correct algorithms for common schemes in multiple-instance learning.
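To make the abstract's description of Alphatron concrete, the following is a minimal sketch of a kernelized, Isotron/GLMtron-style update of the kind described above: maintain one coefficient per training example and nudge it by the residual under a monotone link function. The choice of kernel, learning rate, iteration count, and stopping rule here are illustrative assumptions for exposition, not the paper's exact algorithm or guarantees.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z (illustrative choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alphatron_sketch(X, y, u=lambda z: 1.0 / (1.0 + np.exp(-z)),
                     kernel=rbf_kernel, lr=0.1, iters=200):
    """Residual-driven kernel updates under a monotone link u.

    Returns a hypothesis of the form x -> u(sum_i alpha_i * K(x_i, x)).
    Hyperparameters (lr, iters, kernel) are assumptions for this sketch.
    """
    m = len(y)
    K = kernel(X, X)                      # m x m Gram matrix on the sample
    alpha = np.zeros(m)
    for _ in range(iters):
        preds = u(K @ alpha)              # current predictions on the sample
        alpha += (lr / m) * (y - preds)   # update coefficients by the residuals
    return lambda X_new: u(kernel(X_new, X) @ alpha)

# Toy usage: probabilistic labels in [0, 1] from a random sigmoidal target
# on the unit ball (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = 1.0 / (1.0 + np.exp(-X @ rng.normal(size=5)))
h = alphatron_sketch(X, y)
print(h(X[:3]))
```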