Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck

09/07/2023
by Benjamin L. Edelman, et al.

This work investigates the nuanced algorithm design choices for deep learning in the presence of computational-statistical gaps. We begin by considering offline sparse parity learning, a supervised classification problem which admits a statistical query lower bound for gradient-based training of a multilayer perceptron. This lower bound can be interpreted as a multi-resource tradeoff frontier: successful learning can only occur if one is sufficiently rich (large model), knowledgeable (large dataset), patient (many training iterations), or lucky (many random guesses). We show, theoretically and experimentally, that sparse initialization and increasing network width yield significant improvements in sample efficiency in this setting. Here, width plays the role of parallel search: it amplifies the probability of finding "lottery ticket" neurons, which learn sparse features more sample-efficiently. Finally, we show that the synthetic sparse parity task can be useful as a proxy for real problems requiring axis-aligned feature learning. We demonstrate improved sample efficiency on tabular classification benchmarks by using wide, sparsely-initialized MLP models; these networks sometimes outperform tuned random forests.
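The abstract's two central ingredients, the (n, k)-sparse parity task and sparse first-layer initialization in a wide MLP, are easy to sketch concretely. The following is a minimal PyTorch illustration, not the authors' code: the helper names (`sparse_parity_data`, `SparseInitMLP`) and the per-neuron fan-in parameter `s` are assumptions made here for exposition; the paper's actual architectures and hyperparameters are in the full text.

```python
import torch
import torch.nn as nn

def sparse_parity_data(n_samples, n_bits, k, parity_idx=None, seed=0):
    """(n, k)-sparse parity: x is a uniform ±1 string of length n_bits;
    the label is the parity (product) of the k coordinates in parity_idx."""
    g = torch.Generator().manual_seed(seed)
    if parity_idx is None:
        parity_idx = torch.randperm(n_bits, generator=g)[:k]
    x = torch.randint(0, 2, (n_samples, n_bits), generator=g).float() * 2 - 1
    y = (x[:, parity_idx].prod(dim=1) > 0).long()  # parity label in {0, 1}
    return x, y, parity_idx

class SparseInitMLP(nn.Module):
    """One-hidden-layer MLP whose first layer is sparsely initialized:
    each hidden unit starts with only s nonzero input weights, so a wide
    network amounts to many parallel random guesses ("lottery tickets")
    at the k relevant coordinates."""
    def __init__(self, n_bits, width, s):
        super().__init__()
        self.fc1 = nn.Linear(n_bits, width)
        self.fc2 = nn.Linear(width, 2)
        with torch.no_grad():
            mask = torch.zeros_like(self.fc1.weight)
            for i in range(width):
                mask[i, torch.randperm(n_bits)[:s]] = 1.0
            self.fc1.weight.mul_(mask)  # keep s random weights per neuron

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))
```

Under this reading, width acts as parallel search: with s = k, each hidden unit independently hits the correct coordinate subset with probability 1/C(n, k), so the chance that at least one "lottery ticket" neuron exists grows with width, trading model size against data, training time, and luck as the abstract describes.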


