Learning the mapping x ↦ ∑_{i=1}^d x_i^2: the cost of finding the needle in a haystack

02/24/2020
by Jiefu Zhang, et al.

The task of using machine learning to approximate the mapping x ↦ ∑_{i=1}^d x_i^2 with x_i ∈ [-1,1] seems to be a trivial one. Given the knowledge of the separable structure of the function, one can design a sparse network to represent the function very accurately, or even exactly. When such structural information is not available, and we may only use a dense neural network, the optimization procedure to find the sparse network embedded in the dense network is similar to finding a needle in a haystack, using a given number of samples of the function. We demonstrate that the cost (measured by sample complexity) of finding the needle is directly related to the Barron norm of the function. While only a small number of samples is needed to train a sparse network, the dense network trained with the same number of samples exhibits large test loss and a large generalization gap. In order to control the size of the generalization gap, we find that the use of explicit regularization becomes increasingly more important as d increases. The numerically observed sample complexity with explicit regularization scales as O(d^2.5), which is in fact better than the theoretically predicted sample complexity that scales as O(d^4). Without explicit regularization (also called implicit regularization), the numerically observed sample complexity is significantly higher and is close to O(d^4.5).
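To make the setup concrete, the sketch below (not the authors' code) fits f(x) = ∑_i x_i^2 on [-1,1]^d with a dense two-layer ReLU network, once without and once with an explicit penalty. The width, learning rate, penalty weight lam, and the use of a path-norm penalty as a rough surrogate for the Barron norm are all assumptions made for illustration; the paper's exact architecture and regularizer may differ.

    # Illustrative sketch only: compare implicit vs. explicit regularization
    # when learning f(x) = sum_i x_i^2 with a dense two-layer ReLU network.
    import torch

    d, width, n_train, n_test = 8, 256, 2000, 10000

    def target(x):                                    # f(x) = sum_i x_i^2
        return (x ** 2).sum(dim=1, keepdim=True)

    x_train = 2 * torch.rand(n_train, d) - 1          # uniform samples on [-1, 1]^d
    y_train = target(x_train)
    x_test = 2 * torch.rand(n_test, d) - 1
    y_test = target(x_test)

    class TwoLayer(torch.nn.Module):
        def __init__(self, d, width):
            super().__init__()
            self.hidden = torch.nn.Linear(d, width)
            self.out = torch.nn.Linear(width, 1)

        def forward(self, x):
            return self.out(torch.relu(self.hidden(x)))

        def path_norm(self):
            # sum_k |a_k| * ||w_k||_1 : a crude surrogate for the Barron norm (assumption)
            w = self.hidden.weight.abs().sum(dim=1)   # ||w_k||_1 per hidden unit
            a = self.out.weight.abs().squeeze(0)      # outer-layer coefficients |a_k|
            return (a * w).sum()

    def train(lam):
        model = TwoLayer(d, width)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(5000):                         # full-batch gradient steps
            opt.zero_grad()
            loss = torch.mean((model(x_train) - y_train) ** 2) + lam * model.path_norm()
            loss.backward()
            opt.step()
        with torch.no_grad():
            train_mse = torch.mean((model(x_train) - y_train) ** 2).item()
            test_mse = torch.mean((model(x_test) - y_test) ** 2).item()
        return train_mse, test_mse

    print("implicit regularization (lam=0):   ", train(0.0))
    print("explicit regularization (lam=1e-4):", train(1e-4))

Under this kind of setup, the gap between train and test error of the unregularized dense network is what the abstract calls the generalization gap; the explicit penalty is intended to shrink it as d grows.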

