On the Sample Complexity of Learning from a Sequence of Experiments

02/12/2018
by Longyun Guo, et al.

We analyze the sample complexity of a new problem: learning from a sequence of experiments. In this problem, the learner must choose a hypothesis that performs well with respect to an infinite sequence of experiments and their associated data distributions. In practice, the learner can perform only m experiments, with a total of N samples drawn from those data distributions. Using a Rademacher complexity approach, we show that the gap between the training and generalization error is O(√(m/N)). We also provide examples for linear prediction, two-layer neural networks, and kernel methods.
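To give a flavor of where a rate of this order can come from (an illustrative sketch, not the paper's statement: we assume a loss ℓ bounded in [0,1], a hypothesis class H whose Rademacher complexity on n samples scales as c/√n, and that each of the m experiments receives N/m of the samples), a standard per-experiment Rademacher bound combined with a union bound over the m experiments gives, with probability at least 1−δ,

\[
\sup_{h\in\mathcal{H}}\;\frac{1}{m}\sum_{i=1}^{m}\Big(\mathbb{E}_{z\sim D_i}\big[\ell(h,z)\big]-\frac{m}{N}\sum_{j=1}^{N/m}\ell(h,z_{i,j})\Big)\;\le\;2c\sqrt{\frac{m}{N}}+\sqrt{\frac{m\log(m/\delta)}{2N}}\;=\;O\!\left(\sqrt{\frac{m}{N}}\right),
\]

i.e., the training-to-generalization gap is of the stated order up to logarithmic factors under these assumptions.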


