Using More Data to Speed-up Training Time

06/06/2011
by Shai Shalev-Shwartz, et al.

In many recent applications, data is plentiful. By now, we have a rather clear understanding of how more data can be used to improve the accuracy of learning algorithms. Recently, there has been a growing interest in understanding how more data can be leveraged to reduce the required training runtime. In this paper, we study the runtime of learning as a function of the number of available training examples, and underscore the main high-level techniques. We provide some initial positive results showing that the runtime can decrease exponentially while requiring only polynomial growth in the number of examples, and spell out several interesting open problems.

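The flavor of this trade-off can be illustrated with a toy contrast (this is a hedged sketch, not the paper's construction): an exact ERM search over a small hypothesis class needs few examples but pays a combinatorial runtime, while a cheap one-pass learner over a larger class runs in time linear in the data it consumes but needs more examples to reach the same accuracy. The dimensions `d`, `k`, the conjunction target, and the perceptron learner below are all illustrative assumptions.

```python
# Toy illustration (not the paper's algorithm): trade extra examples for runtime
# by replacing an expensive exact search with a cheap improper learner.
import itertools
import numpy as np

rng = np.random.default_rng(0)

d, k = 30, 3                                  # d binary features; target = AND of k of them
target = rng.choice(d, size=k, replace=False)

def sample(m):
    """Draw m labeled examples from the toy conjunction distribution."""
    X = rng.integers(0, 2, size=(m, d))
    y = X[:, target].all(axis=1).astype(int)
    return X, y

def erm_conjunctions(X, y):
    """Exact ERM by exhaustive search over all (d choose k) conjunctions.
    Needs few examples, but runtime grows combinatorially with d and k."""
    best, best_err = None, np.inf
    for idx in itertools.combinations(range(d), k):
        pred = X[:, list(idx)].all(axis=1).astype(int)
        err = np.mean(pred != y)
        if err < best_err:
            best, best_err = idx, err
    return best

def perceptron_stream(X, y):
    """Single pass of the perceptron over a data stream (a larger, linear class).
    Each example costs O(d), so runtime is linear in the data consumed."""
    w, b = np.zeros(d), 0.0
    for x, label in zip(X, 2 * y - 1):        # labels mapped to {-1, +1}
        if label * (w @ x + b) <= 0:
            w += label * x
            b += label
    return w, b

X_small, y_small = sample(300)                # small sample for the exact search
X_big, y_big = sample(30_000)                 # "more data" for the cheap learner

conj = erm_conjunctions(X_small, y_small)
w, b = perceptron_stream(X_big, y_big)

X_test, y_test = sample(5_000)                # fresh data for evaluation
acc_a = np.mean(X_test[:, list(conj)].all(axis=1).astype(int) == y_test)
acc_b = np.mean(((X_test @ w + b) > 0).astype(int) == y_test)
print(f"exact ERM acc={acc_a:.3f}, one-pass perceptron acc={acc_b:.3f}")
```

The point of the sketch is the scaling, not the particular numbers: the exhaustive search costs on the order of (d choose k) passes over its sample, while the streaming learner's cost grows only linearly in the examples it is fed, so as d grows it becomes cheaper to buy accuracy with more data than with more search, in the spirit of the abstract's exponential-runtime-for-polynomial-sample trade-off.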