Learning Model Bias

11/14/2019
by Jonathan Baxter, et al.

In this paper the problem of learning appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a theorem is given bounding the number of tasks that must be learnt. A corollary of the theorem is that if the tasks are known to possess a common internal representation or preprocessing, then the number of examples required per task for good generalisation when learning n tasks simultaneously scales like O(a + b/n), where O(a) is a bound on the minimum number of examples required to learn a single task, and O(a + b) is a bound on the number of examples required to learn each task independently. An experiment providing strong qualitative support for the theoretical results is reported.
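A minimal sketch of the idea behind the O(a + b/n) scaling (this is not the paper's construction): n related tasks are trained jointly through one shared linear representation F, while each task keeps its own output head w_t. The shared parameters are fit on the pooled examples of all n tasks, so the per-task burden of learning the common preprocessing shrinks as n grows. The synthetic data generator, dimensions, and all names below are illustrative assumptions.

```python
# Sketch: joint training of n tasks with a shared representation (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
N_TASKS, D_IN, SHARED_DIM, N_PER_TASK = 8, 20, 5, 50  # assumed sizes

# Synthetic "related" tasks: every target depends on the same hidden projection.
true_F = rng.normal(size=(D_IN, SHARED_DIM))
true_w = rng.normal(size=(N_TASKS, SHARED_DIM))
X = rng.normal(size=(N_TASKS, N_PER_TASK, D_IN))
Y = np.einsum('tnd,dk,tk->tn', X, true_F, true_w)

# Learnable parameters: one shared representation F, one head per task.
F = rng.normal(size=(D_IN, SHARED_DIM)) * 0.1
W = rng.normal(size=(N_TASKS, SHARED_DIM)) * 0.1

lr = 1e-2
for step in range(3000):
    H = np.einsum('tnd,dk->tnk', X, F)       # shared features for every task
    pred = np.einsum('tnk,tk->tn', H, W)     # task-specific predictions
    err = pred - Y                           # residuals, shape (tasks, examples)

    # Squared-error gradients: each head sees only its own task's examples,
    # while F accumulates signal averaged over all tasks and examples.
    grad_W = np.einsum('tn,tnk->tk', err, H) / N_PER_TASK
    grad_F = np.einsum('tn,tnd,tk->dk', err, X, W) / (N_TASKS * N_PER_TASK)

    W -= lr * grad_W
    F -= lr * grad_F

mse = np.mean((np.einsum('tnd,dk,tk->tn', X, F, W) - Y) ** 2)
print(f"joint training MSE across {N_TASKS} tasks: {mse:.4f}")
```

In this reading, O(a) corresponds to the examples needed to fit a single head once the shared representation is in hand, and the O(b/n) term reflects the cost of learning the representation itself being spread across the n tasks.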

Related research

- Learning Internal Representations (11/09/2019): Most machine learning theory and practice is concerned with learning a s...
- Learning Internal Representations (PhD Thesis) (11/09/2019): Most machine learning theory and practice is concerned with learning a s...
- A Bayesian/Information Theoretic Model of Bias Learning (11/14/2019): In this paper the problem of learning appropriate bias for an environmen...
- Theoretical Models of Learning to Learn (02/27/2020): A Machine can only learn if it is biased in some way. Typically the bias...
- A Model of Inductive Bias Learning (06/01/2011): A major problem in machine learning is that of inductive bias: how to ch...
- The Representation Race - Preprocessing for Handling Time Phenomena (10/26/2020): Designing the representation languages for the input, L_E, and output, L...