How to Shift Bias: Lessons from the Baldwin Effect

12/10/2002
by Peter D. Turney, et al.

An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias shifting algorithms. The Baldwin effect was proposed in 1896 to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, which we constructed explicitly to illustrate the lessons that the Baldwin effect has for research in bias shifting algorithms. The main lesson is that a good strategy for shifting bias in a learning algorithm appears to be to begin with a weak bias and gradually shift to a strong bias.
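The abstract does not spell out the authors' variation, but the original Hinton and Nowlan (1987) model it builds on is simple enough to sketch. The following is a minimal, assumption-laden sketch of that original model, not the paper's variant: genotypes have 20 loci with alleles 0, 1, or plastic `?`; learning is random guessing of the plastic loci; fitness rewards finding the all-ones target quickly. The population size and function names here are my own choices.

```python
import random

# A minimal sketch of the Hinton & Nowlan (1987) model of the Baldwin effect.
L = 20             # loci per genotype, as in the original model
POP = 200          # population size (assumed; chosen for a quick run)
TRIALS = 1000      # learning trials per lifetime, as in the original model
TARGET = [1] * L   # the single "good" phenotype: all ones

def random_genotype():
    # Alleles are 0, 1, or '?'; '?' is plastic (settable by learning)
    # and appears with probability 0.5, as in the original model.
    return [random.choice([0, 1, '?', '?']) for _ in range(L)]

def fitness(geno):
    # A wrongly fixed allele can never be corrected by learning.
    if any(a != '?' and a != t for a, t in zip(geno, TARGET)):
        return 1.0
    # Each trial guesses all plastic loci at random, so the chance of
    # hitting the target on one trial is 0.5 ** (number of '?' alleles).
    p = 0.5 ** geno.count('?')
    for n in range(TRIALS):
        if random.random() < p:
            # Reward finding the target early: 1 + 19 * (remaining / total).
            return 1.0 + (L - 1) * (TRIALS - n) / TRIALS
    return 1.0

def evolve(generations=20):
    pop = [random_genotype() for _ in range(POP)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits)

        def pick():
            # Fitness-proportional (roulette-wheel) selection.
            r = random.uniform(0, total)
            for ind, f in zip(pop, fits):
                r -= f
                if r <= 0:
                    return ind
            return pop[-1]

        # One-point crossover, no mutation, as in the original model.
        nxt = []
        for _ in range(POP):
            a, b = pick(), pick()
            cut = random.randrange(1, L)
            nxt.append(a[:cut] + b[cut:])
        pop = nxt
    return pop
```

Plastic `?` alleles act as a weak bias (many phenotypes remain reachable by learning), while fixed alleles act as a strong bias. Over generations, selection tends to replace `?` alleles with correctly fixed ones, which is the weak-to-strong shift of bias the abstract describes.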
