
The Futility of Bias-Free Learning and Search

by George D. Montañez, et al.
Harvey Mudd College

Building on the view of machine learning as search, we demonstrate the necessity of bias in learning, quantifying the role of bias (measured relative to a collection of possible datasets, or more generally, information resources) in increasing the probability of success. For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above. Furthermore, we demonstrate that bias is a conserved quantity, such that no algorithm can be favorably biased towards many distinct targets simultaneously. Thus bias encodes trade-offs. The probability of success for a task can also be measured geometrically, as the angle of agreement between what holds for the actual task and what is assumed by the algorithm, represented in its bias. Lastly, finding a favorably biasing distribution over a fixed set of information resources is provably difficult, unless the set of resources itself is already favorable with respect to the given task and algorithm.
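The conservation claim above can be illustrated with a toy simulation (a minimal sketch, not the paper's formalism; the search space, target size, and fixed-guess searcher are all assumptions chosen for illustration). A deterministic searcher that always guesses one fixed element is maximally biased toward targets containing that element, yet its bias averaged over all targets of a fixed size comes out to zero: favorable bias toward some targets is exactly offset by unfavorable bias toward the rest.

```python
from itertools import combinations

OMEGA = range(10)      # toy search space of 10 elements (assumed for illustration)
K = 3                  # fixed target size (assumed)
p = K / len(OMEGA)     # baseline probability of success for a uniform random guess

# A maximally "biased" deterministic searcher: it always guesses element 0,
# so it succeeds exactly on those targets that contain 0.
def succeeds(target):
    return 0 in target

targets = list(combinations(OMEGA, K))
# Per-target bias: actual success indicator minus the uniform baseline p.
biases = [succeeds(t) - p for t in targets]
avg_bias = sum(biases) / len(targets)
# avg_bias is (numerically) zero: the bias toward targets containing 0
# is conserved, i.e. paid for by bias away from all other targets.
```

The same cancellation occurs for any deterministic guess and any target size, which is the sense in which no algorithm can be favorably biased toward many distinct targets simultaneously.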
