
The Bias-Expressivity Trade-off

by Julius Lauw, et al.

Learning algorithms need bias to generalize and perform better than random guessing. We examine the flexibility (expressivity) of biased algorithms. An expressive algorithm can adapt to changing training data, altering its outcome based on changes in its input. We measure expressivity using an information-theoretic notion of entropy on algorithm outcome distributions, demonstrating a trade-off between bias and expressivity. The degree to which an algorithm is biased is the degree to which it can outperform uniform random sampling, but it is also the degree to which the algorithm becomes inflexible. We derive bounds relating bias to expressivity, proving the trade-offs inherent in trying to create strongly performing yet flexible algorithms.
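The entropy-based expressivity measure can be illustrated with a minimal sketch (not the paper's formal definitions): Shannon entropy over an algorithm's outcome distribution is low when the algorithm is strongly biased toward a few outcomes and maximal when outcomes are uniformly distributed. The distributions below are invented for illustration.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# A strongly biased algorithm concentrates probability mass on a few
# outcomes, yielding low entropy (low expressivity):
biased = [0.97, 0.01, 0.01, 0.01]

# Uniform random sampling spreads mass evenly, yielding maximal entropy
# (maximal expressivity) over the same four outcomes:
uniform = [0.25, 0.25, 0.25, 0.25]

print(entropy(biased))   # low entropy: inflexible but able to outperform chance
print(entropy(uniform))  # 2.0 bits, the maximum for four outcomes
```

Under this measure, pushing probability mass toward favored outcomes (increasing bias) necessarily lowers entropy, which is the trade-off the paper bounds formally.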
