
The Bias-Expressivity Trade-off

11/09/2019
by Julius Lauw, et al.

Learning algorithms need bias to generalize and perform better than random guessing. We examine the flexibility (expressivity) of biased algorithms. An expressive algorithm can adapt to changing training data, altering its outcome based on changes in its input. We measure expressivity using an information-theoretic notion of entropy on algorithm outcome distributions, demonstrating a trade-off between bias and expressivity. The degree to which an algorithm is biased is the degree to which it can outperform uniform random sampling, but it is also the degree to which it becomes inflexible. We derive bounds relating bias to expressivity, proving the necessary trade-offs inherent in trying to create strongly performing yet flexible algorithms.
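The core intuition can be sketched numerically: if expressivity is the Shannon entropy of an algorithm's distribution over possible outcomes, then a distribution concentrated on a few favored outcomes (a biased algorithm) necessarily has lower entropy than the uniform distribution. The distributions below are hypothetical, chosen only to illustrate the entropy measure; they are not taken from the paper.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical outcome distributions over four possible labelings.
uniform = [0.25, 0.25, 0.25, 0.25]  # unbiased: maximally expressive
biased = [0.85, 0.05, 0.05, 0.05]   # biased toward one favored outcome

print(entropy(uniform))  # 2.0 bits, the maximum for four outcomes
print(entropy(biased))   # strictly less than 2.0 bits
```

Concentrating probability mass on preferred outcomes is exactly what lets a biased algorithm beat uniform random sampling, and the entropy drop quantifies the expressivity given up in exchange.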


Related research

Adaptive Trade-Offs in Off-Policy Learning (10/16/2019)
A great variety of off-policy learning algorithms exist in the literatur...

The Futility of Bias-Free Learning and Search (07/13/2019)
Building on the view of machine learning as search, we demonstrate the n...

Statistical discrimination in learning agents (10/21/2021)
Undesired bias afflicts both human and algorithmic decision making, and ...

Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling (01/08/2020)
We establish an exactly tight relation between reversible pebblings of g...

Preventing Discriminatory Decision-making in Evolving Data Streams (02/16/2023)
Bias in machine learning has rightly received significant attention over...

Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions (02/20/2019)
The seminal result of Kahn, Kalai and Linial shows that a coalition of O...