The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning

04/11/2023
by Micah Goldblum et al.

No free lunch theorems for supervised learning state that no learner can solve all problems, or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While virtually all uniformly sampled datasets have high complexity, real-world problems disproportionately generate low-complexity data, and we argue that neural network models share this same preference, formalized using Kolmogorov complexity. Notably, we show that architectures designed for a particular domain, such as computer vision, can compress datasets on a variety of seemingly unrelated domains. Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences. Whereas no free lunch theorems seemingly indicate that individual problems require specialized learners, we explain how tasks that often require human intervention, such as picking an appropriately sized model when labeled data is scarce or plentiful, can be automated into a single learning algorithm. These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
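Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor yields an upper bound on it, which is the sense in which compressed size can stand in for complexity. The sketch below is a minimal illustration of that idea, not the paper's own experimental procedure: it uses Python's gzip to compare the compressibility of a highly structured byte sequence with a uniformly sampled one, mirroring the claim that uniformly sampled data is almost never low-complexity while structured, real-world-like data is.

```python
# Minimal sketch (illustration only, not the paper's experimental setup):
# the length of a gzip-compressed encoding upper-bounds Kolmogorov complexity
# up to an additive constant, so it serves as a crude complexity proxy.
import gzip
import random

def complexity_upper_bound(data: bytes) -> int:
    """Compressed size in bytes: an upper bound on the Kolmogorov complexity of `data`."""
    return len(gzip.compress(data))

n = 100_000

# Structured sequence standing in for real-world, low-complexity data.
structured = bytes(i % 10 for i in range(n))

# Uniformly sampled sequence, as in the no-free-lunch setting.
random.seed(0)
uniform = bytes(random.getrandbits(8) for _ in range(n))

print("structured:", complexity_upper_bound(structured), "bytes")
print("uniform:   ", complexity_upper_bound(uniform), "bytes")
# The structured sequence compresses to a small fraction of n, while the uniform
# sequence stays close to n bytes: low-complexity sequences are vanishingly rare
# under the uniform distribution, yet typical of real-world data.
```

Analogously, the paper's experiments ask how well neural network models, rather than generic compressors, compress datasets and the sequences they themselves generate.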
