Intelligence, physics and information – the tradeoff between accuracy and simplicity in machine learning

by Tailin Wu et al.

How can we enable machines to make sense of the world and become better at learning? I believe two perspectives make this goal approachable: viewing intelligence as a collection of integral aspects, and viewing learning through a universal two-term tradeoff between task performance and complexity. In this thesis, I address several key questions within these aspects of intelligence and study the phase transitions that arise in the two-term tradeoff, using strategies and tools from physics and information theory. First, how can we make learning models more flexible and efficient, so that agents can learn quickly from fewer examples? Inspired by how physicists model the world, we introduce a paradigm and an AI Physicist agent that simultaneously learns many small specialized models (theories) together with the domains in which they are accurate; these theories can then be simplified, unified, and stored, facilitating few-shot learning in a continual setting. Second, for representation learning, when can we learn a good representation, and how does learning depend on the structure of the dataset? We approach this question by studying the phase transitions that occur as the tradeoff hyperparameter is tuned. For the Information Bottleneck, we show theoretically that these phase transitions are predictable and reveal structure in the relationships between the data, the model, the learned representation, and the loss landscape. Third, how can agents discover causality from observations? We address part of this question with an algorithm that combines prediction with minimization of information from the input, enabling exploratory causal discovery from observational time series. Fourth, to make models more robust to label noise, we introduce Rank Pruning, a robust algorithm for classification with noisy labels. I believe that building on the work in this thesis brings us one step closer to more intelligent machines that can make sense of the world.
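The phase transitions in the Information Bottleneck tradeoff can be illustrated on a toy discrete problem. The sketch below is my own minimal example, not code from the thesis: it runs the classic self-consistent iteration for the IB objective I(X;T) − β·I(T;Y) on a small hand-made joint distribution `p_xy` (an assumed example), then sweeps the tradeoff hyperparameter β. At small β the encoder collapses to a trivial representation; as β grows past critical values, I(X;T) jumps as the representation starts distinguishing more inputs.

```python
import numpy as np

def ib_update(p_xy, beta, n_t=4, n_iter=300, seed=0):
    """Iterative (Blahut-Arimoto-style) IB solver for a discrete p(x, y).
    Returns the learned stochastic encoder p(t|x) as an (n_x, n_t) array."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)
    q_t_given_x = rng.dirichlet(np.ones(n_t), size=n_x)  # random init
    for _ in range(n_iter):
        q_t = p_x @ q_t_given_x                  # marginal p(t)
        # decoder p(y|t) = sum_x p(y|x) p(x) p(t|x) / p(t)
        q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x / q_t[:, None]
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        log_ratio = (np.log(p_y_given_x[:, None, :] + 1e-12)
                     - np.log(q_y_given_t[None, :, :] + 1e-12))
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # self-consistent encoder: p(t|x) ∝ p(t) exp(-beta * KL)
        logits = np.log(q_t + 1e-12)[None, :] - beta * kl
        logits -= logits.max(axis=1, keepdims=True)
        q_t_given_x = np.exp(logits)
        q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)
    return q_t_given_x

def mutual_info(p_joint):
    """Mutual information (in nats) of a 2-D joint distribution."""
    px, py = p_joint.sum(axis=1), p_joint.sum(axis=0)
    mask = p_joint > 0
    return float((p_joint[mask]
                  * np.log(p_joint[mask] / (px[:, None] * py[None, :])[mask])).sum())

# Toy p(x, y): 4 inputs, 2 labels, with more and less label-informative inputs.
p_xy = np.array([[0.20, 0.05],
                 [0.05, 0.20],
                 [0.15, 0.10],
                 [0.10, 0.15]])

for beta in [0.5, 2.0, 10.0]:
    enc = ib_update(p_xy, beta)
    p_xt = p_xy.sum(axis=1)[:, None] * enc       # joint p(x, t)
    print(f"beta={beta:5.1f}  I(X;T)={mutual_info(p_xt):.3f}")
```

Sweeping β on a fine grid and plotting I(X;T) and I(T;Y) makes the transitions visible as discontinuous jumps at critical β values; the thesis's contribution is to predict where those critical points occur from the structure of the data and model, which this toy sweep only exhibits empirically.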






Phase Transitions for the Information Bottleneck in Representation Learning

In the Information Bottleneck (IB), when tuning the relative strength be...

Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning

Inducing causal relationships from observations is a classic problem in ...

Revisiting Causality Inference in Memory-less Transition Networks

Several methods exist to infer causal networks from massive volumes of o...

Interpretable Phase Detection and Classification with Persistent Homology

We apply persistent homology to the task of discovering and characterizi...

Efficient human-like semantic representations via the Information Bottleneck principle

Maintaining efficient semantic representations of the environment is a m...

Transfer learning of phase transitions in percolation and directed percolation

The latest advances of statistical physics have shown remarkable perform...

Toward an AI Physicist for Unsupervised Learning

We investigate opportunities and challenges for improving unsupervised m...