Thermodynamic Machine Learning through Maximum Work Production

06/27/2020
by A. B. Boyd et al.

Adaptive thermodynamic systems – such as a biological organism attempting to gain survival advantage, an autonomous robot performing a functional task, or a motor protein transporting intracellular nutrients – can improve their performance by effectively modeling the regularities and stochasticity in their environments. Analogously, but in a purely computational realm, machine learning algorithms seek to estimate models that capture predictable structure and identify irrelevant noise in training data by optimizing performance measures, such as a model's log-likelihood of having generated the data. Is there a sense in which these computational models are physically preferred? For adaptive physical systems we introduce the organizing principle that thermodynamic work is the most relevant performance measure of advantageously modeling an environment. Specifically, a physical agent's model determines how much useful work it can harvest from an environment. We show that when such agents maximize work production they also maximize their environmental model's log-likelihood, establishing an equivalence between thermodynamics and learning. In this way, work maximization appears as an organizing principle that underlies learning in adaptive thermodynamic systems.
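The claimed equivalence can be sketched numerically. In a minimal illustration, assume (hypothetically, in the spirit of Landauer-style bounds) that the work an agent can harvest from a length-L binary environment is an increasing affine function of its model's log-likelihood of that data. Under that assumption, the work-maximizing model and the maximum-likelihood model necessarily coincide, since a monotone transformation preserves the argmax. The constant `kT` and the specific affine form `W = kT * (log_likelihood + L * ln 2)` are illustrative assumptions, not the paper's exact expression.

```python
import math

# Illustrative assumption (not the paper's exact formula): harvestable work
# is an increasing affine function of the model's log-likelihood,
#     W(theta) = kT * (log_likelihood(theta) + L * ln 2),
# so maximizing work and maximizing log-likelihood pick the same model.

kT = 1.0  # thermal energy scale, units chosen so k_B * T = 1

data = [1, 0, 1, 1, 0, 1, 1, 1]  # observed binary environment (6 ones of 8)
L = len(data)

def log_likelihood(theta):
    """Log-likelihood of an i.i.d. biased-coin model with Pr(1) = theta."""
    return sum(math.log(theta if y == 1 else 1.0 - theta) for y in data)

def work(theta):
    """Assumed work production for model theta (affine in log-likelihood)."""
    return kT * (log_likelihood(theta) + L * math.log(2))

# Scan candidate models on a grid and compare the two optimizers.
thetas = [i / 100 for i in range(1, 100)]
theta_ml = max(thetas, key=log_likelihood)  # maximum-likelihood model
theta_w = max(thetas, key=work)             # work-maximizing model

print(theta_ml, theta_w)  # both select the empirical frequency, 0.75
```

Because work here is a strictly increasing function of log-likelihood, both criteria select the empirical bias of the data; this is the sense in which, under the paper's thesis, thermodynamic performance and statistical learning performance rank models identically.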
