A Separation Principle for Control in the Age of Deep Learning

11/09/2017
by Alessandro Achille, et al.

We review the problem of defining and inferring a "state" for a control system based on complex, high-dimensional, highly uncertain measurement streams such as video. Such a state, or representation, should contain all and only the information needed for control, discounting nuisance variability in the data. It should also have finite complexity, ideally modulated by the available resources. This representation is what we want to store in memory in lieu of the data, as it "separates" the control task from the measurement process. For the trivial case with no dynamics, a representation can be inferred by minimizing the Information Bottleneck Lagrangian in a function class realized by deep neural networks. The resulting representation may have much higher dimension than the data, already in the millions, but it is smaller in the sense of information content, retaining only what is needed for the task. This process also yields representations that are invariant to nuisance factors and have maximally independent components. We extend these ideas to the dynamic case, where the representation is the posterior density of the task variable given the measurements up to the current time; this is in general much simpler than the prediction density maintained by the classical Bayesian filter. Again, the representation can be finitely parametrized by a deep neural network, and some applications are already beginning to emerge. No explicit assumption of Markovianity is needed; instead, complexity trades off against the quality of approximation of an optimal representation, including the degree of Markovianity.
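To make the static case concrete, here is a minimal numpy sketch of the variational form of the Information Bottleneck Lagrangian, where a stochastic Gaussian encoder maps data x to a bottleneck z and the loss is the task cross-entropy plus beta times a KL complexity term. All weights, dimensions, and the choice of a linear encoder/decoder are illustrative assumptions for this sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def ib_lagrangian(x, y, W_mu, W_logvar, W_dec, beta=1e-3):
    """One Monte Carlo estimate of the IB Lagrangian:
    cross-entropy (task term) + beta * KL(q(z|x) || N(0, I)) (complexity term).
    Linear encoder/decoder are illustrative placeholders for a deep network."""
    mu = x @ W_mu                            # encoder mean
    logvar = x @ W_logvar                    # encoder log-variance
    # reparameterized sample of the bottleneck variable z
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    logits = z @ W_dec                       # task (classifier) head
    logits = logits - logits.max(axis=1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(y)), y].mean()                 # estimate of H(y|z)
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1).mean()
    return ce + beta * kl

# toy batch: 8 samples, 4 features, 3 classes, 2-d bottleneck
x = rng.standard_normal((8, 4))
y = rng.integers(0, 3, size=8)
loss = ib_lagrangian(
    x, y,
    rng.standard_normal((4, 2)) * 0.1,   # W_mu
    rng.standard_normal((4, 2)) * 0.1,   # W_logvar
    rng.standard_normal((2, 3)) * 0.1,   # W_dec
)
print(float(loss))
```

Raising beta shrinks the information the bottleneck retains about x (driving q(z|x) toward the standard normal prior), which is the knob that trades representation complexity against task performance.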

