The Distributed Information Bottleneck reveals the explanatory structure of complex systems

04/15/2022
by Kieran A. Murphy, et al.

The fruits of science are relationships made comprehensible, often by way of approximation. While deep learning is an extremely powerful way to find relationships in data, its use in science has been hindered by the difficulty of understanding the learned relationships. The Information Bottleneck (IB) is an information-theoretic framework for understanding a relationship between an input and an output in terms of a trade-off between the fidelity and complexity of approximations to the relationship. Here we show that a crucial modification – distributing bottlenecks across multiple components of the input – opens fundamentally new avenues for interpretable deep learning in science. The Distributed Information Bottleneck throttles the downstream complexity of interactions between the components of the input, deconstructing a relationship into meaningful approximations found through deep learning without requiring custom-made datasets or neural network architectures. Applied to a complex system, the approximations illuminate aspects of the system's nature by restricting – and monitoring – the information about different components incorporated into the approximation. We demonstrate the Distributed IB's explanatory utility in systems drawn from applied mathematics and condensed matter physics. In the former, we deconstruct a Boolean circuit into approximations that isolate the most informative subsets of input components without requiring exhaustive search. In the latter, we localize information about future plastic rearrangement in the static structure of a sheared glass, and find the information to be more or less diffuse depending on the system's preparation. By way of a principled scheme of approximations, the Distributed IB brings much-needed interpretability to deep learning and enables unprecedented analysis of information flow through a system.
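To make the method above concrete: the Distributed IB is commonly realized with variational bounds in the spirit of the Variational Information Bottleneck, where each input component passes through its own stochastic (Gaussian) channel and a KL-divergence penalty against a fixed prior upper-bounds the information transmitted about that component. The following is a minimal PyTorch sketch under that assumption; the names (DistributedIB, distributed_ib_loss), the scalar per-component encoders, and all architecture sizes are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributedIB(nn.Module):
    """Sketch of a Distributed Information Bottleneck model.

    Each input component gets its own stochastic encoder; a KL penalty
    to a standard-normal prior upper-bounds the information the latent
    carries about that component, throttling each channel separately.
    """

    def __init__(self, n_components, latent_dim=8, hidden=32, n_classes=2):
        super().__init__()
        # One small encoder per (scalar) input component.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * latent_dim))
            for _ in range(n_components)
        )
        # The decoder sees the concatenated per-component latents.
        self.decoder = nn.Sequential(
            nn.Linear(n_components * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes)
        )

    def forward(self, x):
        # x: (batch, n_components) float tensor
        zs, kls = [], []
        for i, enc in enumerate(self.encoders):
            mu, logvar = enc(x[:, i:i + 1]).chunk(2, dim=-1)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
            # KL( N(mu, sigma^2) || N(0, 1) ): a variational upper bound
            # on the information transmitted about component i.
            kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(-1)
            zs.append(z)
            kls.append(kl.mean())
        logits = self.decoder(torch.cat(zs, dim=-1))
        return logits, torch.stack(kls)

def distributed_ib_loss(logits, y, kls, beta):
    # Fidelity (cross-entropy) traded against per-component complexity (KL),
    # with beta setting the position along the trade-off.
    return F.cross_entropy(logits, y) + beta * kls.sum()

# Hypothetical usage:
#   model = DistributedIB(n_components=10)
#   logits, kls = model(x)  # x: (batch, 10)
#   loss = distributed_ib_loss(logits, y, kls, beta=1e-2)

In a sketch like this, sweeping beta traces out the fidelity-complexity trade-off described in the abstract, and at convergence each entry of kls reports an upper bound on how much information about the corresponding component the model uses – one way to restrict and monitor the information incorporated into the approximation.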


