
The Distributed Information Bottleneck reveals the explanatory structure of complex systems

by Kieran A. Murphy et al.
University of Pennsylvania

The fruits of science are relationships made comprehensible, often by way of approximation. While deep learning is an extremely powerful way to find relationships in data, its use in science has been hindered by the difficulty of understanding the learned relationships. The Information Bottleneck (IB) is an information theoretic framework for understanding a relationship between an input and an output in terms of a trade-off between the fidelity and complexity of approximations to the relationship. Here we show that a crucial modification – distributing bottlenecks across multiple components of the input – opens fundamentally new avenues for interpretable deep learning in science. The Distributed Information Bottleneck throttles the downstream complexity of interactions between the components of the input, deconstructing a relationship into meaningful approximations found through deep learning without requiring custom-made datasets or neural network architectures. Applied to a complex system, the approximations illuminate aspects of the system's nature by restricting – and monitoring – the information about different components incorporated into the approximation. We demonstrate the Distributed IB's explanatory utility in systems drawn from applied mathematics and condensed matter physics. In the former, we deconstruct a Boolean circuit into approximations that isolate the most informative subsets of input components without requiring exhaustive search. In the latter, we localize information about future plastic rearrangement in the static structure of a sheared glass, and find the information to be more or less diffuse depending on the system's preparation. By way of a principled scheme of approximations, the Distributed IB brings much-needed interpretability to deep learning and enables unprecedented analysis of information flow through a system.
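The core idea above is to place a separate information bottleneck on each component of the input and penalize the total information passed downstream. As a rough illustration of that objective (not the authors' implementation; the Gaussian encoders, the KL-to-standard-normal rate term, and all function names here are assumptions), a distributed IB loss can be sketched as a distortion term plus a weighted sum of per-component rate terms:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.
    # This is the "rate" charged to one input component's bottleneck.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def distributed_ib_loss(per_feature_mus, per_feature_log_vars, distortion, beta):
    # One bottleneck per input component: the total rate is the sum of the
    # per-component KL terms, and beta trades off rate against distortion.
    # Sweeping beta traces out the spectrum of approximations to the
    # relationship, from maximally compressed to maximally faithful.
    rate = sum(
        kl_to_standard_normal(mu, lv).mean()
        for mu, lv in zip(per_feature_mus, per_feature_log_vars)
    )
    return distortion + beta * rate

# Toy usage: three input components, batch of 4, latent dimension 2.
# With mu = 0 and log_var = 0 every encoder matches the prior exactly,
# so the rate is zero and the loss reduces to the distortion alone.
mus = [np.zeros((4, 2)) for _ in range(3)]
log_vars = [np.zeros((4, 2)) for _ in range(3)]
loss = distributed_ib_loss(mus, log_vars, distortion=1.5, beta=1e-2)
print(loss)  # 1.5
```

Monitoring each component's rate term during training is what localizes the information: components whose bottlenecks carry nonzero rate at a given beta are the ones the approximation relies on.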





Code Repositories


Code for "The Distributed Information Bottleneck reveals the explanatory structure of complex systems"