Deep Information Networks

03/06/2018
by Giulio Franzese, et al.

We describe a novel classifier with a tree structure, designed using information-theoretic concepts. This Information Network is made of information nodes, which compress the input data, and multiplexers, which connect two or more input nodes to an output node. Each information node is trained, independently of the others, to minimize a local cost: the mutual information between its input and output, under the constraint of keeping a given mutual information between its output and the target (the information bottleneck principle). We show that the system achieves good accuracy, while offering advantages in terms of modularity and reduced complexity.
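The per-node training objective described above can be sketched as an information bottleneck Lagrangian for discrete variables: minimize I(X;T) subject to a constraint on I(T;Y), written as I(X;T) - beta * I(T;Y). The sketch below (function names and the specific tensor layout are illustrative assumptions, not the paper's implementation) computes this cost for one node given the joint distribution of its input X and the target Y, and a stochastic encoder p(t|x):

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats, from a joint distribution table p(a, b)."""
    pa = p_joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = p_joint.sum(axis=0, keepdims=True)   # marginal p(b)
    mask = p_joint > 0                        # 0 * log 0 := 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (pa @ pb)[mask])))

def ib_lagrangian(p_xy, p_t_given_x, beta):
    """Information-bottleneck cost of one node: I(X;T) - beta * I(T;Y).

    p_xy:        joint distribution of node input X and target Y, shape (nx, ny)
    p_t_given_x: stochastic encoder of the node, shape (nx, nt)
    beta:        trade-off weight enforcing the I(T;Y) constraint
    """
    p_x = p_xy.sum(axis=1)                    # p(x)
    p_xt = p_x[:, None] * p_t_given_x         # p(x, t) = p(x) p(t|x)
    p_ty = p_t_given_x.T @ p_xy               # p(t, y) via the Markov chain T - X - Y
    return mutual_information(p_xt) - beta * mutual_information(p_ty)
```

For example, an identity encoder keeps all of I(X;T) (no compression), while a constant encoder compresses everything away, driving both terms to zero; training each node independently amounts to searching the encoders between these two extremes.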


