On the Maximum Mutual Information Capacity of Neural Architectures

06/10/2020
by Brandon Foggo, et al.

We derive the closed-form expression of the maximum mutual information - the maximum value of I(X;Z) obtainable via training - for a broad family of neural network architectures. This quantity is essential to several branches of machine learning theory and practice. Quantitatively, we show that the maximum mutual information expressions for these families all stem from generalizations of a single catch-all formula. Qualitatively, we show that the maximum mutual information of an architecture is most strongly influenced by the width of the smallest layer of the network - the "information bottleneck" in a different sense of the phrase - and by any statistical invariances captured by the architecture.
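As a rough illustration of why the narrowest layer matters (a standard data-processing argument, not the paper's closed-form result), consider a feedforward chain $Z_0 = X \to Z_1 \to \cdots \to Z_L = Z$. The data processing inequality gives

$$
I(X;Z) \;\le\; \min_{1 \le k \le L} I(Z_{k-1};Z_k) \;\le\; \min_{1 \le k \le L} H(Z_k),
$$

and if the $k$-th layer has width $d_k$ with each unit resolvable to at most $b$ bits, then $H(Z_k) \le d_k\, b$, so the narrowest layer caps $I(X;Z)$. This crude bound only motivates the bottleneck effect; the paper's contribution is the exact maximum for specific architecture families.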
