The Minimum Information Principle for Discriminative Learning

07/11/2012
by Amir Globerson, et al.

Exponential models of distributions are widely used in machine learning for classification and modeling. It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints. In this work, we argue that for classification tasks, mutual information is a more suitable information-theoretic measure to be optimized. We show how the principle of minimum mutual information generalizes that of maximum entropy, and provides a comprehensive framework for building discriminative classifiers. A game-theoretic interpretation of our approach is then given, and several generalization bounds are provided. We present iterative algorithms for solving the minimum information problem and its convex dual, and demonstrate their performance on various classification tasks. The results show that minimum information classifiers outperform the corresponding maximum entropy models.
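The maximum-entropy baseline the abstract refers to is the familiar conditional exponential model p(y|x) proportional to exp(w_y . x), i.e. multinomial logistic regression, whose gradient matches empirical feature expectations to model expectations. The following is a minimal sketch of that baseline only (not the paper's minimum-information algorithm); the toy data and learning rate are illustrative assumptions.

```python
import numpy as np

# Sketch of a maximum-entropy (multinomial logistic) classifier:
# p(y|x) ∝ exp(w_y · x), fit by gradient ascent on the log-likelihood.
# Toy data and hyperparameters below are assumptions for illustration.

rng = np.random.default_rng(0)
n, d, k = 200, 2, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # linearly separable toy labels

W = np.zeros((k, d))
for _ in range(500):
    scores = X @ W.T
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)             # model p(y|x)
    onehot = np.eye(k)[y]
    # Gradient = empirical feature expectations minus model expectations,
    # exactly the expectation-matching condition of the maxent view.
    grad = (onehot - p).T @ X / n
    W += 0.5 * grad

acc = (np.argmax(X @ W.T, axis=1) == y).mean()
```

On this separable toy problem the fitted model recovers the separating direction; the paper's point is that replacing the entropy objective with mutual information yields a different, discriminatively better-motivated solution under the same expectation constraints.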


Related research

- 01/16/2013, "Maximum Entropy and the Glasses You Are Looking Through": We give an interpretation of the Maximum Entropy (MaxEnt) Principle in g...
- 03/09/2019, "Strengthened Information-theoretic Bounds on the Generalization Error": The following problem is considered: given a joint distribution P_XY and...
- 02/25/2021, "Inductive Mutual Information Estimation: A Convex Maximum-Entropy Copula Approach": We propose a novel estimator of the mutual information between two ordin...
- 01/17/2020, "Exact Information Bottleneck with Invertible Neural Networks: Getting the Best of Discriminative and Generative Modeling": The Information Bottleneck (IB) principle offers a unified approach to m...
- 05/08/2016, "On-Average KL-Privacy and its equivalence to Generalization for Max-Entropy Mechanisms": We define On-Average KL-Privacy and present its properties and connectio...
- 01/30/2013, "Measure Selection: Notions of Rationality and Representation Independence": We take another look at the general problem of selecting a preferred pro...
- 11/15/2011, "Maximum Joint Entropy and Information-Based Collaboration of Automated Learning Machines": We are working to develop automated intelligent agents, which can act an...
