Information Theoretic Learning with Infinitely Divisible Kernels

01/16/2013
by Luis G. Sanchez Giraldo, et al.

In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices, based on Rényi's axiomatic definition of entropy, and examine key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids plug-in density estimation and brings along the representation power of reproducing kernel Hilbert spaces. As an application, we derive a supervised metric learning algorithm using a matrix-based analogue of conditional entropy, achieving results comparable with the state of the art.
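The entropy-like functional described above can be illustrated with a short sketch. This is a minimal, hedged example (not the paper's exact implementation): it assumes an RBF Gram matrix and the matrix-based Rényi α-order form S_α(A) = (1/(1−α)) log₂ tr(A^α), where A is the Gram matrix normalized to unit trace; the function name and the kernel bandwidth are illustrative choices.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy of a sample X (n x d).

    Builds an RBF Gram matrix, normalizes it to unit trace so its
    eigenvalues behave like a probability distribution, and evaluates
    S_alpha(A) = 1/(1 - alpha) * log2(sum_i lambda_i^alpha).
    """
    # Pairwise squared distances and Gaussian (RBF) kernel.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # Unit-trace normalization: eigenvalues are nonnegative and sum to 1.
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
```

Two sanity checks mirror the axiomatic behavior: n identical points give entropy 0 (a single unit eigenvalue), while n well-separated points give the maximum log₂ n, without ever estimating a density.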

Related research

- Measures of Entropy from Data Using Infinitely Divisible Kernels (11/11/2012): Information theory provides principled ways to analyze different inferen...
- On the information-theoretic formulation of network participation (07/24/2023): The participation coefficient is a widely used metric of the diversity o...
- Fast Estimation of Information Theoretic Learning Descriptors using Explicit Inner Product Spaces (01/01/2020): Kernel methods form a theoretically-grounded, powerful and versatile fra...
- Multivariate Extension of Matrix-based Rényi's α-order Entropy Functional (08/23/2018): The matrix-based Rényi's α-order entropy functional was recently introdu...
- Angle Based Dependence Measures in Metric Spaces (06/03/2022): In this article, we introduce a general framework of angle based indepen...
- Undecidability of Underfitting in Learning Algorithms (02/04/2021): Using recent machine learning results that present an information-theore...
- An Entropy-based Learning Algorithm of Bayesian Conditional Trees (03/13/2013): This article offers a modification of Chow and Liu's learning algorithm ...
