
Generalized Bregman Divergence and Gradient of Mutual Information for Vector Poisson Channels

by Liming Wang, et al.
Duke University

We investigate connections between information-theoretic and estimation-theoretic quantities in vector Poisson channel models. In particular, we generalize the gradient of mutual information with respect to key system parameters from the scalar to the vector Poisson channel model. We also propose, as another contribution, a generalization of the classical Bregman divergence that offers a means to encapsulate under a unifying framework the gradient of mutual information results for scalar and vector Poisson and Gaussian channel models. The so-called generalized Bregman divergence is also shown to exhibit various properties akin to the properties of the classical version. The vector Poisson channel model is drawing considerable attention in view of its application in various domains: as an example, the availability of the gradient of mutual information can be used in conjunction with gradient descent methods to effect compressive-sensing projection designs in emerging X-ray and document classification applications.
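To make the projection-design application concrete, here is a minimal toy sketch of how a gradient of mutual information can drive a gradient-ascent design loop for a Poisson channel. The setup is entirely illustrative and not the paper's method: a scalar channel Y ~ Poisson(a·X + dark) with a binary input, mutual information computed exactly by summing the truncated output alphabet, and a finite-difference gradient standing in for the closed-form gradient expressions derived in the paper. All names (`mutual_information`, `grad_mi`, the dark-current value) are assumptions of this sketch.

```python
import math

def poisson_pmf(k, lam):
    # P(Y = k) for Y ~ Poisson(lam)
    return math.exp(-lam) * lam**k / math.factorial(k)

def mutual_information(a, xs, px, dark=0.1, ymax=60):
    # I(X; Y) in nats for Y ~ Poisson(a*x + dark), X discrete on xs with pmf px.
    # The output alphabet is truncated at ymax, which is safe for small rates.
    py = [sum(p * poisson_pmf(y, a * x + dark) for x, p in zip(xs, px))
          for y in range(ymax)]
    mi = 0.0
    for x, p in zip(xs, px):
        for y in range(ymax):
            pyx = poisson_pmf(y, a * x + dark)
            if pyx > 0.0 and py[y] > 0.0:
                mi += p * pyx * math.log(pyx / py[y])
    return mi

def grad_mi(a, xs, px, eps=1e-4):
    # Finite-difference surrogate for the gradient of I(X; Y) w.r.t. the gain a;
    # the paper derives such gradients in closed form for the vector channel.
    return (mutual_information(a + eps, xs, px)
            - mutual_information(a - eps, xs, px)) / (2.0 * eps)

# Toy "projection design": tune the scalar sensing gain a by gradient ascent.
xs, px = [0.5, 3.0], [0.5, 0.5]
a = 1.0
for _ in range(20):
    a += 0.5 * grad_mi(a, xs, px)
```

In the vector case the scalar gain `a` becomes a projection matrix and the finite-difference step would be replaced by the closed-form gradient, but the ascent loop has the same shape.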



