Strong data processing constant is achieved by binary inputs

09/16/2020
by Or Ordentlich, et al.

For any channel P_Y|X, the strong data processing constant is defined as the smallest number η_KL ∈ [0,1] such that I(U;Y) ≤ η_KL · I(U;X) holds for every Markov chain U - X - Y. It is shown that the value of η_KL equals that of the best binary-input subchannel of P_Y|X. The same result holds for any f-divergence, verifying a conjecture of Cohen, Kemperman and Zbaganu (1998).
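The result suggests a simple numerical recipe: to estimate η_KL of a channel with several input letters, it suffices to scan its binary-input subchannels. The sketch below (my own illustration, not code from the paper) uses the equivalent divergence characterization η_KL = sup D(Q_X P_Y|X || P_X P_Y|X) / D(Q_X || P_X) restricted to input distributions supported on two letters, approximated on a grid; `eta_kl_binary_search` and the grid size are hypothetical names/choices.

```python
import numpy as np
from itertools import combinations

def kl(p, q, eps=1e-12):
    """KL divergence D(p||q) in nats, with clipping to avoid log(0)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def eta_kl_binary_search(P, grid=100):
    """Grid estimate of eta_KL for channel matrix P (row x = P_{Y|X=x}).

    Per the paper's result, the supremum over input distributions is
    achieved on a two-letter (binary-input) subchannel, so we only scan
    pairs of input letters and binary priors (a, b) on each pair.
    """
    n = P.shape[0]
    best = 0.0
    ps = np.linspace(1e-3, 1 - 1e-3, grid)
    for x0, x1 in combinations(range(n), 2):
        for a in ps:
            for b in ps:
                if abs(a - b) < 1e-9:
                    continue  # divergences vanish; ratio undefined
                # Output distributions induced by priors a and b on {x0, x1}
                out_a = a * P[x0] + (1 - a) * P[x1]
                out_b = b * P[x0] + (1 - b) * P[x1]
                num = kl(out_a, out_b)
                den = kl(np.array([a, 1 - a]), np.array([b, 1 - b]))
                best = max(best, num / den)
    return best
```

As a sanity check, for the binary symmetric channel BSC(δ) the known value is η_KL = (1 - 2δ)², so `eta_kl_binary_search(np.array([[0.9, 0.1], [0.1, 0.9]]))` should return a value just below 0.64; the grid estimate is always a lower bound on the true supremum.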

