
Strong Asymptotic Composition Theorems for Sibson Mutual Information

We characterize the growth of the Sibson mutual information, of any order that is at least unity, between a random variable and an increasing set of noisy, conditionally independent observations of the random variable. The Sibson mutual information increases to an order-dependent limit exponentially fast, with an exponent that is order-independent. The result is contrasted with composition theorems in differential privacy.


1 Introduction

In the context of information leakage, composition theorems characterize how leakage increases as a result of multiple, independent, noisy observations of the sensitive data. Equivalently, they characterize how security (or privacy) degrades under the “composition” of multiple observations (or queries). In practice, attacks are often sequential in nature, whether the application is side channels in computer security [8, 15, 16] or database privacy [7, 2, 10]. Thus composition theorems are practically useful. They also raise theoretical questions that are interesting in their own right.

Various composition theorems for differential privacy and its variants have been established [7, 2, 10]. For the information-theoretic metrics of mutual information and maximal leakage [6, 3, 5, 4] (throughout we assume discrete alphabets and base-2 logarithms)

$$I(X;Y) = \sum_{x,y} P_{XY}(x,y) \log \frac{P_{XY}(x,y)}{P_X(x)\,P_Y(y)} \tag{1}$$

$$\mathcal{L}(X \to Y) = \log \sum_{y} \max_{x:\, P_X(x) > 0} P_{Y|X}(y|x) \tag{2}$$

and $\alpha$-maximal leakage [9], less is known. While similar theorems have been studied in the case that the distribution of the sensitive data is not known [13], here we assume it is known. For the metrics in (1)-(2) it is straightforward to show the “weak” composition theorem that if $Y_1, \ldots, Y_n$ are conditionally independent given $X$, then

$$I(X; Y_1, \ldots, Y_n) \le \sum_{i=1}^{n} I(X; Y_i) \quad \text{and} \quad \mathcal{L}(X \to (Y_1, \ldots, Y_n)) \le \sum_{i=1}^{n} \mathcal{L}(X \to Y_i).$$

These bounds are indeed weak in that if $Y_1, \ldots, Y_n$ are conditionally i.i.d. given $X$, then as $n \to \infty$, the right-hand sides tend to infinity while the left-hand sides remain bounded. A “strong” (asymptotic) composition theorem would identify the limit and characterize the speed of convergence.
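To make this saturation concrete, the following sketch (not from the paper; the prior and channel values are arbitrary example choices) computes $I(X; Y_1, \ldots, Y_n)$ by brute-force enumeration for conditionally i.i.d. binary observations and compares it to the weak bound $n\, I(X; Y_1)$ and to the level $H(X)$.

```python
# Illustrative sketch (not from the paper): for a binary X observed through a
# noisy binary channel, compare I(X; Y_1, ..., Y_n) with the weak bound
# n * I(X; Y_1) and with the saturation level H(X).
import itertools
import numpy as np

p_x = np.array([0.5, 0.5])            # prior on X (example values)
W = np.array([[0.9, 0.1],             # row x: P(Y = y | X = x)
              [0.2, 0.8]])

def mutual_information(p_x, rows):
    """Shannon mutual information in bits for the joint law p_x[x] * rows[x, y]."""
    joint = p_x[:, None] * rows
    p_y = joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] /
                                              (p_x[:, None] * p_y[None, :])[mask])))

def mi_n_observations(p_x, W, n):
    """I(X; Y^n) by brute force, with Y_1, ..., Y_n conditionally i.i.d. given X."""
    ys = list(itertools.product(range(W.shape[1]), repeat=n))
    rows = np.array([[np.prod([W[x, y] for y in yn]) for yn in ys]
                     for x in range(len(p_x))])
    return mutual_information(p_x, rows)

h_x = float(-np.sum(p_x * np.log2(p_x)))
i_1 = mi_n_observations(p_x, W, 1)
for n in (1, 2, 4, 8):
    print(n, round(mi_n_observations(p_x, W, n), 4),
          "weak bound:", round(n * i_1, 4), "H(X):", round(h_x, 4))
```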

We prove such a result for both mutual information and maximal leakage. The limits are readily identified as the entropy and log-support size, respectively, of the minimal sufficient statistic of $X$ given $Y_1$. In both cases, the speed of convergence to the limit is exponential, and the exponent turns out to be the same. Specifically, it is the minimum Chernoff information among all pairs of distributions $P_{Y_1|X=x}$ and $P_{Y_1|X=x'}$, where $x$ and $x'$ are distinct realizations of $X$.

Mutual information and maximal leakage are both instances of Sibson mutual information [12, 14, 4], the former being order $1$ and the latter being order $\infty$. The striking fact that the exponents governing the convergence to the limit are the same at these two extreme points suggests that Sibson mutual information of all orders satisfies a strong asymptotic composition theorem, with the convergence rate (but not the limit) being independent of the order. We show that this is indeed the case.

The composition theorems proven here are different in nature from those in the differential privacy literature. Here we assume that the relevant probability distributions are known, and we characterize the growth of leakage with repeated looks in terms of those distributions. We also assume that $Y_1, \ldots, Y_n$ are conditionally i.i.d. given $X$. Composition theorems in differential privacy consider the worst-case distributions given leakage levels for each of the $Y_i$ individually, assuming only conditional independence.

Although our motivation is averaging attacks in side channels, the results may have some use in capacity studies of channels with multiple conditionally i.i.d. outputs given the input [1, Prob. 7.20].

2 Sibson, Rényi, and Chernoff

The central quantity of this study is the Sibson mutual information.

Definition 1 ([12, 14]).

The Sibson mutual information of order $\alpha$ between random variables $X$ and $Y$ is defined by

$$I_\alpha(X;Y) = \frac{\alpha}{\alpha - 1} \log \sum_{y} \left( \sum_{x} P_X(x)\, P_{Y|X}(y|x)^{\alpha} \right)^{1/\alpha} \tag{3}$$

for $\alpha \notin \{1, \infty\}$, and for $\alpha = 1$ and $\alpha = \infty$ by its continuous extensions. These are defined in (1)-(2) above.
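The following is a minimal numerical sketch of Definition 1 for finite alphabets; the helper name and interface are ours, not the paper's. The $\alpha = 1$ and $\alpha = \infty$ branches implement the continuous extensions (1)-(2) directly.

```python
# Minimal sketch of Definition 1 for finite alphabets (names are ours).
# The alpha = 1 and alpha = inf branches are the continuous extensions
# (1)-(2): Shannon mutual information and maximal leakage, respectively.
import numpy as np

def sibson_mi(p_x, p_y_given_x, alpha):
    """Sibson mutual information of order alpha, in bits."""
    p_x = np.asarray(p_x, dtype=float)
    W = np.asarray(p_y_given_x, dtype=float)      # W[x, y] = P(Y = y | X = x)
    if alpha == 1:                                # Shannon mutual information, (1)
        joint = p_x[:, None] * W
        p_y = joint.sum(axis=0)
        mask = joint > 0
        return float(np.sum(joint[mask] *
                            np.log2(joint[mask] / (p_x[:, None] * p_y[None, :])[mask])))
    if alpha == np.inf:                           # maximal leakage, (2)
        return float(np.log2(W[p_x > 0].max(axis=0).sum()))
    inner = np.sum(p_x[:, None] * W ** alpha, axis=0)          # sum over x, per y
    return float(alpha / (alpha - 1) * np.log2(np.sum(inner ** (1.0 / alpha))))
```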

We are interested in how $I_\alpha(X; Y^n)$ grows with $n$ when $Y_1, \ldots, Y_n$ are conditionally i.i.d. given $X$, for $\alpha \in [1, \infty]$. The question for $\alpha < 1$ is meaningful but is not considered here. For $\alpha \in [1, \infty]$, we shall see that the limit is given by a Rényi entropy.

Definition 2.

The Rényi entropy of order $\alpha$ of a random variable $X$ is given by

$$H_\alpha(X) = \frac{1}{1 - \alpha} \log \sum_{x} P_X(x)^{\alpha} \tag{4}$$

for $\alpha \notin \{0, 1\}$, and for $\alpha = 0$ and $\alpha = 1$ by its continuous extensions. These are

$$H_0(X) = \log \left| \{ x : P_X(x) > 0 \} \right| \tag{5}$$

$$H_1(X) = H(X), \tag{6}$$

where $H(X)$ is the regular Shannon entropy.
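A corresponding sketch of Definition 2, with the two extension points written out as reconstructed in (5)-(6) above (the function name and interface are ours):

```python
# Sketch of Definition 2 (function name is ours), with the extension points
# written out as reconstructed in (5)-(6) above.
import numpy as np

def renyi_entropy(p_x, alpha):
    """Renyi entropy of order alpha, in bits."""
    p = np.asarray(p_x, dtype=float)
    p = p[p > 0]
    if alpha == 0:                    # (5): log of the support size
        return float(np.log2(p.size))
    if alpha == 1:                    # (6): Shannon entropy
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))
```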

The speed of convergence of $I_\alpha(X; Y^n)$ to its limit will turn out to be governed by a Chernoff information.

Definition 3 ([1]).

The Chernoff information between two probability mass functions, $P_0$ and $P_1$, over the same alphabet is given as follows. First, for all $\lambda \in (0, 1)$ and all $x$, let

$$P_\lambda(x) = \frac{P_0(x)^{\lambda} P_1(x)^{1 - \lambda}}{\sum_{x'} P_0(x')^{\lambda} P_1(x')^{1 - \lambda}}. \tag{7}$$

Then, the Chernoff information is given by

$$C(P_0, P_1) = D(P_{\lambda^*} \| P_0) = D(P_{\lambda^*} \| P_1), \tag{8}$$

where $\lambda^*$ is the value of $\lambda$ such that the above two relative entropies are equal.
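The sketch below follows Definition 3 literally: it forms the tilted distribution in (7) and bisects on $\lambda$ until the two relative entropies in (8) agree. Full support of $P_0$ and $P_1$ is assumed, as in the setting of Section 3; the function names are ours.

```python
# Sketch of Definition 3 (names are ours): form the tilted distribution (7)
# and bisect on lambda until the two relative entropies in (8) agree.
# P0 and P1 are assumed to have full support on the same finite alphabet.
import numpy as np

def kl(q, p):
    """Relative entropy D(q || p) in bits; p is assumed to have full support."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log2(q[mask] / p[mask])))

def chernoff_information(p0, p1, tol=1e-12):
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)

    def tilted(lam):                  # P_lambda from (7)
        w = p0 ** lam * p1 ** (1.0 - lam)
        return w / w.sum()

    # D(P_lam || P0) - D(P_lam || P1) is positive at lam = 0 and negative at
    # lam = 1, so bisection finds the equalizing lambda*.
    lo, hi = 0.0, 1.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        q = tilted(lam)
        gap = kl(q, p0) - kl(q, p1)
        if abs(gap) < tol:
            break
        if gap > 0:
            lo = lam                  # increase lambda
        else:
            hi = lam                  # decrease lambda
    q = tilted(0.5 * (lo + hi))
    return 0.5 * (kl(q, p0) + kl(q, p1))
```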

3 Main Result

Let $X$ be a random variable with finite alphabet $\mathcal{X}$. Let $Y^n = (Y_1, \ldots, Y_n)$ be a vector of discrete random variables with a shared finite alphabet $\mathcal{Y}$. We assume that $Y_1, \ldots, Y_n$ are conditionally i.i.d. given $X$. We assume, without loss of generality, that $X$ and $Y_1$ have full support. We may also assume, without loss of generality, that the distributions $P_{Y_1|X=x}$ are unique over $x \in \mathcal{X}$, which we call the unique row assumption. For if this is not the case, we can divide $\mathcal{X}$ into equivalence classes based on the respective conditional distributions and define $\hat{X}$ to be the equivalence class of $X$. Then both Markov chains $X - \hat{X} - Y^n$ and $\hat{X} - X - Y^n$ hold, so

$$I_\alpha(X; Y^n) = I_\alpha(\hat{X}; Y^n)$$

by the data processing inequality for Sibson mutual information [11]. We may then work with $\hat{X}$ in place of $X$. Thus the unique row assumption is without loss of generality.
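The reduction described above is easy to carry out numerically. The sketch below (illustrative only; the names are not from the paper) merges the symbols of $X$ whose rows of the channel matrix coincide and accumulates their prior mass, producing the equivalence-class variable that satisfies the unique row assumption.

```python
# Sketch of the reduction behind the unique row assumption (names are ours):
# merge symbols of X whose rows of the channel matrix coincide, accumulating
# their prior mass onto one representative per equivalence class.
import numpy as np

def merge_equal_rows(p_x, p_y_given_x, tol=1e-12):
    """Return (prior, channel) for the equivalence-class variable."""
    p_x = np.asarray(p_x, dtype=float)
    W = np.asarray(p_y_given_x, dtype=float)
    reps, masses = [], []
    for x in range(W.shape[0]):
        for k, r in enumerate(reps):
            if np.max(np.abs(W[x] - r)) <= tol:   # same conditional distribution
                masses[k] += p_x[x]
                break
        else:                                     # a new equivalence class
            reps.append(W[x].copy())
            masses.append(p_x[x])
    return np.array(masses), np.array(reps)
```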

Note that, again by the data processing inequality, we have

$$I_\alpha(X; Y^n) \le I_\alpha(X; Y^{n+1})$$

for all $n$ and all $\alpha$. Our main result is the following.

Theorem 1.

Under the unique row assumption,

$$\lim_{n \to \infty} I_\alpha(X; Y^n) = H_{1/\alpha}(X) \tag{9}$$

for any $\alpha \in [1, \infty]$, and the speed of convergence is independent of $\alpha$ in the sense that for all $\alpha \in [1, \infty]$,

$$\lim_{n \to \infty} -\frac{1}{n} \log \left( H_{1/\alpha}(X) - I_\alpha(X; Y^n) \right) = \min_{x \neq x'} C\left( P_{Y_1|X=x}, P_{Y_1|X=x'} \right).$$
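As a rough numerical illustration of the theorem, under the reading of (9) given above, the following sketch estimates the decay rate of the gap to the limit for several orders and compares it to the minimum pairwise Chernoff information. The prior and channel are arbitrary example values, and the sketch assumes `sibson_mi`, `renyi_entropy`, and `chernoff_information` from the earlier sketches are in scope.

```python
# Rough numerical illustration of Theorem 1, under the reading of (9) above.
# Assumes sibson_mi, renyi_entropy, and chernoff_information (earlier sketches)
# are in scope; the prior and channel below are arbitrary example values.
import itertools
import numpy as np

p_x = np.array([0.3, 0.7])
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def channel_n(W, n):
    """Channel matrix from X to Y^n when the observations are cond. i.i.d."""
    ys = list(itertools.product(range(W.shape[1]), repeat=n))
    return np.array([[np.prod([W[x, y] for y in yn]) for yn in ys]
                     for x in range(W.shape[0])])

min_chernoff = min(chernoff_information(W[x], W[xp])
                   for x in range(len(p_x)) for xp in range(len(p_x)) if x != xp)

for alpha in (1, 2, np.inf):
    limit = renyi_entropy(p_x, 1.0 / alpha)           # order 1/alpha, per (9)
    gaps = [limit - sibson_mi(p_x, channel_n(W, n), alpha) for n in (6, 12)]
    rate = (np.log2(gaps[0]) - np.log2(gaps[1])) / 6  # empirical decay exponent
    print(alpha, "estimated exponent:", round(rate, 3),
          "min Chernoff:", round(min_chernoff, 3))
```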

We prove the result separately for the cases $\alpha = 1$, $\alpha = \infty$, and $\alpha \in (1, \infty)$ in the next three sections. For this, the following alternate characterization of the exponent is useful. Let $P_x$ denote the distribution of $Y_1$ given $X = x$ for a given $x \in \mathcal{X}$, and let $\mathcal{P}$ denote the set of all possible probability distributions over $\mathcal{Y}$. For any $Q \in \mathcal{P}$, let $\phi(Q)$ denote the $x$ such that $D(Q \| P_x)$ is the smallest relative entropy across all elements of $\mathcal{X}$. Ties can be broken by the ordering of $\mathcal{X}$.

Lemma 2.
$$\min_{x \neq x'} C(P_x, P_{x'}) = \min_{Q \in \mathcal{P}} \min_{x \neq \phi(Q)} D(Q \| P_x) \tag{10}$$
Proof.

We will prove that:

(11)
(12)

To prove the upper bound, fix distinct $x$ and $x'$, consider $C(P_x, P_{x'})$, and define $Q^*$ such that $D(Q^* \| P_x) = D(Q^* \| P_{x'}) = C(P_x, P_{x'})$. Then, certainly

(13)

since we know of two $x$-values whose corresponding distributions are equidistant to $Q^*$. Note that $Q^*$ depends only on $x$ and $x'$, and this inequality holds for any such pair. Hence,

(14)

Furthermore, since we know of at least one such that , it must also be true that

(15)

For the lower bound, we first define subsets of $\mathcal{P}$:

(16)
(17)

Note that and are convex sets since is convex and that achieves the minimum distance to in and the minimum distance to in (Cover and Thomas Section 11.8).

Choose any $Q \in \mathcal{P}$. There are three cases to consider, depending on the location of $Q$ in $\mathcal{P}$.

Case 1: and . By construction, and .

Case 2: . Using the Pythagorean theorem for relative entropy (Cover and Thomas Thm 11.6.1),

(18)

Case 3: . By the same argument,

(19)

Hence, for any ,

(20)

Since ,

(21)
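The following rough grid search (a sketch, not part of the paper, and relying on the reading of (10) given above) checks the characterization numerically on a binary observation alphabet: the smallest relative entropy from $Q$ to its second-closest row should match the minimum pairwise Chernoff information. It reuses `kl` and `chernoff_information` from the sketch after Definition 3, and the example rows are arbitrary.

```python
# Rough grid-search check of the characterization read into (10), on a binary
# observation alphabet. Assumes kl and chernoff_information (earlier sketch)
# are in scope; the example rows below are arbitrary full-support pmfs.
import numpy as np

rows = np.array([[0.9, 0.1],
                 [0.5, 0.5],
                 [0.2, 0.8]])

min_chernoff = min(chernoff_information(rows[i], rows[j])
                   for i in range(len(rows)) for j in range(len(rows)) if i != j)

best = np.inf
for q0 in np.linspace(1e-6, 1.0 - 1e-6, 20001):
    Q = np.array([q0, 1.0 - q0])
    dists = sorted(kl(Q, r) for r in rows)
    best = min(best, dists[1])        # distance to the second-closest row
print("min pairwise Chernoff:", round(min_chernoff, 4),
      "grid minimum:", round(best, 4))
```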

Other Notation: We use $\mathcal{P}_n$ to denote the set of all possible empirical distributions of $Y^n$. For any $Q \in \mathcal{P}$, let

$$T(Q) = \{ y^n \in \mathcal{Y}^n : \hat{P}_{y^n} = Q \},$$

where $\hat{P}_{y^n}$ is the empirical distribution of $y^n$. Note that $T(Q)$ may be empty if $Q \notin \mathcal{P}_n$. We use $P_X$ and $P_{Y^n}$ to denote the true distributions of $X$ and $Y^n$.

4 Proof for Mutual Information ($\alpha = 1$)

We derive separate upper and lower bounds for mutual information. Since $I(X; Y^n) = H(X) - H(X \mid Y^n)$, we can equivalently upper and lower bound $H(X \mid Y^n)$. For the lower bound,

(22)
(23)
(24)
(25)
(26)

due to the convention that $0 \log 0 = 0$. Then, replacing the weighted sums with their largest summands gives

(27)

Note that the entire expression inside the summation over is 0 if . Letting and using for the term,

(28)
(29)

where

(30)
(31)
(32)
(33)

Hence,

(34)

where

(35)

and is its minimizer.

For the upper bound,

(36)
(37)
(38)
(39)
(40)
(41)
(42)
(43)

As we have now shown that the mutual information is upper and lower bounded by expressions of the above form for some subexponential sequence, it remains to be shown that this exponent approaches the minimum Chernoff information as $n \to \infty$.

First, it can be shown using standard continuity arguments that

(44)

since $D(Q \| P_x)$ is a continuous function of $Q$. Finally, we arrive at the desired result using Lemma 2.

5 Proof for Maximal Leakage ($\alpha = \infty$)

While the lower bound on $I_\infty(X; Y^n)$ can be proven directly, due to space constraints we will instead note that the desired bound can be obtained from (73) to follow by letting $\alpha \to \infty$. For the upper bound, for fixed , let

(45)
(46)

Note that for any and , and for all since

(47)

Fix and a and let be a sequence such that for each and . Then eventually and

(48)
(49)
(50)
(51)

eventually. Thus, for sufficiently large $n$,

(52)
(53)

Thus,

(54)

Since and were arbitrary, the result follows by Lemma 2.

6 Proof for $\alpha \in (1, \infty)$

To lower bound $I_\alpha(X; Y^n)$, we use the sets defined in the previous proof:

(55)
(56)
(57)
(58)
(59)
(60)

Letting

(61)

we have

(62)
(63)

Note that

(64)
(65)

for . Hence,

(66)

Next we derive an upper bound for $I_\alpha(X; Y^n)$.

(67)
(68)
(69)
(70)
(71)