Semantic Security and the Second-Largest Eigenvalue of Biregular Graphs

11/19/2018 · by Moritz Wiese, et al.

It is investigated how to achieve semantic security for the wiretap channel. It is shown that asymptotically, every rate achievable with strong secrecy is also achievable with semantic security if the strong secrecy information leakage decreases sufficiently fast. If the decrease is slow, this continues to hold with a weaker formulation of semantic security. A new type of functions called biregular irreducible (BRI) functions, similar to universal hash functions, is introduced. BRI functions provide a universal method of establishing secrecy. It is proved that the known secrecy rates of any discrete and Gaussian wiretap channel are achievable with semantic security by modular wiretap codes constructed from a BRI function and an error-correcting code. A concrete universal hash function given by finite-field arithmetic can be converted into a BRI function for certain parameters. A characterization of BRI functions in terms of edge-disjoint biregular graphs on a common vertex set is derived. New BRI functions are constructed from families of Ramanujan graphs. It is shown that BRI functions used in modular schemes which achieve the semantic security capacity of discrete or Gaussian wiretap channels should be nearly Ramanujan. Moreover, BRI functions are universal hash functions on average.

I Introduction

I-A Motivation

In the wiretap channel problem, a sender has a set of messages and would like to transmit one of these messages to a receiver. To this end, the message is encoded and then sent through a noisy channel to the intended receiver, who decodes the channel output. An eavesdropper observes a different noisy version of the sent codeword. The goal is to find an encoding of the messages which allows reliable transmission to the intended receiver while the eavesdropper obtains no information about the transmitted message.

What it means for the eavesdropper to obtain “no information” can be defined in multiple ways using secrecy measures. This paper focuses on the secrecy measure of semantic security, which is defined asymptotically as the coding blocklength tends to infinity. For every blocklength, a finite message set is given. Semantic security holds if the mutual information between the message and the eavesdropper’s observation, maximized over all random variables on the message set, tends to zero.
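In standard notation, which we introduce here only for illustration (blocklength n, message set \mathcal{M}_n, message random variable M_n and eavesdropper observation Z_n), the criterion reads

\[ \max_{P_{M_n}} I(M_n ; Z_n) \;\longrightarrow\; 0 \qquad (n \to \infty), \]

where the maximum is over all distributions P_{M_n} on \mathcal{M}_n.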

This paper studies modular schemes which enhance ordinary error-correcting codes (ECCs) in order to provide semantic security against an eavesdropper. In a modular UHF scheme, the randomized inverse of a universal hash function (UHF) is prefixed to the ECC encoder and the UHF itself is postfixed to the ECC decoder; see Fig. 1. Recall that a universal hash function is a function of a seed and an input such that, for a uniformly distributed seed, any two distinct inputs collide with probability at most the inverse of the size of the output set. For a given seed and any output value, the randomized inverse chooses an element of the corresponding preimage set uniformly at random. Reliable transmission of the resulting preimage values is possible due to the ECC if the seed is known to the sender and the receiver.
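The following minimal Python sketch illustrates the interface of such a modular UHF scheme as in Fig. 1. The multiplicative hash, the alphabet sizes and the error-free ECC are placeholder assumptions of ours and are not the constructions used in the paper.

    import random

    P = 8191   # a prime; ECC inputs (preimages) live in {0, ..., P-1}
    M = 64     # size of the message alphabet

    def uhf(seed, x):
        # A simple seeded multiplicative hash, used only to illustrate the interface;
        # it is not claimed to satisfy the exact universality bound.
        return ((seed * x) % P) % M

    def randomized_inverse(seed, message):
        # Choose a preimage of `message` under uhf(seed, .) uniformly at random.
        preimage = [x for x in range(P) if uhf(seed, x) == message]
        return random.choice(preimage)

    seed = random.randrange(1, P)     # known to sender and receiver beforehand
    m = 42                            # the message to be protected
    x = randomized_inverse(seed, m)   # prefix: randomize the message into an ECC input
    x_hat = x                         # ECC encoding, channel and decoding, assumed error-free here
    assert uhf(seed, x_hat) == m      # postfix: hash the decoded value back to the message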

Fig. 1: The modular UHF or BRI scheme. denotes the physical channel between sender and receiver. is an error-correcting code and is a universal hash function or a BRI function (to be defined). denotes the randomized inverse of . The seed has to be known to sender and receiver beforehand.

It is shown by Bellare and Tessaro [3] and Tal and Vardy [39] that the ordinary secrecy capacity of any discrete, degraded and symmetric wiretap channel as derived by Wyner [44] and Csiszár and Körner [17] is achievable by modular UHF schemes such that semantic security is guaranteed. ([3] and [4] are unpublished extended versions of [5]; we only cite the more detailed unpublished papers.) The proofs make heavy use of the symmetry of the wiretap channel. Since the seed may be known to the eavesdropper, it can be generated and sent beforehand by the sender, which however reduces the achievable rate. It is shown in [3] that by reusing the seed not too often, the rate loss due to seed transmission can be made negligible while preserving semantic security.

Bellare and Tessaro also give the example of a finite-field arithmetic UHF which is suitable for being used in modular UHF schemes and is efficiently computable and invertible. In combination with an efficient linear code, the resulting modular UHF scheme is efficient as well and provides a very practical and flexible way of achieving optimal rates and high security for discrete, degraded and symmetric wiretap channels.

Tyagi and Vardy [40] show a leftover hash lemma for the modular UHF scheme. ([40] is an unpublished, extended version of [41]; since the latter does not provide all the results and details needed here, we only cite [40].) This lemma bounds the mutual information between a uniformly distributed message on the one hand and the pair of the eavesdropper’s output and a uniformly distributed seed on the other hand. Using this lemma, it is shown that the modular UHF scheme achieves the wiretap capacity with strong secrecy if the wiretap channel is degraded, discrete and symmetric or Gaussian.

By Section II of the present paper, for any strongly secure wiretap code there must exist a large subset of the message set such that the transmission of messages from this subset is semantically secure. In this paper, UHFs are replaced by a new type of functions in the modular coding scheme, called biregular irreducible (BRI) functions. A modular BRI scheme provides semantic security for a message set which is a subset of the image set of the BRI function. On the message set, a BRI function has to have a certain regularity structure.

I-B Contributions and overview

Section II

In this section, an abstract setting is considered. There is a sender and an eavesdropper, both having access to a seed random variable . The sender uses to encode a message, which is a random variable independent of . The eavesdropper’s observation is described by the outputs of a channel whose inputs are the message and . The channel could be, e.g., an encoder concatenated with the -fold use of a physical channel to the eavesdropper, but no such assumptions are necessary. It is shown that if is small for uniformly distributed message and generated by and via , then there is a subset with such that is small for any message random variable on and generated by and via . Asymptotically, this means that a message rate which is achievable with strong secrecy is also achievable with semantic security if decreases sufficiently fast. If the decrease is too slow, then semantic security holds in terms of total variation distance instead of mutual information.

Section III

Instead of going deeper into the analysis of UHFs in order to find the semantically secure message subset , we introduce a new class of functions called BRI functions which replace the UHFs in modular coding schemes, thus giving rise to modular BRI schemes. A BRI function is a function together with a regularity set which serves as the message set of the modular BRI scheme and on which has to satisfy certain regularity conditions. In particular, to every and , a stochastic matrix is associated whose entry is divided by a normalizing constant.

Now assume that some channel describes the eavesdropper’s output given that the message is first passed through the randomized inverse of the BRI function using seed , and then transmitted through some channel . ( could be the concatenation of error-correcting encoder and physical channel as in Fig. 1.) One of the central results of this paper, inspired by the proof of the leftover hash lemma of Tyagi and Vardy [40], is an upper bound on for every

and some probability measure

on , where

denotes Kullback-Leibler divergence and

is the expectation with respect to . Using this bound, the mutual information can be upper-bounded for any random variable on and generated by and via . The main term of this upper bound is the product of the second-largest eigenvalue modulus of on the one hand and on the other hand, where is the -smooth max-information of introduced in [40]. In particular, BRI functions provide a universal method of establishing secrecy for wiretap channels.

A final result of this section, which is not needed in the rest of the paper, is that BRI functions are UHFs on average.

Section IV

This section contains the proof of the upper bound stated in Section III. It first upper-bounds Kullback-Leibler divergence by Rényi 2-divergence. An upper bound on the latter yields the result.

Section V

It is shown that for certain parameters, the finite-field arithmetic UHF mentioned before and used in [3] and [40] is a BRI function with a large and completely known regularity set and small for every . The analysis rests in large part on the analysis of the eigenvalues of a family of Cayley sum graphs determined by .

Section VI

Here it is shown that, in fact, every BRI function can be described in terms of a family of biregular (bipartite) graphs. Every element in the image of induces a graph . The vertex set of is and are adjacent if . The definition of BRI functions can be given an equivalent formulation by requiring to be biregular and connected for every . This allows the construction of new examples of BRI functions. An important case is where every such graph is biregular and Ramanujan, which means that its second-largest eigenvalue modulus is essentially as small as possible. Based on results of Marcus, Spielman and Srivastava [35], a BRI function can be constructed from a biregular connected Ramanujan graph using repeated 2-lifts of graphs. This construction allows for more flexible parameters than the BRI function of the previous section, but so far it is not explicit because it relies on a refinement of the probabilistic method.
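For reference, and in notation introduced only for this remark (degrees d_A and d_B), a (d_A, d_B)-biregular bipartite graph has the trivial adjacency eigenvalues \pm\sqrt{d_A d_B}, and it is commonly called Ramanujan if every other adjacency eigenvalue \lambda satisfies

\[ \lvert \lambda \rvert \;\le\; \sqrt{d_A - 1} + \sqrt{d_B - 1} . \]

Whether this matches the precise normalization used later for BRI functions (where the relevant matrix is stochastic) depends on how the adjacency spectrum is rescaled, so this display is meant as orientation only.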

Section VII

Here some asymptotic consequences of the results of the previous sections are drawn. It is shown that the wiretap capacity of arbitrary discrete and Gaussian wiretap channels is achievable with semantic security by modular BRI schemes. The BRI functions applied in these schemes should be nearly Ramanujan, and the maximum of the associated degree pair has to grow exponentially in the blocklength. Suitable BRI functions constructed from Ramanujan graphs are proven to exist. A consequence of the wiretap coding theorem is an upper bound on the ratio of the size of the message set of a BRI function and the maximal over .

Section VIII

This section concludes the paper with a summary and a discussion on some practical aspects of BRI functions and modular BRI schemes, like the constructibility and complexity of BRI functions.

I-C Related literature

Semantic security was introduced in information theory by Bellare, Tessaro and Vardy [4], [5]. It is a stronger requirement than strong secrecy as defined by Maurer [37] and Ahlswede and Csiszár [2], where the message is uniformly distributed. It is argued in [4] that semantic security should be adopted as the standard secrecy measure in information-theoretic security, not least because it is the information-theoretic analog to the cryptographic definition of semantic security introduced by Goldwasser and Micali [25] (see also Goldreich’s book [24]).

Semantic security is shown implicitly in resolvability-based proofs of strong secrecy like in Hayashi [26], Devetak [19] and Bloch and Laneman [9]. It is an explicit goal of random coding in the resolvability-based works of Bunin et al. [12], Frey, Bjelaković and Stańczak [21] and Goldfeld, Cuff and Permuter [22, 23].

The concept of universal hash function is due to Carter and Wegman [13]. UHFs were first used in the context of information theory by Bennett, Brassard and Robert [6]. Modular UHF schemes were proposed as a technique for wiretap coding by Hayashi [27]. The finite-field arithmetic UHF mentioned before which is the main example of a UHF in [3] and [40] seems to go back to the work of Bennett et al. [7].

Liu, Yan and Ling [33] use polar codes to prove that the secrecy capacity of Gaussian wiretap channels is achievable with semantic security. To the authors’ knowledge, no other codes apart from modular UHF schemes, polar codes and random codes have been shown to achieve semantic security for specific scenarios.

The first Ramanujan graphs were constructed independently by Lubotzky, Phillips and Sarnak [34] and Margulis [36]. Ramanujan and nearly Ramanujan graphs are optimal or very good expander graphs, respectively. Expanders are a very active field of research and have many applications in mathematics, computer science and engineering. A good overview is by Hoory, Linial and Wigderson [28].

I-D Basic definitions and notation

For a set and a subset , by we mean the set difference of and . If is a function and , then denotes the preimage of under , i.e., . The randomized inverse of a BRI function (to be defined later) has the same notation, but it should always be clear what is meant. If is any event, then equals if the event occurs and otherwise. The logarithm and the exponential function will always be taken to base 2; the natural logarithm is denoted by ln.

The distribution of a random variable is denoted by . If are random variables with the joint distribution , the conditional distribution of given is written . The distribution obtained by fixing a realization of is denoted by . If maps realizations of to the real numbers, then is the expectation of the random variable .

If is any finite set, then denotes the set of real-valued functions on it, which can be identified with a Euclidean space of the corresponding dimension. Similarly, we will work with square matrices indexed by such a set. A matrix is called stochastic if it has nonnegative entries and the entries of every row sum to 1. A symmetric matrix is diagonalizable with real eigenvalues. In this situation, the algebraic multiplicity of an eigenvalue is the same as its geometric multiplicity, and one can just speak of its multiplicity. If a symmetric matrix also has nonnegative entries and constant row sums (e.g., if it is stochastic), then the common row sum is an eigenvalue with the all-one vector as a corresponding eigenvector. For such a matrix, its second-largest eigenvalue modulus is the largest modulus among the remaining eigenvalues, and if this is smaller than the common row sum, then the row sum is a simple eigenvalue of the matrix.
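As a concrete illustration of the last two notions, here is a minimal numpy sketch; the matrix is an arbitrary symmetric stochastic example and does not arise from a BRI function.

    import numpy as np

    # A symmetric stochastic matrix: nonnegative entries, every row sums to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    eigvals = np.linalg.eigvalsh(P)          # real eigenvalues of a symmetric matrix
    moduli = np.sort(np.abs(eigvals))[::-1]  # eigenvalue moduli in decreasing order
    print("largest eigenvalue:", moduli[0])  # equals 1; eigenvector: the all-one vector
    print("second-largest eigenvalue modulus:", moduli[1])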

II Semantic Security from Strong Secrecy

In this section we first define the necessary probabilistic concepts like information measures and channels. We then analyze the relation between strong secrecy and semantic security. Finally, we introduce wiretap channels and wiretap codes.

II-A Basic probability definitions

Let a measurable space be given, i.e., a set equipped with a sigma algebra, which is suppressed in the notation. We will always assume that a probability measure P on it has a density p with respect to a reference measure \mu, i.e.,

\[ P(\mathcal{A}) = \int_{\mathcal{A}} p \, d\mu \]

for measurable \mathcal{A}.

Example 1.

If the underlying set is discrete, then we will always assume that \mu is the counting measure, defined by \mu(\mathcal{A}) = \lvert \mathcal{A} \rvert. Every probability distribution P on a discrete set has a density p with respect to \mu, namely its probability mass function, and P(\mathcal{A}) = \sum_{z \in \mathcal{A}} p(z).

Example 2.

The Gaussian distribution on the real line with mean m and variance \sigma^2 has the usual density

\[ p(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\!\Big( -\frac{(z - m)^2}{2\sigma^2} \Big) \qquad (1) \]

with respect to Lebesgue measure.

Example 3.

If has -density and has -density , then the product of and has density with respect to the product measure determined by the rule .
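In symbols introduced only for this example: if P has \mu-density p and Q has \nu-density q, then

\[ \frac{d(P \otimes Q)}{d(\mu \otimes \nu)}(x, y) = p(x)\, q(y). \]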

The total variation distance of P and Q is defined via the supremum of \lvert P(\mathcal{A}) - Q(\mathcal{A}) \rvert over measurable sets \mathcal{A}. Since we assume that P and Q both have a density with respect to a measure \mu, an alternative expression (2) is available: up to normalization, the total variation distance is the L^1 distance of the densities p and q. The Kullback-Leibler divergence of P and Q is given by

\[ D(P \,\Vert\, Q) = \int p \, \log\frac{p}{q} \, d\mu , \]

and the Rényi 2-divergence of P and Q by

\[ D_2(P \,\Vert\, Q) = \log \int \frac{p^2}{q} \, d\mu . \]

If X and Y have joint distribution P_{XY}, then the mutual information of X and Y is given by

\[ I(X;Y) = D(P_{XY} \,\Vert\, P_X \otimes P_Y). \]

We also introduce the entropy

\[ H(X) = -\int p_X \log p_X \, d\mu , \]

where p_X is the \mu-density of P_X, and the conditional entropy

\[ H(X \mid Y) = -\int\!\!\int p_{X \mid Y = y}(x) \, \log p_{X \mid Y = y}(x) \, \mu(dx) \, P_Y(dy), \]

where p_{X \mid Y = y} is the \mu-density of P_{X \mid Y = y} and the random variable Y has distribution P_Y. Then

\[ I(X;Y) = H(X) - H(X \mid Y). \]

With an additional correlated random variable Z, such that the joint distribution of (X, Y, Z) is P_{XYZ}, let I(X;Y \mid Z = z) denote the mutual information of the random variables X and Y with joint distribution P_{XY \mid Z = z}. Then the conditional mutual information of X and Y given Z is

\[ I(X;Y \mid Z) = \int I(X;Y \mid Z = z) \, P_Z(dz), \]

i.e., the mean of I(X;Y \mid Z = z) with respect to P_Z.

Pinsker’s inequality states that

If is a random variable on the finite set and an arbitrary random variable such that has a density, then using (2) it was shown in [32] that

(3)
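For orientation, one common form of Pinsker’s inequality, assuming the total variation distance is normalized as the supremum over measurable sets and the divergence is measured in nats, is

\[ \lVert P - Q \rVert \;\le\; \sqrt{\tfrac{1}{2}\, D(P \,\Vert\, Q)} ; \]

the constant changes under other normalizations or logarithm bases, so this is meant as a reference form rather than the exact statement used above.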

II-B Channels

A channel with input alphabet and output alphabet assigns to every input a probability measure on the output alphabet. (This definition of a channel does not encompass all concepts called “channel” in information theory. For example, channels with states, random or arbitrary, are not channels in the sense of this paper.) To indicate the input and output alphabets of a channel, we will often write the channel together with its alphabets; this should not lead to confusion with the analogous notation for functions. We will always assume that the channel W has a density w with respect to some reference measure \mu on the output alphabet, i.e.,

\[ W(\mathcal{B} \mid x) = \int_{\mathcal{B}} w(y \mid x) \, \mu(dy) \]

for every measurable \mathcal{B}.

Example 4.

If both the input and the output alphabet are finite, then the channel is a discrete channel. As for probability measures, the density is always taken with respect to the counting measure (see Example 1). Such a channel W is then determined by the stochastic matrix whose row for input x lists the density values w(y \mid x), satisfying

\[ W(\mathcal{B} \mid x) = \sum_{y \in \mathcal{B}} w(y \mid x) \]

for every subset \mathcal{B} of the output alphabet.

Example 5.

The additive Gaussian noise channel with noise variance \sigma^2 maps an input x to the output x + N, where N is a centered Gaussian random variable with variance \sigma^2. If the reference measure is the Lebesgue measure on the reals and w(\cdot \mid x) is the density of the output distribution given input x, then w(\cdot \mid x) is given by (1) with mean x.

Example 6.

If the channel W has density w with respect to the measure \mu on the output alphabet, then the blocklength-n memoryless extension W^n of W has density

\[ w^n(z^n \mid x^n) = \prod_{i=1}^{n} w(z_i \mid x_i) \]

with respect to the n-fold product measure \mu^n, where x^n = (x_1, \ldots, x_n) and z^n = (z_1, \ldots, z_n).

Example 7.

The conditional probability of a random variable with respect to the random variable is a channel.

Example 8.

Any deterministic function is a channel.

If is a pair of random variables on and can be described by a channel , then we say that is generated by via . If , then we often write .

If T is a discrete channel with density t, then the concatenation of T with an arbitrary channel W is the channel which has the \mu-density

\[ \sum_{y} t(y \mid x) \, w(z \mid y) \]

if W has the \mu-density w. The concatenation can be defined analogously if the intermediate alphabet is infinite, provided there is a finite subset of it to which T assigns probability one for every input.

II-C Semantic security and strong secrecy

We will now give meaning to the claim that any strong secrecy rate also is a semantic security rate, perhaps with a weak form of semantic security. We consider an abstract setting not necessarily arising from the wiretap channel problem. Security is an asymptotic property; the index usually signifies the blocklength. For each blocklength, there is a finite set of messages, a set to which the eavesdropper has access and a set of seeds to which both the sender of the message and the eavesdropper have access. Every choice of a message and a seed generates an eavesdropper observation in the set via the channel .

Definition 9.

For every , let be uniformly distributed on .

  1. Strong secrecy holds if is uniformly distributed on , independent of and as , where is generated by and via .

  2. Semantic security holds if as , where the maximum ranges over all probability distributions on , the message is independent of and is generated by and via .

Clearly, semantic security implies strong secrecy. The next theorem is the nonasymptotic core of the proof that strong secrecy implies semantic security with the same asymptotic message rate. The underlying message set is denoted by and the seed set by . A channel determines the observations of the eavesdropper, who also knows the seed. The theorem is slightly more general than needed since is not required to be uniformly distributed here.

Theorem 10.

Let be uniformly distributed on and independent of , which may have an arbitrary distribution on . Assume that is generated by and via . If for some , then there exists a subset of with and

where the maximum is over all probability distributions on , is independent of and is generated by and through .

Proof.

The theorem immediately follows from Lemmas 12 and 13 below. ∎

The next corollary is an immediate consequence of Theorem 10 and formulates a sufficient condition under which the same asymptotic message rate is achievable with semantic security as with strong secrecy.

Corollary 11.

For every positive integer let be a message set, a seed set and a channel to some measurable set . Let be uniformly distributed on and generated by and via . If strong secrecy holds such that and

(4)

as tends to infinity, then every contains a subset such that

and semantic security holds for the messages restricted to , more precisely,

where ranges over all possible random variables on the smaller message set and is generated by and via .

Condition (4) yields conditions on the rate of decrease of which depends on the growth rate of . For example, if the asymptotic message rate is positive, i.e., grows exponentially, then (4) is satisfied if goes to zero. Generally, if the rate of decrease of is not sufficiently large, then semantic security still holds if it is formulated in terms of total variation distance instead of mutual information. More precisely, a slightly weaker definition of semantic security is to require

to tend to zero. The proof of Lemma 13 shows that this can be inferred from strong secrecy no matter what the rate of convergence to zero of is.
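One natural way to write this weaker requirement, in notation of our own choosing which need not match the paper’s exact formulation, is that the eavesdropper’s observations become pairwise indistinguishable in total variation,

\[ \max_{m, m'} \big\lVert P_{Z_n S_n \mid M_n = m} - P_{Z_n S_n \mid M_n = m'} \big\rVert \;\longrightarrow\; 0 , \]

where the maximum is over all pairs of messages.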

Theorem 10 is proved with the following two lemmas.

Lemma 12.

Let be independent random variables on such that is uniformly distributed and assume that is generated by and via . If for some , then there exists a subset of such that and

for every .

Proof.

The independence of and implies . Moreover

By choosing to be the set of those which satisfy

it is straightforward to see that . ∎

Lemma 13.

If

(5)

for some channel and every , then

(6)

for all pairs . In particular, if is independent of with an arbitrary distribution on and is generated by and via , then

(7)
Proof.

To show that (5) implies (6), observe that

(8)

where is an application of Pinsker’s inequality. Therefore

where is an application of the triangle inequality for total variation distance, is the Cauchy-Schwarz inequality, and follows from . This proves (6).

Next we show (7). We have

where and follow from the convexity of norms and follows from (6). (7) now follows using (3). This completes the proof. ∎

Lemma 13 not only is part of the proof of Theorem 10. It will also be applied to derive the semantic security of modular BRI schemes from a bound of the form (5).

The arguments of this subsection, in particular Lemma 13, suggest that semantic security is a property of individual messages: the set of messages which satisfy (5) is a message set with (nonasymptotic) semantic security. In contrast, for strong secrecy it is sufficient that the average of (5) over the messages be small; hence strong secrecy is a property of the message set as a whole.

II-D Wiretap channels and capacities

The security results presented so far were formulated in an abstract scenario. The focus of the rest of this work is on the wiretap channel problem, where the capacity of a wiretap channel has to be found. In this subsection, wiretap channels and wiretap codes are defined.

A wiretap channel is determined by

  1. a pair of channels , where is an arbitrary set and and are measurable spaces, and

  2. a sequence of sets such that , called the input constraint sets. We say a channel has no input constraints if for all .

is the physical channel between the sender and the intended receiver and is the physical channel between the sender and the eavesdropper. The sequence will usually be omitted in the notation.

Given a wiretap channel , a (seeded) wiretap code with blocklength consists of

  1. a discrete channel called the encoder channel, and

  2. a measurable mapping , the decoder.

In the special case , the seeded wiretap code also is called an ordinary wiretap code.

The code rate of a seeded wiretap code is given by . The (maximal) error incurred by the wiretap code is defined as

where for , is the preimage of under and is the blocklength- memoryless extension of . The semantic security information leakage of is

where the maximum ranges over all probability distributions on , the seed is uniformly distributed on and independent of , and is generated by and via the channel (recall that denotes the blocklength- memoryless extension of ).
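In notation introduced only for these illustrations (message set \mathcal{M}_n at blocklength n), the code rate defined above is

\[ R_n = \frac{\log \lvert \mathcal{M}_n \rvert}{n}, \]

with the logarithm to base 2 as fixed in Section I-D.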

A number is called an achievable semantic security rate if there exists an increasing sequence of positive integers and for every a seeded wiretap code

of blocklength and code rate such that

(9)
(10)
(11)

The supremum of all achievable semantic security rates is called the semantic security capacity of the wiretap channel .

The definitions of achievable strong secrecy rate and strong secrecy capacity are analogous to the above with the exception that in (11), the is replaced by the strong secrecy information leakage , where is uniformly distributed on and independent of , and is generated by and via .

The fact that the codes achieving a semantic security rate only have to be defined along a blocklength subsequence means that this is an optimistic rate definition in Ahlswede’s terminology [1]. A pessimistic formulation would require the codes to exist for every blocklength and properties (9)-(11) to hold along the complete sequence of positive integers (possibly only requiring strong secrecy instead of (11)). A pessimistic formulation is usual in information theory, but the optimistic one is more appropriate when the codes are required to exhibit a certain structure. For example, wiretap codes used in practice might come from a code family equipped with a structure which is available for infinitely many, but not all blocklengths. This will also be the case in our analysis of modular coding schemes.

Another possible variant in the definition of achievable rates and capacities is to allow only ordinary codes instead of seeded ones. Definitions of wiretap channel capacity usually require codes to be ordinary. The definition of achievable rate for seeded wiretap codes assumes that the seed is known both to sender and intended receiver. If the seed is generated by the sender and transmitted to the receiver before the actual message transmission, the achievable rate can be reduced considerably. We will see in Section VII that this rate reduction can be overcome by reusing the seed.

Example 14.

If both the channel to the intended receiver and the channel to the eavesdropper are discrete, then the pair is called a discrete wiretap channel. Its pessimistic strong secrecy capacity without input constraints achieved by ordinary codes is given by

\[ \max \big( I(V;Y) - I(V;Z) \big) \qquad (12) \]

where the maximum is over all finite sets \mathcal{V}, channels from \mathcal{V} to the input alphabet and random variables V on \mathcal{V} such that Y is generated by the resulting channel input via the channel to the intended receiver and Z is generated by it via the channel to the eavesdropper (Csiszár [16]). An analysis of the converse to the coding theorem of the discrete wiretap channel shows that the optimistic strong secrecy capacity with ordinary codes cannot exceed the pessimistic one, so formula (12) remains true also for that case. Moreover, it follows from the results of Wiese, Nötzel and Boche [43] that even if seeded wiretap codes are allowed, no higher rate is achievable.

Since the strong secrecy information leakage can be shown to tend to zero at exponential speed for every achievable strong secrecy rate, condition (4) is satisfied for ordinary codes. Thus by Corollary 11, (12) also equals the pessimistic as well as optimistic semantic security capacity of the discrete wiretap channel, achievable by both seeded and ordinary codes.

Example 15.

Let the channels to the intended receiver and to the eavesdropper be Gaussian channels with noise variances \sigma_B^2 and \sigma_E^2, respectively. For any \Gamma > 0, the pair is called a Gaussian wiretap channel with input power constraint \Gamma if for every blocklength n, the input constraint set is given by the ball

\[ \{ x^n \in \mathbb{R}^n : \lVert x^n \rVert^2 \le n\Gamma \}, \]

where \lVert \cdot \rVert denotes the Euclidean norm. The Gaussian wiretap channel with input power constraint \Gamma has the pessimistic, ordinary-codes, strong secrecy capacity

\[ \Big[ \frac{1}{2}\log\Big( 1 + \frac{\Gamma}{\sigma_B^2} \Big) - \frac{1}{2}\log\Big( 1 + \frac{\Gamma}{\sigma_E^2} \Big) \Big]_{+} \qquad (13) \]

as was shown, e.g., in [40], where [x]_+ denotes max(x, 0). As in the previous example, the optimistic strong secrecy capacity is also given by (13) if ordinary codes are applied. We are not aware of any results upper-bounding the strong secrecy rates for the Gaussian wiretap channel achievable with seeded wiretap codes, but, similar to the discrete case, we conjecture them to be no larger than (13).

The strong secrecy information leakage can also be shown to tend to zero at exponential speed [40], and Corollary 11 can be applied to conclude that (13) is the largest achievable pessimistic as well as optimistic semantic security rate when ordinary codes are used.

For the discrete wiretap channel, we will henceforth speak of the secrecy capacity given by (12). For the Gaussian wiretap channel, we will call (13) the ordinary secrecy capacity of the Gaussian wiretap channel.

III BRI Functions

By Corollary 11, every sequence of wiretap codes which ensures strong secrecy ensures semantic security at the same asymptotic rate if condition (4) is satisfied and the message set of every code in the sequence is reduced to a suitably large subset. However, this is only an existence statement, since it does not answer the question of how to choose the semantically secure message subsets. This is unsatisfactory from a practical point of view.

In this section, we introduce BRI functions, which replace UHFs in the modular coding scheme of Fig. 1. We formulate one of the central results of this paper, which is an upper bound on the degree of security which BRI functions can offer in a modular BRI scheme. This bound will be used in the asymptotic analysis to ensure semantic security with wiretap codes constructed from BRI functions and error-correcting codes. The discussion of the efficiency of BRI functions is postponed to the last section. The final result of this section is that BRI functions are UHFs on average; this is noted just for comparison and will not be used anywhere in this paper.

III-A BRI functions and modular BRI schemes

Definition 16.

A biregular irreducible (BRI) function is a function , where are finite sets, for which there exists a subset of such that for every

  1. -regularity: for some positive integer independent of ,

  2. -regularity: for some positive integer independent of ,

  3. Irreducibility: is a simple eigenvalue of the stochastic matrix on defined by

    (14)

    (see Lemma 17 for a proof that really is a stochastic matrix).

The second-largest eigenvalue modulus of is denoted by . is called the regularity set of , is called the rate of . For fixed , we will sometimes write instead of .

To prove that BRI functions are well-defined, we note the following lemma.

Lemma 17.

For any BRI function with regularity set , the matrix as defined in (14) is a stochastic matrix for every .

Proof.

Thus every row sum of equals . ∎

A BRI function can be used together with an ECC to construct a wiretap code. Assume is a wiretap channel. and can be blocklength- memoryless extensions of other channels like in the definition of wiretap codes, but the construction is nonasymptotic and further structure of and can be ignored. (The case that and are memoryless extensions is considered in Section VII.) Let be an ECC with message set , i.e., is a mapping from to and maps elements of back to . (We use the term error-correcting code to emphasize the difference to wiretap codes. Our use of the term implies that the mapping of codewords to channel input sequences, that is, channel modulation, is part of the code.) Assume that incurs a transmission error of at most , meaning that

Thus, elements of can be transmitted from the sender to the intended receiver with a small error probability. Additionally, let be a BRI function with regularity set . By some abuse of notation, for each we introduce a new channel denoted by and called the randomized inverse of whose transition probabilities are defined by

Thus, given a message and a seed, it chooses an element of the corresponding preimage set uniformly at random.
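In notation introduced only for this sketch (BRI function f with seed s and preimage size d_A from the regularity condition of Definition 16), the transition probabilities just described take the form

\[ f_s^{-1}(x \mid y) \;=\; \begin{cases} 1/d_A, & \text{if } f(x, s) = y,\\ 0, & \text{otherwise,} \end{cases} \]

for every message y in the regularity set, assuming, as in the definition, that every such y has exactly d_A preimages under f(\cdot, s).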

The ECC and the BRI function together define a seeded wiretap code by

This wiretap code is called a modular BRI scheme and denoted by . Modular BRI schemes are a formalization of the modular scheme depicted in Fig. 1 with UHFs replaced by BRI functions. Clearly, the maximal error incurred by satisfies

(15)

We also define and .

Clearly, the rate of the modular BRI scheme is determined by the rate which the pair achieves over together with the rate of . Therefore the regularity set of should be large. On the other hand, it will be seen in Theorem 19 that the degree of security which can be achieved by a BRI-prefix scheme depends on , more precisely, should be small. Thus we will be interested in making as large as possible while at the same time ensuring a small . Observe that

(16)

which implies the upper bound