1 Introduction
Message authentication allows the receiver to verify that a message comes from the legitimate sender and not from an adversary. Along with secrecy, authentication is one of the most fundamental properties in cryptography. It has direct real-world applications, for example in ensuring that the order for a financial transaction comes from somebody authorized to perform the transaction, and not from a criminal. Authentication is also used as a primitive in many other cryptographic protocols, for example key exchange protocols, where it serves to protect against man-in-the-middle and impersonation attacks.
When defining and proving the security of an authentication scheme, we distinguish between computational and unconditional security. In the first case, the definition and proof rely on the assumption that the adversary has limited computational resources, and often also on the conjecture that a certain problem cannot be solved within the specified resource bound. On the other hand, unconditional security makes no assumption on the computational resources available to an adversary; the scheme is guaranteed to be secure against adversaries with unbounded resources.
Another important aspect of the definition of security is whether it provides composability guarantees or not. It is known that certain definitions of security, although intuitively appealing, fail to guarantee that a cryptographic scheme remains secure in an arbitrary context. One of the known examples is a criterion based on the accessible information used in early security proofs for Quantum Key Distribution. Reference [9]
shows that it is possible for a protocol to satisfy this security criterion while the resulting key nevertheless cannot be used for one-time pad encryption of a message whose header is known to the adversary. Examples such as this one motivate the introduction of frameworks for composable security such as
[4, 2, 13]; protocols proven secure in such a framework are guaranteed to compose safely with other protocols in the framework, and to remain secure in an arbitrary context.

In this paper, we consider the strongest possible type of message authentication: we focus on composable, unconditionally secure schemes. It is known that for message authentication to work, Alice and Bob need to have some initial advantage over their adversary Eve; otherwise, Eve can impersonate Alice to Bob and Bob to Alice. What can the initial advantage be?
Previous research on authentication in information-theoretic cryptography has focused on the scenario in which Alice and Bob share randomness that is secret from their adversary Eve. Information-theoretically secure authentication can be achieved using universal classes of hash functions [31]: Alice and Bob share a secret key that encodes a particular function h from a suitable class of hash functions. To send a message m to Bob, Alice computes the tag t = h(m) and sends the pair (m, t). To verify that (m, t) comes from Alice, Bob checks that t = h(m). Research on this scenario has focused on finding suitable classes of functions for authentication and on proving lower bounds on the secret key size needed for a given level of security; see, for example, [26, 27, 29, 30, 19]. A variant of the basic scheme for authentication by universal hashing involves recycling part of the key when authenticating multiple messages; this was proposed in [31, Section 4]. Recently, the composable security of authentication by universal hashing both with and without key recycling has been established [22].
A strong motivation for exploring different possibilities to obtain a composable, unconditionally secure authenticated channel comes from the study of information-theoretically secure key distribution protocols in classical [18, 1] and quantum [3, 7] cryptography. These key distribution protocols require interaction between the honest participants over an authenticated channel. If Alice and Bob need an initial secret key for authentication, these protocols become key expansion rather than key distribution protocols. The investigation of whether the requirements for authentication can be lowered [14, 15, 16, 24, 25] led to the development of interactive authentication protocols, in which only partially secret and partially correlated strings suffice. In [24], an interactive protocol for authentication is proposed that works even if the adversary knows a substantial fraction of the secret key. In [25], it was shown that this interactive authentication protocol, combined with an information reconciliation protocol, can work even in the case when the randomness initially given to Alice and Bob is not perfectly but only partly correlated.
In this paper, we depart from the model of common randomness shared by Alice and Bob. The inspiration for this comes from the work of Wyner [32] on the wiretap channel and of Csiszár and Körner [6] on the broadcast channel with confidential messages. In these papers, it is shown that if the channel from Alice to Bob is less noisy than the channel from Alice to Eve, suitable encoding and decoding exist which accomplish error correction and secrecy simultaneously: Bob can correctly decode Alice’s message, but Eve remains ignorant of it. In the present paper, we ask whether a similar phenomenon is possible for authentication instead of secrecy, and we give an affirmative answer.
Another motivation for the present work is the study of authentication in the context of quantum key distribution. The results we prove in this paper, combined with the analysis of the composition of QKD and authentication [21, 23], show that QKD can be performed over insecure classical and quantum channels, between two parties who share no randomness initially, provided that the classical channel between them is less noisy than the channel between them and the adversary.
As far as the present author is aware, the idea of using an advantage in channel noise for composable, unconditionally secure message authentication has not been explored before. The closest that the present author has been able to find in the cryptography literature is [10], which considers the problem of running a traditional, key-based authentication protocol over a wiretap channel. Also interesting are a number of methods for authentication used in physical layer security for wireless networks. These methods exploit unique characteristics of the software or hardware of different devices, or unique characteristics of the channel between two locations, to distinguish legitimate from malicious signals. An overview of these techniques can be found in the surveys [20, Section VIII-D] and [33].
The multiple access channel from network information theory [5, Section 15.3] is also related to the present paper in that the multiple access channel has many senders and one receiver. However, all the senders and the receiver in the multiple access channel cooperate; they choose their encoding and decoding rules together so as to achieve certain rates of transmission from each sender to the receiver. In our setup, Alice and Bob cooperate, but Eve is malicious. She observes the encoding and decoding rules that Alice and Bob have agreed upon, and tries her best to fool Bob into accepting a message from her as if it is a genuine message from Alice.
The rest of the paper is structured as follows. In Section 2, we introduce the notation and certain basic results that we will use. In Section 3, we introduce Abstract Cryptography, the framework for composable security that we will use. In Section 4 we formally explain how an authenticated channel can be constructed from an advantage in channel noise, and we prove that the construction is composable and provides information theoretic security. In Section 5 we discuss some extensions of the results from the previous section, and in Section 6 we conclude the paper and note some possible directions for future work.
2 Preliminaries
We will often treat the set {0, 1}^n as a vector space over the field with two elements; thus, for x, y ∈ {0, 1}^n, x + y denotes the bitwise XOR of x and y, and for a vector x ∈ {0, 1}^n and a subset S ⊆ {0, 1}^n, x + S denotes the set {x + s : s ∈ S}.

By a Bernoulli random variable with parameter p we mean a random variable X such that Pr(X = 1) = p and Pr(X = 0) = 1 − p. We will often work with sequences of i.i.d. Bernoulli random variables, where the abbreviation i.i.d. stands for independent, identically distributed.

The notion of typical sequences plays a central role in information theory:
Definition 1.
A sequence x ∈ {0, 1}^n is called δ-typical for a sequence of n i.i.d. Bernoulli(p) random variables X_1, …, X_n if

|−(1/n) log Pr(X_1 = x_1, …, X_n = x_n) − h(p)| ≤ δ,

where h(p) = −p log p − (1 − p) log(1 − p) is the binary entropy function (all logarithms are taken to base 2). We denote the set of all δ-typical sequences for n i.i.d. Bernoulli(p) random variables by T_δ^n(p).
An important result in information theory is the Theorem of Typical Sequences [5, Theorems 3.1.1–3.1.2]:
Theorem 1.
Let X_1, …, X_n be a sequence of i.i.d. Bernoulli(p) random variables. Then,

Pr((X_1, …, X_n) ∈ T_δ^n(p)) → 1 as n → ∞.

In addition, we have the bound

|T_δ^n(p)| ≤ 2^{n(h(p) + δ)}

on the number of typical sequences.
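As an illustration (not part of the original text), the following Python sketch checks both claims of the theorem numerically; the helper names h and is_typical, and all parameter values, are our own choices.

```python
import math
import random

def h(p):
    """Binary entropy function; all logarithms are base 2."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def is_typical(x, p, delta):
    """delta-typicality for i.i.d. Bernoulli(p): |-(1/n) log2 Pr[x] - h(p)| <= delta."""
    n, k = len(x), sum(x)
    log_prob = k * math.log2(p) + (n - k) * math.log2(1 - p)
    return abs(-log_prob / n - h(p)) <= delta

random.seed(0)
p = 0.25

# First claim: a long i.i.d. Bernoulli(p) sequence is typical with high probability.
n, delta, trials = 2000, 0.05, 300
hits = sum(
    is_typical([1 if random.random() < p else 0 for _ in range(n)], p, delta)
    for _ in range(trials)
)
prob_typical = hits / trials

# Second claim: |T| <= 2^{n(h(p)+delta)}, checked exhaustively for a small n.
m, delta2 = 14, 0.1
count = sum(
    is_typical([(s >> i) & 1 for i in range(m)], p, delta2) for s in range(2 ** m)
)
```

With these parameters, prob_typical is close to 1 and count stays below the stated exponential bound.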
A simple but fruitful model for a noisy communication channel is given by the Binary Symmetric Channel. The Binary Symmetric Channel with parameter p acts on each input bit independently, transmitting it faithfully with probability 1 − p and flipping it with probability p. Thus, when the vector x ∈ {0, 1}^n is input into this channel, the output is x + e, where e is a vector of n i.i.d. Bernoulli(p) random variables.
3 Abstract Cryptography
In this section, we introduce Abstract Cryptography, the framework for composable security that we will use. The general case of Abstract Cryptography was introduced in [13]; however, for our purposes, it is sufficient to consider the special case for honest Alice and Bob and malicious Eve as developed in [12].
3.1 An algebra of resources and converters
By a resource, we mean a system with three interfaces where Alice, Bob and Eve can enter inputs and receive outputs. We will denote resources by calligraphic letters, for example R. It will be convenient to specify the functionality of a resource by giving pseudocode for it; for example, a channel from Alice to Bob that provides authentication but no secrecy can be described as "on input m from Alice, output m to Bob and Eve; on input m′ from Eve, output ⊥ to Bob", where we use ⊥ to denote an error message.
On the set of resources, we have a parallel composition operation, denoted by ‖, which takes two resources R and S and returns another resource R‖S. Thus, R‖S is a resource that provides Alice, Bob and Eve with access to the interfaces of both R and S.
By a converter, we mean a system with an inside and an outside interface, where the inside interface interacts with a resource and the outside interface interacts with a user. If α is a converter, R is a resource and i is an interface, then α^i R is another resource, where user i interacts with the outside interface of the converter α, and the other two users have their usual interfaces to R.
3.2 Distinguishers, distance, construction
By a distinguisher, we mean a system with four interfaces, three of which connect to the interfaces of a resource, and the fourth one outputs 0 or 1. Thus, a distinguisher D connected to a resource R is a system, denoted DR, that outputs a single bit.
We use distinguishers to define a notion of distance between resources:
Definition 2.
The distance between two resources R and S is

d(R, S) = sup_D |Pr(DR = 1) − Pr(DS = 1)|.
We take the supremum over all distinguishers D, placing no restriction on their computational resources. This choice corresponds to considering unconditional security (in another terminology, information-theoretic security).
From the definition, we can prove that d has the properties of a pseudometric on the set of resources:
Proposition 1.
For all resources R, S, T:

(Identity) d(R, R) = 0.

(Symmetry) d(R, S) = d(S, R).

(Triangle inequality) d(R, T) ≤ d(R, S) + d(S, T).
We can also prove that d has two additional useful properties, which formally capture the intuition "if R and S are close, then they remain close in an arbitrary context":
Proposition 2.
For all resources R, S, T, converters α and interfaces i:

(Non-increasing under a converter) d(α^i R, α^i S) ≤ d(R, S).

(Non-increasing under a resource in parallel) d(R‖T, S‖T) ≤ d(R, S).
To prove this proposition, observe that the distinguishers which first apply the converter α, or attach the resource T in parallel, form a subset of all distinguishers.
Before we can proceed to the definition of construction, we need to introduce protocols, filters, and simulators. By a protocol, we mean a pair of converters π = (π_A, π_B), one for Alice and one for Bob. By a filter, we mean a converter for Eve’s interface of a resource which blocks malicious actions from Eve; we will use symbols such as φ_R to denote the filter for resource R. By a simulator, we mean a converter for Eve’s interface of a resource; the goal of a simulator is to make the interface of one resource appear as the interface of another.
Now we are ready to define construction.
Definition 3.
We say that a protocol π = (π_A, π_B) constructs resource S from resource R within ε, denoted R ⟶(π, ε) S, if the following two conditions hold:

(close with Eve blocked) d(π_A π_B R φ_R, S φ_S) ≤ ε, where the converters π_A, π_B are attached at Alice’s and Bob’s interfaces of R and the filters φ_R, φ_S are attached at Eve’s interfaces of the respective resources.

(close with full access for Eve) There exists a simulator σ such that d(π_A π_B R, S σ) ≤ ε.
The typical interpretation of the definition of construction is the following: S is the goal, the ideal functionality that Alice and Bob want to achieve, and R is the real resource that they have available. The combination of R and the protocol π is required to be indistinguishable from S in two scenarios: with Eve blocked and with Eve present.
Since Eve’s interfaces to R and S may be different, we need to allow for the simulator σ in the second condition of the definition. If S is considered secure, then π_A π_B R should be considered at least as secure; this is because the strategies for Eve against S that first apply the converter σ form a subset of all her strategies against S.
3.3 General composition theorem
The notion of construction provides both parallel and sequential composition, as captured in the following theorem [12, Theorem 1]:
Theorem 2.

(Parallel Composition) If R ⟶(π, ε) S and R′ ⟶(π′, ε′) S′, then R‖R′ ⟶(ππ′, ε + ε′) S‖S′.

(Sequential Composition) If R ⟶(π, ε) S and S ⟶(π′, ε′) T, then R ⟶(π′ ∘ π, ε + ε′) T.

(Identity) For the identity protocol id and any resource R, R ⟶(id, 0) R.
This theorem captures formally the idea that if an ideal resource can be constructed from a real resource and a protocol, then the construction can safely be used instead of the ideal resource in an arbitrary context.
4 Constructing an authenticated channel from an advantage in channel noise
In this section, we show how Alice and Bob can use an advantage in channel noise to construct an authenticated channel.
First, we look at the goal: the ideal authenticated channel that Alice and Bob want to construct. The ideal resource for transmitting ℓ-bit authenticated messages from Alice to Bob is defined by the pseudocode:

On input m ∈ {0, 1}^ℓ from Alice, output m to Bob and Eve.

On input anything from Eve, output ⊥ to Bob.

Thus, Bob gets the guarantee: if anything other than ⊥ is output by the channel, then it must have come from Alice.
Next, we look at the noisy channel that Alice and Bob have available. Let 0 ≤ p < q ≤ 1/2 and consider the resource for transmitting n-bit words defined by the pseudocode:

On input x ∈ {0, 1}^n from Alice, draw n i.i.d. Bernoulli(p) random variables e = (e_1, …, e_n) and output x + e to Bob. Also output x to Eve.

On input x′ ∈ {0, 1}^n from Eve, draw n i.i.d. Bernoulli(q) random variables e′ = (e′_1, …, e′_n) and output x′ + e′ to Bob.

Thus, n-bit messages from Alice go through a binary symmetric channel with parameter p, while n-bit messages from Eve go through a binary symmetric channel with parameter q.
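The two pseudocode descriptions above can be mirrored as a Python sketch; the class names, the use of None for the error symbol ⊥, and the single-shot input methods are our own modelling choices, not part of the paper:

```python
import random

BOT = None  # stands for the error symbol ⊥

class IdealAuthChannel:
    """Ideal authenticated channel: Alice's message reaches Bob (and Eve) intact;
    anything injected by Eve results in the error symbol at Bob's interface."""
    def input_alice(self, m):
        self.to_bob, self.to_eve = m, m
    def input_eve(self, _anything):
        self.to_bob = BOT

class NoisyChannelResource:
    """Real resource: Alice's n-bit input passes through BSC(p), Eve's through
    BSC(q), and Eve sees Alice's input uncorrupted."""
    def __init__(self, p, q, rng=random):
        self.p, self.q, self.rng = p, q, rng
    def _bsc(self, x, r):
        return [xi ^ (1 if self.rng.random() < r else 0) for xi in x]
    def input_alice(self, x):
        self.to_eve = list(x)
        self.to_bob = self._bsc(x, self.p)
    def input_eve(self, x):
        self.to_bob = self._bsc(x, self.q)
```

With p = q = 0, the real resource degenerates into a perfect (but unauthenticated) channel, which makes the interface behavior easy to check.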
To construct the ideal from the real resource, Alice and Bob use suitable encoding and decoding of messages. We will denote by enc Alice’s encoding of messages into n-bit codewords for transmission over the noisy channel, and by dec Bob’s corresponding decoding. Our main result is the following:
Theorem 3.
Let 0 ≤ p < q ≤ 1/2. Then, for any rate R < h(q) − h(p), for any ε > 0 and for all sufficiently large n, there exists a protocol π = (enc, dec) for ⌊nR⌋-bit messages such that the noisy resource above constructs the ideal authenticated channel within ε.
To prove this theorem, we observe that there are two ways that the real system can fail:

Alice sends a message to Bob, which he decodes incorrectly or rejects. We call this decoding error and denote the maximum probability of it occurring by P_de.

Eve sends a message to Bob, which he accepts and decodes. We call this false acceptance and denote the maximum probability of it occurring by P_fa.
Then, in the first part of the proof, we show that there exist suitable encoding for Alice and decoding for Bob such that the probabilities of decoding error and false acceptance, P_de and P_fa, are both small. This is stated formally in the following proposition, which we prove in subsection 4.1:
Proposition 3.
Let 0 ≤ p < q ≤ 1/2. Then, for any R < h(q) − h(p), any ε > 0 and all sufficiently large n, there exist encoding and decoding of ⌊nR⌋-bit messages to n-bit codewords such that the probability of decoding error satisfies P_de ≤ ε and the probability of false acceptance satisfies P_fa ≤ ε.
In the second part of the proof, we show that if a real system has small probability of decoding error and of false acceptance, then this real system constructs the ideal system in the sense of Definition 3. In section 4.2, we show the following:
Proposition 4.
Let π = (enc, dec) be a protocol encoding ℓ-bit messages into n-bit codewords, and suppose that the resulting real system has probability of decoding error P_de and probability of false acceptance P_fa. Then,

(with Eve blocked) the real system with the filters attached is within P_de of the ideal authenticated channel with its filter attached.

(with full access for Eve) There is a simulator σ such that the real system is within max(P_de, P_fa) of the ideal authenticated channel with σ attached.
Now, we can complete the proof of Theorem 3: it follows immediately from Propositions 3 and 4. All that is left to do is to prove the two propositions, which we do in the following subsections.
4.1 Good encoding and decoding exist
In this section, we show that encoding for Alice and decoding for Bob exist that make the probabilities of decoding error and false acceptance both small, thereby proving Proposition 3. We follow the proof of the noisy channel coding theorem [5, Chapter 7] to bound the probability of decoding error, and perform an additional analysis to bound also the probability of false acceptance.
The encoding for Alice consists of selecting codewords x_1, …, x_M ∈ {0, 1}^n, one for each message. The decoding for Bob will be typical sequence decoding: Bob will decode the set of output sequences x_i + T_δ^n(p) to message i. More precisely, Bob’s decoding can be described by the pseudocode "on input y, if there is a unique i such that y + x_i ∈ T_δ^n(p), then output message i, otherwise output ⊥".
Now, given a rate R < h(q) − h(p) and ε > 0, we choose R′ with R < R′ < h(q) − h(p), and we use the probabilistic method to show the existence of two codebooks for Alice: a codebook of 2^{nR′} codewords achieving an average probability of decoding error at most ε/2, and a codebook of 2^{nR} codewords achieving a maximum probability of decoding error at most ε.
We focus on the first codebook. We choose random variables X_1, …, X_{2^{nR′}} independently and uniformly from {0, 1}^n and let this be our codebook. Now suppose Alice inputs X_1 into the channel, and Bob gets Y = X_1 + e. By the union bound, the probability of decoding error is at most

Pr(e ∉ T_δ^n(p)) + Σ_{j ≠ 1} Pr(Y + X_j ∈ T_δ^n(p)).

The first term goes to zero as n goes to infinity, by the Theorem of Typical Sequences (Theorem 1). The second term is bounded by

2^{nR′} · 2^{n(h(p) + δ)} / 2^n = 2^{−n(1 − R′ − h(p) − δ)},

which also goes to zero as n goes to infinity, because R′ + h(p) < h(q) ≤ 1 and δ can be chosen accordingly small.
Thus, for a random codebook, the expected average probability of decoding error goes to zero as n goes to infinity. Therefore, for any ε > 0 and for all sufficiently large n, there exist particular codebooks of 2^{nR′} codewords whose average probability of decoding error is at most ε/2. Picking the best 2^{nR} codewords of such a codebook, we obtain a codebook of size 2^{nR} such that the maximum probability of decoding error is at most ε.
Next, we need to analyze the probability that Bob accepts a message coming from Eve. The set of channel outputs that Bob accepts is contained in

∪_i (x_i + T_δ^n(p)),

a set of size at most 2^{nR} · 2^{n(h(p) + δ)}. What is the probability that Eve’s message is corrupted to an output in this set?

Suppose Eve inputs x′ into the channel, resulting in output Y = x′ + e′ for Bob. Then

Pr(Bob accepts Y) ≤ Pr(e′ ∉ T_δ^n(q)) + 2^{n(R + h(p) + δ)} · 2^{−n(h(q) − δ)}.

Both of these terms go to zero as n goes to infinity, provided δ is chosen small enough that R + h(p) + 2δ < h(q). Thus, for all sufficiently large n, the probability of false acceptance will be below ε.
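The following Monte Carlo sketch illustrates the two failure modes numerically. It replaces typical-set decoding with the closely related Hamming-distance threshold decoding (accept if exactly one codeword lies within radius τn, where p < τ < q); all names and parameter values are our own illustrative choices, not from the paper. Even when Eve replays a codeword she observed, her noisier channel pushes the output outside Bob's acceptance set.

```python
import random

random.seed(2)
n, M = 400, 16          # block length, number of codewords
p, q = 0.05, 0.30       # Alice->Bob noise p, Eve->Bob noise q (p < q)
tau = 0.17              # acceptance radius fraction, chosen with p < tau < q

def bsc(x, r):
    """Flip each bit independently with probability r."""
    return [xi ^ (1 if random.random() < r else 0) for xi in x]

def dist(a, b):
    """Hamming distance."""
    return sum(ai != bi for ai, bi in zip(a, b))

codebook = [[random.randint(0, 1) for _ in range(n)] for _ in range(M)]

def decode(y):
    """Accept iff exactly one codeword lies within Hamming distance tau*n."""
    close = [i for i, c in enumerate(codebook) if dist(y, c) <= tau * n]
    return close[0] if len(close) == 1 else None   # None plays the role of ⊥

trials = 200
# Alice transmits: Bob should decode correctly.
ok = sum(decode(bsc(codebook[i % M], p)) == i % M for i in range(trials))
# Eve replays the codeword she observed: her noisier channel gives her away.
rejected = sum(decode(bsc(codebook[i % M], q)) is None for i in range(trials))
```

With these parameters, essentially all of Alice's transmissions decode correctly while essentially all of Eve's forgeries are rejected, mirroring the simultaneous bounds on P_de and P_fa.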
4.2 Construction in the sense of Abstract Cryptography
In the previous subsection, we established that it is possible for a real system to achieve simultaneously low probabilities of decoding error and of false acceptance. In this subsection, we show that these low probabilities imply that the real system constructs the ideal system in the sense of Abstract Cryptography. We will do this by proving Proposition 4.
First, it is helpful to take a step back and develop some general tools for evaluating the distance between resources. Our first lemma shows that we can restrict attention to distinguishers following a deterministic strategy:
Lemma 1.
Let R, S be two resources. Then, for any η > 0, there is a deterministic distinguisher D such that

|Pr(DR = 1) − Pr(DS = 1)| ≥ d(R, S) − η.
Proof.
Let D be any distinguisher such that |Pr(DR = 1) − Pr(DS = 1)| ≥ d(R, S) − η; such a distinguisher exists because d is defined as a supremum. If D is deterministic, we are done. Otherwise, D is a probabilistic mixture of deterministic distinguishers, and there must exist a deterministic distinguisher D′ in this mixture such that |Pr(D′R = 1) − Pr(D′S = 1)| ≥ |Pr(DR = 1) − Pr(DS = 1)|.
∎
Next, we focus on evaluating the distance between resources that provide no interaction or only one round of interaction. It is known that for resources that provide a single output, the distinguishing advantage is half the ℓ1 distance between the output probability distributions:
Lemma 2.
Let R, S be two resources that take no input and provide an output in some discrete set Z. Then, using P_R, P_S to denote the respective probability distributions over outputs, we have

d(R, S) = (1/2) Σ_{z ∈ Z} |P_R(z) − P_S(z)|.
Proof.
Let D be the distinguisher given by the pseudocode "On input z, if P_R(z) ≥ P_S(z) output 1, else output 0." Then,

Pr(DR = 1) − Pr(DS = 1) = Σ_{z : P_R(z) ≥ P_S(z)} (P_R(z) − P_S(z)) = (1/2) Σ_{z ∈ Z} |P_R(z) − P_S(z)|.

Now let D′ be any other distinguisher. Without loss of generality, assume Pr(D′R = 1) ≥ Pr(D′S = 1) (otherwise flip the output bit of D′). Let g(z) be the probability that D′ outputs 1 on input z. Then

Pr(D′R = 1) − Pr(D′S = 1) = Σ_{z ∈ Z} g(z)(P_R(z) − P_S(z)) ≤ Σ_{z : P_R(z) ≥ P_S(z)} (P_R(z) − P_S(z)).
∎
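A small numerical illustration of Lemma 2, using toy output distributions of our own choosing: the advantage of the distinguisher from the proof equals half the ℓ1 distance.

```python
# Output distributions of two hypothetical single-output resources over {0, 1, 2}.
P = {0: 0.5, 1: 0.3, 2: 0.2}   # resource R
Q = {0: 0.2, 1: 0.3, 2: 0.5}   # resource S

# Optimal distinguisher from the proof: output 1 exactly when P(z) >= Q(z).
advantage = sum(P[z] - Q[z] for z in P if P[z] >= Q[z])

# Half the l1 distance between the two distributions.
half_l1 = 0.5 * sum(abs(P[z] - Q[z]) for z in P)
```

Both quantities evaluate to 0.3 here, matching the statement of the lemma.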
Now we extend this result to resources that take one input and return one output.
Lemma 3.
Let R, S be two resources that take an input in some discrete set A and provide an output in some (possibly different) discrete set B. Let P_R(b|a), P_S(b|a) be the respective conditional probabilities over outputs given inputs. Then,

d(R, S) = (1/2) max_{a ∈ A} Σ_{b ∈ B} |P_R(b|a) − P_S(b|a)|.
Proof.
From Lemma 1 we know that we can restrict attention to deterministic distinguishers. Now, we consider a deterministic distinguisher between R and S whose strategy is to enter input a. The distinguisher is now in a position to try to tell the difference between the output distributions P_R(·|a) and P_S(·|a); by Lemma 2, the best advantage of such a distinguisher is (1/2) Σ_{b ∈ B} |P_R(b|a) − P_S(b|a)|.
To complete the proof, it remains to observe that the best distinguishing advantage between R and S is obtained by the deterministic distinguisher that uses the optimal input a. ∎
Now, we can complete the proof of Proposition 4:
Proof.
First, we show the first claim, where Eve’s interfaces are blocked by the filters. Both resources then take a single input at Alice’s interface and return a single output at Bob’s interface. The ideal resource always outputs Alice’s message m unchanged, while the real resource occasionally makes an error in the transmission; thus, from Lemma 3, the distinguishing advantage is at most the maximum over messages m of the probability of a decoding error on input m, which is exactly P_de.
Next, we consider the second part of Proposition 4. First, we have to choose a suitable simulator. When Alice inputs a message m to the real resource, the codeword enc(m) comes out uncorrupted at Eve’s interface. On the other hand, when Alice inputs m to the ideal resource, m itself appears at Eve’s interface. Therefore, we want the simulator to take m and convert it to the corresponding codeword enc(m). Further, the real resource expects n-bit inputs at Eve’s interface, while the ideal resource expects ℓ-bit inputs. Therefore, the simulator has to convert Eve’s n-bit inputs into ℓ-bit inputs. Since the ideal resource outputs an error to Bob on any input from Eve, it does not matter how the simulator performs this conversion; thus, we can assume for simplicity that it maps any n-bit input from Eve to a sequence of ℓ zeros. To summarize, we choose the simulator σ given by the pseudocode: "On input m at the inside interface, output enc(m) at the outside interface. On input x′ at the outside interface, output ℓ zeros at the inside interface."
Now, we have to evaluate the distance between the real system and the ideal resource with the simulator σ attached. From the point of view of a distinguisher, both are single-input, single-output devices: the inputs are of the form m (at Alice’s interface) or x′ (at Eve’s interface), and the outputs consist of what appears at Bob’s and Eve’s interfaces. Thus, Lemma 3 applies. If the distinguisher chooses an input m at Alice’s interface, then his maximum advantage is the probability that the real system makes a decoding error on input m from Alice. If the distinguisher chooses an input x′ at Eve’s interface, then his maximum advantage is the probability that the real system does not output an error to Bob. Thus, we obtain a distance of at most max(P_de, P_fa),
as needed. ∎
5 Extensions
In this section we consider some extensions of the results of the previous section. First, we consider an extension to more general models of a noisy channel. Then, we consider the possibility of proving a converse result. Next, we consider an extension that allows the adversary to block messages from Alice to Bob. Finally, we consider the computational efficiency of encoding and decoding.
5.1 More general models of a noisy channel
Let X, Y, Z be finite alphabets for Alice’s input, Bob’s output and Eve’s input respectively. Let W = {W(y|x)} and V = {V(y|z)} be two sets of conditional probabilities and consider the real resource for transmitting n-symbol words given by the pseudocode:

On input x = (x_1, …, x_n) ∈ X^n from Alice, output y = (y_1, …, y_n) to Bob, where each y_i is drawn independently according to the distribution W(·|x_i). Also output x to Eve.

On input z = (z_1, …, z_n) ∈ Z^n from Eve, output y to Bob, where each y_i is drawn independently according to the distribution V(·|z_i).

Thus, Alice’s messages pass through a discrete memoryless channel with transition probabilities W and Eve’s messages pass through a discrete memoryless channel with transition probabilities V.
Theorem 4.
For every rate R satisfying

R < sup_{P_X} min( I(X; Y), min_{z ∈ Z} H(Y|Z = z) − H(Y|X) ),   (1)

for every ε > 0 and for all sufficiently large n, there exists a protocol π for ⌊nR⌋-bit messages such that the general noisy resource above constructs the ideal authenticated channel within ε.

In equation (1), I(X; Y) denotes the mutual information, and H(Y|X) and H(Y|Z = z) denote conditional Shannon entropies, computed with the probability mass functions induced by the channel W or V respectively. The supremum is taken over all probability mass functions P_X on X, where each choice of P_X, combined with the transition probabilities W, induces a joint probability mass function on X × Y.
For the case when both W and V are weakly symmetric [5, Section 7.2] (i.e. the rows W(·|x) for different x are permutations of each other and the column sums Σ_x W(y|x) are the same for all y, and similarly for V), the right-hand side of equation (1) simplifies to an expression with a nice intuitive interpretation:

C_AB − C_EB,

where C_AB is the capacity of the channel from Alice to Bob and C_EB is the capacity of the channel from Eve to Bob. Thus, if both channels are weakly symmetric, Alice can transmit information to Bob at any rate up to the difference between the capacity of the channel from Alice to Bob and the capacity of the channel from Eve to Bob.
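For two binary symmetric channels, which are weakly symmetric, this capacity difference equals h(q) − h(p). A quick numerical check, with parameter values that are our own illustrative choices:

```python
import math

def h(p):
    """Binary entropy function (logs base 2)."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, q = 0.05, 0.30          # Alice->Bob and Eve->Bob crossover probabilities
cap_ab = 1 - h(p)          # capacity of BSC(p)
cap_eb = 1 - h(q)          # capacity of BSC(q)
rate_bound = cap_ab - cap_eb   # equals h(q) - h(p), about 0.595 bits/channel use
```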
We proceed to prove Theorem 4. Again, we look at the two cases of decoding error and false acceptance. Proposition 4 from Section 4 carries over to this setting as well, because its proof does not rely on the size of the alphabets at the three terminals. What remains to be done is to show that low probabilities of decoding error and false acceptance are simultaneously achievable. We have the following:
Proposition 5.
For any rate R satisfying equation (1), any ε > 0 and all sufficiently large n, there exist encoding and decoding of ⌊nR⌋-bit messages into n-symbol codewords such that P_de ≤ ε and P_fa ≤ ε.
Proof.
We need the notion of joint typicality [5, Section 7.6]:
Definition 4.
Let (X, Y) be a pair of random variables taking values in X × Y with joint probability mass function P_{XY}. Let (X_1, Y_1), …, (X_n, Y_n) be a sequence of i.i.d. pairs, each pair having the same distribution as (X, Y). Let δ > 0. An element (x, y) ∈ X^n × Y^n is jointly δ-typical if

|−(1/n) log Pr(X_1 = x_1, …, X_n = x_n) − H(X)| ≤ δ,

|−(1/n) log Pr(Y_1 = y_1, …, Y_n = y_n) − H(Y)| ≤ δ, and

|−(1/n) log Pr(X_1 = x_1, Y_1 = y_1, …, X_n = x_n, Y_n = y_n) − H(X, Y)| ≤ δ.

The set of all jointly δ-typical sequences for length n and probability mass function P_{XY} is denoted T_δ^n(P_{XY}).
Theorem 5.
In the setup from the definition above, we have

Pr(((X_1, Y_1), …, (X_n, Y_n)) ∈ T_δ^n(P_{XY})) → 1 as n → ∞.

Moreover, if the sequences X̃ = (X̃_1, …, X̃_n) and Ỹ = (Ỹ_1, …, Ỹ_n) are independent of each other but have the same marginals as (X_1, …, X_n) and (Y_1, …, Y_n), then

Pr((X̃, Ỹ) ∈ T_δ^n(P_{XY})) ≤ 2^{−n(I(X; Y) − 3δ)}.
Now, we can proceed to prove Proposition 5. As in Section 4, we follow the proof of the noisy channel coding theorem [5, Chapter 7] to bound the probability of decoding error, and perform an additional analysis to bound also the probability of false acceptance. Let the probability mass function P_X on X and the constant δ > 0 be chosen such that R + 4δ is still below the quantity inside the supremum in equation (1), computed for this P_X.
Alice chooses 2^{nR} codewords x_1, …, x_{2^{nR}} at random, with each symbol of each codeword being drawn independently with probability mass function P_X. Bob uses jointly-typical decoding: "On input y, if there is a unique i such that (x_i, y) ∈ T_δ^n(P_{XY}), then decode to message i, otherwise output ⊥."
If Alice inputs x_1 into the channel and Bob gets output Y, then Bob’s probability of decoding error is at most

Pr((x_1, Y) ∉ T_δ^n(P_{XY})) + 2^{nR} · 2^{−n(I(X; Y) − 3δ)},

and both terms go to zero as n goes to infinity, by Theorem 5 and the choice of P_X and δ.
Thus, for any ε > 0 and all sufficiently large n, there exist particular codebooks of slightly higher rate whose average probability of decoding error is at most ε/2. Picking the best 2^{nR} codewords of such a codebook, we obtain a codebook of size 2^{nR} and maximum probability of decoding error at most ε.
Next, we need to bound the probability that Bob accepts a message coming from Eve. Let B_i be the set of channel outputs that Bob decodes to message i. We will bound the number of elements of B_i: using the definition of joint typicality, every y ∈ B_i satisfies

Pr(Y = y | X = x_i) = Pr(X = x_i, Y = y) / Pr(X = x_i) ≥ 2^{−n(H(X, Y) + δ)} / 2^{−n(H(X) − δ)} = 2^{−n(H(Y|X) + 2δ)},

so |B_i| ≤ 2^{n(H(Y|X) + 2δ)}, and the total set B = ∪_i B_i of outputs that Bob accepts has size at most 2^{n(R + H(Y|X) + 2δ)}.
Now suppose that Eve inputs z into the channel and Bob gets output Y. What is the probability that Bob doesn’t decode to ⊥? Splitting according to whether Y is typical for the output distribution induced by z, we obtain

Pr(Y ∈ B) ≤ Pr(Y not typical) + 2^{n(R + H(Y|X) + 2δ)} · 2^{−n(min_{z′} H(Y|Z = z′) − δ)},

and both terms go to zero as n goes to infinity, by the choice of P_X and δ. (Note that in bounding the probability that Y is not typical, we have used an extension of the theorem of typical sequences to a sequence of random variables that are independent but not necessarily identically distributed; this extension has the same proof: Chebyshev’s Inequality, then the Law of Large Numbers, then the Theorem of Typical Sequences.) This completes the proof of Proposition 5. ∎
5.2 A converse result?
A natural question is whether one can prove a converse result; that is, whether one can prove that if Alice and Bob attempt to transmit information at a rate exceeding the bound of equation (1) in bits per channel use, then they must necessarily sacrifice either error correction or authentication. We give an example showing that this is not the case.
Let the alphabet for Alice be {0, 1}, the alphabet for Bob be {0, 1, 2, 3}, and the alphabet for Eve be {0, 1}. Let the transition probabilities from Alice to Bob be

W(0|0) = W(1|1) = 1 − p,  W(1|0) = W(0|1) = p,

all other probabilities being zero. Thus, the channel from Alice to Bob is a binary symmetric channel with parameter p that only uses the first two symbols of Bob’s alphabet. Let the transition probabilities from Eve to Bob be

V(2|0) = V(3|1) = 1,

all other probabilities being zero. Thus, the channel from Eve to Bob is a perfect binary channel that uses only the second two symbols of Bob’s alphabet. Then, the upper bound from equation (1) is 0, since the output of Eve’s channel is deterministic. Nevertheless, it is clear that Alice and Bob can transmit at any rate up to the capacity of the binary symmetric channel between them and can achieve both authentication and error correction. Indeed, Bob can tell that a message comes from Eve by the presence of the output symbols 2 and 3 from the channel.
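The separation in this example can be made concrete with a small sketch (function names and parameter values are our own): Bob recognizes Eve's transmissions simply because they arrive on the second pair of symbols.

```python
import random

random.seed(3)
p = 0.1  # crossover probability of Alice's binary symmetric channel

def channel_from_alice(x):
    """BSC(p) acting on Bob's first two symbols {0, 1}."""
    return [xi ^ (1 if random.random() < p else 0) for xi in x]

def channel_from_eve(x):
    """Noiseless channel mapping Eve's bits onto Bob's symbols {2, 3}."""
    return [xi + 2 for xi in x]

def bob_sees_eve(y):
    """Eve's transmissions are unmistakable: they use symbols 2 and 3."""
    return any(s >= 2 for s in y)

msg = [0, 1, 1, 0, 1]
```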
This example shows that the upper bound on the rate given by equation (1) is not a fundamental limit but is an artifact of the particular proof technique used. It also shows that it is possible to simultaneously achieve error correction and authentication in certain cases where the channel from Alice to Bob is more noisy than the channel from Eve to Bob.
5.3 Adversaries that can block messages
Certain treatments of authenticated channels, for example [22], allow the adversary to block Alice’s messages from reaching Bob for both the real and the ideal resource. We can model this by adding the following line to the pseudocode of both the real resource and the ideal resource :

On input b ∈ {0, 1} at a (separate) Eve interface, if b = 1 then do not output anything to Bob in line 1.
This extra option for the adversary Eve necessitates a small modification in the proof of Proposition 4: the filters have to always input b = 0 to their respective resources, the simulator has to convey the bit b from the outside to the inside interface, and the distinguisher has to consider inputs of the form (m, x′, b), where m is a message or ⊥, x′ is an injected string or ⊥, b ∈ {0, 1}, and if b = 0 then at least one of m, x′ has to be ⊥.
5.4 Efficient encoding and decoding
In this subsection, we return to the Binary Symmetric Channel model from Section 4. At first sight, Theorem 3 looks like a purely existential result: it states the existence of good encoding and decoding, but does not give an explicit construction, nor does it specify the computational resources required for good encoding and decoding.
However, if we look closely at the proof, we see that it depends only on the following: the set of all channel outputs that Bob accepts is too small from the point of view of Eve, so that an input from Eve is unlikely to be corrupted into this set. Thus, we can take any class of error-correcting codes with efficient encoding and decoding, for example low-density parity-check codes [8, 28, 11], and within that class we can choose a code with the number of codewords and the radius of the Hamming balls decoded to each codeword as required for the proof of Theorem 3.
6 Conclusion and future work
We have shown that if the channel from Alice to Bob is less noisy than the channel from Eve to Bob, then Alice and Bob can accomplish error correction and message authentication simultaneously. The intuition behind the result is that for long sequences, there is a subset of the channel outputs for Bob that is large when measured by the probability that a codeword from Alice is corrupted into it, and is at the same time small when measured by the probability that any input from Eve is corrupted into it.
To ensure seamless integration of the authentication scheme proposed here with other cryptographic protocols, we have proved it provides composable, information theoretic security using the Abstract Cryptography framework. We have also shown that error correcting codes with efficient encoding and decoding can be used, as long as the set of outputs that Bob accepts is small from the point of view of Eve.
The present paper raises a number of interesting questions that can be the subject of future work; we list some of them here. First, what is the set of all rates at which Alice and Bob can transmit information while achieving both error correction and authentication? In the present paper, we have shown that rates up to a certain bound are always achievable, but we have also given an example where a rate higher than the bound is possible. Thus, the complete characterization of the achievable rates remains open. Second, would allowing two-way communication and interaction between Alice and Bob give further possibilities, as was the case for secrecy in the wiretap channel [17], and for authentication in the shared randomness model [24, 25]? Third, is it possible to combine the coding for the broadcast channel and for the authentication channel to achieve error correction, authentication and secrecy simultaneously?
Acknowledgments
This work was supported by the Luxembourg National Research Fund (CORE project AToMS).
References
 [1] Rudolf Ahlswede and Imre Csiszár. Common randomness in information theory and cryptography. I. Secret sharing. IEEE Transactions on Information Theory, 39(4):1121–1132, 1993.
 [2] Michael Backes, Birgit Pfitzmann, and Michael Waidner. A general composition theorem for secure reactive systems. In Theory of Cryptography Conference, pages 336–354. Springer, 2004.
 [3] Charles H Bennett and Gilles Brassard. Quantum cryptography: Public key distribution and coin tossing. In International Conference on Computers, Systems and Signal Processing (Bangalore, India, Dec. 1984), pages 175–179, 1984.
 [4] Ran Canetti. Universally composable security: A new paradigm for cryptographic protocols. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 136–145. IEEE, 2001.
 [5] Thomas M Cover and Joy A Thomas. Elements of Information Theory, 2nd edition. Wiley-Interscience, 2006.
 [6] Imre Csiszár and János Körner. Broadcast channels with confidential messages. IEEE Transactions on Information Theory, 24(3):339–348, 1978.
 [7] Artur K Ekert. Quantum cryptography based on Bell's theorem. Physical Review Letters, 67(6):661, 1991.
 [8] Robert Gallager. Low-density parity-check codes. IRE Transactions on Information Theory, 8(1):21–28, 1962.
 [9] Robert König, Renato Renner, Andor Bariska, and Ueli Maurer. Small accessible quantum information does not imply security. Physical Review Letters, 98(14):140502, 2007.
 [10] Lifeng Lai, Hesham El Gamal, and H Vincent Poor. Authentication over noisy channels. IEEE Transactions on Information Theory, 55(2):906–916, 2009.
 [11] Michael G Luby, Michael Mitzenmacher, Mohammad Amin Shokrollahi, and Daniel A Spielman. Improved lowdensity paritycheck codes using irregular graphs. IEEE Transactions on Information Theory, 47(2):585–598, 2001.
 [12] Ueli Maurer. Constructive cryptography: a new paradigm for security definitions and proofs. TOSCA, 6993:33–56, 2011.
 [13] Ueli Maurer and Renato Renner. Abstract cryptography. In Innovations in Computer Science, 2011.
 [14] Ueli Maurer and Stefan Wolf. Secret-key agreement over unauthenticated public channels. I. Definitions and a completeness result. IEEE Transactions on Information Theory, 49(4):822–831, 2003.
 [15] Ueli Maurer and Stefan Wolf. Secret-key agreement over unauthenticated public channels. II. The simulatability condition. IEEE Transactions on Information Theory, 49(4):832–838, 2003.
 [16] Ueli Maurer and Stefan Wolf. Secret-key agreement over unauthenticated public channels. III. Privacy amplification. IEEE Transactions on Information Theory, 49(4):839–851, 2003.
 [17] Ueli M Maurer. Perfect cryptographic security from partially independent channels. In Proceedings of the twenty-third annual ACM Symposium on Theory of Computing, pages 561–571. ACM, 1991.
 [18] Ueli M Maurer. Secret key agreement by public discussion from common information. IEEE Transactions on Information Theory, 39(3):733–742, 1993.
 [19] Ueli M Maurer. Authentication theory and hypothesis testing. IEEE Transactions on Information Theory, 46(4):1350–1356, 2000.
 [20] Amitav Mukherjee, S Ali A Fakoorian, Jing Huang, and A Lee Swindlehurst. Principles of physical layer security in multiuser wireless networks: A survey. IEEE Communications Surveys & Tutorials, 16(3):1550–1573, 2014.
 [21] Jörn Müller-Quade and Renato Renner. Composability in quantum cryptography. New Journal of Physics, 11(8):085006, 2009.
 [22] Christopher Portmann. Key recycling in authentication. IEEE Transactions on Information Theory, 60(7):4383–4396, 2014.
 [23] Christopher Portmann and Renato Renner. Cryptographic security of quantum key distribution. arXiv preprint arXiv:1409.3525, 2014.
 [24] Renato Renner and Stefan Wolf. Unconditional authenticity and privacy from an arbitrarily weak secret. In Annual International Cryptology Conference, pages 78–95. Springer, 2003.
 [25] Renato Renner and Stefan Wolf. The exact price for unconditionally secure asymmetric cryptography. In EUROCRYPT, volume 3027, pages 109–125. Springer, 2004.
 [26] Gustavus J Simmons. Authentication theory/coding theory. In Workshop on the Theory and Application of Cryptographic Techniques, pages 411–431. Springer, 1984.
 [27] Gustavus J Simmons. A survey of information authentication. Proceedings of the IEEE, 76(5):603–620, 1988.
 [28] Daniel Alan Spielman. Computationally efficient error-correcting codes and holographic proofs. PhD thesis, Massachusetts Institute of Technology, 1995.
 [29] Douglas R Stinson. Universal hashing and authentication codes. In Annual International Cryptology Conference, pages 74–85. Springer, 1991.
 [30] Douglas R. Stinson. Combinatorial techniques for universal hashing. Journal of Computer and System Sciences, 48(2):337–346, 1994.
 [31] Mark N Wegman and J Lawrence Carter. New hash functions and their use in authentication and set equality. Journal of computer and system sciences, 22(3):265–279, 1981.
 [32] Aaron D Wyner. The wiretap channel. Bell Labs Technical Journal, 54(8):1355–1387, 1975.
 [33] Kai Zeng, Kannan Govindan, and Prasant Mohapatra. Noncryptographic authentication and identification in wireless networks [security and privacy in emerging wireless networks]. IEEE Wireless Communications, 17(5), 2010.