## I Introduction

Access points like WiFi hot spots and cellular base stations are, for wireless devices, the gateway to the network. Unfortunately, access points are also the network’s most critical bottleneck. As more kinds of devices become network-reliant, both the number of communicating devices and the diversity of their communication needs grow. Little is known about how to code under high variation in the number and variety of communicators.

Multiple-transmitter channels are well understood in information theory only when the number and identity of transmitters are fixed and known. Even in this known-transmitter regime, information-theoretic solutions are prohibitively complex to implement. As a result, orthogonalization methods, such as TDMA, FDMA, or orthogonal CDMA, are used instead. Orthogonalization strategies simplify coding by scheduling the transmitters, but such methods can at best attain the single-transmitter capacity of the channel, which is significantly smaller than the multi-transmitter capacity. As a result, most random access protocols currently in use rely on collision avoidance, which again cannot surpass the single-transmitter capacity of the channel and may be significantly worse since the unknown transmitter set makes it difficult to schedule or coordinate among transmitters. Collision avoidance is achieved either through variations of the legacy (slotted) ALOHA or carrier sense multiple access (CSMA). ALOHA, which uses random transmission times and back-off schedules, achieves only a fraction $1/e \approx 0.37$ of the single-transmitter capacity of the channel [1]. In CSMA, each transmitter tries to avoid collisions by verifying the absence of other traffic before starting a transmission over the shared channel; when collisions do occur, for example because two transmitters begin transmission at the same time, they are handled by aborting the transmission, sending a jamming signal to be sure that all transmitters are aware of the collision, and then restarting the procedure at a random time, which again introduces inefficiencies. The state of the art in random access coding is "treating interference as noise," which is part of newer CDMA-based standards. While this strategy can deal with random access better than ALOHA, it is still far inferior to the theoretical limits.
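For intuition on the $1/e$ figure: in the standard textbook model of slotted ALOHA (not specific to this paper), the number of transmission attempts per slot is Poisson with mean equal to the offered load, a slot succeeds only when exactly one transmitter sends, and the resulting throughput $G e^{-G}$ peaks at $1/e$. A minimal numerical check:

```python
import math

def slotted_aloha_throughput(load: float) -> float:
    """Expected successful packets per slot when attempts per slot are
    Poisson(load): a slot succeeds iff exactly one transmitter sends,
    which happens with probability load * exp(-load)."""
    return load * math.exp(-load)

# Sweep the offered load; the throughput peaks at load = 1 with value 1/e.
loads = [i / 100 for i in range(1, 301)]
peak_load = max(loads, key=slotted_aloha_throughput)
peak = slotted_aloha_throughput(peak_load)
print(f"peak at load {peak_load:.2f}: {peak:.4f} (1/e = {1 / math.e:.4f})")
```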

Even from a purely theoretical perspective, a satisfactory solution to *random* access remains to be found. The MAC model in which $k$ out of $n$ transmitters are active was studied by
D'yachkov and Rykov [2] and Mathys [3] for zero-error coding on a noiseless adder MAC, and by Bassalygo and Pinsker [4] for an asynchronous model in which the information is considered erased if more than one transmitter is active at a time. See [5] for more. While zero-error code designs are mathematically elegant, they are also combinatorial in nature, and their complexity scales exponentially with the number of transmitters.
Two-layer MAC decoders, with outer layer codes that work to remove channel noise and inner layer codes that work to resolve conflicts, are proposed in [6, 7]. Like the codes in [2, 3],
the codes in [4, 6] are designed for a predetermined number of transmitters, $k$;
it is not clear how robust they are to randomness in the transmitters’ arrivals and departures. Minero et al. [8] studied a random access model in which the receiver knows the transmitter activity pattern, and the transmitters opportunistically send data at the highest possible rate. The receiver recovers only a portion of the messages sent, depending on the current level of activity in the channel.

This paper poses the question of whether it is possible, in a scenario where no one knows how many transmitters are active, for the receiver to almost always recover the messages sent by all active transmitters. Surprisingly, we find that not only is reliable decoding possible in this regime, but, for the class of permutation-invariant channels [5], it is possible to attain both the capacity and the dispersion of the MAC in operation; that is, we do as well in first- and second-order performance as if the transmitter activity were known everywhere a priori. Since the capacity region of a MAC varies with the number of transmitters, it is tempting to believe that the transmitters of a random access system must somehow vary their codebook size in order to match their transmission rate to the capacity region of the MAC in operation. Instead, we here allow the decoder to vary its decoding time depending on the observed channel output – thereby adjusting the rate at which each transmitter communicates by changing not the size but the blocklength of each transmitter’s codebook.

Codes that can accommodate variable decoding times are called *rateless codes*. They were originally analyzed by Burnashev [9], who computed the error exponent of variable-length coding over a known point-to-point channel. Polyanskiy et al. [10] provided a dispersion-style analysis of the same scenario. A practical implementation of rateless codes for an erasure channel with an unknown erasure probability appeared in [11]. An analysis of rateless coding over an unknown binary symmetric channel appeared in [12] and was extended to an arbitrary discrete memoryless channel in [13, 14] using a decoder that tracks Goppa's empirical mutual information and decodes once that quantity passes a threshold. In [15], the Jeffreys prior is used to weight unknown channels. Unlike the codes described in [9, 10, 11, 12, 13, 14, 15], which allow truly arbitrary decoding times, in this paper we allow decoding only at a predetermined list of possible times $n_0 \le n_1 \le \cdots \le n_K$. This strategy both eases practical implementation and reduces feedback. In particular, the schemes in [9, 10, 11, 12, 13, 14, 15] transmit a single-bit acknowledgment message from the decoder to the encoder(s) once the decoder is ready to decode. Because the decoding time is random, this so-called "single-bit" feedback forces the transmitter(s) to listen to the channel constantly, at every time step trying to discern whether or not a transmission was received, which requires full-duplex devices or doubles the effective blocklength and can be quite expensive. Thus while the receiver technically sends only "one bit" of feedback, the transmitters receive one bit of feedback (with the alphabet $\{0, 1\}$) in every time step, giving a feedback rate of 1 bit per channel use rather than a total of 1 bit. In our framework, acknowledgment bits are sent only at times $n_0, n_1, \ldots, n_K$, and thus the transmitters must tune in only occasionally.
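The feedback savings are easy to quantify. Assuming a hypothetical schedule of candidate decoding times (the values below are made up for illustration), per-channel-use feedback costs up to $n_K$ bits per epoch, while ACK bits restricted to the candidate times cost at most $K + 1$ bits:

```python
# Hypothetical decoding-time schedule n_0 < n_1 < ... < n_K for K = 5
# (illustrative values only, not taken from the paper).
decode_times = [100, 210, 330, 460, 600, 750]

# Per-use single-bit feedback: the transmitters must listen for one bit at
# every channel use, i.e., up to n_K bits per epoch in the worst case.
per_use_bits = decode_times[-1]

# Sparse ACKs: one bit only at each of the K + 1 candidate decoding times.
sparse_bits = len(decode_times)

print(per_use_bits, sparse_bits)  # 750 vs 6
```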

In this paper, we view the random access channel as a collection of all possible MACs that might arise as a result of the transmitter activity pattern. Barring the intricacies of multiuser decoding, viewing an unknown channel as a collection of possible channels, without assigning an a priori probability to each, is known as the *compound channel* model [16]. In the context of single-transmitter compound channels, it is known that if the decoding time is fixed, the transmission rate cannot exceed the capacity of the weakest channel from the collection [16], while the dispersion may be better (smaller) [17]. With feedback and allowing a variable decoding time, one can do much better [12, 13, 14, 15].

Recently, Polyanskiy [5] argued for removing the transmitter identification task from the physical layer encoding and decoding procedures. As he pointed out, such a scenario was previously discussed by Berger [18] in the context of conflict resolution. Polyanskiy further suggested studying MACs whose conditional channel output distributions are insensitive to input permutations. For such channels, provided that all transmitters use the same codebook, the receiver can at best hope to recover the messages sent, but not the transmitter identity.

In this paper, we build a random access communication model from a family of such permutation-invariant MACs and employ identity-blind decoding at the receiver. Although not critical for the feasibility of our approach, these assumptions lead to a number of pleasing simplifications of both our scheme and its analysis. For example, the collection of MACs comprising our compound random access channel model can be parameterized by the number of active transmitters, rather than by the full transmitter activity pattern. If the maximum number of transmitters is finite, the analysis of identity-blind decoding differs little from traditional analyses that use independent realizations of a random codebook at each transmitter.

We provide a second-order analysis of the rate achieved by our multiuser scheme universally over all transmitter activity patterns, taking into account the possibility that the decoder may misdetect the current activity pattern and decode for a wrong channel. Leveraging our observation that for a symmetric MAC, the fair rate point is not a corner point of the capacity region, we are able to show that a single-threshold decoding rule attains the fair rate point. This differs significantly from traditional MAC analyses, in which simultaneous threshold rules are used. In the context of a MAC with a known number of transmitters, second-order analyses of multiple-threshold decoding rules were obtained in [19, 20, 21, 22] (finite alphabet MAC) and in [23] (Gaussian MAC). A non-asymptotic analysis of variable-length coding with single-bit feedback over a (known) Gaussian MAC was given in [24]. Other relevant recent works on the MAC include the following. To account for massive numbers of transmitters, Chen and Guo [25, 26] introduced a notion of capacity for the multiple access scenario in which the maximal number of transmitters grows with blocklength and an unknown subset of transmitters is active at a given time. They show that time sharing, which achieves conventional MAC capacity, is inadequate to achieve capacity in that regime. On the effect of limited feedback on the capacity of the MAC, Sarwate and Gastpar showed in [27] that rate-0 feedback does not increase the no-feedback capacity of the discrete memoryless MAC, whereas in compound MACs it is possible to increase the capacity with limited feedback by using a simple training phase to estimate the channel state.

In short, this paper develops a random access architecture with theoretical performance guarantees that can handle uncoordinated transmissions of a large and random number of transmitters. Our system model and the proposed communication strategy are laid out in Section II. The main result is presented in Section III. The proofs are found in Section IV.

## II Problem Setup

For any positive integers $i \le j$, let $[i:j] \triangleq \{i, i+1, \ldots, j\}$ and $[j] \triangleq [1:j]$, giving $[i:j] = \emptyset$ when $i > j$. For any sequence $x_1, x_2, \ldots$ and any ordered set $\mathcal{A} = \{i_1, \ldots, i_{|\mathcal{A}|}\}$, vector $x_{\mathcal{A}} \triangleq (x_{i_1}, \ldots, x_{i_{|\mathcal{A}|}})$. For any vectors $a$ and $b$ with the same dimension, we write $a \stackrel{\pi}{=} b$ if and only if there exists a permutation $\sigma$ such that $\sigma(a) = b$.

A *memoryless symmetric random access channel* (henceforth called simply a RAC) is a memoryless channel with 1 receiver and an unknown number of transmitters. It is described by a family of stationary, memoryless MACs

$$\left\{ P_{Y_k \mid X_1 \cdots X_k} : k \in \{0, 1, \ldots, K\} \right\} \qquad (1)$$

each indexed by a number of transmitters, $k$; the maximal number of transmitters is $K$, for some $K \le \infty$.
The $k$-transmitter MAC has input alphabet $\mathcal{X}$, output alphabet $\mathcal{Y}_k$,
and conditional distribution $P_{Y_k \mid X_1 \cdots X_k}$.
When $k$ transmitters are active, the RAC output is $Y_k$.
To capture the property that the impact of a channel input
on the channel output is independent of the transmitter from which it comes,
each channel in (1)
is assumed to be *permutation-invariant*;
that is,

$$P_{Y_k \mid X_1 \cdots X_k}(y \mid x_1, \ldots, x_k) = P_{Y_k \mid X_1 \cdots X_k}(y \mid x_{\sigma(1)}, \ldots, x_{\sigma(k)}) \qquad (2)$$

for every permutation $\sigma$ on $[k]$, every $(x_1, \ldots, x_k) \in \mathcal{X}^k$, and every $y \in \mathcal{Y}_k$.

Since, for any $s \in [k]$,
MAC-$s$ is physically identical to MAC-$k$
operated with $s$ active and $k - s$ silent transmitters,
we use $0 \in \mathcal{X}$ to represent transmitter silence and require *reducibility*:

$$P_{Y_k \mid X_1 \cdots X_k}(y \mid x_1, \ldots, x_s, 0, \ldots, 0) = P_{Y_s \mid X_1 \cdots X_s}(y \mid x_1, \ldots, x_s) \qquad (3)$$

for all $s \in [k]$, $(x_1, \ldots, x_s) \in \mathcal{X}^s$, and $y \in \mathcal{Y}_s$. An immediate consequence of this notion of reducibility is that $\mathcal{Y}_s \subseteq \mathcal{Y}_k$ for any $s \in [k]$.
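Both properties can be verified mechanically for a toy channel. The sketch below checks permutation invariance and reducibility for a noiseless binary adder MAC (output equal to the integer sum of the inputs, with 0 as the silence symbol), a simplified cousin of the adder-erasure example introduced later:

```python
from itertools import permutations, product

def p_adder(y, xs):
    """Conditional law P_{Y_k|X_1...X_k}(y | xs) of a noiseless binary adder
    MAC: the output is the integer sum of the k binary inputs."""
    return 1.0 if y == sum(xs) else 0.0

# Permutation invariance: reordering the inputs never changes the law.
for xs in product([0, 1], repeat=3):
    for y in range(4):
        assert all(p_adder(y, p) == p_adder(y, xs) for p in permutations(xs))

# Reducibility: a silent transmitter (input 0) reduces MAC-k to MAC-(k-1).
for xs in product([0, 1], repeat=2):
    for y in range(4):
        assert p_adder(y, xs + (0,)) == p_adder(y, xs)

print("permutation invariance and reducibility hold for the adder MAC")
```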

We here propose a new RAC communication strategy. In the proposed strategy, communication occurs in epochs, with each epoch beginning in the time step following the previous epoch's end. Each epoch ends with a single acknowledgment bit (ACK), which the receiver broadcasts to all transmitters as described below. At the beginning of each epoch, each transmitter independently decides whether to be active or silent in that epoch; the decision is binding for the length of the epoch, meaning that a transmitter must either actively transmit for all time steps in the epoch or remain silent for the same period. Thus while the total number of transmitters is potentially unlimited, the number of active transmitters, $k$, stays constant during the entire transmission period between two ACKs.

Each transmitter uses the epoch to describe a message from the alphabet $[M] \triangleq \{1, \ldots, M\}$; when $k$ transmitters are active, the messages $W_1, \ldots, W_k$ are independent and uniformly distributed on $[M]$. The receiver considers decoding at each time $n_0 \le n_1 \le \cdots \le n_K$, choosing to decode at time $n_k$ only if it believes at that time that the number of active transmitters is $k$. The transmitters are informed of the decoder's decision about when to stop transmitting through a single-bit acknowledgment (ACK) broadcast at each time $n_t$; the bit is "0" at every candidate decoding time before the decoding time and "1" at the decoding time, with "1" signaling the end of one epoch and the beginning of the next.

It is important to stress that in this domain,
each transmitter knows nothing about the set of active
transmitters beyond its own membership
and what it learns from the receiver's feedback,
and the receiver knows nothing about the set of active transmitters beyond what it learns
from the channel output. (We call this *agnostic* random access.)
In addition, since designing a different encoder
for each transmitter is expensive from the perspective
of both code design and code operation, as in [5],
we assume that every transmitter employs the same encoder.
(We call this *symmetrical encoding*.) Under these assumptions,
what the transmitters and receiver can learn about is quite limited.
In particular, the reducibility, permutation invariance,
and symmetrical encoding properties
together imply that the decoder can at best hope to distinguish
which messages were transmitted rather than by whom they were sent. In practice, transmitter identity could be included
in the header of each $\log_2 M$-bit message
or at some other layer of the stack;
transmitter identity is not, however, handled by the RAC code.
Instead, since the channel output statistics depend on
the dimension of the channel input
but not the identity of the active transmitters,
the receiver’s task is to decode the messages transmitted
but not the identities of their senders.
We therefore assume without loss of generality
that the active transmitters are transmitters $1, \ldots, k$,
and thus the family of $k$-transmitter MACs in (1) indeed fully describes the behavior of a RAC.

The single-bit feedback strategy described above
allows us to use *rateless codes* to deal
with the agnostic nature of random access. Specifically, prior to the
transmission, the decoder fixes the blocklengths $n_0 \le n_1 \le \cdots \le n_K$,
where $n_k$ is the decoding blocklength when the decoder decides that
the number of active transmitters is equal to $k$. As we show in Section IV below, with an
appropriately designed decoding rule, with high probability, correct
decoding is performed at time $n_k$ when $k$ transmitters are active. Naturally, the greater the
number of active transmitters, the longer it takes to decode. The following
definition formalizes such rateless codes for agnostic random access.

###### Definition 1.

An $\left( \{(n_k, \epsilon_k)\}_{k=0}^{K}, M \right)$ code for a
RAC is the (rateless) encoding function[^1]

[^1]: The maximum number of transmitters $K = \infty$ is permitted, in which case $\mathcal{X}^{n_K}$ in (4) is replaced by $\mathcal{X}^{\infty}$.

$$\mathsf{f} : [M] \to \mathcal{X}^{n_K} \qquad (4)$$

and a collection of decoding functions:

$$\mathsf{g}_k : \mathcal{Y}_k^{n_k} \to [M]^k, \qquad k \in \{0, 1, \ldots, K\} \qquad (5)$$

such that if $k$ transmitters are active, then, with probability at least $1 - \epsilon_k$, the messages are correctly decoded
at time $n_k$. That is,[^2]

[^2]: Recall that $\stackrel{\pi}{=}$ / $\stackrel{\pi}{\neq}$ denote equality/inequality up to a permutation.

$$\Pr\left[ \mathsf{g}_k\left( Y^{n_k} \right) \stackrel{\pi}{\neq} (W_1, \ldots, W_k) \right] \le \epsilon_k, \qquad (6)$$

where $W_1, \ldots, W_k$ are the transmitters' messages, independent and equiprobable on $[M]$, $X^{n_k}(W_i)$ denotes the first $n_k$ symbols of the codeword $\mathsf{f}(W_i)$, and $Y^{n_k}$ is the output of MAC-$k$ in response to the inputs $X^{n_k}(W_1), \ldots, X^{n_k}(W_k)$.

Under symmetrical encoding, each transmitter uses the same encoder, $\mathsf{f}$, to form a codeword of length $n_K$ (which might be $\infty$), which is fed into the channel symbol-by-symbol. According to Definition 1, if $k$ transmitters are active then the decoder recovers the sent messages correctly after observing the first $n_k$ channel outputs, with probability at least $1 - \epsilon_k$. The decoder does not attempt to recover transmitter identity; successful decoding means that the list of messages it outputs coincides with the list of messages sent.

The following definitions are useful for the discussion that follows. When $k$ transmitters are active, the marginal distribution $P_{Y_k}$ is determined by the input distribution $P_X$ applied independently at each transmitter. The information density and conditional information density are then defined as

$$\imath_k\left( x_{[k]}; y \right) \triangleq \log \frac{P_{Y_k \mid X_{[k]}}\left( y \mid x_{[k]} \right)}{P_{Y_k}(y)} \qquad (7)$$

$$\imath_k\left( x_{\mathcal{A}}; y \mid x_{\mathcal{A}^c} \right) \triangleq \log \frac{P_{Y_k \mid X_{[k]}}\left( y \mid x_{[k]} \right)}{P_{Y_k \mid X_{\mathcal{A}^c}}\left( y \mid x_{\mathcal{A}^c} \right)} \qquad (8)$$

for any $\mathcal{A} \subseteq [k]$; here (8) reduces to (7) when $\mathcal{A} = [k]$ and equals 0 when $\mathcal{A} = \emptyset$ or $k = 0$. The corresponding mutual informations are

$$I\left( X_{[k]}; Y_k \right) = \mathbb{E}\left[ \imath_k\left( X_{[k]}; Y_k \right) \right] \qquad (9)$$

$$I\left( X_{\mathcal{A}}; Y_k \mid X_{\mathcal{A}^c} \right) = \mathbb{E}\left[ \imath_k\left( X_{\mathcal{A}}; Y_k \mid X_{\mathcal{A}^c} \right) \right]. \qquad (10)$$

Throughout, we also denote for brevity

$$I_k \triangleq I\left( X_{[k]}; Y_k \right) \qquad (11)$$

$$V_k \triangleq \mathrm{Var}\left[ \imath_k\left( X_{[k]}; Y_k \right) \right]. \qquad (12)$$
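As a concrete instance of these definitions, consider a two-transmitter noiseless binary adder MAC with i.i.d. Bernoulli(1/2) inputs (an illustrative toy channel, not one of the paper's running examples). The sketch below enumerates the information density and computes its mean (the mutual information) and variance (the dispersion):

```python
from itertools import product
from math import log2

# Two-user noiseless binary adder MAC, X1, X2 i.i.d. Bernoulli(1/2).
p_x = {0: 0.5, 1: 0.5}
p_y = {y: sum(p_x[a] * p_x[b] for a, b in product(p_x, p_x) if a + b == y)
       for y in range(3)}                       # P_Y = (1/4, 1/2, 1/4)

def info_density(x1, x2):
    """i(x1, x2; y) = log2 P(y|x1,x2)/P_Y(y); the channel is deterministic,
    so on the support this is simply -log2 P_Y(x1 + x2)."""
    return -log2(p_y[x1 + x2])

# I_2 = E[i] and V_2 = Var[i] under the joint input distribution.
samples = [(p_x[a] * p_x[b], info_density(a, b)) for a, b in product(p_x, p_x)]
I2 = sum(p * i for p, i in samples)
V2 = sum(p * (i - I2) ** 2 for p, i in samples)
print(I2, V2)  # 1.5 bits, 0.25 bits^2
```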

To ensure the existence of codes satisfying the error constraints in Definition 1, we assume that there exists a distribution $P_X$ on $\mathcal{X}$ such that when $X_1, \ldots, X_K$ are distributed i.i.d. $P_X$, the conditions in (13)–(17) below are satisfied.

The *friendliness* assumption states that for all $k \in [K]$,

$$I\left( X_{[k-1]}; Y_{k-1} \right) \ge I\left( X_{[k-1]}; Y_k \mid X_k \right). \qquad (13)$$

Friendliness implies that a transmitter that remains silent is at least as good from the perspective of the decoder as a transmitter that reveals its transmission to the decoder. Naturally, (13) can always be satisfied with an appropriate designation of the "silence" symbol.

Next, the *interference* assumption states that
$X_k$ and $Y_k$ are conditionally dependent given $X_{[k-1]}$ for any $k \in [K]$, i.e.

$$I\left( X_k; Y_k \mid X_{[k-1]} \right) > 0. \qquad (14)$$

This interference assumption (14) eliminates a trivial RAC in which there is no interference between different transmitters.

Finally, the following *moment* assumptions enable the second-order analysis presented in Theorem 1 below:

$$V_k > 0 \qquad (15)$$

$$\mathbb{E}\left[ \left| \imath_k\left( X_{[k]}; Y_k \right) - I_k \right|^3 \right] < \infty \qquad (16)$$

$$\mathbb{E}\left[ \left| \imath_k\left( X_{\mathcal{A}}; Y_k \mid X_{\mathcal{A}^c} \right) - I\left( X_{\mathcal{A}}; Y_k \mid X_{\mathcal{A}^c} \right) \right|^3 \right] < \infty \qquad (17)$$

for all $k \in [K]$ and $\mathcal{A} \subseteq [k]$.

All discrete memoryless channels (DMCs) satisfy (16)–(17) [28, Lemma 46] as do Gaussian noise channels. Further, common channel models from the literature typically satisfy (15) as well.

For example, channels meeting (2), (3), (13), (14), (15)–(17) include the AWGN RAC,

$$Y_k = \sum_{i=1}^{k} X_i + Z, \qquad (18)$$

where each $X_i$ operates under a power constraint and $Z \sim \mathcal{N}(0, \sigma^2)$ for some $\sigma^2 > 0$, and the adder-erasure RAC,

$$Y_k = \begin{cases} \sum_{i=1}^{k} X_i & \text{with probability } 1 - \delta \\ \mathrm{e} & \text{with probability } \delta, \end{cases} \qquad (19)$$

where $\mathcal{X} = \{0, 1\}$, $\delta \in (0, 1)$.
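For the adder-erasure RAC with i.i.d. Bernoulli(1/2) inputs, the output is an erasure with probability $\delta$ and otherwise the Binomial$(k, 1/2)$ sum, so the sum-rate mutual information is $(1 - \delta)$ times the binomial entropy. The sketch below (a direct enumeration, with $\delta = 0.1$ chosen for illustration) computes it across $k$, showing the sum-rate growing while each transmitter's share shrinks:

```python
from math import comb, log2

def sum_rate(k: int, delta: float) -> float:
    """I(X_[k]; Y_k) in bits for the adder-erasure RAC with i.i.d.
    Bernoulli(1/2) inputs: the erasure flag carries no information about
    the inputs, so I_k = (1 - delta) * H(Binomial(k, 1/2))."""
    h = -sum((comb(k, s) / 2**k) * log2(comb(k, s) / 2**k) for s in range(k + 1))
    return (1 - delta) * h

delta = 0.1
rates = [sum_rate(k, delta) for k in range(1, 6)]
per_user = [r / k for k, r in enumerate(rates, start=1)]
print([round(r, 3) for r in rates])      # sum-rate grows with k ...
print([round(r, 3) for r in per_user])   # ... but each user's share shrinks
```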

We conclude this section with a series of lemmata that describe the natural orderings possessed by the collection of channels in (1). These properties are key to the feasibility of our achievability scheme, presented in the next section. They are a consequence of our assumptions in (2), (3), (13), and (14). Proofs are relegated to the Appendix.

The first lemma describes a natural property of the collection of channels in (1): the quality of the channel for each transmitter deteriorates as more transmitters are added (even though the sum capacity increases).

###### Lemma 1.

Furthermore, the following inequalities hold.

###### Lemma 2.

## III Main Result

Theorem 1 bounds the performance of a finite blocklength RAC code.
For any number of active transmitters $k$,
the code achieves a rate vector $\left( \frac{\log M}{n_k}, \ldots, \frac{\log M}{n_k} \right)$, $k \in [K]$,
with sum-rate converging as $n_k \to \infty$ to $I\left( X_{[k]}; Y_k \right)$
for some input distribution $P_X \times \cdots \times P_X$
with $P_X$ independent of $k$.
Thus for any family of MACs for which a single $P_X$ maximizes $I\left( X_{[k]}; Y_k \right)$ for all $k$,
the proposed sequence of codes converges to the symmetrical rate point
on the capacity region of the MAC with the same number of transmitters.[^3]

[^3]: It is important to note here that we are comparing the RAC achievable rate with rate-0 feedback to the MAC capacity without feedback. While rate-0 feedback does not change the capacity region of a discrete memoryless MAC [27], its impact more broadly remains an open problem.

###### Theorem 1.

(Achievability) For any RAC satisfying (2), (3), any $\epsilon_0, \ldots, \epsilon_K \in (0, 1)$, and any fixed $P_X$ satisfying (13)–(17), there exists an $\left( \{(n_k, \epsilon_k)\}_{k=0}^{K}, M \right)$ code provided

$$k \log M \le n_k I_k - \sqrt{n_k V_k}\, Q^{-1}(\epsilon_k) + O(\log n_k) \qquad (23)$$

for all $k \in \{0, 1, \ldots, K\}$, where

$$Q(x) \triangleq \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt$$

is the Gaussian complementary cumulative distribution function.

To shed light on the statement of Theorem 1, suppose that the channel is such that the same distribution $P_X$ satisfying (13)–(17) achieves the maximum of $I\left( X_{[k]}; Y_k \right)$ for all $k \in [K]$. For example, for the adder-erasure RAC in (19), Bernoulli-1/2 attains this maximum for every $k$. Thanks to Lemma 1, for large enough $n_0$ and any $\epsilon_0, \ldots, \epsilon_K$, one can pick $n_1 \le \cdots \le n_K$ so that the right side of (23) is equal to $k \log M$ for all $k \in [K]$. Therefore, somewhat counter-intuitively, Theorem 1 certifies that using rateless codes with acknowledgments, it is in some cases possible to transmit over the RAC using a transmission scheme that is completely agnostic to the transmitter activity pattern and to perform as well (in terms of both first- and second-order terms in (23)) as the optimal transmission scheme designed with complete knowledge of transmitter activity.
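The size of the second-order back-off from capacity can be illustrated numerically. The sketch below uses hypothetical values $I_k = 1.35$ bits and $V_k = 0.25$ (chosen for illustration, not derived in the paper) and implements $Q^{-1}$ by bisection, showing the fraction of the first-order term retained at several blocklengths:

```python
from math import erfc, sqrt

def Q(x: float) -> float:
    """Gaussian complementary CDF."""
    return 0.5 * erfc(x / sqrt(2.0))

def Q_inv(eps: float) -> float:
    """Inverse of Q via bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) > eps else (lo, mid)
    return (lo + hi) / 2

# Normal-approximation benchmark: k*log M ~= n*I_k - sqrt(n*V_k)*Q_inv(eps).
I_k, V_k, eps = 1.35, 0.25, 1e-3   # illustrative values
fractions = []
for n in (100, 1000, 10000):
    bits = n * I_k - sqrt(n * V_k) * Q_inv(eps)
    fractions.append(bits / (n * I_k))
    print(n, round(fractions[-1], 4))  # fraction of the first-order term
```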

Theorem 1 follows by an application of Theorem 2, which bounds the error probability of the finite-blocklength RAC code defined in Section IV. When $k$ transmitters are active, the error probability is $\epsilon_k$, which captures both errors in the estimate of $k$ and errors in the reproduction of the transmitted messages when the estimate of $k$ is correct.

###### Theorem 2.

In the operational regime of interest, the dominating term is (28), which is the probability that the true codeword set produces a low information density. The remaining terms are all negligible, as seen in the refined asymptotic analysis of the bound in Theorem 2 (see Section IV-B, below). The remaining terms bound the probability that two or more transmitters pick the same codeword (28), the probability that the decoder estimates the number of active transmitters as for some and decodes those messages correctly (28), and the probability that the decoder estimates the number of active transmitters as for some and decodes the messages from of those transmitters incorrectly and the messages from the remaining of those transmitters correctly (28)–(28). For , the expression in (28) particularizes as

(29) | ||||

(30) | ||||

(31) | ||||

(32) | ||||

(33) | ||||

(34) | ||||

(35) | ||||

## IV The RAC Code and Its Performance

The finite-blocklength RAC code used in the proofs of Theorem 1 and Theorem 2 is defined as follows.

Encoder Design: As described in Section II, an RAC code employs the same encoder $\mathsf{f}$ at every transmitter. For any $w \in [M]$, we use $x^{n_K}(w) \triangleq \mathsf{f}(w)$ to denote the encoded description of $w$, giving transmitter $i$'s channel input $X^{n_K}(W_i) = \mathsf{f}(W_i)$.

Using Shannon's random coding argument, in the analysis that follows in Section IV-A we assume that codewords are drawn i.i.d. as

$$X^{n_K}(w) \sim \prod_{t=1}^{n_K} P_X(x_t), \qquad w \in [M], \qquad (36)$$

for some $P_X$ fixed on alphabet $\mathcal{X}$.

Decoder Design: For each $k$, after observing the output $y^{n_k}$, decoder $\mathsf{g}_k$ employs a single threshold rule

$$\mathsf{g}_k\left( y^{n_k} \right) = (w_1, \ldots, w_k) \quad \text{if } \imath_k\left( x^{n_k}(w_1), \ldots, x^{n_k}(w_k); y^{n_k} \right) > \gamma_k \text{ for some } w_1 \le \cdots \le w_k \qquad (37)$$

for some constant $\gamma_k$, chosen before the transmission starts. By permutation-invariance (2) and symmetrical encoding, all permutations of the message vector give the same mutual information density. We use the ordered permutation $w_1 \le \cdots \le w_k$ specified in (37) as a representative of the equivalence class with respect to the binary relation $\stackrel{\pi}{=}$. The choice of a representative is immaterial since decoding is identity-blind.

When there is more than one ordered message tuple that satisfies the threshold condition, the decoder chooses among them uniformly at random. All such events are included in the error probability bound below.
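A toy version of this single-threshold search, for a noiseless two-user binary adder MAC with a hand-picked shared codebook (all values illustrative), might look like the sketch below; it returns the full list of ordered pairs clearing the threshold, from which the decoder would sample one uniformly:

```python
from itertools import combinations_with_replacement
from math import log2, inf

# Toy 2-user noiseless binary adder MAC over blocklength n = 4.
P_Y = {0: 0.25, 1: 0.5, 2: 0.25}      # output law under Bernoulli(1/2) inputs
codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 1, 0, 1)]   # shared by both users

def info_density(cw_pair, y):
    """i(x1^n, x2^n; y^n): -sum log2 P_Y(y_t) on the support, -inf off it."""
    if any(a + b != yt for a, b, yt in zip(*cw_pair, y)):
        return -inf                    # P(y | x) = 0: impossible output
    return -sum(log2(P_Y[yt]) for yt in y)

def threshold_decode(y, gamma):
    """Search ordered pairs w1 <= w2; keep those clearing the threshold."""
    return [w for w in combinations_with_replacement(range(len(codebook)), 2)
            if info_density((codebook[w[0]], codebook[w[1]]), y) > gamma]

# Messages 0 and 1 are sent, so y is the symbol-wise sum of their codewords;
# only the ordered pair (0, 1) clears gamma = 3.
print(threshold_decode((1, 1, 1, 1), gamma=3.0))  # [(0, 1)]
```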

The proof of Theorem 2, below, bounds the error probability of the proposed code.

### IV-A Proof of Theorem 2

In the discussion that follows, we bound the error probability of the code
defined above.
The core of the analysis relies on the independence of the codewords $\mathsf{f}(W_i)$ and $\mathsf{f}(W_j)$
from distinct transmitters $i$ and $j$.
By the given code design, this assumption is valid provided that $W_i \neq W_j$;
we therefore count events of the form $W_i = W_j$ among our error events.[^4]

[^4]: It is interesting to notice that the event $W_i = W_j$ for distinct $i$ and $j$ is not uniformly bad over all channels. For example, in a Gaussian channel, if two transmitters send the same codeword, then the power of the transmission effectively doubles. In contrast, in a channel where interference is modeled as the binary sum of a collection of binary codewords, if two transmitters send the same codeword, then the codewords cancel.

Let $P_0$ denote the probability of such a repetition; then

$$P_0 = \Pr\left[ \bigcup_{i \neq j} \{ W_i = W_j \} \right] \le \binom{k}{2} \frac{1}{M}. \qquad (38)$$
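Since the $k$ messages are drawn independently and uniformly from $[M]$ (Definition 1), the repetition probability is the classical birthday probability, and the union bound over the $\binom{k}{2}$ pairs dominates it. A quick numerical check, with illustrative values of $k$ and $M$:

```python
from math import comb, prod

def p_repeat_exact(k: int, M: int) -> float:
    """Probability that at least two of k i.i.d. uniform messages on [M]
    coincide (the birthday problem)."""
    return 1.0 - prod((M - i) / M for i in range(k))

def p_repeat_union_bound(k: int, M: int) -> float:
    """Union bound over the C(k, 2) pairs: each pair collides w.p. 1/M."""
    return comb(k, 2) / M

k, M = 5, 2**10      # illustrative values
exact, bound = p_repeat_exact(k, M), p_repeat_union_bound(k, M)
print(exact, bound)  # the bound dominates and is tight when k^2 << M
```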

(39) | |||||

(40) | |||||

(42) | |||||

(44) | |||||

The discussion that follows uses as an example instance of a message vector in which for all and, as the set of all message vectors for which for all and for all , giving . Note that we need to include only ordered vectors in in view of our identity-blind decoding rule in (37). The resulting error bound proceeds as (39)–(44), displayed at the top of the next page, where is the vector of transmitted codewords and represents the codeword for , which was not transmitted. Line (40) separates the case where distinct transmitters send the same message from the case where there is no repetition. Lines (42)–(42) enumerate the error events in the no-repetition case; these include all cases where the transmitted codeword fails to meet the threshold (42), all cases where a prefix of the transmitted codeword meets the threshold for some (42), and all cases where a codeword that is wrong in dimensions and right in dimensions meets the threshold for (42). We apply the union bound and the symmetry of the code design to represent the probability of each case by the probability of an example instance times the number of instances. Equations (44)–(44) replace decoders by the threshold rules in their definitions. The delay in applying the union bound in the final line is deliberate. Applying the following observation before applying the union bound yields a tighter bound.

(46) | |||||

Therefore

(49) | |||||

which gives the desired result. ∎

### IV-B Proof of Theorem 1

We begin by enumerating our choice of parameters:

(50) | ||||

(51) | ||||

(52) |

for every $k \in [K]$.

The definition of (50) follows the approach established for the point-to-point channel in [28]; here the Berry-Esséen constant [29, Chapter XVI.5] is finite by the moment assumptions (15) and (16), and a remaining constant is to be chosen later in (70). The constants used in the error probability bound (28) are set in (51) to ensure when (see Lemma 2) and when . The blocklengths in (
