# Communicating over the Torn-Paper Channel

We consider the problem of communicating over a channel that randomly "tears" the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length n and pieces of length Geometric(p_n), we characterize the capacity as C = e^-α, where α = lim_n→∞ p_n log n. Our results show that the case of Geometric(p_n)-length fragments and the case of deterministic length-(1/p_n) fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is due to the fact that, in the random fragments case, large fragments are sometimes observed, which boosts the capacity.


## I Introduction

Consider the problem of transmitting a message by writing it on a piece of paper, which will be torn into small pieces of random sizes and randomly shuffled. This coding problem is illustrated in Figure 1. We refer to it as torn-paper coding, in allusion to the classic dirty-paper coding problem [dirtypaper].

This problem is mainly motivated by macromolecule-based (and in particular DNA-based) data storage, which has recently received significant attention due to several proof-of-concept DNA storage systems [church_next-generation_2012, goldman_towards_2013, grass_robust_2015, bornholt_dna-based_2016, erlich_dna_2016, organick_scaling_2017]. In these systems, data is written onto synthesized DNA molecules, which are then stored in solution. During synthesis and storage, molecules in solution are subject to random breaks and, due to the unordered nature of macromolecule-based storage, the resulting pieces are shuffled [heckel_characterization_2018]. Furthermore, the data is read via high-throughput sequencing technologies, which is typically preceded by physical fragmentation of the DNA with techniques like sonication [pomraning2012library]. In addition, the torn-paper channel is related to the DNA shotgun sequencing channel, studied in [MotahariDNA, BBT, gabrys2018unique], but in the context of variable-length reads, which are obtained in nanopore sequencing technologies [laver2015assessing, mao2018models].

We consider the scenario where the channel input is a length-n binary string, which is then torn into pieces of lengths N_1, N_2, …, each of which has a Geometric(p_n) distribution. The channel output is the unordered set of these pieces. As we will see, even this noise-free version of the torn-paper coding problem is non-trivial.

To obtain some intuition, notice that E[N_i] = 1/p_n, and hence it is reasonable to compare our problem to the case where the tearing points are evenly separated, and N_i = 1/p_n with probability 1. In this case, the channel becomes a shuffling channel, similar to the one considered in [noisyshuffling], but with no noise. Coding for the case of deterministic fragments of length 1/p_n is easy: since the tearing points are known, we can prefix each fragment with a unique identifier, which allows the decoder to correctly order the fragments. From the results in [noisyshuffling], such an index-based coding scheme is capacity-optimal, and any achievable rate in this case must satisfy, for large n,

 R < (1 − p_n log n)^+. (1)

If we let α = lim_{n→∞} p_n log n, the capacity for this case becomes (1 − α)^+.

It is not clear a priori whether the capacity of the torn-paper channel should be higher or lower than (1 − α)^+. The fact that the tearing points are not known to the encoder makes it challenging to place a unique identifier in each fragment, suggesting that the torn-paper channel is “harder” and should have a lower capacity. The main result of this paper contradicts this intuition and shows that the capacity of the torn-paper channel with Geometric(p_n)-length fragments is higher than (1 − α)^+. More precisely, we show that the capacity of the torn-paper channel is C = e^{−α}. Intuitively, this boost in capacity comes from the tail of the geometric distribution, which guarantees that a fraction of the fragments will be significantly larger than the mean 1/p_n. This allows the capacity to be positive even for α ≥ 1, in which case the capacity of the deterministic-tearing case in (1) becomes 0.
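The two capacity expressions can be compared directly. A minimal numerical sketch in plain Python (the function names are ours, not from the paper):

```python
import math

def capacity_random_tearing(alpha):
    """Capacity e^{-alpha} with Geometric(p_n)-length fragments."""
    return math.exp(-alpha)

def capacity_deterministic_tearing(alpha):
    """Capacity (1 - alpha)^+ with deterministic length-(1/p_n) fragments."""
    return max(1.0 - alpha, 0.0)

for alpha in [0.25, 0.5, 1.0, 2.0]:
    print(alpha, capacity_random_tearing(alpha), capacity_deterministic_tearing(alpha))
```

Since e^{−α} > 1 − α for all α > 0, the random-tearing capacity is strictly larger, and it stays positive for α ≥ 1, where the deterministic-tearing capacity is 0.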

## II Problem Setting

We consider the problem of coding for the torn-paper channel, illustrated in Figure 1. The transmitter encodes a message W into a length-n binary codeword X^n. The channel output is a set of binary strings

 Y = {→Y_1, →Y_2, …, →Y_K}. (2)

The process by which Y is obtained is described next.

1. The channel tears the input sequence X^n into K segments of Geometric(p_n) length, for a tearing probability p_n. More specifically, let N_1, N_2, … be i.i.d. Geometric(p_n) random variables, and let K be the smallest index such that N_1 + ⋯ + N_K ≥ n. Notice that K is also a random variable.

The channel tears X^n into segments →X_1, …, →X_K, where

 →X_i = [X_{1+∑_{j=1}^{i−1} N_j}, …, X_{∑_{j=1}^{i} N_j}],

for i = 1, …, K−1, and

 →X_K = [X_{1+∑_{j=1}^{K−1} N_j}, …, X_n].

We note that this process is equivalent to independently tearing the message in between consecutive bits with probability p_n. More precisely, let Z_i, for i = 1, …, n−1, be binary indicators of whether there is a cut between X_i and X_{i+1}. Then, letting the Z_i's be i.i.d. Bernoulli(p_n) random variables results in independent fragments of Geometric(p_n) length. Also, K = 1 + ∑_{i=1}^{n−1} Z_i, implying that E[K] = 1 + (n−1)p_n ≈ np_n.

2. Given K, let σ be a uniformly distributed random permutation on {1, …, K}. The output segments are then obtained by setting, for i = 1, …, K, →Y_i = →X_{σ(i)}.

We note that there are no bit-level errors, e.g., bit flips, in this process. We also point out that we allow the tearing probability to be a function of the block length n, which is why we include the subscript in p_n.
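The tearing and shuffling steps above can be simulated directly through the equivalent Bernoulli(p_n) cut indicators. A minimal sketch (the function name is ours):

```python
import random

def torn_paper_channel(x, p, rng=random):
    """Tear binary string x by cutting between consecutive bits with
    probability p (i.i.d. Bernoulli cuts), then shuffle the fragments."""
    fragments, start = [], 0
    for i in range(1, len(x)):
        if rng.random() < p:        # cut between x[i-1] and x[i]
            fragments.append(x[start:i])
            start = i
    fragments.append(x[start:])     # final fragment runs to position n
    rng.shuffle(fragments)          # the output is an unordered collection
    return fragments

rng = random.Random(0)
x = ''.join(rng.choice('01') for _ in range(100000))
pieces = torn_paper_channel(x, p=0.01, rng=rng)
print(len(pieces))                  # K concentrates around 1 + (n-1)*p
```

Each fragment produced this way has a Geometric(p) length (truncated at the end of the string), matching the description above.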

A code C with rate R for the torn-paper channel is a set of 2^{nR} binary codewords, each of length n, together with a decoding procedure that maps a set of variable-length binary strings to an index Ŵ ∈ {1, …, 2^{nR}}. The message W is assumed to be chosen uniformly at random from {1, …, 2^{nR}}, and the error probability of a code is defined accordingly as Pr(Ŵ ≠ W). A rate R is said to be achievable if there exists a sequence of rate-R codes C_n, with blocklength n, whose error probability tends to 0 as n → ∞. The capacity C is defined as the supremum over all achievable rates. Notice that C should be a function of the sequence of tearing probabilities p_n.

Notation: Throughout the paper, log represents the logarithm base 2, while ln represents the natural logarithm. For functions f and g, we write f(n) = o(g(n)) if f(n)/g(n) → 0 as n → ∞. For an event A, we let 1_A or 1{A} be the binary indicator of A.

## III Main Results

If the encoder had access to the tearing locations ahead of time, a natural coding scheme would involve placing unique indices on every fragment, and using the remaining bits for encoding a message. In particular, if the message block broke evenly into np_n pieces of length 1/p_n, results from [noisyshuffling] imply that placing a unique index of length log(np_n) in each fragment is capacity optimal. In this case, the capacity is (1 − α)^+, where α = lim_{n→∞} p_n log n (assuming the limit exists). If α ≥ 1, no positive rate is achievable.

However, in our setting, the fragment lengths are random and the same index-based approach cannot be used. Because we do not know the tearing points, we cannot place indices at the beginning of each fragment. Furthermore, while the expected fragment length 1/p_n may be long, some fragments may be shorter than log(np_n), and a unique index could not be placed in them even if we knew the tearing points. Our main result shows that, surprisingly, the random tearing locations and fragment lengths in fact increase the channel capacity.

###### Theorem 1.

The capacity of the torn-paper channel is

 C = e^{−α},

where α = lim_{n→∞} p_n log n.

In Sections IV and V we prove Theorem 1. To prove the converse to this result, we exploit the fact that, for large n, N_i/log n has an approximately exponential distribution. This, together with several concentration results, allows us to partition the set of fragments into multiple bins of fragments with roughly the same size and view the torn-paper coding, in essence, as a set of parallel channels with fixed-size fragments. Our achievability is based on random coding arguments and does not provide much insight into efficient coding schemes. This opens up interesting avenues for future research.

## IV Converse

In order to prove the converse, we first partition the input and output strings based on length. This allows us to view the torn-paper channel as a set of parallel channels, each of which involves fragments of roughly the same size. More precisely, for an integer parameter L, we will let

 X_k = {→X_i : (k−1)/L · log n ≤ N_i < k/L · log n}, (3)

for k = 1, 2, …, with Y_k defined analogously from the output fragments, and we will think of the transformation from X_k to Y_k as a separate channel. Notice that the kth channel is intuitively similar to the shuffling channel with equal-length pieces considered in [DNAStorageISIT].

We will use the fact that the number of fragments in Y_k concentrates as n → ∞. More precisely, we let

 q_{k,n} = Pr((k−1)/L ≤ N_1/log n < k/L), (4)

and we have the following lemma, proved in Section VI.

###### Lemma 1.

For any ϵ > 0 and n large enough,

 Pr( ||Y_k| − np_n q_{k,n}| > ϵnp_n ) ≤ 4e^{−np_n^2 ϵ^2/4}. (5)

Notice that, since np_n^2 → ∞, the right-hand side of (5) vanishes as n → ∞. Moreover, asymptotically, N_1/log n approaches an Exponential(α) distribution. This known fact is stated as the following lemma, which we also prove in Section VI.

###### Lemma 2.

If N^{(n)} is a Geometric(p_n) random variable and lim_{n→∞} p_n log n = α, then

 lim_{n→∞} Pr(N^{(n)} ≥ β log n) = e^{−αβ}. (6)
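Lemma 2 can be illustrated numerically: with p_n = α/log n, the geometric tail (1 − p_n)^{β log n} approaches e^{−αβ} as n grows. A quick sketch under the paper's convention log = log₂ (the helper name is ours):

```python
import math

def geometric_tail(n, alpha, beta):
    """Pr(N^(n) >= beta*log n) = (1 - p_n)^(beta*log n), with p_n = alpha/log n."""
    logn = math.log2(n)          # the paper's log is base 2
    p = alpha / logn
    return (1.0 - p) ** (beta * logn)

alpha, beta = 1.0, 2.0
limit = math.exp(-alpha * beta)
for n in [10**3, 10**6, 10**12]:
    print(n, geometric_tail(n, alpha, beta), limit)
```

The gap to the limit e^{−αβ} shrinks as n increases, consistent with the lemma.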

Lemma 1 implies that E[|Y_k|] = np_n q_{k,n} + o(np_n), and

 lim_{n→∞} E[|Y_k|]/(np_n) = lim_{n→∞} (np_n q_{k,n} + o(np_n))/(np_n) = lim_{n→∞} Pr((k−1)/L ≤ N_1/log n < k/L) = e^{−α(k−1)/L} − e^{−αk/L}, (7)

where the last equality follows from Lemma 2. Next, we define the event E_{k,n} = {||Y_k| − np_n q_{k,n}| > ϵ_n np_n}, where ϵ_n → 0 slowly enough that np_n^2 ϵ_n^2 → ∞, which guarantees that, as n → ∞, Pr(E_{k,n}) → 0 from Lemma 1. Then,

 H(Y_k) ≤ H(Y_k, 1_{E_{k,n}}) ≤ 1 + H(Y_k | 1_{E_{k,n}}) ≤ 1 + 2n Pr(E_{k,n}) + H(Y_k | ¯E_{k,n}), (8)

where we loosely upper bound H(Y_k | E_{k,n}) with 2n, since Y_k can be fully described by the binary string X^n and the n−1 tearing point indicators Z_1, …, Z_{n−1}.

In order to bound H(Y_k | ¯E_{k,n}), i.e., the entropy of Y_k given that its size is close to np_n q_{k,n}, we first note that the number of possible distinct sequences in Y_k is at most

 ∑_{i=(k−1)/L·log n}^{k/L·log n} 2^i < 2 · 2^{k/L·log n} = 2n^{k/L}.

Moreover, given ¯E_{k,n},

 |Y_k| ≤ np_n q_{k,n} + ϵ_n np_n = np_n [ϵ_n + Pr((k−1)/L ≤ N_1/log n < k/L)] ≜ M, (9)

and the set Y_k can be seen as a histogram over the at most 2n^{k/L} possible distinct strings, with a total count of at most M. Notice that we can view the last element of the histogram as containing “excess counts” if |Y_k| < M. Hence, from Lemma 1 in [DNAStorageISIT],

 H(Y_k | ¯E_{k,n}) ≤ log binom(2n^{k/L} + M − 1, M) ≤ M log( e(2n^{k/L} + M − 1)/M ) = M [log(2n^{k/L} + M − 1) + log e − log M] = M [max(k/L · log n, log M) − log M + o(log n)] = M [(k/L · log n − log M)^+ + o(log n)] = M log n [(k/L − log M/log n)^+ + o(1)]. (10)

From (9), we have log M/log n → 1 as n → ∞. Combining (8) and (10), dividing by n, and letting n → ∞ yields

 lim_{n→∞} H(Y_k)/n ≤ lim_{n→∞} (M log n/n)(k/L − 1)^+ = lim_{n→∞} p_n log n (q_{k,n} + ϵ_n)(k/L − 1)^+ = α(e^{−α(k−1)/L} − e^{−αk/L})(k/L − 1)^+. (11)

In order to bound an achievable rate R, we use Fano's inequality to obtain

 nR ≤ I(X^n; Y) + o(n) ≤ H(Y) + o(n), (12)

and we conclude that any achievable rate must satisfy R ≤ lim_{n→∞} H(Y)/n. In order to connect (12) and (11), we state the following lemma, which allows us to move the limit inside the summation. The proof is in Section VI.

###### Lemma 3.

If Y_k is defined as in (3) for k = 1, 2, …, then

 lim_{n→∞} H(Y)/n ≤ ∑_{k=1}^∞ lim_{n→∞} H(Y_k)/n.

Using this lemma and (11), and noting that the terms with k ≤ L vanish, we can upper bound any achievable rate as

 R ≤ lim_{n→∞} H(Y)/n ≤ ∑_{k=1}^∞ lim_{n→∞} H(Y_k)/n
 = ∑_{k=L+1}^∞ α(e^{−α(k−1)/L} − e^{−αk/L})(k/L − 1)
 = (α/L) ∑_{k=L+1}^∞ k(e^{−α(k−1)/L} − e^{−αk/L}) − α ∑_{k=L+1}^∞ (e^{−α(k−1)/L} − e^{−αk/L})
 = (α/L) ∑_{k=L+1}^∞ k(e^{−α(k−1)/L} − e^{−αk/L}) − αe^{−α}, (13)

where the last equality is due to a telescoping sum. The remaining summation can be computed as

 ∑_{k=L+1}^∞ k(e^{−α(k−1)/L} − e^{−αk/L}) = (L+1)e^{−α} + ∑_{k=L+2}^∞ e^{−α(k−1)/L}
 = Le^{−α} + e^{−α} ∑_{k=0}^∞ e^{−αk/L} = Le^{−α} + e^{−α}/(1 − e^{−α/L}).

We conclude that any achievable rate must satisfy

 R < (α/L)(Le^{−α} + e^{−α}/(1 − e^{−α/L})) − αe^{−α} = αe^{−α}/(L(1 − e^{−α/L})),

for any positive integer L. Since

 lim_{L→∞} L(1 − e^{−α/L}) = α,

we obtain the outer bound R ≤ e^{−α}.
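Both the closed form for the series and the final limit can be sanity-checked numerically; a quick sketch where truncation at a large index stands in for the infinite sum (the helper names are ours):

```python
import math

def series(alpha, L, terms=100000):
    """Truncated sum_{k=L+1}^inf k*(e^{-a(k-1)/L} - e^{-ak/L})."""
    return sum(k * (math.exp(-alpha * (k - 1) / L) - math.exp(-alpha * k / L))
               for k in range(L + 1, L + 1 + terms))

def closed_form(alpha, L):
    """L*e^{-a} + e^{-a}/(1 - e^{-a/L}), as derived above."""
    return L * math.exp(-alpha) + math.exp(-alpha) / (1 - math.exp(-alpha / L))

def outer_bound(alpha, L):
    """alpha*e^{-a} / (L*(1 - e^{-a/L})); approaches e^{-a} as L grows."""
    return alpha * math.exp(-alpha) / (L * (1 - math.exp(-alpha / L)))

alpha = 1.0
print(series(alpha, 5), closed_form(alpha, 5))
print(outer_bound(alpha, 1000), math.exp(-alpha))
```

For large L, the outer bound is already very close to e^{−α}, consistent with the limit above.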

## V Achievability via Random Coding

A random coding argument can be used to show that any rate R < e^{−α} is achievable. Consider generating a codebook with 2^{nR} codewords, by independently picking each symbol as Bernoulli(1/2). Let C = {x_1, …, x_{2^{nR}}}, where x_j is the random codeword associated with message j. Notice that optimal decoding can be obtained by simply finding an index j such that x_j corresponds to a concatenation of the strings in Y. If more than one such codeword exists, an error is declared.

Suppose message W = 1 is chosen and Y is the random set of output strings. To bound the error probability we consider a suboptimal decoder that throws out all fragments shorter than γ log n, for some γ > 0 to be determined, obtaining the set Y_γ = {→Y_i : N_i ≥ γ log n}, and simply tries to find a codeword that contains all strings in Y_γ as non-overlapping substrings. If we let E be the error event averaged over all codebook choices, we have

 Pr(E) = Pr(E | W = 1) = Pr(some x_j, j ≠ 1, contains all strings in Y_γ | W = 1).

Using a similar approach to the one used in Section IV, it can be shown that E[|Y_γ|] = np_n Pr(N_1 ≥ γ log n) + o(np_n). From Lemma 2, we thus have

 lim_{n→∞} E[|Y_γ|]/(np_n) = lim_{n→∞} Pr(N_1 ≥ γ log n) = e^{−αγ}. (14)

If we let 1_i be the binary indicator of the event {N_i ≥ γ log n}, then |Y_γ| = ∑_{i=1}^K 1_i. In Section VI, we prove the following concentration result.

###### Lemma 4.

For any ϵ > 0, as n → ∞,

 Pr( ||Y_γ| − e^{−αγ} np_n| > ϵnp_n ) → 0. (15)

In addition to characterizing |Y_γ| asymptotically, we will also be interested in the total length of the sequences in Y_γ. Intuitively, this determines how well the fragments in Y_γ cover their codeword of origin.

###### Definition 1.

The coverage of Y_γ is defined as

 c_γ = (1/n) ∑_{i=1}^K N_i 1{N_i ≥ γ log n}. (16)

Notice that c_γ ≤ 1 with probability 1.
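The coverage just defined can be previewed by simulation and compared with its limit (αγ + 1)e^{−αγ} established below. A rough sketch with our own sampling helper (convergence in n is slow, so only approximate agreement should be expected):

```python
import math, random

def sample_coverage(n, alpha, gamma, rng):
    """One realization of the coverage c_gamma: the fraction of the n input
    bits lying in fragments of length >= gamma*log n."""
    logn = math.log2(n)
    p = alpha / logn
    covered, pos = 0, 0
    while pos < n:
        # inverse-CDF sample of a Geometric(p) length, truncated at the end
        u = 1.0 - rng.random()
        length = max(1, int(math.ceil(math.log(u) / math.log(1.0 - p))))
        length = min(length, n - pos)
        if length >= gamma * logn:
            covered += length
        pos += length
    return covered / n

rng = random.Random(1)
alpha, gamma, n = 1.0, 1.0, 1 << 20
estimate = sum(sample_coverage(n, alpha, gamma, rng) for _ in range(20)) / 20
print(estimate, (alpha * gamma + 1) * math.exp(-alpha * gamma))
```

With α = γ = 1, the limit is 2/e ≈ 0.736, i.e., roughly three quarters of the transmitted bits survive the length-γ log n filter.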

In order to characterize c_γ asymptotically, we will again resort to the exponential approximation to the geometric distribution, through the following lemma.

###### Lemma 5.

If N^{(n)} is a Geometric(p_n) random variable and lim_{n→∞} p_n log n = α, then, for any γ > 0,

 lim_{n→∞} E[N^{(n)} 1{N^{(n)} ≥ γ log n}]/log n = E[Ñ 1{Ñ ≥ γ}] = (γ + 1/α) e^{−αγ}, (17)

where Ñ is an Exponential(α) random variable.
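The closed form in (17) agrees with direct numerical integration of the Exponential(α) density; a quick check (the helper names are ours):

```python
import math

def truncated_mean_numeric(alpha, gamma, steps=200000, tmax=60.0):
    """Approximate E[N*1{N >= gamma}] for N ~ Exponential(alpha) by the
    trapezoid rule on the integral of t*alpha*e^{-alpha*t} over [gamma, tmax]."""
    dt = (tmax - gamma) / steps
    total = 0.0
    for i in range(steps + 1):
        t = gamma + i * dt
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * t * alpha * math.exp(-alpha * t)
    return total * dt

def truncated_mean_closed(alpha, gamma):
    """Closed form (gamma + 1/alpha) * e^{-alpha*gamma}, as in (17)."""
    return (gamma + 1.0 / alpha) * math.exp(-alpha * gamma)

print(truncated_mean_numeric(1.0, 1.0), truncated_mean_closed(1.0, 1.0))
```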

Using Lemma 5, we can characterize the asymptotic value of E[c_γ] and show that c_γ concentrates around this value. More precisely, we show the following lemma in Section VI.

###### Lemma 6.

For any ϵ > 0, as n → ∞,

 Pr( |c_γ − (αγ + 1)e^{−αγ}| > ϵ ) → 0. (18)

In particular, Lemma 6 implies that

 lim_{n→∞} E[c_γ] = (αγ + 1)e^{−αγ}, (19)

and that c_γ cannot deviate much from this value with high probability. If we let B_1 = (1 + ϵ)e^{−αγ} np_n and B_2 = (1 − ϵ)(αγ + 1)e^{−αγ}, and we define the event

 B = {|Y_γ| > B_1} ∪ {c_γ < B_2}, (20)

then (15) and (18) imply that Pr(B) → 0 as n → ∞. Since B is independent of the codewords x_j, j ≠ 1, we can upper bound the probability of error as

 Pr(E) ≤ Pr(some x_j, j ≠ 1, contains all strings in Y_γ | W = 1)
 ≤ Pr(some x_j, j ≠ 1, contains all strings in Y_γ | ¯B, W = 1) + Pr(B)
 (i)≤ |C| n^{B_1} 2^{−nB_2} + Pr(B)
 ≤ 2^{nR} 2^{B_1 log n} 2^{−nB_2} + o(1)
 = 2^{nR} 2^{(1+ϵ)e^{−αγ} np_n log n − n(1−ϵ)(αγ+1)e^{−αγ}} + o(1)
 = 2^{−n((1−ϵ)(αγ+1)e^{−αγ} − (1+ϵ)e^{−αγ} p_n log n − R)} + o(1).

Inequality (i) follows from the union bound and from the fact that there are at most n^{B_1} ways to align the strings in Y_γ to a codeword in a non-overlapping way and, given this alignment, at least nB_2 bits in x_j must be specified. Since p_n log n → α as n → ∞, we see that we can achieve a rate R as long as

 R < (1 − ϵ)(1 + αγ)e^{−αγ} − (1 + ϵ)αe^{−αγ},

for some ϵ > 0 and γ > 0. Letting ϵ → 0 yields

 R < (1 + αγ − α)e^{−αγ}

for some γ > 0. The right-hand side is maximized by setting γ = 1, which implies that we can achieve any rate R < e^{−α}.
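The optimization over γ can be verified numerically: the achievable rate (1 + αγ − α)e^{−αγ} peaks at γ = 1, where it equals e^{−α}. A quick grid-search sketch (function names are ours):

```python
import math

def achievable_rate(alpha, gamma):
    """Achievable rate (1 + alpha*gamma - alpha) * e^{-alpha*gamma}."""
    return (1.0 + alpha * gamma - alpha) * math.exp(-alpha * gamma)

alpha = 0.7
best_gamma = max((g / 1000.0 for g in range(1, 3000)),
                 key=lambda g: achievable_rate(alpha, g))
print(best_gamma, achievable_rate(alpha, best_gamma), math.exp(-alpha))
```

This matches the calculus argument: differentiating in γ gives a critical point at γ = 1 for any α > 0.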

## VI Proofs of Lemmas

###### Lemma 1.

The number of fragments in Y_k satisfies

 Pr( ||Y_k| − np_n q_{k,n}| > ϵnp_n ) ≤ 4e^{−np_n^2 ϵ^2/4},

for any ϵ > 0 and n large enough.

###### Proof of Lemma 1.

First notice that K = 1 + ∑_{i=2}^n T_i, where the T_i are i.i.d. Bernoulli(p_n) random variables. Using Hoeffding's inequality,

 Pr( |K − np_n| > δnp_n ) = Pr( |K − E[K] + (1 − p_n)| > δnp_n )
 ≤ Pr( |K − E[K]| > δnp_n − (1 − p_n) )
 = Pr( |∑_{i=2}^n (T_i − p_n)| > (n−1) · (δnp_n − (1 − p_n))/(n−1) )
 ≤ 2e^{−2(n−1)((δnp_n − (1−p_n))/(n−1))^2} ≤ 2e^{−2n((δnp_n − (1−p_n))/n)^2}
 ≤ 2e^{−np_n^2 δ^2}, (21)

where the last inequality holds for n large enough.

Now suppose that the sequence of independent Geometric(p_n) random variables N_1, N_2, … is an infinite sequence (and does not stop at K). Let 1_i be the binary indicator of the event {(k−1)/L ≤ N_i/log n < k/L}, and let ~Z = ∑_{i=1}^{np_n} 1_i. Intuitively, |Y_k| and ~Z should be close. In particular, ||Y_k| − ~Z| ≤ |K − np_n|. Moreover, E[~Z] = np_n q_{k,n}. If |~Z − np_n q_{k,n}| ≤ ϵnp_n/2 and |K − np_n| ≤ ϵnp_n/2, by the triangle inequality, ||Y_k| − np_n q_{k,n}| ≤ ϵnp_n. Therefore,

 Pr( ||Y_k| − np_n q_{k,n}| > ϵnp_n ) ≤ Pr( |~Z − np_n q_{k,n}| > ϵnp_n/2 ) + Pr( |K − np_n| > ϵnp_n/2 )
 ≤ 2e^{−np_n ϵ^2/2} + 2e^{−np_n^2 ϵ^2/4} ≤ 4e^{−np_n^2 ϵ^2/4},

where we used Hoeffding’s inequality and (21). ∎

###### Lemma 2.

If N^{(n)} is a Geometric(p_n) random variable and lim_{n→∞} p_n log n = α, then

 lim_{n→∞} Pr(N^{(n)} ≥ β log n) = e^{−αβ}.
###### Proof of Lemma 2.

By definition,

 Pr(N^{(n)} ≥ β log n) = (1 − p_n)^{β log n} = (1 − 1/E[N^{(n)}])^{E[N^{(n)}] · (β log n/E[N^{(n)}])}.

As n → ∞, E[N^{(n)}] = 1/p_n → ∞ and β log n/E[N^{(n)}] = β p_n log n → αβ. Hence, (1 − 1/E[N^{(n)}])^{E[N^{(n)}]} → e^{−1}, implying the lemma. ∎

###### Lemma 3.

If Y_k is defined as in (3) for k = 1, 2, …, then

 lim_{n→∞} H(Y)/n ≤ ∑_{k=1}^∞ lim_{n→∞} H(Y_k)/n.
###### Proof of Lemma 3.

For a fixed integer A, we define Y_{≥A} = {→Y_i : N_i ≥ (A/L) log n}, and we have

 lim_{n→∞} H(Y)/n ≤ lim_{n→∞} ∑_{k=1}^A H(Y_k)/n + lim_{n→∞} H(Y_{≥A})/n = ∑_{k=1}^A lim_{n→∞} H(Y_k)/n + lim_{n→∞} H(Y_{≥A})/n. (22)

If we define c_{A/L} as in Definition 1, from Lemma 6, we have

 lim_{n→∞} E[c_{A/L}] = (αA/L + 1)e^{−αA/L}.

Moreover, from Lemma 6, the event

 A = {c_{A/L} > (αA/L + 1)e^{−αA/L} + δ}

has vanishing probability as n → ∞. This allows us to write

 H(Y_{≥A}) ≤ H(Y_{≥A} | ¯A) + H(Y_{≥A} | A) Pr(A) + 1
 ≤ H(Y_{≥A} | ¯A) + 2n Pr(A) + 1
 ≤ 2n[(αA/L + 1)e^{−αA/L} + δ] + o(n).

Hence, from (22), we have that, for every A and δ > 0,

 lim_{n→∞} H(Y)/n ≤ ∑_{k=1}^A lim_{n→∞} H(Y_k)/n + 2(αA/L + 1)e^{−αA/L} + 2δ.

Notice that (αA/L + 1)e^{−αA/L} → 0 as A → ∞. Therefore, we can let A → ∞ and δ → 0, and we conclude that

 lim_{n→∞} H(Y)/n ≤ ∑_{k=1}^∞ lim_{n→∞} H(Y_k)/n. ∎

###### Lemma 4.

The number of fragments in Y_γ satisfies

 Pr( ||Y_γ| − e^{−αγ} np_n| > ϵnp_n ) ≤ 4e^{−np_n^2 ϵ^2/9},

for any ϵ > 0 and n large enough.

###### Proof of Lemma 4.

Let 1_i = 1{N_i ≥ γ log n}, for i = 1, 2, …. Then |Y_γ| = ∑_{i=1}^K 1_i. Since K is random (and not independent of the 1_i's), we need to follow similar steps to those in the proof of Lemma 1.

Let us assume that the sequence of independent Geometric(p_n) random variables N_1, N_2, … is an infinite sequence, and let ~Z = ∑_{i=1}^{np_n} 1_i. Notice that ~Z is a sum of np_n i.i.d. Bernoulli random variables with

 E[~Z] = np_n Pr(N_1 ≥ γ log n), (23)

and the standard Hoeffding’s inequality can be applied. Moreover, from Lemma 2,

 limn→∞E[~Z]/(npn)=e−αγ

and, for any , for large enough. If we set and, for large enough, we have . Moreover, if and , by the triangle inequality (applied twice), . Hence,

 Pr( ||Y_γ| − e^{−αγ} np_n| > ϵnp_n ) ≤ Pr( |~Z − E[~Z]| > ϵnp_n/3 ) + Pr( ||Y_γ| − ~Z| > ϵnp_n/3 )
 ≤ Pr( |~Z − E[~Z]| > ϵnp_n/3 ) + Pr( |K − np_n| > ϵnp_n/3 )
 ≤ 2e^{−2np_n ϵ^2/9} + 2e^{−np_n^2 ϵ^2/9} ≤ 4e^{−np_n^2 ϵ^2/9},

where we used Hoeffding’s inequality and (21). ∎

###### Lemma 5.

If N^{(n)} is a Geometric(p_n) random variable and lim_{n→∞} p_n log n = α, then, for any γ > 0,

 lim_{n→∞} E[N^{(n)} 1{N^{(n)} ≥ γ log n}]/log n = E[Ñ 1{Ñ ≥ γ}] = (γ + 1/α) e^{−αγ},

where Ñ is an Exponential(α) random variable.