1 Introduction
The deletion channel is one of the most fundamental channels, and is still not well understood. Most of you would know what a deletion channel/error looks like, but to give an example, this is what a single deletion looks like:
Here the decoder receives the message and needs to recover the original one. This corresponds to a single deletion, either in position 3 or 4; we cannot say at which position the deletion occurred.
In relation to our favourite erasure channel, where such a single error might look like:
Here, we know at what positions the erasures happened. The deletion channel is in fact a strictly worse channel: we can convert an erasure channel output into a deletion channel output by simply removing all the erasure symbols from the output.
We will also see that it has connections to our second favourite channel, the Binary Symmetric Channel (BSC).
1.1 Capacity of Deletion Channel
We define a binary deletion channel $\mathrm{BDC}_p$ with deletion rate $p$, where each symbol in the input can get deleted independently with probability $p$. This definition has similarities with the BEC and BSC, but that's where the similarity ends. Surprisingly, unlike the BEC and BSC, we know much less about the BDC.
We still do not know the capacity of the BDC. One reason is that the BDC is not really a discrete memoryless channel (DMC). For a DMC, you should be able to write:
$$P(y^n \mid x^n) = \prod_{i=1}^{n} P(y_i \mid x_i).$$
We cannot write the same for deletion channels; for one, the output does not have length $n$, and its length is in fact a random variable. Shannon theory gives us a nice characterization of the capacity of DMCs:
$$C = \max_{P_X} I(X; Y). \tag{1}$$
But, without this nice expression, finding the capacity is a much more difficult task.

We can of course get some bounds on the deletion channel capacity. For example, we know that the binary erasure channel is strictly better than the BDC. Thus:
$$C_{\mathrm{BDC}}(p) \le C_{\mathrm{BEC}}(p) = 1 - p. \tag{2}$$
Determining the capacity has been an open problem for quite some time now, but there has been some recent progress, which I will briefly talk about.
Most of the results are a part of the survey by Mitzenmacher [10]. Kalai, Mitzenmacher, and Sudan [7] showed that as $p \to 0$ the capacity is almost equal to that of the BSC. The best capacity lower bound I am aware of is due to Drinea and Mitzenmacher [5]: $C_{\mathrm{BDC}}(p) \ge (1-p)/9$ for all $p$. Recently, improved capacity upper bounds were obtained by Cheraghchi [4], which state that:
$$C_{\mathrm{BDC}}(p) \le (1 - p)\log_2 \varphi \tag{3}$$
for $p \ge 1/2$, where $\varphi = (1+\sqrt{5})/2$ is the golden ratio. For those of you who are interested, they essentially find capacity bounds for a channel known as the Poisson repeat channel (where the number of times each symbol is repeated is a Poisson random variable), which are then ported over to the deletion channel as a special case.
This is mainly to give a sense of how difficult deletion channel analysis is, and how little we really know about it. Typically, not knowing the capacity directly translates into not knowing good codes. But surprisingly, we do know some nice code constructions in specific cases, which we will discuss next.
The deletion channel is also related to the insertion channel, and to the indel channel, where both insertions and deletions happen, as well as to general edit-distance channels: indels + substitutions/reversals. A better understanding of the deletion channel is useful not just for communication, but also for the problem of denoising, which is quite common (say, when we type things and miss a character).
2 Adversarial deletion error
We spoke about "Shannon-type" random deletion errors and capacity. But for the majority of the talk, we will mainly talk about adversarial errors, where by $k$ errors we mean that at most $k$ symbols are deleted. In this context, let us define a $k$-deletion error correction code.
Definition 1
The $k$-deletion descendant ball $D_k(x)$ of a vector $x \in \{0,1\}^n$ is the set of all subsequences of $x$ of length $n-k$.
For example:
For $x = 000$, $D_1(x) = \{00\}$.
For $x = 010$, $D_1(x) = \{10, 00, 01\}$.
Thus the descendant balls need not be of the same size, unlike the Hamming balls we are familiar with. This will lead to some interesting scenarios, as we will see. In fact, we can analyze the size of 1-deletion balls:
Lemma 1
The size of the 1-deletion descendant ball of a vector $x$ is equal to the number of "runs" in $x$.
For example:
$x = 0011101$ has $4$ runs in total ($00$, $111$, $0$, $1$). Thus, $|D_1(x)| = 4$. It is easy to see why this is true: any deletion within a run leads to the same length-$(n-1)$ sequence in the descendant ball.
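To see Lemma 1 in action, here is a tiny Python sketch (the helper names are mine) that brute-forces the 1-deletion descendant ball and counts runs:

```python
from itertools import groupby

def descendants_1(x):
    """All distinct strings obtained from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def num_runs(x):
    """Number of maximal runs of equal symbols in x."""
    return sum(1 for _ in groupby(x))

x = "0011101"
assert num_runs(x) == len(descendants_1(x)) == 4   # runs: 00, 111, 0, 1
```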
Definition 2
We call a subset $\mathcal{C} \subseteq \{0,1\}^n$ a $k$-deletion error correction code if for any $x, y \in \mathcal{C}$ with $x \neq y$,
$$D_k(x) \cap D_k(y) = \emptyset.$$
We will mainly deal with $k = 1$ during the talk.
2.1 Repetition coding
Let's start with the simplest of the deletion correction codes you can think of: "repetition codes". We repeat every symbol $2$ times. Is that sufficient? Say we observe some symbol an odd number of times; then we know that a symbol was deleted in that run of $0$'s or $1$'s. Note that we still cannot figure out the location of the deletion, but we can figure out what was deleted. This idea can in fact be extended to correct $k$ deletions by repeating every symbol $k+1$ times. But this is quite bad: for correcting $k$ errors, our communication rate is $1/(k+1)$. Still, for a single error this is better than the BSC case, where we had to repeat things 3 times.
Cool! Can we do better? We will next look at a cool class of codes known as the VT codes, or the Varshamov-Tenengolts codes. But before doing so, let's look at a puzzle.
2.2 A puzzle
I believe the puzzle has some connection with VT codes and might help with the understanding; but if not, it is still a cool, simple puzzle! So let's say Mary is the Queen of the seven kingdoms, and she has ordered 100 big barrels filled with gold coins, where each coin has a weight of 10 gm. But she knows from her secret agency that one of the barrels contains counterfeit coins weighing only 9 gm. She has an electronic weighing scale which she can use; so the question is:
How can she determine which barrel contains the counterfeit coins with a single measurement?
The solution is simple: she takes $i$ coins from barrel $i$ and places them all on the electronic weighing scale. Now if the weight is less than expected by $i$ grams, then it is barrel $i$ which is counterfeit! We will come back to this puzzle :)
3 VT codes
Alright! We are all set to define VT codes.
Definition 3
The Varshamov-Tenengolts code $VT_a(n)$ is defined as:
$$VT_a(n) = \Big\{ x \in \{0,1\}^n : \sum_{i=1}^{n} i\, x_i \equiv a \pmod{n+1} \Big\}. \tag{4}$$
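As a sanity check on this definition, here is a small Python snippet (helper names are mine) that enumerates $VT_0(8)$ and verifies that all pairs of codewords have disjoint 1-deletion descendant balls, i.e., that it really is a 1-deletion correction code:

```python
from itertools import product

def vt_code(a, n):
    """All x in {0,1}^n with sum of i*x_i = a (mod n+1), positions 1-indexed."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a]

def descendants_1(x):
    return {x[:i] + x[i + 1:] for i in range(len(x))}

# codewords of VT_0(8) pairwise share no 1-deletion descendant
code = vt_code(0, 8)
for u in code:
    for v in code:
        if u != v:
            assert descendants_1(u).isdisjoint(descendants_1(v))
```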
Some historical context on these codes: they were first proposed as error correction codes for 1-bit Z-channel errors, which means essentially that a $1$ can flip to a $0$, but not the other way round. Z-channel errors are known as asymmetric bit flips. Varshamov and Tenengolts proposed these codes in 1965 [14], and then Levenshtein discovered that they in fact work well for the deletion channel as well!
Zchannel correction
So before we look into 1-deletion correction, let us see how these codes can correct one Z-channel error, i.e., one of the $1$'s can flip to a $0$.
Let $y$ be the received word; then we can still compute the checksum deficit:
$$s = \Big(a - \sum_{i=1}^{n} i\, y_i\Big) \bmod (n+1).$$
In case there is a flip at position $p$, then $s = p$. Thus, we can correct the Z-channel error! Here is where the similarity with the puzzle can be seen: the position indices $1, \dots, n$ play the role of the number of coins taken from each barrel.
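A minimal Python sketch of this Z-channel correction step (assuming the 1-indexed checksum above; the function name is mine):

```python
def zchannel_correct(y, a):
    """Correct at most one 1 -> 0 flip in a VT_a(n) codeword y."""
    n = len(y)
    s = (a - sum(i * b for i, b in enumerate(y, 1))) % (n + 1)
    if s == 0:
        return y                       # checksum consistent: no flip
    y = list(y)
    y[s - 1] = 1                       # the deficit equals the flipped position
    return tuple(y)

# usage: flip a 1 in a codeword and recover it
x = (1, 0, 1, 1, 0, 0, 1, 0)
a = sum(i * b for i, b in enumerate(x, 1)) % (len(x) + 1)   # here a = 6
y = list(x); y[3] = 0                  # flip the 1 at position 4
assert zchannel_correct(tuple(y), a) == x
```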
3.1 VT code decoding
We are all set to discuss the decoding for the deletion channel:

First of all, note that if it is only a deletion channel, then "error detection" comes for free from the length of the received word, unlike with bit-flip errors.

Let $y$ be the received erroneous codeword after one deletion from the codeword $x \in VT_a(n)$. Define:
$$w = \sum_{i=1}^{n-1} y_i, \tag{5}$$
$$s = \Big(a - \sum_{i=1}^{n-1} i\, y_i\Big) \bmod (n+1). \tag{6}$$
Let $p$ be the position at which the deletion occurred, as in $x_p$ was deleted. Let $L_1, L_0$ be the counts of 1's and 0's to the left of position $p$, and $R_1, R_0$ be the counts to the right. In that case: $s = R_1$ if $x_p = 0$, and $s = p + R_1 = L_0 + w + 1$ if $x_p = 1$.
Thus, as $w = L_1 + R_1 \ge R_1$: if $s \le w$, a $0$ was deleted at a position with $s$ 1's to its right; otherwise, a $1$ was deleted such that there are $s - w - 1$ zeros to its left.

Note that we can thus uniquely determine the sequence, but we cannot determine the exact location of the deletion, as it can be any $0$ or $1$ in the run we identified.
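Putting the two cases together, here is a sketch of the decoder in Python (my own implementation of the steps above, with an exhaustive sanity check):

```python
from itertools import product

def vt_decode(y, a):
    """Recover the VT_a(n) codeword from its version y after a single deletion."""
    n = len(y) + 1
    w = sum(y)                                   # number of 1s in y
    s = (a - sum(i * b for i, b in enumerate(y, 1))) % (n + 1)
    y = list(y)
    if s <= w:
        # a 0 was deleted: reinsert it with exactly s ones to its right
        pos, ones = len(y), 0
        while ones < s:
            pos -= 1
            ones += y[pos]
        return tuple(y[:pos] + [0] + y[pos:])
    # a 1 was deleted: reinsert it with exactly s - w - 1 zeros to its left
    pos, zeros = 0, 0
    while zeros < s - w - 1:
        zeros += (y[pos] == 0)
        pos += 1
    return tuple(y[:pos] + [1] + y[pos:])

# exhaustive check: every single deletion from every VT_0(8) codeword decodes back
n, a = 8, 0
code = [x for x in product((0, 1), repeat=n)
        if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a]
for x in code:
    for i in range(n):
        assert vt_decode(x[:i] + x[i + 1:], a) == x
```

Note that the decoder reinserts the bit at a canonical position inside the identified run; any position inside that run gives the same sequence, which is exactly the point made above.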
3.2 VT code rate
As the VT codes are combinatorial objects, we can get exact formulae for their sizes; these can be found in [12]. Here are a few interesting facts:

Lemma 2
For some $a$:
$$|VT_a(n)| \ge \frac{2^n}{n+1}.$$
As every $x \in \{0,1\}^n$ lies in exactly one of the $n+1$ sets $VT_0(n), VT_1(n), \dots, VT_n(n)$:
$$\sum_{a=0}^{n} |VT_a(n)| = 2^n.$$
This pigeonhole argument leads to the lemma.

It can in fact be shown that $a = 0$ leads to the largest code size, and $a = 1$ to the smallest.

For $n = 2^k - 1$, all the VT code sizes are in fact equal to $2^n/(n+1)$.
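These size facts are easy to verify by enumeration for small $n$; a quick Python check (the helper name is mine):

```python
from itertools import product

def vt_size(a, n):
    """Size of VT_a(n), by brute-force enumeration."""
    return sum(1 for x in product((0, 1), repeat=n)
               if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a)

# the n+1 codes partition {0,1}^n, and a = 0 gives the largest one (n = 6)
sizes = [vt_size(a, 6) for a in range(7)]
assert sum(sizes) == 2 ** 6 and max(sizes) == sizes[0]

# for n = 2^k - 1 (here n = 7) all the sizes coincide at 2^n/(n+1) = 16
assert all(vt_size(a, 7) == 16 for a in range(8))
```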
3.3 Optimality of VT codes
We say a $k$-deletion error correction code is "optimal" if it has the largest size amongst all $k$-deletion error correction codes. Let us analyze the "optimality" of VT codes.

Levenshtein [8] showed that optimal 1-deletion correction codes have asymptotic size $\sim 2^n/n$. This makes VT codes asymptotically optimal.

People have not been able to prove that VT codes are optimal non-asymptotically. Finding the "optimal" 1-deletion code is an NP-hard problem in general, as it involves finding a maximum independent set in the graph where vertices (sequences) are connected if they share a deletion descendant [12]. But for small $n$ it is known that they are optimal, using computer programs. Sizes of these codes are tabulated in [12].
For higher $n$, due to the exponential nature of the algorithms, we cannot say anything yet.

VT codes also have the property that they are "perfect codes" [9], which implies that their descendant sets, which are disjoint, cover the entire space $\{0,1\}^{n-1}$.
Levenshtein showed that, surprisingly, this is true for all $n$, which is quite cool in itself! The "perfect codes" terminology comes from Hamming codes being perfect. But, unlike the Hamming distance case, here perfect does not imply optimal. Why? Because the number of descendants is not fixed: some sequences have fewer descendants and some have more, so a code that is not perfect may still potentially pack in more codewords.

Linearity: VT codes are linear for small $n$ but never beyond [12]. Variants of VT codes (restrictions of VT codes) can be made linear at the cost of higher redundancy. Although I am not sure how linearity of these codes is useful, if the decoding is still nonlinear (linear time complexity, but nonlinear in nature).
3.4 Systematic Encoding
Now that we have taken a look at the linear-time decoding of VT codes, it is natural to ask if there exists a nice way to encode data. This problem surprisingly remained open for more than 30 years until 1998, when Abdel-Ghaffar et al. [1] provided a very convenient way of, in fact, "systematic encoding" of the data.

For a code of length $n$, let $t = \lceil \log_2(n+1) \rceil$ be the number of "parity" bits and $k = n - t$ the number of data bits. Let the data bits be $d_1, \dots, d_k$, and the codeword to be formed be $x \in VT_a(n)$.

Fill in the data bits in all positions except the powers of two $1, 2, 4, \dots, 2^{t-1}$. Thus the codeword looks like:
$$x = (x_1, x_2, d_1, x_4, d_2, d_3, d_4, x_8, d_5, \dots).$$
We can compute the partial checksum $S = \sum_{i \notin \{1, 2, \dots, 2^{t-1}\}} i\, x_i$. As all the positions of $x$ except the parity positions are decided, to obtain $x \in VT_a(n)$ we need:
$$\sum_{j=0}^{t-1} 2^j\, x_{2^j} \equiv a - S \pmod{n+1}.$$
Since $0 \le (a - S) \bmod (n+1) \le n \le 2^t - 1$, we can now conveniently choose the parity bits $x_{2^j}$ as the $t$-bit binary expansion of $(a - S) \bmod (n+1)$.

Note that the redundancy $\lceil \log_2(n+1) \rceil$ for 1-deletion correction is exactly the same as the redundancy of the Hamming code. Not sure if this is a coincidence or something more!
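The scheme above can be sketched in a few lines of Python (assuming $a = 0$ and the parity-at-powers-of-two placement; function and variable names are mine):

```python
from math import ceil, log2

def vt_encode_systematic(data, n, a=0):
    """Place data bits outside positions 1, 2, 4, ..., 2^(t-1); then fix those
    parity positions so that the full word lands in VT_a(n)."""
    t = ceil(log2(n + 1))
    parity_pos = [2 ** j for j in range(t)]      # 1-indexed parity positions
    assert len(data) == n - t
    x = [0] * (n + 1)                            # 1-indexed; x[0] unused
    bits = iter(data)
    for i in range(1, n + 1):
        if i not in parity_pos:
            x[i] = next(bits)
    deficit = (a - sum(i * x[i] for i in range(1, n + 1))) % (n + 1)
    for j, p in enumerate(parity_pos):           # deficit <= n < 2^t, so it fits
        x[p] = (deficit >> j) & 1                # binary expansion of the deficit
    return tuple(x[1:])

# usage: n = 10 gives t = 4 parity bits and k = 6 data bits
x = vt_encode_systematic((1, 0, 1, 1, 0, 0), n=10)
assert sum(i * b for i, b in enumerate(x, 1)) % 11 == 0   # x is in VT_0(10)
# the data bits can be read off directly from the non-parity positions
assert tuple(b for i, b in enumerate(x, 1) if i not in (1, 2, 4, 8)) == (1, 0, 1, 1, 0, 0)
```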
4 Insertion + Deletion + Substitution codes
We looked at deletion correction in depth. In the next part of the tutorial, we will extend this understanding to more general scenarios. The first scenario is insertion errors instead of deletion errors.
4.1 General indel error codes
Levenshtein [8] showed this general lemma:
Lemma 3
Any $k$-deletion correction code can also correct any combination of $s$ deletions and $r$ insertion errors, where $s + r \le k$.
Note that here we do not need to know $s$ and $r$ beforehand. The general proof is quite simple. Here, we will prove the simpler version with a single insertion error, as that is sufficient to get an intuitive understanding.

Let us assume that $\mathcal{C}$ is a 1-deletion correction code. We want to show that $\mathcal{C}$ can correct single insertion errors as well.

Let us assume, on the contrary, that there exist codewords $x^{(1)} \neq x^{(2)} \in \mathcal{C}$ such that after one insertion error in each, the resulting noisy codewords are equal, say to $y$. Let the insertion occur at position $p_1$ in $x^{(1)}$ and at position $p_2$ in $x^{(2)}$.

As the noisy codewords are equal, deleting the symbols at positions $p_1$ and $p_2$ from $y$ results in the same length-$(n-1)$ vector $z$ in both cases. However, $z$ is in fact a deletion descendant of both codewords $x^{(1)}$ and $x^{(2)}$. This is contradictory to the definition of a 1-deletion correction code, as no two codewords can share a descendant. Thus, $\mathcal{C}$ has to be 1-insertion correcting as well.
Note that this is more of an existential result, and efficient deletion error decoding algorithms might not directly translate into efficient insertion correction algorithms.
4.2 VT codes for 1-insertion, 1-deletion, 1-substitution correction
Levenshtein showed the surprising fact that, with a simple modification, standard VT codes can be converted into codes that correct one insertion, deletion, or substitution. The modification is as follows:
The modified code is defined as:
$$\Big\{ x \in \{0,1\}^n : \sum_{i=1}^{n} i\, x_i \equiv a \pmod{2n+1} \Big\}. \tag{7}$$
Let us try to understand why these codes work:

First of all, from the length of the received word, we know whether there was an insertion, a deletion, or a substitution.

Recollect that deletion error correction in VT codes only depends on the checksum deficits being distinct remainders modulo $n+1$. This should still hold true if the modulus is taken larger. Thus, with modulus $2n+1$, 1-deletion correction still holds.

1-insertion correction ability was already shown by the general lemma earlier. However, using a similar remainder trick, insertions can in fact be corrected efficiently using these codes.

The only case remaining to analyze is the 1-substitution, or 1-bit-flip, case. Let the bit at position $p$ in the codeword $x$ be flipped, resulting in the noisy codeword $y$. Clearly, the checksum changes by $+p$ (for a $0 \to 1$ flip) or $-p$ (for a $1 \to 0$ flip). Thus the deficit
$$s = \Big(a - \sum_{i=1}^{n} i\, y_i\Big) \bmod (2n+1)$$
takes the value $0$ for no flip, $p \in \{1, \dots, n\}$ for a $1 \to 0$ flip at position $p$, and $2n+1-p \in \{n+1, \dots, 2n\}$ for a $0 \to 1$ flip at position $p$.
As all these values are distinct, we can correct the 1-bit-flip. Note that this construction is not optimal just for 1-bit-flips, as it essentially encodes 1 bit less than Hamming codes.
5 VT codes for larger alphabets
Creating deletion codes for larger alphabets becomes a bit tricky. Of course, repetition coding still works for non-binary alphabets.
Code which does not work
When I started thinking about this problem, I came up with this code, which has a bug! Let us still take a look at it, as it gives some understanding of the intricacies of code design:
Define, for $x \in \{0, 1, \dots, q-1\}^n$, the two checksums:
$$S_1(x) = \sum_{i=1}^{n} i\, x_i, \qquad S_2(x) = \sum_{i=1}^{n} x_i.$$
Then we consider the code (for a suitable modulus $m$ for the first checksum):
$$\Big\{ x : S_1(x) \equiv a \pmod{m}, \ S_2(x) \equiv b \pmod{q} \Big\}.$$
Let us look at the argument as to why the code should work, and try to find the bug!
Argument: The first equation, similar to the binary VT code, will tell us the position of the deletion, and the second equation tells us the value.
Why does the above argument not work? The reason is that binary VT codes do not actually tell us the position of the deletion. They can only tell us in which "run" the deletion happened, and hence recover the codeword correctly, but not the exact position. For a non-binary alphabet, that is no longer enough.
How do we solve for that?
Code which works
This code appears in the work of Tenengolts from 1984 [13].
Define, for $x \in \{0, 1, \dots, q-1\}^n$, the auxiliary binary sequence $\alpha(x) \in \{0,1\}^n$ as:
$$\alpha_1 = 1, \qquad \alpha_i = \begin{cases} 1 & \text{if } x_i \ge x_{i-1} \\ 0 & \text{if } x_i < x_{i-1} \end{cases} \quad (2 \le i \le n).$$
Then we consider the code:
$$\Big\{ x \in \{0, \dots, q-1\}^n : \alpha(x) \in VT_a(n), \ \sum_{i=1}^{n} x_i \equiv b \pmod{q} \Big\}.$$
Let us try to analyze the decoding for this code:

First of all, as in the previous code, from the second constraint (the sum modulo $q$) we figure out the value of the deleted symbol. What remains is to determine its position.

The sequence $\alpha(x)$ is essentially capturing the monotonic regions of the sequence: $\alpha_i = 1$ when the sequence is increasing (non-decreasing), and $\alpha_i = 0$ when it is decreasing. Thus, a deletion in the sequence $x$ will in fact lead to exactly one deletion in the sequence $\alpha(x)$ (the position of the deletion might be shifted by 1, but that does not matter, as VT codes do not correct for position anyway).

As the first constraint makes $\alpha(x)$ a VT codeword, we can determine the run of $\alpha(x)$ in which the deletion occurred. As every run in $\alpha(x)$ corresponds to a monotonic increasing/decreasing subsequence of $x$, from the value of the deleted symbol we can correctly place it and complete the decoding!
Efficient systematic encoding for this code was discovered recently in a work by Abroshan et al. [2], and an implementation is included at https://github.com/shubhamchandak94/VT_codes/.
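The key structural fact used above, that one deletion in $x$ causes exactly one deletion in the auxiliary sequence, can be verified by brute force; a Python sketch (the helper follows the definition above, names mine):

```python
from itertools import product

def alpha(x):
    """Auxiliary binary sequence: alpha_1 = 1, alpha_i = 1 iff x_i >= x_{i-1}."""
    return (1,) + tuple(int(b >= a) for a, b in zip(x, x[1:]))

def descendants_1(s):
    """All strings obtained from s by one deletion."""
    return {s[:i] + s[i + 1:] for i in range(len(s))}

# one deletion in the q-ary x induces exactly one deletion in alpha(x)
q, n = 3, 5
for x in product(range(q), repeat=n):
    for i in range(n):
        assert alpha(x[:i] + x[i + 1:]) in descendants_1(alpha(x))
```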
6 Bursty deletion codes
In this section we will look at bursty deletions. By a single bursty deletion of size $b$, we mean that some $b$ consecutive symbols were deleted. Note that bursty deletion correction codes can correct exactly one burst of size $b$, but surprisingly they need not correct a burst of size $b-1$. For example, there are codes that can correct 2-bursty errors, but not single deletion errors!
6.1 VT code based construction
We consider a construction based on single-deletion-correcting VT codes to correct a single burst of $b$ deletions. How should one do that? One simple trick is to distribute these deletions across the length-$n$ sequence, so that each of $b$ subsequences has exactly 1 deletion. For simplicity, let $b$ divide $n$.
We arrange the codeword into a $b \times (n/b)$ array column by column, so that row $r$ contains the symbols in positions $r, r+b, r+2b, \dots$ The codeword has the property that each of the $b$ rows belongs to a VT code.

Let us analyze the scenario of a single bursty error of size $b$: the symbols in positions $p, p+1, \dots, p+b-1$ are deleted. This corresponds to exactly 1 deletion in each of the $b$ rows.

We still need to figure out which symbols of the received word belong to which rows, as the alignment might no longer hold. The cool thing is that, if there are exactly $b$ consecutive deletions, then every surviving symbol will still be correctly aligned to the rows.

Thus, our code can in fact correct a bursty error of exactly $b$ deletions, but need not correct a smaller number of deletions, which is quite unusual!
One important caveat to observe here is that the position of deletion in each of the rows is the same, or shifted by 1. Thus, if one of the rows is a VTcode, and the other rows just tell whether the error position is odd or even, that is enough to resolve the bursty error. This observation is the basis of further improvements to bursty error correction. For more details take a look at this paper: [11].
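The alignment property is easy to check in code; here is a Python sketch (toy symbols and names mine) verifying that a burst of $b$ consecutive deletions removes exactly one symbol from each interleaved row:

```python
def rows(x, b):
    """Row r of the b x (n/b) array holds positions r, r+b, r+2b, ... of x."""
    return [x[r::b] for r in range(b)]

def delete_burst(x, start, b):
    """Delete b consecutive symbols of x starting at index start."""
    return x[:start] + x[start + b:]

# toy codeword whose symbols are just their own positions, to track alignment
x, b = tuple(range(12)), 3
for start in range(len(x) - b + 1):
    y = delete_burst(x, start, b)
    for old, new in zip(rows(x, b), rows(y, b)):
        # each row loses exactly one symbol and stays a 1-deletion descendant
        assert len(new) == len(old) - 1
        assert any(old[:i] + old[i + 1:] == new for i in range(len(old)))
```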
7 Multiple Deletions
One would imagine that it should be possible to extend the elegant construction of VTcodes from single deletion correction to multiple deletions. However, this problem has proved to be much more difficult.
There have been some recent works which extend single deletion correction to multiple errors. Gabrys et al. [6] provide an extension of the VT code idea to correct two deletions. There have been other recent works which provide multiple-deletion-correcting codes using different (non-VT-code-based) ideas [3].
Acknowledgement
I would like to thank Jay Mardia and Mary Wootters for interesting discussions on deletion codes.
References
 [1] Khaled A. S. Abdel-Ghaffar and Hendrik C. Ferreira. Systematic encoding of the Varshamov-Tenengol'ts codes and the Constantin-Rao codes. IEEE Transactions on Information Theory, 44(1):340–345, 1998.
 [2] Mahed Abroshan, Ramji Venkataramanan, and Albert Guillen i Fabregas. Efficient systematic encoding of non-binary VT codes. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 91–95. IEEE, 2018.
 [3] Joshua Brakensiek, Venkatesan Guruswami, and Samuel Zbarsky. Efficient low-redundancy codes for correcting multiple deletions. IEEE Transactions on Information Theory, 64(5):3403–3410, 2017.
 [4] Mahdi Cheraghchi. Capacity upper bounds for deletion-type channels. Journal of the ACM (JACM), 66(2):9, 2019.
 [5] Eleni Drinea and Michael Mitzenmacher. Improved lower bounds for the capacity of i.i.d. deletion and duplication channels. IEEE Transactions on Information Theory, 53(8):2693–2714, 2007.
 [6] Ryan Gabrys and Frederic Sala. Codes correcting two deletions. IEEE Transactions on Information Theory, 65(2):965–974, 2018.
 [7] Adam Kalai, Michael Mitzenmacher, and Madhu Sudan. Tight asymptotic bounds for the deletion channel with small deletion probabilities. In 2010 IEEE International Symposium on Information Theory, pages 997–1001. IEEE, 2010.
 [8] Vladimir I. Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, pages 707–710, 1966.
 [9] Vladimir I. Levenshtein. On perfect codes in deletion and insertion metric. Discrete Mathematics and Applications, 2(3):241–258, 1992.
 [10] Michael Mitzenmacher et al. A survey of results for deletion channels and related synchronization channels. Probability Surveys, 6:1–33, 2009.
 [11] Clayton Schoeny, Antonia Wachter-Zeh, Ryan Gabrys, and Eitan Yaakobi. Codes correcting a burst of deletions or insertions. IEEE Transactions on Information Theory, 63(4):1971–1985, 2017.
 [12] Neil J. A. Sloane. On single-deletion-correcting codes. Codes and Designs, 10:273–291, 2000.
 [13] Grigory Tenengolts. Nonbinary codes, correcting single deletion or insertion (Corresp.). IEEE Transactions on Information Theory, 30(5):766–769, 1984.
 [14] R. R. Varshamov and G. M. Tenengolts. Codes which correct single asymmetric errors (in Russian). Avtomatika i Telemekhanika, 161(3):288–292, 1965.