
Coding for Optimized Writing Rate in DNA Storage

A method for encoding information in DNA sequences is described. The method is based on the precision-resolution framework, and is designed to work in conjunction with a recently suggested terminator-free, template-independent DNA synthesis method. The suggested method optimizes the number of information bits per synthesis time unit, namely, the writing rate. Additionally, the encoding scheme studied here takes into account the existence of multiple copies of the DNA sequence, which are independently distorted. Finally, quantizers for various run-length distributions are designed.





I Introduction

DNA is believed to be a compelling medium for storing information due to its high density, energy efficiency, stability and longevity. Studies in the past decade have strengthened this belief by providing evidence that data on DNA can be written, stored and read both outside and inside living organisms [4, 7, 8, 18, 15, 9]. To date, DNA has mostly been synthesized using the powerful method of phosphoramidite chemistry. However, the quantity and quality of the DNA synthesized using this method are limited by cost [4, 7] and chemistry limitations [11, 13, 6].

Recently, a new inexpensive enzymatic method of DNA synthesis was proposed in [10]. Unlike other synthesis methods [1, 3] that focus on the synthesis of a precise DNA sequence, in this method information storage does not rely on precision at the single-base level. The method relies on the synthesis of runs of homopolymeric bases, the lengths of which may vary. Moreover, the length of each run depends on the nucleotide appearing before the current run, which makes the run-length distribution context dependent, or in other words, gives it “memory”. The distribution of run lengths also depends on the synthesis time for each run. This synthesis method is expected to produce long strands, which is no longer a limitation thanks to advanced long-read sequencing technologies such as Oxford Nanopore and PacBio [2, 16].

In [10], a coding approach was proposed that encodes information in the transitions between non-identical nucleotides. Hence, any message could be encoded using ternary symbols (trits, as they are called in the paper). Further, the synthesis method was found to incur deletions, insertions and substitutions of nucleotides, of which deletions were the most prominent. These errors were tackled by synthesizing multiple strands, with synchronization bases interspersed between information-carrying nucleotides, and then using multiple sequence alignment for the reconstruction.

The main drawback of the coding approach suggested in [10] is that it does not make use of the information in the run-length distribution. Thus, the rate of information in [10] is inherently upper bounded by log₂ 3 ≈ 1.585 bits per run.
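To make the bound concrete, the transition code of [10] can be sketched as follows. This is a minimal illustration; the ordering of the three choices per transition is our own convention, not taken from [10].

```python
BASES = "ACGT"

def trits_to_dna(trits, start="A"):
    """Map trits (values 0-2) to a base sequence where consecutive bases
    differ; trit t selects among the 3 bases other than the previous one.
    The ordering of the 3 choices is an illustrative convention."""
    seq = [start]
    for t in trits:
        choices = [b for b in BASES if b != seq[-1]]
        seq.append(choices[t])
    return "".join(seq)

def dna_to_trits(seq):
    """Invert trits_to_dna: read one trit off each transition."""
    trits = []
    for prev, cur in zip(seq, seq[1:]):
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(cur))
    return trits

msg = [2, 0, 1, 1, 2]
dna = trits_to_dna(msg)
assert dna_to_trits(dna) == msg  # each run carries log2(3) ~ 1.585 bits
```

Since each transition has exactly three possible targets, no choice of mapping can exceed log₂ 3 bits per run, which is the bound the present paper improves upon.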

In this paper, we suggest a method for encoding information that works in tandem with this new synthesis method. As a figure of merit, we study the number of information bits encoded per synthesis time unit, namely, we optimize for the writing rate. This differs from previous methods, which optimize the number of bits per channel use. The method we suggest is based on the precision-resolution (PR) scheme that was introduced in [17] to improve upon run-length limited (RLL) constraints. The PR framework provided an optimal set of run lengths when the clock frequencies on the transmitter and receiver sides were not the same. For the DNA synthesis method described in [10], the PR framework cannot be applied in its current form [17] because of the memory constraint, the quantization error of run lengths, and the availability of multiple copies of the same message at the receiver end. We extend the PR framework to include those constraints, and design an encoder and decoder that utilize the variations in run lengths to store information in DNA.

The rest of the paper is organized as follows. In Section II we describe constrained systems [14] and the terminator-free template-independent DNA synthesis method introduced in [10]. In Section III, we describe the generalized precision-resolution scheme and provide examples of encoders and decoders when the run-length distributions are Binomial and Poisson. In Section IV, we conclude the paper and provide directions for future work.

II Preliminaries

II-A Constrained Systems

Throughout the paper, let Σ denote some fixed finite alphabet. We shall eventually take Σ to stand for the four bases found in DNA sequences. We use Σ* to denote the set of all finite strings over Σ, and Σ^+ = Σ* \ {ε}, where ε denotes the unique empty string. We denote the length of a string w ∈ Σ* by |w|. The concatenation of strings w and w′ is denoted by ww′, and w^k denotes a concatenation of k copies of w.

Assume G = (V, E, L) is a finite directed and labeled graph, that is, V is a finite set of vertices, E is a finite multiset of edges (since we allow parallel edges), and L : E → Σ^+ are the labels on the edges. We say the length of an edge e is |L(e)|. If all the edge lengths are 1, we say the graph is ordinary.

Let γ = e_1 e_2 ⋯ e_m be a path in G, e_i ∈ E. We say the path generates the word L(γ) = L(e_1) L(e_2) ⋯ L(e_m). The empty path generates the empty word ε. The graph G is called lossless if given vertices u, v ∈ V (perhaps the same vertex) and a string w ∈ Σ*, there is at most one path γ from u to v such that L(γ) = w.

A constrained system S is represented by a finite directed and labeled graph G as above. The language of the constrained system is defined as all the words generated by finite paths in G, namely, L(S) = { L(γ) : γ is a finite path in G }.
The capacity of a constrained system S is defined as

cap(S) = limsup_{n→∞} (1/n) log₂ |L(S) ∩ Σ^n|.

This intuitively measures the average number of bits of information per symbol in long strings. It is well known that there exist encoders with rates below the capacity, and no encoders with rate higher than the capacity. If G is ordinary and lossless, then it is also known that

cap(S) = log₂ λ(A_G),

where A_G is the adjacency matrix of G, and λ(A_G) is the spectral radius of A_G, which by Perron-Frobenius theory is the largest positive eigenvalue of A_G. We note that if G is not ordinary, we may easily construct an equivalent ordinary graph by converting each edge e with |L(e)| = ℓ ≥ 2 into a path of ℓ edges, each labeled by a single letter of L(e), by inserting ℓ − 1 auxiliary vertices.
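The spectral-radius formula is easy to evaluate numerically. Below is a minimal sketch in pure Python, using power iteration and an illustrative binary (0,1)-RLL example (not one of the graphs constructed in this paper):

```python
import math

def spectral_radius(A, iters=300):
    """Perron eigenvalue of a primitive nonnegative matrix, by power
    iteration with max-normalization."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Illustrative example: the binary (0,1)-RLL constraint "no two
# consecutive 1s"; its capacity is log2 of the golden ratio, ~0.6942.
A = [[1, 1],
     [1, 0]]  # states: "last bit was 0", "last bit was 1"
cap = math.log2(spectral_radius(A))
```

For ordinary lossless graphs this number is exactly the maximal achievable rate in bits per symbol.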

II-B Terminator-Free DNA Synthesis

In [10], a method for DNA synthesis is described which is faster and cheaper than known methods. It does, however, suffer from distinct noise characteristics. We describe here the relevant coding-theoretic aspects of this method.

The alphabet is, of course, Σ = {A, C, G, T}. The synthesis process proceeds in rounds. Assume at the beginning of the round, the current string is w. A letter a ∈ Σ is chosen, which differs from the last letter of w, denoted b. A chemical reaction is then allowed to occur for a duration of t time units. The resulting string at the end of the round is w a^ℓ, where ℓ is a random variable whose distribution depends on a, b, and t. We emphasize the fact that the distribution of the new run length depends on the new letter being appended, the last letter of the string at the beginning of the round, and the duration of the chemical reaction.
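The round structure above can be simulated with a toy model. The geometric run-length distribution and its parameter below are illustrative placeholders only; in [10] the distribution is context dependent and must be measured empirically.

```python
import random

BASES = "ACGT"

def synthesize(rounds, p_extend=0.75, seed=0):
    """Toy simulation of terminator-free synthesis: each round appends a
    random-length run of a base differing from the last base written.
    The geometric run-length model and p_extend are illustrative; in the
    real process the distribution depends on the base context and the
    reaction time."""
    rng = random.Random(seed)
    seq = ""
    for _ in range(rounds):
        last = seq[-1] if seq else None
        base = rng.choice([b for b in BASES if b != last])
        run = 1
        while rng.random() < p_extend:  # geometric run-length extension
            run += 1
        seq += base * run
    return seq
```

Note that each round contributes exactly one run, since the chosen base always differs from the last base of the current string.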

III Generalized Precision-Resolution Schemes

In this section we generalize the precision-resolution (PR) framework to be able to handle the DNA synthesis method described in Section II.

The PR framework was introduced in [17] as a way to improve RLL constraints. In essence, the RLL constraint was originally devised to solve the problem of clocking differences between the transmitter and receiver. A binary sequence may be thought of as runs of zeros separated by ones, where the information is encoded in the lengths of the runs of zeros. However, if the clock frequency in the transmitter and receiver is not exactly the same, a transmitted run length may be measured differently by the receiver than originally intended by the transmitter. Thus, the constrained code employed should make sure run lengths may be unambiguously recoverable at the receiver side. It was shown in [17] that the choice of allowable run lengths in the RLL constraint is sub-optimal, and that the PR framework provides an optimal set of run-lengths, thus increasing the system’s capacity.

The PR framework, however, cannot be applied “as-is” in the case at hand. This is due to three main differences:

  1. Memory – The encoders described in [17] are block encoders, namely, encoders with a single state only. In the case of DNA synthesis as described in Section II, we must have memory since the behavior of each writing phase depends on the previous run that was written.

  2. Quantization error – In [17] it is assumed that a transmitted run length may change, and that the distribution of the noisy run length has a finite support. Thus, [17] is able to partition the set of all possible run lengths into disjoint intervals, and the receiver may quantize a received run length to an interval unambiguously. This does not seem to be the case described in [10], and we must assume the quantizer at the receiver may err.

  3. Multiple copies – The setting described in [17] allows the encoder to create a single output that is transmitted over the channel, and decoded at the receiver. Here, however, multiple copies of the DNA string may be created simultaneously. All of these copies are created from the same user information, but the synthesis process may result in different DNA strings. We may use these extra copies to improve our overall scheme.

We therefore describe a new framework that builds upon the work of [17] and generalizes it.

III-A Framework Description

Fix the alphabet Σ = {A, C, G, T}. Assume a synthesis round of t time units, attempting to write a letter a following a run of b's, a ≠ b. The actual resulting run length is a random variable, where we note that the distribution depends on the current letter being written, a, the previously written letter, b, and the duration of the synthesis round, t.

Next, we observe that the synthesis method lends itself naturally to writing several copies of the same string (see [10]). Let M denote the number of copies we write. Thus, we have M i.i.d. random variables representing the actual run length resulting in each of the copies.

Following the general PR framework of [17], we now do the following:

  • We fix a maximal run-length decoding error probability, ε.

  • We fix a maximal synthesis time for all runs. This is a desirable feature since long runs may affect the DNA molecule’s stability.

  • For each a, b ∈ Σ, a ≠ b, we fix a sequence of allowable synthesis-round times, t_1 < t_2 < ⋯ < t_k, to be used when writing a run of a’s following a run of b’s. Thus, k is the number of possible synthesis times for such a run. (We note that, for later convenience, k does not depend on a and b.)

  • Finally, we fix a quantizing function, Q. This function receives (ℓ_1, …, ℓ_M), a vector of M i.i.d. random variables that are distributed according to the run-length distribution induced by some allowable synthesis time t_j. The function attempts to find the index j, returning Q(ℓ_1, …, ℓ_M). We require that Pr[Q(ℓ_1, …, ℓ_M) ≠ j] ≤ ε.

We point out that, in the setup described above, we only consider run-length errors and ignore other types of noise, such as insertion, deletion, and substitution errors. We also observe that this setting may cause a designed run to not appear at all, perhaps causing its two adjacent runs to be merged. However, by synthesizing several copies, the probability that the same run is deleted from all copies is exponentially small in the number of copies. A similar strategy was used in previous work, and run deletions may then be handled by an alignment algorithm [10].

We now construct the following graph G. We set V = Σ. For all a, b ∈ Σ, a ≠ b, we add k parallel edges directed from b to a with labels a^{t_j}, for j = 1, …, k.

At this point we make an important observation: the system S does not describe the DNA strings we write. Instead, the strings in L(S) may be thought of as a description of the synthesis algorithm. An edge with label a^t describes running a synthesis round for the letter a for a duration of t time units. This is particularly important since the capacity, cap(S), measures the average number of bits per time unit of synthesis rounds. While this may seem odd at first, we emphasize the following two points. First, the main cost in the transmission process described in this paper is not the length of the DNA strings; instead, the main cost stems from the time it takes to synthesize the DNA molecules. Second, even in a standard use of constrained systems, the length of the transmitted string differs from the length of the received string (for example, due to a difference in clock frequencies). Here, we may think of the transmitter as attempting to write the string describing the synthesis rounds, whereas the receiver reads the resulting DNA string.
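To illustrate how a capacity in bits per synthesis time unit can be computed, consider the simplified single-state case where the only design freedom is the set of allowable durations. The durations below are illustrative, and the characteristic-equation approach follows the standard treatment of variable-length edges:

```python
import math

def time_capacity(taus, lo=1.0 + 1e-9, hi=16.0, iters=200):
    """Capacity in bits per time unit of a single-state system whose
    edges have durations taus: log2 of the root lam > 1 of
    sum(lam**-t for t in taus) == 1, found by bisection
    (the left-hand side is decreasing in lam)."""
    f = lambda lam: sum(lam ** -t for t in taus) - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.log2((lo + hi) / 2)

# durations {1, 2} reproduce the golden-ratio capacity, ~0.6942
cap = time_capacity([1, 2])
```

Adding more allowable durations can only increase the root, and hence the writing rate, which is the effect the PR framework exploits.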

Lemma 1

The graph G constructed in this section is lossless. If G′ is the equivalent ordinary graph of G (created by inserting dummy vertices), then cap(S) = log₂ λ(A_{G′}).
Proof. Assume to the contrary there exist two distinct paths, γ and γ′, from u to v that generate the same word. Without loss of generality we also assume the two paths disagree on the first edge taken.

Let a^t be the label of the first edge in γ. We must have that the first edge of γ′ is labeled by a^{t′}, where t ≠ t′ (distinct parallel edges carry distinct lengths). Again, without loss of generality, assume t < t′. We must have that γ has a second edge, but by construction, it is labeled by c^{t″} with c ≠ a. It follows that the two generated words differ in position t + 1, a contradiction. Hence, G is lossless.

The proof that G′ is lossless follows along the same lines, after noting that two hypothetical distinct paths that generate the same word and differ in their first edge must start in a vertex originally from G (and not in an auxiliary vertex).

Finally, the fact that cap(S) = log₂ λ(A_{G′}) for an ordinary lossless graph is well known (e.g., see [12]). ∎

Using standard techniques, e.g., the state-splitting algorithm (see [12, 14]), we can build encoders for S with rate that asymptotically approaches cap(S). At the receiver side, the run lengths are quantized using the quantizing functions, and a decoder is employed to reverse the encoding process.

III-B Fixing Quantizer Errors

We now turn to discuss errors due to the quantizers. Intuitively, at the receiver side, the receiver is well aware of the fact that the previous run was a run of b’s, whereas the current one is a run of a’s, for some a, b ∈ Σ, a ≠ b. It is, therefore, the quantizer’s goal to determine the synthesis-time index from the received run lengths of a’s. By construction, the probability of error is at most ε.

Assume a total of m runs was written, where the i-th run was of designed length index j_i. It is now our goal to protect the sequence (j_1, …, j_m) from a typical ε-fraction of errors. To that end, we may use any error-correcting code capable of correcting a typical ε-fraction of errors; such codes exist with asymptotic redundancy approaching the corresponding entropy bound [5].

Using such a linear systematic code, we map (j_1, …, j_m) to (j_1, …, j_m, r_1, …, r_s), where r_1, …, r_s are the redundancy symbols. Since j_1, …, j_m are already represented by the sequence synthesized thus far, we only need to append DNA bases to represent the redundancy of the code, namely the sequence r_1, …, r_s. We do so by first identifying our original alphabet Σ with Z_q, where q = |Σ| (in the case of DNA bases, we get q = 4). We then transform the sequence of redundancy symbols into a sequence d_1, …, d_{s′} over the alphabet Z_q \ {0}.

Finally, to write the redundancy we use runs of a fixed designed length, and a differential encoding. In detail, if the first m runs were of the letters a_1, …, a_m respectively, the following runs are of the letters a_{m+i} = a_{m+i−1} + d_i, for i = 1, …, s′, where addition is done in Z_q. Since d_i ≠ 0, consecutive redundancy runs indeed use distinct letters.
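The differential step can be sketched as follows, under the illustrative convention that a redundancy symbol is the index difference (mod 4) between consecutive run letters:

```python
BASES = "ACGT"  # identified with Z_4 via their indices

def diff_encode(redundancy, last_base):
    """Write nonzero redundancy symbols r_i in {1, 2, 3} as bases, each
    obtained from the previous base by stepping r_i positions mod 4;
    r_i != 0 guarantees consecutive bases differ. (An illustrative
    realization of the differential step.)"""
    out = []
    prev = BASES.index(last_base)
    for r in redundancy:
        assert 1 <= r <= 3
        prev = (prev + r) % 4
        out.append(BASES[prev])
    return "".join(out)

def diff_decode(bases, last_base):
    """Recover the redundancy symbols from the run letters."""
    prev = BASES.index(last_base)
    out = []
    for b in bases:
        cur = BASES.index(b)
        out.append((cur - prev) % 4)
        prev = cur
    return out

red = [1, 3, 2, 1]
enc = diff_encode(red, "A")
assert diff_decode(enc, "A") == red
```

Since the information is carried by the letters alone, the (noisy) lengths of the redundancy runs are indeed irrelevant at the decoder.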

Overall, upon receiving a transmission, the receiver extracts the sequence of run lengths and quantizes them to indices. For the redundancy runs, the run lengths are of no importance, and the letters of the runs determine d_1, …, d_{s′}, which may be used to find r_1, …, r_s. The decoder then uses the error-correcting code to obtain (j_1, …, j_m) with high probability. It then feeds the corrected sequence to the constrained decoder to obtain the user information. We therefore have:

Theorem 2

Using the setting described in this section, for all large enough message lengths n, any n user bits may be encoded into a DNA sequence using synthesis time at most asymptotically n/cap(S), plus the time needed for the redundancy runs, and be decoded correctly with high probability.


Proof. Take the n bits and feed them to the encoder, resulting in a DNA synthesis process of time asymptotically n/cap(S). In the worst case, this sequence is composed of synthesis rounds of the shortest allowable length only. Protect these rounds using an error-correcting code of rate asymptotically matching the redundancy bound, resulting in additional redundancy rounds that need to be written. ∎

Finally, this last theorem may be further improved by assuming the user input consists of uniform i.i.d. bits, and replacing the worst-case assumption, that all rounds are of the shortest length, with the expected number of rounds. This may be done by a Markov-chain analysis of the encoder.

Theorem 3

Assume the setting described in this section, and let G′ be the ordinary version of G. Further assume the user information bits are i.i.d. uniform random bits. Then for all large enough message lengths, the user information bits may be encoded into a sequence and decoded correctly with high probability, with expected synthesis time governed by π, the sum of probabilities of non-auxiliary vertices in the stationary distribution of the max-entropic Markov chain over G′.


Proof. The proof is similar to that of Theorem 2, with the difference being that the expected number of synthesis rounds (after encoding the user information bits) is governed by π, since a new round starts every time we reach a non-auxiliary vertex in G′. ∎
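The stationary distribution of the max-entropic Markov chain used in Theorem 3 can be computed from the adjacency matrix via its left and right Perron eigenvectors (the Parry measure). Below is a minimal pure-Python sketch on an illustrative two-state graph (not one of this paper's synthesis graphs):

```python
def perron(A, iters=1000):
    """Perron eigenvalue and right eigenvector (max-normalized) of a
    primitive nonnegative matrix, by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam, v

def parry_stationary(A):
    """Stationary distribution of the max-entropic (Parry) Markov chain
    on the graph with adjacency matrix A: pi_i is proportional to
    u_i * v_i for the left/right Perron eigenvectors u, v."""
    _, v = perron(A)
    At = [list(col) for col in zip(*A)]  # transpose for the left vector
    _, u = perron(At)
    raw = [u[i] * v[i] for i in range(len(A))]
    s = sum(raw)
    return [x / s for x in raw]

# illustrative 2-state example (the golden-mean shift);
# the known closed form is pi[0] = (5 + sqrt(5)) / 10
pi = parry_stationary([[1, 1], [1, 0]])
```

Summing the resulting probabilities over the non-auxiliary vertices of G′ gives the quantity π appearing in Theorem 3.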

III-C Quantizers for Binomial Run Lengths

We proceed, in this section and the following one, to study some special cases of quantizers, depending on the distribution of run lengths. In this section we assume that the run lengths have a Binomial distribution. Further, we assume that the run-length distributions are independent of the choice of the letters a and b, and depend only on the chosen synthesis time.

We construct a maximum-likelihood quantizer to find the correct run-length index. For a given run, let ℓ_1, …, ℓ_M denote the run lengths received at the output from the M copies that were synthesized. Assuming ℓ_1, …, ℓ_M are i.i.d. and distributed according to Binomial(n_j, p), the received run-length sum S = ℓ_1 + ⋯ + ℓ_M is distributed according to Binomial(M n_j, p), i.e.,

Pr[S = s] = C(M n_j, s) p^s (1 − p)^{M n_j − s}.

The log-likelihood of index j given the observed sum s is

LL_j(s) = log C(M n_j, s) + s log p + (M n_j − s) log(1 − p).

The index is determined by ĵ = argmax_j LL_j(s). Let the true index be denoted by j. Let s_j⁻ ≤ s_j⁺ be such that the range for which the decision is correct, i.e., ĵ = j, is given by s_j⁻ ≤ s ≤ s_j⁺. The probability of error is

Pr[S < s_j⁻] + Pr[S > s_j⁺].

Below, we describe the steps to design an encoder for this setup for different values of the parameters M and ε.

  • Step 1: Set the first designed length, n_1.

  • Step 2: For each subsequent index j, n_j is calculated by finding the minimum value such that the decision thresholds keep the probability of error at most ε. The construction stops at the last index for which the designed length does not exceed the maximal allowable run length.
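The maximum-likelihood decision over the summed copies can be sketched as follows; the designed parameters below are illustrative placeholders:

```python
import math

def binom_logpmf(k, n, p):
    """log of the Binomial(n, p) pmf at k (natural log)."""
    if k < 0 or k > n:
        return float("-inf")
    return (math.log(math.comb(n, k)) + k * math.log(p)
            + (n - k) * math.log(1 - p))

def ml_quantize(runs, ns, p):
    """Maximum-likelihood index for M observed copies of a run.
    The sum of M i.i.d. Binomial(n_j, p) lengths is Binomial(M*n_j, p),
    so ns should hold the products M*n_j; ns and p are illustrative."""
    s = sum(runs)
    return max(range(len(ns)), key=lambda j: binom_logpmf(s, ns[j], p))

# toy check with designed sums 10, 20, 40 and p = 0.5
ns = [10, 20, 40]
assert ml_quantize([5], ns, 0.5) == 0
assert ml_quantize([20], ns, 0.5) == 2
```

Because the likelihoods are unimodal in the sum, this argmax rule is equivalent to comparing the sum against fixed thresholds s_j⁻, s_j⁺.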

Figures 1(a) and 1(b) show the achievable rates for the PR system described above for two parameter settings, for different choices of the remaining parameters. Figure 1(c) shows the achievable rates for fixed parameters, except for varying values of the number of copies.

Fig. 1: (a), (b) Achievable rates for two parameter settings, for different parameter choices, using Theorem 3. (c) Comparison of achievable rates for different parameter choices using Theorem 3.

Fig. 2: (a) Achievable rates for Poisson run lengths for different parameter values using Theorem 3. (b) The required parameter values to achieve the rate curve in (a).

III-D Quantizers for Poisson Run Lengths

Suppose that the run lengths have a Poisson distribution with a parameter λ that can be set by the encoder. The parameter is chosen from the set {λ_1, …, λ_k}, λ_1 < λ_2 < ⋯ < λ_k. The decoder decides index j if the received run-length sum S falls within thresholds s_j⁻ ≤ S ≤ s_j⁺, where the thresholds are designed such that the decision regions are disjoint. Let the true index be denoted by j. The range for which the decision is correct is given by s_j⁻ ≤ S ≤ s_j⁺. The probability of error is given by

Pr[S < s_j⁻] + Pr[S > s_j⁺].


Below, we describe the design of an encoder for this setup.

  • Step 1: Set the first parameter, λ_1.

  • Step 2: For each subsequent index j, the parameter λ_j and the thresholds s_j⁻, s_j⁺ are calculated so that the probability of error is at most ε, with the lower threshold separating index j from index j − 1, and the upper threshold separating it from index j + 1.
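The crossing points of adjacent Poisson likelihoods have a closed form, which yields a simple threshold quantizer; the parameter values below are illustrative:

```python
import math

def poisson_thresholds(lams, M=1):
    """ML decision boundaries between consecutive Poisson parameters:
    the sum of M i.i.d. Poisson(lam) run lengths is Poisson(M*lam), and
    the likelihoods of adjacent indices cross at
    s = M*(lam2 - lam1) / ln(lam2/lam1)."""
    return [M * (b - a) / math.log(b / a) for a, b in zip(lams, lams[1:])]

def quantize(s, lams, M=1):
    """Return the index whose decision region contains the sum s."""
    th = poisson_thresholds(lams, M)
    j = 0
    while j < len(th) and s > th[j]:
        j += 1
    return j

# illustrative parameter set
lams = [2.0, 5.0, 12.0]
assert quantize(1, lams) == 0
assert quantize(6, lams) == 1
assert quantize(20, lams) == 2
```

The closed form follows from equating the two Poisson pmfs: e^{−μ₁} μ₁^s / s! = e^{−μ₂} μ₂^s / s! gives s = (μ₂ − μ₁)/ln(μ₂/μ₁), with μ_i = M λ_i.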

Figure 2(a) shows the achievable rate for different parameter values for the Poisson run-length based encoder and decoder obtained using the steps described above. Figure 2(b) shows the required parameter values to achieve the rate curve given in Figure 2(a).

We point out here that the steps presented for choosing the parameters in Sections III-C and III-D are based on heuristics and may not be optimal. Nevertheless, the results show that the achievable rate is well beyond the maximum rate of log₂ 3 ≈ 1.585 bits per run of [10].

IV Conclusion

We described in this paper a method for encoding information in DNA sequences that is based on the precision-resolution framework [17], in conjunction with the terminator-free synthesis method [10]. This method allows several possible run lengths for each run of the synthesized sequence, thus increasing the rate beyond the fundamental upper bound of log₂ 3 bits per run. Additionally, the method accounts for quantizer error during the reading process, and it utilizes the fact that the receiver is in possession of multiple copies (independently distorted) of the synthesized sequence. We note that the synthesis process may delete entire runs, perhaps also causing neighboring runs to merge. However, as in previous work [10], such deletions may be handled by an alignment algorithm.

We also studied Binomial and Poisson distributions for synthesized run lengths, and provided methods for designing quantizers for these distributions. A point for future research is to conduct empirical studies of the synthesis process of [10] in order to find the actual parameters of run length distributions.

Finally, we believe this method should eventually be used in conjunction with addressless information encoding, such as coding of sliced information [19]. We leave this for future research.


  • [1] S. Beaucage and M. Caruthers, “Deoxynucleoside phosphoramidites—a new class of key intermediates for deoxypolynucleotide synthesis,” Tetrahedron Letters, vol. 22, no. 20, pp. 1859–1862, 1981.
  • [2] C. Bleidorn, “Third generation sequencing: technology and its potential impact on evolutionary biodiversity research,” Systematics and Biodiversity, vol. 14, no. 1, pp. 1–8, 2016.
  • [3] M. H. Caruthers, “A brief review of DNA and RNA chemical synthesis,” Biochemical Society Transactions, vol. 39, no. 2, pp. 575–580, 2011.
  • [4] G. M. Church, Y. Gao, and S. Kosuri, “Next-generation digital information storage in DNA,” Science, vol. 337, p. 1628, 2012.
  • [5] T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing).   USA: Wiley-Interscience, 2006.
  • [6] P. Gaytán, “Chemical synthesis of oligonucleotides using acetone as a washing solvent,” Biotechniques, vol. 47, pp. 701–702, 2009.
  • [7] N. Goldman, P. Bertone, S. Chen, C. Dessimoz, E. M. LeProust, B. Sipos, and E. Birney, “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” Nature, vol. 494, no. 7435, pp. 77–80, 2013.
  • [8] R. N. Grass, R. Heckel, M. Puddu, D. Paunescu, and W. J. Stark, “Robust chemical preservation of digital information on DNA in silica with error-correcting codes,” Angew. Chem. Int. Ed., vol. 54, 2015, pp. 2552–2555.
  • [9] S. Jain, F. Farnoud, M. Schwartz, and J. Bruck, “Duplication-correcting codes for data storage in the DNA of living organisms,” IEEE Trans. Inform. Theory, vol. 63, no. 8, pp. 4996–5010, Aug. 2017.
  • [10] H. H. Lee, R. Kalhor, N. Goela, J. Bolot, and G. M. Church, “Terminator-free template-independent enzymatic DNA synthesis for digital information storage,” Nature Communications, vol. 10, no. 2383, pp. 1–12, 2019.
  • [11] E. M. LeProust, B. J. Peck, K. Spirin, H. B. McCuen, B. Moore, E. Namsaraev, and M. H. Caruthers, “Synthesis of high-quality libraries of long (150mer) oligonucleotides by a novel depurination controlled process,” Nucleic Acids Research, vol. 38, no. 8, pp. 2522–2540, 2010.
  • [12] D. Lind and B. H. Marcus, An Introduction to Symbolic Dynamics and Coding.   Cambridge University Press, 1995.
  • [13] D. Lowe, The Great Acetonitrile Shortage. In the Pipeline, 2009.
  • [14] B. H. Marcus, R. M. Roth, and P. H. Siegel, Constrained Systems and Coding for Recording Channels.   V. S. Pless and W. C. Huffman (Editors), Elsevier, Amsterdam, 1998.
  • [15] L. Organick, S. D. Ang, Y.-J. Chen, R. Lopez, S. Yekhanin, K. Makarychev, M. Z. Racz, G. Kamath, P. Gopalan, B. Nguyen, C. N. Takahashi, S. Newman, H.-Y. Parker, C. Rashtchian, K. Stewart, G. Gupta, R. Carlson, J. Mulligan, D. Carmean, G. Seelig, L. Ceze, and K. Strauss, “Random access in large-scale DNA data storage,” Nature Biotechnology, vol. 36, no. 3, pp. 242–248, 2018.
  • [16] M. O. Pollard, D. Gurdasani, A. J. Mentzer, T. Porter, and M. S. Sandhu, “Long reads: their purpose and place,” Human Molecular Genetics, vol. 27, no. R2, pp. R234–R241, 2018.
  • [17] M. Schwartz and J. Bruck, “On the capacity of the precision-resolution system,” IEEE Trans. Inform. Theory, vol. 56, no. 3, pp. 1028–1037, 2010.
  • [18] S. L. Shipman, J. Nivala, J. D. Macklis, and G. M. Church, “CRISPR–Cas encoding of a digital movie into the genomes of a population of living bacteria,” Nature, vol. 547, pp. 345–349, Jul. 2017.
  • [19] J. Sima, N. Raviv, and J. Bruck, “On coding over sliced information,” in Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT2019), Paris, France, Jul. 2019, pp. 767–771.