1 Introduction
Sequence comparison is an important step in many basic tasks in bioinformatics, from phylogeny reconstruction to genome assembly. It is often realised by sequence alignment techniques, which are computationally expensive, often requiring quadratic time in the length of the sequences. This has led to increased research into alignment-free techniques Vinga01032003 . Hence standard notions for sequence comparison are gradually being complemented and in some cases replaced by alternative ones DomazetLoso:2009:EEP:1671627.1671629 ; WABI2015 . One such notion is based on comparing the words that are absent in each sequence nullrly . A word is an absent word (or a forbidden word) of some sequence if it does not occur in the sequence. Absent words represent a type of negative information: information about what does not occur in the sequence.
Given a sequence of length n, the number of absent words of length at most n is exponential in n. However, the number of certain classes of absent words is only linear in n. This is the case for minimal absent words, that is, absent words in the sequence for which all proper factors occur in the sequence BeMiReSc00 . An upper bound on the number of minimal absent words of a word of length n over an alphabet of size σ is known to be O(σn) Crochemore98automataand ; Mignosi02 . Hence it may be possible to compare sequences in time proportional to their lengths, for a fixed-sized alphabet, instead of proportional to the product of their lengths. In what follows, we mainly consider sequences over a fixed-sized alphabet, since the most commonly studied alphabet in this context is the DNA alphabet {A, C, G, T}, of size 4.
An O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n over a fixed-sized alphabet, based on the construction of suffix automata, was presented in Crochemore98automataand . The computation of minimal absent words based on the construction of suffix arrays was considered in Pinho2009 ; although this algorithm has a linear-time performance in practice, its worst-case time complexity is O(n²). New O(n)-time and O(n)-space suffix-array-based algorithms were presented in DBLP:conf/isit/FukaeOM12 ; MAW ; PPAM2015 to bridge this unpleasant gap. An implementation of the algorithm presented in MAW is currently, to the best of our knowledge, the fastest available for the computation of minimal absent words. A more space-efficient solution to compute all minimal absent words in O(n) time was also presented in Belazzougui2013 , and an external-memory algorithm in emmaw .
In this paper, we consider the problem of comparing two sequences x and y of respective lengths m and n, using their sets of minimal absent words. In Chairungsee2012109 , Chairungsee and Crochemore introduced a measure of similarity between two sequences based on the notion of minimal absent words. They made use of a length-weighted index to provide a measure of similarity between two sequences, using sample sets of their minimal absent words, by considering the length of each member in the symmetric difference of these sample sets. This measure can be trivially computed in O(m+n) time and space provided that these sample sets contain minimal absent words of some bounded length ℓ. For unbounded length, the same measure can be trivially computed in O(m²+n²) time: for a given sequence, the cumulative length of all its minimal absent words can grow quadratically with respect to the length of the sequence. This length-weighted index forms the basis of a fundamentally new, recently introduced algorithm for online pattern matching
pattern_matching . The same problem can be considered for two circular sequences. The measure of similarity of Chairungsee and Crochemore can be used in this setting, provided that one extends the definition of minimal absent words to circular sequences. In Section 5, we give a definition of minimal absent words for a circular sequence from the Formal Language Theory point of view. We believe that this definition may also be of interest from the point of view of Symbolic Dynamics, which is the original context in which minimal absent words were introduced BeMiReSc00 .
We also find a connection between the information provided by minimal absent words and the information provided by the set of q-grams. The former (absent words) can be seen as a kind of negative information, while the latter (q-grams) can be seen as positive information.
Our contributions. Here we make the following contributions:

We first show that the upper bound O(σn) on the number of minimal absent words of a word of length n over an alphabet of size σ is tight if 2 ≤ σ ≤ n (Section 3).

We present an O(m+n)-time and O(m+n)-space algorithm to compute the similarity measure introduced by Chairungsee and Crochemore by considering all minimal absent words of two sequences x and y of lengths m and n, respectively, over a fixed-sized alphabet; thereby showing that it is indeed possible to compare two sequences in time proportional to their lengths (Section 4).

We show how this algorithm can be applied to compute this similarity measure for two circular sequences x and y of lengths m and n, respectively, within the same time and space complexity, as a result of the extension of the definition of minimal absent words to circular sequences (Section 5).

We then present an O(n)-time and O(n)-space algorithm that, given a word x of length n over an integer alphabet, computes the largest integer q for which each q-gram of x is a q-gram of some minimal absent word of x (Section 6).

Finally, we provide an open-source code implementation of our algorithms for sequence comparison using minimal absent words, and investigate potential applications of our theoretical findings (Section 7).
2 Preliminaries
We begin with basic definitions and notation. Let x = x[0]x[1]…x[n−1] be a word (or string) of length |x| = n over a finite ordered alphabet Σ of size |Σ| = σ. We also consider the case of words over an integer alphabet, where each letter is replaced by its rank in such a way that the resulting word consists of integers in the range [0, n^{O(1)}].
For two positions i and j on x, we denote by x[i..j] the factor (sometimes called substring) of x that starts at position i and ends at position j (it is of length j − i + 1 if i ≤ j), and by ε the empty word, the word of length 0. We recall that a prefix of x is a factor that starts at position 0 (of the form x[0..j]), a suffix is a factor that ends at position n − 1 (of the form x[i..n−1]), and that a factor of x is a proper factor if it is not x itself. A proper factor of x that is neither a prefix nor a suffix of x is called an infix of x. The set of all factors of the word x is denoted by Fact(x). Any factor of x of length q is called a q-gram (or q-mer) of x. The q-gram set of x is the set of all factors of x of length q. We denote the reverse word of x by rev(x), i.e. rev(x) = x[n−1]x[n−2]…x[0]. We say that a word x is a power of a word y if there exists a positive integer k, k > 1, such that x is expressed as k consecutive concatenations of y, denoted by x = y^k.
Let w be a word of length m with 0 < m ≤ n. We say that there exists an occurrence of w in x, or, more simply, that w occurs in x, when w is a factor of x. Every occurrence of w can be characterised by a starting position in x. We thus say that w occurs at the starting position i in x when w = x[i..i+m−1]. Conversely, we say that the word w is an absent word of x if it does not occur in x. The absent word w of x is minimal if and only if all its proper factors occur in x. The set of all minimal absent words for a word x is denoted by M_x. For example, if x = abaab, then M_x = {aaa, aaba, bab, bb}. In general, if we suppose that all the letters of the alphabet appear in x, which has length n, the length of a minimal absent word of x lies between 2 and n + 1. It is equal to n + 1 if and only if x is of the form a^n for some letter a. So, if x contains occurrences of at least two different letters, the length of any minimal absent word of x is upper bounded by n.
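The definition above can be checked mechanically. The following Python sketch (function names are ours; the quadratic-space enumeration is for illustration only, not the linear-time methods discussed later) computes the set of minimal absent words by brute force, using the fact that a minimal absent word of length at least 2 has the form a·u·b where both a·u and u·b occur in x:

```python
def factors(x):
    # All distinct factors (substrings) of x, including the empty word.
    return {x[i:j] for i in range(len(x) + 1) for j in range(i, len(x) + 1)}

def maw(x):
    """Brute-force minimal absent words of x: absent words all of whose
    proper factors occur in x (illustration only, quadratic space)."""
    F = factors(x)
    sigma = sorted(set(x))
    result = set()
    # Every minimal absent word of length >= 2 is a*u*b with a, b letters
    # and u a (possibly empty) word such that a+u and u+b occur in x.
    for u in F:
        for a in sigma:
            for b in sigma:
                w = a + u + b
                if w not in F and a + u in F and u + b in F:
                    result.add(w)
    return result

print(sorted(maw("abaab")))  # ['aaa', 'aaba', 'bab', 'bb']
```

Running it on the example word abaab reproduces the set given in the text.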
We now recall some basic facts about minimal absent words in Formal Language Theory. For further details and references the reader is referred to fici . A language over the alphabet Σ is a set of finite words over Σ. A language is regular if it is recognised by a finite-state automaton. A language is called factorial if it contains all the factors of its words, while it is called antifactorial if no word in the language is a proper factor of another word in the language. Given a word x, the language generated by x is the language x* = {x^k | k ≥ 0}. The factorial closure of a language L is the language consisting of all factors of the words in L, that is, the language Fact(L) = {w | w ∈ Fact(u) for some u ∈ L}. Given a factorial language L, one can define the (antifactorial) language of minimal absent words for L as MAW(L) = {w ∈ Σ* | w ∉ L and every proper factor of w belongs to L}.
Notice that MAW(Fact(L)) is not, in general, the same language as the union of the sets M_x for x ∈ L. Every factorial language L is uniquely determined by its (antifactorial) language of minimal absent words MAW(L), through the equation
L = Σ* ∖ Σ* MAW(L) Σ*.   (1)
The converse is also true, since by the definition of a minimal absent word we have
MAW(L) = Σ L ∩ L Σ ∩ (Σ* ∖ L).   (2)
The previous equations define a bijection between factorial and antifactorial languages. Moreover, this bijection preserves regularity. In the case of a single word x, the set M_x of minimal absent words for x is indeed the antifactorial language MAW(Fact(x)). Thus, applying (1) and (2) to the language of factors of a single word, we have the following lemma.
Lemma 1.
Given two words x and y, x = y if and only if M_x = M_y.
Given a word x of length n over a fixed-sized alphabet, it is possible to compute a trie storing all the minimal absent words of x in time and space linear in n. The size (number of nodes) of this trie is linear in n. Furthermore, we can retrieve x from its set of minimal absent words in time and space linear in the size of the input trie representing the minimal absent words of x. Indeed, the algorithm MF-trie, introduced in Crochemore98automataand , builds the tree-like deterministic automaton accepting the set of minimal absent words for a word x taking as input the factor automaton of x, that is, the minimal deterministic automaton recognising the set of factors of x. The leaves of the trie correspond to the minimal absent words for x, while the internal states are those of the factor automaton. Since the factor automaton of a word x has less than 2n states (for details, see CHL07 ), this provides a representation of the minimal absent words of a word of length n in space O(n).
2.1 Suffix array and suffix tree
We denote by SA the suffix array of a non-empty word x of length n. SA is an integer array of size n storing the starting positions of all (lexicographically) sorted non-empty suffixes of x, i.e. for all 1 ≤ r < n we have x[SA[r−1]..n−1] < x[SA[r]..n−1]. Let lcp(r, s) denote the length of the longest common prefix between x[SA[r]..n−1] and x[SA[s]..n−1] for all positions r, s on x, and 0 otherwise. We denote by LCP the longest common prefix array of x defined by LCP[r] = lcp(r−1, r) for all 1 ≤ r < n, and LCP[0] = 0. The inverse iSA of the array SA is defined by iSA[SA[r]] = r, for all 0 ≤ r < n. It is known that SA Nong:2009:LSA:1545013.1545570 , iSA, and LCP indLCP of a word of length n, over an integer alphabet, can be computed in time and space O(n).
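A naive illustration of these three arrays (quadratic-time construction, for exposition only; the cited algorithms achieve O(n)):

```python
def suffix_array(x):
    # Naive construction: sort suffix starting positions lexicographically.
    return sorted(range(len(x)), key=lambda i: x[i:])

def inverse_sa(sa):
    # iSA[SA[r]] = r for all ranks r.
    isa = [0] * len(sa)
    for r, i in enumerate(sa):
        isa[i] = r
    return isa

def lcp_array(x, sa):
    # LCP[r] = longest common prefix of suffixes SA[r-1] and SA[r]; LCP[0] = 0.
    def lcp(i, j):
        k = 0
        while i + k < len(x) and j + k < len(x) and x[i + k] == x[j + k]:
            k += 1
        return k
    return [0] + [lcp(sa[r - 1], sa[r]) for r in range(1, len(sa))]

x = "banana"
sa = suffix_array(x)
print(sa, inverse_sa(sa), lcp_array(x, sa))
```

For "banana" this yields SA = [5, 3, 1, 0, 4, 2] and LCP = [0, 1, 3, 0, 0, 2].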
The suffix tree T(x) of a non-empty word x of length n is a compact trie representing all suffixes of x. The nodes of the trie which become nodes of the suffix tree are called explicit nodes, while the other nodes are called implicit. Each edge of the suffix tree can be viewed as an upward maximal path of implicit nodes starting with an explicit node. Moreover, each node belongs to a unique path of that kind. Thus, each node of the trie can be represented in the suffix tree by the edge it belongs to and an index within the corresponding path. We let L(v) denote the path-label of a node v, i.e., the concatenation of the edge labels along the path from the root to v. We say that v is path-labelled L(v). Additionally, D(v) = |L(v)| is used to denote the string-depth of node v. Node v is a terminal node if its path-label is a suffix of x, that is, L(v) = x[i..n−1] for some 0 ≤ i < n; here v is also labelled with index i. It should be clear that each factor of x is uniquely represented by either an explicit or an implicit node of T(x). The suffix-link of a node v with path-label L(v) = aw is a pointer to the node path-labelled w, where a ∈ Σ is a single letter and w is a word. The suffix-link of v is defined if v is an explicit node of T(x), different from the root. In any standard implementation of the suffix tree, we assume that each node of the suffix tree is able to access its parent. Note that once T(x) is constructed, it can be traversed in a depth-first manner to compute the string-depth D(v) for each node v. Let u be the parent of v. Then the string-depth D(v) is computed by adding D(u) to the length of the label of edge (u, v). If v is the root, then D(v) = 0. It is known that the suffix tree of a word of length n, over an integer alphabet, can be computed in time and space O(n) farach1997optimal .
2.2 A measure of similarity between words based on minimal absent words
In what follows, as already proposed in MAW , for every word y, the set of minimal absent words associated with y, denoted by M_y, is represented as a set of tuples ⟨a, i, j⟩, where the corresponding minimal absent word w of y is defined by w[0] = a, a ∈ Σ, and w[1..|w|−1] = y[i..j], where |w| = j − i + 2 ≥ 2. It is known that if |y| = n and |Σ| = σ, then |M_y| ≤ σn Mignosi02 .
In Chairungsee2012109 , Chairungsee and Crochemore introduced a measure of similarity between two words x and y based on the notion of minimal absent words. Let M_x^ℓ (respectively M_y^ℓ) denote the set of minimal absent words of length at most ℓ of x (respectively y). The authors made use of a length-weighted index to provide a measure of similarity between x and y, using their sample sets M_x^ℓ and M_y^ℓ, by considering the length of each member in the symmetric difference of the sample sets. For sample sets M_x^ℓ and M_y^ℓ, they defined this index to be

LW_ℓ(x, y) = Σ_{w ∈ M_x^ℓ △ M_y^ℓ} 1/|w|².
In this paper we consider a more general measure of similarity for two words x and y. It is based on the set M_x △ M_y, and is defined by

LW(x, y) = Σ_{w ∈ M_x △ M_y} 1/|w|²,

so without any restriction on the lengths of minimal absent words. The smaller the value of LW(x, y), the more similar we assume x and y to be. Note that LW(x, y) is affected by both the cardinality of M_x △ M_y and the lengths of its elements; longer words in M_x △ M_y contribute less to the value of LW(x, y) than shorter ones. Hence, intuitively, the shorter the words in M_x △ M_y, the more dissimilar x and y are.
We provide the following example for illustration. Let x = abaab and y = aabab. We have M_x = {aaa, aaba, bab, bb} and M_y = {aaa, baa, baba, bb}. Thus, M_x △ M_y = {aaba, bab, baa, baba}, so that

LW(x, y) = 1/3² + 1/3² + 1/4² + 1/4² = 2/9 + 1/8 = 25/72.
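Given two sets of minimal absent words, the measure can be computed directly; a brute-force sketch (assuming the weight 1/|w|² of the length-weighted index; the sets below are the hand-computed minimal absent words of abaab and aabab):

```python
def lw(maw_x, maw_y):
    """Length-weighted index over the symmetric difference of two sets of
    minimal absent words (assuming each word w contributes 1/|w|^2)."""
    return sum(1.0 / len(w) ** 2 for w in maw_x ^ maw_y)

m_x = {"aaa", "aaba", "bab", "bb"}   # minimal absent words of abaab
m_y = {"aaa", "baa", "baba", "bb"}   # minimal absent words of aabab
print(lw(m_x, m_y))  # 25/72, approximately 0.3472
```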
This measure of similarity aims at quantifying the distance between two words by means of their minimal absent words. In fact, we show here that this measure is consistent with the notion of distance (metric) in the mathematical sense.
Lemma 2.
The function LW is a metric on the set of words over Σ.
Proof.
It is clear that non-negativity, LW(x, y) ≥ 0 for any x and y, and symmetry, LW(x, y) = LW(y, x) for any x and y, are satisfied. In addition, we have from Lemma 1 that x = y if and only if M_x = M_y, hence identity is also satisfied. Furthermore, given three sets A, B and C, we have by the properties of the symmetric difference that A △ C ⊆ (A △ B) ∪ (B △ C). Thus, given three words x, y and z, every w ∈ M_x △ M_z satisfies w ∈ M_x △ M_y or w ∈ M_y △ M_z, and therefore w contributes 1/|w|² to LW(x, z) and to at least one of LW(x, y) and LW(y, z). This shows that LW(x, z) ≤ LW(x, y) + LW(y, z) for any x, y and z, and so the triangle inequality is also satisfied. ∎
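The metric properties can also be sanity-checked numerically; the sketch below (brute force, with helper names of our own) verifies the triangle inequality on all triples drawn from a small sample of binary words:

```python
from itertools import product

def maw(x):
    # Brute-force minimal absent words (illustration only).
    n = len(x)
    F = {x[i:j] for i in range(n + 1) for j in range(i, n + 1)}
    sigma = sorted(set(x))
    return {a + u + b for u in F for a, b in product(sigma, sigma)
            if a + u + b not in F and a + u in F and u + b in F}

def lw(mx, my):
    # Assuming the weight 1/|w|^2 for the length-weighted index.
    return sum(1.0 / len(w) ** 2 for w in mx ^ my)

words = ["abaab", "aabab", "aabba", "ababa", "babba"]
for x, y, z in product(words, repeat=3):
    assert lw(maw(x), maw(z)) <= lw(maw(x), maw(y)) + lw(maw(y), maw(z)) + 1e-12
print("triangle inequality holds on all sampled triples")
```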
Based on this similarity measure we consider the following problem:
In Section 4, we show that this problem can be solved in O(m+n) time and space.
2.3 Extension to circular words
We also consider the aforementioned problem for two circular words. A circular word of length n can be viewed as a traditional linear word which has the leftmost and rightmost letters wrapped around. Under this notion, the same circular word can be seen as n different linear words, which would all be considered equivalent. More formally, given a word x of length n, we denote by x⟨i⟩ = x[i..n−1]x[0..i−1], 0 ≤ i < n, the i-th rotation of x, where x⟨0⟩ = x. Given two words x and y, we define x ∼ y if there exists i, 0 ≤ i < |x|, such that y = x⟨i⟩. A circular word x̃ is a conjugacy class of the equivalence relation ∼. Given a circular word x̃, any (linear) word x in the equivalence class x̃ is called a linearisation of the circular word x̃. Conversely, given a linear word x, we say that x̃ is a circularisation of x if x is a linearisation of x̃.
The factorial closures of the languages generated by two rotations u and v of the same word, i.e. the languages Fact(u*) and Fact(v*), coincide, so one can unambiguously define the (infinite) language of factors of the circular word x̃ as the language Fact(x*), where x is any linearisation of x̃. This is coherent with the fact that a circular word can be seen as a word drawn on a circle, where there is no beginning and no end.
In Section 5, we give the definition of the set of minimal absent words for a circular word x̃. We prove that the following problem can be solved within the same time and space complexity as its counterpart in the linear case.
2.4 Minimal absent words and grams
In Section 6, we present an O(n)-time and O(n)-space algorithm that, given a word x of length n, computes the largest integer q for which each q-gram of x is a q-gram of some minimal absent word of x.
3 Tight asymptotic bound on the number of minimal absent words
An important property of the minimal absent words of a word x, which is at the basis of the algorithms presented in the next sections, is that their number is linear in the size of x. Let x be a word of length n over an alphabet of size σ. In Mignosi02 it is shown that the total number of minimal absent words of x is smaller than or equal to σn. In the following lemma we show that O(σn) is a tight asymptotic bound for the number of minimal absent words of x whenever 2 ≤ σ ≤ n.
Lemma 3.
The upper bound O(σn) on the number of minimal absent words of a word of length n over an alphabet of size σ is tight if 2 ≤ σ ≤ n.
Proof.
The total number of minimal absent words of a word x of length n over an alphabet of size σ is smaller than or equal to σn Mignosi02 . Hence O(σn) is an asymptotic upper bound for the number of minimal absent words of x. In what follows we provide examples to show that this bound is tight if 2 ≤ σ ≤ n.
Let σ = 2, i.e. Σ = {a, b}, and consider the word x = a^k b a^k of length n = 2k + 1. All words of the form b a^i b, for 0 ≤ i ≤ k, are minimal absent words of x: each such word is absent, since x contains a single occurrence of b, while its proper factors b a^i and a^i b occur in x. Hence x has at least k + 1 = Ω(n) = Ω(σn) minimal absent words.
Let with and consider the word , where and . Note that . Further note that is a factor of , for all and . Similarly, is a factor of , for all and . Thus, all proper factors of all the words in the set occur in . However, the only words in that occur in are the ones of the form , for . Hence has at least minimal absent words. ∎
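For the binary case, a concrete family realising the linear lower bound can be verified by brute force; the construction a^k b a^k below is one way to realise the bound for σ = 2 and is an assumption of ours, possibly differing from the construction used in the proof:

```python
def maw(x):
    # Brute-force minimal absent words (illustration only).
    n = len(x)
    F = {x[i:j] for i in range(n + 1) for j in range(i, n + 1)}
    sigma = sorted(set(x))
    return {a + u + b for u in F for a in sigma for b in sigma
            if a + u + b not in F and a + u in F and u + b in F}

for k in range(1, 8):
    x = "a" * k + "b" + "a" * k          # n = 2k + 1
    family = {w for w in maw(x) if w.startswith("b") and w.endswith("b")}
    # The k + 1 words b a^i b, for 0 <= i <= k, are all minimal absent words.
    assert family == {"b" + "a" * i + "b" for i in range(k + 1)}
print("a^k b a^k has at least k + 1 minimal absent words for k = 1..7")
```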
4 Sequence comparison using minimal absent words
The goal of this section is to provide the first linear-time and linear-space algorithm for computing the similarity measure LW (see Section 2) between two words defined over a fixed-sized alphabet. To this end, we consider two words x and y of lengths m and n, respectively, and their associated sets of minimal absent words, M_x and M_y, respectively. Next, we give a linear-time and linear-space solution for the MAWSequenceComparison problem.
It is known from Crochemore98automataand and MAW that we can compute the sets M_x and M_y in linear time and space with respect to the two lengths m and n, respectively. The idea of our strategy consists of a merge sort on the sets M_x and M_y, after they have been ordered with the help of suffix arrays. To this end, we construct the suffix array associated to the word xy, together with the implicit LCP array corresponding to it. All of these structures can be constructed in time and space O(m+n), as mentioned earlier. Furthermore, we can preprocess the array LCP for range minimum queries, which we denote by RMQ_LCP Fischer11 . With the preprocessing complete, the longest common prefix LCE(r, s) of two suffixes of xy starting at positions r and s can be computed in constant time LCE , using the formula

LCE(r, s) = LCP[RMQ_LCP(iSA[r] + 1, iSA[s])], where iSA[r] < iSA[s].
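The LCE scheme just described can be sketched as follows; a naive linear scan over LCP stands in for the constant-time RMQ data structure of the cited works:

```python
def lce_table(x):
    """Longest common extension of any two suffixes of x via SA, iSA and LCP.
    The range minimum below is a naive scan; the constant-time scheme uses
    RMQ preprocessing of the LCP array instead."""
    n = len(x)
    sa = sorted(range(n), key=lambda i: x[i:])
    isa = [0] * n
    for r, i in enumerate(sa):
        isa[i] = r
    def lcp(i, j):
        k = 0
        while i + k < n and j + k < n and x[i + k] == x[j + k]:
            k += 1
        return k
    LCP = [0] + [lcp(sa[r - 1], sa[r]) for r in range(1, n)]
    def lce(i, j):
        if i == j:
            return n - i
        lo, hi = sorted((isa[i], isa[j]))
        return min(LCP[lo + 1:hi + 1])   # range minimum over LCP(lo+1 .. hi)
    return lce

lce = lce_table("banana")
print(lce(1, 3))  # suffixes "anana" and "ana" share "ana": 3
```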
Using these data structures, it is straightforward to sort the tuples in the sets M_x and M_y lexicographically. That is, two tuples are ordered such that the one corresponding to a prefix of the other comes first, or according to the letter following their longest common prefix, when the former is not the case. In our setting, the latter is always the case since M_x is prefix-free by the definition of minimal absent words. To do this, we simply go once through the suffix array associated with xy and assign to each tuple in M_x, respectively M_y, the rank of the suffix starting at the position indicated by its second component, in the suffix array. Since sorting an array of distinct integers, such that each is in [0, m+n−1], can be done in O(m+n) time (using for example bucket sort), we can sort each of the sets of minimal absent words, taking into consideration the letter on the first position and these ranks. Thus, from now on, we assume that M_x = ⟨x_1, x_2, …⟩, where x_i is lexicographically smaller than x_{i+1} for each i, and M_y = ⟨y_1, y_2, …⟩, where y_i is lexicographically smaller than y_{i+1} for each i.
We now proceed with the merge. Thus, considering that we are analysing the tuple ⟨a, i, j⟩ from M_x and the tuple ⟨b, k, ℓ⟩ from M_y, we note that the two are equal if and only if a = b, j − i = ℓ − k, and the longest common prefix of the corresponding suffixes of xy has length at least j − i + 1. In other words, the two minimal absent words are equal if and only if their first letters coincide, they have equal length, and the longest common prefix of the suffixes of xy starting at the positions indicated by the second components of the tuples (with positions in y shifted by m) has length at least j − i + 1.
Such a strategy will empower us with the means for constructing a new set . At each step, when analysing tuples and , we proceed as follows:
Observe that the last condition simply ensures that each common tuple is added only once to their union.
Furthermore, simultaneously with this construction we can also calculate the similarity between the words, given by LW(x, y), which is initially set to 0. Thus, at each step, when comparing the tuples ⟨a, i, j⟩ and ⟨b, k, ℓ⟩, we update LW(x, y) by adding 1/(j − i + 2)², respectively 1/(ℓ − k + 2)², for each tuple that belongs to only one of M_x and M_y.
We impose the increment of both indices in the case of equality as, in this case, we only look at the symmetric difference between the sets of minimal absent words.
As all these operations take constant time and we perform them once per each tuple in M_x and M_y, it is easily concluded that the whole operation takes, in the case of a fixed-sized alphabet, O(m+n) time and space. Thus, we can compute the symmetric difference between the complete sets of minimal absent words, as opposed to the bounded-length sample sets of Chairungsee2012109 , of two words defined over a fixed-sized alphabet, in linear time and space with respect to the lengths of the two words. We hence obtain the following result:
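The merge step can be sketched as follows; for readability the sketch compares explicit strings instead of ⟨letter, position, position⟩ tuples with constant-time LCE queries, so it is illustrative rather than linear-time:

```python
def lw_merge(mx_sorted, my_sorted):
    """Merge two lexicographically sorted lists of minimal absent words and
    accumulate the length-weighted index (assuming weight 1/|w|^2) over the
    symmetric difference, mirroring the merge step of the algorithm."""
    i = j = 0
    total = 0.0
    while i < len(mx_sorted) and j < len(my_sorted):
        u, v = mx_sorted[i], my_sorted[j]
        if u == v:              # common word: advance both, contributes 0
            i += 1
            j += 1
        elif u < v:             # u occurs only in M_x
            total += 1.0 / len(u) ** 2
            i += 1
        else:                   # v occurs only in M_y
            total += 1.0 / len(v) ** 2
            j += 1
    for w in mx_sorted[i:] + my_sorted[j:]:
        total += 1.0 / len(w) ** 2
    return total

m_x = sorted({"aaa", "aaba", "bab", "bb"})   # minimal absent words of abaab
m_y = sorted({"aaa", "baa", "baba", "bb"})   # minimal absent words of aabab
print(lw_merge(m_x, m_y))  # 25/72, approximately 0.3472
```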
Theorem 4.
Problem MAWSequenceComparison can be solved in O(m+n) time and space.
5 Circular sequence comparison using minimal absent words
In this section we extend the notion of minimal absent words to circular words. Recall from Section 2 that, given a circular word x̃, the set of factors of x̃ is defined as the (infinite) set Fact(x*), where x is any linearisation of x̃. We therefore define the set of minimal absent words of the circular word x̃ as the set of minimal absent words of the language Fact(x*), where x is any linearisation of x̃.
For instance, if x̃ is the circular word with linearisation x = aab, then we have MAW(x̃) = {aaa, bab, bb}.
Although Fact(x*) is an infinite language, the set of minimal absent words of x̃ is always finite. More precisely, we have the following structural lemma (see also FiReRi17 ).
Lemma 5.
Let x̃ be a circular word of length n and x any linearisation of x̃. Then

MAW(x̃) = {w ∈ M_{xx} : |w| ≤ n}.   (3)

That is, the minimal absent words of the circular word x̃ are precisely the minimal absent words of the (linear) word xx whose length is not greater than the length of x, where x is any linearisation of x̃.
Proof.
If , with and , is an element in , then clearly .
Conversely, let , with and , be an element in . Then , while . So, there exists a letter different from such that and a letter different from such that . Therefore, . Any word of length at least cannot be extended to the right nor to the left by different letters in as such factors would yield two rotations of with different letter multiplicities. Hence . Since and are factors of some rotation of , we have , whence . ∎
The equality (3) was first introduced as the definition of the set of minimal absent words of a circular word in 6979851 .
Recall that a word x is a power of a word y if there exists a positive integer k, k > 1, such that x is expressed as k consecutive concatenations of y, denoted by x = y^k. Conversely, a word x is primitive if x = y^k implies k = 1. Notice that a word is primitive if and only if any of its rotations is. We can therefore extend the definition of primitivity to circular words. The definition of MAW(x̃) does not allow one to uniquely reconstruct x̃ from MAW(x̃), unless x̃ is known to be primitive, since it is readily verified that Fact(x*) = Fact((xx)*), and therefore also the minimal absent words of these two languages coincide. However, from the algorithmic point of view, this issue can be easily managed by storing the length of a linearisation of x̃ together with the set MAW(x̃). Moreover, in most practical scenarios, for example when dealing with biological sequences, it is highly unlikely that the input circular word is not primitive.
Using the result of Lemma 5, we can easily extend the algorithm described in the previous section to the case of circular words. That is, given two circular words x̃ of length m and ỹ of length n, we can compute in O(m+n) time and space the distance LW(x̃, ỹ). We hence obtain the following result.
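Lemma 5 makes the circular computation immediate; a brute-force sketch (rotation-invariance is asserted, as the lemma guarantees):

```python
def maw(x):
    # Brute-force minimal absent words (illustration only).
    n = len(x)
    F = {x[i:j] for i in range(n + 1) for j in range(i, n + 1)}
    sigma = sorted(set(x))
    return {a + u + b for u in F for a in sigma for b in sigma
            if a + u + b not in F and a + u in F and u + b in F}

def maw_circular(x):
    # Lemma 5: the minimal absent words of the circular word are exactly the
    # minimal absent words of xx of length at most |x|.
    return {w for w in maw(x + x) if len(w) <= len(x)}

# Any linearisation (rotation) gives the same set.
rotations = ["aab", "aba", "baa"]
assert all(maw_circular(r) == maw_circular("aab") for r in rotations)
print(sorted(maw_circular("aab")))
```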
Theorem 6.
Problem MAWCircularSequenceComparison can be solved in O(m+n) time and space.
6 From minimal absent words to grams
In this section we consider a word x of length n over an integer alphabet. Our aim is to provide a measure of the extent to which some positive information about x, the q-gram sets of x, exists unaltered in the set M_x of minimal absent words of x, which can be seen as negative information about x. More specifically, we define the quantity of interest as the largest integer q for which each q-gram of x is a q-gram of some minimal absent word of x. Note that, for instance, the set of q-grams of x is used in molecular biology applications such as genome assembly Pevzner14082001 .
Example 7.
Consider the word x = abaab over the alphabet Σ = {a, b}. Its set of minimal absent words is M_x = {aaa, aaba, bab, bb}. The set of 2-grams of x is {ab, ba, aa} and, as can easily be seen, each of them is a factor of some word in M_x. The set of 3-grams of x is {aba, baa, aab} and we observe that baa is not a factor of any of the words in M_x. We can hence conclude that in this case the largest such q is 2.
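The quantity of Example 7 can be computed by brute force as a cross-check of the linear-time algorithm that follows (function names are ours; the word abaab, whose 3-gram baa lies in no minimal absent word, serves as the test case):

```python
def maw(x):
    # Brute-force minimal absent words (illustration only).
    n = len(x)
    F = {x[i:j] for i in range(n + 1) for j in range(i, n + 1)}
    sigma = sorted(set(x))
    return {a + u + b for u in F for a in sigma for b in sigma
            if a + u + b not in F and a + u in F and u + b in F}

def largest_q(x):
    """Largest q such that every q-gram of x is a factor of some minimal
    absent word of x. If some q-gram fails, every longer gram containing it
    fails too, so we may stop at the first failing length."""
    maw_factors = {w[i:j] for w in maw(x)
                   for i in range(len(w)) for j in range(i + 1, len(w) + 1)}
    q = 0
    for length in range(1, len(x) + 1):
        qgrams = {x[i:i + length] for i in range(len(x) - length + 1)}
        if qgrams <= maw_factors:
            q = length
        else:
            break
    return q

print(largest_q("abaab"))  # 2
```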
We present a non-trivial O(n)-time and O(n)-space algorithm to compute this largest q.
6.1 Useful properties
Let ℓ denote the length of a shortest factor of x that occurs only once in x. In addition, let ℓ_I denote the length of a shortest infix (a factor that is neither a prefix nor a suffix) of x that occurs only once in x.
Following the proof of (FICI2006214 , Proposition 10), any factor of a word x that occurs more than once in x is a factor of some minimal absent word of x, and hence q ≥ ℓ − 1, since every factor of length ℓ − 1 occurs more than once in x.
Lemma 8.
For any word x it holds that q ≤ ℓ_I + 1.
Proof.
Consider any non-empty infix v of x that occurs only once in x, and suppose it is preceded by letter a and followed by letter b; such letters exist since v is an infix of x. Then a v b cannot be a factor of any of the minimal absent words of x, as every infix of a minimal absent word of x must occur at least twice in x: once within an occurrence of the largest proper prefix of this minimal absent word in x and once within an occurrence of its largest proper suffix in x. It thus follows that q ≤ |a v b| − 1 = ℓ_I + 1. ∎
Fact 9.
We can compute ℓ and ℓ_I, and hence obtain the relevant bounds for q, in O(n) time for a word of length n.
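The two lengths used in these bounds can be computed by brute force for small inputs (the O(n)-time computation uses the suffix tree instead; the symbols ℓ and ℓ_I are as in the text):

```python
from collections import Counter

def shortest_unique_lengths(x):
    """Brute-force: the length of a shortest factor of x occurring exactly
    once, and the length of a shortest unique infix (an occurrence that is
    neither a prefix nor a suffix of x)."""
    n = len(x)
    counts = Counter(x[i:i + m] for m in range(1, n + 1)
                     for i in range(n - m + 1))
    ell = min(len(w) for w, c in counts.items() if c == 1)
    ell_I = min((m
                 for m in range(1, n)
                 for i in range(1, n - m)   # excludes prefixes and suffixes
                 if counts[x[i:i + m]] == 1),
                default=n)
    return ell, ell_I

print(shortest_unique_lengths("abaab"))  # (2, 2): e.g. "aa" occurs once, as an infix
```

For abaab both lengths are 2, so the bounds give 1 ≤ q ≤ 3, consistent with the actual value q = 2.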
Remark 10.
The relation between minimum unique substrings and maximum repeats has been investigated in DBLP:journals/fuin/IlieS11 .
Note that all 1-grams that occur in x are trivially contained in some minimal absent word of the form a^j, for some letter a and integer j ≥ 2, so in what follows we assume that the factors of x, for which we want to examine when they are factors of some minimal absent word of x, are of the form a u b, where a and b are (not necessarily distinct) letters and u a (possibly empty) word. It is clear that any such factor of x occurring only once cannot be a minimal absent word itself. In addition, following the proof of Lemma 8, it cannot be an infix of a minimal absent word. In the following lemma we provide a necessary and sufficient condition for a u b to occur as a prefix of some minimal absent word of x.
Lemma 11.
Let a u b, with a, b ∈ Σ and u a word, be a factor occurring only once in a word x. The two following statements are equivalent:

a u b is a prefix of some minimal absent word of x;

occurs at least twice in and if are the starting positions of its occurrences, with being the starting position of the occurrence of , at least one of the following holds: (i) for some , such that and ; (ii) .
Proof.
(1) ⇒ (2): Consider a word a u b occurring just once in x and appearing as a prefix of some minimal absent word. Firstly, a u b is not itself a minimal absent word, so any minimal absent word that has a u b as a prefix must be of the form a u b w c, where w is a possibly empty word and c a letter. The existence of this minimal absent word means that u b w c occurs in x and is not preceded by a (so u b w occurs at least twice in x) and that a u b w occurs in x and either:

it is followed by a letter different from c, or

a u b w is a suffix of x.
(): If (i) holds, then for any minimal such we have that is a minimal absent word.
If (i) does not hold, but (ii) holds, then is a minimal absent word.
∎
Similarly, whether a u b is a suffix of some minimal absent word of x depends, symmetrically, on the occurrences of a u in x and their left extensions.
6.2 Computing the largest q
In this section we present Algorithm MawToQgrams that, given a word x of length n over an integer alphabet, computes the largest such q in O(n) time and space. The algorithm first creates the suffix trees T(x) of x and T(rev(x)) of rev(x) and then preprocesses them in O(n) time. The preprocessing phase for each tree is a depth-first search traversal which allows us to store in each node v a boolean variable B(v), which indicates if there is any branching in the subtree rooted at v, and a variable S(v), indicating the starting position of the first occurrence of L(v) in x. The latter can be done in O(n) time since we store the starting position of the suffix corresponding to each terminal node while constructing the suffix tree. Algorithm MawToQgrams then calls Routines InfixBound, PrefixBound and SuffixBound to compute q.
Algorithm MawToQgrams(x):
1. T(x) ← SuffixTree(x); T(rev(x)) ← SuffixTree(rev(x))
2. For each node v of T(x): B(v) ← true if there is any branching below v and false otherwise; S(v) ← starting position of the first occurrence of L(v) in x
3. For each node v of T(rev(x)): B(v) ← true if there is any branching below v and false otherwise; S(v) ← starting position of the first occurrence of L(v) in rev(x)
4. q ← InfixBound(x)
5. q ← PrefixBound(x, q)
6. q ← SuffixBound(x, q)
7. Return q
As we have already seen, all the factors of that occur more than once in also occur in some minimal absent word of . Hence our aim is to identify a shortest factor of that is not a factor of any of the minimal absent words of .
We first present Routine Test that, given as input a pair (i, j), tests if the factor x[i..j] of x that occurs only once in x also occurs in some minimal absent word of x. Let x[i..j] = a u b, where a and b are letters and u is a word. The routine first checks if a u b occurs as a prefix of some minimal absent word of x by checking statement (2) of Lemma 11 as follows. It considers the node of T(x) with path-label u b = x[i+1..j]; note that in the pseudocode this node is denoted by Node(T(x))(i+1, j). If this node is explicit, then it is named v, while if it is implicit, then the destination of the edge it is on is named v. The routine then checks in O(1) time, using B(v) and S(v), whether the condition of statement (2) of Lemma 11 holds. If this is the case, then a u b is a factor of some minimal absent word and the test returns true. Otherwise, the analogous check is performed for rev(a u) = rev(x)[n−j..n−i−1] in T(rev(x)). If both checks are unsuccessful, then the routine returns false. We discuss how to efficiently obtain the desired nodes later in this section.
Routine Test(i, j):
1. v ← Node(T(x))(i+1, j) (the node representing u b = x[i+1..j])
2. If IsImplicit(v): (v_1, v_2) ← Edge(v); v ← v_2
3. If the condition of Lemma 11(2) holds for v (checked via B(v) and S(v)): return true
4. v ← Node(T(rev(x)))(n−j, n−i−1) (the node representing rev(a u))
5. If IsImplicit(v): (v_1, v_2) ← Edge(v); v ← v_2
6. If the symmetric condition holds for v: return true
7. Return false
Now note that the factors of x that occur only once in x are the labels of the leaves and of the implicit nodes on the edges between internal nodes and leaves in the suffix tree T(x). Hence, if node v is a leaf with parent u, then the prefix of L(v) of length D(u) + 1 corresponds to the shortest unique factor of x occurring at position S(v). We can thus find ℓ_I (the length of a shortest unique infix) and all the infixes of x of a given length that occur only once in x in O(n) time. We can also obtain the shortest unique prefix and the shortest unique suffix of x in O(n) time.
Routine InfixBound first computes all unique infixes of x of length ℓ_I (the length of a shortest unique infix) and tests if there is any of them that does not occur in any minimal absent word of x, in which case we have that q = ℓ_I − 1. If this is not the case, the routine computes all unique infixes of x of length ℓ_I + 1 and tests if there is any of them that does not occur in any minimal absent word of x, in which case we have that q = ℓ_I. Otherwise, we use the bound q ≤ ℓ_I + 1 shown in Lemma 8, and hence do not have to increment again.
Routine InfixBound(x):
1. I ← all (i, j) such that x[i..j] is a unique infix of length ℓ_I
2. For each (i, j) ∈ I: if Test(i, j) = false, return ℓ_I − 1
3. I′ ← all (i, j) such that x[i..j] is a unique infix of length ℓ_I + 1
4. For each (i, j) ∈ I′: if Test(i, j) = false, return ℓ_I
5. Return ℓ_I + 1
Finally, we also perform the same test for the prefixes and suffixes of x that occur only once and whose length is smaller than the bound we have at that point. This is done by Routines PrefixBound and SuffixBound. We can then conclude on the value of q.
Routine PrefixBound(x, q):
1. p ← length of the shortest unique prefix of x
2. While p ≤ q: if Test(0, p−1) = false, return p − 1; otherwise p ← p + 1
3. Return q
Routine SuffixBound(x, q):
1. s ← length of the shortest unique suffix of x
2. While s ≤ q: if Test(n−s, n−1) = false, return s − 1; otherwise s ← s + 1
3. Return q
We now discuss how to answer the queries of the form Node(T(x))(i+1, j) in Routine Test in O(n) time in total. We first discuss how to answer the queries asked within Routine InfixBound. While computing the sets of unique infixes, alongside each pair (i, j), we also store a pointer to the deepest explicit ancestor of the node with path-label x[i..j]. We can do this in O(n) time due to how we compute these sets. Following the suffix-link from this explicit node we retrieve an explicit ancestor of the node with path-label x[i+1..j]. We then only need to answer at most two child queries for each such node to obtain the node with path-label x[i+1..j]. Answering such queries online bears the cost of O(log σ) per query for integer alphabets, or that of non-determinism if we make use of perfect hashing to store the edges at every node of T(x) DBLP:journals/jacm/FredmanKS84 . We instead answer these queries offline: it is well-known that we can answer child queries offline during a depth-first traversal of the suffix tree in deterministic O(n) time by first sorting the queries at each node of T(x). We first answer one child query per pair in a batch and then the potential second ones in another batch. The total time required for this is O(n). Routine PrefixBound only considers nodes with path-labels that are prefixes of x, which can be found by following the edges upwards from the node with path-label x. Routine SuffixBound only considers terminal nodes, to which we can afford to store pointers while creating T(x). We answer the respective queries for T(rev(x)) in a similar fashion. Finally, having the pointers to the required nodes, we perform all the tests offline.
Alternatively, we can obtain a deterministic O(n)-time solution by employing a data structure for a special case of Union-Find DBLP:journals/jcss/GabowT85 ; a detailed description of this technique can be found in the appendix of zstringsv1 .
Theorem 12.
Problem MAW q-grams can be solved in O(n) time and O(n) space.
Proof.
We build and preprocess the required suffix trees in O(n) time and space farach1997optimal . Based on Lemma 8, we then have to perform the test for O(n) factors, which we can find in O(n) time. The tests are performed in O(n) total time by finding the required nodes and using the preprocessed suffix trees to check statement (2) of Lemma 11. We only need O(n) extra space to store a representation of the computed factors and perform the tests. ∎
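As a sanity check of the quantity in Theorem 12, one can compute it by brute force on short words. The reference implementation below is ours and is far from linear time; it only mirrors the definitions (a minimal absent word is absent while all its proper factors occur):

```python
# Brute-force reference (ours, not the paper's linear-time algorithm):
# compute all minimal absent words of x, then the largest q such that
# every q-gram of x is a q-gram of some minimal absent word of x.

def minimal_absent_words(x, alphabet):
    factors = {x[i:j] for i in range(len(x)) for j in range(i + 1, len(x) + 1)}
    maws = {a for a in alphabet if a not in factors}  # absent letters are MAWs
    # a longer MAW has the form awb: awb absent while aw and wb both occur in x
    for w in factors | {""}:
        for a in alphabet:
            for b in alphabet:
                if a + w + b not in factors and a + w in factors and w + b in factors:
                    maws.add(a + w + b)
    return maws

def largest_q(x, alphabet):
    maws = minimal_absent_words(x, alphabet)
    best = 0
    for q in range(1, len(x) + 1):
        x_grams = {x[i:i + q] for i in range(len(x) - q + 1)}
        maw_grams = {w[i:i + q] for w in maws for i in range(len(w) - q + 1)}
        if x_grams <= maw_grams:   # every q-gram of x occurs in some MAW
            best = q
    return best
```

For example, the word abaab over {a, b} has minimal absent words bb, aaa, bab and aaba, and the largest such q is 2 (the 3-gram baa of abaab occurs in no minimal absent word).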
7 Implementation and applications
We implemented the algorithms presented in Section 4 and Section 5 as programme scMAW to perform pairwise sequence comparison for a set of sequences using minimal absent words. scMAW uses programme MAW MAW for linear-time and linear-space computation of minimal absent words using suffix arrays. scMAW was implemented in the C programming language and developed under the GNU/Linux operating system. It takes as an input argument a file in MultiFASTA format with the input sequences, and then either of the two methods, for linear or circular sequence comparison, can be applied. It then produces a file in PHYLIP format with the distance matrix as output; each cell of the matrix stores the distance between the corresponding pair of sequences (or the circular version of this distance for the circular case). The implementation is distributed under the GNU General Public License (GPL), and it is available at http://github.com/solonas13/maw, which is set up for maintaining the source code and the man-page documentation. Notice that all input datasets and the produced outputs referred to in this section are publicly maintained at the same website.
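For intuition, the pairwise comparison that fills each cell can be sketched as a length-weighted index over the symmetric difference of the two sets of minimal absent words; the exact formula and normalisation used by scMAW may differ, and the naive MAW computation below is ours (scMAW uses the linear-time MAW programme instead):

```python
# Illustrative MAW-based distance between two sequences: a length-weighted
# index over the symmetric difference of their sets of minimal absent words.
# (Exact scMAW formula may differ; the MAW computation here is naive.)

def minimal_absent_words(x, alphabet="ACGT"):
    factors = {x[i:j] for i in range(len(x)) for j in range(i + 1, len(x) + 1)}
    maws = {a for a in alphabet if a not in factors}
    for w in factors | {""}:
        for a in alphabet:
            for b in alphabet:
                if a + w + b not in factors and a + w in factors and w + b in factors:
                    maws.add(a + w + b)
    return maws

def maw_distance(x, y, alphabet="ACGT"):
    mx, my = minimal_absent_words(x, alphabet), minimal_absent_words(y, alphabet)
    # shorter absent words carry more signal, hence the heavier weight
    return sum(1.0 / len(w) ** 2 for w in mx ^ my)
```

By construction the measure is symmetric and identical sequences get distance 0, which is what the PHYLIP distance matrix requires.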
An important feature of the proposed algorithms is that they require space linear in the length of the sequences (see Theorem 4 and Theorem 6). Hence, we were also able to implement scMAW using the Open Multi-Processing (OpenMP) API for shared-memory multiprocessing programming to distribute the workload across the available processing threads without a large memory footprint.
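The workload split is straightforward because the cells of the distance matrix are independent; scMAW itself does this in C with OpenMP, so the following Python analogue is only a sketch of the pattern:

```python
# The N*(N-1)/2 pairwise computations are independent, so they can be
# distributed over a worker pool while the input sequences stay shared;
# the extra memory is one matrix plus the sequences themselves.

from concurrent.futures import ThreadPoolExecutor

def distance_matrix(seqs, dist, workers=4):
    n = len(seqs)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    matrix = [[0.0] * n for _ in range(n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: dist(seqs[p[0]], seqs[p[1]]), pairs)
        for (i, j), d in zip(pairs, results):
            matrix[i][j] = matrix[j][i] = d   # distance matrices are symmetric
    return matrix
```

Because only the upper triangle is computed and mirrored, each distance is evaluated once regardless of the number of workers.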
7.1 Applications
Recently, there have been a number of studies on the biological significance of absent words in various species nullrly ; minabpro ; Silva02042015 . In minabpro , the authors presented dendrograms from dinucleotide relative abundances in sets of minimal absent words for prokaryotic and eukaryotic genomes. The analyses support the hypothesis that minimal absent words are inherited through a common ancestor, in addition to lineage-specific inheritance, only in vertebrates. Very recently, in Silva02042015 , it was shown that there exist three minimal absent words in Ebola virus genomes which are absent from the human genome. The authors suggest that the identification of such species-specific sequences may prove useful for the development of both diagnostics and therapeutics.
In this section, we show a potential application of our results: the construction of dendrograms for DNA sequences with circular structure. Circular DNA sequences can be found in viruses, as plasmids in archaea and bacteria, and in the mitochondria and plastids of eukaryotic cells. Circular sequence comparison thus finds applications in several contexts, such as reconstructing phylogenies using viroid RNA conf/gcb/MosigHS06 or mitochondrial DNA (MtDNA) mtDNAphylo . Conventional tools to align circular sequences could yield an incorrectly high genetic distance between closely related species: when sequencing molecules, the position where a circular sequence starts can be totally arbitrary, and due to this arbitrariness a suitable rotation of one sequence would give much better results for a pairwise alignment SEA2015 ; WABI2015 . In what follows, we demonstrate how minimal absent words can pave a path to resolving this issue by applying Lemma 5 and Theorem 6. We do not claim that what follows is a solid phylogenetic analysis, but rather an investigation of potential applications of our theoretical findings.
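The rotation issue can be made concrete with a small experiment. Modelling a circular word by the factor set of its doubled form is one common device (not necessarily the paper's exact construction), and any comparison built on that set is automatically rotation-invariant:

```python
# Circular factors via doubling: the factors of length <= n that start in
# the first n positions of xx are exactly the factors of the circular word,
# so they coincide for every rotation of x. (Illustrative device only.)

def circular_factors(x):
    doubled, n = x + x, len(x)
    return {doubled[i:i + L] for L in range(1, n + 1) for i in range(n)}

def rotations(x):
    return [x[i:] + x[:i] for i in range(len(x))]
```

A linear comparison of the same words, by contrast, sees different factor sets for different rotations, which is exactly what inflates the genetic distance between closely related circular sequences.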
We performed the following experiment with synthetic data. First, we simulated a basic dataset of DNA sequences using INDELible indelible . The length of the sequence generated at the root of the tree was set to 2,500 bp; the number of taxa, the substitution rate, and the insertion and deletion rates (both expressed relative to the substitution rate) were chosen based on the genetic diversity standard measures observed for sets of MtDNA sequences from primates and mammals SEA2015 . We then generated another instance of the basic dataset, containing one arbitrary rotation of each of its sequences, and used this randomised dataset as input to scMAW with the circular distance as the distance metric. The output of scMAW was passed as input to NINJA ninja , an efficient implementation of neighbour-joining NJ
, a well-established hierarchical clustering algorithm for inferring dendrograms (trees). We thus used NINJA to infer the respective tree under the neighbour-joining criterion. We also inferred the tree obtained by following the same pipeline but considering the linear distance as the metric, as well as the tree obtained by using the basic dataset as input of this pipeline; notice that the latter represents the original tree. Finally, we computed the pairwise Robinson–Foulds (RF) distance RFdistance between the original tree and each of the two trees inferred from the randomised dataset. Let us define accuracy as the difference between 1 and the relative pairwise RF distance. We repeated this experiment by simulating different datasets and measured the corresponding accuracy. The results in Table 1 suggest that, by considering the circular distance, we can always reconstruct the original tree even if the sequences have first been arbitrarily rotated (Lemma 5). This is not the case if we consider the linear distance. Notice that an accuracy of 100% denotes a (relative) pairwise RF distance of 0.
Dataset    Accuracy (circular distance)    Accuracy (linear distance)
1          100%                            100%
2          100%                            88.88%
3          100%                            100%
4          100%                            100%
5          100%                            100%
6          100%                            100%
7          100%                            97.87%
8          100%                            97.87%
9          100%                            100%
8 Final remarks
In this paper, complementary to measures that refer to the composition of sequences in terms of their constituent patterns, we considered sequence comparison using minimal absent words: information about what does not occur in the sequences. We presented the first linear-time and linear-space algorithm to compare two sequences by considering all their minimal absent words. In the process, we presented some results of combinatorial interest, and also extended the proposed techniques to circular sequences. The power of minimal absent words is highlighted by the fact that they provide a tool for sequence comparison that is as efficient for circular as it is for linear sequences, whereas this is not the case, for instance, under the general edit distance model Maes . In addition, we presented a linear-time and linear-space algorithm that, given a word x, computes the largest integer q for which each q-gram of x is a q-gram of some minimal absent word of x. Finally, a preliminary experimental study shows the potential of our theoretical findings with regard to alignment-free sequence comparison using negative information.
Acknowledgements
We warmly thank Alice Héliou (École Polytechnique) for her inestimable code contribution and Antonio Restivo (Università di Palermo) for useful discussions. We also thank the anonymous reviewers for their constructive comments which greatly improved the presentation of the paper. Gabriele Fici’s work was supported by the PRIN 2010/2011 project “Automi e Linguaggi Formali: Aspetti Matematici e Applicativi” of the Italian Ministry of Education (MIUR) and by the “National Group for Algebraic and Geometric Structures, and their Applications” (GNSAGA – INdAM). Robert Mercaş’s work was supported by a Newton Fellowship of the Royal Society. Solon P. Pissis’s work was supported by a Research Grant (#RG130720) awarded by the Royal Society.
References
 (1) M. Crochemore, G. Fici, R. Mercaş, S. P. Pissis, Linear-time sequence comparison using minimal absent words & applications, in: LATIN, Vol. 9644 of LNCS, Springer Berlin Heidelberg, 2016, pp. 334–346.
 (2) S. Vinga, J. Almeida, Alignment-free sequence comparison—a review, Bioinformatics 19 (2003) 513–523.
 (3) M. Domazet-Lošo, B. Haubold, Efficient estimation of pairwise distances between genomes, Bioinformatics 25 (24) (2009) 3221–3227.
 (4) R. Grossi, C. S. Iliopoulos, R. Mercaş, N. Pisanti, S. P. Pissis, A. Retha, F. Vayani, Circular sequence comparison: algorithms and applications, Algorithms for Molecular Biology 11 (2016) 12.
 (5) C. Acquisti, G. Poste, D. Curtiss, S. Kumar, Nullomers: Really a matter of natural selection?, PLoS ONE 2 (10).
 (6) M. Béal, F. Mignosi, A. Restivo, M. Sciortino, Forbidden words in symbolic dynamics, Advances in Applied Mathematics 25 (2) (2000) 163–193.
 (7) M. Crochemore, F. Mignosi, A. Restivo, Automata and forbidden words, Information Processing Letters 67 (1998) 111–117.
 (8) F. Mignosi, A. Restivo, M. Sciortino, Words and forbidden factors, Theoretical Computer Science 273 (1–2) (2002) 99–117.
 (9) A. J. Pinho, P. J. S. G. Ferreira, S. P. Garcia, On finding minimal absent words, BMC Bioinformatics 11.
 (10) H. Fukae, T. Ota, H. Morita, On fast and memory-efficient construction of an antidictionary array, in: ISIT, IEEE, 2012, pp. 1092–1096.
 (11) C. Barton, A. Heliou, L. Mouchard, S. P. Pissis, Linear-time computation of minimal absent words using suffix array, BMC Bioinformatics 15 (2014) 388.
 (12) C. Barton, A. Heliou, L. Mouchard, S. P. Pissis, Parallelising the computation of minimal absent words, in: PPAM, Vol. 9574 of LNCS, 2015, pp. 243–253.
 (13) D. Belazzougui, F. Cunial, J. Kärkkäinen, V. Mäkinen, Versatile succinct representations of the bidirectional Burrows–Wheeler transform, in: ESA, Vol. 8125 of LNCS, 2013, pp. 133–144.
 (14) A. Heliou, S. P. Pissis, S. J. Puglisi, emMAW: computing minimal absent words in external memory, Bioinformatics 33 (17) (2017) 2746–2749.
 (15) S. Chairungsee, M. Crochemore, Using minimal absent words to build phylogeny, Theoretical Computer Science 450 (2012) 109–116.
 (16) M. Crochemore, A. Heliou, G. Kucherov, L. Mouchard, S. P. Pissis, Y. Ramusat, Minimal absent words in a sliding window and applications to online pattern matching, in: FCT, Vol. 10472 of LNCS, Springer Berlin Heidelberg, 2017, pp. 164–176.
 (17) G. Fici, Minimal forbidden words and applications, Ph.D. thesis, Université de MarnelaVallée (2006).
 (18) M. Crochemore, C. Hancart, T. Lecroq, Algorithms on Strings, Cambridge University Press, New York, NY, USA, 2007.
 (19) U. Manber, E. W. Myers, Suffix arrays: A new method for on-line string searches, SIAM Journal on Computing 22 (5) (1993) 935–948.
 (20) G. Nong, S. Zhang, W. H. Chan, Linear suffix array construction by almost pure induced-sorting, in: DCC, IEEE, 2009, pp. 193–202.
 (21) J. Fischer, Inducing the LCParray, in: WADS, Vol. 6844 of LNCS, 2011, pp. 374–385.
 (22) M. Farach, Optimal suffix tree construction with large alphabets, in: FOCS, 1997, pp. 137–143.
 (23) J. Fischer, V. Heun, Space-efficient preprocessing schemes for range minimum queries on static arrays, SIAM Journal on Computing 40 (2) (2011) 465–492.
 (24) L. Ilie, G. Navarro, L. Tinta, The longest common extension problem revisited and applications to approximate string searching, Journal of Discrete Algorithms 8 (4) (2010) 418–428.
 (25) G. Fici, A. Restivo, L. Rizzo, Minimal forbidden factors of circular words, in: WORDS, Vol. 10432 of LNCS, 2017, pp. 36–48.
 (26) T. Ota, H. Morita, On a universal antidictionary coding for stationary ergodic sources with finite alphabet, in: ISITA, IEEE, 2014, pp. 294–298.
 (27) P. A. Pevzner, H. Tang, M. S. Waterman, An Eulerian path approach to DNA fragment assembly, Proceedings of the National Academy of Sciences 98 (17) (2001) 9748–9753.
 (28) G. Fici, F. Mignosi, A. Restivo, M. Sciortino, Word assembly through minimal forbidden words, Theoretical Computer Science 359 (1) (2006) 214–230.

 (29) L. Ilie, W. F. Smyth, Minimum unique substrings and maximum repeats, Fundam. Inform. 110 (1–4) (2011) 183–195.
 (30) M. L. Fredman, J. Komlós, E. Szemerédi, Storing a sparse table with O(1) worst case access time, J. ACM 31 (3) (1984) 538–544. doi:10.1145/828.1884.
 (31) H. N. Gabow, R. E. Tarjan, A linear-time algorithm for a special case of disjoint set union, J. Comput. Syst. Sci. 30 (2) (1985) 209–221. doi:10.1016/0022-0000(85)90014-5.
 (32) C. Barton, T. Kociumaka, C. Liu, S. P. Pissis, J. Radoszewski, Indexing weighted sequences: Neat and efficient, CoRR abs/1704.07625v1.
 (33) S. P. Garcia, O. J. Pinho, J. M. O. S. Rodrigues, C. A. C. Bastos, P. J. S. G. Ferreira, Minimal absent words in prokaryotic and eukaryotic genomes, PLoS ONE 6.
 (34) R. M. Silva, D. Pratas, L. Castro, A. J. Pinho, P. J. S. G. Ferreira, Three minimal sequences found in Ebola virus genomes and absent from human DNA, Bioinformatics 31 (15) (2015) 2421–2425.
 (35) A. Mosig, I. L. Hofacker, P. F. Stadler, Comparative analysis of cyclic sequences: Viroids and other small circular RNAs, in: GCB, Vol. 83 of LNI, 2006, pp. 93–102.
 (36) A. Goios, L. Pereira, M. Bogue, V. Macaulay, A. Amorim, mtDNA phylogeny and evolution of laboratory mouse strains, Genome Research 17 (3) (2007) 293–298.
 (37) C. Barton, C. S. Iliopoulos, R. Kundu, S. P. Pissis, A. Retha, F. Vayani, Accurate and efficient methods to improve multiple circular sequence alignment, in: SEA, Vol. 9125 of LNCS, 2015, pp. 247–258.
 (38) W. Fletcher, Z. Yang, INDELible: A flexible simulator of biological sequence evolution, Molecular Biology and Evolution 26 (8) (2009) 1879–1888.
 (39) T. J. Wheeler, Large-scale neighbor-joining with NINJA, in: WABI, Vol. 5724 of LNCS, 2009, pp. 375–389.
 (40) N. Saitou, M. Nei, The neighborjoining method: a new method for reconstructing phylogenetic trees., Molecular Biology and Evolution 4 (4) (1987) 406–425.
 (41) D. F. Robinson, L. R. Foulds, Comparison of phylogenetic trees, Mathematical Biosciences 53 (1–2) (1981) 131–147.
 (42) M. Maes, On a cyclic stringtostring correction problem, Information Processing Letters 35 (2) (1990) 73–78.