In-Place Bijective Burrows-Wheeler Transforms

04/27/2020
by   Dominik Köppl, et al.
Kyushu University
Tohoku University

One of the most well-known variants of the Burrows-Wheeler transform (BWT) [Burrows and Wheeler, 1994] is the bijective BWT (BBWT) [Gil and Scott, arXiv 2012], which applies the extended BWT (EBWT) [Mantaci et al., TCS 2007] to the multiset of Lyndon factors of a given text. Since the EBWT is invertible, the BBWT is a bijective transform in the sense that the inverse image of the EBWT restores this multiset of Lyndon factors, such that the original text can be obtained by sorting these factors in non-increasing order. In this paper, we present algorithms constructing or inverting the BBWT in-place using quadratic time. We also present conversions from the BBWT to the BWT, or vice versa, either (a) in-place using quadratic time, or (b) in the run-length compressed setting using O(n lg r / lg lg r) time with O(r lg n) bits of space, where r is the sum of the character runs in the BWT and the BBWT.


1 Introduction

The Burrows-Wheeler transform (BWT) [6] is one of the most favored options both for (a) compressing and (b) indexing data sets. On the one hand, compression programs like bzip2 apply the BWT to achieve high compression rates. For that, they leverage the effect that the BWT built on repetitive data tends to have long character runs, which can be compressed by run-length compression, i.e., representing a substring of ℓ a's by the tuple (a, ℓ). On the other hand, self-indexing data structures like the FM-index [11] enhance the BWT to a full-text self-index. A combined approach of both compression and indexing is the run-length compressed FM-index [21], which represents a BWT with r character runs, i.e., maximal repetitions of a character, run-length compressed in O(r lg n) bits. This representation can be computed directly in run-length compressed space thanks to Policriti and Prezza [30]. The BWT and its run-length compressed representation have been intensively studied during the past decades (e.g., [12, 1, 14] and the references therein). Contrary to that, a variant, called the bijective BWT (BBWT) [16], is far from being well-studied despite its mathematically appealing characteristics. (The BBWT is a bijection between strings without the need of an artificial delimiter, which is needed, e.g., to invert the BWT.) As a matter of fact, we are only aware of one index data structure based on the BBWT [3] and of two non-trivial construction algorithms [5, 2] for the (uncompressed) BBWT, both needing additional data structures.

In this article, we shed more light on the connection between the BWT and the BBWT by presenting quadratic-time in-place conversion algorithms in Sect. 5 that construct the BWT from the BBWT, or vice versa. We can also perform these conversions in the run-length compressed setting in O(n lg r / lg lg r) time with space linear in the number of character runs (cf. Sects. 4.2 and 4.3), where r is the sum of the character runs in the BWT and the BBWT.

2 Related Work

Given a text T of length n, the BWT of T is the string obtained by assigning to its i-th entry the character preceding the i-th lexicographically smallest suffix of T (or the last character of T if this suffix is the text itself). By this definition, we can construct the BWT with any suffix array [22] construction algorithm. However, storing the suffix array inherently needs n lg n bits of space. Crochemore et al. [9] tackled this space problem with an in-place algorithm constructing the BWT in O(n^2) time, online on the reversed text, by simulating queries on a dynamic wavelet tree [17] that would be built on the (growing) BWT. They also gave an algorithm for restoring the text from the BWT in-place.

In the run-length compressed setting, Policriti and Prezza [30] can compute the run-length compressed BWT having r character runs in O(n lg r) time while using O(r lg n) bits of space. They additionally presented an adaptation of the wavelet tree to run-length compressed texts, yielding a representation using O(r lg n) bits of space with O(lg r) query and update time. Finally, practical improvements of the run-length compressed BWT construction were considered by Ohno et al. [29].

The BBWT is the string obtained by assigning to its i-th entry the last character of the i-th smallest string in the list of all conjugates of the factors of the Lyndon factorization, sorted with respect to the ω-order [23, Def. 4]. Bannai et al. [2] recently revealed a connection between the bijective BWT and suffix sorting by presenting a linear-time BBWT construction algorithm based on SAIS [28]. With dynamic data structures like a dynamic wavelet tree [27], Bonomo et al. [5] could devise an algorithm computing the BBWT in O(n lg n / lg lg n) time. With nearly the same techniques, Mantaci et al. [24] presented an algorithm computing the BWT (and simultaneously the suffix array if needed) from the Lyndon factorization. All these construction algorithms need, however, additional data structures whose space grows linearly with the text length. Nevertheless, the latter two (i.e., [5] and [24]) can work in-place by simulating the LF mapping (cf. Sects. 3.4 and 3.5), which we focus on in Sect. 5.1.

3 Preliminaries

Our computational model is the word RAM model with a word size of Ω(lg n) bits. Accessing a word costs O(1) time. An algorithm is called in-place if it uses, besides a rewriteable input, only O(lg n) bits of working space. We write [b..e] for the interval {b, b+1, ..., e} of natural numbers.

3.1 Strings

Let Σ denote an integer alphabet of size σ. We call an element T of Σ* a string. Its length is denoted by |T|. Given an integer j in [1..|T|], we access the j-th character of T with T[j]. Concatenating a string S with itself k times is abbreviated by S^k. A string T is called primitive if there is no string S with T = S^k for an integer k ≥ 2.

When T is represented by the concatenation of X, Y, Z in Σ*, i.e., T = XYZ, then X, Y, and Z are called a prefix, substring, and suffix of T, respectively; the prefix X, substring Y, or suffix Z is called proper if X ≠ T, Y ≠ T, or Z ≠ T, respectively. For two integers i and j with 1 ≤ i ≤ j ≤ |T|, let T[i..j] denote the substring of T that begins at position i and ends at position j in T. If i > j, then T[i..j] is the empty string. In particular, the suffix starting at position j of T is called the j-th suffix of T and is denoted by T[j..]. An occurrence of a substring S in T is treated as a sub-interval [i..j] of [1..|T|] such that S = T[i..j]. The longest common prefix (LCP) of two strings S and T is the longest string that is a prefix of both S and T.

Orders on Strings.

We denote the lexicographic order by <. Given two strings S and T, S < T holds if S is a proper prefix of T or there exists an integer ℓ with 1 ≤ ℓ ≤ min(|S|, |T|) such that S[1..ℓ−1] = T[1..ℓ−1] and S[ℓ] < T[ℓ]. Next, we define the ω-order of strings, which is based on the lexicographic order of infinite strings: we write S ≺ω T if the infinite concatenation S^ω := SSS⋯ is lexicographically smaller than T^ω := TTT⋯. For instance, ab < aba, but aba ≺ω ab.
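To make the ω-order concrete, the following small Python sketch (ours, not from the paper) compares two non-empty strings by their infinite powers; comparing prefixes of length |S| + |T| of the two powers suffices, which follows from the periodicity lemma of Fine and Wilf.

```python
def omega_less(s: str, t: str) -> bool:
    """Return True iff s^omega is lexicographically smaller than t^omega.
    Comparing prefixes of length |s|+|t| of both infinite powers suffices."""
    assert s and t
    k = len(s) + len(t)
    s_inf = (s * (k // len(s) + 1))[:k]
    t_inf = (t * (k // len(t) + 1))[:k]
    return s_inf < t_inf

# "ab" is lexicographically smaller than "aba", but
# (aba)^omega = abaabaaba... precedes (ab)^omega = ababab...
assert "ab" < "aba"
assert omega_less("aba", "ab")
```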

Rank and Select Queries.

Given a string T, a character c, and an integer j, the rank query T.rank_c(j) counts the occurrences of c in T[1..j], and the select query T.select_c(j) gives the position of the j-th c in T. We stipulate that T.rank_c(0) = T.select_c(0) = 0. A wavelet tree is a data structure supporting rank and select queries.
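For reference, here is a naive linear-scan sketch (ours) of the two queries; a wavelet tree answers the same queries in logarithmic time within compact space.

```python
def rank(T: str, c: str, j: int) -> int:
    """Number of occurrences of c in T[1..j] (1-based); rank_c(0) = 0."""
    return T[:j].count(c)

def select(T: str, c: str, j: int) -> int:
    """Position (1-based) of the j-th occurrence of c in T; 0 if j == 0."""
    if j == 0:
        return 0
    count = 0
    for i, ch in enumerate(T, start=1):
        if ch == c:
            count += 1
            if count == j:
                return i
    raise ValueError("fewer than j occurrences of c")

assert rank("abracadabra", "a", 8) == 4
assert select("abracadabra", "a", 4) == 8
```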

3.2 Lyndon Words

Given a string T, its j-th conjugate is defined as T[j+1..|T|] T[1..j] for an integer j in [0..|T|−1]. We say that T and all of its conjugates belong to the same conjugate class. If a conjugate class contains exactly one conjugate that is lexicographically smaller than all other conjugates, then this conjugate is called a Lyndon word [20]. Equivalently, a string T is said to be a Lyndon word if and only if T < S for every proper suffix S of T [10, Prop. 1.2].

The Lyndon factorization [8] of T is the factorization of T into a sequence of lexicographically non-increasing Lyndon words T_1 ⋯ T_t, where (a) each T_x is a Lyndon word for x in [1..t], and (b) T_x ≥ T_{x+1} for each x in [1..t−1]. Each such Lyndon word T_x is called a Lyndon factor.

[[10, Algo. 2.1]] Given a string T of length n, there is an algorithm that outputs the Lyndon factors T_1, ..., T_t of T one by one in increasing order in O(n) total time while keeping only a constant number of pointers to positions in T that (a) can move one position forward at a time or (b) can be set to the position of another pointer.

Proof.

The algorithm of Duval uses three variables pointing to text positions (cf. the pseudocode in the appendix). The first marks the ending position of the previously computed Lyndon factor (or zero at the beginning). In each step, the second pointer is incremented by one, while the third is either incremented by one or reset to the position right after the first pointer, as long as the scanned substring remains a prefix of a power of a Lyndon word starting right after the previously computed factor. Once this is no longer the case, the scanned part is either a single Lyndon factor or a repetition of Lyndon factors of equal length, which are reported. In total, the pointers advance O(n) positions, so we visit O(n) characters. ∎

For what follows, we fix a string T of length n over an alphabet Σ of size σ. We use the string shown in Fig. 1 as our running example; its Lyndon factorization consists of four Lyndon factors.
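The following self-contained Python sketch of Duval's algorithm illustrates the statement above; variable names and the example are ours, not those of the pseudocode in the appendix.

```python
def lyndon_factors(T: str):
    """Duval's algorithm: yield the Lyndon factors of T from left to right
    in overall linear time, using only a few position pointers."""
    k = 0                      # end of the previously reported factor
    n = len(T)
    while k < n:
        i, j = k, k + 1        # T[k:j] stays a prefix of a power of a Lyndon word
        while j < n and T[i] <= T[j]:
            i = k if T[i] < T[j] else i + 1
            j += 1
        while k <= i:          # report the factor (possibly repeated)
            yield T[k:k + j - i]
            k += j - i

# Example: the Lyndon factorization of "banana" is b >= an >= an >= a.
assert list(lyndon_factors("banana")) == ["b", "an", "an", "a"]
```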

Figure 1: All three BWT variants studied in this paper applied to our running example T. Left: the BBWT, built on the last characters of the conjugates of all Lyndon factors sorted in the ω-order. Middle and right: BWT_$ and BWT, built on the lexicographically sorted conjugates of T$ and of T, respectively. To ease understanding, each character is marked with its position in T in subscript. Reading these positions in the BBWT and in the BWT gives a circular suffix array (of which there are multiple possibilities), and reading them in BWT_$ gives the suffix array (where the position of $ is uniquely defined).

3.3 Burrows-Wheeler Transforms

We denote the bijective BWT of T by BBWT, where BBWT[i] is the last character of the i-th string in the list storing the conjugates of all Lyndon factors of T, sorted with respect to the ω-order. A property of the BBWT used in this paper as a starting point for an inversion algorithm is the following:

[[5, Lemma 15]] BBWT[1] = T[n], i.e., the first entry of the BBWT is the last character of T.

Proof.

There is no conjugate of a Lyndon factor that is ω-smaller than the ω-smallest Lyndon factor T_t, since T_t is ω-smaller than or equal to every Lyndon factor, and every Lyndon factor is ω-smaller than or equal to each of its conjugates. Therefore, T_t is the ω-smallest string among all conjugates of all Lyndon factors. Hence, BBWT[1] is the last character of T_t, which is T[n]. ∎

The BWT of T$, called BWT_$ in the following, is the BBWT of $T for a delimiter $ smaller than all other characters in T (cf. [15, Lemma 12]); this holds since $T is a Lyndon word. Originally, the BWT is defined by reading the last characters of all cyclic rotations of T (without $) sorted lexicographically [6]. Here, we call the resulting string BWT. BWT is equivalent to BWT_$ if T contains the aforementioned unique delimiter $. We further write BWT(S) (and analogously BWT_$(S) or BBWT(S)) to denote the respective transform of a string S.

Since BWT_$ (and analogously BWT or BBWT) is a permutation of the characters of T$ (resp. T), it is natural to identify each entry with a text position: by construction, BWT_$[i] = T$[SA[i] − 1], where SA[i] is the starting position of the i-th lexicographically smallest suffix of T$ (with the convention that the entry for SA[i] = 1 stores the last character $). A similar relation is given between the BBWT and the circular suffix array [19, 2], which is uniquely defined up to positions of equal Lyndon factors. Figure 1 gives an example for all three variants. In what follows, we review means to simulate a linear traversal of the text in forward or backward manner on BWT_$, and then translate this result to the BBWT.
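To make the three variants of Fig. 1 concrete, the following naive sketch (ours; quadratic or worse, for illustration only) computes them by explicit sorting of conjugates and rotations.

```python
from functools import cmp_to_key

def lyndon_factors(T):
    k, n, out = 0, len(T), []
    while k < n:                              # Duval's algorithm
        i, j = k, k + 1
        while j < n and T[i] <= T[j]:
            i, j = (k if T[i] < T[j] else i + 1), j + 1
        while k <= i:
            out.append(T[k:k + j - i]); k += j - i
    return out

def omega_cmp(s, t):
    k = len(s) + len(t)                       # prefix length that suffices
    a = (s * (k // len(s) + 1))[:k]
    b = (t * (k // len(t) + 1))[:k]
    return (a > b) - (a < b)

def bbwt(T):
    # all conjugates of all Lyndon factors, sorted in the omega-order
    conj = [f[i:] + f[:i] for f in lyndon_factors(T) for i in range(len(f))]
    conj.sort(key=cmp_to_key(omega_cmp))
    return "".join(c[-1] for c in conj)

def bwt_circ(T):
    # last characters of the lexicographically sorted cyclic rotations of T
    return "".join(c[-1] for c in sorted(T[i:] + T[:i] for i in range(len(T))))

def bwt_dollar(T):
    return bwt_circ(T + "$")                  # $ assumed smaller than all characters

T = "banana"
assert bbwt(T) == "annbaa"
# BWT_$ of T coincides with the BBWT of $T, since $T is a Lyndon word.
assert bwt_dollar(T) == bbwt("$" + T) == "annb$aa"
```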

3.4 Backward and Forward Steps

Having the location of a text character T[j] in BWT_$, we can compute T[j+1] (with a forward step) and T[j−1] (with a backward step) by rank and select queries. To move from T[j] to T[j+1], which we call a forward step, we can use the ψ mapping:

ψ(i) = select_c(BWT_$, i − C[c]),    (1)

where c is the i-th lexicographically smallest character in BWT_$ and C is the array defined below. To move from T[j] to T[j−1], we can use the backward step of the FM-index [11], which is also called the LF mapping and is defined as follows:

LF(i) = C[BWT_$[i]] + rank_{BWT_$[i]}(BWT_$, i) = |{ j : BWT_$[j] < BWT_$[i] }| + |{ j ≤ i : BWT_$[j] = BWT_$[i] }|,    (2)

where C[c] is the number of occurrences of those characters in BWT_$ that are smaller than c (for each character c). We observe from the second equality of (2) that there is no need for C when having BWT_$ available. This is important, as both counts can be obtained by a single scan over BWT_$ in O(n) time. Hence, we can compute LF(i) in O(n) time in-place. However, the same trick does not work with ψ. To look up the i-th lexicographically smallest character c, we can use the selection algorithm of Chan et al. [7] running in O(n) time and using O(lg n) bits as working space (the algorithm restores BWT_$ after execution), such that an entry of ψ can also be computed in O(n) time.

In summary, we can compute both LF(i) and ψ(i) in-place in O(n) time. The algorithm of Crochemore et al. [9, Thm. 2] inverting BWT_$ in-place uses the result of Munro and Raman [26] computing the i-th smallest character in O(n^{1+ε}) time for a constant ε > 0 in the comparison model. As noted by Chan et al. [7, Sect. 1], the time bound for the inversion can be improved to quadratic time in the RAM model under the assumption that BWT_$ is rewritable.
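The following sketch (ours) spells out LF and ψ as in (2) and (1) for a plain BWT string, using the C array together with naive rank and select scans; replacing these scans by the in-place techniques discussed above yields the O(n)-time-per-step computation.

```python
def c_array(L: str) -> dict:
    """C[c] = number of characters in L that are strictly smaller than c."""
    return {c: sum(L.count(d) for d in set(L) if d < c) for c in set(L)}

def lf(L: str, i: int, C=None) -> int:
    """Backward step (2): LF(i) = C[L[i]] + rank_{L[i]}(L, i), 1-based."""
    C = C if C is not None else c_array(L)
    c = L[i - 1]
    return C[c] + L[:i].count(c)

def psi(L: str, i: int, C=None) -> int:
    """Forward step (1): with c the i-th smallest character of L (counted
    with multiplicity), psi(i) = select_c(L, i - C[c])."""
    C = C if C is not None else c_array(L)
    c = sorted(L)[i - 1]
    j = i - C[c]                      # rank of the wanted occurrence of c
    seen = 0
    for k, ch in enumerate(L, start=1):
        if ch == c:
            seen += 1
            if seen == j:
                return k
    raise ValueError("invalid BWT position")

# psi and LF are inverse permutations of each other.
L = "annb$aa"                          # BWT of "banana$"
assert all(lf(L, psi(L, i)) == i for i in range(1, len(L) + 1))
```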

If we allow more space, it is still advantageous to store C rather than recomputing it, since C and BWT_$ in their plain forms take σ lg n bits and n lg σ bits, respectively. To compute ψ, we can also determine the needed character without C by endowing the representation of BWT_$ with a predecessor data structure (which we do in Sect. 4.3).

Finally, we also need LF and ψ on the BBWT for our conversion algorithms. We can define LF and ψ for the BBWT in the same way, with the following peculiarity:

3.5 Steps in the Bijective BWT

The major difference to the BWT is that the LF mapping of the BBWT can consist of multiple cycles, meaning that LF (or ψ) recursively applied to a position results in a circular search (more precisely, the search stays within the same Lyndon factor). This is because the BBWT is the extended BWT [23, Thm. 20 and Remark 12] applied to the multiset of Lyndon factors {T_1, ..., T_t}. This fact was exploited for circular pattern matching [19], but is not of interest here.

Instead, we follow the analysis of the so-called rewindings [3, Sect. 3]: remembering that we store the last character of all conjugates of all Lyndon factors in the BBWT, we observe that the entries representing the Lyndon factors themselves (i.e., storing the last characters of the Lyndon factors) appear in the ω-order of the factors, starting with the entry BBWT[1] for the ω-smallest factor T_t. That is because the lexicographic order and the ω-order are the same for Lyndon words [5, Thm. 8]. Applying the backward step at the entry storing the first character of a Lyndon factor T_x results in a rewinding, i.e., we can move from the beginning of a Lyndon factor T_x to the end of T_x (represented by the entry storing T_x's last character) with one backward step. We use this property together with Sect. 3.3 in the following sections to read the Lyndon factors from the BBWT individually in the order T_t, T_{t−1}, ..., T_1.

4 Run-Length Compressed Conversions

We now consider BWT_$ and the BBWT represented as run-length compressed strings, taking O(r lg n) bits of space in total, where r is the sum of the numbers of character runs in BWT_$ and in the BBWT. The goal of this section is the following:

We can convert BWT_$ to the BBWT in O(n lg r / lg lg r) time using O(r lg n) bits as working space, or vice versa.

To prove this theorem, we need a data structure that works in run-length compressed space while supporting rank and select queries as well as updates more efficiently than the O(n)-time-per-step in-place approach described in Sects. 3.4 and 3.5:

4.1 Run-length Compressed Wavelet Trees

Given a run-length compressed string of uncompressed length n with r character runs, there is an O(r lg n)-bit representation of it that supports access, rank, select, insertions, and deletions in O(lg r) time [30, Lemma 1]. It consists of (1) a dynamic wavelet tree maintaining the starting characters of the character runs and (2) a dynamic Fenwick tree maintaining the lengths of the runs. It can be accelerated to O(lg r / lg lg r) time by using the following representations:

  1. The dynamic wavelet tree of Navarro and Nekrich [27], built on a text of length r, uses O(r lg σ) bits, and supports both updates and queries in O(lg r / lg lg r) time.

  2. The dynamic Fenwick tree of Bille et al. [4, Thm. 2] on r numbers of O(lg n) bits each uses O(r lg n) bits, and supports both updates and queries in constant time if updates are restricted to be in-/decremental.
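As a static illustration of this two-component representation (a sketch under our own simplifications: no updates, a plain prefix-sum list standing in for the Fenwick tree, and a scan of the run heads standing in for the wavelet tree), access and rank on the run-length compressed string reduce to a predecessor search on the run boundaries.

```python
import bisect

class RLEString:
    """Run-length compressed string supporting access and rank.
    heads[k] is the character of the k-th run; ends[k] is the (1-based)
    position at which that run ends in the uncompressed string."""
    def __init__(self, T: str):
        self.heads, lengths = [], []
        for c in T:
            if self.heads and self.heads[-1] == c:
                lengths[-1] += 1
            else:
                self.heads.append(c); lengths.append(1)
        self.ends, total = [], 0
        for length in lengths:
            total += length; self.ends.append(total)

    def access(self, i: int) -> str:            # T[i], 1-based
        return self.heads[bisect.bisect_left(self.ends, i)]

    def rank(self, c: str, i: int) -> int:      # occurrences of c in T[1..i]
        k = bisect.bisect_left(self.ends, i)    # run containing position i
        res = 0
        for run in range(k):                    # scan over the preceding runs;
            if self.heads[run] == c:            # a wavelet tree over the heads
                res += self.ends[run] - (self.ends[run - 1] if run else 0)
        if self.heads[k] == c:                  # would avoid this linear scan
            res += i - (self.ends[k - 1] if k else 0)
        return res

R = RLEString("aaabbbbcaa")
assert R.access(5) == "b" and R.rank("a", 9) == 4
```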

The obtained time complexity of this data structure directly improves the construction of the run-length compressed BWT: [[30, Thm. 2]] We can construct the run-length compressed BWT_$ within O(r lg n) bits of space online on the reversed text in O(n lg r / lg lg r) time.

In the run-length compressed wavelet tree representation, BWT_$ and the BBWT support an update operation and a backward step in O(lg r / lg lg r) time. This helps us to devise the following two conversions:

4.2 From BBWT to BWT_$

We aim at directly outputting the characters of T in reversed order since we can then use the algorithm of Sect. 4.1 building BWT_$ online on the reversed text. We start with the first entry of the BBWT (corresponding to the last Lyndon factor T_t, i.e., storing T[n] according to Sect. 3.3) and perform backward steps until we come back to this first entry (i.e., we have visited all characters of T_t). During that traversal, we copy the read characters to BWT_$ and mark, in an array whose length is the number of character runs of the BBWT, at entry k how often we visited the k-th character run of the BBWT. Finally, we remove the read cycle from the BBWT by decreasing the run lengths by the numbers stored in this array. By doing so, we remove the last Lyndon factor T_t from the BBWT and consequently know that the now first entry of the BBWT must correspond to T_{t−1}. This means that we can apply the algorithm recursively on the remaining BBWT to extract and delete the Lyndon factors in reversed order while building BWT_$ in the meantime. Removing T_t keeps the string a valid BBWT, since it becomes the BBWT of T_1 ⋯ T_{t−1}, whose Lyndon factors are the same as those of T (but without T_t). Note that it is also possible to build BWT_$ in forward order, i.e., processing the Lyndon factors from left to right, by applying the algorithm of Mantaci et al. [24, Fig. 1] while omitting the suffix array construction.
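A plain, uncompressed sketch (ours) of the inversion underlying this conversion: the LF mapping of the BBWT decomposes into cycles, one per Lyndon factor; reading a cycle yields the factor reversed (up to rotation), and concatenating the factors in non-increasing order restores the text. The algorithm above peels these cycles off the run-length compressed representation one by one instead.

```python
def invert_bbwt(B: str) -> str:
    """Recover T from its bijective BWT by decomposing LF into cycles."""
    n = len(B)
    C = {c: sum(1 for d in B if d < c) for c in set(B)}
    occ, LF = {}, []
    for i in range(n):                      # LF(i) = C[B[i]] + rank_{B[i]}(B, i)
        occ[B[i]] = occ.get(B[i], 0) + 1
        LF.append(C[B[i]] + occ[B[i]] - 1)  # 0-based
    seen, factors = [False] * n, []
    for start in range(n):
        if seen[start]:
            continue
        cyc, i = [], start                  # walk one cycle of LF
        while not seen[i]:
            seen[i] = True
            cyc.append(B[i])
            i = LF[i]
        rot = "".join(reversed(cyc))        # some rotation of the factor
        factors.append(min(rot[k:] + rot[:k] for k in range(len(rot))))
    factors.sort(reverse=True)              # Lyndon factors are non-increasing
    return "".join(factors)

assert invert_bbwt("annb$aa") == "$banana"
assert invert_bbwt("annbaa") == "banana"
```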

4.3 From BWT_$ to BBWT

To build the BBWT, we need to be aware of the Lyndon factors of T, which we compute with Sect. 3.2 by simulating a forward scan on T with ψ on BWT_$. To this end, we store the entries of the array C in a fusion tree [13] using O(σ lg n) bits and supporting predecessor search in O(log_w σ) time. (We assume that the alphabet Σ is effective, i.e., that each character of Σ appears at least once in T. Otherwise, we build the static dictionary of Hagerup [18], supporting constant-time access and assigning each occurring character an integer rank, and map BWT_$ to this effective alphabet with a linear-time integer sorting algorithm.) This time complexity also covers a forward step in BWT_$ by simulating ψ with the fusion tree on C. Hence, this fusion tree allows us to apply Sect. 3.2 for computing the Lyndon factorization of T with the predecessor search as a multiplicative time penalty, since this algorithm only needs to perform forward traversals. The starting point of such a traversal is the position holding $ in BWT_$, because a forward step from there returns the first character of T. Whenever we detect a Lyndon factor T_x (starting with T_1), we copy this factor to our dynamic BBWT. For that, we always maintain the first and the last position of T_x in memory. Having the last position of T_x, we perform backward steps on BWT_$ until returning to the first position of T_x to read the characters of T_x in reversed order. Then we continue with the algorithm of Sect. 3.2 at the position after T_x (for recursing on T_{x+1}). Inserting a Lyndon factor into the BBWT works exactly as sketched by Bonomo et al. [5, Thm. 17] or in the pseudocode in the appendix (we review this algorithm in detail in Sect. 5.1).
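A plain sketch (ours) of the forward scan this conversion simulates: starting from the entry holding $, repeated ψ steps enumerate T in text order, so Duval's algorithm can be fed directly with this character stream; the predecessor search on C is replaced here by a scan of the sorted BWT.

```python
def forward_scan(L: str):
    """Yield the characters of T in text order, given L = BWT of T$."""
    C = {c: sum(1 for d in L if d < c) for c in set(L)}
    first = sorted(L)                     # first column of the BWT matrix
    i = L.index("$") + 1                  # entry representing the delimiter
    for _ in range(len(L) - 1):           # n forward steps reveal T[1..n]
        c = first[i - 1]                  # i-th smallest character
        j = i - C[c]                      # it is the j-th occurrence of c in L
        seen = 0                          # psi(i) = select_c(L, j), by scanning
        for pos, ch in enumerate(L, start=1):
            if ch == c:
                seen += 1
                if seen == j:
                    i = pos
                    break
        yield L[i - 1]

assert "".join(forward_scan("annb$aa")) == "banana"
```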

5 In-Place Conversions

We finally present our in-place conversions that work in quadratic time, computing LF or ψ in O(n) time while having only stored either BWT_$, BWT, or BBWT. We note that the constructions from the text also work in the comparison model, while inverting a transform or converting between two different transforms incurs a multiplicative time penalty there, as the fastest option to access the i-th smallest character in the comparison model uses O(n^{1+ε}) time for a constant ε > 0 [26]. We start with the construction and inversion of the BWT (Sects. 5.1 and 5.2), where we show (a) that we can construct the BWT from the text in the same manner as Bonomo et al. [5] construct the BBWT, and (b) that the latter construction also works in-place. Next, we show in Sect. 5.3 how to invert the BBWT with the BWT inversion algorithm of Crochemore et al. [9, Fig. 3], which also allows us to convert the BBWT to BWT_$ with the BWT construction algorithm of the same paper [9, Fig. 2]. Finally, we show a conversion from BWT_$ to the BBWT in Sect. 5.4. An overview is given in Table 1.

Table 1: Overview of the in-place conversions in focus of Sect. 5, all working in quadratic time. The text can be converted to BWT_$ ([9, Fig. 2]) as well as to the BBWT and the BWT (Sect. 5.1); BWT_$ can be inverted ([9, Fig. 3]) and converted to the BBWT (Sect. 5.4); the BBWT can be inverted and converted to BWT_$ (Sect. 5.3); and the BWT can be inverted (Sect. 5.2).

5.1 Constructing BBWT and BWT

We can compute the BBWT and the BWT from T with the algorithm of Bonomo et al. [5] computing the extended BWT [23]. The extended BWT is the BWT defined on a multiset of primitive strings. As stated in Sect. 3.5, the extended BWT coincides with the BBWT if this multiset of primitive strings is the multiset of Lyndon factors of T [5, Thm. 14]. We briefly describe the algorithm of Bonomo et al. [5] for computing the BBWT (cf. Fig. 2 and the pseudocode in the appendix): for each Lyndon factor T_x (starting with T_1), prepend T_x's last character to the BBWT built so far. To insert the remaining characters of the factor, let p be the position of the most recently inserted character. Then perform, for each j = |T_x| − 1 down to 1, a backward step from p, and insert T_x[j] at the position determined by this backward step, which becomes the new p (cf. the pseudocode in the appendix). To understand why this computes the BBWT, we observe that the last character of the most recently started Lyndon factor is always the first character in the currently built BBWT according to Sect. 3.3. By recursively inserting the preceding character at the place returned by a backward step, we insert this character precisely at the position where we would expect it (another backward step from the same position p would then return the inserted character). Using only backward steps and insertions, this algorithm works in-place in O(n^2) time by simulating LF as described in Sect. 3.4.
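The following Python sketch mirrors this insertion-based construction on a plain array; backward steps are simulated by scanning the current string, and the exact indexing conventions are ours (not those of the pseudocode in the appendix).

```python
def construct_bbwt(T: str) -> str:
    """Insertion-based BBWT construction: for each Lyndon factor (in text
    order), prepend its last character and insert the remaining characters
    at the positions determined by successive backward steps."""
    def lyndon_factors(T):
        k, n, out = 0, len(T), []
        while k < n:
            i, j = k, k + 1
            while j < n and T[i] <= T[j]:
                i, j = (k if T[i] < T[j] else i + 1), j + 1
            while k <= i:
                out.append(T[k:k + j - i]); k += j - i
        return out

    B = []                                   # current BBWT as a list
    for W in lyndon_factors(T):
        B.insert(0, W[-1])                   # last character goes to the front
        p = 1                                # 1-based position of that character
        for j in range(len(W) - 2, -1, -1):  # remaining characters, right to left
            c = B[p - 1]
            smaller = sum(1 for d in B if d < c)    # C[c]
            occ = sum(1 for d in B[:p] if d == c)   # rank_c(B, p)
            p = smaller + occ + 1            # position right after the backward step
            B.insert(p - 1, W[j])
    return "".join(B)

assert construct_bbwt("banana") == "annbaa"
```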

Consequently, we can build the BWT of T if T is a Lyndon word, since in this case BWT and BBWT coincide [15, Lemma 12]. That is because sorting the suffixes of T is then equivalent to sorting the conjugates of T (if T is a Lyndon word, then its Lyndon factorization consists only of T itself).

It is easy to generalize this to work for a general string T. First, if T is primitive, then we compute its so-called Lyndon conjugate, i.e., a conjugate of T that is a Lyndon word. (The Lyndon conjugate of T is uniquely defined if T is primitive.) We can find the Lyndon conjugate of T in O(n) time with the following two lemmata:

[[10, Prop. 1.3]] Given two Lyndon words U and V, the concatenation UV is a Lyndon word if U < V.

Given a primitive string T, we can find its Lyndon conjugate in O(n) time with O(lg n) bits of working space.

Proof.

We use Sect. 3.2 to detect the final run of smallest Lyndon factors of the Lyndon factorization T_1 ⋯ T_t of T, using O(lg n) bits of working space. According to Sect. 5.1, rotating T such that this run becomes a prefix yields again a Lyndon word by a recursive application of the lemma. Hence, we have found T's Lyndon conjugate. ∎

Let T' be the Lyndon conjugate of T. Since BBWT(T') is identical to BWT(T), we are done by running the algorithm of Bonomo et al. [5] on T'. Finally, if T is not primitive, then there is a primitive string P such that T = P^k for an integer k ≥ 2. We can compute BWT(P) with the above considerations. For obtaining BWT(T), according to [25, Prop. 2], we only need to turn each character of BWT(P) into a character run of length k, i.e., if BWT(P) = c_1 ⋯ c_{|P|}, we append c_i^k to BWT(T) for increasing i (cf. [15, Thm. 13]). Checking whether T is primitive can be done in O(n^2) time by checking, for each pair of positions, their longest common prefix. We summarize these steps in the pseudocode in the appendix.
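A naive sketch (ours) of the Lyndon conjugate: it simply takes the smallest rotation after a primitivity test, whereas the lemma above obtains the rotation point in linear time with Duval's algorithm and O(lg n) bits.

```python
def is_lyndon(T: str) -> bool:
    """T is a Lyndon word iff it is smaller than all of its proper suffixes."""
    return all(T < T[i:] for i in range(1, len(T)))

def lyndon_conjugate(T: str) -> str:
    """Return the unique conjugate of a primitive string T that is a Lyndon
    word. This naive version tries all rotations in quadratic time."""
    if (T + T).find(T, 1) != len(T):       # standard primitivity test
        raise ValueError("input is not primitive")
    return min(T[i:] + T[:i] for i in range(len(T)))

conj = lyndon_conjugate("banana")
assert conj == "abanan" and is_lyndon(conj)
```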

Figure 2: Computing the BBWT of our running example in four steps (visualized by four columns separated by three arrows), cf. Sect. 5.1. In each column, the characters from the top down to the solid horizontal line form the currently built BBWT. The characters below that, up to the dashed horizontal line, are under consideration of being merged into the BBWT. This dashed line is always just before the beginning of the next yet unread Lyndon factor. First column: we have already computed the BBWT of the factors read so far, which is cba. In the following, we want to add the next Lyndon factor to it. For that, we prepend its last character to the currently constructed BBWT. Second column: we move the last character above the dashed line to the position determined by a backward step, and update the pointer to the inserted character. We recurse in the third column, and have produced the BBWT of the processed prefix in the fourth column.

5.2 Inverting BWT

To invert the BWT, we use the techniques of Crochemore et al. [9, Fig. 3] for inverting BWT_$ in-place. An invariant is that the BWT entry whose mapping corresponds to the next character to output is marked with the unique delimiter $. Given the position of $, the algorithm outputs the next text character, updates the position of $ accordingly, removes the output character, and recurses until $ is the last character remaining in BWT_$. By doing so, it restores the text in text order.

To adapt this algorithm for inverting the BWT, we additionally need a pointer storing the entry corresponding to the first symbol of the text (since there is no unique delimiter such as $ in general). Given this pointer, we output the first character and, from then on, the algorithm works exactly as [9, Fig. 3] if we treat the visited position like the $-marked one after outputting each character (cf. the pseudocode in the appendix). More involved is inverting the BBWT or converting it to BWT_$, which we tackle next.
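A plain (not in-place) sketch (ours) of this adaptation: the role of the extra pointer is played here by the index of the entry whose conjugate is the text itself; from there, forward steps restore the text in text order.

```python
def invert_circular_bwt(L: str, p: int) -> str:
    """Restore a primitive string T from its circular BWT L (built without a
    delimiter), given additionally the 1-based position p of the entry whose
    conjugate is T itself (so L[p] is T's last character)."""
    C = {c: sum(1 for d in L if d < c) for c in set(L)}
    first = sorted(L)                       # first column of the conjugate matrix
    out, i = [], p
    for _ in range(len(L)):                 # n forward steps yield T[1..n]
        c = first[i - 1]                    # character in the first column
        j = i - C[c]                        # it is the j-th occurrence of c in L
        seen = 0
        for pos, ch in enumerate(L, start=1):
            if ch == c:
                seen += 1
                if seen == j:
                    i = pos
                    break
        out.append(L[i - 1])
    return "".join(out)

# The circular BWT of "banana" is "nnbaaa"; the conjugate equal to "banana"
# itself is the 4th smallest rotation, hence p = 4 (example values are ours).
assert invert_circular_bwt("nnbaaa", 4) == "banana"
```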

Figure 3: Inverting the BBWT of our running example (cf. Sect. 5.3). First column: we prepend the $ delimiter to the last Lyndon factor by inserting $ at the appropriate position. A forward step, symbolized by the dashed arrow, leads us from $ to the first character of this factor. Second column: we output this character, remove $, and update the remaining entries. The output is appended to the string shown below the dashed horizontal line. We continue with a forward step to access the next character, and recurse in the third column. Fourth column: since a forward step returns $, we know that we have successfully extracted the factor.

5.3 Inverting BBWT

Similarly to Sect. 4.2, we read the Lyndon factors from the BBWT in the order T_t, ..., T_1, and move each read Lyndon factor directly to a text buffer: while reading the last Lyndon factor T_x still stored in the BBWT (for an x ≤ t), we move the characters of T_x to the buffer, producing the BBWT of T_1 ⋯ T_{x−1} and the buffer contents T_x ⋯ T_t. This allows us to recurse by always reading the last Lyndon factor still stored in the BBWT.

Here, we want to apply the inversion algorithm for BWT_$ described in Sect. 5.2. For adapting this algorithm to work with the BBWT, it suffices to insert $ into the cycle of the currently last Lyndon factor T_x (cf. Fig. 3), i.e., we enlarge this Lyndon factor by the delimiter. That is because (a) the first entry of the BBWT corresponds to the last character of T_x (cf. Sect. 3.3), and after inserting $, a forward step on the last character of T_x gives $, while a forward step on $ gives the position corresponding to the first character of T_x. Moreover, inserting $ makes the string the BBWT of T_1 ⋯ T_{x−1} · $T_x, whose last Lyndon factor is $T_x. We now use the property that $T_x is a Lyndon word for each x, allowing us to perform the inversion steps of Crochemore et al. [9, Fig. 3] on this factor. By doing so, we can remove the entries corresponding to the characters of T_x one by one and prepend the extracted characters to the text buffer storing T_{x+1} ⋯ T_t within our working space while keeping a valid BBWT.

Instead of inverting the BBWT, we can convert it to BWT_$ in-place by running the in-place BWT construction algorithm of Crochemore et al. [9, Fig. 2] on the text buffer after the extraction of each Lyndon factor. Unfortunately, this does not work character-wise, but needs a Lyndon factor to be fully extracted before inserting its characters into BWT_$. Interestingly, for the other direction (from BWT_$ to the BBWT), we can propose a different kind of conversion that works directly on BWT_$ without decoding it.

5.4 From BWT_$ to BBWT on the Fly

Like in Sect. 4.3, we process the Lyndon factors of T individually to compute the BBWT by scanning T in text order, simulating Sect. 3.2 on BWT_$. Suppose that we have built BWT_$ on T$, with $ being the last Lyndon factor of T$, and suppose that we have detected the first Lyndon factor T_1. Let f denote the last character of T_1 (f stands for final character). Further, consider the position in BWT_$ storing this last character f of T_1 and the position storing $. Since T_1 and the subsequent factors are Lyndon factors, the suffix following T_1 (the context of the entry holding f) is lexicographically smaller than the whole text T_1 ⋯ T_t$ (the context of the entry holding $), i.e., the entry holding f precedes the entry holding $ in BWT_$. Figure 4 gives an overview of the introduced setting.

Figure 4: Setting of Sect. 5.4 with focus on forming a cycle for a Lyndon factor ending with f in BWT_$. Left: we exchange the entry holding f with the entry holding $, with the aim to form a cycle. Right: to obtain this cycle, we additionally need to swap f with the elements of the dashed rectangle corresponding to the affected interval, which has the same height as the dotted rectangle covering the other occurrences of f.
Figure 5: Computing the BBWT from BWT_$ (cf. Sect. 5.4) of our running example. In the left column, we find the first Lyndon factor of T by forward steps. We obtain the middle column by exchanging the entry holding its last character with the entry holding $. Since there are two b's between that entry and $ in the left column, we need to swap the moved entry with the two elements below it in the middle column. This gives a cycle in the right column. We can recurse since the mapping of $ now yields the second Lyndon factor of T.
Figure 6: Special case for computing the BBWT from BWT_$ (cf. Sect. 5.4) with a different example string whose first Lyndon factor ends in a run of equal characters. Left column: we find the first Lyndon factor of T by forward steps; its last character is stored at some position in BWT_$. By exchanging $ with this last character, we obtain the middle column. Middle column: the mapping for the third d becomes invalid. However, the respective interval in BWT_$ starts with a character run of d's, so we recurse on this interval to find characters different from d to swap. Right column: we have created a cycle with the characters of the first Lyndon factor. A forward step at $ gives the first character of the next Lyndon factor.

Our aim is to change BWT_$ such that a forward or backward step within the characters belonging to T_1 always stays within a cycle. Informally, we want to cut T_1 out of BWT_$, which additionally allows us to recursively continue with the mapping to find the end of the next Lyndon factor T_2. (As a matter of fact, if we now wanted to restore the text from the modified transform, we would only produce the remaining factors.) For that, we exchange the entry holding f with the entry holding $ (cf. Fig. 5). Then the first character of T_1 becomes the character reached from $ by a forward step, while a backward step on the first character of T_1 yields T_1's last character f. This is sufficient as long as no other occurrence of f lies between the two exchanged entries. Otherwise, it can happen that we unintentionally change the mapping from one occurrence of f to another (or vice versa). In such a case, we swap some entries of BWT_$ within the interval of entries holding f: in detail, we conduct the exchange of f with $, but continue swapping the moved entry with the subsequent entries until it becomes the occurrence of f that corresponds to the intended position. This may not be sufficient if the characters we swap are identical (cf. Fig. 6). In such a case, we recurse on the respective interval; see also the pseudocode in the appendix.

Instead of checking whether we have created a cycle after each swap, we want to compute the exact number of swaps needed for this task. For that, we note that exchanging f with $ decrements the LF values of certain entries by one; in particular, LF changes for those occurrences of f that lie between the two exchanged positions. Hence, the number of required swaps is the number of such positions. The swaps are performed within the range starting right after the moved entry and covering all positions whose mapping has changed. However, if this range starts with a character run of identical characters, swapping these identical characters does not change BWT_$ and therefore has no effect on the mapping. Instead, we search for the end of this run within the range, swap the first entry below this run with the first entry of the run, and recurse on swapping that entry with the entries below it.

Correctness.

To see why the swaps restore the LF mapping for T_1 and for the remaining part of the text, we examine those substrings of T that we might no longer find with the LF mapping after exchanging f with $.

In detail, we examine each substring that is represented in BWT_$ (before changing it) by an entry holding f and its LF value. Due to the LF mapping, this value identifies to which occurrence of f the entry maps. After exchanging f with $, this correspondence shifts by one for the affected occurrences, while for all other entries the mapping did not change. Hence, we only have to focus on the range of affected entries.

First, suppose that the two entries to be swapped hold distinct characters. If we swap them, the first keeps its role, while the second takes over the corrected mapping, such that we have fixed the affected substring. This also works in a more general setting: if this holds for every affected entry, we can perform swaps like above for all entries in