String Periods in the Order-Preserving Model

01/04/2018 · Garance Gourdel et al. · ENS Paris-Saclay, Ural Federal University

The order-preserving model (op-model, in short) was introduced quite recently but has already attracted significant attention because of its applications in data analysis. We introduce several types of periods in this setting (op-periods). Then we give algorithms to compute these periods in time O(n), O(n log log n), O(n log^2 log n / log log log n), O(n log n), depending on the type of periodicity. In the most general variant the number of different periods can be as big as Ω(n^2), and a compact representation is needed. Our algorithms require novel combinatorial insight into the properties of such periods.


1 Introduction

Study of strings in the order-preserving model (op-model, in short) is a part of the so-called non-standard stringology. It is focused on pattern matching and repetition discovery problems in the shapes of number sequences. Here the shape of a sequence is given by the relative order of its elements. The applications of the op-model include finding trends in time series which appear naturally when considering e.g. the stock market or melody matching of two musical scores; see [33]. In such problems periodicity plays a crucial role.

One of the motivations is given by the following scenario. Consider a sequence of numbers that models a time series which is known to repeat the same shape every fixed period of time. For example, this could be certain stock market data or statistics from a social network that is strongly dependent on the day of the week, i.e., repeats the same shape every consecutive week. Our goal is, given a fragment of the sequence, to discover such repeating shapes, called here op-periods. We also consider some special cases of this setting. If the beginning of the given fragment is synchronized with the beginning of the repeating shape, we refer to the repeating shape as an initial op-period. If the synchronization takes place also at the end of the fragment, we call the shape a full op-period. Finally, we also consider sliding op-periods, which describe the case when every factor of the sequence repeats the same shape every fixed period of time.

Order-preserving model.

Let denote the set . We say that two strings x and y over an integer alphabet are order-equivalent (equivalent in short), written x ≈ y, iff |x| = |y| and x[i] ≤ x[j] if and only if y[i] ≤ y[j] for all positions i, j.

Example 1.

.

Order-equivalence is a special case of a substring consistent equivalence relation (SCER) that was defined in [38].

For a string x of length n, we can create a new string of length n whose i-th symbol is equal to the number of distinct symbols in x that are not greater than x[i]. The string obtained this way is called the shape of x and is denoted by shape(x). It is easy to observe that two strings are order-equivalent if and only if they have the same shape.
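For illustration, the shape can be computed by ranking the distinct values of the string. The following Python sketch is ours and not part of the paper; the function names are arbitrary.

    def shape(w):
        """Shape of w: position i receives the number of distinct values
        of w that are not greater than w[i] (ranks start at 1)."""
        ranks = {v: r + 1 for r, v in enumerate(sorted(set(w)))}
        return [ranks[v] for v in w]

    def op_equivalent(x, y):
        """Two strings are order-equivalent iff they have the same shape."""
        return len(x) == len(y) and shape(x) == shape(y)

    # Example: (5, 2, 7) and (30, 10, 40) both have shape (2, 1, 3).
    assert shape([5, 2, 7]) == [2, 1, 3]
    assert op_equivalent([5, 2, 7], [30, 10, 40])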

Example 2.

.

Periods in the op-model.

We consider several notions of periodicity in the op-model, illustrated by Fig. 1. We say that a string has a (general) op-period with shift if and only if and is a factor of a string such that:

The shape of the op-period is . One op-period can have several shifts; to avoid ambiguity, we sometimes denote the op-period as . We define as the set of all shifts of the op-period .

An op-period p is called initial if 0 is one of its shifts, full if it is initial and p divides the length of the string, and sliding if all p shifts 0, …, p−1 are valid. Initial and sliding op-periods are particular cases of block-based and sliding-window-based periods for SCER, both of which were introduced in [38].

Figure 1: The string to the left has op-period 4 with three shifts: . Due to the shift 0, the string has an initial—therefore, a full—op-period 4. The string to the right has op-period 4 with all four shifts: . In particular, 4 is a sliding op-period of the string. Notice that both strings (of length ) have (general, sliding) periods 4, but none of them has the order-border (in the sense of [37]) of length .

Models of periodicity.

In the standard model, a string w of length n has a period p iff w[i] = w[i+p] for all 1 ≤ i ≤ n − p. The famous periodicity lemma of Fine and Wilf [27] states that a “long enough” string with periods p and q also has the period gcd(p, q). The exact bound of being “long enough” is p + q − gcd(p, q). This result was generalized to an arbitrary number of periods [10, 32, 41].
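As a quick illustration of the classical lemma (our snippet, not part of the paper):

    from math import gcd

    def has_period(w, p):
        """Standard periodicity: w[i] == w[i + p] for all valid i."""
        return all(w[i] == w[i + p] for i in range(len(w) - p))

    # A string of length p + q - gcd(p, q) = 6 + 4 - 2 = 8 with periods 6 and 4
    # must also have period gcd(6, 4) = 2.
    w = "abababab"
    assert has_period(w, 6) and has_period(w, 4) and has_period(w, 2)

    # The bound is tight: this string of length 7 has periods 4 and 6 but not 2.
    v = "aaabaaa"
    assert has_period(v, 4) and has_period(v, 6) and not has_period(v, 2)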

Periods were also considered in a number of non-standard models. Partial words, which are strings with don’t care symbols, possess quite interesting Fine–Wilf type properties, including probabilistic ones; see [5, 6, 7, 39, 40, 31]. In Section 2, we make use of periodicity graphs introduced in [39, 40]. In the abelian (jumbled) model, a version of the periodicity lemma was shown in [16] and extended in [8]. Also, algorithms for computing three types of periods analogous to full, initial, and general op-periods were designed [20, 25, 26, 34, 35, 36]. In the computation of full and initial op-periods we use some number-theoretic tools initially developed in [34, 35]. Remarkably, the fastest known algorithm for computing general periods in the abelian model has essentially quadratic time complexity [20, 36], whereas for the general op-periods we design a much more efficient solution. A version of the periodicity lemma for the parameterized model was proposed in [2].

Op-periods were first considered in [38], where initial and sliding op-periods were introduced and direct generalizations of the Fine–Wilf property to these kinds of op-periods were developed. A few distinctions between op-periods and periods in other models should be mentioned. First, “to have a period 1” becomes a trivial property in the op-model. Second, all standard periods of a string have the “sliding” property; the first string in Fig. 1 demonstrates that this is not true for op-periods. The last distinction concerns borders. A standard period p of a string w of length n corresponds to a border of w of length n − p, which is both a prefix and a suffix of w. In the order-preserving setting, an analogue of a border is an op-border, that is, a prefix that is equivalent to the suffix of the same length. Op-borders have properties similar to standard borders and can be computed efficiently [37]. However, it is no longer the case that a (general, initial, full, or sliding) op-period must correspond to an op-border; see [38].

Previous algorithmic study of the op-model.

The notion of order-equivalence was introduced in [33, 37]. (However, note the related combinatorial studies, originated in [23], on containment/avoidance of shapes in permutations.) Both [33, 37] studied pattern matching in the op-model (op-pattern matching) that consists in identifying all consecutive factors of a text that are order-equivalent to a given pattern. We assume that the alphabet is integer and, as usual, that it is polynomially bounded with respect to the length of the string, which means that a string can be sorted in linear time (cf. [17]). Under this assumption, for a text of length and a pattern of length , [33] solve the op-pattern matching problem in time and [37] solve it in time. Other op-pattern matching algorithms were presented in [3, 15].

An index for op-pattern matching based on the suffix tree was developed in [19]. For a text of length it uses space and answers op-pattern matching queries for a pattern of length in optimal, time (or time if we are to report all occurrences). The index can be constructed in expected time or worst-case time. We use the index itself and some of its applications from [19].

Other developments in this area include a multiple-pattern matching algorithm for the op-model [33], an approximate version of op-pattern matching [29], compressed index constructions [13, 22], a small-space index for op-pattern matching that supports only short queries [28], and a number of practical approaches [9, 11, 12, 14, 24].

Our results.

We give algorithms to compute:

  • all full op-periods in O(n) time;

  • the smallest non-trivial initial op-period in O(n) time;

  • all initial op-periods in O(n log log n) time;

  • all sliding op-periods in O(n log log n) expected time or O(n log^2 log n / log log log n) worst-case time (and linear space);

  • all general op-periods with all their shifts (compactly represented) in O(n log n) time and space. The output is the family of sets of shifts, one per op-period, represented as unions of disjoint intervals. The total number of intervals, over all op-periods, is O(n log n).

In the combinatorial part, we characterize the Fine–Wilf periodicity property (aka interaction property) in the op-model in the case of coprime periods. This result is at the core of the linear-time algorithm for the smallest initial op-period.

Structure of the paper.

Combinatorial foundations of our study are given in Section 2. Then in Section 3 we recall known algorithms and data structures for the op-model and develop further algorithmic tools. The remaining sections are devoted to computation of the respective types of op-periods: full and initial op-periods in Section 4, the smallest non-trivial initial op-period in Section 5, all (general) op-periods in Section 6, and sliding op-periods in Section 7.

2 Fine–Wilf Property for Op-Periods

The following result was shown as Theorem 2 in [38]. Note that if the two op-periods are coprime, then the conclusion is void, as every string has the op-period 1.

Theorem 1 ([38]).

Let and . If a string of length has initial op-periods and , it has initial op-period . Moreover, if has length and sliding op-periods and , it has sliding op-period .

The aim of this section is to show a periodicity lemma for the case of two coprime op-periods.

2.1 Preliminary Notation

For a string w of length n, by w[i] (for 1 ≤ i ≤ n) we denote the i-th letter of w, and by w[i..j] we denote the factor of w equal to w[i] w[i+1] ⋯ w[j]. If i > j, then w[i..j] denotes the empty string ε.

A string which is strictly increasing, strictly decreasing, or constant is called strictly monotone. A strictly monotone op-period of a string is an op-period with a strictly monotone shape. Such an op-period is called increasing (decreasing, constant) if so is its shape. Clearly, any divisor of a strictly monotone op-period is a strictly monotone op-period as well. A string is 2-monotone if it is a concatenation of two strings that are strictly monotone in the same direction.

Below we assume that . Let a string have op-periods and . If there exists a number such that and , we say that these op-periods are synchronized and is a synchronization point (see Fig. 2).

Figure 2: Op-periods and synchronized at position .
Remark 1.

The proof of Theorem 1 can be easily adapted to prove the following.

Theorem 2.

Let and . If op-periods and of a string of length are synchronized, then has op-period , synchronized with them.

2.2 Periodicity Theorem For Coprime Periods

For a string w of length n, by trace(w) we denote the string of length n − 1 over the alphabet {−, 0, +} such that trace(w)[i] = sign(w[i+1] − w[i]).

Observation 1.
  1. A string is strictly monotone iff its trace is a unary string.

  2. If has an op-period with shift , then “almost” has a period , namely, for any such that and . (This is because both and equal the sign of the difference between the same positions of the shape of the op-period of .)

Example 3.

Consider the string 7 5 8 1 4 6 2 4 5. It has an op-period 3 with shape 2 3 1. The trace of this string is:

- + - + + - + +

The positions giving the remainder 1 modulo 3 (the 1st, 4th, and 7th symbols of the trace) are excluded; the sequence formed by the remaining positions is periodic with period 3.
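The following sketch (ours, not part of the paper) computes the trace and checks this “almost periodic” behaviour on the string of Example 3.

    def trace(w):
        """Sign of each consecutive difference: '-', '0' or '+'."""
        return ''.join('-' if b < a else '+' if b > a else '0'
                       for a, b in zip(w, w[1:]))

    t = trace([7, 5, 8, 1, 4, 6, 2, 4, 5])
    assert t == '-+-++-++'

    # Every position i of the trace with i % 3 != 1 (1-based) satisfies
    # t[i] == t[i + 3], i.e. the trace is periodic outside the excluded positions.
    for i in range(1, len(t) - 2):
        if i % 3 != 1:
            assert t[i - 1] == t[i + 2]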

To study traces of strings with two op-periods, we use periodicity graphs (see Fig. 3 below) very similar to those introduced in [39, 40] for the study of partial words with two periods. The periodicity graph represents all strings of length having the op-periods and . Its vertex set is the set of positions of the trace . Two positions are connected by an edge iff they contain equal symbols according to Observation 1b. For convenience, we distinguish between - and -edges, connecting positions in the same residue class modulo (resp., modulo ). The construction of is split in two steps: first we build a draft graph (see Fig. 3,a), containing all - and -edges for each residue class, and then delete all edges of the orange clique corresponding to the th class modulo and all edges of the blue clique corresponding to the th class modulo (see Fig. 3,b,c). If some vertices belong to the same connected component of , then for every string corresponding to . In particular, if is connected, then is unary and is strictly monotone by Observation 1a.

Figure 3: Examples of periodicity graph: (a) draft graph ; (b) periodicity graph ; (c) periodicity graph . Orange/blue are -edges (resp., -edges) and the vertices equal to modulo (resp., to modulo ).
Example 4.

The graph in Fig. 3,b is connected, so all strings having this graph are strictly monotone. On the other hand, some strings with the graph in Fig. 3,c have no monotonicity properties. Thus, the string of length 18 indeed has the op-period 8 with shift 5 (and shape ) and the op-period 5 with shift 2 (and shape ).

It turns out that the existence of two coprime op-periods makes a string “almost” strictly monotone.

Theorem 3.

Let be a string of length that has coprime op-periods and with shifts and , respectively, such that . Then:

  1. if , then has a strictly monotone op-period ;

  2. if and the op-periods are synchronized, then is 2-monotone;

  3. if and the op-periods are synchronized, then is a strictly monotone op-period of ;

  4. if and the op-periods are not synchronized, then is strictly monotone;

  5. if , the op-periods are not synchronized, and is initial, then is strictly monotone;

  6. if and is initial, then is a strictly monotone op-period of .

Proof.

Take a string of length having op-periods (with shift ) and (with shift ). Let . Consider the draft graph (see Fig. 3,a). It consists of -cliques (numbered from 0 to by residue classes modulo ) connected by some -edges. If , there are exactly -edges, which connect -cliques in a cycle due to coprimality of and . Thus we have a cyclic order on -cliques: for the clique , the next one is . The number of -edges connecting neighboring cliques increases with the number of vertices: if , every vertex has an adjacent -edge, and if , every -clique is connected to the next -clique by at least two -edges.

To obtain the periodicity graph , one should delete all edges of the th -clique and the th -clique from . First consider the effect of deleting -edges. If the th -clique has at least three vertices, then after the deletion each -clique will still be connected to the next one. Indeed, if we delete edges between , , and , then there are still the edges and , connecting the corresponding -cliques. If the -clique has a single edge, its deletion will break the connection between two neighboring -cliques if they were connected by a single edge. This is not the case if , but may happen for any smaller ; see Fig. 3,c, where .

Now look at the effect of only deleting -edges from . If all vertices in the th -clique have -edges (this holds for any if ), the graph after deletion remains connected; if not, it consists of a big connected component and one or more isolated vertices from the th -clique.

Finally we consider the cumulative effect of deleting - and -edges. Any synchronization point becomes an isolated vertex. In total, there are two ways of making the draft graph disconnected: break the connection between neighboring -cliques distinct from the removed -clique (Fig. 4,a) or get isolated vertices in the removed -clique (Fig. 4,b). The first way does not work if (see above) or if the op-periods are synchronized (the removed -edge was adjacent to the removed -clique). For the second way, only synchronization points are isolated if (each vertex has a -edge, see above). Note that in this case all non-isolated vertices of periodicity graph are connected. Hence all positions of the trace , except for the isolated ones, contain the same symbol. So all factors of involving no isolated positions are strictly monotone (in the same direction).

Figure 4: Disconnecting a draft graph: (a) removing the only edge between neighboring -cliques distinct from the removed -clique; (b) getting isolated vertices in the removed -clique.

At this point all statements of the theorem are straightforward:

  • all synchronization points are equal modulo by the Chinese Remainder Theorem;

  • all isolated positions are equal modulo ;

  • the condition on excludes both ways to disconnect the draft graph;

  • for the initial op-period, ; if , there is no deletion of -edges; if , then the -cliques connected by the edge are also connected by ; so only the disconnection by isolated positions is possible. ∎

3 Algorithmic Toolbox for Op-Model

Let us start by recalling the encoding for op-pattern matching (op-encoding) from [19, 37]. For a string of length and we define:

If there is no such , then . Similarly, we define:

and if no such exists. Then is the op-encoding of . It can be computed efficiently as mentioned in the following lemma.

Lemma 1 ([37]).

The op-encoding of a string of length over an integer alphabet can be computed in time.

The op-encoding can be used to efficiently extend a match.

Lemma 2.

Let and be two strings of length and assume that the op-encoding of is known. If , one can check if in time.

Proof.

Let and . Lemma 3 from [19] asserts that, if , then

and otherwise,

(Conditions involving or when or should be omitted.) ∎
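Lemma 2 can be illustrated with the following Python sketch, which is entirely ours: it uses one common variant of such an encoding (for every position, the previous position holding the largest value not exceeding it and the previous position holding the smallest value not below it) and a constant-time extension test in the spirit of Lemma 2, not necessarily the exact encoding or conditions of [19, 37]. The computation of the encoding here is naive and quadratic, unlike the efficient construction of Lemma 1.

    def op_encode(x):
        """lo[i]/hi[i]: index j < i with the largest value x[j] <= x[i]
        (resp. the smallest value x[j] >= x[i]); None if no such j exists.
        Naive quadratic computation, for experimentation only."""
        lo, hi = [], []
        for i, v in enumerate(x):
            below = [j for j in range(i) if x[j] <= v]
            above = [j for j in range(i) if x[j] >= v]
            lo.append(max(below, key=lambda j: x[j]) if below else None)
            hi.append(min(above, key=lambda j: x[j]) if above else None)
        return lo, hi

    def extend(x, lo, hi, y, k):
        """Assuming x[0..k-1] and y[0..k-1] are order-equivalent, decide with
        O(1) comparisons whether x[0..k] and y[0..k] are order-equivalent."""
        j1, j2 = lo[k], hi[k]
        if j1 is not None:
            if (x[j1] == x[k]) != (y[j1] == y[k]) or y[j1] > y[k]:
                return False
        if j2 is not None:
            if (x[j2] == x[k]) != (y[j2] == y[k]) or y[j2] < y[k]:
                return False
        return True

    x, y = [3, 1, 4, 1, 5], [10, 2, 11, 2, 30]
    lo, hi = op_encode(x)
    assert extend(x, lo, hi, y, 4)   # extend the match of the first 4 positions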

3.1 table

For a string of length , we introduce a table such that is the length of the longest prefix of that is equivalent to a prefix of . It is a direct analogue of the PREF array used in standard string matching (see [21]) and can be computed similarly in time using one of the standard encodings for the op-model that were used in [15, 19, 37]; see lemma below.

Lemma 3.

For a string of length n, the table can be computed in O(n) time.

Proof.

Let be a string of length . The standard linear-time algorithm for computing the table for (see, e.g., [21]) uses the following two properties of the table:

  1. If , , and , then .

  2. If we know that , then can be computed in time by extending the common prefix character by character.

In the case of the table, the first of these properties extends without alterations due to the transitivity of the relation. As for the second property, the matching prefix can be extended character by character using Lemma 2 provided that the op-encoding for is known. The op-encoding can be computed in advance using Lemma 1. ∎
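For testing, the table can also be computed naively by comparing shapes. The code and the name op_pref below are ours; this quadratic version is only a reference point for the linear-time procedure described above.

    def op_pref(w):
        """pref[i] = length of the longest factor starting at position i
        (0-based) that is order-equivalent to a prefix of w.
        Naive reference implementation."""
        def shp(u):
            ranks = {v: r for r, v in enumerate(sorted(set(u)))}
            return [ranks[v] for v in u]
        n = len(w)
        pref = [0] * n
        for i in range(n):
            k = 0
            while i + k < n and shp(w[i:i + k + 1]) == shp(w[:k + 1]):
                k += 1
            pref[i] = k
        return pref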

Let us mention an application of the table that is used further in the algorithms. We denote by (“longest op-periodic prefix”) the length of the longest prefix of a string having as an initial op-period.

Lemma 4.

For a string of length , for a given can be computed in time after -time preprocessing.

Proof.

We start by computing the table for in time. We assume that . To compute , we iterate over positions and for each of them check if . If is the first position for which this condition is not satisfied (possibly because ), we have . Clearly, this procedure works in the desired time complexity. ∎
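Since the notation above is incomplete, here is our reconstruction of the procedure: scan the block starts p, 2p, … and stop at the first block whose op-match with the prefix does not span a full period. Here pref is the table from the previous sketch, and the acceptance of the last (partial) block is our reading of the definition.

    def longest_op_periodic_prefix(w, p, pref):
        """Length of the longest prefix of w having p as an initial op-period,
        given pref = op_pref(w).  Sketch of the idea behind Lemma 4; a partial
        final block is accepted when it is order-equivalent to the
        corresponding prefix of the first block."""
        n = len(w)
        for j in range(p, n, p):      # block starts p, 2p, ... (0-based)
            if pref[j] < p:           # this block does not cover a full period
                return j + pref[j]
        return n

The loop inspects at most ⌈n/p⌉ block starts, which is the source of the running time claimed in Lemma 4.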

Remark 2.

Note that it can be the case that . See, e.g., the strings in Fig. 1 and .

3.2 Longest Common Extension Queries

For a string , we define a longest common extension query in the order-preserving model as the maximum such that . Symmetrically, is the maximum such that .

As in the standard model [18], LCP-queries in the op-model can be answered using lowest common ancestor (LCA) queries in the op-suffix tree; see the following lemma.

Lemma 5.

For a string of length , after preprocessing in expected time or in worst-case time one can answer -queries in  time.

Proof.

The order-preserving suffix tree (op-suffix tree) that is constructed in [19] is a compacted trie of op-encodings of all the suffixes of the text. In expected time or worst-case time one can construct a so-called incomplete version of the op-suffix tree in which each explicit node may have at most one edge whose first character label is not known. Fortunately, for such queries the labels of the edges are not needed; the only required information is the depth of each explicit node and the location of each suffix. Therefore, for this purpose the incomplete op-suffix tree can be treated as a regular suffix tree and preprocessed using a standard lowest common ancestor data structure that requires linear additional preprocessing and can answer queries in constant time [4]. ∎

3.3 Order-preserving Squares

The factor w[i..i+2ℓ−1] is called an order-preserving square (op-square) iff w[i..i+ℓ−1] ≈ w[i+ℓ..i+2ℓ−1]. For a string of length n, we define the set

Op-squares were first defined in [19] where an algorithm computing all the sets for a string of length in time was shown.

We say that an op-square is right shiftable if is an op-square and right non-shiftable otherwise. Similarly, we say that the op-square is left shiftable if is an op-square and left non-shiftable otherwise. Using the approach of [19], one can show the following lemma.
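Under the natural reading of these definitions, op-squares and right non-shiftable op-squares can be listed as follows. The names below are ours, and the computation is naive rather than the efficient one of Lemma 6.

    def op_square_starts(w, half):
        """0-based positions i such that w[i .. i + 2*half - 1] is an
        op-square, i.e. its two halves of length `half` are order-equivalent."""
        def shp(u):
            ranks = {v: r for r, v in enumerate(sorted(set(u)))}
            return [ranks[v] for v in u]
        return [i for i in range(len(w) - 2 * half + 1)
                if shp(w[i:i + half]) == shp(w[i + half:i + 2 * half])]

    def right_non_shiftable(w, half):
        """Op-squares of half-length `half` whose shift by one position to the
        right is no longer an op-square (or runs out of the string)."""
        starts = set(op_square_starts(w, half))
        return sorted(i for i in starts if i + 1 not in starts)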

Lemma 6.

All the (left and right) non-shiftable op-squares in a string of length can be computed in time.

Proof.

We show the algorithm for right non-shiftable op-squares; the computations for left non-shiftable op-squares are symmetric.

Let be a string of length . An op-square is called right non-extendible if or . We use the following claim.

Claim (See Lemma 18 in [19]).

All the right non-extendible op-squares in a string of length can be computed in time.

Note that a right non-shiftable op-square is also right non-extendible, but the converse is not necessarily true. Thus it suffices to filter out the op-squares that are right shiftable. For this, for a right non-extendible op-square we need to check if . This condition can be verified in time after -time preprocessing using Lemma 5. ∎

4 Computing All Full and Initial Op-Periods

For a string of length , we define for as:

Here we assume that . In the computation of full and initial op-periods we heavily rely on this table according to the following obvious observation.

Observation 2.

is an initial op-period of a string of length if and only if for all .

4.1 Computing Initial Op-Periods

Let us introduce an auxiliary array such that:

Straight from Observation 2 we have:

Observation 3.

is an initial op-period of if and only if .

The table could be computed straight from definition in time. We improve this complexity to by employing Eratosthenes’s sieve. The sieve computes, in particular, for each a list of all distinct prime divisors of . We use these divisors to compute the table via dynamic programming in a right-to-left scan, as shown in Algorithm 1.

1 ;
2 for  down to  do
3      foreach prime divisor of  do
4           ;
5          
6           for  to  do
7                if  then  is an initial op-period;
8               
Algorithm 1 Computing All Initial Op-Periods of
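Because the table definitions above did not survive extraction, the following Python sketch is our reconstruction of the scheme rather than a transcription of Algorithm 1: every block start i records how far its op-match with the prefix extends (a match running to the end of the string imposes no constraint), a sieve-style right-to-left pass takes minima of these values over the multiples of every candidate period, and p is reported whenever its minimum is at least p. It relies on the op_pref sketch from Section 3.1; all names are ours.

    import math

    def initial_op_periods(w, pref):
        """All initial op-periods of w, given pref = op_pref(w).
        The sieve-style pass performs O(n log log n) table updates."""
        n = len(w)
        INF = math.inf
        # B[i]: constraint imposed by the block starting at position i
        # (0-based); a block whose match runs to the end of w is unconstraining.
        B = [INF if i + pref[i] == n else pref[i] for i in range(n)]
        # smallest-prime-factor sieve
        spf = list(range(n + 1))
        for q in range(2, int(n ** 0.5) + 1):
            if spf[q] == q:
                for m in range(q * q, n + 1, q):
                    if spf[m] == m:
                        spf[m] = q
        # A[p] = min of B over all positive multiples of p that are < n
        A = [INF] * (n + 1)
        for i in range(n - 1, 0, -1):
            A[i] = min(A[i], B[i])
            m = i
            while m > 1:              # push A[i] down to the divisors i // q
                q = spf[m]
                A[i // q] = min(A[i // q], A[i])
                while m % q == 0:
                    m //= q
        return [p for p in range(1, n + 1) if A[p] >= p]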
Theorem 4.

All initial op-periods of a string of length n can be computed in O(n log log n) time.

Proof.

By Lemma 3, the table for the string—and hence the auxiliary table—can be computed in O(n) time. Then we use Algorithm 1. Each prime number q has at most n/q multiples below n. Therefore, the complexity of Eratosthenes's sieve and the number of updates on the table in the algorithm is proportional to ∑ n/q over primes q ≤ n, which is O(n log log n); see [1]. ∎

4.2 Computing Full Op-Periods

Let us recall the following auxiliary data structure for efficient -computations that was developed in [35]. We will only need a special case of this data structure to answer queries for .

Fact 1 (Theorem 4 in [35]).

After -time preprocessing, given any , the value can be computed in constant time.

Let denote the set of all positive divisors of . In the case of full op-periods we only need to compute for . As in Algorithm 1, we start with . Then we perform a preprocessing phase that shifts the information stored in the array from indices to indices . It is based on the fact that for , if and only if . Finally, we perform right-to-left processing as in Algorithm 1. However, this time we can afford to iterate over all divisors of elements from . Thus we arrive at the pseudocode of Algorithm 2.

1 ;
2 for  to  do
3      ;
4      ;
5     
6      foreach  in decreasing order do
7           foreach  do
8                ;
9               
10               
11                foreach  do
12                     if  then  is a full op-period;
13                    
Algorithm 2 Computing All Full Op-Periods of
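Algorithm 2 obtains the full op-periods with divisor enumeration and constant-time gcd queries. As a much simpler (and slower) cross-check based directly on the definition, full op-periods are exactly the initial op-periods that divide the length of the string; the sketch below reuses initial_op_periods from the previous subsection and is ours.

    def full_op_periods(w, pref):
        """Full op-periods of w: initial op-periods that divide |w|."""
        n = len(w)
        return [p for p in initial_op_periods(w, pref) if n % p == 0]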
Theorem 5.

All full op-periods of a string of length can be computed in time.

Proof.

We apply Algorithm 2. The complexity of the first for-loop is by Fact 1. The second for-loop works in time as the sizes of the sets , are and the elements of these sets can be enumerated in time as well. ∎

5 Computing Smallest Non-Trivial Initial Op-Period

If a string is not strictly monotone itself, all of its strictly monotone op-periods can be computed efficiently; see Theorem 6 below. We use this as an auxiliary routine in the computation of the smallest initial op-period that is greater than 1.

Theorem 6.

If a string of length is not strictly monotone, all of its strictly monotone op-periods can be computed in time.

Proof.

We show how to compute all the strictly increasing op-periods of a string that is not strictly monotone itself; computation of strictly decreasing and constant op-periods is the same. Let be a string of length and let us denote . Let be the set of all positions in such that ; by the assumption of this theorem, we have that . This set provides a simple characterization of strictly increasing op-periods of .

Observation 4.

is a strictly increasing op-period of a string that is not strictly monotone itself if and only if for all .

First, assume that . By Observation 4, each is an op-period of with the shift . From now we can assume that .

For a set of positive integers S, by gcd(S) we denote the greatest common divisor of all its elements. The claim below follows from Fact 1. However, we give a simpler proof.

Claim.

If S ⊆ {1, …, n}, then gcd(S) can be computed in O(|S| + log n) time.

Proof.

Let and denote . We want to compute .

Note that for all . Hence, the sequence contains at most distinct values.

Set . To compute for , we check if . If so, . Otherwise . Hence, we can compute using Euclid’s algorithm in time. The latter situation takes place at most times; the conclusion follows. ∎

Consider the set . By Observation 4, is a strictly increasing op-period of if and only if and . Thus there is exactly one strictly increasing op-period of each length that divides and its shift is determined uniquely.

The value can be computed in time by Claim 5. Afterwards, we find all its divisors and report the op-periods in time. ∎
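A sketch of the gcd computation from the claim above (our code): Euclid's algorithm is invoked only when the running gcd actually changes, and every change at least halves it.

    from math import gcd

    def gcd_of_set(values):
        """gcd of a collection of positive integers, with the shortcut from
        the claim above: a full Euclidean computation is triggered only when
        the running gcd must change, which happens O(log max(values)) times."""
        g = 0
        for v in values:
            if g == 0 or v % g != 0:   # the running gcd has to change
                g = gcd(g, v)
        return g

    assert gcd_of_set([12, 18, 30, 42]) == 6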

Let us start with the following simple property.

Lemma 7.

The shape of the smallest non-trivial initial op-period of a string has no shorter non-trivial full op-period.

Proof.

A full op-period of the initial op-period of a string is an initial op-period of . ∎

Now we can state a property of initial op-periods, implied by Theorem 3, that is the basis of the algorithm.

Lemma 8.

If a string of length has initial op-periods such that and , then is strictly monotone.

Proof.

Let us consider three cases. If , then by Theorem 3a, both and are strictly monotone. If , then Theorem 3e implies that is strictly monotone, hence and are strictly monotone as well. Finally, if , we have that is strictly monotone by Theorem 3f. ∎

1 if  has a non-trivial strictly monotone op-period then
     return smallest such op-period;
        Theorem 6
2     
3       the length of the longest monotone prefix of plus 1;
4      while  do
5           := ;
6           if  then return ;
7           := ;
8          
return ;
Algorithm 3 Computing the Smallest Non-Trivial Initial Op-Period of
Theorem 7.

The smallest non-trivial initial op-period of a string of length n can be computed in O(n) time.

Proof.

We follow the lines of Algorithm 3. If is not strictly monotone itself, we can compute the smallest non-trivial strictly monotone initial op-period of using Theorem 6. Otherwise, the smallest such op-period is 2. If has a non-trivial strictly monotone initial op-period and the smallest such op-period is , then none of is an initial op-period of . Hence, we can safely return .

Let us now focus on the correctness of the while-loop. The invariant is that there is no initial op-period of that is smaller than . If the value of equals , then is an initial op-period of and we can safely return it. Otherwise, we can advance by 1. There is also no smallest initial op-period such that . Indeed, Lemma 8 would imply that is strictly monotone if (which is impossible due to the initial selection of ) and Theorem 1 would imply an initial op-period of that is smaller than and divides if (which is impossible due to Lemma 7). This justifies the way is increased.

Now let us consider the time complexity of the algorithm. The algorithm for strictly monotone op-periods of Theorem 6 works in time. By Lemma 4, can be computed in time. If , this is . Otherwise, at least doubles; let be the new value of . Then . The case that doubles can take place at most times and the total sum of over such cases is . ∎
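For testing Algorithm 3, a brute-force reference is easy to write. The code below is ours; it relies on the op_pref and longest_op_periodic_prefix sketches from Section 3 and is quadratic, unlike the linear-time algorithm above.

    def smallest_nontrivial_initial_op_period(w):
        """Smallest initial op-period of w greater than 1 (brute force)."""
        pref = op_pref(w)
        for p in range(2, len(w) + 1):
            if longest_op_periodic_prefix(w, p, pref) == len(w):
                return p
        return 2   # strings of length at most 1 admit every period trivially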

6 Computing All Op-Periods

An interval representation of a set of integers is its representation as a union of pairwise disjoint intervals [a_1..b_1], …, [a_k..b_k] with a_1 ≤ b_1 < a_2 ≤ b_2 < ⋯ ≤ b_k; the number k of intervals is called the size of the representation.

Our goal is to compute a compact representation of all the op-periods of a string that contains, for each op-period , an interval representation of the set .

For an integer set X and a positive integer q, by X mod q we denote the set {x mod q : x ∈ X}. The following technical lemma provides efficient operations on interval representations of sets.

Lemma 9.
  1. Assume that and are two sets with interval representations of sizes and , respectively. Then the interval representation of the set can be computed in time.

  2. Assume that are sets with interval representations of sizes and be positive integers. Then the interval representations of all the sets can be computed in time.

Proof.

To compute the intersection in point a, it suffices to merge the lists of endpoints of intervals in the interval representations of the two sets. With each element of the merged list we store a weight of +1 if it represents the beginning of an interval and a weight of −1 if it represents the endpoint of an interval. We compute the prefix sums of these weights along the merged list. Then, by considering all elements with a prefix sum equal to 2 and their following elements in the merged list, we can restore the interval representation of the intersection.

Let us proceed to point b. Note that, for an interval , the set either equals if , or otherwise is a sum of at most two intervals. For each interval in the representation of , for , we compute the interval representation of . Now it suffices to compute the sum of these intervals for each . This can be done exactly as in point a provided that the endpoints of the intervals comprising representations of are sorted. We perform the sorting simultaneously for all using bucket sort [17]. The total number of endpoints is and the number of possible values of endpoints is at most . This yields the desired time complexity of point b. ∎
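The two operations can be sketched as follows (our code and naming): the intersection follows the +1/−1 sweep of point a, and the modular image of a single interval follows the case distinction of point b.

    def intersect(rep_a, rep_b):
        """Intersection of two interval representations (sorted lists of
        disjoint, inclusive intervals), by sweeping the merged endpoints."""
        events = []
        for a, b in rep_a + rep_b:
            events.append((a, +1))        # interval opens at a
            events.append((b + 1, -1))    # interval is closed after b
        events.sort()
        result, depth, start = [], 0, None
        for x, d in events:
            if depth == 2 and x > start:
                result.append((start, x - 1))   # we were inside both sets
            depth += d
            start = x if depth == 2 else None
        return result

    def interval_mod(a, b, q):
        """The set {x mod q : a <= x <= b} as at most two intervals, or the
        full range [0, q-1] when the interval covers a whole residue cycle."""
        if b - a + 1 >= q:
            return [(0, q - 1)]
        a, b = a % q, b % q
        return [(a, b)] if a <= b else [(0, b), (a, q - 1)]

    assert intersect([(1, 5), (9, 12)], [(4, 10)]) == [(4, 5), (9, 10)]
    assert interval_mod(7, 9, 4) == [(0, 1), (3, 3)]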

Lemma 10.

For a string of length , interval representations of the sets for all can be computed in time.

Proof.

Let us define the following two auxiliary sets.

By Lemma 6, all the sets and can be computed in time. In particular, .

Let us note that, for each , . Thus let and . The interval representation of the set is . Clearly, it can be computed in time. ∎

We will use the following characterization of op-periods.

Observation 5.

is an op-period of with shift if and only if all the following conditions hold:

  1. is an op-square for every ,

  2. ,

  3. .

Theorem 8.

A representation of size of all the op-periods of a string of length can be computed in time.

Proof.

We use Algorithm 4. The sets , , and describe the sets of shifts that satisfy conditions (A), (B), and (C) from Observation 5, respectively.

A crucial role is played by the set of all positions which are not the beginnings of op-squares of length . It is computed as a complement of the set .

Compute for all ;
   Lemma 10
1 for  to  do
2      ;
3      ; ;
4      if  then ;
5     else ; ;
6     
7      for  to simultaneously do
           ; ;
             Lemma 9b
8          
9           ;
10           for  to  do
11                ;
                ;
                  Lemma 9a
12               
return for ;
Algorithm 4 Computing a Compact Representation of All Op-Periods

Operations “” on sets are performed simultaneously using Lemma 9b. All sets , , have -sized representations. This guarantees time. ∎

7 Computing Sliding Op-Periods

For a string of length , we define a family of strings such that for . Note that the characters of the strings are shapes. Moreover, the total length of strings is quadratic in , so we will not compute those strings explicitly. Instead, we use the following observation to test if two symbols are equal.

Observation 6.

if and only if .

Sliding op-periods admit an elegant characterization based on ; see Figure 5.

Lemma 11.

An integer , , is a sliding op-period of if and only if and is a period of , or and .

Proof.

If is a sliding op-period, then . Consequently, Observation 5 yields that is an op-square for every and that .

If , then the former property yields