1 Introduction
The study of strings in the order-preserving model (op-model, in short) is a part of so-called non-standard stringology. It is focused on pattern matching and repetition discovery problems in the shapes of number sequences. Here the shape of a sequence is given by the relative order of its elements. The applications of the op-model include finding trends in time series, which appear naturally when considering e.g. the stock market or melody matching of two musical scores; see [33]. In such problems periodicity plays a crucial role.
One of the motivations is given by the following scenario. Consider a sequence of numbers that models a time series which is known to repeat the same shape every fixed period of time. For example, this could be certain stock market data or statistics from a social network that are strongly dependent on the day of the week, i.e., repeat the same shape every consecutive week. Our goal is, given a fragment of the sequence, to discover such repeating shapes, called here op-periods. We also consider some special cases of this setting. If the beginning of the fragment is synchronized with the beginning of the repeating shape, we refer to the repeating shape as an initial op-period. If the synchronization also takes place at the end of the fragment, we call the shape a full op-period. Finally, we also consider sliding op-periods, which describe the case when every factor of the sequence repeats the same shape every fixed period of time.
Order-preserving model.
Let $[n]$ denote the set $\{1, 2, \ldots, n\}$. We say that two strings $x$ and $y$ over an integer alphabet are order-equivalent (equivalent, in short), written $x \approx y$, iff $|x| = |y|$ and, for all $i, j \in [|x|]$, $x_i \le x_j \Leftrightarrow y_i \le y_j$.
Example 1.
.
Order-equivalence is a special case of a substring consistent equivalence relation (SCER) that was defined in [38].
For a string $x$ of length $n$, we can create a new string $x'$ of length $n$ such that $x'_i$ is equal to the number of distinct symbols in $x$ that are not greater than $x_i$. The string $x'$ is called the shape of $x$ and is denoted by $\mathsf{shape}(x)$. It is easy to observe that two strings are order-equivalent if and only if they have the same shape.
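Since shapes are used throughout, here is a minimal sketch of this definition in code (the function names are ours, not taken from the cited papers):

```python
def shape(x):
    """shape(x)[i] = number of distinct values in x that are <= x[i]."""
    rank = {v: r + 1 for r, v in enumerate(sorted(set(x)))}
    return [rank[v] for v in x]

def order_equivalent(x, y):
    # Two strings are order-equivalent iff they have the same shape.
    return len(x) == len(y) and shape(x) == shape(y)

# (5, 1, 7) and (40, 2, 41) follow the same relative order:
assert shape([5, 1, 7]) == [2, 1, 3]
assert order_equivalent([5, 1, 7], [40, 2, 41])
```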
Example 2.
.
Periods in the op-model.
We consider several notions of periodicity in the op-model, illustrated by Fig. 1. We say that a string $w$ of length $n$ has a (general) op-period $p$ with shift $s$, where $0 \le s < p \le n$, if and only if $w$ is a factor of a string $w' = v_1 v_2 \cdots v_k$ such that $|v_1| = \cdots = |v_k| = p$, $v_1 \approx v_2 \approx \cdots \approx v_k$, and $w = w'[s + 1 \mathinner{..} s + n]$.
The shape of the op-period is the common shape of its blocks. One op-period can have several shifts; to avoid ambiguity, we sometimes specify the op-period together with its shift. For each op-period we also consider the set of all its shifts.
An op-period is called initial if 0 is one of its shifts, full if it is initial and divides the length of the string, and sliding if every possible shift is among its shifts. Initial and sliding op-periods are particular cases of block-based and sliding-window-based periods for SCERs, both of which were introduced in [38].
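These block conditions can be made concrete with a naive checker. The sketch below (names ours) fixes one convention for the shift — the string starts at offset s inside the first block — and verifies the necessary shape conditions; it does not verify that partial boundary blocks can actually be completed over the integers, so it is a necessary-condition test only.

```python
def shape(x):
    rank = {v: r + 1 for r, v in enumerate(sorted(set(x)))}
    return [rank[v] for v in x]

def has_op_period(w, p, s):
    """Naive check: w can be cut into consecutive length-p blocks of one
    common shape, with w starting at offset s inside the first block.
    Partial boundary blocks are compared against the matching slice of a
    full block."""
    n = len(w)
    assert 0 <= s < p <= n
    starts = list(range(-s, n, p))               # block starts (may be < 0)
    full = [a for a in starts if a >= 0 and a + p <= n]
    if not full:
        return True                              # no full block inside w
    r = full[0]                                  # reference block w[r:r+p]
    if any(shape(w[a:a + p]) != shape(w[r:r + p]) for a in full):
        return False
    if s > 0 and shape(w[:p - s]) != shape(w[r + s:r + p]):
        return False                             # leading partial block
    last = starts[-1]
    if last >= 0 and last + p > n and shape(w[last:]) != shape(w[r:r + n - last]):
        return False                             # trailing partial block
    return True

# 9 1 2 3 1 2 3 has op-period 3 with shift 1 (partial block 9 1,
# then full blocks 2 3 1 | 2 3, matching the same rising-falling shape):
assert has_op_period([9, 1, 2, 3, 1, 2, 3], 3, 1)
```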
Models of periodicity.
In the standard model, a string $w$ of length $n$ has a period $p$ iff $w_i = w_{i+p}$ for all $1 \le i \le n - p$. The famous periodicity lemma of Fine and Wilf [27] states that a “long enough” string with periods $p$ and $q$ also has the period $\gcd(p, q)$. The exact bound of being “long enough” is $p + q - \gcd(p, q)$. This result was generalized to an arbitrary number of periods [10, 32, 41].
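The Fine–Wilf property is easy to probe with a brute-force period test; the sketch below (helper name ours) checks both the property and the tightness of the bound for periods 4 and 6.

```python
def periods(w):
    """All standard periods of w: p is a period iff w[i] == w[i+p]
    for every valid i (so |w| itself is always a period)."""
    n = len(w)
    return [p for p in range(1, n + 1)
            if all(w[i] == w[i + p] for i in range(n - p))]

# p = 4, q = 6: a string of length p + q - gcd(p, q) = 8 with both
# periods must also have period gcd(p, q) = 2 ...
w = [1, 2] * 4
assert {2, 4, 6}.issubset(set(periods(w)))
# ... while at length 7 the bound is tight:
v = list("aaabaaa")
ps = set(periods(v))
assert {4, 6}.issubset(ps) and 2 not in ps
```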
Periods were also considered in a number of non-standard models. Partial words, which are strings with don't-care symbols, possess quite interesting Fine–Wilf-type properties, including probabilistic ones; see [5, 6, 7, 39, 40, 31]. In Section 2, we make use of periodicity graphs introduced in [39, 40]. In the abelian (jumbled) model, a version of the periodicity lemma was shown in [16] and extended in [8]. Also, algorithms for computing three types of periods analogous to full, initial, and general op-periods were designed [20, 25, 26, 34, 35, 36]. In the computation of full and initial op-periods we use some number-theoretic tools initially developed in [34, 35]. Remarkably, the fastest known algorithm for computing general periods in the abelian model has essentially quadratic time complexity [20, 36], whereas for the general op-periods we design a much more efficient solution. A version of the periodicity lemma for the parameterized model was proposed in [2].
Op-periods were first considered in [38], where initial and sliding op-periods were introduced and direct generalizations of the Fine–Wilf property to these kinds of op-periods were developed. A few distinctions between op-periods and periods in other models should be mentioned. First, “to have a period 1” becomes a trivial property in the op-model. Second, all standard periods of a string have the “sliding” property; the first string in Fig. 1 demonstrates that this is not true for op-periods. The last distinction concerns borders. A standard period $p$ of a string $w$ of length $n$ corresponds to a border of $w$ of length $n - p$, which is both a prefix and a suffix of $w$. In the order-preserving setting, an analogue of a border is an op-border, that is, a prefix that is equivalent to the suffix of the same length. Op-borders have properties similar to standard borders and can be computed in $O(n)$ time [37]. However, it is no longer the case that a (general, initial, full, or sliding) op-period must correspond to an op-border; see [38].
Previous algorithmic study of the op-model.
The notion of order-equivalence was introduced in [33, 37]. (However, note the related combinatorial studies, originated in [23], on containment/avoidance of shapes in permutations.) Both [33, 37] studied pattern matching in the op-model (op-pattern matching), which consists in identifying all consecutive factors of a text that are order-equivalent to a given pattern. We assume that the alphabet is integer and, as usual, that it is polynomially bounded with respect to the length of the string, which means that a string can be sorted in linear time (cf. [17]). Under this assumption, for a text of length $n$ and a pattern of length $m$, [33] solve the op-pattern matching problem in $O(n + m \log m)$ time and [37] solve it in $O(n + m)$ time. Other op-pattern matching algorithms were presented in [3, 15].
An index for op-pattern matching based on the suffix tree was developed in [19]. For a text of length $n$ it uses $O(n)$ space and answers op-pattern matching queries for a pattern of length $m$ in optimal $O(m)$ time (or $O(m + \mathit{occ})$ time if we are to report all $\mathit{occ}$ occurrences). The index can be constructed in $O(n \log\log n)$ expected time or $O(n \log^2\log n / \log\log\log n)$ worst-case time. We use the index itself and some of its applications from [19].
Other developments in this area include a multiple-pattern matching algorithm for the op-model [33], an approximate version of op-pattern matching [29], compressed index constructions [13, 22], a small-space index for op-pattern matching that supports only short queries [28], and a number of practical approaches [9, 11, 12, 14, 24].
Our results.
We give algorithms to compute:

all full op-periods in $O(n)$ time;

the smallest non-trivial initial op-period in $O(n)$ time;

all initial op-periods in $O(n \log\log n)$ time;

all sliding op-periods in $O(n \log\log n)$ expected time or $O(n \log^2\log n / \log\log\log n)$ worst-case time (and linear space);

all general op-periods with all their shifts (compactly represented) in $O(n \log n)$ time and space. The output is the family of shift sets, each represented as a union of disjoint intervals. The total number of intervals, over all op-periods, is $O(n \log n)$.
In the combinatorial part, we characterize the Fine–Wilf periodicity property (a.k.a. the interaction property) in the op-model in the case of coprime periods. This result is at the core of the linear-time algorithm for the smallest initial op-period.
Structure of the paper.
Combinatorial foundations of our study are given in Section 2. Then, in Section 3, we recall known algorithms and data structures for the op-model and develop further algorithmic tools. The remaining sections are devoted to the computation of the respective types of op-periods: full and initial op-periods in Section 4, the smallest non-trivial initial op-period in Section 5, all (general) op-periods in Section 6, and sliding op-periods in Section 7.
2 Fine–Wilf Property for Op-Periods
The following result was shown as Theorem 2 in [38]. Note that if the two periods are coprime, then the conclusion is void, as every string has the op-period 1.
Theorem 1 ([38]).
Let $p \ge q$ and $g = \gcd(p, q)$. If a string of length at least $p + q - g$ has initial op-periods $p$ and $q$, it has the initial op-period $g$. Moreover, if a string of length at least $p + q$ has sliding op-periods $p$ and $q$, it has the sliding op-period $g$.
The aim of this section is to show a periodicity lemma for the case when the two op-periods are coprime.
2.1 Preliminary Notation
For a string $w$ of length $n$, by $w_i$ (for $1 \le i \le n$) we denote the $i$th letter of $w$, and by $w[i \mathinner{..} j]$ we denote the factor of $w$ equal to $w_i \cdots w_j$. If $i > j$, then $w[i \mathinner{..} j]$ denotes the empty string $\varepsilon$.
A string which is strictly increasing, strictly decreasing, or constant is called strictly monotone. A strictly monotone op-period of a string is an op-period with a strictly monotone shape. Such an op-period is called increasing (decreasing, constant) if so is its shape. Clearly, any divisor of a strictly monotone op-period is a strictly monotone op-period as well. A string $w$ is 2-monotone if $w = uv$, where $u$ and $v$ are strictly monotone in the same direction.
Below we assume that $\gcd(p, q) = 1$. Let a string have op-periods $p$ and $q$ with shifts $s_p$ and $s_q$, respectively. If there exists a number $c$ within the string such that $c \equiv s_p \pmod{p}$ and $c \equiv s_q \pmod{q}$, we say that these op-periods are synchronized and $c$ is a synchronization point (see Fig. 2).
Remark 1.
The proof of Theorem 1 can be easily adapted to prove the following.
Theorem 2.
Let $p \ge q$ and $g = \gcd(p, q)$. If op-periods $p$ and $q$ of a string of length at least $p + q - g$ are synchronized, then the string has the op-period $g$, synchronized with them.
2.2 Periodicity Theorem For Coprime Periods
For a string $w$, by $\mathsf{trace}(w)$ we denote the string of length $|w| - 1$ over the alphabet $\{-, 0, +\}$ such that $\mathsf{trace}(w)_i = \mathrm{sign}(w_{i+1} - w_i)$ for $1 \le i \le |w| - 1$.
Observation 1.

A string is strictly monotone iff its trace is a unary string.

If $w$ has an op-period $p$ with some shift, then $\mathsf{trace}(w)$ “almost” has a period $p$: namely, $\mathsf{trace}(w)_i = \mathsf{trace}(w)_{i+p}$ for any $i$ with $i + p \le |w| - 1$ such that positions $i$ and $i + 1$ lie in the same block (equivalently, for all $i$ outside one residue class modulo $p$ determined by the shift). (This is because both values equal the sign of the difference between the same two adjacent positions of the shape of the op-period of $w$.)
Example 3.
Consider the string 7 5 8 1 4 6 2 4 5. It has an op-period 3 with shape 2 3 1. The trace of this string is:
- + - + + - + +
The positions giving the remainder 1 modulo 3 are shown in gray; the sequence of the remaining positions is periodic.
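The trace and the “almost periodicity” of Observation 1b can be replayed in code on this very example (a naive sketch; names ours):

```python
def trace(w):
    """trace(w)[i] is the sign of w[i+1] - w[i]; a string is strictly
    monotone iff its trace is unary (Observation 1a)."""
    return "".join("+" if b > a else "-" if b < a else "0"
                   for a, b in zip(w, w[1:]))

t = trace([7, 5, 8, 1, 4, 6, 2, 4, 5])
assert t == "-+-++-++"
# 1-based trace positions 1, 4, 7 cross block boundaries and are exempt;
# all remaining positions satisfy t[i] == t[i+3] (the trace is "almost"
# 3-periodic).
for i in range(len(t) - 3):
    if (i + 1) % 3 != 1:
        assert t[i] == t[i + 3]
```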
To study traces of strings with two op-periods, we use periodicity graphs (see Fig. 3 below), very similar to those introduced in [39, 40] for the study of partial words with two periods. The periodicity graph represents all strings of a given length having the op-period $p$ with shift $s_p$ and the op-period $q$ with shift $s_q$. Its vertex set is the set of positions of the trace. Two positions are connected by an edge iff they contain equal symbols according to Observation 1b. For convenience, we distinguish between $p$-edges and $q$-edges, connecting positions in the same residue class modulo $p$ (resp., modulo $q$). The construction of the periodicity graph is split in two steps: first we build a draft graph (see Fig. 3,a), containing all $p$- and $q$-edges for each residue class, and then delete all edges of the orange clique corresponding to the exceptional class modulo $q$ and all edges of the blue clique corresponding to the exceptional class modulo $p$ (see Fig. 3,b,c). If some vertices belong to the same connected component of the periodicity graph, then the corresponding trace positions carry equal symbols for every string represented by the graph. In particular, if the graph is connected, then the trace is unary and the string is strictly monotone by Observation 1a.
Example 4.
The graph in Fig. 3,b is connected, so all strings having this graph are strictly monotone. On the other hand, some strings with the graph in Fig. 3,c have no monotonicity properties. Thus, the string of length 18 shown there indeed has the op-period 8 with shift 5 and the op-period 5 with shift 2.
It turns out that the existence of two coprime op-periods makes a string “almost” strictly monotone.
Theorem 3.
Let be a string of length that has coprime op-periods and with shifts and , respectively, such that . Then:

if , then has a strictly monotone op-period ;

if and the op-periods are synchronized, then is 2-monotone;

if and the op-periods are synchronized, then is a strictly monotone op-period of ;

if and the op-periods are not synchronized, then is strictly monotone;

if , the op-periods are not synchronized, and is initial, then is strictly monotone;

if and is initial, then is a strictly monotone op-period of .
Proof.
Take a string of length $n$ having op-periods $p$ (with shift $s_p$) and $q$ (with shift $s_q$), where $p > q$. Consider the draft graph (see Fig. 3,a). It consists of $q$ cliques (numbered from 0 to $q - 1$ by residue classes modulo $q$) connected by some $p$-edges. The $p$-edges connect the cliques in a cycle due to the coprimality of $p$ and $q$; thus we have a cyclic order on the cliques: for the clique number $i$, the next one is number $(i + p) \bmod q$. The number of $p$-edges connecting neighboring cliques increases with the length of the string: for long enough strings, every vertex has an adjacent $p$-edge, and every clique is connected to the next one by at least two $p$-edges.
To obtain the periodicity graph, one should delete from the draft graph all $p$-edges of the class $s_p$ modulo $p$ and all $q$-edges of the clique number $s_q$. First consider the effect of deleting the $p$-edges. If the deleted class modulo $p$ has at least three vertices, then after the deletion each clique will still be connected to the next one. Indeed, if we delete the $p$-edges between the vertices $i$, $i + p$, and $i + 2p$, then there are still the $p$-edges $\{i + q, i + p + q\}$ and $\{i + p - q, i + 2p - q\}$, connecting the corresponding cliques. If the deleted class has a single $p$-edge, its deletion will break the connection between two neighboring cliques if they were connected by this single edge. This cannot happen for long enough strings, but may happen for shorter ones; see Fig. 3,c.
Now look at the effect of only deleting the $q$-edges of the clique number $s_q$ from the draft graph. If all vertices in this clique have $p$-edges (which holds for long enough strings), the graph after the deletion remains connected; if not, it consists of a big connected component and one or more isolated vertices from this clique.
Finally, we consider the cumulative effect of deleting the $p$- and $q$-edges. Any synchronization point becomes an isolated vertex. In total, there are two ways of making the draft graph disconnected: break the connection between neighboring cliques distinct from the removed clique (Fig. 4,a) or get isolated vertices in the removed clique (Fig. 4,b). The first way does not work for long enough strings (see above) or if the op-periods are synchronized (then the removed $p$-edge was adjacent to the removed clique). For the second way, only synchronization points are isolated for long enough strings (every other vertex of the clique has a $p$-edge, see above). Note that in this case all non-isolated vertices of the periodicity graph are connected. Hence all positions of the trace, except for the isolated ones, contain the same symbol. So all factors involving no isolated positions are strictly monotone (in the same direction).
At this point all statements of the theorem are straightforward:

all synchronization points are equal modulo $pq$ by the Chinese Remainder Theorem;

all isolated positions are equal modulo $q$;

the condition on the length of the string excludes both ways to disconnect the draft graph;

for an initial op-period, the shift is 0; then either there is no deletion of edges at all, or the cliques connected by a deleted edge are also connected by another edge; so only the disconnection by isolated positions is possible. ∎
3 Algorithmic Toolbox for Op-Model
Let us start by recalling the encoding for op-pattern matching (op-encoding) from [19, 37]. For a string $w$ of length $n$ and $i \in [n]$, we define $\mathit{LMax}_w[i]$ as the last position $j < i$ such that $w_j = \max\{w_k : k < i,\ w_k \le w_i\}$.
If there is no such $j$, then $\mathit{LMax}_w[i] = 0$. Similarly, we define $\mathit{LMin}_w[i]$ as the last position $j < i$ such that $w_j = \min\{w_k : k < i,\ w_k \ge w_i\}$,
and $\mathit{LMin}_w[i] = 0$ if no such $j$ exists. Then $(\mathit{LMax}_w, \mathit{LMin}_w)$ is the op-encoding of $w$. It can be computed efficiently, as mentioned in the following lemma.
Lemma 1 ([37]).
The op-encoding of a string of length $n$ over an integer alphabet can be computed in $O(n)$ time.
The op-encoding can be used to efficiently extend a match.
Lemma 2.
Let $u$ and $v$ be two strings of length $k$ and assume that the op-encoding of $v$ is known. If $u[1 \mathinner{..} k - 1] \approx v[1 \mathinner{..} k - 1]$, one can check if $u \approx v$ in $O(1)$ time.
Proof.
Let $a = \mathit{LMax}_v[k]$ and $b = \mathit{LMin}_v[k]$. Lemma 3 from [19] asserts that, if $v_a < v_k < v_b$, then
$u \approx v$ if and only if $u_a < u_k < u_b$,
and otherwise (i.e., if $v_a = v_k$ or $v_k = v_b$), $u \approx v$ if and only if the corresponding equalities hold in $u$ as well. (Conditions involving $u_a$ or $u_b$ should be omitted when $a = 0$ or $b = 0$, respectively.) ∎
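To make this concrete, the sketch below computes one common variant of the op-encoding naively (quadratic time, whereas Lemma 1 does it in linear time) and uses it to extend a match by one character in constant time. This is an illustration under our own naming, not necessarily the exact encoding of [19, 37].

```python
def op_encoding(v):
    """LMax[i]: last position j < i (1-based; 0 = none) holding the
    largest value <= v[i]; LMin[i]: last j < i holding the smallest
    value >= v[i]. Naive O(n^2) computation."""
    n = len(v)
    lmax, lmin = [0] * n, [0] * n
    for i in range(n):
        best_le = best_ge = None
        for j in range(i):
            if v[j] <= v[i] and (best_le is None or v[j] >= v[best_le]):
                best_le = j
            if v[j] >= v[i] and (best_ge is None or v[j] <= v[best_ge]):
                best_ge = j
        lmax[i] = best_le + 1 if best_le is not None else 0
        lmin[i] = best_ge + 1 if best_ge is not None else 0
    return lmax, lmin

def extends(u, lmax, lmin, v, k):
    """Assuming u[0:k-1] is order-equivalent to v[0:k-1], decide in O(1)
    whether u[0:k] is order-equivalent to v[0:k] (cf. Lemma 2)."""
    a, b = lmax[k - 1], lmin[k - 1]
    if a and not (u[a - 1] < u[k - 1] if v[a - 1] < v[k - 1]
                  else u[a - 1] == u[k - 1]):
        return False
    if b and not (u[k - 1] < u[b - 1] if v[k - 1] < v[b - 1]
                  else u[k - 1] == u[b - 1]):
        return False
    return True
```

For instance, with `v = [3, 1, 4]` and `u = [10, 2, 30]`, the first two characters match in shape, and `extends` confirms that appending the third character preserves equivalence.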
3.1 The $\mathit{opPREF}$ Table
For a string $w$ of length $n$, we introduce a table $\mathit{opPREF}$ such that $\mathit{opPREF}[i]$ is the length of the longest prefix of $w[i \mathinner{..} n]$ that is equivalent to a prefix of $w$. It is a direct analogue of the $\mathit{PREF}$ array used in standard string matching (see [21]) and can be computed similarly in $O(n)$ time using one of the standard encodings for the op-model that were used in [15, 19, 37]; see the lemma below.
Lemma 3.
For a string of length $n$, the $\mathit{opPREF}$ table can be computed in $O(n)$ time.
Proof.
Let $w$ be a string of length $n$. The standard linear-time algorithm for computing the $\mathit{PREF}$ table (see, e.g., [21]) uses the following two properties of the table:

If position $j$ lies inside an already discovered prefix occurrence $w[i \mathinner{..} i + \mathit{PREF}[i] - 1]$ and the corresponding value $\mathit{PREF}[j - i + 1]$ is small enough, then $\mathit{PREF}[j] = \mathit{PREF}[j - i + 1]$.

If we know that $\mathit{PREF}[j] \ge \ell$, then $\mathit{PREF}[j]$ can be computed in time proportional to $\mathit{PREF}[j] - \ell + 1$ by extending the common prefix character by character.

In the case of the $\mathit{opPREF}$ table, the first of these properties extends without alterations due to the transitivity of the $\approx$ relation. As for the second property, the matching prefix can be extended character by character using Lemma 2, provided that the op-encoding of $w$ is known. The op-encoding can be computed in advance using Lemma 1. ∎
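As a reference point for the definition (not for the linear-time algorithm above), the table can be computed from scratch in quadratic time; the sketch below uses our own naming.

```python
def shape(x):
    rank = {v: r + 1 for r, v in enumerate(sorted(set(x)))}
    return [rank[v] for v in x]

def op_pref(w):
    """Quadratic reference implementation: entry i (0-based) is the
    length of the longest prefix of w[i:] order-equivalent to a prefix
    of w."""
    n = len(w)
    pref = [0] * n
    for i in range(n):
        k = 0
        while i + k < n and shape(w[i:i + k + 1]) == shape(w[:k + 1]):
            k += 1
        pref[i] = k
    return pref
```

For example, `op_pref([1, 2, 1, 2])` yields `[4, 1, 2, 1]`: the suffix starting at the third character rises like the prefix for two positions.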
Let us mention an application of the $\mathit{opPREF}$ table that is used further in the algorithms. We denote by $\mathit{lopp}(q)$ (“longest op-periodic prefix”) the length of the longest prefix of a string having $q$ as an initial op-period.
Lemma 4.
For a string of length $n$, the value $\mathit{lopp}(q)$ for a given $q$ can be computed in $O(\mathit{lopp}(q) / q + 1)$ time after $O(n)$-time preprocessing.
Proof.
We start by computing the $\mathit{opPREF}$ table for the string in $O(n)$ time. We assume that $q \le n$. To compute $\mathit{lopp}(q)$, we iterate over the positions $i = q + 1, 2q + 1, 3q + 1, \ldots$ and for each of them check if $\mathit{opPREF}[i] \ge q$. If $i$ is the first position for which this condition is not satisfied (possibly because $i > n$; we then treat $\mathit{opPREF}[i]$ as 0), we have $\mathit{lopp}(q) = i - 1 + \mathit{opPREF}[i]$. Clearly, this procedure works in the desired time complexity. ∎
Remark 2.
Note that it can be the case that . See, e.g., the strings in Fig. 1 and .
3.2 Longest Common Extension Queries
For a string $w$, we define a longest common extension query in the order-preserving model, $\mathit{opLCP}(i, j)$, as the maximum $\ell$ such that $w[i \mathinner{..} i + \ell - 1] \approx w[j \mathinner{..} j + \ell - 1]$. Symmetrically, $\mathit{opLCS}(i, j)$ is the maximum $\ell$ such that $w[i - \ell + 1 \mathinner{..} i] \approx w[j - \ell + 1 \mathinner{..} j]$.
Similarly as in the standard model [18], LCP-queries in the op-model can be answered using lowest common ancestor (LCA) queries in the op-suffix tree; see the following lemma.
Lemma 5.
For a string of length $n$, after preprocessing in $O(n \log\log n)$ expected time or in $O(n \log^2\log n / \log\log\log n)$ worst-case time, one can answer $\mathit{opLCP}$ queries in $O(1)$ time.
Proof.
The order-preserving suffix tree (op-suffix tree) that is constructed in [19] is a compacted trie of op-encodings of all the suffixes of the text. In $O(n \log\log n)$ expected time or $O(n \log^2\log n / \log\log\log n)$ worst-case time one can construct a so-called incomplete version of the op-suffix tree, in which each explicit node may have at most one edge whose first character of the label is not known. Fortunately, for $\mathit{opLCP}$ queries the labels of the edges are not needed; the only required information is the depth of each explicit node and the location of each suffix. Therefore, for this purpose the incomplete op-suffix tree can be treated as a regular suffix tree and preprocessed using a standard lowest common ancestor data structure that requires linear additional preprocessing and can answer queries in $O(1)$ time [4]. ∎
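A naive substitute for these queries, useful for testing, extends the match one position at a time; this is sound because order-equivalence is closed under taking prefixes (names ours):

```python
def shape(x):
    rank = {v: r + 1 for r, v in enumerate(sorted(set(x)))}
    return [rank[v] for v in x]

def op_lcp(w, i, j):
    """Longest l with w[i:i+l] order-equivalent to w[j:j+l] (0-based
    starts). Linear-time per query; [19] answers such queries in O(1)
    after preprocessing."""
    l = 0
    while (i + l < len(w) and j + l < len(w)
           and shape(w[i:i + l + 1]) == shape(w[j:j + l + 1])):
        l += 1
    return l

# In 1 2 4 4 2 5 1 3, the factors starting at positions 0 and 4 agree
# in shape for exactly two characters (both rise), then diverge:
assert op_lcp([1, 2, 4, 4, 2, 5, 1, 3], 0, 4) == 2
```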
3.3 Order-Preserving Squares
A factor $w[i \mathinner{..} i + 2\ell - 1]$ is called an order-preserving square (op-square) iff $w[i \mathinner{..} i + \ell - 1] \approx w[i + \ell \mathinner{..} i + 2\ell - 1]$. For a string $w$ of length $n$, we define, for every half-length $\ell$, the set of positions $i$ such that $w[i \mathinner{..} i + 2\ell - 1]$ is an op-square.
Op-squares were first defined in [19], where an algorithm computing all these sets for a string of length $n$ was shown.
We say that an op-square $w[i \mathinner{..} i + 2\ell - 1]$ is right shiftable if $w[i + 1 \mathinner{..} i + 2\ell]$ is an op-square, and right non-shiftable otherwise. Similarly, the op-square is left shiftable if $w[i - 1 \mathinner{..} i + 2\ell - 2]$ is an op-square, and left non-shiftable otherwise. Using the approach of [19], one can show the following lemma.
Lemma 6.
All the (left and right) non-shiftable op-squares in a string of length $n$ can be computed in $O(n \log n)$ time.
Proof.
We show the algorithm for right non-shiftable op-squares; the computations for left non-shiftable op-squares are symmetric.
Let $w$ be a string of length $n$. An op-square $w[i \mathinner{..} i + 2\ell - 1]$ is called right non-extendible if $i + 2\ell - 1 = n$ or $w[i \mathinner{..} i + 2\ell + 1]$ is not an op-square. We use the following claim.
Claim (See Lemma 18 in [19]).
All the right non-extendible op-squares in a string of length $n$ can be computed in $O(n \log n)$ time.
Note that a right non-shiftable op-square is also right non-extendible, but the converse is not necessarily true. Thus it suffices to filter out the op-squares that are right shiftable. For this, for a right non-extendible op-square $w[i \mathinner{..} i + 2\ell - 1]$ we need to check if $w[i + 1 \mathinner{..} i + 2\ell]$ is an op-square. This condition can be verified in $O(1)$ time after preprocessing using Lemma 5. ∎
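A brute-force enumeration of op-squares and of the right non-shiftable ones among them makes these definitions testable (quadratic-plus sketch, names ours; [19] and Lemma 6 achieve this much faster):

```python
def shape(x):
    rank = {v: r + 1 for r, v in enumerate(sorted(set(x)))}
    return [rank[v] for v in x]

def op_squares(w):
    """All op-squares as (start, half_length) pairs, 0-based, straight
    from the definition."""
    n = len(w)
    return {(i, l)
            for l in range(1, n // 2 + 1)
            for i in range(n - 2 * l + 1)
            if shape(w[i:i + l]) == shape(w[i + l:i + 2 * l])}

def right_nonshiftable(w):
    # A square is right shiftable iff the same half-length square one
    # position to the right is again an op-square.
    sq = op_squares(w)
    return {(i, l) for (i, l) in sq if (i + 1, l) not in sq}
```

On `[1, 2, 1, 2]`, every length-2 factor is an op-square of half-length 1, and the whole string is an op-square of half-length 2; only the rightmost short square and the full square are right non-shiftable.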
4 Computing All Full and Initial Op-Periods
For a string of length , we define for as:
Here we assume that . In the computation of full and initial op-periods we heavily rely on this table according to the following obvious observation.
Observation 2.
$p$ is an initial op-period of a string $w$ of length $n$ if and only if $\mathit{opPREF}[jp + 1] \ge \min(p, n - jp)$ for all $j \ge 1$ with $jp < n$.
4.1 Computing Initial Op-Periods
Observation 3.
$p$ is an initial op-period of a string of length $n$ if and only if $\mathit{lopp}(p) = n$.
The table could be computed straight from the definition in $O(n \log n)$ time. We improve this complexity to $O(n \log\log n)$ by employing Eratosthenes's sieve. The sieve computes, in particular, for each $i \in [n]$ a list of all distinct prime divisors of $i$. We use these divisors to compute the table via dynamic programming in a right-to-left scan, as shown in Algorithm 1.
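The sieve-based preprocessing can be sketched as follows: a smallest-prime-factor sieve is built first, after which the list of distinct prime divisors of any $i \le n$ is recovered in $O(\log i)$ time (a minimal sketch; names ours).

```python
def prime_divisor_lists(n):
    """Sieve of Eratosthenes storing smallest prime factors (spf);
    the list of distinct prime divisors of each i <= n then falls out
    in O(log i) per number. The sieve itself runs in O(n log log n)."""
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:                          # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    divisors = [[] for _ in range(n + 1)]
    for i in range(2, n + 1):
        m = i
        while m > 1:
            p = spf[m]
            divisors[i].append(p)
            while m % p == 0:
                m //= p
    return divisors

# e.g. 12 = 2^2 * 3 has distinct prime divisors [2, 3]
assert prime_divisor_lists(30)[12] == [2, 3]
```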
Theorem 4.
All initial op-periods of a string of length $n$ can be computed in $O(n \log\log n)$ time.
4.2 Computing Full Op-Periods
Let us recall the following auxiliary data structure for efficient gcd computations that was developed in [35]. We will only need a special case of this data structure, answering queries of the form $\gcd(i, n)$.
Fact 1 (Theorem 4 in [35]).
After $O(n)$-time preprocessing, given any $i, j \in [n]$, the value $\gcd(i, j)$ can be computed in constant time.
Let $D(n)$ denote the set of all positive divisors of $n$. In the case of full op-periods we only need to compute the table for indices in $D(n)$. As in Algorithm 1, we start with the $\mathit{opPREF}$ table. Then we perform a preprocessing phase that shifts the information stored in the array from arbitrary indices to indices in $D(n)$; it is based on the fact that the relevant condition for an index holds if and only if it holds for the gcd of this index with $n$ (computed in constant time using Fact 1). Finally, we perform right-to-left processing as in Algorithm 1. However, this time we can afford to iterate over all divisors of elements from $D(n)$. Thus we arrive at the pseudocode of Algorithm 2.
Theorem 5.
All full op-periods of a string of length $n$ can be computed in $O(n)$ time.
5 Computing Smallest Non-Trivial Initial Op-Period
If a string is not strictly monotone itself, all of its strictly monotone op-periods can be computed efficiently. We use this as an auxiliary routine in the computation of the smallest initial op-period that is greater than 1.
Theorem 6.
If a string of length $n$ is not strictly monotone, all of its strictly monotone op-periods can be computed in $O(n)$ time.
Proof.
We show how to compute all the strictly increasing op-periods of a string that is not strictly monotone itself; the computation of strictly decreasing and constant op-periods is analogous. Let $w$ be a string of length $n$. Let $P$ be the set of all positions $i$ in $w$ such that $w_i \ge w_{i+1}$; by the assumption of this theorem, we have $P \ne \emptyset$. This set provides a simple characterization of strictly increasing op-periods of $w$.
Observation 4.
$p$ with shift $s$ is a strictly increasing op-period of a string that is not strictly monotone itself if and only if $i \equiv s \pmod{p}$ for all $i \in P$.
First, assume that $|P| = 1$, say $P = \{i\}$. By Observation 4, each $p \in [n]$ is an op-period of $w$ with the shift $i \bmod p$. From now on we can assume that $|P| \ge 2$.
For a set of positive integers $S$, by $\gcd(S)$ we denote the greatest common divisor of all elements of $S$. The claim below follows from Fact 1. However, we give a simpler proof.
Claim.
If $S \subseteq [n]$, then $\gcd(S)$ can be computed in $O(|S| + \log n)$ time.
Proof.
Let $S = \{a_1, \ldots, a_k\}$ and denote $g_j = \gcd(\{a_1, \ldots, a_j\})$. We want to compute $g_k$.
Note that $g_{j+1}$ divides $g_j$ for all $j < k$. Hence, the sequence $g_1, g_2, \ldots, g_k$ contains at most $\log_2 n + 1$ distinct values.
Set $g_1 = a_1$. To compute $g_{j+1}$, we check if $g_j$ divides $a_{j+1}$. If so, $g_{j+1} = g_j$. Otherwise $g_{j+1} < g_j$ and we compute $g_{j+1}$ using Euclid's algorithm, in $O(1 + \log(g_j / g_{j+1}))$ time. The latter situation takes place at most $\log_2 n$ times and the total time of the Euclid's algorithm invocations telescopes to $O(\log n)$; the conclusion follows. ∎
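The proof of the claim translates directly into code: Euclid's algorithm is invoked only when the running gcd fails to divide the next element, which can happen only logarithmically many times (sketch, name ours).

```python
from math import gcd

def gcd_of_set(s):
    """gcd of a collection of positive integers; Euclid's algorithm is
    called only when the running gcd does not divide the next element,
    i.e. when the gcd strictly decreases -- at most O(log max(s)) times."""
    g = 0
    for a in s:
        if g == 0 or a % g != 0:
            g = gcd(g, a)                # the expensive step, rare
    return g

assert gcd_of_set([12, 18, 30]) == 6
```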
Consider the set $D = \{i - \min(P) : i \in P,\ i \ne \min(P)\}$ and let $g = \gcd(D)$. By Observation 4, $p$ is a strictly increasing op-period of $w$ if and only if $p$ divides $g$, and then its shift is $\min(P) \bmod p$. Thus there is exactly one strictly increasing op-period of each length that divides $g$, and its shift is determined uniquely.
The value $g$ can be computed in $O(n)$ time by the Claim. Afterwards, we find all its divisors by trial division and report the op-periods in $O(\sqrt{n})$ additional time. ∎
Let us start with the following simple property.
Lemma 7.
The shape of the smallest non-trivial initial op-period of a string has no shorter non-trivial full op-period.
Proof.
A full op-period of (the shape of) an initial op-period of a string is itself an initial op-period of the string. ∎
Now we can state a property of initial op-periods, implied by Theorem 3, that is the basis of the algorithm.
Lemma 8.
If a string of length has initial op-periods such that and , then it is strictly monotone.
Proof.
Theorem 7.
The smallest non-trivial initial op-period of a string of length $n$ can be computed in $O(n)$ time.
Proof.
We follow the lines of Algorithm 3. If the string is not strictly monotone itself, we can compute its smallest non-trivial strictly monotone initial op-period using Theorem 6. Otherwise, the smallest such op-period is 2. If the string has a non-trivial strictly monotone initial op-period and the smallest such op-period is , then none of is an initial op-period. Hence, we can safely return it.
Let us now focus on the correctness of the while-loop. The invariant is that there is no initial op-period of that is smaller than . If the value of equals , then is an initial op-period of and we can safely return it. Otherwise, we can advance by 1. There is also no smallest initial op-period such that . Indeed, Lemma 8 would imply that is strictly monotone if (which is impossible due to the initial selection of ) and Theorem 1 would imply an initial op-period of that is smaller than and divides if (which is impossible due to Lemma 7). This justifies the way is increased.
Now let us consider the time complexity of the algorithm. The algorithm for strictly monotone op-periods of Theorem 6 works in time. By Lemma 4, can be computed in time. If , this is . Otherwise, at least doubles; let be the new value of . Then . The case that doubles can take place at most times and the total sum of over such cases is . ∎
6 Computing All Op-Periods
An interval representation of a set of integers is its representation as a union $[a_1 \mathinner{..} b_1] \cup [a_2 \mathinner{..} b_2] \cup \cdots \cup [a_k \mathinner{..} b_k]$ of disjoint intervals with $b_i + 1 < a_{i+1}$ for all $i$; $k$ is called the size of the representation.
Our goal is to compute a compact representation of all the op-periods of a string that contains, for each op-period, an interval representation of the set of its shifts.
For an integer set $X$ and a positive integer $q$, by $X \bmod q$ we denote the set $\{x \bmod q : x \in X\}$. The following technical lemma provides efficient operations on interval representations of sets.
Lemma 9.

Assume that $X$ and $Y$ are two sets with interval representations of sizes $k$ and $l$, respectively. Then the interval representation of the set $X \cap Y$ can be computed in $O(k + l)$ time.

Assume that $X_1, \ldots, X_m$ are sets with interval representations of total size $K$ and $q_1, \ldots, q_m \le n$ are positive integers. Then the interval representations of all the sets $X_1 \bmod q_1, \ldots, X_m \bmod q_m$ can be computed in $O(K + n)$ time.
Proof.
To compute $X \cap Y$ in point a, it suffices to merge the lists of endpoints of intervals in the interval representations of $X$ and $Y$. Let $L$ be the merged list. With each element of $L$ we store a weight $+1$ if it represents the beginning of an interval and $-1$ if it represents the endpoint of an interval. We compute the prefix sums of these weights over $L$. Then, by considering all elements with a prefix sum equal to 2 together with their following elements in $L$, we can restore the interval representation of $X \cap Y$.
Let us proceed to point b. Note that, for an interval $[a \mathinner{..} b]$, the set $[a \mathinner{..} b] \bmod q$ either equals $[0 \mathinner{..} q - 1]$ if $b - a + 1 \ge q$, or otherwise is a union of at most two intervals. For each interval in the representation of $X_i$, we compute the interval representation of its image modulo $q_i$. Now it suffices to compute the union of these intervals for each $i$. This can be done exactly as in point a provided that the endpoints of the intervals comprising the representations are sorted. We perform the sorting simultaneously for all $i$ using bucket sort [17]. The total number of endpoints is $O(K)$ and the number of possible values of endpoints is at most $n$. This yields the desired time complexity of point b. ∎
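Point a of the lemma can be sketched directly: merge the interval endpoints with +1/−1 weights and report the stretches where the running sum equals 2 (names ours; closed integer intervals given as pairs).

```python
def intersect_intervals(xs, ys):
    """Intersection of two sets given as lists of disjoint closed
    integer intervals (a, b), via the weighted endpoint merge;
    linear in the number of intervals once events are ordered."""
    events = []
    for a, b in xs:
        events.append((a, 1))
        events.append((b + 1, -1))   # close just past b
    for a, b in ys:
        events.append((a, 1))
        events.append((b + 1, -1))
    events.sort()                     # stands in for the linear merge
    res, depth, start = [], 0, None
    for x, w in events:
        before = depth
        depth += w
        if before < 2 and depth == 2:
            start = x                 # both sets now cover x
        elif before == 2 and depth < 2:
            res.append((start, x - 1))
    return res

assert intersect_intervals([(1, 3), (6, 8)], [(2, 7)]) == [(2, 3), (6, 7)]
```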
Lemma 10.
For a string of length , interval representations of the sets for all can be computed in time.
Proof.
Let us define the following two auxiliary sets.
By Lemma 6, all the sets and can be computed in time. In particular, .
Let us note that, for each , . Thus let and . The interval representation of the set is . Clearly, it can be computed in time. ∎
We will use the following characterization of opperiods.
Observation 5.
is an op-period of with shift if and only if all the following conditions hold:

is an op-square for every ,

,

.
Theorem 8.
A representation of size $O(n \log n)$ of all the op-periods of a string of length $n$ can be computed in $O(n \log n)$ time.
7 Computing Sliding Op-Periods
For a string $w$ of length $n$, we define a family of strings $s^{(1)}, \ldots, s^{(n)}$ such that $s^{(p)}_i = \mathsf{shape}(w[i \mathinner{..} i + p - 1])$ for $1 \le i \le n - p + 1$. Note that the characters of these strings are shapes. Moreover, the total length of the strings is quadratic in $n$, so we will not compute them explicitly. Instead, we use the following observation to test if two symbols are equal.
Observation 6.
Two characters of one of these strings are equal if and only if the corresponding factors of the underlying string are order-equivalent.
Sliding op-periods admit an elegant characterization based on these strings; see Figure 5.
Lemma 11.
An integer , , is a sliding op-period of if and only if and is a period of , or and .
Proof.
If is a sliding op-period, then . Consequently, Observation 5 yields that is an op-square for every and that .
If , then the former property yields