1. Introduction and Motivation
One of the most interesting and least studied problems in pattern matching is known as subsequence string matching or hidden pattern matching [12]. In this case, we search for a pattern $w = w_1 w_2 \cdots w_m$ of length $m$ in the text $\Xi_n = \xi_1 \xi_2 \cdots \xi_n$ of length $n$ as a subsequence; that is, we are looking for indices $1 \le i_1 < i_2 < \cdots < i_m \le n$ such that $\xi_{i_1} = w_1, \xi_{i_2} = w_2, \ldots, \xi_{i_m} = w_m$. We say that $w$ is hidden in the text $\Xi_n$. We do not put any constraints on the gaps $i_{j+1} - i_j$, so in the language of [8] this is known as the unconstrained hidden pattern matching. The most interesting quantity of such a problem is the number of subsequence occurrences in a text generated by a random source. In this paper, we study the limiting distribution of this quantity when $m$, the length of the pattern, grows with $n$.
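As a concrete illustration of the quantity studied here (an aside of ours, not part of the paper's analysis), the number of subsequence occurrences can be computed in $O(nm)$ time by a standard dynamic program; the function name is our own.

```python
def count_occurrences(text, pattern):
    """Number of ways `pattern` occurs in `text` as a subsequence."""
    # dp[j] = number of ways to match the first j pattern letters
    # using the prefix of the text scanned so far.
    dp = [0] * (len(pattern) + 1)
    dp[0] = 1  # the empty pattern matches in exactly one way
    for c in text:
        # Go through j in decreasing order so that the current text
        # position is used at most once within each occurrence.
        for j in range(len(pattern), 0, -1):
            if pattern[j - 1] == c:
                dp[j] += dp[j - 1]
    return dp[len(pattern)]
```

For example, `count_occurrences("abab", "ab")` returns 3, corresponding to the index pairs $(1,2)$, $(1,4)$, $(3,4)$.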
Hereafter, we assume that a memoryless source generates the text $\Xi_n$; that is, all symbols are generated independently, with symbol $a$ occurring with probability $p_a$, where the alphabet $\mathcal{A}$ is assumed to be finite. We denote by $P(w)$ the probability of the pattern $w$. Our goal is to understand the probabilistic behavior, in particular the limiting distribution, of the number of subsequence occurrences, which we denote by $Z$. It is known that the behavior of $Z$ depends on the order of magnitude of the pattern length $m$. For example, for exact pattern matching (i.e., when the pattern must occur at consecutive positions of the text), the limiting distribution is normal for short patterns (more precisely, when $nP(w) \to \infty$, hence for $m$ up to $O(\log n)$), but it becomes a Pólya–Aeppli distribution when $nP(w) \to \lambda$ for some constant $\lambda > 0$, and finally (conditioned on being nonzero) it turns into a geometric distribution when $nP(w) \to 0$
[12] (see also [2]). We might expect a similar behaviour for subsequence pattern matching. In [8] it was proved by methods of analytic combinatorics that the number of subsequence occurrences $Z$ is asymptotically normal when the pattern length $m$ is fixed, and not much is known beyond this regime. (See also [3]. Asymptotic normality for fixed $m$ also follows from general results for $U$-statistics [10].) However, in many applications – as discussed below – we need to consider patterns whose lengths grow with $n$. In this paper, we prove two main results. In Theorem 2.6 we establish that for $m = o(\sqrt{n}\,)$ the number of subsequence occurrences is normally distributed. Furthermore, in Theorem 2.7 we show that under some constraints on the structure of $w$, the asymptotic normality can be extended to $m = o(n^{2/3})$. Moreover, for the special pattern $w = a^m$, consisting of a single symbol repeated, we show in Theorem 2.4 that for $m = o(\sqrt{n}\,)$ the distribution of the number of occurrences is asymptotically normal, while for larger $m$ (up to $m \le cn$ for some $c > 0$) it is asymptotically lognormal. We conjecture that this dichotomy holds for a large class of patterns. Finally, for a typical random pattern $w$ we establish asymptotic normality in Corollary 4.4.

Regarding methodology, unlike [8] we use probabilistic tools here. We first observe that $Z$ can be represented as a $U$-statistic (see (2.3) and Section 3.2). This suggests applying Hoeffding's projection method [10] to prove asymptotic normality of $Z$ for some large patterns. Indeed, we first decompose $Z$ into a sum of orthogonal random variables with variances of decreasing order in $n$ (for $m$ not too large), and show that the variable with the largest variance converges to a normal distribution, proving our main results, Theorems 2.6 and 2.7.

The hidden pattern matching problem, especially for large patterns, finds many applications, from intrusion detection to trace reconstruction, the deletion channel, and DNA-based storage systems [1; 4; 5; 6; 12; 17]. Below we discuss two of them in some detail, namely the deletion channel and the trace reconstruction problem.
A deletion channel [5; 6; 7; 14; 17; 20] with parameter $d$ takes a binary sequence $X = x_1 \cdots x_n$, where $x_i \in \{0,1\}$, as input and deletes each symbol in the sequence independently with probability $d$. The output of such a channel is then a subsequence $Y = x_{i_1} \cdots x_{i_M}$ of $X$, where $M$ follows the binomial distribution $\mathrm{Bin}(n, 1-d)$, and the indices $i_1 < \cdots < i_M$ correspond to the bits that are not deleted. Despite significant effort [6; 14; 15; 17; 20], the mutual information between the input and output of the deletion channel and its capacity are still unknown. However, it turns out that the mutual information can be formulated exactly in terms of the subsequence pattern matching problem. In [5] it was proved that
(1.1)
where the sum in (1.1) is over all binary sequences $w$ of length smaller than $n$, and $Z_w(X)$ denotes the number of subsequence occurrences of $w$ in the text $X$. As one can see, to find precise asymptotics of the mutual information we need to understand the probabilistic behavior of $Z_w$ for long patterns and typical $w$. The trace reconstruction problem [4; 11; 16; 18] is related to the deletion channel problem, since there we ask how many independent copies of the deletion channel output must be seen until we can reconstruct the input sequence with high probability.
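To make the channel model concrete, here is a small simulation sketch (ours, not from [5]; the helper names are our own): each bit of the input survives independently with probability $1-d$, so the output is by construction a subsequence of the input.

```python
import random

def deletion_channel(bits, d, rng):
    """Keep each symbol independently with probability 1 - d."""
    return [b for b in bits if rng.random() >= d]

def is_subsequence(sub, seq):
    # Greedy check: each element of `sub` must appear, in order, in `seq`.
    it = iter(seq)
    return all(any(b == c for c in it) for b in sub)

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(1000)]
y = deletion_channel(x, d=0.3, rng=rng)
# len(y) is distributed as Bin(1000, 0.7), and y is a subsequence of x.
```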
2. Main Results
In this section we formulate our problem precisely and present our main results. Proofs are deferred to the next section.
2.1. Problem formulation and notation
We consider a random string $\Xi_n = \xi_1 \cdots \xi_n$ of length $n$. We assume that $\xi_1, \dots, \xi_n$ are i.i.d. random letters from a finite alphabet $\mathcal{A}$; each letter has the distribution
(2.1) $\mathbb{P}(\xi_i = a) = p_a, \qquad a \in \mathcal{A},$
for some given vector $(p_a)_{a \in \mathcal{A}}$; we assume $p_a > 0$ for each $a \in \mathcal{A}$. We may also use $\xi$ for a random letter with this distribution.

Let $w = w_1 \cdots w_m$ be a fixed string of length $m$ over the same alphabet $\mathcal{A}$. We assume $1 \le m \le n$. Let
(2.2) $P(w) := \prod_{j=1}^{m} p_{w_j},$
which is the probability that $\xi_1 \cdots \xi_m$ equals $w$.
Let $Z$ be the number of occurrences of $w$ as a subsequence of $\Xi_n$.
For a set $S$ (in our case $[n] := \{1, \dots, n\}$ or $[m]$) and $k \ge 0$, let $\binom{S}{k}$ be the collection of subsets $I \subseteq S$ with $|I| = k$. Thus, $\bigl|\binom{S}{k}\bigr| = \binom{|S|}{k}$. For $k = 0$, $\binom{S}{0}$ contains just the empty set $\emptyset$. For $k = 1$, we identify $\binom{S}{1}$ and $S$ in the obvious way. We write $I \in \binom{[n]}{m}$ as $\{i_1, \dots, i_m\}$, where we assume that $i_1 < i_2 < \cdots < i_m$. Then
(2.3) $Z = \sum_{I \in \binom{[n]}{m}} \mathbf{1}_I,$
where
(2.4) $\mathbf{1}_I := \prod_{j=1}^{m} \mathbf{1}\{\xi_{i_j} = w_j\}.$
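The representation of $Z$ as a sum of indicators over index sets can be checked directly on small instances (a sketch of ours, not from the paper):

```python
from itertools import combinations

def Z_by_representation(text, w):
    # Z = sum over I in C([n], m) of 1_I, with
    # 1_I = prod_j 1{ xi_{i_j} = w_j }.
    n, m = len(text), len(w)
    return sum(
        all(text[i] == w[j] for j, i in enumerate(I))
        for I in combinations(range(n), m)
    )
```

For instance, `Z_by_representation("abab", "ab")` returns 3, matching the occurrences at index pairs $(1,2)$, $(1,4)$, $(3,4)$.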
Remark 2.1.
In the limit theorems, we are studying the asymptotic distribution of $Z$ as $n \to \infty$. We then assume that $m \to \infty$ and (usually) $m = o(n)$; we thus implicitly consider a sequence of patterns $w = w^{(n)}$ of lengths $m = m(n)$. But for simplicity we do not show this in the notation. ∎
We have $\mathbb{E}\,\mathbf{1}_I = P(w)$ for every $I \in \binom{[n]}{m}$. Hence,
(2.5) $\mathbb{E} Z = \binom{n}{m} P(w).$
Further, let
(2.6) $\widetilde{Z} := Z - \mathbb{E} Z,$
so $\mathbb{E}\widetilde{Z} = 0$, and
(2.7) $W := \frac{Z}{P(w)},$
so $\mathbb{E} W = \binom{n}{m}$ and
(2.8) $\widetilde{W} := W - \mathbb{E} W = \frac{\widetilde{Z}}{P(w)}.$
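The identity $\mathbb{E}Z = \binom{n}{m} P(w)$ can be verified exactly on a tiny instance by enumerating all texts (our sketch; the source distribution below is an arbitrary choice):

```python
from itertools import combinations, product
from math import comb, prod, isclose

p = {'a': 0.3, 'b': 0.7}   # an arbitrary memoryless source
n, w = 5, "ab"

def Z(text):
    # Brute-force count of subsequence occurrences of w in text.
    return sum(
        all(text[i] == w[j] for j, i in enumerate(I))
        for I in combinations(range(n), len(w))
    )

# E[Z] computed by exhaustive enumeration of all |A|^n texts.
EZ = sum(
    prod(p[c] for c in text) * Z(text)
    for text in product('ab', repeat=n)
)
Pw = prod(p[c] for c in w)                 # P(w) = p_a * p_b = 0.21
assert isclose(EZ, comb(n, len(w)) * Pw)   # C(5,2) * 0.21 = 2.1
```

The equality also follows directly by linearity of expectation over the $\binom{n}{m}$ index sets.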
We also write $\|Y\| := \bigl(\mathbb{E}|Y|^2\bigr)^{1/2}$ for the $L^2$ norm of a random variable $Y$, while $|x|$ is the usual Euclidean norm of a vector $x$ in some $\mathbb{R}^N$.
$C$ denotes constants that may be different at different occurrences; they may depend on the alphabet $\mathcal{A}$ and the distribution $(p_a)_{a \in \mathcal{A}}$, but not on $n$, $m$, or $w$.
Finally, $\xrightarrow{\mathrm{d}}$ and $\xrightarrow{\mathrm{p}}$ mean convergence in distribution and in probability, respectively.
We are now ready to present our main results regarding the limiting distribution of $Z$, the number of subsequence occurrences, when $m \to \infty$. We start with a simple example, namely $w = a^m$ for some $a \in \mathcal{A}$, and show that, depending on whether $m = o(\sqrt{n}\,)$ or not, the number of subsequences will asymptotically follow either the normal distribution or the lognormal distribution.
Before we present our results we consider asymptotically normal and lognormal distributions in general, and discuss their relation.
2.2. Asymptotic normality and lognormality
If $(X_n)$ is a sequence of random variables and $(a_n)$ and $(b_n)$ are sequences of real numbers, with $b_n > 0$, then
(2.9) $X_n \in \mathrm{AsN}\bigl(a_n, b_n^2\bigr)$
means that
(2.10) $\frac{X_n - a_n}{b_n} \xrightarrow{\mathrm{d}} N(0, 1).$
We say that $X_n$ is asymptotically normal if (2.9) holds for some $a_n$ and $b_n$, and asymptotically lognormal if $\ln X_n \in \mathrm{AsN}(a_n, b_n^2)$ for some $a_n$ and $b_n$ (this assumes $X_n > 0$). Note that these notions are equivalent when the asymptotic variance is small, as made precise by the following lemma.
Lemma 2.2.
If $b_n \to 0$, and $a_n$ are arbitrary, then
(2.11) $\ln X_n \in \mathrm{AsN}\bigl(a_n, b_n^2\bigr) \iff X_n \in \mathrm{AsN}\bigl(e^{a_n}, e^{2a_n} b_n^2\bigr).$
Proof.
By replacing $X_n$ by $X_n e^{-a_n}$, we may assume that $a_n = 0$. If $\ln X_n \in \mathrm{AsN}(0, b_n^2)$ with $b_n \to 0$, then $\ln X_n = b_n \zeta_n$ with $\zeta_n \xrightarrow{\mathrm{d}} N(0,1)$, and thus $\ln X_n \xrightarrow{\mathrm{p}} 0$. It follows that $X_n = e^{b_n \zeta_n} = 1 + b_n \zeta_n + o_{\mathrm{p}}(b_n)$ (with $o_{\mathrm{p}}(b_n)$ denoting a term that is $o(b_n)$ in probability), and thus
(2.12) $\frac{X_n - 1}{b_n} = \zeta_n + o_{\mathrm{p}}(1) \xrightarrow{\mathrm{d}} N(0,1),$
and thus $X_n \in \mathrm{AsN}(1, b_n^2)$.
The converse is proved by the same argument. ∎
Remark 2.3.
Lemma 2.2 is best possible. Suppose that $\ln X_n \in \mathrm{AsN}(0, b_n^2)$. If $b_n \to b \in (0, \infty)$, then $\ln X_n \xrightarrow{\mathrm{d}} N(0, b^2)$, and thus
(2.13) $X_n \xrightarrow{\mathrm{d}} e^{N(0, b^2)}.$
In this case (and only in this case), $X_n$ thus converges in distribution, after scaling, to a lognormal distribution. If $b_n \to \infty$, then no linear scaling of $X_n$ can converge in distribution to a nondegenerate limit, as is easily seen. ∎
2.3. A simple example
We consider first a simple example where the asymptotic distribution can be found easily by explicit calculations. Fix $a \in \mathcal{A}$ and let $w = a^m := a \cdots a$, a string with $m$ identical letters. Then, if $X := |\{i \le n : \xi_i = a\}|$ is the number of occurrences of $a$ in $\Xi_n$, then
(2.14) $Z = \binom{X}{m}.$
We will show that $Z$ is asymptotically normal if $m$ is small, and lognormal for larger $m$.
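The relation between the occurrence count and the binomial coefficient for the single-letter pattern is easy to confirm numerically: every occurrence of $a^m$ is just a choice of $m$ of the $X$ positions holding the letter $a$ (our sketch, with an arbitrary example text):

```python
from itertools import combinations
from math import comb

def Z(text, w):
    # Brute-force subsequence occurrence count.
    return sum(
        all(text[i] == w[j] for j, i in enumerate(I))
        for I in combinations(range(len(text)), len(w))
    )

text = "abaabbaba"          # X = 5 occurrences of 'a'
X = text.count('a')
for m in range(1, X + 1):
    assert Z(text, 'a' * m) == comb(X, m)   # Z = C(X, m)
```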
Theorem 2.4.
Let $w = a^m$ and write $p := p_a$, $q := 1 - p$. Suppose that $m = m(n) \to \infty$, with $m \le cn$ for some constant $c < p$.

Then
(2.15) $\dfrac{\ln Z - \ln\binom{np}{m}}{\sqrt{npq}\,\ln\bigl(np/(np - m)\bigr)} \xrightarrow{\mathrm{d}} N(0,1)$
(with $\binom{x}{m} := \frac{\Gamma(x+1)}{\Gamma(m+1)\Gamma(x-m+1)}$ for real $x$). In particular, if $m = o(n)$, then
(2.16) $\dfrac{\ln Z - \ln\binom{np}{m}}{m\sqrt{q/(np)}} \xrightarrow{\mathrm{d}} N(0,1).$
If $m = o(\sqrt{n}\,)$, then this implies
(2.17) $\dfrac{Z - \binom{np}{m}}{\binom{np}{m}\, m\sqrt{q/(np)}} \xrightarrow{\mathrm{d}} N(0,1)$ and thus
(2.18) $\dfrac{Z - \mathbb{E} Z}{\sqrt{\mathrm{Var}\, Z}} \xrightarrow{\mathrm{d}} N(0,1).$
Proof.
(i): We have $Z = \binom{X}{m}$, where $X \sim \mathrm{Bin}(n, p)$ is the number of occurrences of $a$ in $\Xi_n$. Define
$\zeta_n := (X - np)/\sqrt{npq}$. Then, by the central limit theorem,
(2.19) $\zeta_n \xrightarrow{\mathrm{d}} N(0,1).$
By (2.14), we have
(2.20) $\ln Z = \ln\binom{X}{m} = \ln\Gamma(X+1) - \ln\Gamma(m+1) - \ln\Gamma(X-m+1),$
where $\Gamma$ is the Euler gamma function. We fix a sequence $\omega_n \to \infty$ such that $\omega_n \sqrt{n} = o(np - m)$; this is possible by the assumption $m \le cn$ with $c < p$. Note that (2.19) implies that $X - np = O_{\mathrm{p}}(\sqrt{n})$, and thus $\mathbb{P}\bigl(|X - np| > \omega_n\sqrt{n}\bigr) \to 0$. We may thus in the sequel assume $|X - np| \le \omega_n\sqrt{n}$. We assume also that $n$ is so large that $np - m > \omega_n\sqrt{n}$.
2.4. General results
We now present our main results. However, first we discuss the road map of our approach. First, we observe that the representation (2.3) shows that $Z$ can be viewed as a $U$-statistic. For convenience, we consider $W = Z/P(w)$ in (2.7), which differs from $Z$ by a constant factor only, and show in (3.18) that $W$ can be decomposed into a sum of orthogonal random variables $W_0, W_1, \dots, W_m$ such that, when $m$ is not too large, $W_1$ dominates the variance. Next, in Lemma 3.7 we prove that $W_1$, appropriately normalized, converges to the standard normal distribution. This will allow us to conclude the asymptotic normality of $Z$.
In this paper, we only consider the region $m = o(n^{2/3})$. First, for $m = o(\sqrt{n}\,)$ we claim that the number of subsequence occurrences always is asymptotically normal.
Theorem 2.6.
If $m = o(\sqrt{n}\,)$, then
(2.28) $\frac{Z - \mathbb{E} Z}{\sigma_n} \xrightarrow{\mathrm{d}} N(0,1),$
where
(2.29) $\sigma_n^2 := P(w)^2\,\mathrm{Var}\, W_1,$
with $W_1$ the first projection term of the orthogonal decomposition constructed in Section 3. Furthermore, $\mathbb{E} Z = \binom{n}{m} P(w)$ and $\mathrm{Var}\, Z \sim \sigma_n^2$.
In the second main result, we restrict ourselves to patterns $w$ whose letter proportions are not typical for the random text; however, we will allow $m$ to grow faster than $\sqrt{n}$.
Theorem 2.7.
Let $q_a := m_a/m$ be the proportions of the letters in $w$, i.e., $m_a := |\{j \le m : w_j = a\}|$. Suppose that $\liminf_{n \to \infty} \max_{a \in \mathcal{A}} |q_a - p_a| > 0$. If further $m = o(n^{2/3})$, then we have the asymptotic normality
(2.30) $\frac{Z - \mathbb{E} Z}{\sigma_n} \xrightarrow{\mathrm{d}} N(0,1),$
where $\sigma_n^2$ is given by (2.29). Furthermore, $\mathbb{E} Z = \binom{n}{m} P(w)$ and $\mathrm{Var}\, Z \sim \sigma_n^2$.
3. Analysis and Proofs
In this section we will prove our main results. We start with some preliminaries.
3.1. Preliminaries and more notation
Let, for $a \in \mathcal{A}$,
(3.1) $\psi_a(x) := \frac{\mathbf{1}\{x = a\}}{p_a} - 1, \qquad x \in \mathcal{A}.$
Thus, letting $\xi$ be any random variable with the distribution of $\xi_1$,
(3.2) $\mathbb{E}\,\psi_a(\xi) = \frac{\mathbb{P}(\xi = a)}{p_a} - 1 = 0, \qquad a \in \mathcal{A}.$
Let $p_{\min} := \min_{a \in \mathcal{A}} p_a > 0$ and
(3.3) 
Lemma 3.1.
Let $\xi$ and $\psi_a$, $a \in \mathcal{A}$, be as above.

For every $a \in \mathcal{A}$,
(3.4) $\mathbb{E}\,\psi_a(\xi)^2 = \frac{1}{p_a} - 1 \le C.$
For some $c > 0$ and every $a \in \mathcal{A}$,
(3.5) 
For any vector $t = (t_a)_{a \in \mathcal{A}}$ with $\min_{a \in \mathcal{A}} t_a = 0$,
(3.6)
3.2. A decomposition
The representation (2.3) shows that $Z$ is a special case of a $U$-statistic. (Recall that, in general, a $U$-statistic is a sum over subsets $I \in \binom{[n]}{m}$ as in (2.3) of $f(\xi_{i_1}, \dots, \xi_{i_m})$ for some function $f$.) For fixed $m$, the general theory of Hoeffding [10] applies and yields asymptotic normality. (Cf. [13, Section 4] for a related problem.) For $m = m(n) \to \infty$ (our main interest), we can still use the orthogonal decomposition of [10], which in our case takes the following form.
By the definitions in Section 2.1 and (3.1) (where $\psi_a(x) = \mathbf{1}\{x = a\}/p_a - 1$),
(3.10) $\mathbf{1}_I = \prod_{j=1}^{m} \mathbf{1}\{\xi_{i_j} = w_j\} = P(w) \prod_{j=1}^{m} \bigl(1 + \psi_{w_j}(\xi_{i_j})\bigr).$
By multiplying out this product, we obtain
(3.11) $\mathbf{1}_I = P(w) \sum_{J \subseteq [m]} \prod_{j \in J} \psi_{w_j}(\xi_{i_j}).$
Hence,
(3.12) $W = \frac{Z}{P(w)} = \sum_{I \in \binom{[n]}{m}} \sum_{J \subseteq [m]} \prod_{j \in J} \psi_{w_j}(\xi_{i_j}).$
We rearrange this sum. First, let $k := |J|$, and consider all terms with a given $k$. For each $J = \{j_1, \dots, j_k\} \in \binom{[m]}{k}$ and $L = \{\ell_1, \dots, \ell_k\} \in \binom{[n]}{k}$, with $j_1 < \cdots < j_k$ and $\ell_1 < \cdots < \ell_k$, let
(3.13) $Q_{J,L} := \prod_{r=1}^{k} \psi_{w_{j_r}}(\xi_{\ell_r}).$
For given $J$ and $L$, the number of $I \in \binom{[n]}{m}$ such that $i_{j_r} = \ell_r$ for every $r \in \{1, \dots, k\}$ equals the number of ways to choose, for each $r \in \{0, \dots, k\}$, $j_{r+1} - j_r - 1$ elements of $I$ in a gap of length $\ell_{r+1} - \ell_r - 1$, where we define $j_0 = \ell_0 := 0$ and $j_{k+1} := m+1$, $\ell_{k+1} := n+1$; this number is
(3.14) $b(J, L) := \prod_{r=0}^{k} \binom{\ell_{r+1} - \ell_r - 1}{j_{r+1} - j_r - 1}.$
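The gap-counting argument behind this product of binomial coefficients can be checked exhaustively on a small case (our sketch; indices are 1-based, with the conventions $j_0 = \ell_0 = 0$, $j_{k+1} = m+1$, $\ell_{k+1} = n+1$):

```python
from itertools import combinations
from math import comb, prod

def b(J, L, n, m):
    # Product over the k+1 gaps, with j_0 = l_0 = 0,
    # j_{k+1} = m+1 and l_{k+1} = n+1.
    k = len(J)
    Jx, Lx = [0, *J, m + 1], [0, *L, n + 1]
    return prod(
        comb(Lx[r + 1] - Lx[r] - 1, Jx[r + 1] - Jx[r] - 1)
        for r in range(k + 1)
    )

def b_bruteforce(J, L, n, m):
    # Number of m-subsets I of [n] whose j_r-th smallest element is l_r.
    return sum(
        all(I[j - 1] == l for j, l in zip(J, L))
        for I in combinations(range(1, n + 1), m)
    )

n, m = 8, 5
for k in range(0, m + 1):
    for J in combinations(range(1, m + 1), k):
        for L in combinations(range(1, n + 1), k):
            assert b(J, L, n, m) == b_bruteforce(J, L, n, m)
```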
Consequently, combining the terms in (3.12) with the same $Q_{J,L}$,
(3.15) $W = \sum_{k=0}^{m} \sum_{J \in \binom{[m]}{k}} \sum_{L \in \binom{[n]}{k}} b(J, L)\, Q_{J,L}.$
We define, for $0 \le k \le m$ and $J \in \binom{[m]}{k}$,
(3.16) $W_J := \sum_{L \in \binom{[n]}{k}} b(J, L)\, Q_{J,L},$
and
(3.17) $W_k := \sum_{J \in \binom{[m]}{k}} W_J.$
Thus (3.15) yields the decomposition
(3.18) $W = \sum_{k=0}^{m} W_k.$
For $k = 0$, $\binom{[m]}{0}$ contains only the empty set $\emptyset$, and
(3.19) $W_0 = W_\emptyset = b(\emptyset, \emptyset)\, Q_{\emptyset,\emptyset} = \binom{n}{m}.$
Furthermore, note that two summands in (3.15) with different $L$ are orthogonal, as a consequence of (3.2) and the independence of different $\xi_i$: if $\ell \in L \setminus L'$, then $\xi_\ell$ appears in exactly one of the two products, and its factor has mean zero. Consequently, the variables $V_{k,L} := \sum_{J \in \binom{[m]}{k}} b(J, L)\, Q_{J,L}$ ($L \in \binom{[n]}{k}$, $0 \le k \le m$) are orthogonal, and hence the variables $W_k$ ($0 \le k \le m$) are orthogonal.
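Both the pointwise decomposition of $W$ into the $W_k$ and the orthogonality of the $W_k$ can be verified exactly on a tiny instance by enumerating all texts (our sketch; all objects are defined inside the snippet, and the alphabet and probabilities are arbitrary choices):

```python
from itertools import combinations, product
from math import comb, prod, isclose

p = {'a': 0.4, 'b': 0.6}
w, n = "ab", 4
m = len(w)

def psi(a, x):                      # psi_a(x) = 1{x=a}/p_a - 1
    return (x == a) / p[a] - 1.0

def b(J, L):                        # gap-product formula
    k = len(J)
    Jx, Lx = [0, *J, m + 1], [0, *L, n + 1]
    return prod(comb(Lx[r+1] - Lx[r] - 1, Jx[r+1] - Jx[r] - 1)
                for r in range(k + 1))

def W_k(text, k):                   # k-th term of the decomposition
    return sum(
        b(J, L) * prod(psi(w[j-1], text[l-1]) for j, l in zip(J, L))
        for J in combinations(range(1, m + 1), k)
        for L in combinations(range(1, n + 1), k)
    )

def W(text):                        # W = Z / P(w)
    Z = sum(all(text[i] == w[j] for j, i in enumerate(I))
            for I in combinations(range(n), m))
    return Z / prod(p[c] for c in w)

texts = [(''.join(t), prod(p[c] for c in t)) for t in product('ab', repeat=n)]
E = lambda f: sum(pr * f(t) for t, pr in texts)

for t, _ in texts:                  # W = W_0 + ... + W_m pointwise
    assert isclose(W(t), sum(W_k(t, k) for k in range(m + 1)), abs_tol=1e-9)
assert isclose(E(lambda t: W_k(t, 1) * W_k(t, 2)), 0.0, abs_tol=1e-9)
```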
Let
(3.20) $\sigma^2 := \mathrm{Var}\, W = \sum_{k=1}^{m} \mathrm{Var}\, W_k.$
Note also that, by the combinatorial definition of $b(J, L)$ given before (3.14), we see that
(3.21) $b(J, L) \le \binom{n-k}{m-k},$
since this is just the number of $I \in \binom{[n]}{m}$ with $I \supseteq L$, and
(3.22) $\sum_{J \in \binom{[m]}{k}} b(J, L) = \binom{n-k}{m-k},$
since this sum is the total number of ways to choose $m - k$ elements of the $n - k$ elements of $[n] \setminus L$ in the gaps.
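These two bounds amount to a Vandermonde-type convolution over the gaps, and are easy to check numerically (our sketch):

```python
from itertools import combinations
from math import comb, prod

def b(J, L, n, m):
    # Product of binomials over the k+1 gaps, with j_0 = l_0 = 0,
    # j_{k+1} = m+1 and l_{k+1} = n+1.
    k = len(J)
    Jx, Lx = [0, *J, m + 1], [0, *L, n + 1]
    return prod(comb(Lx[r+1] - Lx[r] - 1, Jx[r+1] - Jx[r] - 1)
                for r in range(k + 1))

n, m = 10, 6
for k in range(0, m + 1):
    for L in combinations(range(1, n + 1), k):
        # each individual b(J, L) is at most C(n-k, m-k) ...
        assert all(b(J, L, n, m) <= comb(n - k, m - k)
                   for J in combinations(range(1, m + 1), k))
        # ... and summing over J gives exactly C(n-k, m-k)
        assert sum(b(J, L, n, m)
                   for J in combinations(range(1, m + 1), k)) == comb(n - k, m - k)
```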
3.3. The projection method
We use the projection method introduced by Hoeffding [10] to prove asymptotic normality of $U$-statistics. Translated to the present setting, the idea of the projection method is to approximate $W - \mathbb{E} W$ by $W_1$, thus ignoring all terms with $k \ge 2$ in the sum in (3.18). In order to do this, we estimate variances.
First, by (3.4) and the independence of the $\xi_i$,
(3.23) $\mathrm{Var}\, Q_{J,L} = \prod_{r=1}^{k} \mathbb{E}\,\psi_{w_{j_r}}(\xi)^2 \le C^k.$
By Minkowski’s inequality, (3.16), (3.23) and (3.22),
(3.24) 
or, equivalently,
(3.25) 
This leads to the following estimates.
Lemma 3.2.
For $1 \le k \le m$,
(3.26) 
Proof.
Note that, for $1 \le k \le m$,
(3.28) 
Lemma 3.3.
If $m = o(\sqrt{n}\,)$, then
(3.29) 
Proof.
3.4. The first term
For $k = 1$, we identify $\binom{[m]}{1}$ and $[m]$, and similarly $\binom{[n]}{1}$ and $[n]$; we write $b(j, \ell) := b(\{j\}, \{\ell\})$. Note that, by (3.14),
(3.32) $b(j, \ell) = \binom{\ell - 1}{j - 1} \binom{n - \ell}{m - j}.$
Remark 3.4.
For later use, we define also
(3.33) $\nu_j(\ell) := \frac{b(j, \ell)}{\binom{n}{m}} = \frac{\binom{\ell-1}{j-1}\binom{n-\ell}{m-j}}{\binom{n}{m}}, \qquad \ell \in [n].$
Then, for fixed $j \in [m]$, $\nu_j$ is a probability distribution on $[n]$: it is the distribution of the $j$-th smallest element of a uniformly random $I \in \binom{[n]}{m}$, which is a (shifted) hypergeometric distribution:
(3.34) $\nu_j(\ell) = \mathbb{P}\bigl(\text{the } j\text{-th smallest element of } I \text{ equals } \ell\bigr),$
which we write as
(3.35) 
∎
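As a quick check (ours), the weights $\binom{\ell-1}{j-1}\binom{n-\ell}{m-j}/\binom{n}{m}$ do sum to 1 over $\ell$ for each fixed $j$, since every $m$-subset of $[n]$ has exactly one $j$-th smallest element:

```python
from math import comb

n, m = 12, 5
for j in range(1, m + 1):
    # Sum of C(l-1, j-1) * C(n-l, m-j) over l in [n] counts every
    # m-subset of [n] exactly once, via its j-th smallest element.
    total = sum(comb(l - 1, j - 1) * comb(n - l, m - j)
                for l in range(1, n + 1))
    assert total == comb(n, m)
```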