Improved Quantum Information Set Decoding

08/02/2018 · Elena Kirshanova, et al.

In this paper we present quantum information set decoding (ISD) algorithms for binary linear codes. First, we give an alternative view on the quantum walk based algorithms proposed by Kachigar and Tillich (PQCrypto'17). It is more general and allows us to consider any ISD algorithm that has certain properties. The algorithms of May-Meurer-Thomae and Becker-Joux-May-Meurer satisfy these properties. Second, we translate the May-Ozerov Near Neighbour technique (Eurocrypt'15) into an ‘update-and-query’ language more suitable for the quantum walk framework. This re-interpretation, first, makes it possible to analyse a broader class of algorithms and, second, allows us to combine Near Neighbour search with the quantum walk framework and use both techniques to give a quantum version of Dumer's ISD with Near Neighbour.


1 Introduction

The Information Set Decoding problem with integer parameters (n, k, w) asks to find the error-vector e ∈ F_2^n, given a matrix H ∈ F_2^{(n−k)×n} and a vector s ∈ F_2^{n−k} with H·e = s, such that the Hamming weight of e, denoted wt(e), is bounded by w. The matrix H is called the parity-check matrix of a binary linear [n, k, d]-code C, where d is the minimum distance of the code. In this work, we stick to the so-called full distance decoding setting, i.e., when we search for e with wt(e) ≤ d. The analysis is easy to adapt to half-distance decoding, i.e., when wt(e) ≤ ⌊(d−1)/2⌋.
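To make the problem statement concrete, here is a minimal sketch (in Python with NumPy; toy dimensions and names of our own choosing, not from the paper) of what an ISD instance and a solution check look like.

```python
import numpy as np

def is_isd_solution(H, s, e, w):
    """Check that e is a valid ISD solution: H*e = s over F_2 and wt(e) <= w."""
    return np.array_equal(H @ e % 2, s % 2) and int(e.sum()) <= w

# Toy instance: a random parity-check matrix with a planted error of weight w.
rng = np.random.default_rng(1)
n, k, w = 24, 12, 3
H = rng.integers(0, 2, size=(n - k, n))
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=w, replace=False)] = 1   # planted weight-w error
s = H @ e % 2                                 # its syndrome

print(is_isd_solution(H, s, e, w))            # True
```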

The ISD problem is relevant not only in coding theory but also in cryptography: several cryptographic constructions, e.g. [McE78], rely on the hardness of ISD. The problem seems to be intractable even for quantum computers, which makes these constructions attractive for post-quantum cryptography.

The first classical ISD algorithm, due to Prange, dates back to 1962 [Pra62], followed by a series of improvements [Ste89, Dum91, FS09, MMT11, BJMM12], culminating in algorithms [MO15, BM17, BM18] that rely on Nearest Neighbour techniques in the Hamming metric. On the quantum side, the ISD problem received much less attention: Bernstein in [Ber10] analysed a quantum version of Prange’s algorithm, and recently Kachigar and Tillich [KT17] gave a series of ISD algorithms based on quantum walks. Our results extend the work of [KT17].

Our contributions:

  1. We present another way of analysing the quantum ISD algorithms from [KT17]: it allows us to simplify the complexity estimates for every ISD algorithm given in [KT17];

  2. We re-phrase the May-Ozerov Near Neighbour algorithm [MO15] in the ‘update-and-query’ language and give a method to analyse its complexity;

  3. We present a quantum version of the May-Ozerov ISD algorithm.

Our second contribution is of independent interest as it provides an alternative but more flexible view on the May-Ozerov Near Neighbour algorithm for the Hamming metric. We give simple formulas for analysing its complexity, which allow us to stay in the Hamming space, i.e., without reductions from other metrics, as is usually done in the literature [Chr17]. The third contribution answers the problem left open in [KT17], namely, how to use the Near Neighbour technique within quantum walks. Our results are summarized in the table below.

Algorithm                           | Quantum Time | Quantum Space | Classical Time | Classical Space
Prange [Ber10, Pra62]               | 0.060350     |               | 0.120600       |
Stern/Dumer [Ste89, Dum91]          |              |               | 0.116035       | 0.03644
  + Shamir-Schroeppel (SS) [KT17]   | 0.059697     | 0.00618       |                |
  + Near Neighbour (NN), Sect. 4    | 0.059922     | 0.00897       | 0.113762       | 0.04248
  + SS + NN, Sect. 4                | 0.059450     | 0.00808       |                |
MMT [MMT11]                         |              |               | 0.111468       | 0.05408
  – Kachigar-Tillich [KT17]         | 0.059037     | 0.01502       |                |
BJMM [BJMM12]                       |              |               | 0.101998       | 0.07590
  – Kachigar-Tillich [KT17]         | 0.058696     | 0.01877       |                |
Table 1: Running time and space complexities of ISD algorithms (full distance decoding). The columns give the exponent constants c, i.e., runtime and memory complexities are of the form 2^{c·n}. For Prange’s algorithm, the space is polynomial in n.

For each classical algorithm, Table 1 gives the running time and space complexities of its quantum counterpart. By the ‘quantum space’ in Table 1, we mean the number of qubits in a quantum state the algorithm operates on. Note that this work does not improve over the Kachigar-Tillich quantum versions of the MMT and BJMM ISD algorithms, but we present a different way of analysing the asymptotic complexities of these algorithms.

In Sect. 4 we show how to combine the Near Neighbour search of May and Ozerov [MO15] with a quantum version of the ISD algorithm due to Dumer [Dum91]. Combined with the so-called Shamir-Schroeppel trick [SS81], which was already used in [KT17], this slightly improves the running time of the algorithm.

We note that, as in the classical setting, the Near Neighbour technique requires more memory, but we are still far from the Time = Memory regime. It turns out that, as opposed to the classical case, quantum Near Neighbour search does not improve MMT or BJMM. We argue why this is the case at the end of Sect. 4. We leave as open problems an application of quantum Near Neighbour search to the MMT/BJMM algorithms, as well as quantum speed-ups for the algorithms described in the recent works of Both-May [BM17, BM18].

2 Preliminaries

We start with an overview of classical ISD algorithms, namely the Prange [Pra62], Stern and its variants [Ste89, Dum91], MMT [MMT11], and BJMM [BJMM12] algorithms. We continue with known quantum speed-ups for these algorithms.

2.1 Classical ISD algorithms

All known ISD algorithms try to find the error-vector e by a clever enumeration of the search space for e, which is of size binom(n, w) ≈ 2^{n·h(w/n)}, where h(·) is the binary entropy function. In the analysis of ISD algorithms, it is common to relate the parameters w (the error weight) and k (the rank of the code) to the code length n, and to simplify the running times to the form 2^{c·n} for some constant c. (We omit factors sub-exponential in n throughout, because we are only interested in the constant c. Furthermore, our analysis is for the average case and we sometimes omit the word ‘expected’.) To do this, we make use of the Gilbert-Varshamov bound, which states that k/n = 1 − h(w/n) as n → ∞. This gives us a way to express w as a function of n and k. Finally, the running time of an ISD algorithm is obtained by a brute-force search over all rates k/n (up to some precision), taking the one that leads to the worst-case complexity. The worst-case rate differs slightly between the classical and the quantum settings.
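To illustrate how w is eliminated via the Gilbert-Varshamov bound, the following small helper (our own, not part of the paper) inverts h(w/n) = 1 − k/n numerically by bisection.

```python
from math import log2

def h2(x):
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def gv_relative_distance(rate, tol=1e-12):
    """Solve h2(omega) = 1 - rate for omega in [0, 1/2] by bisection (Gilbert-Varshamov)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < 1 - rate else (lo, mid)
    return lo

print(gv_relative_distance(0.5))   # ~0.110: a rate-1/2 code is decoded up to w ~ 0.110*n
```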

Decoding algorithms start by permuting the columns of H, which is equivalent to permuting the positions of the 1’s in e. The goal is to find a permutation π such that the permuted error has exactly p 1’s on the first k+ℓ coordinates and the remaining weight w−p is distributed over the last n−k−ℓ coordinates. All known ISD algorithms make use of such a permutation once it is found. We expect to find a good π after P(p, ℓ) trials, where

P(p, ℓ) = binom(n, w) / ( binom(k+ℓ, p) · binom(n−k−ℓ, w−p) ).    (1)

The choice of p and how we proceed with a good π depend on the ISD algorithm.

For example, Prange’s algorithm [Pra62] searches for a permutation that leads to p = 0. To check whether a candidate π is good, it transforms the permuted H into systematic form (provided the last n−k columns of the permuted matrix form an invertible matrix, which happens with constant success probability). The same transformation is applied to the syndrome s, giving a new syndrome s̄. From the choice p = 0, it is easy to see that for a good π we just ‘read off’ the error-vector from the new syndrome, i.e., the permuted error is (0, s̄), and to verify a candidate π, we check whether wt(s̄) = w. We expect to find a good π after binom(n, w)/binom(n−k, w) trials.
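The following classical sketch (ours; no attempt at efficiency, only for intuition) runs Prange’s iteration: permute the columns of H, bring the permuted matrix to systematic form by Gaussian elimination over F_2, and read a candidate error off the transformed syndrome.

```python
import numpy as np

def systematic_form(H, s):
    """Reduce the last n-k columns of H to the identity over F_2 (row operations only),
    applying the same operations to s.  Returns (H', s') or None if those columns are singular."""
    H, s = H.copy() % 2, s.copy() % 2
    rows, n = H.shape
    for i in range(rows):
        col = n - rows + i
        pivot = next((r for r in range(i, rows) if H[r, col]), None)
        if pivot is None:
            return None
        H[[i, pivot]], s[[i, pivot]] = H[[pivot, i]], s[[pivot, i]]
        for r in range(rows):
            if r != i and H[r, col]:
                H[r] ^= H[i]
                s[r] ^= s[i]
    return H, s

def prange(H, s, w, iters=10_000, seed=0):
    """Random column permutation + elimination; succeed when wt(new syndrome) == w."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    for _ in range(iters):
        perm = rng.permutation(n)
        reduced = systematic_form(H[:, perm], s)
        if reduced is None:          # the chosen n-k columns were not invertible
            continue
        _, s_bar = reduced
        if s_bar.sum() == w:         # good permutation: all errors sit on the identity part
            e = np.zeros(n, dtype=int)
            e[perm[n - len(s_bar):]] = s_bar   # undo the permutation
            return e
    return None
```

On the toy instance from the sketch in the introduction, prange(H, s, w) returns a weight-w vector with the prescribed syndrome after a handful of iterations.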

From now on, we assume that we work with the systematic form of H, i.e.,

H = ( H'  I_{n−k} ),  where H' ∈ F_2^{(n−k)×k}.    (2)

Rather than restricting e to have no 1’s outside the identity part of H, we may as well allow a small weight p > 0 there, at the price of a more expensive check for a candidate π. This is the choice of Stern’s algorithm [Ste89], which was later improved in [Dum91] (see also [FS09]). We describe the improved version. We start by adjusting the systematic form of H by introducing an ℓ-length 0-window, so that Eq. (2) becomes

H = ( H_1  0 ; H_2  I_{n−k−ℓ} ),  where H_1 ∈ F_2^{ℓ×(k+ℓ)} and H_2 ∈ F_2^{(n−k−ℓ)×(k+ℓ)}.    (3)

Now we search for a permutation that splits the error as (e_1, e_2) with e_1 ∈ F_2^{k+ℓ} and e_2 ∈ F_2^{n−k−ℓ}, such that wt(e_1) = p and wt(e_2) = w − p. With such an e, we can rewrite Eq. (3) as

H_1 · e_1 = s_1  and  H_2 · e_1 + e_2 = s_2,  where s = (s_1, s_2).    (4)

We enumerate all vectors of the form H · (x, 0, 0) into a list L_1 and all vectors of the form H · (0, y, 0) + s into a list L_2, where x and y run over all weight-p/2 vectors of length (k+ℓ)/2. The above equation tells us that for the correct pair, the corresponding list-vectors coincide on the first ℓ coordinates. We search for two vectors that are equal on this ℓ-window; we call such a pair a match. Among these matches we check whether the Hamming distance between the two vectors equals w − p. To retrieve the error-vector, we store the x’s and y’s together with the corresponding vectors in the lists. The probability of finding a permutation that meets all the requirements is

binom(k+ℓ, p) · binom(n−k−ℓ, w−p) / binom(n, w).    (5)

It would be more precise to have binom((k+ℓ)/2, p/2)² instead of binom(k+ℓ, p) in the above formula, but these two quantities differ by only a polynomial factor, which we ignore. The expected running time of the algorithm is then

T = P(p, ℓ) · max( |L_1| · log|L_1| ,  |L_1| · |L_2| / 2^ℓ ),    (6)

where the first argument of max(·,·) is the time to sort L_1 and the second is the expected number of pairs from L_1 × L_2 that are equal on the ℓ-window, which we check for a solution. See Fig. 1 for an illustration of the algorithm.

Figure 1: On the left: a variant of Stern’s ISD algorithm due to Dumer [Dum91]. The list L_1 is constructed from all possible weight-p/2 vectors x: L_1 = {H · (x, 0, 0)}; L_2 is constructed similarly with the roles of x and y swapped (and the syndrome added). The gray-shaded vertical strip indicates the coordinates on which the elements of L_1 and L_2 match. Line-shaded horizontal strips indicate the subsets of the lists stored in quantum registers during the execution of the quantum walk search algorithm.
On the right: May-Meurer-Thomae decoding [MMT11]. The lists are shorter than in Dumer’s algorithm as their elements already match on a certain number of coordinates. The quantum walk operates on subsets of the bottom lists. We also keep an auxiliary register, where we store the result of merging these subsets.
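A classical sketch of the merge step just described (our own simplification; list elements are (error-part, vector) pairs with vectors given as bit tuples): we bucket L_1 by the ℓ-window and probe with the elements of L_2, checking the weight condition on every match.

```python
from collections import defaultdict

def merge_on_window(L1, L2, ell, weight_check):
    """All pairs (x, y) from L1 x L2 whose vectors agree on the first `ell` coordinates
    and pass the final weight check."""
    buckets = defaultdict(list)
    for x, v1 in L1:
        buckets[v1[:ell]].append((x, v1))       # 'sort' L1 by the ell-window
    solutions = []
    for y, v2 in L2:
        for x, v1 in buckets[v2[:ell]]:         # candidates matching on the window
            if weight_check(x, y, v1, v2):      # e.g. Hamming distance of v1, v2 equals w - p
                solutions.append((x, y))
    return solutions
```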

The representation technique of [BJMM12, MMT11] further improves the search for matching vectors by constructing the lists faster. Now the list L_1 consists of all pairs (y_1, H · (y_1, 0)), where y_1 ∈ F_2^{k+ℓ} (as opposed to being supported on only one half of the coordinates) and wt(y_1) = p/2. Similarly, L_2 consists of the pairs (y_2, H · (y_2, 0) + s). The key observation is that there are now R = binom(p, p/2) ways to represent the target e_1 as e_1 = y_1 + y_2. Hence, it is enough to construct only a 1/R-fraction of the pairs. Such a fraction of L_1 (analogously of L_2) is built by merging, in the meet-in-the-middle way, yet another two lists filled with vectors supported on the first and on the second half of the k+ℓ coordinates, respectively, for all weight-p/4 choices. These starting lists are of size

binom((k+ℓ)/2, p/4).    (7)

During the merge, we force the vectors from the two starting lists to be equal on roughly log₂ R coordinates, leaving only one (in expectation) pair whose sum gives e_1 (see Fig. 1, right). Here and later, we shall abuse notation slightly: technically, the list elements are pairs (error part, vector), but the merge is always done on the second component, and the error retrieval is done on the first.

The number of permutations we need to try is given by Eq. (5). Provided a good π is found, the time to find the correct e is now given by the maximum of (I) the size of the starting lists, (II) the size of the output after the first merge, and (III) the size of the final output after merging on the remaining coordinates of the ℓ-window. Optimization reveals that (II) is the maximum in the case of classical MMT. Overall, the expected complexity of the algorithm is

(8)

Becker-Joux-May-Meurer in [BJMM12] further improve the merging step (i.e., the dominant one), noticing that the zero coordinates of e_1 can be split in e_1 = y_1 + y_2 not only as 0 = 0 + 0, but also as 0 = 1 + 1. It turns out that constructing longer starting lists using y_i of weight p/2 + ε' is profitable, as it significantly increases the number of representations from binom(p, p/2) to binom(p, p/2) · binom(k+ℓ−p, ε'), thus allowing a better balance between the first and the second merge. The expected running time of the BJMM algorithm is given by

(9)

In fact, the actual BJMM algorithm is slightly more complicated than we have described, but the main contribution comes from adding ‘1+1’ to representations, so hereafter we refer to this simplified version as BJMM.
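For intuition on why the extra ‘1+1’ splits help, here is a small numeric comparison (illustrative parameters of our own choosing) of the number of representations in MMT versus BJMM.

```python
from math import comb

k_plus_ell, p, extra = 200, 20, 4

reps_mmt  = comb(p, p // 2)                                  # split the p ones as p/2 + p/2
reps_bjmm = comb(p, p // 2) * comb(k_plus_ell - p, extra)    # also split `extra` zeros as 1 + 1

# The first merge window can be roughly log2(#representations) coordinates long,
# so more representations allow better-balanced merge levels.
print(reps_mmt, reps_bjmm)
print(reps_mmt.bit_length() - 1, reps_bjmm.bit_length() - 1)
```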

2.2 Quantum ISD algorithms

Quantum ISD using Grover’s algorithm. To speed up Prange’s algorithm, Bernstein in [Ber10] uses Grover’s search over the space of permutations, which is of size binom(n, w)/binom(n−k, w). This drops the expected runtime from 2^{0.1206n} (classical) down to 2^{0.0603n} (quantum), cf. Table 1. The approach has an advantage over all the quantum algorithms we discuss later, as it requires quantum registers to store data of only polynomial size.

To obtain a quantum speed-up for other ISD algorithms like Stern’s, MMT, BJMM, we need to describe quantum walks.

Quantum walks. At the heart of the above ISD algorithms (except Prange’s) is the search for vectors from given lists that satisfy a certain relation. This task can be generalized to the list matching problem.

Definition 1 (list matching problem)

Let the number of lists be a fixed constant. Given equally sized lists of binary vectors and a function that decides whether a tuple containing one vector from each list forms a ‘match’ or not (it outputs 1 in case of a ‘match’), find all tuples that form a match.

For example, Stern’s algorithm uses two lists, and its function decides for a ‘match’ whenever a pair of vectors is equal on certain fixed coordinates. For MMT or BJMM, we deal with four lists, and the function decides for a match if the pairs coming from the first two and from the last two lists are equal on a certain part of the coordinates (the merge of the starting lists) and, in addition, the sum of all four vectors is 0 on the remaining coordinates of the ℓ-window.

Quantumly, we solve the above problem with the algorithm of Ambainis [Amb04]. Originally it was described only for the case of two lists (the search version of the so-called Element Distinctness problem), but it was later extended to a more general setting [CE05]. We note that the complexity analysis in [CE05] is done in terms of query calls to the function, while here we take into account the actual time to compute it.

Ambainis’ algorithm is best described as a quantum walk on the so-called Johnson graph.

Definition 2 (Johnson graph and its eigenvalue gap)

The Johnson graph J(N, r) for an N-size list is an undirected graph with vertices labelled by all r-size subsets of the list, and with an edge between two vertices iff the corresponding subsets differ in exactly one element. It follows that J(N, r) has binom(N, r) vertices. Its eigenvalue gap is δ = N / (r(N − r)) [BCA89].

Let us briefly explain how we solve the list matching problem using quantum walks. Our description follows the so-called MNRS framework [MNRS11] due to Magniez-Nayak-Roland-Santha, which measures the complexity of a quantum walk search algorithm by the costs of its Setup, Update, and Check phases.

To set up the walk, we first prepare a uniform superposition over all r-size subsets U_i ⊂ L_i, together with an auxiliary data register (normalization omitted). The auxiliary register contains all the information needed to decide whether the U_i’s contain a match. In the ISD setting, it stores the intermediate and output lists of the matching process. For example, in Stern’s algorithm (two lists) it contains all pairs from U_1 × U_2 that match on the ℓ-window. In case the merge is done in several steps, like in MMT (four lists), the intermediate sublists are also stored in the data register (see Figure 1).

The running time and the space complexity of the Setup phase are essentially the running time and the space complexity of the corresponding ISD algorithm with input lists of size r instead of |L_i|. By the end of the Setup phase, we have a superposition over all r-sublists U_i of L_i, where each tuple of sublists is entangled with the data register that contains the result of merging them. Also, during the creation of the data register we can already tell whether it contains the error-vector that solves the ISD problem. When we talk about the quantum space of an ISD algorithm (e.g., Table 1), we mean the size of this data register.

Next, in the Update phase we choose a sublist U_i and replace one of its elements by a new element from L_i \ U_i. This is one step of a walk on the Johnson graph. We update the data stored in the auxiliary register: we remove all the pairs in the merged lists that involve the removed element and create the possibly new matches with the added one. We assume the sublists U_i are kept sorted and stored in a data structure that allows fast insertions/removals (e.g., radix trees as proposed in [BJLM13]). We also assume that elements that result in a match store pointers to their match. For example, if two elements give a match in Stern’s algorithm, we keep a pointer from the first to the second and vice versa.

After we have performed roughly 1/√δ updates (recall that δ is the eigenvalue gap of the Johnson graph), we check whether the updated register gives a match. This is the Checking phase.

Thanks to the MNRS framework, once we know the costs of (a) the Setup phase T_S, (b) the Update phase T_U, and (c) the Checking phase T_C, we know that after T many steps, we measure a register that contains the correct error-vector with overwhelming probability, where

T = T_S + (1/√ε) · ( (1/√δ) · T_U + T_C ).    (10)

In the above formula, ε is the fraction of vertices of the graph that contain the correct error-vector; for a fixed sublist size r, it is determined by r and the list sizes (we compute it in the next section). Strictly speaking, the walk we have just described is a walk on a Cartesian product of Johnson graphs – one for each sublist U_i – so the value δ in Eq. (10) must be the eigenvalue gap of this larger graph. As proved in [KT17, Theorem 2], for a fixed constant number of lists it is lower-bounded by the gap of a single Johnson graph up to a constant factor. The analysis of [KT17] as well as ours is asymptotic, so we ignore this constant factor. An optimal choice for r that minimizes Eq. (10) is discussed in the next section.
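Since every quantity in Eq. (10) is of the form 2^{c·n}, the formula can be evaluated directly on the level of exponent constants. The helper below (our own bookkeeping, not from the paper) takes the exponents of T_S, T_U, T_C, ε and δ and returns the exponent of the walk cost.

```python
def mnrs_exponent(setup, update, check, eps, delta):
    """Exponent of  S + (1/sqrt(eps)) * ((1/sqrt(delta)) * U + C),
    where each argument is the constant c such that the quantity equals 2^(c*n).
    `eps` and `delta` (fraction of good vertices, eigenvalue gap) are <= 0."""
    walk_step = max(-0.5 * delta + update, check)
    return max(setup, -0.5 * eps + walk_step)

# Example: two sublists of size 2^{0.04n} taken from lists of size 2^{0.05n},
# with polynomial (exponent 0) update and check costs:
print(mnrs_exponent(setup=0.04, update=0.0, check=0.0,
                    eps=-2 * (0.05 - 0.04), delta=-0.04))   # 0.04
```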

Kachigar-Tillich quantum ISD algorithms. The quantum walk search algorithm described above solves the ISD problem provided we have found a permutation that gives the desired distribution of 1’s in the error-vector. Kachigar and Tillich in [KT17] suggest running Grover’s algorithm over the permutations, with the ‘checking’ function for Grover’s search being a routine for the list matching problem. Their ISD algorithm performs transformations on a quantum state of the form (normalization omitted):

(11)

The outer search is Grover’s algorithm over a set of permutations whose size is chosen such that we expect it to contain one that leads to a good distribution of 1’s in the error-vector (see Eq. (5)). The check whether a permutation is good is realized via a quantum walk search for vectors that match on certain coordinates and lead to the correct error-vector. Note an important difference between the classical and quantum settings: during the quantum walk we search over sublists U_i which are exponentially shorter than the lists L_i.

After sufficiently many steps of the walk, the data register contains a tuple that leads to the correct error-vector, provided the permutation is good. Hence, after enough Grover iterations, the measurement of the first register gives a good π with constant success probability. The resulting state is entangled with registers that store the sublists U_i together with the pointers to the matching elements. Once we measure π, we retrieve these pointers and, finally, reconstruct the error-vector as in the classical case.

Quantum Shamir-Schroeppel technique. The Shamir-Schroeppel trick was introduced in [SS81] to reduce the memory complexity of a generic meet-in-the-middle attack, i.e., of the 2-list matching problem. Assume we want to find a pair of vectors, one from L_1 and one from L_2, that are equal on certain ℓ coordinates. Assume further that we can decompose L_1 as a sum of two lists L_{1,1} + L_{1,2} (analogously for L_2). The idea of Shamir and Schroeppel is to guess the value t that the correct vectors take on ℓ_1 ≤ ℓ of these coordinates and to enumerate only the pairs consistent with this guess. Namely, we enumerate L_1 by constructing it in the meet-in-the-middle way from L_{1,1}, L_{1,2}, such that it only contains vectors equal to t on the ℓ_1 coordinates (same for L_2). Classically, we make 2^{ℓ_1} guesses for t, so the overall time of the algorithm stays the same as for naive 2-list matching, but we save in memory.
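A classical sketch of the Shamir-Schroeppel loop (ours, with helper names of our choosing): for every guess t on the first ell1 coordinates we rebuild short, filtered versions of the sum lists on the fly instead of ever storing them in full.

```python
from collections import defaultdict
from itertools import product

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def filtered_sum_list(LA, LB, t, ell1):
    """All sums a+b (a in LA, b in LB) equal to t on the first ell1 coordinates,
    built meet-in-the-middle so the full sum list LA + LB is never stored."""
    by_prefix = defaultdict(list)
    for b in LB:
        by_prefix[b[:ell1]].append(b)
    out = []
    for a in LA:
        need = xor(a[:ell1], t)            # prefix that b must have so that (a+b)[:ell1] == t
        out += [xor(a, b) for b in by_prefix[need]]
    return out

def ss_matching(L11, L12, L21, L22, ell1, ell):
    """Pairs (v1, v2), v1 in L11+L12 and v2 in L21+L22, equal on the first ell coordinates."""
    matches = []
    for t in product((0, 1), repeat=ell1):  # classical loop over guesses; [KT17] runs Grover here
        L1 = filtered_sum_list(L11, L12, t, ell1)
        L2 = filtered_sum_list(L21, L22, t, ell1)
        buckets = defaultdict(list)
        for v1 in L1:
            buckets[v1[:ell]].append(v1)
        for v2 in L2:
            matches += [(v1, v2) for v1 in buckets[v2[:ell]]]
    return matches
```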

In [KT17], in order to improve not only in memory but also in time, the authors run Grover’s search over the guesses for t. Indeed, this gives a speed-up for ISD algorithms that solve the 2-list matching problem (cf. the complexities of Dumer’s algorithm in Table 1).

3 Quantum MMT and BJMM algorithms

In this section we analyse the complexity of the quantum versions of the MMT and BJMM ISD algorithms given in [KT17]. We note that the way we apply quantum walks to ISD and analyse them closely resembles the algorithm of Bernstein et al. for Subset Sum [BJLM13].

Let us first look at a generalized version of a quantum ISD algorithm, where we can plug in any of the ISD algorithms described in Sect. 2. Recall that on input we receive H and s, and we are asked to output e of weight w that satisfies H·e = s. Alg. 1 below can be viewed as a ‘meta’ quantum algorithm for ISD.

1: Prepare a superposition over P-many permutations π
2: For each π:
   (a) Set up a superposition over tuples of sublists together with the data register,
   (b) Run a quantum walk search on the sublists to find a matching tuple, if one exists; indicate otherwise that no tuple is found.
3: Apply amplitude amplification (Grover’s search) over Step 1 for those π that led to a match in Step 2(b). Measure the permutation register and then the data register.
Algorithm 1: A quantum ISD algorithm

The algorithm is parametrized by (I.) the size P of the permutation space we iterate over in order to find the desired distribution of 1’s in the solution (see Eq. (5) for MMT); (II.) the number of starting lists an ISD algorithm considers (e.g., none for Prange, two for Stern/Dumer, four for MMT); (III.) the size of these lists. The asymptotic complexity of Alg. 1 will depend on these quantities, as we now explain in detail.

Step 1 consists in preparing a superposition over permutations, which is efficient. Step 2 is a quantum walk algorithm for the list matching problem, i.e., the search for all tuples from which the solution vector can be constructed. The cost of Step 2 can be split into the cost of the Setup phase (Step 2(a)) and the cost of the Update and Check phases (Step 2(b)).

The cost of Step 2(a) – preparing a superposition over the tensor product of the sublists U_i and computing the data register – is essentially the cost of a classical ISD algorithm where, instead of the input lists L_i, we consider sublists U_i of size r. Recall that ‘computing the data register’ means constructing the merged lists using only elements from the U_i’s (see Fig. 1). Step 2(b) performs a quantum walk over the Cartesian product of Johnson graphs with eigenvalue gap δ ≈ 1/r. To estimate ε – the fraction of vertices that give the solution – note that with probability r/|L_i|, an r-size subset U_i contains a fixed element that contributes to the solution. Hence, a tuple of such subsets – one vertex of the product graph – contains the solution with probability (r/|L|) raised to the power of the number of lists.

Now we focus on the Update and Check phases. Recall that at these steps we replace one element of a sublist and, to keep the state consistent, remove the data in the auxiliary register that was generated using the removed element and compute the data for the newly added element. Hence, asymptotically, the expected cost is the number of elements we need to recompute in the lists contained in the auxiliary register (for example, for the MMT or BJMM algorithms, it is the number of elements in the intermediate lists affected by the replacement of one element in a starting sublist). Once we know the time to create the data register and the costs of the Update and Check phases, we obtain the total complexity of Step 2 from Eq. (10).

Finally, Grover’s search over P-many permutations requires √P calls to a ‘checking’ function for a measurement to output a good π. The measurement collapses the state given in Eq. (11) into a superposition of sublists, where the amplitude of those that contain the actual solution is amplified. Measurement of these registers leads to the solution. Regarding Step 2 as the ‘checking’ routine for the amplitude amplification of Step 1, and assuming that an ISD algorithm, when run on input lists of size r, has running time T_S, we obtain the following complexity of Alg. 1:

Theorem 3.1

Assume we run Alg. 1 with a classical ISD algorithm with the following properties: (I.) it expects to find the desired weight distribution for the error-vector after P permutations of the columns of H, (II.) it performs the search for the error-vector over lists each of size r in (quantum) time T_S, and (III.) replacing one element in any of these lists costs T_U to update the auxiliary register at Step 2(b). Then, for r chosen such that the Setup cost and the walk cost in Eq. (10) are balanced, the expected running time of Alg. 1 is of order √P · T_S.

In particular, the MMT algorithm has four lists, P given by Eq. (5), and starting list sizes given in Eq. (7). Under the (heuristic) assumption that the elements of all lists are uniform and independent, we expect the update cost T_U to be negligible, leading to the quantum MMT runtime exponent reported in Table 1. Similarly, for the BJMM algorithm [BJMM12], with starting list sizes determined by the representation count of Eq. (9), the expected update cost is again negligible, and we obtain the quantum BJMM exponent of Table 1.

Proof

The first statement follows from the discussion before the theorem: Grover’s search for a good π makes √P ‘calls’, where each ‘call’ is a quantum walk search whose complexity is given by Eq. (10). The condition on r is set such that Steps 2(a) and 2(b) in Alg. 1 are asymptotically balanced, namely, we want the Setup cost to match the cost of the walk, cf. Eq. (10). We have T_S as in (II.), δ ≈ 1/r, ε ≈ (r/|L|) raised to the number of lists, the cost of one update step is T_U, and the Checking phase is polynomial (as it consists in checking whether the data register contains the solution, which can be done in polynomial time when the register is kept sorted). With this, the optimal choice for r should satisfy this balancing condition.

For the classical MMT algorithm, the dominating step is the construction of the lists whose elements already agree on a certain number of coordinates, as described in Sect. 2 (the ‘middle’ lists in the right part of Fig. 1). Quantumly, however, Kachigar and Tillich [KT17] observed that if we instead arrange the parameters so that the dominant step is the creation of the ‘upper’ lists in Fig. 1, we obtain a slightly faster algorithm. The reason is the Shamir-Schroeppel technique: we construct the middle lists by 1. forcing the corresponding elements to be equal to a vector t on a subset of coordinates, and 2. looping over all possible vectors t. Quantumly, the loop costs only the square root of the number of guesses for t in ‘calls’ (again, a ‘call’ here is a quantum walk). Hence, taking the creation of the ‘upper’ lists as the dominant step, the setup phase (Step 2(a) of Alg. 1) has complexity governed by the sizes of these lists and of the guessing space for t; both parameters are subject to optimization.

To determine the update cost, we recall how the intermediate lists are obtained: by considering all pairs that are equal to a fixed vector on some coordinates and equal to another fixed value on further coordinates. Under the assumption that all list elements are uniformly random and independent, changing one element in a starting sublist requires recomputing, in expectation, a certain number of elements in the intermediate list; similarly, changing one element in an intermediate list leads to changing a certain number of elements in the list above it. To simplify the analysis, we choose the parameters such that these expected numbers are constant, that is, the update cost is irrelevant asymptotically. This simplification puts a corresponding constraint on the parameters.

The analysis now simplifies to the balancing constraint between the setup phase of the quantum walk (i.e., the creation of the starting sublists of size r) and the walk itself. Solving for r, we obtain the optimal size of the sublists. Hence, the running time of Step 2 of Alg. 1 for MMT is the setup cost multiplied by the cost of Grover’s iteration over the vectors t. From the constraint above, we obtain the claimed bound and hence the second statement of the theorem.

The BJMM algorithm differs from MMT only in the number of representations and in the size of the starting lists. Similarly to MMT, we choose the parameters so that the update cost is irrelevant, and the complexity of the quantum walk for BJMM follows.

The above complexity result gives formulas that depend on various parameters. In order to obtain the figures of Table 1, we run an optimization program that finds parameter values minimizing the running time under the constraints mentioned in the above proof. While we do not prove that these values are global optima, the values we obtain are feasible (they satisfy all the constraints) and hence can be used inside the decoding algorithm. We chose to use the optimization package implemented in Maple. The optimization program for Table 1 is available at http://perso.ens-lyon.fr/elena.kirshanova/.
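The actual optimization targets the MMT/BJMM exponents and was done in Maple, but the kind of computation involved is easy to illustrate. As a self-contained toy example (Prange’s algorithm only; helper names are ours), the snippet below maximizes the classical Prange exponent over the code rate and reproduces the corresponding entry of Table 1.

```python
from math import log2

def h2(x):
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def gv_relative_distance(rate, tol=1e-12):
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < 1 - rate else (lo, mid)
    return lo

def prange_exponent(rate):
    """Exponent of binom(n,w)/binom(n-k,w) with w/n fixed by the Gilbert-Varshamov bound."""
    w = gv_relative_distance(rate)
    return h2(w) - (1 - rate) * h2(w / (1 - rate))

# Worst-case rate: brute force over k/n, as described for the full optimization above.
exponent, rate = max((prange_exponent(r / 1000), r / 1000) for r in range(1, 1000))
print(round(exponent, 4), rate)   # ~0.1206 near rate 0.45; Grover's search halves this to ~0.0603
```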

From the table one can observe that, classically, the improvements over Prange achieved by recent algorithms are quite substantial: BJMM gains a factor of 2^{0.0186n} in the leading-order term. Quantumly, however, the improvement is less pronounced. The reason lies in the fact that the speed-up coming from Grover’s search is much larger than the speed-up offered by the quantum walk. Also, the list matching problem becomes harder (quantumly) once we increase the number of lists, because the fraction of ‘good’ subsets becomes smaller.

4 Decoding with Near Neighbour Search

For a reader familiar with Indyk-Motwani locality-sensitive hashing [IM98] for Near Neighbour search (defined below), Stern’s algorithm and its improvements [Dum91] essentially implement such hashing by projecting onto ℓ coordinates and applying it to the lists L_1, L_2. In this section, we consider another Near Neighbour technique.

4.1 Re-interpretation of May-Ozerov Near Neighbour algorithm

The best known classical ISD algorithm is due to May-Ozerov [MO15]. It is based on the observation that ISD is a similarity search problem under the Hamming metric. In particular, Eq. (2) defines the approximate relation

H' · (x, 0) ≈ s + H' · (0, y),    (12)

where x and y run over weight-p/2 vectors of length k/2.

The approximation sign means that the Hamming distance between the left-hand side and the right-hand side of Eq. (12) is at most w − p (cf. Eq. (4)). Enumerating over all x and y, we receive an instance of the γ-Near Neighbour (NN) problem:

Definition 3 (γ-Near Neighbour)

Let L be a list of uniformly random binary vectors. The γ-Near Neighbour problem consists in preprocessing L s.t., upon receiving a query vector q, we can efficiently find all vectors in L that are γ-close to q, i.e., all v ∈ L whose relative Hamming distance to q is at most γ, for some γ ≤ 1/2.² (²The dimensionless distances we consider here, denoted further by γ, are all at most 1/2, since we can flip the bits of the query point and search for ‘close’ rather than ‘far apart’ vectors.)

Thus the ISD instance given in Eq. (12) becomes a special case of the γ-NN problem with the list L consisting of the vectors H' · (x, 0) for all x, and the queries taken from the set of vectors s + H' · (0, y) for all y. In [MO15], the algorithm is described for this special case, namely, when the number of queries is equal to |L| and all the queries are explicitly given in advance. So it is not immediately clear how to use their result in the quantum setting, where we only operate on sublists of L and update them with new vectors during the quantum walk.

In this section, we re-phrase the May-Ozerov algorithm in terms more common in the near neighbour literature, namely via ‘Update’ and ‘Query’ costs. This allows us to use the algorithm in more general settings, e.g., when the number of queries differs from |L| and when the query points do not all arrive at once. This view enables us to adapt their algorithm to the quantum walk framework.
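To fix the interface we have in mind, here is a naive ‘update-and-query’ sketch (entirely ours, with none of the May-Ozerov machinery): a bucketing structure keyed by ℓ fixed coordinates, in the spirit of the projection-based hashing mentioned at the beginning of this section. Insertions and removals correspond to the updates performed during the quantum walk; queries return candidate near neighbours that are then verified exactly.

```python
from collections import defaultdict

class ProjectionNN:
    """Naive update-and-query structure for gamma-NN on bit tuples:
    bucket vectors by their first `ell` coordinates, verify candidates exactly."""

    def __init__(self, ell, gamma, dim):
        self.ell, self.radius = ell, int(gamma * dim)
        self.buckets = defaultdict(set)

    def insert(self, v):                       # 'update' (add one list element)
        self.buckets[v[:self.ell]].add(v)

    def remove(self, v):                       # 'update' (remove one list element)
        self.buckets[v[:self.ell]].discard(v)

    def query(self, q):                        # 'query' (one new query point)
        candidates = self.buckets[q[:self.ell]]
        return [v for v in candidates
                if sum(a != b for a, b in zip(v, q)) <= self.radius]
```

This naive structure misses neighbours that differ inside the projection window; the Locality-Sensitive Filtering approach described next replaces the single fixed projection with many overlapping Hamming-ball regions precisely to control this loss.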

The main ingredient of the May-Ozerov algorithm is what became known as Locality-Sensitive Filtering (LSF); see [BDGL16] for an example of this technique in the context of lattice sieving. In LSF we create a set of filtering vectors which divide the Hamming space into (possibly overlapping) regions. These regions are defined as Hamming balls of radius α centred at the filtering vectors, where α is an LSF parameter we can choose. So each filtering vector f defines a region as the set of all vectors that are α-close to f, namely,