# The Polynomial Learning With Errors Problem and the Smearing Condition

As quantum computing advances rapidly, guaranteeing the security of cryptographic protocols resistant to quantum attacks is paramount. Some leading candidate cryptosystems use the Learning with Errors (LWE) problem, attractive for its simplicity and hardness guaranteed by reductions from hard computational lattice problems. Its algebraic variants, Ring-Learning with Errors (RLWE) and Polynomial Learning with Errors (PLWE), gain in efficiency over standard LWE, but their security remains to be thoroughly investigated. In this work, we consider the "smearing" condition, a condition for attacks on PLWE and RLWE introduced in [6]. We expand upon some questions about smearing posed by Elias et al. in [6] and show how smearing is related to the Coupon Collector's Problem. Furthermore, we develop some practical algorithms for calculating probabilities related to smearing. Finally, we present a smearing-based attack on PLWE, and demonstrate its effectiveness.


## 1. Introduction

Quantum computing promises to be a game-changing technology, as many problems that are considered intractable for conventional computers could be solved efficiently by harnessing properties of quantum physics to represent information. While quantum computing provides new methods to approach complex computing problems, it can also be used as a powerful tool to break existing cryptographic security.

There are currently two groundbreaking quantum algorithms which break today's conventional cryptosystems. In 1994, Shor [18] proposed an efficient polynomial-time quantum algorithm for solving the integer factorization and discrete log problems. Indeed, this algorithm breaks much of public key cryptography, as many widely-used public key cryptosystems rely on the difficulty of integer factorization and elliptic curve variants of the discrete logarithm problem, both of which have no known polynomial-time solution with conventional computing. In 1996, Grover [10] proposed a quantum algorithm that provides a quadratic speedup over classical algorithms for searching a key space, weakening the security of symmetric key cryptosystems which rely on the difficulty of guessing a random shared key.

To address this issue, in 2016 the National Institute of Standards and Technology [16] announced the need to replace cryptosystems and standards based on vulnerable problems with post-quantum cryptography alternatives. A promising avenue in post-quantum cryptography is lattice-based cryptography, cryptography based on well-studied computational problems on lattices which have no known efficient solution with either classical or quantum computing. Some lattice-based cryptography relies on the Learning With Errors (LWE) problem, introduced by O. Regev [13] in 2005, which exploits the difficulty of solving a "noisy" linear system modulo a known integer. Regev also proved a reduction from worst-case computational lattice problems to LWE, affirming its difficulty and making LWE a strong candidate on which to base cryptographic systems.

The basic search LWE problem takes the form of a linear system hiding a secret integer vector $\mathbf{s}$, with integer coefficient vectors $\mathbf{a}_i$ and integer errors $e_i$, modulo some integer $q$:

$$\begin{aligned} \mathbf{a}_1\cdot\mathbf{s}+e_1 &\equiv c_1 \bmod q\\ \mathbf{a}_2\cdot\mathbf{s}+e_2 &\equiv c_2 \bmod q\\ \mathbf{a}_3\cdot\mathbf{s}+e_3 &\equiv c_3 \bmod q\\ &\ \ \vdots \end{aligned}$$

While Gaussian elimination makes this system easy to solve with known $\mathbf{a}_i$, $c_i$, and $q$, the introduction of the unknown noise makes an easy linear system extremely difficult to solve. Even with small noise, the traditional process of Gaussian elimination magnifies noise to the point of rendering the modular linear system unsolvable.
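To make the setup concrete, here is a small illustrative sketch (the dimension, modulus, error range, and sample count are toy parameters of our choosing, not taken from the paper) that generates LWE samples and checks that each residue $c - \mathbf{a}\cdot\mathbf{s} \bmod q$ stays within the small error range:

```python
import random

random.seed(1)
n, q = 8, 97                     # toy dimension and modulus (illustrative only)
s = [random.randrange(q) for _ in range(n)]   # secret vector

def lwe_sample():
    """Return one LWE pair (a, c) with c = <a, s> + e mod q, e small."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                 # small noise in {-2, ..., 2}
    c = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, c

samples = [lwe_sample() for _ in range(5)]
# With the secret known, each residue equals the noise e (mod q), so it lies
# in {0, 1, 2} or {q-2, q-1}; without the secret, recovering s is the problem.
for a, c in samples:
    r = (c - sum(ai * si for ai, si in zip(a, s))) % q
    assert r <= 2 or r >= q - 2
```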

The decision LWE problem is to distinguish with non-negligible advantage between a uniform distribution and a distribution over the noisy inner products $(\mathbf{a}, \mathbf{a}\cdot\mathbf{s}+e)$, where $\mathbf{a}$ is sampled uniformly at random. Since its introduction, the conjectured hardness of LWE [13] has already been used as a building block for many cryptographic applications: in efficient signature schemes [19], fully-homomorphic encryption schemes [3], pseudo-random functions [2], and protocols for secure multi-party computation [5], and it also validates the hardness of the NTRU cryptosystem [11].

The "algebraically structured" variants, called Ring LWE (RLWE) [15], Polynomial LWE (PLWE) [12], and Module LWE [1] (drawing values from rings of integers, polynomial rings, and modules, respectively, in place of the integers), offer more succinct representations of information.

While the hardness of RLWE relies on the conjectured hardness of computational lattice problems over a restricted set of lattices (called ideal lattices) [14], its construction has inherent algebraic structure, which could make it vulnerable to algebraic attacks. In this paper we consider attacks against the PLWE problem.

We analyze a condition for attacks against the PLWE problem called the smearing condition, which was introduced by Elias, Lauter, Ozman, and Stange in [7]. We demonstrate the parallels between the smearing condition and the Coupon Collector’s Problem and develop recursive methods for computing the probability of smearing. We also present a new attack on the PLWE decision problem using the smearing condition.

The paper is organized as follows: Section 2 summarizes relevant background related to the RLWE problems, Section 3 focuses on the smearing condition and gives an overview of related work, Section 4 provides methods of calculating smearing probabilities for both uniform and non-uniform distributions, and in Section 5 we provide a smearing-based attack on the PLWE problem.

## 2. Preliminaries

### 2.1. Lattices and Gaussians

A lattice is a discrete additive subgroup of a vector space $V$. If $V$ has dimension $n$, a lattice $L$ can be viewed as the set of all integer linear combinations of a set of $k$ linearly independent vectors $\mathbf{b}_1,\dots,\mathbf{b}_k$ for some $k \le n$, written $L = L(\mathbf{b}_1,\dots,\mathbf{b}_k)$. If $k = n$ we call the lattice full-rank, and we will only consider lattices of full rank. We can extend this notion of lattices to matrix spaces by stacking the columns of a matrix. We recall the following standard definitions of lattices and Gaussians.

###### Definition 2.1.

Given a lattice $L$ in a space endowed with a metric $d$, the minimum distance of $L$ is defined as $\lambda_1(L) = \min_{0 \neq v \in L} d(v, 0)$. Similarly, $\lambda_k(L)$ is the minimum length of a set of $k$ linearly independent vectors in $L$, where the length of a set of vectors $\{v_1,\dots,v_k\}$ is defined as $\max_i \|v_i\|$.

###### Definition 2.2.

Given a lattice $L \subset V$, where $V$ is endowed with an inner product $\langle\cdot,\cdot\rangle$, the dual lattice is defined $L^* = \{v \in V : \langle v, \lambda\rangle \in \mathbb{Z} \text{ for all } \lambda \in L\}$.

For a vector space $V$ with norm $\|\cdot\|$ and $\sigma > 0$, we define the Gaussian function $\rho_\sigma$ by

$$\rho_\sigma(x) = e^{-\pi\|x\|^2/2\sigma^2}.$$

The Gaussian (normal) distribution with parameter $\sigma$ has a continuous probability density function

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\pi\|x\|^2/2\sigma^2}.$$

When sampling a Gaussian over a lattice we will use the discrete form of the Gaussian distribution. This Gaussian distribution is discretized as follows.

###### Definition 2.3.

A discrete Gaussian distribution with parameter $\sigma$ over a lattice $L$ is a distribution in which each lattice point $\lambda$ is sampled with probability proportional to $\rho_\sigma(\lambda)$:

$$P_\lambda \coloneqq \frac{\rho_\sigma(\lambda)}{\rho_\sigma(L)}, \qquad \rho_\sigma(L) = \sum_{\lambda \in L}\rho_\sigma(\lambda).$$

It is well-known that the sum of independent, normally-distributed random variables is normal.
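As a hedged illustration of Definition 2.3 (a sketch of our own, with an arbitrary truncation of the infinite sum over the lattice), one can tabulate the discrete Gaussian over the one-dimensional lattice $\mathbb{Z}$ using the weight $\rho_\sigma(x) = e^{-\pi\|x\|^2/2\sigma^2}$ from above:

```python
import math

def discrete_gaussian_pmf(sigma, tail=50):
    """PMF of a discrete Gaussian on the 1-D lattice Z, truncated at +/- tail.
    Each point x gets weight rho_sigma(x) = exp(-pi*x^2/(2*sigma^2)),
    normalized by the (truncated) sum rho_sigma(L)."""
    support = range(-tail, tail + 1)
    weights = {x: math.exp(-math.pi * x * x / (2 * sigma ** 2)) for x in support}
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

pmf = discrete_gaussian_pmf(sigma=3.0)       # sigma chosen for illustration
assert abs(sum(pmf.values()) - 1.0) < 1e-9   # normalized
assert pmf[0] > pmf[1] > pmf[2]              # mass concentrated near 0
assert abs(pmf[4] - pmf[-4]) < 1e-15         # symmetric about 0
```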

###### Definition 2.4 (Informal).

A spherical Gaussian distribution is a multivariate Gaussian distribution such that there are no interactions between the dimensions.

This implies that we can simply select each coordinate from a Gaussian distribution. We use the Gaussian distribution as the error distribution in the Learning with Errors problem, discussed below.

### 2.2. Learning with Errors Distributions

Let $f(x)$ be a monic irreducible polynomial in $\mathbb{Z}[x]$ of degree $n$. We use the notation $P$ to denote the polynomial ring $\mathbb{Z}[x]/(f(x))$.

An instance of the RLWE distribution is given by a choice of number field $K$, secret $s$, prime $q$, and parameter $\sigma$ (for the error distribution).

###### Definition 2.5 (RLWE Distribution, [7]).

Let $R$ be the ring of integers for the number field $K$. Define

$$R_q = R/qR.$$

Let $U_{R_q}$ be the uniform distribution over $R_q$. Let the error distribution be $G_{\sigma,R_q}$, a discrete Gaussian distribution over $R_q$. For some $s \in R_q$, $a \leftarrow U_{R_q}$, $e \leftarrow G_{\sigma,R_q}$, pairs of the form $(a,\ c = a\cdot s + e)$ compose the RLWE distribution over $R_q \times R_q$.

The PLWE distribution is defined similarly; rather than over the ring of integers of a number field, the distribution is defined over a polynomial ring. An instance of the PLWE distribution is now given by a choice of monic, irreducible polynomial $f(x)$, secret $s$, prime $q$, and parameter $\sigma$ (for the error distribution).

###### Definition 2.6 (PLWE Distribution, [7]).

Let $f(x) \in \mathbb{Z}[x]$ be monic, irreducible of degree $n$. Assume that $f(x)$ splits over $\mathbb{Z}_q$. Define

$$P \coloneqq \mathbb{Z}[x]/(f(x)), \qquad P_q \coloneqq P/qP.$$

Let $U_{P_q}$ be the uniform distribution over $P_q$, and let the error distribution $G_{\sigma,P_q}$ be a discrete Gaussian distribution over $P_q$, spherical in the power basis of $P_q$.

For some $s \in P_q$, $a \leftarrow U_{P_q}$, and $e \leftarrow G_{\sigma,P_q}$, pairs of the form

$$(a,\ c = a\cdot s + e)$$

compose the PLWE distribution over $P_q \times P_q$.

The decision problems for RLWE and PLWE are analogous to Decision LWE: given arbitrarily many independent samples, determine with non-negligible advantage whether the samples follow the RLWE (PLWE) distribution versus a uniform distribution over $R_q \times R_q$ ($P_q \times P_q$).

## 3. Smearing Condition

### 3.1. Motivation and Related Work

A common technique for breaking cryptographic schemes is to transfer the problem onto a smaller space, where looking for the secret key by brute force is feasible. In finding the secret in a PLWE problem by brute force, the attacker would have to go through $|P_q| = q^n$ different possibilities, which is infeasible due to the sizes of $q$ and $n$. However, if the attacker can somehow transfer the PLWE problem onto a smaller field, like $\mathbb{Z}_q$, then brute force suddenly becomes feasible, and, if not much information is lost in this transformation, then a brute search on $\mathbb{Z}_q$ would help solve the original problem on $P_q$.

An example of this approach is the "$\alpha = 1$ attack" on Decision-PLWE, as presented in [7]. Suppose that $f$ has a root at $1$, i.e. $f(1) \equiv 0 \bmod q$. Expressing $e$ in the power basis, $e(x) = \sum_{i=0}^{n-1} e_i x^i$, where the coefficients $e_i$ are small. Then, $e(1) = \sum_{i=0}^{n-1} e_i$ is small. So, if samples follow the PLWE distribution, $c(1) - a(1)s(1) = e(1)$ can only take on a small range of values.

Note that there are $q$ possibilities for the value of $s(1)$. So,

1. For all possible guesses $g \in \mathbb{Z}_q$ (where $g$ is a guess for $s(1)$) and for each sample $(a_i, c_i)$, compute $c_i(1) - g\cdot a_i(1)$.

• Check and record whether $c_i(1) - g\cdot a_i(1)$ is within the small set of likely values of $e_i(1)$.
Note: If the guess for $s(1)$ is correct, $c_i(1) - g\cdot a_i(1)$ will equal $e_i(1)$. If the guess for $s(1)$ is incorrect, or if the samples are uniform to begin with, the values will be uniform over $\mathbb{Z}_q$.

2. Make a decision about the sample distribution:

1. If there is one guess $g$ for which all the values are within the small set, the samples are taken from the PLWE distribution with $s(1) = g$.

2. If all possible values of $g$ give uniform distributions of $c_i(1) - g\cdot a_i(1)$, the samples are taken from the uniform distribution.

3. If several guesses appear to work, repeat the algorithm with more samples.

This attack succeeds with a probability computed in [7]. Similar attacks exist for roots $\alpha$ of small multiplicative order, where $\alpha^r \equiv 1 \bmod q$ for some small $r$.
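The steps above can be sketched end-to-end on toy parameters. In the snippet below (an illustrative sketch of our own; the modulus, degree, error range, and sample count are arbitrary choices), we work only with evaluations at $1$: since $f(1) \equiv 0 \bmod q$, evaluation at $1$ is well defined on $P_q$, and each sample satisfies $c(1) = a(1)s(1) + e(1) \bmod q$ with $e(1)$ a sum of $n$ small coefficients:

```python
import random

random.seed(7)
q, n, m = 17, 4, 40          # toy modulus, degree, and sample count
s1 = random.randrange(q)     # s(1): the secret evaluated at the root 1

# Simulated PLWE samples, reduced to their evaluations at 1.
samples = []
for _ in range(m):
    a1 = random.randrange(q)                                # a(1), uniform
    e1 = sum(random.choice([-1, 0, 1]) for _ in range(n))   # e(1), small
    samples.append((a1, (a1 * s1 + e1) % q))

# Small set of plausible values of e(1) mod q (|e(1)| <= n here).
small = {x % q for x in range(-n, n + 1)}
candidates = [g for g in range(q)
              if all((c - g * a) % q in small for a, c in samples)]
assert s1 in candidates      # the true secret evaluation always survives
```

For a wrong guess the residues are spread over $\mathbb{Z}_q$, so with enough samples only the true $s(1)$ remains with high probability.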

Other attacks include exploiting the size of the error values. However, the probability of success for this particular attack decays (except in a special case), and it is unlikely to be implemented [6].

###### Definition 3.1 ([7]).

Let $f(x)$ be a monic, irreducible polynomial of degree $n$ and let $\gamma \in \mathbb{Z}_q$ be a root of $f$ modulo $q$. Then a smearing map is defined as

$$\pi_\gamma : P_q \to \mathbb{Z}_q, \qquad g(x) \mapsto g(\gamma).$$
###### Definition 3.2 ([7]).

Given a smearing map $\pi_\gamma$ and a subset $S \subseteq P_q$, we say that $S$ smears under $\pi_\gamma$ if $\pi_\gamma(S) = \mathbb{Z}_q$.

Note that for a smearing map we have $\pi_\gamma(P_q) = \mathbb{Z}_q$. Also, note that $\ker\pi_\gamma = (x-\gamma)$, the ideal of $P_q$ generated by $x - \gamma$. This implies that $(x-\gamma)$ has $q$ cosets in $P_q$, which are, consequently, the preimages $\pi_\gamma^{-1}(c)$ for $c \in \mathbb{Z}_q$.

###### Lemma 3.3.

Let $\pi_\gamma$ be a smearing map. Then, $\pi_\gamma(f) = \pi_\gamma(g)$ if and only if $f$ and $g$ are in the same coset of $(x-\gamma)$.

###### Proof.

The claim follows from the fact that

$$\begin{aligned} f(\gamma)=g(\gamma) &\iff (f-g)(\gamma)=0\\ &\iff f-g=h\cdot(x-\gamma) \text{ for some } h\in P_q\\ &\iff f,g \text{ are in the same coset of } (x-\gamma). \end{aligned}$$
∎

This lemma implies that the set $S$ smears if and only if $S$ contains an element in each of the $q$ cosets of $(x-\gamma)$ in $P_q$. In the next two sections we investigate the size of a subset sampled from a uniform distribution and investigate the properties of a subset sampled from a Gaussian distribution as in the PLWE problem.

### 3.2. Smearing: The Uniform Distribution Case

We investigate the size of a subset $S$ sampled from a uniform distribution, i.e. the polynomials in $S$ are chosen uniformly at random over $P_q$. As we will see, the assumption of uniformity eliminates much of the algebraic aspect of the smearing problem as it relates to PLWE and reduces the problem to a classic problem in probability theory: the Coupon Collector's Problem.

The classical version of the Coupon Collector's Problem is as follows: suppose a company places one of $q$ distinct types of coupons, $C_1,\dots,C_q$, into each of its cereal boxes independently, with equal probability $1/q$. Let $X$ be a random variable indicating the number of boxes one has to buy before collecting all of the coupons. The question then is:

How many boxes should one expect to buy before collecting at least one of each type of coupon? Equivalently, what is $E[X]$?

The following well-known lemma computes $E[X]$ with a geometric distribution approach.

###### Lemma 3.4 ([9]).

Let $X$ be the number of boxes needed to be purchased to collect all $q$ coupons. Then,

$$E[X] = qH_q = q\log q + \gamma q + \frac{1}{2} + O(1/q),$$

where $H_q$ is the $q$-th harmonic number and $\gamma$ is the Euler–Mascheroni constant. Furthermore,

$$\mathrm{Var}(X) < \frac{\pi^2}{6}\,q^2.$$
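As a quick numerical check of Lemma 3.4 (a sketch; the choice $q = 1000$ is our own, purely illustrative), the exact expectation $qH_q$ can be compared against the asymptotic expansion:

```python
import math

def expected_boxes(q):
    """Exact expectation E[X] = q * H_q for the coupon collector's problem."""
    return q * sum(1.0 / k for k in range(1, q + 1))

q = 1000
exact = expected_boxes(q)
gamma = 0.5772156649015329                   # Euler-Mascheroni constant
approx = q * math.log(q) + gamma * q + 0.5   # expansion from Lemma 3.4
assert abs(exact - approx) < 1e-3            # remaining error is O(1/q)
```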

We can reduce the problem of uniform smearing to the Coupon Collector’s Problem.

###### Lemma 3.5.

A uniform distribution over $P_q$ maps under $\pi_\gamma$ to a uniform distribution over $\mathbb{Z}_q$.

###### Proof.

By Lemma 3.3, and the fact that all cosets of $(x-\gamma)$ are of the same size, a polynomial chosen uniformly at random in $P_q$ will have probability $1/q$ of being in any given coset of $(x-\gamma)$, and hence there is a probability $1/q$ that $\pi_\gamma$ produces any given element of $\mathbb{Z}_q$. ∎

So, instead of selecting polynomials in $P_q$ we can choose elements of $\mathbb{Z}_q$ uniformly. In this context, the smearing problem is identical to the Coupon Collector's Problem. Each polynomial has an image uniformly chosen between $0$ and $q-1$, and we want to "collect all the coupons," i.e. for each element $c \in \mathbb{Z}_q$, collect at least one polynomial having $c$ as its image under the smearing map.
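Under this correspondence, the uniform smearing probability can be computed directly by inclusion-exclusion over the missed elements, and verified by brute-force enumeration on a tiny case (an illustrative sketch of our own, not code from the paper):

```python
from itertools import product
from math import comb

def smear_prob_uniform(m, q):
    """P(m, q): probability that m uniform draws from {0,...,q-1} hit every
    value at least once, via inclusion-exclusion (surjections / q^m)."""
    return sum((-1) ** k * comb(q, k) * ((q - k) / q) ** m for k in range(q + 1))

# brute-force check on a tiny case: count the 5-tuples over {0,1,2}
# that use all three values
m, q = 5, 3
hits = sum(1 for t in product(range(q), repeat=m) if len(set(t)) == q)
assert abs(smear_prob_uniform(m, q) - hits / q ** m) < 1e-12
```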

#### 3.2.1. The Principal Question

Let $q$ and $m$ be given. Assume that the $m$ elements of $S$ are chosen uniformly at random from $P_q$. The question is to determine the probability that $S$ will smear.

###### Remark 3.6.

Note that, although in the smearing problem $q$ must be prime, in the broader context of the Coupon Collector's Problem $q$ can be any positive integer; we will not demand such a restriction within our probabilistic calculations.

Fix a polynomial $f$ and thus fix some root $\gamma$ and the smearing map $\pi_\gamma$. Denote by $P(m,q)$ the probability that a subset of size $m$ smears.

###### Remark 3.7.

Given a probability distribution on $X$ (the random variable from the Coupon Collector's Problem representing the number of cereal boxes one must purchase before collecting all coupons), $P(m,q)$ is simply the cumulative distribution function of $X$, since $P(m,q) = \Pr(X \le m)$.

We compute and approximate smearing probabilities in Section 4.

### 3.3. Smearing: The Non-Uniform Case

In this section, we investigate the smearing condition when the error distribution over $P_q$ is not uniform. Note first that we can view drawing from

$$(a,\ a\cdot s+e), \qquad a\leftarrow U_{P_q},\ s\in P_q,\ e\leftarrow G_{\sigma,P_q}$$

as simply drawing

$$(a,\ e), \qquad a\leftarrow U_{P_q},\ e\leftarrow G_{\sigma,P_q},$$

since multiplying an element selected uniformly at random by a fixed secret is the same as selecting an element uniformly at random, which, when the Gaussian error is added, yields the same distribution for the second component. When we discuss the mapped error distribution, we consider selecting $e$ and its mapping $e(\alpha)$.

#### 3.3.1. The Distribution of e(α)

An explicit method of calculating the probability distribution of $e(\alpha)$ given the distribution of the polynomial coefficients of $e$ is presented in [4].

###### Theorem 3.8.

[4] Suppose $e_0,\dots,e_{n-1}$ are independent random variables in $\mathbb{Z}_q$ with the same probability distribution, encoded as the polynomial $c(x)$ whose coefficient of $x^v$ is the probability of the value $v$. Let $e(\alpha) = \sum_{i=0}^{n-1} e_i\alpha^i$. Then, for any $\alpha \in \mathbb{Z}_q$, the probability distribution of $e(\alpha)$ can be computed as the coefficients of the polynomial

$$c(x)\,c(x^{\alpha})\cdots c(x^{\alpha^{n-1}}) \bmod (x^q-1).$$

Since $e(\alpha) = \sum_{i=0}^{n-1} e_i\alpha^i$, by setting $c$ to be the coefficient distribution and using the theorem above, we can compute the probability distribution of $e(\alpha)$ over $\mathbb{Z}_q$. In general, we refer to that distribution as the "mapped error distribution."
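The mapped error distribution can equivalently be computed by cyclically convolving the distributions of the scaled coordinates $\alpha^i e_i$, which matches reading off the coefficients of the polynomial product in Theorem 3.8. The sketch below is our own illustrative implementation (not code from [4]):

```python
def mapped_error_dist(p, alpha, n, q):
    """Distribution of e(alpha) = sum_i e_i * alpha^i mod q, where the e_i
    are i.i.d. on Z_q with pmf p (a length-q list).  Works by cyclically
    convolving the pmfs of the scaled coordinates alpha^i * e_i."""
    dist = [0.0] * q
    dist[0] = 1.0
    power = 1                      # alpha^i mod q
    for _ in range(n):
        scaled = [0.0] * q         # pmf of alpha^i * e_i mod q
        for v in range(q):
            scaled[(v * power) % q] += p[v]
        nxt = [0.0] * q            # cyclic convolution with the new coordinate
        for a_val in range(q):
            if dist[a_val]:
                for b in range(q):
                    nxt[(a_val + b) % q] += dist[a_val] * scaled[b]
        dist = nxt
        power = (power * alpha) % q
    return dist

# sanity check: if every e_i equals 1, then e(2) = 1 + 2 + 4 = 7 = 2 mod 5
q = 5
p = [0.0] * q
p[1] = 1.0
d = mapped_error_dist(p, alpha=2, n=3, q=q)
assert abs(d[2] - 1.0) < 1e-12
```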

We can compute the discrete Gaussian distribution over as in [4]. The parameter used is .

Note that the low multiplicative order of $\alpha$ over $\mathbb{Z}_q$ gives the mapped error distribution structure; this illustrates the setting for the $\alpha$-of-low-order attack, as the mapped error distribution is certainly not uniform.

## 4. Computing Smearing Probabilities

Let $\chi$ be a discrete probability distribution on $[q]$ (where $[q]$ denotes the set $\{1,\dots,q\}$), and let $P_\chi(m,q)$ denote the probability that, when $m$ samples are independently drawn from $\chi$, they will "smear," i.e. each element in $[q]$ will be chosen at least once. When $\chi$ is the uniform distribution, we denote this probability as $P_U(m,q)$, or simply as $P(m,q)$. In this section, we provide practical ways of calculating these probabilities.

### 4.1. An Approximation of Uniform Smearing Probabilities

A result by Erdős and Rényi [8] gives a way to approximate $P(m,q)$ for large values of $q$.

###### Theorem 4.1 (Erdős, Rényi [8]).

Let $U$ be the uniform distribution over $[q]$, and let $X$ be the random variable denoting the number of independent samples one must take from $U$ until picking each element of $[q]$ at least once. Then, for any fixed $c$,

$$\lim_{q\to\infty}\Pr\left(X < q\log q + cq\right) = e^{-e^{-c}}.$$

In our case, $m = q\log q + cq$, so making the substitution $c = m/q - \log q$ gives the formula

$$P(m,q) \approx \exp\left(-q\,e^{-m/q}\right).$$

Although this is a powerful approximation, for some applications it may be preferable to calculate this probability exactly for concrete values of $m$ and $q$. The following sections contribute toward this goal, and also give formulas for the case when the distribution is not uniform.
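The quality of the approximation is easy to check numerically. The sketch below (with illustrative parameters of our choosing) compares it against the exact inclusion-exclusion value of $P(m,q)$:

```python
from math import comb, exp

def smear_prob_exact(m, q):
    """Exact P(m, q) by inclusion-exclusion over the missed elements."""
    return sum((-1) ** k * comb(q, k) * ((q - k) / q) ** m for k in range(q + 1))

def smear_prob_approx(m, q):
    """Erdos-Renyi approximation P(m, q) ~ exp(-q * exp(-m/q))."""
    return exp(-q * exp(-m / q))

q, m = 50, 300   # illustrative sizes, chosen to land in the transition region
assert abs(smear_prob_exact(m, q) - smear_prob_approx(m, q)) < 0.02
```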

### 4.2. A Recursive Formula in m

###### Proposition 4.2.

Let $\chi$ be a discrete probability distribution on $[q]$, with $p_k$ being the probability of picking the $k$th element. Let $\chi/k$ denote the probability distribution on $q-1$ elements after the $k$th element has been removed from $\chi$ and the remaining probabilities have been normalized. Then,

$$P_\chi(m,q) = P_\chi(m-1,q) + \sum_{k=1}^{q} p_k(1-p_k)^{m-1}\cdot P_{\chi/k}(m-1,q-1).$$
###### Proof.

Assume that we choose $m$ independent samples one-by-one from the distribution $\chi$. Let $S$ be the event that the samples smear. Let $A$ be the event that the $m$th sample achieves smearing (i.e. the previous $m-1$ samples cover exactly $q-1$ distinct elements, and the $m$th sample happens to cover the remaining $q$th element). Also, let $B$ be the event that smearing happens within the first $m-1$ samples (i.e. by the time $m-1$ samples have been taken, they already take on all $q$ distinct values). Notice that $S$ is the disjoint union of $A$ and $B$. Therefore,

$$P_\chi(m,q) = \Pr(S) = \Pr(A) + \Pr(B).$$

To calculate $\Pr(A)$, we use the Law of Total Probability to condition on the outcome of the $m$th sample, which we denote by $K$:

$$\Pr(A) = \sum_{k=1}^{q}\Pr(A \mid K=k)\cdot\Pr(K=k) = \sum_{k=1}^{q}\Pr(A \mid K=k)\cdot p_k.$$

To calculate the value of $\Pr(A \mid K=k)$, we notice that the only way that smearing is achieved by the $m$th sample being equal to $k$ is if, first, the previous $m-1$ samples all fall into $[q]\setminus\{k\}$, and, second, if the previous samples smear on $[q]\setminus\{k\}$. Therefore,

$$\Pr(A \mid K=k) = (1-p_k)^{m-1}\cdot P_{\chi/k}(m-1,q-1),$$

where $(1-p_k)^{m-1}$ is the probability that the first $m-1$ samples are contained in $[q]\setminus\{k\}$, and $P_{\chi/k}(m-1,q-1)$ is the probability that, conditioned on this, these samples smear on $[q]\setminus\{k\}$. Hence,

$$\Pr(A) = \sum_{k=1}^{q} p_k(1-p_k)^{m-1}\cdot P_{\chi/k}(m-1,q-1).$$

On the other hand, the probability of event $B$, that smearing is achieved within the first $m-1$ samples, is simply $P_\chi(m-1,q)$, so

$$P_\chi(m,q) = P_\chi(m-1,q) + \sum_{k=1}^{q} p_k(1-p_k)^{m-1}\cdot P_{\chi/k}(m-1,q-1).$$
∎

In the case where $\chi$ is the uniform distribution on $[q]$, this relation becomes greatly simplified:

###### Lemma 4.3.

For the uniform distribution on $[q]$,

$$P(m,q) = P(m-1,q) + P(m-1,q-1)\cdot\left(\frac{q-1}{q}\right)^{m-1}.$$
###### Proof.

We use the result of Proposition 4.2. In the uniform distribution, $p_k = 1/q$ for every $k$. Furthermore, for every $k$, $\chi/k$ is the uniform distribution on $q-1$ elements, so $P_{\chi/k}(m-1,q-1) = P(m-1,q-1)$. Therefore, if $\chi$ is the uniform distribution, then

$$\begin{aligned} P_\chi(m,q) &= P_\chi(m-1,q) + \sum_{k=1}^{q} p_k(1-p_k)^{m-1}\cdot P_{\chi/k}(m-1,q-1)\\ &= P(m-1,q) + \sum_{k=1}^{q}\frac{1}{q}\left(\frac{q-1}{q}\right)^{m-1}\cdot P(m-1,q-1)\\ &= P(m-1,q) + P(m-1,q-1)\cdot\left(\frac{q-1}{q}\right)^{m-1}. \end{aligned}$$
∎

Lemma 4.3, if implemented as a recursive formula, provides a very rapid method of computing $P(m,q)$. The base cases are rather straightforward. If $m < q$, then

$$P(m,q) = 0,$$

since it is impossible to pick $q$ different elements with fewer than $q$ samples. If $m = q$, then

$$P(m,q) = \frac{q!}{q^q}.$$

To see why this is the case, notice that if the number of samples is equal to $q$, then every single sample must be a "success," i.e. pick an element of $[q]$ that has not been picked before. For the first sample, one can pick any of the $q$ elements, so the probability of success is $1$. For the second sample, there are $q-1$ unpicked elements, so the probability of success is $\frac{q-1}{q}$. For the third sample, there are now $q-2$ elements that are not selected, so the probability of success is now $\frac{q-2}{q}$. This continues until the $q$th sample, for which there is only one option left, giving a success probability of $\frac{1}{q}$. Multiplying these probabilities together gives $\frac{q!}{q^q}$. Finally, if $q = 1$ and $m \ge 1$, then $P(m,1) = 1$, as the first sample is always a success, and one success is sufficient in this case.

Computing $P(m,q)$ using the recursive formula of Lemma 4.3 along with these base cases results in the computation of $P(m',q')$ for each $m' \le m$ and $q' \le q$. Hence, the complexity of the recursive computation is on the order of $mq$. Notice, however, that as a result, one computes not just $P(m,q)$, but also $P(m',q')$ for all smaller parameters, which is very useful information to have for choosing parameters for the smearing attack (which will be discussed later).

Figure 3 shows values of $P(m,q)$ for a range of $m$ and $q$, calculated using this method.
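The recursion of Lemma 4.3 with these base cases takes only a few lines; the memoized sketch below (our own illustration, cross-checked against the inclusion-exclusion formula) computes $P(m,q)$ along with all smaller values:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def P(m, q):
    """Uniform smearing probability via the recursion of Lemma 4.3."""
    if m < q:
        return 0.0                       # too few samples to cover q values
    if q == 1:
        return 1.0                       # one success suffices
    if m == q:
        return factorial(q) / q ** q     # every sample must be a "success"
    return P(m - 1, q) + P(m - 1, q - 1) * ((q - 1) / q) ** (m - 1)

def P_incl_excl(m, q):
    """Independent check: inclusion-exclusion over the missed elements."""
    return sum((-1) ** k * comb(q, k) * ((q - k) / q) ** m for k in range(q + 1))

assert abs(P(12, 5) - P_incl_excl(12, 5)) < 1e-9
```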

While Lemma 4.3 provides an effective method of calculating smearing probabilities for uniform distributions, using Proposition 4.2 to calculate smearing probabilities for non-uniform distributions is inefficient, since, to calculate $P_\chi(m,q)$, one must calculate smearing probabilities on all subsets of $[q]$, which makes the complexity on the order of $2^q$. A more efficient method for non-uniform smearing can be achieved by recursion in $q$, rather than recursion in $m$, as described in the next section.

### 4.3. A Recursive Formula in q.

###### Proposition 4.4.

Let $\chi$ be a discrete probability distribution on $[q]$, with $p_k$ being the probability of picking the $k$th element. Let $\chi/q$ denote the probability distribution on $q-1$ elements after the $q$th element has been removed from $\chi$ and the remaining probabilities have been normalized. Then,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1}\binom{m}{k} p_q^k(1-p_q)^{m-k}\cdot P_{\chi/q}(m-k,q-1).$$
###### Proof.

Let $K$ be a random variable denoting the number of times the $q$th element is picked. For smearing to occur, $K$ must be at least $1$ (else the $q$th element will not be chosen), but cannot be greater than $m-q+1$. This is because there are $m$ samples in total, and one needs at least $q-1$ of them to cover the first $q-1$ elements, leaving a maximum of $m-q+1$ available for the $q$th element. Then, by the Law of Total Probability,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1}\Pr(K=k)\cdot\Pr(\text{smearing} \mid K=k).$$

Since the samples are drawn independently, notice that $K$ is binomially distributed with parameters $m$ and $p_q$. Therefore,

$$\Pr(K=k) = \binom{m}{k} p_q^k(1-p_q)^{m-k}.$$

On the other hand, the probability that smearing occurs given that the $q$th element is chosen $k$ times, where $1 \le k \le m-q+1$, is the probability that the remaining $m-k$ samples, taken from $\chi/q$, smear on the remaining $q-1$ elements. Hence,

$$\Pr(\text{smearing} \mid K=k) = P_{\chi/q}(m-k,q-1).$$

Finally,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1}\binom{m}{k} p_q^k(1-p_q)^{m-k}\cdot P_{\chi/q}(m-k,q-1).$$
∎

Using Proposition 4.4 as a basis for a recursive method of calculating $P_\chi(m,q)$ is more efficient than using Proposition 4.2. The base cases are very similar to those in Section 4.2. If $m < q$, then $P_\chi(m,q) = 0$, and if $q = 1$ and $m \ge 1$, then $P_\chi(m,1) = 1$, for the same reasons as in the uniform case. To find $P_\chi(q,q)$, notice that, as in the uniform case, each sample must pick an element of $[q]$ which has not been picked before. Hence, if $q$ samples smear, they must be a permutation of $[q]$. Each such permutation has a probability of $\prod_{k=1}^{q} p_k$, where $p_k$ is the probability of picking the $k$th element, and there are $q!$ such permutations, meaning the probability of smearing is

$$P_\chi(q,q) = q!\cdot\prod_{k=1}^{q} p_k.$$

As expected, when $\chi$ is uniform, $p_k = 1/q$ for every $k$, so the formula simplifies to the one in Section 4.2.

Calculating $P_\chi(m,q)$ recursively using Proposition 4.4 along with these base cases results in the computation of a smearing probability for each $m' \le m$ and $q' \le q$. In turn, the computation of each of these values requires a sum of on the order of $m$ terms. Hence, the complexity of this recursive method is on the order of $m^2 q$. Notice that, as in the recursion-in-$m$ method, this recursion-in-$q$ method results in the calculation of not just $P_\chi(m,q)$, but also the probabilities for all smaller parameters. Of course, an attacker on PLWE would not have prior knowledge of the non-uniform distribution $\chi$, but such information is nevertheless useful in a retrospective analysis of the effectiveness of the smearing attack (discussed later).
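The recursion-in-$q$ of Proposition 4.4 admits an equally short implementation. The sketch below (naive and unmemoized, our own illustration) checks that, on a uniform distribution, it agrees with the inclusion-exclusion value:

```python
from math import comb

def smear_prob(m, probs):
    """P_chi(m, q) for a distribution chi given as a list of q probabilities,
    via the recursion-in-q of Proposition 4.4."""
    q = len(probs)
    if m < q:
        return 0.0
    if q == 1:
        return 1.0
    p_q = probs[-1]
    rest = [p / (1 - p_q) for p in probs[:-1]]   # chi/q: remove and renormalize
    return sum(comb(m, k) * p_q ** k * (1 - p_q) ** (m - k)
               * smear_prob(m - k, rest)
               for k in range(1, m - q + 2))

# for the uniform distribution this must match inclusion-exclusion
m, q = 6, 3
uniform = [1 / q] * q
exact = sum((-1) ** k * comb(q, k) * ((q - k) / q) ** m for k in range(q + 1))
assert abs(smear_prob(m, uniform) - exact) < 1e-9
```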

## 5. The Smearing Attack

Here, we build on the previous sections to present a smearing-based attack on Polynomial Learning With Errors, which we call the “smearing attack.”

### 5.1. The Uniform Distribution Smears the Best

A fundamental principle regarding uniform and non-uniform smearing is that, among all distributions on $[q]$, the uniform distribution maximizes the probability of smearing. We begin with the following lemma.

###### Lemma 5.1.

Let $\chi$ be a distribution on $[q]$, and let $i$, $j$ be two elements of $[q]$. Let $p_i$ and $p_j$ be the probabilities of selecting $i$ and $j$ respectively. Construct $\chi'$ as follows: take $\chi$, and replace the probabilities of $i$ and $j$ each with $\frac{p_i+p_j}{2}$. Then,

$$P_\chi(m,q) \le P_{\chi'}(m,q),$$

with equality if and only if $p_i = p_j$.

###### Proof.

Define $K$ as the random variable representing the number of samples from the distribution which fall into $\{i,j\}$. Notice that $K$ has the same distribution under both $\chi$ and $\chi'$, since $p_i + p_j$ is unchanged. Let $k$ be a specific instance of $K$, $2 \le k \le m-q+2$. Conditioned on $K=k$, smearing on $\{i,j\}$ is independent from smearing on $[q]\setminus\{i,j\}$, and since the probabilities on $[q]\setminus\{i,j\}$ are unchanged between $\chi$ and $\chi'$, to compare $P_\chi(m,q)$ and $P_{\chi'}(m,q)$ it suffices to compare the probabilities of picking at least one of each of $i$ and $j$ for both distributions, restricted to the $k$ samples falling in $\{i,j\}$. For $\chi$, this probability is

$$\begin{aligned} \Pr(\text{picking both } i \text{ and } j \mid K=k) &= 1-\Pr(\text{not picking both } i \text{ and } j \mid K=k)\\ &= 1-\left(\Pr(\text{picking only } i \mid K=k)+\Pr(\text{picking only } j \mid K=k)\right)\\ &= 1-\left(\left(\frac{p_i}{p_i+p_j}\right)^k+\left(\frac{p_j}{p_i+p_j}\right)^k\right)\\ &= 1-\frac{p_i^k+p_j^k}{(p_i+p_j)^k}, \end{aligned}$$

since the probability of picking $i$ out of $\{i,j\}$ is $\frac{p_i}{p_i+p_j}$, and similarly for $j$. For $\chi'$, a similar computation shows that

$$\Pr(\text{picking both } i \text{ and } j \mid K=k) = 1-\frac{\left(\frac{p_i+p_j}{2}\right)^k+\left(\frac{p_i+p_j}{2}\right)^k}{\left(\frac{p_i+p_j}{2}+\frac{p_i+p_j}{2}\right)^k} = 1-\frac{\left(\frac{p_i+p_j}{2}\right)^k+\left(\frac{p_i+p_j}{2}\right)^k}{(p_i+p_j)^k}.$$

It remains, thus, to show that

$$p_i^k+p_j^k \ge \left(\frac{p_i+p_j}{2}\right)^k+\left(\frac{p_i+p_j}{2}\right)^k,$$

with equality if and only if $p_i = p_j$. To show this, consider it as an optimization problem, where we try to minimize the quantity $P_i^k+P_j^k$ subject to the constraint $P_i+P_j = p_i+p_j$. By the method of Lagrange multipliers, setting the gradients proportional gives

$$kP_i^{k-1} = kP_j^{k-1} = \lambda,$$

which implies that $P_i^k+P_j^k$ is minimized at $P_i = P_j = \frac{p_i+p_j}{2}$. Hence,

$$p_i^k+p_j^k \ge \left(\frac{p_i+p_j}{2}\right)^k+\left(\frac{p_i+p_j}{2}\right)^k,$$

with equality if and only if $p_i = p_j$. This implies that

$$P_\chi(m,q \mid K=k) \le P_{\chi'}(m,q \mid K=k)$$

for $2 \le k \le m-q+2$. Then, by the Law of Total Probability,

$$\begin{aligned} P_{\chi'}(m,q)-P_\chi(m,q) &= \left(\sum_{k=2}^{m-q+2} P_{\chi'}(m,q \mid K=k)\cdot\Pr(K=k)\right)-\left(\sum_{k=2}^{m-q+2} P_\chi(m,q \mid K=k)\cdot\Pr(K=k)\right)\\ &= \sum_{k=2}^{m-q+2}\left(P_{\chi'}(m,q \mid K=k)-P_\chi(m,q \mid K=k)\right)\cdot\Pr(K=k)\\ &\ge \sum_{k=2}^{m-q+2} 0\cdot\Pr(K=k)\\ &= 0. \end{aligned}$$

Therefore,

$$P_\chi(m,q) \le P_{\chi'}(m,q),$$

with equality if and only if $p_i = p_j$, as seen from the optimization problem. ∎

###### Theorem 5.2.

Let $\chi$ be a probability distribution over $[q]$, and let $U$ be the uniform distribution over $[q]$. Then,

$$P_\chi(m,q) \le P_U(m,q),$$

with equality if and only if $\chi = U$.

###### Proof.

From $\chi_N$, build a new distribution $\chi_{N+1}$ by selecting two elements in $[q]$ from the previous distribution $\chi_N$, and replacing their two probabilities $p_i$ and $p_j$ each with $\frac{p_i+p_j}{2}$, their average. By Lemma 5.1,

$$P_{\chi_{N+1}}(m,q) \ge P_{\chi_N}(m,q).$$

We construct a sequence $\chi = \chi_0, \chi_1, \chi_2, \dots$ of such averaging steps converging to $U$, so that $P_{\chi_N}(m,q)$ is a non-decreasing, infinite sequence with limit $P_U(m,q)$. This shows that $P_\chi(m,q) \le P_U(m,q)$. Furthermore, if $P_\chi(m,q) = P_U(m,q)$, then the sequence is constant, meaning $P_{\chi_N}(m,q) = P_U(m,q)$ for all $N$. By Lemma 5.1, this is only possible if, for each step, $p_i = p_j$, meaning that $\chi_{N+1} = \chi_N$. Then, $\chi_N$ is a constant sequence with limit $U$, hence $\chi = U$. ∎
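Lemma 5.1 and Theorem 5.2 are easy to observe numerically. Using inclusion-exclusion for non-uniform distributions, the sketch below (with toy distributions of our own choosing) verifies that averaging two probabilities increases the smearing probability, with the uniform distribution on top:

```python
from itertools import combinations

def smear_prob(m, probs):
    """P_chi(m, q) by inclusion-exclusion over the subsets of missed elements:
    sum over S of (-1)^|S| * Pr(all m samples avoid S)^... = (1 - p(S))^m."""
    q = len(probs)
    total = 0.0
    for k in range(q + 1):
        for missed in combinations(range(q), k):
            total += (-1) ** k * (1 - sum(probs[i] for i in missed)) ** m
    return total

m = 8                                # illustrative sample count
skewed = [0.5, 0.3, 0.2]
averaged = [0.4, 0.4, 0.2]           # probabilities of the first two averaged
uniform = [1 / 3] * 3
p1, p2, p3 = (smear_prob(m, d) for d in (skewed, averaged, uniform))
assert p1 < p2 < p3                  # averaging increases smearing probability
```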

This principle is the driving force behind the attack on the Decision-PLWE problem, as described in the following sections.

### 5.2. The Smearing Decision Problem

The foundation of the smearing attack is what we call the "smearing decision": given a large number of samples from some probability distribution over $\mathbb{Z}_q$, decide, with some certainty, whether that distribution is the uniform distribution $U$ or a certain non-uniform distribution $\chi$. We do this in the following way.

1. Choose the parameters $N$, indicating the number of trials to be done, and $m$, the number of samples to be taken per trial. $N$ must be odd, while $m$ must be picked such that $P_U(m,q) > \frac12$ while $P_\chi(m,q) < \frac12$. Since $P_\chi(m,q) \le P_U(m,q)$ for all $m$, and both $P_U(m,q)$ and $P_\chi(m,q)$, as functions of $m$, increase from $0$ toward $1$, such an $m$ exists almost always.

2. For each trial, take $m$ samples, and check whether they smear on $\mathbb{Z}_q$. If smearing happens for more than half of the trials, conclude that the samples were taken from the uniform distribution over $\mathbb{Z}_q$. If, on the other hand, smearing happens for fewer than half of the trials, conclude that the samples were taken from $\chi$.

To give an intuitive explanation of this decision process, consider the two graphs shown in Figure 4 and Figure 5. Figure 4 shows the smearing probability for the uniform distribution (in green) and a non-uniform distribution (in blue) as a function of $m$. The non-uniform distribution is a mapped Gaussian distribution, with parameters $q$, $n$, $\sigma$ (the standard deviation of the initial Gaussian), and $\alpha$. As expected, for both curves, when the number of samples is small, the probability of smearing is $0$ (or close to $0$), while when the number of samples is large, smearing occurs almost always. The uniform and non-uniform curves can really be differentiated only for some intermediate range of $m$.

We describe here a simple example of the smearing decision and the smearing attack in general. Suppose that $U$ and $\chi$ have equal prior probability. Assume that if smearing happens, the distribution is assumed to be uniform, and if smearing does not happen, the distribution is assumed to be non-uniform. Then the probability that the decision is correct is

$$\begin{aligned} \Pr(\text{decision is correct}) &= \Pr(\text{decision is correct} \mid U)\Pr(U)+\Pr(\text{decision is correct} \mid \chi)\Pr(\chi)\\ &= \Pr(\text{smearing happens} \mid U)\cdot\tfrac12+\Pr(\text{smearing doesn't happen} \mid \chi)\cdot\tfrac12\\ &= \tfrac12\left(P_U(m,q)+\left(1-P_\chi(m,q)\right)\right)\\ &= \tfrac12+\tfrac12\left(P_U(m,q)-P_\chi(m,q)\right). \end{aligned}$$

Notice that since $P_\chi(m,q) < P_U(m,q)$, the probability is strictly greater than one half, and increases linearly with the difference in the smearing probabilities of the uniform and non-uniform distributions. The graph of the probabilities for different values of $m$ is shown in Figure 5. As expected, the probability is highest when the distance between the two smearing probability curves is the greatest.
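For an odd number of trials $N$, the success probability of the majority-vote decision can be computed exactly from binomial tails. The sketch below (with illustrative smearing probabilities $P_U = 0.8$ and $P_\chi = 0.4$, values of our own choosing) reproduces the one-trial formula above and shows that additional trials help:

```python
from math import comb

def majority_success(N, p):
    """Probability that more than N/2 of N independent trials succeed,
    each with success probability p (N odd, so there are no ties)."""
    return sum(comb(N, k) * p ** k * (1 - p) ** (N - k)
               for k in range(N // 2 + 1, N + 1))

def decision_correct_prob(N, P_U, P_chi):
    """Probability the N-trial smearing decision is correct when the two
    hypotheses (uniform vs. chi) are equally likely a priori."""
    return 0.5 * majority_success(N, P_U) + 0.5 * majority_success(N, 1 - P_chi)

P_U, P_chi = 0.8, 0.4
single = decision_correct_prob(1, P_U, P_chi)
# one trial recovers the formula 1/2 + (P_U - P_chi)/2
assert abs(single - (0.5 + 0.5 * (P_U - P_chi))) < 1e-12
# with more trials the majority vote is more reliable
assert decision_correct_prob(11, P_U, P_chi) > single
```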

The following proposition formalizes the idea of the smearing decision.

###### Proposition 5.3.

Let $U$ be the uniform distribution over $\mathbb{Z}_q$, and $\chi$ be some non-uniform distribution over $\mathbb{Z}_q$. Let $m$ be an integer such that $P_U(m,q) > \frac12$ and $P_\chi(m,q) < \frac12$. Then, given arbitrarily small $\varepsilon > 0$, there exists an $N$ such that the smearing decision with $N$ trials is correct, in the case where the true distribution is $U$, with probability at least $1-\varepsilon$, and in the case where the true distribution is $\chi$, with probability at least $1-\varepsilon$.

###### Proof.

Consider the case in which the unknown distribution about which the decision is being made is actually $U$. Define $X$ to be a random variable denoting the number of trials for which the samples smear. In this case, for each trial, smearing happens with probability $P_U(m,q)$ (which we denote simply as $P_U$ for convenience), and the trials are independent from one another. Hence, $X$ is binomial with parameters $N$ and $P_U$, so $E(X) = NP_U$ and $\mathrm{Var}(X) = NP_U(1-P_U)$. In this case, the probability that the smearing decision is incorrect is the probability that fewer than $\frac{N}{2}$ trials smear. Using Chebyshev's Inequality, which states that for a random variable $X$ with standard deviation $\sigma$, and any $c > 0$,

$$\Pr\left(|X-E(X)| \ge c\sigma\right) \le \frac{1}{c^2},$$

we conclude that

$$\begin{aligned} \Pr(\text{decision is incorrect} \mid U) &= \Pr\left(X < \tfrac{N}{2}\right)\\ &\le \Pr\left(|NP_U-X| \ge N\left(P_U-\tfrac12\right)\right)\\ &\le \frac{P_U(1-P_U)}{N\left(P_U-\tfrac12\right)^2}. \end{aligned}$$

Hence,

$$\lim_{N\to\infty}\Pr(\text{decision is incorrect} \mid U) = 0,$$

so, in particular, we can choose $N$ large enough such that this quantity is less than $\varepsilon$. On the other hand, in the case where the unknown distribution is $\chi$ (whose smearing probability we denote by $P_\chi$ for convenience), the decision is incorrect whenever smearing happens in more than $\frac{N}{2}$ trials. By a similar argument as above,

 Pr(decision is incorrect|χ) =