1 Introduction
Superresolution concerns recovering resolution beyond the essential size of the point spread function of a sensor. A particularly stylised example concerns multiple point sources which, because of the finite resolution or bandwidth of the sensor, may not be visually distinguishable. Various instances of this problem arise in applications such as astronomy [3], imaging in chemistry, medicine and neuroscience [4, 5, 6, 7, 8, 9, 10, 11], spectral estimation [12, 13], geophysics [14], and system identification [15]. Often in these applications much is known about the point spread function of the sensor, or it can be estimated, and, given such model information, it is possible to identify point source locations with accuracy substantially below the essential width of the sensor point spread function. Recently there has been substantial interest from the mathematical community in posing algorithms and proving superresolution guarantees in this setting; see for instance [16, 17, 18, 19, 20, 21, 22, 23]. Typically these approaches borrow notions from compressed sensing [24, 25, 26]. In particular, the aforementioned contributions to superresolution consider what is known as Total Variation norm minimisation over measures which are consistent with the samples. In this manuscript we show first that, for suitable point spread functions, such as the Gaussian, any discrete nonnegative measure composed of point sources is uniquely determined from its samples, and moreover that this uniqueness is independent of the separation between the point sources. We then show that, by simply imposing nonnegativity, which is typical in many applications, any nonnegative measure suitably consistent with the samples is similarly close to the discrete nonnegative measure which would generate the noise-free samples. These results substantially simplify results by [1, 2] and show that, while regularisers such as Total Variation may be particularly effective, in the setting of nonnegative point sources such regularisers are not necessary to achieve stability.
1.1 Problem setup
Throughout this manuscript we consider nonnegative measures in relation to discrete measures. To be concrete, let be a discrete nonnegative Borel measure supported on the interval , given by
(1) 
Consider also real-valued and continuous functions and let be the possibly noisy measurements collected from by convolving against the sampling functions :
(2) 
where with can represent additive noise. Organising the samples from (2) in matrix notation by letting
(3) 
allows us to state the program we investigate:
(4) 
with . Herein we characterise nonnegative measures consistent with the measurements (2) in relation to the discrete measure (1). That is, we consider any nonnegative Borel measure from Program (4) (an equivalent formulation of Program (4) minimises over all nonnegative measures on without any constraints; in this context, however, we find it somewhat more intuitive to work with Program (4), particularly considering the importance of the case ) and show that any such is close to given by (1) in an appropriate metric, see Theorems 4, 5, 11, 12 and 13. Moreover, we show that the from (1) is the unique solution to Program (4) when , i.e. in the setting of exact samples, for all . Program (4) is particularly notable in that there is no regulariser of beyond imposing nonnegativity and, rather than specify an algorithm to select a which satisfies Program (4), we consider all admissible solutions. The admissible solutions of Program (4) are determined by the source and sample locations, which we denote as
(5) 
respectively, as well as the particular functions used to sample the sparse nonnegative measure from (1). Lastly, we introduce the notions of minimum separation and sample proximity, which we use to characterise solutions of Program (4).
Definition 1.
(Minimum separation and sample proximity) For finite , let be the minimum separation between the points in along with the endpoints of , namely
(6) 
We define the sample proximity to be the number such that, for each source location , there exists a closest sample location to with
(7) 
We describe the nearness of solutions to Program (4) in terms of an additional parameter associated with intervals around the sources ; that is we let and define intervals as:
(8) 
where , and set and to be the complements of these sets with respect to . In order to make the most general results of Theorems 11 and 12 more interpretable, we present them in Section 1.2 for the case of being shifted Gaussians.
1.2 Main results simplified to Gaussian window
In this section we consider to be shifted Gaussians with centres at the source locations , specifically
(9) 
We might interpret (9) as the “point spread function” of the sensing mechanism being a Gaussian window and the sample locations in the sense that
(10) 
evaluates the “filtered” copy of at locations where denotes convolution.
As an illustration, Figure 1 shows the discrete measure in blue for , the continuous function in red, and the noisy samples at the sample locations represented as the black circles.
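A toy simulation in the spirit of Figure 1 can be sketched as follows. All numeric values, and the use of a gridded nonnegative least-squares solve as a stand-in for the feasibility Program (4), are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical instance: source locations, amplitudes, widths and grid
# sizes below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
sources = np.array([0.3, 0.55, 0.8])     # true support, in I = [0, 1]
amps = np.array([1.0, 0.5, 0.8])         # nonnegative amplitudes
samples = np.linspace(0.0, 1.0, 30)      # sample locations s_j
sigma = 0.05                             # Gaussian window width

def window(s, t):
    # shifted Gaussian window, as in (9)
    return np.exp(-((s - t) ** 2) / sigma ** 2)

# noisy measurements y_j, as in (2)
y = window(samples[:, None], sources[None, :]) @ amps
y = y + 1e-3 * rng.standard_normal(len(y))

# One hedged way to probe Program (4): discretise I on a fine grid and
# look for any nonnegative weight vector consistent with y up to the
# noise level delta.
grid = np.linspace(0.0, 1.0, 400)
G = window(samples[:, None], grid[None, :])
weights, residual = nnls(G, y)
print(residual)  # feasibility holds when the residual is at most delta
```

Any nonnegative `weights` with small residual is an admissible solution in the sense of Program (4) restricted to the grid; the theorems below quantify how close all such solutions must be to the true discrete measure.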
Conditions 2.
(Gaussian window conditions) When the window function is a Gaussian , we require its width and the source and sampling locations from (5) to satisfy the following conditions:

Samples define the interval boundaries: and ,

Samples near sources: for every , there exists a pair of samples , one on each side of , such that and , for sufficiently small ; this is quantified in Lemma 24.

Sources away from the boundary: for every and ,

Minimum separation of sources: and , where the minimum separation of the sources is defined in Definition 1.
The four properties in Conditions 2 can be interpreted as follows: Property 1 imposes that the sources are within the interval defined by the minimum and maximum sample; Property 2 ensures that there is a pair of samples near each source, which translates into a sampling density condition in relation to the minimum separation between sources and in particular requires the number of samples ; Property 3 is a technical condition ensuring that sources are not overly near the sampling boundary; and Property 4 relates the minimum separation between the sources to the width of the Gaussian window.
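To make the four properties concrete, a small checker can be sketched. The function name, the constants `sep_const` and `eps`, and the specific inequalities are hypothetical placeholders; the precise constants come from Lemma 24 and Definition 1 and are not reproduced here.

```python
import numpy as np

def check_conditions(sources, samples, sigma, sep_const=1.0, eps=0.05):
    """Illustrative check of the four properties in Conditions 2."""
    t, s = np.sort(np.asarray(sources, float)), np.sort(np.asarray(samples, float))
    ok = True
    # Property 1: samples define the interval boundaries
    ok &= (s[0] <= t[0]) and (t[-1] <= s[-1])
    # Property 2: a sample within eps on each side of every source
    for ti in t:
        ok &= np.any((s >= ti - eps) & (s < ti)) and np.any((s > ti) & (s <= ti + eps))
    # Property 3: sources away from the sampling boundary
    ok &= (t[0] - s[0] >= sigma) and (s[-1] - t[-1] >= sigma)
    # Property 4: minimum separation of sources relative to the window width
    ok &= np.min(np.diff(t)) >= sep_const * sigma
    return bool(ok)

print(check_conditions([0.3, 0.55, 0.8], np.linspace(0.0, 1.0, 50), sigma=0.05))
```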
We can now present our main results on the robustness of Program (4) as they apply to the Gaussian window; these are Theorem 4, which follows from Theorem 11, and Theorem 5, which follows from Theorem 12. However, before stating the stability results, it is important to note that, in the setting of exact samples, , the solution of Program (4) is unique when .
Proposition 3.
Proposition 3 states that Program (4) successfully localises the impulses present in given only measurements when are shifted Gaussians whose centres are in . Theorems 4 and 5 extend this uniqueness condition to show that any solution to Program (4) with is proportionally close to the unique solution when .
Theorem 4.
(Wasserstein stability of Program (4) for Gaussian) Let and consider a sparse nonnegative measure supported on . Consider also an arbitrary increasing sequence and, for positive , let be defined in (9), which form according to (3). If and Conditions 2 hold, then Program (4) with is stable in the sense that
(11) 
for all where is the generalised Wasserstein distance as defined in (19) and the exact expression of is given in the proof (see (64) in Section 3.4.2). In particular, for and , we have:
(12) 
if
(13) 
where are universal constants and is given by (59) in Section 3.4.1.
The central feature of Theorem 4 is that the Wasserstein distance between any solution to Program (4) and the unique solution for is proportional to and in the form (11). The particular form of is not believed to be sharp; in particular, the exponential dependence on in (12) follows from bounding the determinant of a matrix similar to (see (128)) by a lower bound on the minimum eigenvalue to the power. The scaling with respect to is a feature of in Program (4) not being normalised with respect to which, for and fixed, decays with due to the increased localisation of the Gaussian. Note that the dependence is a feature of the proof, and the which minimises the bound in (11) is proportional to to some power as determined by from (12). Theorem 4 follows from the more general result of Theorem 11, whose proof is given in Section 3 and the appendices.

As an alternative to showing stability of Program (4) in the Wasserstein distance, we also prove in Theorem 5 that any solution to Program (4) is locally consistent with the discrete measure in terms of local averages over the intervals given in (8). Moreover, for Theorem 5, we make Property 2 of Conditions 2 more transparent by using the sample proximity from Definition 1; that is, defined in Conditions 2 is related to the sample proximity from Definition 1 by .
Theorem 5.
(Average stability of Program (4) for Gaussian: source proximity dependence) Let and consider a k-sparse nonnegative measure supported on with sample locations as given in (5), and, for positive , let be as defined in (9). If Conditions 2 hold then, in the presence of additive noise, Program (4) is stable and, for any solution of Program (4) with :
(14)  
(15) 
where the exact expressions of and are given in the proof (see (70) in Section 3.4.3), provided that , and satisfy (27). In particular, for , and , we have and:
(16) 
Above, are universal constants and is given by (60) in Section 3.4.1.
The bounds in Theorems 4 and 5 are intentionally similar, and though their proofs make use of the same bounds, they have some fundamental differences. While both (11) and (14) have the same proportionality to and , the role of in particular differs substantially in that Theorem 5 considers averages of over . Also different in their form is the dependence on and in Theorems 4 and 5 respectively. The presence of in Theorem 5 is a feature of the proof which we expect can be removed and replaced with by proving any solution of Program (4) is necessarily bounded due to the sampling proximity condition of Definition 1. It is also worth noting that (14) avoids an unnatural dependence present in (11). Theorem 5 follows from the more general result of Theorem 12, whose proof is given in Section 3.4.3.
1.3 Organisation and summary of contributions
Organisation:
The majority of our contributions were presented in the context of Gaussian windows in Section 1.2. These are particular examples of a more general theory for windows that form a Chebyshev system, commonly abbreviated as T-system, see Definition 7. A T-system is a collection of continuous functions that loosely behave like algebraic monomials. It is a general and widely used concept in classical approximation theory [27, 28, 29] that has also found applications in modern signal processing [1, 2]. The framework we use for these more general results is presented in Section 2.1, the results themselves are presented in Section 2.2, and their proofs are sketched in Section 3. Proofs of the lemmas used to develop the results are deferred to the appendices.
Summary of contributions:
We begin discussing results for general window functions with Proposition 8, which establishes that, for exact samples, namely , a T-system, and measurements, the unique solution to Program (4) with is the sparse measure given in (1). In other words, we show that the measurement operator in (3) is an injective map from sparse nonnegative measures on to when form a T-system. No minimum separation between impulses is necessary here and need only be continuous. As detailed in Section 1.4, Proposition 8 is more general, and its derivation is far simpler and more intuitive, than what the current literature offers. Most importantly, no explicit regularisation is needed in Program (4) to encourage sparsity: the solution is unique.
Our main contributions are given in Theorems 11 and 12, namely that solutions to Program (4) with are proportionally close to the unique solution (1) with ; these theorems consider nearness in terms of the Wasserstein distance and local averages respectively. Furthermore, Theorem 11 allows to be a general nonnegative measure, and shows that solutions to Program (4) must be proportionally close both to how well might be approximated by a sparse measure, , with minimum source separation , and in the distance between and solutions to Program (4). These theorems require and, loosely speaking, that the measurement apparatus forms a T*-system, which is an extension of a T-system allowing the inclusion of an additional function which may be discontinuous, and enforcing certain properties of minors of . To derive the bounds in Theorems 4 and 5 we show that the shifted Gaussians given in (9), augmented with a particular piecewise constant function, form a T*-system.
Lastly, in Section 2.2.1, we consider an extension of Theorem 12 where the minimum separation between sources is smaller than . We extend the intervals from (8) to in (31), where intervals which overlap are combined. The resulting Theorem 13 establishes that, while sources closer than may not be identifiable individually by Program (4), the local average over of both in (1) and any solution to Program (4) will be proportionally within of each other.
To summarise, the results and analysis in this work simplify, generalise and extend the existing results for grid-free and nonnegative superresolution. These extensions follow by virtue of the nonnegativity constraint in Program (4), rather than the common approach based on the TV norm as a sparsifying penalty. We further put these results in the context of the existing literature in Section 1.4.
1.4 Comparison with other techniques
We show in Proposition 8 that a nonnegative sparse discrete measure can be exactly reconstructed from samples (provided that the atoms form a T-system, a property satisfied by Gaussian windows for example) by solving a feasibility problem. This result is in contrast to earlier results in which a TV norm minimisation problem is solved. De Castro and Gamboa [2] proved exact reconstruction using TV norm minimisation, provided the atoms form a homogeneous T-system (one which includes the constant function). An analysis of TV norm minimisation based on T-systems was subsequently given by Schiebinger et al. in [1], where it was also shown that Gaussian windows satisfy the given conditions. We show in this paper that the TV norm can be entirely dispensed with in the case of nonnegative superresolution. Moreover, the analysis of Program (4) is substantially simpler than its alternatives. In particular, Proposition 8 for noise-free superresolution follows immediately from standard results in the theory of T-systems. The fact that Gaussian windows form a T-system is immediately implied by well-known results in T-system theory, in contrast to the heavy calculations involved in [1].
While neither of the above works considers the noisy setting or model mismatch, Theorems 11 and 12 in our work show that solutions to the nonnegative superresolution problem which are both stable to measurement noise and robust to model inaccuracy can also be obtained by solving a feasibility program. The most closely related prior work is by Doukhan and Gamboa [30], in which the authors bound the maximum distance between a sparse measure and any other measure satisfying noise-corrupted versions of the same measurements. While [30] does not explicitly consider reconstruction using the TV norm, the problem is posed over probability measures, that is, those with TV norm equal to one. Accuracy is captured according to the Prokhorov metric. It is shown that, for sufficiently small noise, the Prokhorov distance between the measures is bounded by , where is the noise level and depends upon properties of the window function. In contrast, we do not make any total variation restrictions on the underlying sparse measure, we extend to consider model inaccuracy, and we consider different error metrics (the generalised Wasserstein distance and the local averaged error).

More recent results on noisy nonnegative superresolution all assume that an optimisation problem involving the TV norm is solved. Denoyelle et al. [21] consider the nonnegative superresolution problem with a minimum separation between source locations. They analyse a TV norm-penalised least squares problem and show that a sparse discrete measure can be stably approximated provided the noise scales with , showing that the minimum separation condition exhibits a certain stability to noise. In the gridded setting, stability results for noisy nonnegative superresolution were obtained for Fourier convolution kernels in [31], under the assumption that the spike locations satisfy a Rayleigh regularity property, and these results were extended to more general convolution kernels in [32].
Superresolution in the more general setting of signed measures has been extensively studied. In this case, the story is rather different, and stable identification is only possible if sources satisfy some separation condition. The required minimum separation is dictated by the resolution of the sensing system, e.g., the Rayleigh limit of the optical system or the bandwidth of the radar receiver. Indeed, it is impossible to resolve extremely close sources with equal amplitudes of opposite signs; they nearly cancel out, contributing virtually nothing to the measurements. A nonexhaustive list of references is [33, 17, 18, 19, 20, 22, 23].
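This cancellation effect can be illustrated numerically. Assuming a Gaussian window with illustrative parameters (all values below are hypothetical), two nearby sources of opposite sign produce measurements that are nearly zero, while the same configuration with nonnegative amplitudes remains clearly visible:

```python
import numpy as np

# Illustrative parameters: window width, sample grid and source locations
# are hypothetical choices, not taken from the paper.
sigma = 0.1
samples = np.linspace(0.0, 1.0, 50)

def measure(t1, t2, a1, a2):
    # measurements from two Gaussian-windowed point sources
    g = lambda t: np.exp(-((samples - t) ** 2) / sigma ** 2)
    return a1 * g(t1) + a2 * g(t2)

signed = measure(0.50, 0.51, 1.0, -1.0)  # opposite signs, separation 0.01
nonneg = measure(0.50, 0.51, 1.0, 1.0)   # nonnegative amplitudes

# The signed pair nearly cancels; the nonnegative pair does not.
print(np.max(np.abs(signed)), np.max(np.abs(nonneg)))
```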
In Theorem 12 we give an explicit dependence of the error on the sampling locations. This result relies on local windows, hence it requires samples near each source, and we give a condition that this distance must satisfy. The condition that there are samples near each source in order to guarantee reconstruction also appears in a recent manuscript on sparse deconvolution [34]. However, this work relies on the minimum separation and differentiability of the convolution kernel, which we overcome in Theorem 12.
2 Stability of Program (4) to inexact samples for Tsystems
The main results stated in the introduction, Theorems 4 and 5, are for Gaussian windows, which allows them to omit technical details of the more general results of Theorems 11–13. These more general results apply to windows that form Chebyshev systems, see Definition 7, with an extension to T*-systems, see Definition 9, which allows for explicit control of the stability of solutions to Program (4). These Chebyshev systems and other technical notions needed are introduced in Section 2.1, and our most general contributions are presented using these properties in Section 2.2.
2.1 Chebyshev systems and sparse measures
Before establishing stability of Program (4) to inexact samples, we show that Program (4) with , that is, with in (2) having , has from (1) as its unique solution once . This result relies on forming a Chebyshev system, commonly abbreviated T-system [27].
Definition 7.
(Chebyshev, T-system [27]) Real-valued and continuous functions form a T-system on the interval if the matrix is nonsingular for any increasing sequence .
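A minimal numeric illustration of Definition 7: for the monomials, the matrix in the definition is a Vandermonde matrix, whose determinant is nonzero (indeed positive) for any increasing sequence of nodes. The node values below are arbitrary.

```python
import numpy as np

# Any increasing sequence of nodes in the interval (illustrative values).
taus = np.array([0.1, 0.25, 0.4, 0.7, 0.9])

# V[j, i] = taus[j] ** i is the matrix of Definition 7 for the monomials.
V = np.vander(taus, increasing=True)
det = np.linalg.det(V)

# Closed form for the Vandermonde determinant: prod over pairs j > i of
# (tau_j - tau_i), strictly positive for increasing nodes.
expected = np.prod([taus[j] - taus[i]
                    for j in range(len(taus)) for i in range(j)])
print(det, expected)
```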
Examples of T-systems include the monomials on any closed interval of the real line. In fact, T-systems generalise monomials and in many ways preserve their properties. For instance, any “polynomial” of a T-system has at most distinct zeros on . Likewise, given distinct points on , there exists a unique polynomial in that interpolates these points. Note also that linear independence of is a necessary condition for forming a T-system, but not a sufficient one. Let us emphasise that the T-system is a broad and general concept with a range of applications in classical approximation theory and modern signal processing. In the context of superresolution, for example, translated copies of the Gaussian window, as given in (9), and many other measurement windows form a T-system on any interval. We refer the interested reader to [27, 29] for the role of T-systems in classical approximation theory and to [35] for their relationship to totally positive kernels.
2.1.1 Sparse nonnegative measure uniqueness from exact samples
Our analysis based on T-systems has been inspired by the work of Schiebinger et al. [1], where the authors use properties of T-systems to construct the dual certificate for the spike deconvolution problem and to show uniqueness of the solution to the TV norm minimisation problem without the need for a minimum separation. The theory of T-systems has also been used in the same context by De Castro and Gamboa in [2]. However, both [1] and [2] focus exclusively on the noise-free problem, while we extend the T-system approach to the noisy case as well.
Our work, in part, simplifies the prior analysis considerably by using readily available results on T-systems, and we go one step further to show uniqueness of the solution of the feasibility problem, which removes the need for TV norm regularisation in the results of Schiebinger et al. [1]; this simplification in the presence of exact samples is given in Proposition 8.
Proposition 8.
Proposition 8 states that Program (4) successfully localises the impulses present in given only measurements when form a T-system on . Note that only need to be continuous and no minimum separation is required between the impulses. Moreover, as discussed in Section 1.4, the noise-free analysis here is substantially simpler as it avoids the introduction of TV norm minimisation, and it is more insightful in that it shows that the result follows not from the sparsifying property of TV minimisation, but rather from the nonnegativity constraint and the T-system property, see Section 3.1.
2.1.2 T*-systems in terms of source and sample configuration
While Proposition 8 implies that T-systems ensure unique nonnegative solutions, more is needed to ensure stability of these results to inexact samples, that is, . This is to be expected, as T-systems imply invertibility of the linear system in (3) for any configuration of sources and samples as given in (5), but do not limit the condition number of such a system. We control the condition number of by imposing further conditions on the source and sample configuration, such as those stated in Conditions 2, which is analogous to imposing conditions ensuring that there exists a dual polynomial sufficiently bounded away from zero in regions away from sources, see Section 2.2. In particular, we extend the notion of a T-system in Definition 7 to a T*-system, which includes conditions on samples at the boundary of the interval, additional conditions on the window function, and a condition ensuring that there exist samples sufficiently near sources, as given by the notation (8) but stated in terms of a new variable so as to highlight its different role here.
Definition 9.
(T*-system) For an even integer , real-valued functions form a T*-system on if the following holds for every when is sufficiently small. For any increasing sequence such that

, ,

all points except exactly three, namely , , and, say, , belong to ,

every contains an even number of points,
we have that

the determinant of the matrix is positive, and

the magnitudes of all minors of along the row containing approach zero at the same rate when (a function approaches zero at the rate when ; see, for example, [36], page 44).
Let us briefly discuss T*-systems as an alternative to T-systems in Definition 7. The key property of a T-system for our purpose is that an arbitrary polynomial of a T-system on has at most zeros. Polynomials of a T*-system may not have such a property: T-systems allow arbitrary configurations of points, while T*-systems only ensure that the determinant in condition 1 of Definition 9 is positive for configurations where the majority of points in are paired in . However, as the later analysis shows, condition 1 in Definition 9 is designed for constructing dual certificates for Program (4). We will also see later that condition 2 in Definition 9 is meant to exclude trivial polynomials that do not qualify as dual certificates. Lastly, rather than any increasing sequence , Definition 9 only considers subsets that mainly cluster around the support , whereas in our use all but one entry in is taken from the set of samples ; this is only intended to simplify the burden of verifying whether a family of functions forms a T*-system. While the first and third bullet points in Definition 9 require at least two samples per interval , as well as samples which define the interval endpoints, which gives a sampling complexity , we typically require to include additional samples, , because the location of is unknown. In fact, as is unknown, the third bullet point imposes a sampling density proportional to the inverse of the minimum separation of the sources . The additional point is not taken from the set ; it instead acts as a free parameter to be used in the dual certificate. In Figure 2, we show an example of points which satisfy the conditions in Definition 9 for sources.
We will state some of our more general stability results for solutions of Program (4) in terms of the generalised Wasserstein distance [37] between and , both nonnegative measures supported on , defined as
(19) 
where the infimum is over all nonnegative Borel measures on such that . Here, is the total variation of measure , akin to the norm in finite dimensions, and is the standard Wasserstein distance, namely
(20) 
where the infimum is over all measures on that produce and as marginals. In a sense, extends to allow for calculating the distance between measures with different masses. (In [37], the authors consider the p-Wasserstein distance, where popular choices of are and ; in our work, we only use the 1-Wasserstein distance.)
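For two discrete measures of equal total mass on the line, the classical 1-Wasserstein distance in (20) reduces to the area between their cumulative distribution functions, which gives a simple way to compute it. The sketch below assumes equal masses; handling unequal masses is exactly what the generalised distance (19) adds. All function and variable names are illustrative.

```python
import numpy as np

def w1_discrete(locs_a, w_a, locs_b, w_b):
    """1-Wasserstein distance between equal-mass discrete measures on R,
    computed as the integral of |F_a - F_b| over the line."""
    locs_a, w_a = np.asarray(locs_a, float), np.asarray(w_a, float)
    locs_b, w_b = np.asarray(locs_b, float), np.asarray(w_b, float)
    grid = np.sort(np.concatenate([locs_a, locs_b]))  # CDFs are constant between atoms

    def cdf(locs, w):
        return np.array([w[locs <= g].sum() for g in grid])

    gaps = np.diff(grid)
    diff = np.abs(cdf(locs_a, w_a) - cdf(locs_b, w_b))[:-1]
    return float(np.sum(diff * gaps))

# Moving a unit spike from 0.2 to 0.25 costs exactly 0.05.
mu_t, mu_a = np.array([0.2, 0.6]), np.array([1.0, 1.0])
nu_t, nu_a = np.array([0.25, 0.6]), np.array([1.0, 1.0])
print(w1_discrete(mu_t, mu_a, nu_t, nu_a))
```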
Moreover, in some of our most general results we consider the extension to where need not be a discrete measure, see Theorem 11. In that setting, we introduce an intermediate discrete measure which approximates in the metric. That is, given an integer and positive , let be a sparse separated measure supported on of size and with such that, for ,
(21) 
where the infimum is over all sparse separated nonnegative measures supported on and the parameter allows for near projections of onto the space of sparse separated measures.
Lastly, we also assume that the measurement operator in (3) is Lipschitz continuous, namely there exists such that
(22) 
for every pair of measures supported on .
2.2 Stability of Program (4)
Equipped with the definitions of T- and T*-systems, Definitions 7 and 9 respectively, we are able to characterise any solution to Program (4) for which form a T-system under suitable source and sample configurations (5). We control the stability to inexact measurements by introducing two auxiliary functions in Definition 10, which require the dual polynomials and associated with Program (4) to be at least away from the necessary constraints for all values of at least away from the sources. Specifically, for and defined below, we will require that and for all .
Definition 10.
(Dual polynomial separators) Let be a bounded function with , be positive constants, and the neighbourhoods as defined in (8). We then define
(23) 
Moreover, let be an arbitrary sign pattern. We define as
(24) 
We defer the introduction of dual polynomials and and the precise role of the above dual polynomial separators to Section 3, but state our most general results characterising the solutions to Program (4) in terms of these separators.
Theorem 11.
(Wasserstein stability of Program (4) for a T-system) Consider a nonnegative measure supported on and assume that the measurement operator is Lipschitz, see (3) and (22). Consider a sparse nonnegative discrete measure supported on , fix , see (6), and consider the functions and as defined in Definition 10. For , suppose that

form a T-system on ,

form a T*-system on , and

form a T*-system on for any sign pattern .
Let be a solution of Program (4) with
(25) 
Then there exist vectors and such that
(26)
where the minimum is over all sign patterns and the vectors above are the vectors of coefficients of the dual polynomials and associated with Program (4), see Lemmas 16 and 17 in Section 3 for their precise definitions.
Theorem 4 follows from Theorem 11 by considering the Gaussian windows stated in (9), which are known to form a T-system [27], and introducing Conditions 2 on the source and sample configuration (5) such that the conditions of Theorem 11 can be proved and the dual coefficients and bounded; the details of these proofs and bounds are deferred to Section 3 and the appendices.
The particular form of and in Theorem 11, constant away from the support of , is purely to simplify the presentation and proofs. Note also that the error depends both on the noise level and the residual , not unlike the standard results in finitedimensional sparse recovery and compressed sensing [24, 38]. In particular, when , we approach the setting of Proposition 8, where we have uniqueness of sparse nonnegative measures from exact samples.
Note that the noise level and the residual are not independent; that is, specifies confidence in the samples and the model for how the samples are taken, while reflects nearness to the model of discrete measures. Corollary 6 shows that the parameter can be removed, for shifted Gaussians, in the setting where is discrete, that is , in which case is bounded by .
The more general variant of Theorem 5 follows from Theorem 12 by introducing alternative conditions on the source and sample configuration and omitting the need for the functions , which is the cause of the unnatural dependence in Theorem 4.
Theorem 12.
(Average stability for Program (4) for a T-system) Let be a solution of Program (4) and consider the function as defined in Definition 10. Suppose that:

form a T-system on ,

form a T*-system on , and

and from Definition 1 satisfy
(27)
Then, for any and for all ,
(28)  
(29) 
where:

,

is the Lipschitz constant of ,
Theorem 12 bounds the difference between the average over the interval of any solution to Program (4) and that of the discrete measure , whose average is simply . The condition that satisfies (27) is used to ensure that the matrix from (30) is strictly diagonally dominant; it relies on the windows being sufficiently localised about zero. Though Theorem 12 explicitly requires that the closest samples to each source lie within distance , this can be achieved without knowing the locations of the sources by placing the samples uniformly at intervals of , which gives a sampling complexity of . Lastly, a similar bound on the integral of over is given by Lemma 16 in Section 3.
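The uniform-sampling remark can be checked directly: a grid of spacing delta over the interval places a sample within delta / 2 of every possible source location, so the sample proximity of Definition 1 holds with no knowledge of the sources. The interval, spacing and number of random sources below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.01
# Uniform sampling grid on [0, 1] with spacing delta: about 1 / delta samples.
samples = np.linspace(0.0, 1.0, 101)

# Arbitrary, unknown source locations.
sources = rng.uniform(0.0, 1.0, size=20)

# Worst-case distance from a source to its nearest sample.
proximity = np.max([np.min(np.abs(samples - t)) for t in sources])
print(proximity)  # at most delta / 2, up to floating-point error
```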
2.2.1 Clustering of indistinguishable sources
Theorems 11 and 12 give uniform guarantees for all sources in terms of the minimum separation condition , which measures the worst proximity of sources. One might imagine that, for example, if all but two sources are sufficiently well separated, then Theorem 12 might hold for the sources that are well separated; moreover, assuming is fixed, then if two sources and with magnitudes and are closer than , namely , we might imagine that a variant of Theorem 12 might hold but with sources and approximated with source near and and with .
In this section we extend Theorem 12 to this setting by considering fixed and alternative intervals forming a partition of , such that each contains a group of consecutive sources (with weights respectively) which are within at most of each other. Define
(31) 
for , so that we have
(32) 
Theorem 13.
(Average stability for Program (4): grouped sources) Let be a solution of Program (4) and let be partitioned as described by (31). If the samples are placed uniformly at intervals of , where satisfies (27) with , then there exist with such that
(33) 
where the constants are the same as in (12) and the matrix is
Note that Lemma 16 still holds if we replace any group of sources from an interval with some , so the bound from Lemma 16 on remains valid without modification.
As an example source configuration where Theorem 13 might be applied, consider the situation where the source locations comprising are drawn uniformly at random in , where we have (from [39], page 42, Exercise 22)
Then, the cumulative distribution function is
and so the distribution of is