On alternative quantization for doubly weighted approximation and integration over unbounded domains

It is known that for a ρ-weighted $L_q$-approximation of single-variable functions $f$ with the $r$th derivative in a ψ-weighted $L_p$ space, the minimal error of approximations that use $n$ samples of $f$ is proportional to $\|\omega^{1/\alpha}\|_{L_1}^{\alpha}\,\|f^{(r)}\psi\|_{L_p}\,n^{-r+(1/p-1/q)_+}$, where $\omega=\rho/\psi$ and $\alpha=r-1/p+1/q$. Moreover, the optimal sample points are determined by quantiles of $\omega^{1/\alpha}$. In this paper, we show how the error of best approximations changes when the sample points are determined by a quantizer $\kappa$ other than $\omega$. Our results can be applied in situations when an alternative quantizer has to be used because $\omega$ is not known exactly or is too complicated to handle computationally. The results for $q=1$ are also applicable to ρ-weighted integration over unbounded domains.



1 Introduction

In various applications, continuous objects (signals, images, etc.) are represented (or approximated) by their discrete counterparts; that is, we deal with quantization. From a pure mathematics point of view, quantization often leads to approximating functions from a given space by step functions or, more generally, by (quasi-)interpolating piecewise polynomials of a certain degree. Then it is important to know which quantizer should be used, or how to select the break points (knots), to make the error of approximation as small as possible.

It is well known that for approximation on a compact interval, in the space of real-valued functions $f$ whose $(r-1)$st derivative is absolutely continuous and whose $r$th derivative satisfies $\|f^{(r)}\|_{L_p}<+\infty$, the choice of an optimal quantizer is not a big issue, since equidistant knots lead to approximations with optimal error

$$c\,\big\|f^{(r)}\big\|_{L_p}\;n^{-r+(1/p-1/q)_+}, \qquad (1)$$

where $c$ depends only on $r$, $p$, and $q$, and where $(t)_+=\max(t,0)$. The problem becomes more complicated if we switch to weighted approximation on unbounded domains. A generalization of (1) to this case was given in [5], and it reads as follows. Assume for simplicity that the domain is $D=[0,+\infty)$. Let $\rho$ and $\psi$ be two positive and integrable weight functions. For a positive integer $r$ and $1\le p,q\le+\infty$, consider the $\rho$-weighted $L_q$-approximation in the linear space $W^r_p(\psi)$ of functions $f$ with absolutely (locally) continuous $(r-1)$st derivative and such that the $\psi$-weighted $L_p$ norm of $f^{(r)}$ is finite, i.e., $\|f^{(r)}\psi\|_{L_p}<+\infty$. Note that the spaces $W^r_p(\psi)$ have been introduced in [7], and the role of $\psi$ is to moderate their size.

Denote

$$\omega=\frac{\rho}{\psi} \qquad\text{and}\qquad \alpha=r-\frac1p+\frac1q, \qquad (2)$$

and suppose that $\rho$ and $\omega$ are nonincreasing on $[0,+\infty)$, and that

$$\int_0^{+\infty}\omega^{1/\alpha}(x)\,dx\;<\;+\infty. \qquad (3)$$

It was shown in [5, Theorem 1] that, under these assumptions, one can construct approximations using $n$ knots with $\rho$-weighted error at most

$$c\,\big\|\omega^{1/\alpha}\big\|_{L_1}^{\alpha}\,\big\|f^{(r)}\psi\big\|_{L_p}\;n^{-r+(1/p-1/q)_+}.$$

This means that if (3) holds true, then the upper bound on the worst-case error is proportional to $n^{-r+(1/p-1/q)_+}$. This convergence rate is optimal, and a corresponding lower bound implies that if (3) is not satisfied, then the rate $n^{-r+(1/p-1/q)_+}$ cannot be reached (see [5, Theorem 3]).

The optimal knots $0=x_0<x_1<\cdots<x_n$ are determined by quantiles of $\omega^{1/\alpha}$; to be more precise,

$$\int_0^{x_i}\omega^{1/\alpha}(t)\,dt\;=\;\frac{i}{n}\int_0^{+\infty}\omega^{1/\alpha}(t)\,dt,\qquad i=0,1,\ldots,n. \qquad (4)$$

In order to use the optimal quantizer (4), one has to know $\omega$; otherwise one has to rely on some approximation of it. Moreover, even if $\omega$ is known, it may be a complicated and/or non-monotonic function, and therefore difficult to handle computationally. Driven by this motivation, the purpose of the present paper is to generalize the results of [5] even further, to see how the quality of best approximations changes if the optimal quantizer $\omega$ is replaced in (4) by another quantizer $\kappa$.
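To make the quantile rule concrete, here is a minimal numerical sketch of computing knots from a given quantizer by inverting its normalized cumulative integral, as in (4) and in (7) below. The helper name `quantile_knots`, the truncation point `x_max`, and the grid size are our illustrative choices, not part of the paper.

```python
import numpy as np

def quantile_knots(g, n, x_max, m=200_000):
    """Knots 0 < x_1 < ... < x_{n-1} with int_0^{x_i} g = (i/n) int_0^inf g.

    g is the integrable, positive function whose quantiles we take
    (g = omega**(1/alpha) for the optimal rule (4)); the unbounded
    domain [0, inf) is truncated at x_max, where g must be negligible.
    """
    x = np.linspace(0.0, x_max, m)
    cdf = np.cumsum(g(x)) * (x[1] - x[0])   # crude cumulative integral of g
    cdf /= cdf[-1]                          # normalize so that cdf[-1] = 1
    return np.interp(np.arange(1, n) / n, cdf, x)

# Example: omega(x) = exp(-x**2) with alpha = 2, i.e. g(x) = exp(-x**2 / 2).
knots = quantile_knots(lambda x: np.exp(-x**2 / 2), n=10, x_max=8.0)
```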

A general answer to the aforementioned question is given in Theorems 1 and 3 of Section 2. They show, respectively, tight (up to a constant) upper and lower bounds for the error when a quantizer $\kappa$ with $\kappa\ne\omega$ is used instead of $\omega$ to determine the knots. To be more specific, define, for $p>q$,

$$\mathcal{E}_\omega(\kappa)\;=\;\left(\frac{\int_D\big(\omega(x)/\kappa(x)\big)^{\beta}\,\kappa^{1/\alpha}(x)\,dx}{\int_D\kappa^{1/\alpha}(x)\,dx}\right)^{1/\beta} \qquad\text{with}\qquad \beta=\Big(\frac1q-\frac1p\Big)^{-1}, \qquad (5)$$

and, for $p\le q$,

$$\mathcal{E}_\omega(\kappa)\;=\;\sup_{x\in D}\frac{\omega(x)}{\kappa(x)}. \qquad (6)$$

(Note that (5) and (6) are consistent for $p=q$.) If $\mathcal{E}_\omega(\kappa)<+\infty$, then the best achievable error is proportional to

$$\mathcal{E}_\omega(\kappa)\,\Big(\int_D\kappa^{1/\alpha}(x)\,dx\Big)^{\alpha}\,\big\|f^{(r)}\psi\big\|_{L_p}\;n^{-r+(1/p-1/q)_+}.$$

This means, in particular, that for the error to behave as $n^{-r+(1/p-1/q)_+}$ it is sufficient (but not necessary) that $\kappa$ decreases no faster than $\omega$ as $x\to+\infty$. For instance, if the optimal quantizer $\omega$ is Gaussian, then the optimal rate is still preserved if its exponential substitute $\kappa(x)=\mathrm{e}^{-\lambda x}$ with arbitrary $\lambda>0$ is used. It also shows that, in case $\omega$ is not known exactly, it is much safer to overestimate it than to underestimate it; see also Example 5.

The use of a quantizer $\kappa$ as above results in approximations that are worse than the optimal approximations by the factor of

$$\mathcal{F}\;=\;\mathcal{E}_\omega(\kappa)\left(\frac{\int_D\kappa^{1/\alpha}(x)\,dx}{\int_D\omega^{1/\alpha}(x)\,dx}\right)^{\alpha}.$$

In Section 3, we calculate the exact values of this factor for various combinations of the weights $\rho$, $\psi$ and the quantizer $\kappa$, including Gaussian, exponential, log-normal, logistic, and Student's $t$ weights. It turns out that in many cases the factor $\mathcal{F}$ is quite small, so that the loss in accuracy of approximation is well compensated by the simplification of the weights.
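The factor can also be evaluated numerically. The sketch below implements (5), (6), and the factor formula exactly as reconstructed above; since those displays are our reading of the garbled source, the code should be taken as illustrative. The name `loss_factor` and the quadrature discretization `(x, w)` are ours.

```python
import numpy as np

def loss_factor(omega, kappa, alpha, p, q, x, w):
    """Factor by which the quantizer kappa loses against the optimal
    quantizer omega, following (5), (6) and the factor formula as
    reconstructed above (our reading, not verbatim from the paper).

    x, w: nodes and weights of a quadrature rule discretizing the domain;
    they should be chosen so that all integrands are resolved.
    """
    om, ka = omega(x), kappa(x)
    norm_ratio = (np.sum(w * ka**(1 / alpha)) /
                  np.sum(w * om**(1 / alpha)))**alpha
    if p <= q:                              # (6): sup of omega / kappa
        E = np.max(om / ka)
    else:                                   # (5): a normalized L_beta mean
        beta = 1.0 / (1.0 / q - 1.0 / p)
        dens = w * ka**(1 / alpha)
        E = (np.sum(dens * (om / ka)**beta) / np.sum(dens))**(1 / beta)
    return E * norm_ratio
```

This helper is reused in the numerical illustrations of Section 3.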

The results for $q=1$ are also applicable to problems of approximating the $\rho$-weighted integrals

$$\int_D f(x)\,\rho(x)\,dx.$$

More precisely, the worst case errors of quadratures that are integrals of the corresponding piecewise interpolation polynomials approximating $f$ are the same as the errors for the $\rho$-weighted $L_1$-approximations. Hence their errors, proportional to $n^{-r}$, are (modulo a constant) the best possible among all quadratures. These results are especially important for unbounded domains, e.g., $D=[0,+\infty)$ or $D=\mathbb{R}$. For such domains, the integrals are often approximated by Gauss-Laguerre rules and Gauss-Hermite rules, respectively; see, e.g., [1, 3, 6]. However, the efficiency of those rules requires smooth integrands, and the corresponding results are only asymptotic. Moreover, it is not clear which Gaussian rules should be used when $\psi$ is not a constant function. But even for constant $\psi$, it is likely that the worst case errors (with respect to the classes considered here) of Gaussian rules are much larger than $n^{-r}$, since the Weierstrass theorem holds only for compact $D$. A very interesting extension of Gaussian rules to functions with singularities has been proposed in [2]. However, the results of [2] are also asymptotic, and it is not clear how the proposed rules behave for functions from the spaces $W^r_p(\psi)$. In the present paper, we deal with functions of bounded smoothness $r$ and provide worst-case error bounds that are minimal. We stress here that the regularity degree $r$ is a fixed but arbitrary positive integer. The paper [4] proposes a different approach to weighted integration over unbounded domains; however, it is restricted to regularity $r=1$ only.
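As a toy illustration of this quadrature construction for $q=1$ (integrating $\rho$ against a piecewise interpolant of $f$ over the knots), here is a sketch for the piecewise-linear case $r=2$; the per-piece numerical integration stands in for the exact integrals of the interpolating polynomials.

```python
import numpy as np

def weighted_quad(f, rho, knots, m=200):
    """Approximate int f(x) rho(x) dx by integrating rho against a
    piecewise-linear interpolant of f on the given knots (the r = 2
    case, using function values only; purely illustrative)."""
    total = 0.0
    for a, b in zip(knots[:-1], knots[1:]):
        t = np.linspace(a, b, m)
        pf = f(a) + (f(b) - f(a)) * (t - a) / (b - a)   # linear piece on [a, b]
        total += np.trapz(rho(t) * pf, t)               # int_a^b rho * pf
    return total
```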

The paper is organized as follows. In the following section, we present ideas and results concerning alternative quantizers; the main results are Theorems 1 and 3. In Section 3, we apply our results to some specific cases, for which numerical values of the factor $\mathcal{F}$ are calculated.

2 Optimal versus alternative quantizers

We consider the $\rho$-weighted $L_q$-approximation in the space $W^r_p(\psi)$ defined in the introduction; however, in contrast to [5], we do not assume that the weights $\rho$ and $\psi$ are nonincreasing. Although the results of this paper pertain to domains $D$ being an arbitrary interval, to begin with we assume that $D=[0,+\infty)$. We will explain later what happens in the general case, including $D=\mathbb{R}$.

Let the knots $0=x_0<x_1<\cdots<x_n$ be determined by a nonincreasing function (quantizer) $\kappa$ satisfying $0<\int_0^{+\infty}\kappa^{1/\alpha}(x)\,dx<+\infty$; i.e.,

$$\int_0^{x_i}\kappa^{1/\alpha}(t)\,dt\;=\;\frac{i}{n}\int_0^{+\infty}\kappa^{1/\alpha}(t)\,dt,\qquad i=0,1,\ldots,n. \qquad (7)$$

Let $T_nf$ be the piecewise Taylor approximation of $f$ with break points (7),

$$T_nf(x)\;=\;\sum_{j=0}^{r-1}\frac{f^{(j)}(x_{i-1})}{j!}\,(x-x_{i-1})^j \qquad\text{for}\quad x\in[x_{i-1},x_i),\quad 1\le i\le n.$$
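A direct implementation of this approximation (under our reconstruction: a Taylor polynomial of degree $r-1$ expanded at the left end point of each piece) might look as follows; `derivs_at` is an assumed callable returning $f(t), f'(t), \ldots, f^{(r-1)}(t)$.

```python
import numpy as np
from math import factorial

def piecewise_taylor(derivs_at, r, knots, x):
    """Evaluate T_n f at the points x: on each [x_{i-1}, x_i) use the
    Taylor polynomial of degree r - 1 expanded at x_{i-1}."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    idx = np.clip(np.searchsorted(knots, x, side='right') - 1,
                  0, len(knots) - 2)
    out = np.zeros_like(x)
    for i in np.unique(idx):
        d = derivs_at(knots[i])                 # f, f', ..., f^{(r-1)} at x_{i-1}
        h = x[idx == i] - knots[i]
        out[idx == i] = sum(d[j] * h**j / factorial(j) for j in range(r))
    return out
```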

We remind the reader of the definition of the quantity $\mathcal{E}_\omega(\kappa)$ in (5) and (6), which will be of importance in the following theorem.

Theorem 1

Suppose that $\mathcal{E}_\omega(\kappa)<+\infty$. Then for every $f\in W^r_p(\psi)$ we have

$$\big\|(f-T_nf)\,\rho\big\|_{L_q}\;\le\;c\;\mathcal{E}_\omega(\kappa)\,\Big(\int_0^{+\infty}\kappa^{1/\alpha}(x)\,dx\Big)^{\alpha}\,\big\|f^{(r)}\psi\big\|_{L_p}\;n^{-r+(1/p-1/q)_+}, \qquad (8)$$

where $c$ depends only on $r$, $p$, and $q$.

  • We proceed as in the proof of [5, Theorem 1] to get that for

    Since (cf. [5, p.36])

    the error is upper bounded as follows:

    (9)

Now we maximize the right-hand side of (9) subject to

    After the substitution

    this is equivalent to

    maximizing  subject to .

    We have two cases:

    For , we set and use Jensen’s inequality to obtain

    Hence the maximum equals and it is attained at for and otherwise. In this case, the maximum is upper bounded by which means that

    For we use the method of Lagrange multipliers and find this way that the maximum equals

    and is attained at

    Since by the probabilistic version of Jensen’s inequality with density we have

    This implies that

    and finally

    as claimed since .

Remark 2

If derivatives of $f$ are difficult to compute or to sample, a piecewise Lagrange interpolation can be used instead, as in [5]. Then the result is slightly weaker than that of the present Theorem 1; namely (cf. [5, Theorem 2]), there exists $c_1$, depending only on $r$, $p$, and $q$, such that the bound (8) holds with $c$ replaced by $c_1$.
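A sketch of this sample-based variant: on each piece, interpolate $f$ at $r$ points by a polynomial of degree $r-1$. The equispaced nodes per piece are our simplification; the construction in [5] may differ in details.

```python
import numpy as np

def piecewise_lagrange(f, knots, r, x):
    """Piecewise Lagrange interpolation using samples of f only: on each
    piece [x_{i-1}, x_i], interpolate f at r equispaced nodes."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    idx = np.clip(np.searchsorted(knots, x, side='right') - 1,
                  0, len(knots) - 2)
    out = np.empty_like(x)
    for i in np.unique(idx):
        nodes = np.linspace(knots[i], knots[i + 1], r)
        coef = np.polyfit(nodes, f(nodes), r - 1)    # interpolating polynomial
        out[idx == i] = np.polyval(coef, x[idx == i])
    return out
```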

We now show that the error estimate of Theorem 1 cannot be improved.

Theorem 3

There exists $c>0$, depending only on $r$, $p$, and $q$, with the following property. For any approximation $A_n$ that uses only information about function values and/or derivatives (up to order $r-1$) at the knots given by (7), we have

$$\sup_{\|f^{(r)}\psi\|_{L_p}\le 1}\big\|(f-A_nf)\,\rho\big\|_{L_q}\;\ge\;c\;\mathcal{E}_\omega(\kappa)\,\Big(\int_0^{+\infty}\kappa^{1/\alpha}(x)\,dx\Big)^{\alpha}\;n^{-r+(1/p-1/q)_+}. \qquad (10)$$
  • We fix $n$ and consider first the weighted approximation on an interval on which the weights are step functions with break points given by (7). Let $\rho_i$ and $\psi_i$ be, correspondingly, the values of $\rho$ and $\psi$ on the successive intervals. Then we clearly have that

    For simplicity, we write Let be functions supported on such that for and

    (11)

    We also normalize the functions accordingly. We stress that a positive constant in (11) exists and depends only on $r$ and $p$.

    Since all these functions vanish at the knots, the 'sup' (worst case error) in (10) is bounded from below by

    where we used the fact that For such we have

    Thus we arrive at a maximization problem that we already had in the proof of Theorem 1.

    For we have

    while for we have

    as claimed.

    For arbitrary weights, we replace $\rho$ and $\psi$ with the corresponding step functions and pass to the limit.

We now comment on what happens when the domain $D$ is different from $[0,+\infty)$. It is clear that Theorems 1 and 3 remain valid for $D$ being a compact interval, say $D=[a,b]$ with $-\infty<a<b<+\infty$. Consider now $D=\mathbb{R}$.

In this case, we assume that $\kappa$ is nonincreasing on $[0,+\infty)$ and nondecreasing on $(-\infty,0]$. We have knots $x_0<x_1<\cdots<x_n$, which are determined by the condition

$$\int_{-\infty}^{x_i}\kappa^{1/\alpha}(t)\,dt\;=\;\frac{i}{n}\int_{-\infty}^{+\infty}\kappa^{1/\alpha}(t)\,dt \qquad (12)$$

(where $0\le i\le n$). Note that (12) automatically implies $x_0=-\infty$ and $x_n=+\infty$. The piecewise Taylor approximation is also correspondingly defined for negative arguments. With these modifications, the corresponding Theorems 1 and 3 have literally the same formulation for $D=\mathbb{R}$ as for $D=[0,+\infty)$.
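The knot computation extends to $D=\mathbb{R}$ verbatim, per (12); a sketch (again with an illustrative truncation of the unbounded domain):

```python
import numpy as np

def quantile_knots_R(g, n, x_max, m=400_000):
    """Interior knots x_1 < ... < x_{n-1} on R with
    int_{-inf}^{x_i} g = (i/n) int_R g, as in (12); here g stands for
    kappa**(1/alpha), truncated to [-x_max, x_max] where g is negligible.
    For even g the knots come out symmetric about 0."""
    x = np.linspace(-x_max, x_max, m)
    cdf = np.cumsum(g(x)) * (x[1] - x[0])
    cdf /= cdf[-1]
    return np.interp(np.arange(1, n) / n, cdf, x)
```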

Observe that the error estimates of Theorems 1 and 3 for an arbitrary quantizer $\kappa$ differ from the error for the optimal quantizer $\omega$ by the factor

$$\mathcal{F}\;=\;\mathcal{E}_\omega(\kappa)\left(\frac{\int_D\kappa^{1/\alpha}(x)\,dx}{\int_D\omega^{1/\alpha}(x)\,dx}\right)^{\alpha}.$$

From this definition it is clear that for any $c>0$ we have $\mathcal{F}(c\,\kappa)=\mathcal{F}(\kappa)$; that is, $\mathcal{F}$ is scale invariant. This quantity satisfies the following estimates.

Proposition 4

We have

$$1\;\le\;\mathcal{F}\;\le\;\Big(\sup_{x\in D}\frac{\omega(x)}{\kappa(x)}\Big)\left(\frac{\int_D\kappa^{1/\alpha}(x)\,dx}{\int_D\omega^{1/\alpha}(x)\,dx}\right)^{\alpha}. \qquad (13)$$

The rightmost inequality is actually an equality whenever $p\le q$.

  • Assume without loss of generality that so that Then for any and

    which equals for For we have so that we can use Jensen’s inequality to get

    The remaining inequality is obvious.

Although the main idea of this paper is to replace by another function that is easier to handle, our results allow a further interesting observation that is illustrated in the following example.

Example 5

Let $D=\mathbb{R}$ and suppose that the optimal quantizer is Gaussian,

$$\omega(x)=\mathrm{e}^{-x^2/(2\sigma^2)}.$$

Suppose that instead of $\omega$ we use its Gaussian substitute $\kappa(x)=\mathrm{e}^{-x^2/(2\tilde\sigma^2)}$ with variance $\tilde\sigma^2=\gamma\,\sigma^2$. Since both quantizers are Gaussian, the factor $\mathcal{F}$ can be expressed as a function of the ratio $\gamma$ alone. The graph of $\mathcal{F}$ versus $\gamma$ is drawn in Fig. 1. It follows that it is safer to overestimate the actual variance $\sigma^2$ than to underestimate it.

Figure 1: The factor $\mathcal{F}$ plotted versus the variance ratio $\gamma$ from Example 5.
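The shape of Fig. 1 can be reproduced numerically with the `loss_factor` sketch from Section 1 (this relies on our reconstructed formulas, and the parameters $r$, $p$, $q$ below are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

r, p, q = 2, np.inf, 1.0                     # a p > q example, so alpha = 3
alpha = r - 1 / p + 1 / q
x = np.linspace(-25, 25, 400_001)
w = np.full_like(x, x[1] - x[0])             # rectangle-rule weights
omega = lambda t: np.exp(-t**2 / 2)          # optimal quantizer, sigma = 1
gammas = np.linspace(0.7, 3.0, 47)
F = [loss_factor(omega, lambda t: np.exp(-t**2 / (2 * g)), alpha, p, q, x, w)
     for g in gammas]
plt.semilogy(gammas, F)                      # factor explodes as gamma drops
plt.xlabel("variance ratio gamma"); plt.ylabel("factor")
plt.show()
```

The curve equals 1 at $\gamma=1$ and rises much more steeply to the left, which is the numerical face of "overestimate rather than underestimate".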

3 Special cases

Below we apply our results to specific weights $\rho$ and $\psi$, and specific values of $p$ and $q$.

3.1 Gaussian $\rho$ and $\psi$

Consider Gaussian weights

$$\rho(x)=\mathrm{e}^{-x^2/(2\sigma_\rho^2)},\qquad \psi(x)=\mathrm{e}^{-x^2/(2\sigma_\psi^2)},\qquad x\in\mathbb{R},$$

for positive $\sigma_\rho$ and $\sigma_\psi$. Since $\omega=\rho/\psi$, for $\omega$ to be integrable we have to have $\sigma_\rho<\sigma_\psi$, and then

$$\omega(x)=\mathrm{e}^{-x^2/(2\sigma^2)} \qquad\text{with}\qquad \frac{1}{\sigma^2}=\frac{1}{\sigma_\rho^2}-\frac{1}{\sigma_\psi^2}.$$

We propose using the exponential quantizer

$$\kappa(x)=\mathrm{e}^{-\lambda|x|},\qquad \lambda>0.$$

Then $\kappa^{1/\alpha}(x)=\mathrm{e}^{-\lambda|x|/\alpha}$, and the points satisfying (12),

$$\int_{-\infty}^{x_i}\kappa^{1/\alpha}(t)\,dt\;=\;\frac{i}{n}\int_{-\infty}^{+\infty}\kappa^{1/\alpha}(t)\,dt,$$

are given by

$$x_i=\frac{\alpha}{\lambda}\,\ln\frac{2i}{n}\quad\text{for }i\le\frac n2,\qquad x_i=-x_{n-i}\quad\text{for }i>\frac n2. \qquad (14)$$

In particular, we have $x_{n/2}=0$ whenever $n$ is even.
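In code, the knots (14) take one line per branch (the formula is our reconstruction; `lam` is the exponential rate $\lambda$):

```python
import numpy as np

def exponential_knots(lam, alpha, n):
    """Interior knots x_1, ..., x_{n-1} for kappa(x) = exp(-lam * |x|)
    on R, per (14): logarithmic on the left half, mirrored on the right."""
    i = np.arange(1, n)
    return np.where(i <= n / 2,
                    (alpha / lam) * np.log(2 * i / n),
                    -(alpha / lam) * np.log(2 * (n - i) / n))
```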

We now consider the two cases $p\le q$ and $p>q$ separately:

3.1.1 Case of $p\le q$

Clearly

$$\sup_{x\in\mathbb{R}}\frac{\omega(x)}{\kappa(x)}\;=\;\exp\Big(\frac{\lambda^2\sigma^2}{2}\Big)$$

and

$$\int_{\mathbb{R}}\kappa^{1/\alpha}(x)\,dx=\frac{2\alpha}{\lambda},\qquad \int_{\mathbb{R}}\omega^{1/\alpha}(x)\,dx=\sigma\sqrt{2\pi\alpha}.$$

Hence, for $p\le q$ we have that

$$\mathcal{F}\;=\;\exp\Big(\frac{\lambda^2\sigma^2}{2}\Big)\left(\frac{2\alpha}{\lambda\,\sigma\sqrt{2\pi\alpha}}\right)^{\alpha},$$

which is minimized by the choice $\lambda=\sqrt{\alpha}/\sigma$, giving $\mathcal{F}=(2\mathrm{e}/\pi)^{\alpha/2}$. Note that this value does not depend on $\sigma_\rho$ and $\sigma_\psi$ (as long as $\sigma_\rho<\sigma_\psi$). For instance, we have the following rounded values: $\mathcal{F}\approx 1.32,\ 1.73,\ 2.28,\ 2.99$ for $\alpha=1,2,3,4$, respectively.
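Such values can be cross-checked numerically against the closed form $(2\mathrm{e}/\pi)^{\alpha/2}$ with the `loss_factor` sketch from Section 1 (everything here rests on our reconstructed formulas; $\sigma$ and the grid are illustrative):

```python
import numpy as np

sigma = 1.0
x = np.linspace(-20, 20, 400_001)
w = np.full_like(x, x[1] - x[0])
omega = lambda t: np.exp(-t**2 / (2 * sigma**2))
for alpha in (1.0, 2.0, 3.0, 4.0):           # alpha = r when p = q
    lam = np.sqrt(alpha) / sigma             # the minimizing choice of lambda
    F = loss_factor(omega, lambda t: np.exp(-lam * np.abs(t)),
                    alpha, p=2, q=2, x=x, w=w)
    print(alpha, round(F, 3), round((2 * np.e / np.pi) ** (alpha / 2), 3))
```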

3.1.2 Case of $p>q$

We have now

$$\mathcal{E}_\omega(\kappa)\;=\;\left(\frac{\lambda}{2\alpha}\int_{\mathbb{R}}\Big(\frac{\omega(x)}{\kappa(x)}\Big)^{\beta}\,\kappa^{1/\alpha}(x)\,dx\right)^{1/\beta},$$

where $\beta=\big(\tfrac1q-\tfrac1p\big)^{-1}$, and where the integral can be expressed in terms of the error function. This gives
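For a numeric evaluation of this case (the closed form involves the error function), one can again use the `loss_factor` sketch with $p>q$; parameters below are illustrative:

```python
import numpy as np

r, p, q = 2, 4.0, 2.0                    # p > q, so beta = (1/q - 1/p)^(-1) = 4
alpha = r - 1 / p + 1 / q                # = 2.25
sigma = 1.0
lam = np.sqrt(alpha) / sigma             # same choice of lambda as before
x = np.linspace(-20, 20, 400_001)
w = np.full_like(x, x[1] - x[0])
F = loss_factor(lambda t: np.exp(-t**2 / (2 * sigma**2)),
                lambda t: np.exp(-lam * np.abs(t)), alpha, p, q, x, w)
print(F)
```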