1 Introduction
In various applications, continuous objects (signals, images, etc.) are represented, or approximated, by their discrete counterparts; that is, we deal with quantization. From a pure mathematics point of view, quantization often amounts to approximating functions from a given space by step functions or, more generally, by (quasi-)interpolating piecewise polynomials of a certain degree. It is then important to know which quantizer should be used, or how to select the break points (knots), to make the approximation error as small as possible. It is well known that for approximation on a compact interval in spaces of real-valued functions the choice of an optimal quantizer is not a big issue, since equidistant knots lead to approximations with optimal error
(1) 
where depends only on , , and , and where . The problem becomes more complicated if we switch to weighted approximation on unbounded domains. A generalization of (1) to this case was given in [5], and it reads as follows. Assume for simplicity that the domain is . Let be two positive and integrable weight functions. For a positive integer and , consider the weighted approximation in the linear space of functions with absolutely (locally) continuous st derivative and such that the weighted norm of is finite, i.e., . Note that the spaces were introduced in [7], and the role of is to moderate their size.
Denote
(2) 
and suppose that and are nonincreasing on and that
(3) 
It was shown in [5, Theorem 1] that then one can construct approximations using knots with weighted error at most
This means that if (3) holds true, then the upper bound on the worst-case error is proportional to . This convergence rate is optimal, and a corresponding lower bound implies that if (3) is not satisfied, then the rate cannot be reached (see [5, Theorem 3]).
The optimal knots
are determined by quantiles of ; to be more precise,
(4) 
In order to use the optimal quantizer (4), one has to know ; otherwise one has to rely on some approximation of . Moreover, even if is known, it may be a complicated and/or nonmonotonic function, and therefore difficult to handle computationally. Driven by this motivation, the purpose of the present paper is to generalize the results of [5] even further, to see how the quality of the best approximations changes if the optimal quantizer in (4) is replaced by another quantizer .
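As an illustration of the quantile construction in (4), the following sketch computes knots so that each cell carries equal mass under an assumed density; a standard Gaussian stands in for the actual density, and its CDF is inverted by bisection. The density and all names here are illustrative assumptions, not the paper's own definitions.

```python
import math

def gaussian_cdf(x):
    # CDF of the standard normal density (an illustrative stand-in for the
    # density whose quantiles define the optimal knots in (4))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def quantile_knots(cdf, n, lo=-50.0, hi=50.0, tol=1e-12):
    # knots x_1 < ... < x_{n-1} with cdf(x_i) = i/n, found by bisection
    knots = []
    for i in range(1, n):
        a, b = lo, hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if cdf(m) < i / n:
                a = m
            else:
                b = m
        knots.append(0.5 * (a + b))
    return knots

knots = quantile_knots(gaussian_cdf, 8)  # 7 interior knots, symmetric about 0
```

For a symmetric density the resulting knots are symmetric about the origin and cluster where the density is large, which is exactly the behavior the quantile condition (4) enforces.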
A general answer to the aforementioned question is given in Theorems 1 and 3 of Section 2. They show, respectively, tight (up to a constant) upper and lower bounds for the error when a quantizer with instead of is used to determine the knots. To be more specific, define
(5) 
and
(6) 
(Note that (5) and (6) are consistent for .) If then the best achievable error is proportional to
This means, in particular, that for the error to behave as , it is sufficient (but not necessary) that decreases no faster than as . For instance, if the optimal quantizer is Gaussian, then the optimal rate is still preserved if its exponential substitute with arbitrary is used. It also shows that, in case is not known exactly, it is much safer to overestimate it than to underestimate it; see also Example 5.
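The intuition behind the Gaussian/exponential remark can be made concrete with a small numerical comparison: for the same number of equal-mass cells, a heavier-tailed exponential (Laplace) substitute places its extreme knots further out than a Gaussian quantizer, so it "covers" the Gaussian tail. Both densities and the parameter `alpha` are assumptions chosen for illustration.

```python
import math

def gaussian_quantile(p, lo=-50.0, hi=50.0, tol=1e-12):
    # invert the standard normal CDF by bisection (assumed optimal quantizer)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if cdf(m) < p:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

def laplace_quantile(p, alpha=1.0):
    # quantile of the exponential substitute (alpha/2) * exp(-alpha*|x|)
    return math.log(2.0 * p) / alpha if p < 0.5 else -math.log(2.0 * (1.0 - p)) / alpha

n = 16
gauss_knots = [gaussian_quantile(i / n) for i in range(1, n)]
laplace_knots = [laplace_quantile(i / n) for i in range(1, n)]
```

The outermost Laplace knots lie beyond the outermost Gaussian ones, which is the "overestimating the tail" situation the text describes as safe.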
The use of a quantizer as above results in approximations that are worse than the optimal approximations by the factor of
In Section 3, we calculate the exact values of this factor for various combinations of weights , and , including: Gaussian, exponential, lognormal, logistic, and Student. It turns out that in many cases is quite small, so that the loss in accuracy of approximation is well compensated by simplification of the weights.
The results for are also applicable to the problem of approximating weighted integrals
More precisely, the worst-case errors of quadratures that are integrals of the corresponding piecewise interpolation polynomials approximating the functions are the same as the errors of the weighted approximations. Hence their errors, proportional to , are (modulo a constant) the best possible among all quadratures. These results are especially important for unbounded domains, e.g., or . For such domains, the integrals are often approximated by Gauss–Laguerre rules and Gauss–Hermite rules, respectively; see, e.g., [1, 3, 6]. However, the efficiency of those rules requires smooth integrands, and the corresponding results are asymptotic. Moreover, it is not clear which Gaussian rules should be used when is not a constant function. But even for , it is likely that the worst-case errors (with respect to ) of Gaussian rules are much larger than , since the Weierstrass theorem holds only for compact . A very interesting extension of Gaussian rules to functions with singularities has been proposed in [2]. However, the results of [2] are also asymptotic, and it is not clear how the proposed rules behave for functions from the spaces . In the present paper, we deal with functions of bounded smoothness () and provide worst-case error bounds that are minimal. We stress that the regularity degree is a fixed but arbitrary positive integer. The paper [4] proposes a different approach to weighted integration over unbounded domains; however, it is restricted to regularity only.
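A minimal instance of such a quadrature is the degree-0 case: integrating the piecewise-constant interpolant of the integrand over equal-mass cells of the weight yields an equal-weight rule at the quantile midpoints. The sketch below uses an assumed exponential weight e^{-x} on the half-line, whose quantile function is available in closed form; the exact value of the weighted integral of f(x) = x is 1.

```python
import math

def quantile_rule(f, quantile, n):
    # integrate the piecewise-constant interpolant of f at the probability
    # midpoints of n equal-mass cells of the weight: an equal-weight rule
    return sum(f(quantile((i + 0.5) / n)) for i in range(n)) / n

# assumed weight rho(x) = exp(-x) on [0, inf); its quantile is closed form
q = lambda p: -math.log(1.0 - p)
approx = quantile_rule(lambda x: x, q, 1000)  # exact weighted integral is 1
```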
2 Optimal versus alternative quantizers
We consider weighted approximation in the space as defined in the introduction; however, in contrast to [5], we do not assume that the weights and are nonincreasing. Although the results of this paper pertain to domains that are arbitrary intervals, to begin with we assume that
We will explain later what happens in the general case, including
Let the knots be determined by a nonincreasing function (quantizer) satisfying i.e.,
(7) 
Let be a piecewise Taylor approximation of with breakpoints (7),
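The piecewise Taylor construction can be sketched numerically as follows. This is an assumed variant (degree r-1 expansion at the left breakpoint of each cell, applied to f = exp on [0, 1]); the precise construction in [5] may differ in details, but the h^r error behavior it exhibits is the one used throughout this section.

```python
import math

def piecewise_taylor(derivs, knots, r):
    # On each cell [t_i, t_{i+1}) use the degree-(r-1) Taylor polynomial of f
    # expanded at the left breakpoint t_i; derivs(k, x) returns f^{(k)}(x).
    def approx(x):
        i = 0
        while i + 1 < len(knots) - 1 and x >= knots[i + 1]:
            i += 1
        t = knots[i]
        return sum(derivs(k, t) * (x - t) ** k / math.factorial(k)
                   for k in range(r))
    return approx

# example: f = exp on [0, 1]; halving the cell width should shrink the sup
# error by roughly 2**r (here r = 3)
errs = {}
for n in (4, 8):
    knots = [j / n for j in range(n + 1)]
    g = piecewise_taylor(lambda k, x: math.exp(x), knots, r=3)
    errs[n] = max(abs(math.exp(j / 1000) - g(j / 1000)) for j in range(1001))
```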
We remind the reader of the definition of the quantity in (5) and (6), which will be of importance in the following theorem.
Theorem 1
Suppose that
Then for every we have
(8) 
where

We proceed as in the proof of [5, Theorem 1] to get that for
Since (cf. [5, p.36])
the error is upper bounded as follows:
(9)
Now we maximize the right-hand side of (9) subject to
After the substitution
this is equivalent to
maximizing subject to . We have two cases:
For , we set and use Jensen’s inequality to obtain
Hence the maximum equals and it is attained at for and otherwise. In this case, the maximum is upper bounded by which means that
For we use the method of Lagrange multipliers and find this way that the maximum equals
and is attained at
Since , by the probabilistic version of Jensen's inequality with density , we have
This implies that
and finally
as claimed since .
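The two cases of the maximization above can be checked on a toy model with an assumed exponent c and coefficients a_i: the maximum of Σ a_i x_i^c over the simplex sits at a vertex when c ≥ 1 (convexity, i.e., the Jensen case), and at x_i proportional to a_i^{1/(1-c)} when 0 < c < 1 (the Lagrange-multiplier case), where the maximal value is (Σ a_i^{1/(1-c)})^{1-c}.

```python
def closed_form_max(a, c):
    # max of sum_i a_i * x_i**c over x_i >= 0, sum x_i = 1:
    # c >= 1: convex objective, maximum at a vertex, value max(a);
    # 0 < c < 1: Lagrange gives x_i ~ a_i**(1/(1-c)), value below
    if c >= 1:
        return max(a)
    return sum(ai ** (1.0 / (1.0 - c)) for ai in a) ** (1.0 - c)

def grid_max(a, c, steps=100):
    # brute-force check on a grid over the 2-simplex (three variables)
    best = 0.0
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            x = (i / steps, j / steps, (steps - i - j) / steps)
            best = max(best, sum(ai * xi ** c for ai, xi in zip(a, x)))
    return best

a = (2.0, 1.0, 0.5)  # assumed coefficients for illustration
```

The grid search never exceeds the closed form, and approaches it as the grid is refined, in both regimes.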
Remark 2
Theorem 3
There exists , depending only on and , with the following property: for any approximation that uses only information about function values and/or derivatives (up to order ) at the knots given by (7), we have
(10) 

We fix and consider first the weighted approximation on , assuming that on this interval the weights are step functions with break points given by (7). Let and be, respectively, the values of and on the successive intervals . Then we clearly have
For simplicity, we write . Let be functions supported on such that for and
(11)
We also normalize so that . We stress that a positive in (11) exists and depends only on and .
Since all vanish at the knots , the ‘sup’ (worst-case error) in (10) is bounded from below by
where we used the fact that . For such we have
Thus we arrive at a maximization problem that we already had in the proof of Theorem 1.
For we have
while for we have
as claimed.
For arbitrary weights, we replace and with the corresponding step functions with
and pass to the limit with .
We now comment on what happens when the domain is different from . It is clear that Theorems 1 and 3 remain valid for being a compact interval, say with . Consider now .
In this case, we assume that is nonincreasing on and nondecreasing on . We have knots , which are determined by the condition
(12) 
(where ). Note that (12) automatically implies . The piecewise Taylor approximation is also defined correspondingly for negative arguments. With these modifications, Theorems 1 and 3 have literally the same formulation for and for .
Observe that the error estimates of Theorems 1 and 3 for arbitrary differ from the error for optimal by the factor
From this definition it is clear that for any we have
This quantity satisfies the following estimates.
Proposition 4
We have
(13) 
The rightmost inequality is actually an equality whenever .

Assume without loss of generality that , so that . Then for any and
which equals for . For we have , so that we can use Jensen's inequality to get
The remaining inequality is obvious.
Although the main idea of this paper is to replace by another function that is easier to handle, our results allow a further interesting observation that is illustrated in the following example.
3 Special cases
Below we apply our results to specific weights , and specific values of and .
3.1 Gaussian and
Consider ,
for positive and . Since
for we have to have , and then
We propose using
Then , and the points satisfying (12)
are given by
(14) 
In particular, we have
We now consider the two cases and separately:
3.1.1 Case of
Clearly
and
Hence, for we have that
Note that does not depend on and (as long as ). For instance, we have the following rounded values:
3.1.2 Case of
We have now
where
where . This gives