Green's function based unparameterised multi-dimensional kernel density and likelihood ratio estimator

12/09/2011 ∙ by Peter Kovesarki, et al. ∙ University of Bonn

This paper introduces a probability density estimator based on Green's function identities. A density model is constructed under the sole assumption that the probability density is differentiable. The method is implemented as a binary likelihood estimator for classification purposes, so issues such as mis-modeling and overtraining are also discussed. The identity behind the density estimator can be interpreted as a real-valued, non-scalar kernel method which is able to reconstruct differentiable density functions.


1 Theory of density estimation

Generally, a density estimator is an identity operator for the probability density function. If it is a linear operator, then it is an approximation of the Dirac-delta function:

$$ \hat{P}(x) \;=\; \int K(x, y)\, P(y)\, \mathrm{d}y \;\approx\; \int \delta(x - y)\, P(y)\, \mathrm{d}y \;=\; P(x), $$

with the kernel $K$ playing the role of the approximate delta function.

Fixed kernel methods usually estimate the density with a bias, since the kernel does not converge to the delta function. The same can be seen in histograms with fixed bin sizes, and to make them unbiased a certain adaptivity is necessary. Different kernels usually preserve different features of the underlying density, such as differentiability or easy recognizability of peaks [1]. Non-adaptive methods include the estimation from the moments via Fourier integrals [2] or estimations that target the integral probability density. A problem with the methods using Fourier integrals is that they may require computationally expensive volume integrals, and the result is not necessarily a density function until a sufficient number of moments is known.
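As an illustration of the bias of a fixed kernel, the following minimal sketch compares a fixed-bandwidth Gaussian kernel density estimate with the true value at the peak of a standard normal distribution; the bandwidth and sample size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.standard_normal(10_000)        # sample from N(0, 1)
h = 0.5                                     # fixed kernel bandwidth (illustrative)

def fixed_kde(x, sample, h):
    """Fixed-bandwidth Gaussian kernel density estimate at x."""
    u = (x - sample) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2.0 * np.pi))

true_peak = 1.0 / np.sqrt(2.0 * np.pi)      # true N(0, 1) density at x = 0
print(f"true density at 0: {true_peak:.4f}")
print(f"fixed-kernel KDE : {fixed_kde(0.0, sample, h):.4f}")   # systematically low at the peak
```

The estimate at the peak converges to the density of the convolution of the true distribution with the kernel, not to the true value, which is the bias referred to above; shrinking or adapting the bandwidth trades this bias against fluctuations.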

To avoid the complexity of finding the global minimum of an error function, most of the overtraining problems, and the heavy volume integrals, we tried to define a successive adaptive kernel method to estimate the probability densities, using a kernel that is suppressed at large distances. The starting point is that linear functionals are preferred, as they can be successfully approximated with a sample of a distribution, because they act like expectation values of random variables:

$$ A_\theta[P] \;=\; \int K_\theta(x)\, P(x)\, \mathrm{d}x \;=\; \mathbb{E}\big[ K_\theta(X) \big] \;\approx\; \frac{1}{N}\sum_{k=1}^{N} K_\theta(x_k), $$

where $A_\theta$ is a linear operator, $\theta$ is a parameter vector and $P$ is a probability measure. To have such an estimator for the probability density itself, an identity operator is needed, together with a necessary post-processing that converts the linear operator's output to a scalar. Identity operators can be constructed using Green's functions of operators. A particularly interesting one is the Green's function of the Laplace operator in $D$ dimensions (although the Green's function is different for $D = 2$, the same procedure can be repeated and gives the same form of functions at the end, making the method valid in two dimensions as well).

$$ \Delta_x\, |x - y|^{\,2-D} \;=\; -(D-2)\, S_D\, \delta^{(D)}(x - y) \qquad (1) $$

Here $S_D$ is a constant, the surface of a unit sphere in $D$ dimensions. For twice differentiable functions $f$ that disappear at the boundaries, the identity operator can be constructed:

$$ f(x) \;=\; -\frac{1}{(D-2)\, S_D} \int \frac{\Delta f(y)}{|x - y|^{D-2}}\, \mathrm{d}^D y . $$

This cannot be used directly for density estimation, since it is linear in $\Delta P$ instead of $P$, but an identity can be constructed for the vector field of the first derivative:

The amplitude of the first derivative is still a scalar function; hence for those probability measures $P$ for which a $\Phi$ exists (see the subsection on the existence of such a $\Phi$) with $|\nabla\Phi| = P$, there must be at least one vector field $n$, consisting of unit vectors, for which the following identity holds:

(2)
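For orientation, the following LaTeX sketch shows one way such an identity can be written, using the standard second derivative of the Laplace Green's function; the symbols $n$, $P$ and the exact normalisation are assumptions and may differ from the form of eq. (2) in the paper.

```latex
% Identity for the gradient field \nabla\Phi = n P, split into the
% principal-value (dipole) part and the local trace term of \partial_i\partial_j G:
n_i(x)\,P(x)
  \;=\; \mathrm{P.V.}\!\int
        \frac{\delta_{ij} - D\,\hat r_i \hat r_j}{S_D\, r^{D}}
        \; n_j(y)\,P(y)\; \mathrm{d}^D y
  \;+\; \frac{1}{D}\, n_i(x)\, P(x),
  \qquad r = |x - y|,\ \ \hat r = \frac{x - y}{r}.
```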

Since the integral operator was derived from the Laplace operator, in three dimensions it is similar to the dipole interaction operator in electromagnetism, hence it might be called a dipole field. What is important is that this field can be found without knowledge of the true $P$; only the possibility of calculating the expectation values on the left-hand side of eq. (2) is required. In this case it is possible to calculate the left-hand side for any configuration of $n$, until the expectation value has the same direction as $n$ at every point:

(3)

A way to show this is to do an integration by parts first:

where the integral on the boundaries is zero, since the derivatives of the probability measures are also suppressed at infinity. It is possible to do an external differentiation now, calculating the divergence and using eq. (1):

For an arbitrary $n$, the two fields can only differ by a divergence-free vector field. In order to show that they are equal, a certain measure is needed for the difference. With the square of the difference it becomes:

(4)

Expanding the square, it will have three terms:

The field energy can be expressed in terms of the dipole energy. Explicitly writing it out:

Performing integration by parts on the term, appears. Due to eq. (1) this can be substituted with a delta function. The remaining part has the derivative of the delta function, which can be easily evaluated:

As a result the field energy equals the dipole energy

so eq. (4) can be modified accordingly:

The minimum is not necessarily zero, but it can be reached by finding the energy minimum of the dipole system.

1.1 Existence of the optimum

It was assumed for eq. (2) that an identity operator exists for a given probability density with a special configuration:

In other words, such a $\Phi$ has to exist whose gradient has the specified absolute value $P$. This only exists if $nP$ can be made curl free, that is, all possible loop integrals must result in zero:

As a $D$-dimensional cube has $2D$ faces, of which $D$ share the same corner, $D(D-1)/2$ linearly independent infinitesimal loops can be imagined. So the curl in $D$ dimensions may have that many independent components, which have to be zeroed out by a vector field with only $D-1$ independent variables. That seems to be non-trivial, but for differentiable $P$ it turns out it can be done easily. The curl in question is:

(5)

where $e_{(i)}$ denotes the unit basis vector in the $i$-th direction, and the bracketed indices are not summed over. In a suitably chosen basis, the curl is simplified:

As only certain derivatives appear, one possibility to annul the curl is to set every subcomponent to zero:

In this differential equation only the relative change of $P$ appears, so setting $|n| = 1$ at every point is still possible. As a consequence, the vector field $n$ exists for any differentiable $P$, making $nP$ curl free, and a function $\Phi$ exists with the property $|\nabla\Phi| = P$, making eq. (2) an identity.
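To spell out the statement that only the relative change of $P$ enters, the following LaTeX sketch restates the curl-free requirement for $nP$ in coordinates (a generic restatement, not a reproduction of eq. (5)):

```latex
% Requirement that nP be a gradient field: every curl component vanishes.
\partial_i (n_j P) - \partial_j (n_i P) = 0
\quad\Longleftrightarrow\quad
\partial_i n_j - \partial_j n_i
  \;=\; n_i\,\partial_j \ln P - n_j\,\partial_i \ln P ,
\qquad i < j .
```

Only $\ln P$ appears, so a rescaling of $P$ does not change the equations, and the unit-length normalisation $|n| = 1$ can indeed be imposed at every point.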

2 Density estimation from a finite sample

As long as a suitable field $n$ is found that satisfies eq. (3), it can be used to estimate any differentiable probability density function from a sample $\{x_k\}$:

(6)

However, since the Green's function has divergences, a finite sample may produce unnecessarily large fluctuations. Fortunately, volumes around most of the divergences can be neglected, as their integral is zero. A homogeneous, homotropic sphere surface around a point gives no contribution:

As a result, a small sphere around the point of interest can be excluded from the summation in eq. (2), since the error vanishes with the radius of the sphere for any differentiable $P$ and $n$. The only exception, where the error does not vanish, is for $\delta$-like contributions, so these points have to be treated independently.

In order to have control over the fluctuations, one has to choose the radius of the exclusion volume in such a way that a thick shell around this sphere contains enough sample points to estimate the integral belonging to that area. In this case the points in this layer give the dominant contribution to the sum, since points at larger distances have decreasing weights. To keep the contributions within the shell of the same order of magnitude, the outer radius of the shell has to be of the same order as the exclusion radius; this follows from the behaviour of the kernel, which becomes nearly flat in this range. From this it follows that if one needs a certain number of points in this stable layer, approximately that many points have to be discriminated, which eventually determines the exclusion radius. In practice one may choose a different number of points for the iterative search of the parameters and for the evaluation of the density estimation.
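To make the procedure concrete, the following is a minimal numerical sketch of such a finite-sample evaluation with an exclusion sphere. Since eq. (6) is not reproduced above, the kernel used here is the standard dipole-type second derivative of the Laplace Green's function, $(\delta_{ij} - D\,\hat r_i \hat r_j)/(S_D r^D)$, and the overall normalisation and the treatment of the excluded self term are assumptions; the names `n_x`, `n_sample` and `r_excl` are illustrative.

```python
import numpy as np
from math import gamma, pi

def unit_sphere_surface(D):
    """Surface S_D of the unit sphere in D dimensions."""
    return 2.0 * pi ** (D / 2.0) / gamma(D / 2.0)

def density_estimate(x, n_x, sample, n_sample, r_excl):
    """Sketch of a Green's-function (dipole-kernel) density estimate at x.

    x        : point of interest, shape (D,)
    n_x      : unit vector of the trial field at x, shape (D,)
    sample   : sample points, shape (N, D)
    n_sample : unit vectors attached to the sample points, shape (N, D)
    r_excl   : exclusion radius around x (see Section 2)
    """
    N, D = sample.shape
    d = x - sample                          # displacements, shape (N, D)
    r = np.linalg.norm(d, axis=1)
    keep = r > r_excl                       # exclude a small sphere around x
    d, r, nk = d[keep], r[keep], n_sample[keep]
    rhat = d / r[:, None]

    # Dipole-like kernel, projected onto n(x) and the sample-point directions n_k:
    # n(x).n_k - D (rhat.n(x)) (rhat.n_k), divided by S_D r^D.
    proj = (nk @ n_x) - D * (rhat @ n_x) * np.sum(rhat * nk, axis=1)
    contrib = proj / (unit_sphere_surface(D) * r ** D)

    # The sample average approximates the expectation value; normalisation assumed.
    return float(np.sum(contrib)) / N
```

In practice `r_excl` would be chosen so that a fixed number of nearest sample points falls inside the excluded sphere, as described above.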

2.1 Finding the parameters

With a suitable exclusion radius, the set of unit vectors attached to the sample points can be found with an iterative search and then used for density estimation. The stopping criterion can be formulated by discretising eq. (3):

(7)

As this requirement is similar to looking for the ground state of the dipole system, a way to find it is gradient descent on the energy function. With this, the direction of the update is:

(8)

which is a rotation of each unit vector with the field, in the directions perpendicular to it. A gradient descent method typically contains a line search part, but it is not necessary here. It is enough to set a limit on the angular change of the vectors; an order of magnitude smaller than the maximal possible change seems to be a good choice, which ensures that the field does not change significantly during an iteration, keeping the update a valid gradient step. A further speed increase can be achieved, while taking care not to overshoot the angle between each vector and its field, since this is what minimises the energy. There might be several local energy minima, but the identity in eq. (2) is only true if $nP$ is curl free. Following from the inequality in eq. (4), the non-global minima do not satisfy the identity and are not curl free. These solutions carry a bias in the density estimate, which can be relatively large in low-density regions. This is because the non-optimal minima have a remnant polarisation; for example, the destructive interference does not fully cancel in zero-density regions.
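A minimal sketch of this iterative parameter search: every unit vector is rotated towards the field generated by the other sample points, with the angular step capped. The kernel is the same assumed dipole-type form as in the previous sketch, and the cap value and the fixed number of sweeps are illustrative choices; a full implementation would instead stop when the alignment condition of eq. (7) is met.

```python
import numpy as np

def dipole_field_at(i, sample, n_sample, r_excl):
    """Field at sample point i generated by the other points (assumed kernel).

    The overall normalisation is irrelevant here, since only the direction is used.
    """
    D = sample.shape[1]
    d = sample[i] - sample
    r = np.linalg.norm(d, axis=1)
    keep = r > r_excl                       # exclusion sphere, also drops point i
    d, r, nk = d[keep], r[keep], n_sample[keep]
    rhat = d / r[:, None]
    rad = np.sum(rhat * nk, axis=1)         # rhat . n_k
    field = (nk - D * rad[:, None] * rhat) / r[:, None] ** D
    return field.sum(axis=0)

def fit_directions(sample, r_excl, max_step=0.05, n_sweeps=200):
    """Iteratively rotate the unit vectors towards their field (sketch, O(N^2) per sweep)."""
    N, D = sample.shape
    rng = np.random.default_rng(1)
    n = rng.standard_normal((N, D))
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # random initial unit vectors

    for _ in range(n_sweeps):
        for i in range(N):
            f = dipole_field_at(i, sample, n, r_excl)
            f_norm = np.linalg.norm(f)
            if f_norm == 0.0:
                continue
            # Component of the field perpendicular to n_i: the rotation direction.
            perp = f - np.dot(f, n[i]) * n[i]
            p_norm = np.linalg.norm(perp)
            if p_norm == 0.0:
                continue                    # already aligned with the field
            theta = np.arccos(np.clip(np.dot(f, n[i]) / f_norm, -1.0, 1.0))
            step = min(max_step, theta)     # cap the angular change per update
            n[i] = np.cos(step) * n[i] + np.sin(step) * perp / p_norm
    return n
```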

As can be seen in fig. 1, the algorithm actually performs as a density estimator. A sample of two thousand points from a two-dimensional Gaussian distribution was fed into the parameter search, and the density estimate was evaluated in the described manner at the positions of the sample points. The figure shows the average of the estimated densities at a given radius, where the theoretical value is constant because of the rotational symmetry of the chosen Gaussian distribution. The spread was calculated from the variance of the estimates at the same radius. The uncertainty can be directly compared with the simplest, kernel-less nearest $k$-neighbour method [1] in fig. 2. It can be seen that the uncertainty of the discussed density estimator in fig. 1 is approximately half of that of the nearest $k$-neighbour method with a flat kernel in fig. 2. The resolutions were chosen to be the same for the two figures, given that the exclusion radii, defined in Section 2, were the same as the radii used for the volume calculation in the nearest $k$-neighbour method.
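For reference, a minimal sketch of the kernel-less nearest $k$-neighbour estimate used for the comparison, in its standard flat-kernel form; the value of $k$ and the test point are illustrative choices.

```python
import numpy as np
from math import gamma, pi

def knn_density(x, sample, k):
    """Flat-kernel nearest k-neighbour density estimate: k / (N * volume(r_k))."""
    N, D = sample.shape
    r = np.sort(np.linalg.norm(sample - x, axis=1))
    r_k = r[k - 1]                               # distance to the k-th nearest neighbour
    volume = pi ** (D / 2.0) * r_k ** D / gamma(D / 2.0 + 1.0)
    return k / (N * volume)

# Example: 2000 points from a 2-D standard Gaussian, as in the text.
rng = np.random.default_rng(2)
pts = rng.standard_normal((2000, 2))
print(knn_density(np.zeros(2), pts, k=50))       # compare with 1/(2*pi) ≈ 0.159
```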

Figure 1: Radial density estimate profile of a Gaussian distribution placed at the origin. The crosses show the spread of the Green’s function density estimate compared to the true value shown as the dashed red curve.
Figure 2: Profile plot for the density estimate made for the same Gaussian sample as in fig. 1, using the nearest $k$-neighbour method with a flat kernel.

3 Classification and overtraining

Although binary classification can be done with binary regression models, their results typically depend only on the ratio of the signal and background densities, which is optimal due to the Neyman–Pearson lemma [3]. This is also true for an optimally trained neural network with a quadratic loss function, where the response is usually the following [1][4]:

$$ y(x) \;=\; \frac{P_S(x)}{P_S(x) + P_B(x)} . $$

In case the network is not optimally trained, the error of the response can be related to the uncertainty of this ratio. The same ratio can be calculated from a density estimator, giving a likelihood estimate. The uncertainty in the latter case is much better under control, since the calculations of the signal and background densities are made independently. For the method described in this article, the density estimate at a single point comes from the superposition of the kernels of the most dominant sample points; hence, by construction, the likelihood estimate cannot come from a single point, as happens during overtraining in regressions.
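A minimal sketch of how the likelihood response is formed from two independently estimated densities; `estimate_density` stands for any density estimator (for example the Green's-function estimator sketched in Section 2) and is an assumed interface, not an API from the paper.

```python
def likelihood_response(x, signal_sample, background_sample, estimate_density):
    """Binary response y = P_S / (P_S + P_B) from two independent density estimates."""
    p_s = estimate_density(x, signal_sample)      # signal density at x
    p_b = estimate_density(x, background_sample)  # background density at x
    if p_s + p_b <= 0.0:
        return 0.5        # no information at x; return a neutral value
    return p_s / (p_s + p_b)
```

Thresholding this response is equivalent to thresholding the likelihood ratio, which is the optimal decision rule by the Neyman–Pearson lemma [3] cited above.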

Figure 3 and fig. 4 show the response distribution for a classifier trained on a signal consisting of twelve non-overlapping Gaussians and a flat background [5]. Unlike in many regression models, here there is no need for an independent test sample for an early stopping criterion. Figure 3 is evaluated on the full training sample, while fig. 4 was made on an independent sample. The shapes of the two responses are essentially the same; the slight difference is in the regions where either the signal or the background density is very low, since these areas have the largest uncertainty in density estimators.

Figure 3: Distribution of the response function on the training sample.
Figure 4: Distribution of the response on an independent test sample.

4 Conclusions

The article presents a novel class of density estimators based on Green's function identities. The investigated one is derived from the Green's function of the Laplace operator, which provides meaningful results for once-differentiable probability density functions. It is possible to construct density estimators from other linear operators, but they are expected to require either higher-order differentiability or other functional constraints on the investigated probability density. As was shown, such a density estimator can be used for binary classification. In such a case, due to the handling of the fluctuations, overtraining is not as prominent as it is in binary regression models. Therefore there is less need for independent test samples.

Acknowledgements

We would like to thank Eckhard von Törne for his encouragement, for pointing out the important topics that needed to be addressed, and for the sample he prepared and gave us for testing the classification power of the algorithm.

References
