 # Uniform Transformation of Non-Separable Probability Distributions

A theoretical framework is developed to describe the transformation that distributes probability density functions uniformly over space. In one dimension, the cumulative distribution can be used, but it does not generalize to higher dimensions or to non-separable distributions. A potential function is shown to link probability density functions to their transformation, and to generalize the cumulative distribution. A numerical method is developed to compute the potential, and examples are shown in two dimensions.


## 1 Introduction

In a variety of problems, a probability distribution is known, and it is necessary or desirable to compute an explicit mapping between the domain of that distribution and a secondary domain in which the probability density is distributed uniformly. A simple example involves drawing samples from a non-analytical, data-specified distribution. With such a mapping, samples can be generated uniformly at random and transformed to produce the data distribution. In image processing, such mappings are used to normalize image brightness (often as a pre-processing stage) to contend with varying illumination conditions. In the field of neural coding, such mappings can be used to compute optimal neural populations for encoding sensory features. In the field of optimal coding, such mappings can be used to specify a theoretically optimal companding function. In the context of machine learning, such mappings embody the probabilistic manifold on which the data lie. This work was motivated by manifold learning and its application to user interface design: an observer may wish to manipulate one or more continuous sliders to navigate a manifold, and such mappings are needed to do so.

In one dimension, the transform has a simple and intuitive solution: the cumulative distribution is computed from the probability density, and used to transform the data. In higher dimensions, however, and for non-separable distributions, a solution appears to be missing. The multidimensional cumulative distribution falls short for non-separable distributions, as each dimension is treated separably.

The need for a more general understanding of the problem is most clear within the neural coding literature, which has extensively used the one dimensional cumulative distribution [3, 4, 5, 6, 7, 8, 9, 10]. As a recent example, the cumulative distribution can be used to derive optimal neural populations for one dimensional stimuli, thus building a theoretical framework with which to test the efficient coding hypothesis. Perceptual stimuli are however multidimensional. Prior work has considered multidimensional stimuli by assuming that distributions are separable, and using the multidimensional cumulative. Separability is however a strong assumption, and the neural coding literature has noted that a more general solution is needed [3, 7].

In this paper the problem is treated in its general form. We consider how multidimensional, non-separable distributions can be distributed uniformly. A theoretical framework is developed which shows that a potential function embodies the optimal transformation, and is a generalization of the cumulative. A numerical method is then developed to compute and visualize the potential and transformation.

For exposition, Section 2 begins with the one dimensional case, and connects the cumulative distribution with a more general theoretical framework that describes the optimal encoding. The one dimensional theory is then extended to multiple dimensions, and without assuming separability, Section 3. A potential function is shown to embody the optimal transformation. Section 4 gives intuition for the shape of the potential, and uses that potential to explain the limitations of the multidimensional cumulative transformation. A numerical method is then developed to compute the potential in two dimensions, Section 5. Lastly, results are presented for a variety of separable and non-separable distributions, Section 6, and we conclude in Section 7.

## 2 One Dimensional Distributions

For exposition, it is useful to begin in one dimension, where a simple solution is known to exist. A function is needed that transforms the domain x of a non-uniform probability density p(x) to a new domain x̂, where the density u(x̂) is uniform. Specifically, the transformation must satisfy

$$\int p(x)\,dx \;=\; \int u(\hat{x})\,d\hat{x}\,. \qquad (1)$$

Let f(x) specify how points should be displaced, with x̂ = x + f(x). Equation (1) becomes

$$\int p(x)\,dx \;=\; \int u(\hat{x})\,\frac{d\hat{x}}{dx}\,dx \qquad (2)$$
$$\;=\; \int u(x)\,\bigl(1 + f'(x)\bigr)\,dx\,, \qquad (3)$$

where u(x̂) may be replaced by u(x) to constrain the support of u and p to the same region. Differentiating both sides gives

$$\frac{p}{u} \;=\; 1 + f'\,. \qquad (4)$$

The derivative of the displacement, f′, therefore describes a normalized change in density. The displacement can be recovered by integrating p/u − 1, which reveals why the cumulative distribution of p is effective. The displacement is found by solving a first-order differential equation over normalized probability densities. Computing the cumulative distribution of p does the same, albeit without a normalization to align the support of u and p.
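As a concrete sketch of the above, the displacement can be computed numerically by integrating p/u − 1. The grid and Gaussian density below are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Illustrative 1-D example: recover the displacement f by integrating
# p/u - 1, which reduces to the (scaled) cumulative distribution of p.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
p = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)
p /= p.sum() * dx            # normalize to a density on [0, 1]
u = 1.0                      # uniform density over the same support
P = np.cumsum(p) * dx        # cumulative distribution of p
f = P / u - x                # displacement: the integral of p/u - 1
x_hat = x + f                # transformed coordinates
```

Because the supports already coincide here (u = 1 on [0, 1]), the transformed coordinate x̂ reduces exactly to the cumulative P, mirroring the discussion above.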

## 3 Multidimensional Distributions

In multiple dimensions the cumulative does not generalize because the multidimensional cumulative treats each dimension separably. The theoretical framework described in Section (2) can however be extended to multiple dimensions without assuming separability. Following the form of Equation (1),

$$\int p(\mathbf{x})\,d\mathbf{x} \;=\; \int u(\hat{\mathbf{x}})\,d\hat{\mathbf{x}} \qquad (5)$$
$$\;=\; \int u(\mathbf{x})\,\lvert J(\hat{\mathbf{x}})\rvert\,d\mathbf{x}\,, \qquad (6)$$

where dx is the volume element, with f(x) now specifying a field over x̂ = x + f(x), and J is the Jacobian. The above follows from the change of variables theorem. The determinant |J| specifies the volume of space after it is warped by the field f. Taking the derivative gives

$$\frac{p}{u} \;=\; \lvert J(\mathbf{x} + \mathbf{f}(\mathbf{x}))\rvert \;=\; \lvert I + J(\mathbf{f})\rvert\,, \qquad (7)$$

which is the multidimensional analog of Equation (4). This multidimensional expression however leaves the field f under-constrained by the scalar change in density, p/u.

The field, f, can be further constrained by noting that it describes only the change in density at each point in space. The change in density under a field is embodied by the field's divergence, rather than its curl. The curl component may therefore be taken to be zero; such curl-free fields are conservative, and can be described as the gradient of a scalar potential function.

Let g be a scalar potential function that embodies the conservative displacement field, f = ∇g. Equation (7) becomes

$$\frac{p}{u} \;=\; \lvert I + J(\nabla g)\rvert \;=\; \lvert I + H(g)\rvert\,, \qquad (8)$$

where H denotes the Hessian. The transform is therefore x̂ = x + ∇g. This is the multidimensional generalization of the one dimensional case, Equation (4). In one dimension, H(g) = g″ = f′.
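As a quick numerical check of Equation (8), consider an assumed quadratic potential g(x) = c(x² + y²)/2, for which H(g) = cI and the determinant |I + H(g)| should equal (1 + c)² at every point in two dimensions; the grid and the constant c below are arbitrary choices, not values from the paper.

```python
import numpy as np

# Check |I + H(g)| for an assumed quadratic potential g = c*(x^2 + y^2)/2,
# whose Hessian is c*I, so the determinant should be (1 + c)^2.
c = 0.25
xs = np.linspace(-1.0, 1.0, 64)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
g = 0.5 * c * (X ** 2 + Y ** 2)

gx = np.gradient(g, h, axis=0)               # first derivatives
gy = np.gradient(g, h, axis=1)
gxx = np.gradient(gx, h, axis=0)             # second derivatives
gyy = np.gradient(gy, h, axis=1)
gxy = np.gradient(gx, h, axis=1)
det = (1.0 + gxx) * (1.0 + gyy) - gxy ** 2   # |I + H(g)| pointwise
```

Away from the lattice boundary (where one-sided differences lose accuracy), the numerical determinant matches the analytic value (1 + c)².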

## 4 The Manifold Potential

The potential function g links the density of a distribution p to the displacement field ∇g that transforms p to be uniform. Equivalently, p can be viewed as a manifold that is non-uniformly distributed through space, and x + ∇g embodies the points that lie on the manifold. Because that latter geometric interpretation leads to the theoretical framework in Section (3), we refer to g as the manifold potential of a distribution.

### 4.1 Physical Interpretation

Interpreted physically, the Hessian of the manifold potential describes its local curvature, and it is that curvature which links the probability density p to the displacement field ∇g, Equation (8). That local curvature describes how the volume of space is changed by the mapping x̂ = x + ∇g. Specifically, the Jacobian determinant of the mapping, |I + H(g)|, describes how the curvature of g affects the volume of space.

When the manifold potential has no curvature, the Hessian is the zero matrix, and p/u = 1, indicating that p is the uniform distribution. As such, the displacement, ∇g, should be zero or constant. Because H(g) = 0, ∇g is zero or constant, as required.

Where the manifold potential has positive curvature, p expands to become less dense. For intuition, consider the case H(g) = I. At such points, p/u = 2^d in d dimensions, indicating that p is a factor of 2^d more dense than the uniform distribution, and must therefore be rarified. The displacement should expand p to a larger area having twice the size in each dimension, and thus the same density as the uniform distribution. Because the Jacobian determinant of the mapping, |I + H(g)| = 2^d, is greater than one, space is expanded as required.

Where the manifold potential has negative curvature, p contracts to become more dense. For intuition, consider the case H(g) = cI for −1 < c < 0. At such points, p/u = (1 + c)^d < 1, indicating that p is less dense than the uniform distribution, and therefore must be compressed. The displacement must therefore map density from elsewhere within the domain of p to increase the density. Because the Jacobian determinant of the mapping, (1 + c)^d, is positive and less than one, space is contracted as required.

Lastly, the manifold potential obeys global physical constraints. The ratio p/u, Equation (8), integrates to A, the area of the support of u (the support of u is defined to encompass p; thus ∫ p/u dx = A because ∫ p dx = 1 and u = 1/A is constant), which constrains the manifold potential. The one dimensional case gives intuition, where the right hand side of Equation (8) is 1 + g″. We require that ∫ (1 + g″) dx = A, the area of the support region. Because ∫ dx is that area, ∫ g″ dx = 0. The curvature of the one dimensional potential must integrate to zero because it embodies the change in area, and that area must be preserved. That intuition carries into multiple dimensions, where g″ is replaced by the Hessian H(g), and the change in volume is given by the determinant |I + H(g)|. That change depends upon interactions between the dimensions, and cannot be easily separated as in the one dimensional case. Nonetheless, the curvature of g, as embodied by the Hessian, is globally constrained to preserve the volume of the support region.
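The global constraint can be checked numerically in one dimension; the sinusoidal density and support below are assumed purely for illustration.

```python
import numpy as np

# With u = 1/A constant on a support of area A, the ratio p/u integrates
# to A, so the curvature g'' = p/u - 1 integrates to zero.
A = 2.0
n = 2000
dx = A / n
x = (np.arange(n) + 0.5) * dx                  # midpoint lattice on [0, A]
p = 1.0 + 0.3 * np.sin(2.0 * np.pi * x / A)    # an arbitrary positive profile
p /= p.sum() * dx                              # normalize to a density
u = 1.0 / A
ratio_integral = np.sum(p / u) * dx            # integral of p/u: equals A
curvature_integral = np.sum(p / u - 1.0) * dx  # integral of g'': equals zero
```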

### 4.2 Relationship to the Multidimensional Cumulative

In a single dimension, the cumulative distribution can be used to transform between a probability distribution p and the uniform distribution u. That approach can be generalized into multiple dimensions under the assumption that p is separable. The multidimensional cumulative distribution then provides a transformation. The manifold potential can be used to prove the correctness of that special case, and gives insight into its limitations.

Claim. For separable distributions which have the form

$$p(\mathbf{x}) \;=\; \prod_i p_i(x_i)\,, \qquad (9)$$

where p_i is a one dimensional distribution over dimension i in x, the probability mass can be transformed to be uniform by

$$\hat{x}_i \;=\; a_i P_i(x_i) + b_i\,, \qquad (10)$$

where P_i is the cumulative distribution of p_i, and the values a_i and b_i are an unknown affine transform that aligns the support of the uniform distribution to that of p.

###### Proof.

From Section (3), the transform must satisfy

$$\frac{p}{u} \;=\; \lvert I + H(g)\rvert \qquad (11)$$
$$\hat{\mathbf{x}} \;=\; \mathbf{x} + \nabla g\,. \qquad (12)$$

Substituting for p gives

$$\frac{1}{u} \prod_i p_i(x_i) \;=\; \lvert I + H(g)\rvert\,, \qquad (13)$$

and Equation (12) can be used to compute the elements of x̂:

$$\hat{x}_i \;=\; x_i + \frac{\partial g}{\partial x_i} \overset{?}{=} a_i P_i(x_i) + b_i\,, \qquad (14)$$

which gives

$$\frac{\partial g}{\partial x_i} = a_i P_i(x_i) + b_i - x_i\,, \qquad \frac{\partial^2 g}{\partial x_i^2} = a_i\, p_i(x_i) - 1\,, \qquad \frac{\partial^2 g}{\partial x_i\, \partial x_j} = 0\,. \qquad (15)$$

The diagonal elements of I + H(g), which have the form 1 + ∂²g/∂x_i², are therefore a_i p_i(x_i). The off-diagonal elements, which have the form ∂²g/∂x_i∂x_j, are zero. The determinant in Equation (13) therefore simplifies,

$$\frac{1}{u} \prod_i p_i(x_i) \;=\; \prod_i a_i\, p_i(x_i)\,. \qquad (16)$$

The factors a_i collapse to a single scalar multiplier, which may be defined ∏_i a_i = 1/u. The transform specified by the multidimensional cumulative distribution is therefore correct for separable distributions up to an affine transform of the domain. ∎

The above proof shows that the multidimensional cumulative can be used for separable distributions because the normalized change in density, p/u, does not involve interactions between the dimensions, as described by the determinant. Those interactions are described by the off-diagonal terms in the Hessian of the manifold potential, which the multidimensional cumulative correctly assumes to be zero for separable distributions.
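The claim can be illustrated numerically for an assumed separable standard Normal distribution: applying the one dimensional Gaussian cumulative to each coordinate (with a_i = 1 and b_i = 0) should distribute samples uniformly over the unit square.

```python
import numpy as np
from math import erf, sqrt

# Separable case: x_hat_i = P_i(x_i), using the Gaussian cumulative per axis.
def gauss_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 2))    # p(x) = p1(x1) * p2(x2)
x_hat = np.vectorize(gauss_cdf)(samples)   # per-dimension cumulative
```

The transformed samples have mean near 1/2 and variance near 1/12 along each axis, as expected of the uniform distribution on [0, 1]².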

## 5 A Numerical Method in Two Dimensions

A method is needed to solve a system of equations having the form p/u = |I + H(g)|. Here the two dimensional case is considered as the simplest, non-trivial instance of the problem. In two dimensions, expanding the determinant gives

$$\frac{p}{u} \;=\; (1 + g_{xx})(1 + g_{yy}) - g_{xy}^2\,, \qquad (17)$$

where subscripts denote partial derivatives of g.

The potential can be computed over a discrete lattice. Rewriting Equation (17) in vector form gives

$$\boldsymbol{\rho} \;=\; C_{xx}\mathbf{g} + C_{yy}\mathbf{g} + (C_{xx}\mathbf{g}) \circ (C_{yy}\mathbf{g}) - (C_{xy}\mathbf{g}) \circ (C_{xy}\mathbf{g})\,, \qquad (18)$$

where ρ = p/u − 1 is the normalized change in density. The vector g comprises samples of g over a 2-D lattice. The matrices C compute the partial derivatives as discrete convolutions, and ∘ denotes the Hadamard product. The function h(g) is used as shorthand for the right hand side of the above expression.
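Equation (18) can be sketched with simple finite difference stencils standing in for the convolution matrices C; the 3-point stencils, unit grid spacing, and replicated boundary values below are assumptions for illustration, not the optimal filters used later.

```python
import numpy as np

# Discrete h(g) of Eq. (18): rho = g_xx + g_yy + g_xx*g_yy - g_xy^2,
# with second derivatives from 3-point stencils and replicated boundaries.
def d2(g, axis):
    gp = np.pad(g, 1, mode="edge")
    lo = [slice(1, -1), slice(1, -1)]
    hi = [slice(1, -1), slice(1, -1)]
    lo[axis], hi[axis] = slice(0, -2), slice(2, None)
    return gp[tuple(lo)] - 2.0 * g + gp[tuple(hi)]

def dxy(g):
    gp = np.pad(g, 1, mode="edge")
    return 0.25 * (gp[2:, 2:] - gp[2:, :-2] - gp[:-2, 2:] + gp[:-2, :-2])

def h_of_g(g):
    gxx, gyy, gxy = d2(g, 0), d2(g, 1), dxy(g)
    return gxx + gyy + gxx * gyy - gxy * gxy

# Usage: a quadratic potential with Hessian c*I should give rho = 2c + c^2
c = 0.1
ii, jj = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
rho = h_of_g(0.5 * c * (ii ** 2 + jj ** 2))
```

In the interior of the lattice rho equals (1 + c)² − 1 = 2c + c², consistent with the determinant expansion of Equation (17).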

A gradient based method can be used to solve Equation (18) by minimizing an error function,

$$E(\mathbf{g}) \;=\; \lVert \mathbf{r}(\mathbf{g}) \rVert^2\,, \qquad (19)$$

where r is the residual, r(g) = h(g) − ρ. The gradient is

$$\nabla E(\mathbf{g}) \;=\; 2\, J[\mathbf{h}(\mathbf{g})]^{T}\, \mathbf{r}(\mathbf{g})\,, \qquad (20)$$

where J[h(g)] is the Jacobian. By inspection,

$$J[\mathbf{h}(\mathbf{g})] \;=\; C_{xx} + C_{yy} + \mathrm{diag}\{C_{yy}\mathbf{g}\}\,C_{xx} + \mathrm{diag}\{C_{xx}\mathbf{g}\}\,C_{yy} - 2\,\mathrm{diag}\{C_{xy}\mathbf{g}\}\,C_{xy}\,, \qquad (21)$$

where the diag{·} operator converts vectors to diagonal matrices. Substituting into Equation (20),

$$\nabla E(\mathbf{g}) \;=\; 2\bigl( C_{yy}^{T}\,\mathrm{diag}\{C_{xx}\mathbf{g}\}\,\mathbf{r} + C_{xx}^{T}\,\mathrm{diag}\{C_{yy}\mathbf{g}\}\,\mathbf{r} - 2\,C_{xy}^{T}\,\mathrm{diag}\{C_{xy}\mathbf{g}\}\,\mathbf{r} + C_{xx}^{T}\mathbf{r} + C_{yy}^{T}\mathbf{r} \bigr) \qquad (22)$$
$$\;=\; 2\bigl( C_{yy}(C_{xx}\mathbf{g} \circ \mathbf{r}) + C_{xx}(C_{yy}\mathbf{g} \circ \mathbf{r}) - 2\,C_{xy}(C_{xy}\mathbf{g} \circ \mathbf{r}) + C_{xx}\mathbf{r} + C_{yy}\mathbf{r} \bigr)\,, \qquad (23)$$

where we have used the fact that the convolution matrices are symmetric. The error gradient can therefore be computed efficiently by separable convolutions.

### 5.1 Boundary Constraints

The potential must obey certain boundary constraints, or probability mass may be mapped outside of its finite domain (mass must be conserved). Specifically, components of ∇g that are orthogonal to the boundary must be zero at the boundary. The potential therefore becomes flat as one approaches the boundary, but may be sloped as one travels along the boundary. That constraint can be enforced implicitly by requiring that the boundary values of g be repeated when computing convolutions.

Lastly, a constraint can be added to specify the scalar offset of the potential function. That offset is immaterial, since only the gradient ∇g is needed, but can be specified by fixing one boundary value to zero. This detail has not proven necessary in practice, and is omitted for simplicity.

### 5.2 Nonlinear Optimization

Equation (19) is solved by a non-linear conjugate gradient method (Polak-Ribiere method, line search by quadratic and cubic estimation, and Wolfe-Powell stopping criteria), with the partials given by Equation (23). Computations are made efficient by separable filters. To ensure convergence, the optimization is wrapped in a multigrid scheme in which a low resolution version of the potential is first estimated. The potential is initialized to zero at the first level.

The above optimization is sufficient when, at the boundary, the probability density is equivalent between the data and uniform distributions (p = u at the boundary). The normalized change in density, ρ, is therefore zero on the boundary and beyond.

For many distributions the density falls to zero at the boundary, while the uniform density does not. The change in density, ρ, is therefore nonzero on the boundary (if p falls to zero, ρ falls to −1). Outside of the domain, the field is necessarily zero. The potential function is therefore discontinuous at the boundary, and ringing-like artifacts will appear in the numerical solution. The discontinuity in the potential is therefore removed during optimization by windowing. Specifically, ρ is padded and circularly windowed to have a sigmoidal transition to zero. After optimization, the gradient field is cropped accordingly.

## 6 Results

To test the numerical method, the multigrid scheme was configured with each level being a factor of two larger than the preceding, and four levels were used. First derivatives were computed using optimal discrete derivative filters, and second derivatives were computed by a second pass (two passes afford computation of the mixed derivative). The non-linear minimization was configured for a fixed number of line searches at each pyramid level, giving fixed runtimes (the implementation is in Matlab running on a modern Macbook Pro). The optimization settings were not tuned.

A variety of probability distributions were tested. To determine how successful the optimization was for each case, the solution field was used to warp a uniform lattice into an estimate of the ground truth probability distribution. Specifically, the inverse of the solution field was used to warp a uniform lattice. The warped area of each cell was then used to compute the density of the space after warping. Under the correct field, the computed densities should match the ground truth distribution (note that the densities are translated by the field, and must be regridded before comparing to ground truth).

Shown in Figure 1(a) is a separable, two dimensional Normal distribution. The potential, g, and the inverse of its gradient field (b) specify how uniform space is warped to produce the distribution. The field is visualized (blue vectors) below the potential, and within the valid (non-windowed) region of its domain. Points on a uniform lattice are warped inward toward the center of the distribution. The magnitude of the field varies intuitively according to the shape of the distribution (the asymmetry of the distribution has a subtle effect on the gradient field: the gradient field of a circular Normal distribution forms an ideal radial pattern, whereas an asymmetric Normal distribution produces a field that is relatively oblique to the ideal radial orientation). Shown in Figure 1(c) is the distribution, as reconstructed by warping the uniform lattice. Contour intervals for both distributions agree closely (d), with solid lines representing the ground truth distribution. The magnitude of the inverse field is scaled for visualization.

The estimated distribution can be quantitatively compared to ground truth by computing the Bhattacharyya coefficient, which attains a maximum of one when the estimated distribution is identical to ground truth. An intuitive measure of error is therefore the deviation of the coefficient from one. For the separable Normal distribution, Figure 1, the error is .
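The error measure can be sketched as follows; the toy densities below are assumed purely for illustration.

```python
import numpy as np

# Bhattacharyya coefficient between two discrete densities. It attains
# its maximum of one only when the densities are identical, so the
# deviation (1 - BC) serves as an error measure.
def bhattacharyya(p, q):
    return float(np.sum(np.sqrt(p * q)))

p = np.full(100, 1.0 / 100.0)        # a uniform toy density
q = np.linspace(1.0, 3.0, 100)
q /= q.sum()                         # a non-uniform toy density
bc_same = bhattacharyya(p, p)        # equals one
bc_diff = bhattacharyya(p, q)        # strictly less than one
```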

Shown in Figure 2(a) is a bimodal Normal distribution. The height of the potential (b) varies around the perimeter, thus affording larger (or smaller) gradient magnitudes where probability mass must be translated further (or closer). The potential has two distinct minima at the modes of the distribution, and the inverse field (vectors) intuitively warps space toward the modes. Shown in Figure 2(c) is the estimated distribution, and in (d) are the contour intervals. The error is .

Shown in Figure 3(a) is a unimodal distribution with a concavity. The height of the potential (b) is greater across from the concavity, thus mapping the probability density further. The inverse gradient field visibly maps probability mass further to produce the concavity. The estimated distribution (c) and contour intervals (d) align; the error is .

Shown in Figure 4(a) is a distribution with a hole. Similar to the Normal distribution, Figure 1, the potential is symmetric (b). The difference can be seen in the slope of the potential, which is steeper near the middle of the domain, and nearly flat at the center. The steepness results from the more peaked shape of the distribution. The width of the flatter central region corresponds intuitively to the width of the central hole. At the center the potential increases (not visible), producing a field that is oriented outward toward the circular peak. The estimated distribution (c) and contour intervals (d) match, with contour levels shown for clarity. The error is .

## 7 Discussion

A theoretical framework was described that defines a mapping between the domain of a uniform distribution, and the domain of non-separable, multidimensional distributions. The framework was first used to explain why the cumulative distribution uniformly distributes one dimensional distributions, and was subsequently extended to an arbitrary number of dimensions. That extension led to a novel potential function that is associated with probability distributions, and embodies the (probabilistic) manifold.

The manifold potential describes the relationship between a probability density function, and the field that distributes the probability mass uniformly. The curvature of the potential describes how the density of space should be transformed. That potential was also shown to explain why the multidimensional cumulative can be used for separable distributions, and why it fails in the non-separable case. The manifold potential therefore generalizes the cumulative by accounting for both separable and non-separable distributions.

Lastly, a numerical method was developed to compute the manifold potential in two dimensions. That gave concrete examples of the manifold potential, and demonstrated that the differential equation is tractable. The total contribution of this work is therefore a formal and intuitive understanding of the problem, and the introduction of a novel concept that is associated with probability distributions.

Minimal assumptions were made about the form of the distributions, requiring primarily that they be discretized for numerical solutions. One limitation is the use of discrete derivatives, which restrict the range of spatial frequencies that can be present in the distribution. Smaller filter widths can partially alleviate this. Nonetheless we anticipate that for many practical cases, distributions will be sufficiently smooth.

One limitation is the restriction of the numerical method to two dimensions. That restriction was intentional, as it provides results for the smallest non-trivial instance of the problem. The numerical optimization was found to converge reliably; in particular, no significant local minima or multiple minima were observed during the optimizations. Numerical methods in higher dimensions are therefore promising, and the positive numerical results motivate future work to develop a general numerical method in arbitrary numbers of dimensions. That work is in progress.

To conclude, a broader impact of this work is theoretical. Methods in manifold learning and theories in neuroscience describe encodings for distributions. Those encodings might now be compared to the theoretical optimum that is described by the manifold potential, and novel manifold learning methods might be designed to converge toward the optimum.

## References

1. Russ, J. C., Image Processing Handbook, CRC Press, Inc., Boca Raton, FL, 4th ed. (2002).
2. Gersho, A. and Gray, R. M., Vector Quantization and Signal Compression, Kluwer Publishers, Norwell, MA (1991).
3. Ganguli, D. and Simoncelli, E. P., "Efficient Sensory Encoding and Bayesian Inference with Heterogeneous Neural Populations," Neural Computation (2014).
4. Ganguli, D., Efficient Coding and Bayesian Estimation with Neural Populations, PhD thesis, New York University (2012).
5. Wei, X.-X. and Stocker, A., "Efficient Coding Provides a Direct Link Between Prior and Likelihood in Perceptual Bayesian Inference," in Advances in Neural Information Processing Systems 25 (2012).
6. Wei, X.-X. and Stocker, A., "Bayesian Inference with Efficient Neural Population Codes," in Artificial Neural Networks and Machine Learning, Lecture Notes in Computer Science 7552, Springer Berlin Heidelberg (2012).
7. Ganguli, D. and Simoncelli, E. P., "Implicit Encoding of Prior Probabilities in Optimal Neural Populations," in Advances in Neural Information Processing Systems 23 (2010).
8. McDonnell, M. D. and Stocks, N. G., "Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations," Physical Review Letters 101 (2008).
9. Brunel, N. and Nadal, J.-P., "Mutual Information, Fisher Information, and Population Coding," Neural Computation 10(7) (1998).
10. Nadal, J.-P. and Parga, N., "Nonlinear Neurons in the Low-Noise Limit: A Factorial Code Maximizes Information Transfer," Network: Computation in Neural Systems 5(4) (1994).
11. Barlow, H. B., Possible Principles Underlying the Transformation of Sensory Messages, MIT Press, Cambridge, MA (1961).
12. Rasmussen, C. E. and Nickisch, H., "Gaussian Processes for Machine Learning (GPML) Toolbox," Journal of Machine Learning Research 11 (2010).
13. Farid, H. and Simoncelli, E., "Differentiation of Discrete Multidimensional Signals," IEEE Transactions on Image Processing 13(4) (2004).