# The Kernel Pitman-Yor Process

In this work, we propose the kernel Pitman-Yor process (KPYP) for nonparametric clustering of data with general spatial or temporal interdependencies. The KPYP is constructed by first introducing an infinite sequence of random locations. Then, based on the stick-breaking construction of the Pitman-Yor process, we define a predictor-dependent random probability measure by considering that the discount hyperparameters of the Beta-distributed random weights (stick variables) of the process are not uniform among the weights, but controlled by a kernel function expressing the proximity between the location assigned to each weight and the given predictors.


## 1 Introduction

Nonparametric Bayesian modeling techniques, especially Dirichlet process mixture (DPM) models, have become very popular in statistics over the last few years for performing nonparametric density estimation [1, 2, 3]. This theory is based on the observation that an ordinary finite mixture model (clustering model) with an infinite number of component distributions tends in the limit to a model governed by a Dirichlet process (DP) prior [2, 4]. Eventually, the nonparametric Bayesian inference scheme induced by a DPM model yields a posterior distribution over the proper number of model component densities (inferred clusters) [5], rather than selecting a fixed number of mixture components. Hence, the obtained nonparametric Bayesian formulation eliminates the need of doing inference (or making arbitrary choices) on the number of mixture components (clusters) necessary to represent the modeled data.

An interesting alternative to the Dirichlet process prior for nonparametric Bayesian modeling is the Pitman-Yor process (PYP) prior [6]. Pitman-Yor processes produce power-law distributions that allow for better modeling of populations comprising a high number of clusters with low popularity and a low number of clusters with high popularity [7]. Indeed, the Pitman-Yor process prior can be viewed as a generalization of the Dirichlet process prior, and reduces to it for a specific selection of its parameter values. In [8], a Gaussian process-based coupled PYP method for joint segmentation of multiple images is proposed.

A different perspective on the problem of nonparametric data modeling was introduced in [9], where the authors proposed the kernel stick-breaking process (KSBP). The KSBP imposes the assumption that clustering is more probable if two feature vectors are close in a prescribed (general) space, which may be associated explicitly with the spatial or temporal position of the modeled data. This way, the KSBP is capable of exploiting available prior information regarding the spatial or temporal relations and dependencies between the modeled data.

Inspired by these advances, and motivated by the interesting properties of the PYP, in this paper we come up with a different approach towards predictor-dependent random probability measures for non-parametric Bayesian clustering. We first introduce an infinite sequence of random spatial or temporal locations. Then, based on the stick-breaking construction of the Pitman-Yor process, we define a predictor-dependent random probability measure by considering that the discount hyperparameters of the Beta-distributed random weights (stick variables) of the process are not uniform among the weights, but controlled by a kernel function expressing the proximity between the location assigned to each weight and the given predictors. The obtained random probability measure is dubbed the kernel Pitman-Yor process (KPYP) for non-parametric clustering of data with general spatial or temporal interdependencies. We empirically study the performance of the KPYP prior in unsupervised image segmentation and text-dependent speaker identification, and compare it to the kernel stick-breaking process, and the Dirichlet process prior.

The remainder of this paper is organized as follows: In Section 2, we provide a brief presentation of the Pitman-Yor process, as well as the kernel stick-breaking process and its desirable properties in clustering data with spatial or temporal dependencies. In Section 3, the proposed nonparametric prior for clustering data with temporal or spatial dependencies is introduced, its relations to existing methods are discussed, and an efficient variational Bayesian algorithm for model inference is derived.

## 2 Theoretical Background

### 2.1 The Pitman-Yor Process

Dirichlet process (DP) models were first introduced by Ferguson [11]. A DP is characterized by a base distribution $G_0$ and a positive scalar $\alpha$, usually referred to as the innovation parameter, and is denoted as $\mathrm{DP}(\alpha, G_0)$. Essentially, a DP is a distribution placed over a distribution. Let us suppose we randomly draw a sample distribution $G$ from a DP, and, subsequently, we independently draw $M$ random variables $\{\Theta_m^*\}_{m=1}^{M}$ from $G$:

$$G \mid \alpha, G_0 \sim \mathrm{DP}(\alpha, G_0) \tag{1}$$

$$\Theta_m^* \mid G \sim G, \quad m = 1, \dots, M \tag{2}$$

Integrating out $G$, the joint distribution of the variables $\{\Theta_m^*\}_{m=1}^{M}$ can be shown to exhibit a clustering effect. Specifically, given the first $M-1$ samples of $G$, $\{\Theta_m^*\}_{m=1}^{M-1}$, it can be shown that a new sample $\Theta_M^*$ is either (a) drawn from the base distribution $G_0$ with probability $\frac{\alpha}{\alpha + M - 1}$, or (b) selected from the existing draws, according to a multinomial allocation, with probabilities proportional to the number of the previous draws with the same allocation [12]. Let $\{\Theta_c\}_{c=1}^{C}$ be the set of distinct values taken by the variables $\{\Theta_m^*\}_{m=1}^{M-1}$. Denoting as $f_c^{M-1}$ the number of values in $\{\Theta_m^*\}_{m=1}^{M-1}$ that equal $\Theta_c$, the distribution of $\Theta_M^*$ given $\{\Theta_m^*\}_{m=1}^{M-1}$ can be shown to be of the form [12]

$$p(\Theta_M^* \mid \{\Theta_m^*\}_{m=1}^{M-1}, \alpha, G_0) = \frac{\alpha}{\alpha + M - 1}\, G_0 + \sum_{c=1}^{C} \frac{f_c^{M-1}}{\alpha + M - 1}\, \delta_{\Theta_c} \tag{3}$$

where $\delta_{\Theta_c}$ denotes the distribution concentrated at the single point $\Theta_c$.

The Pitman-Yor process [6] functions similarly to the Dirichlet process. Let us suppose we randomly draw a sample distribution $G$ from a PYP, and, subsequently, we independently draw $M$ random variables $\{\Theta_m^*\}_{m=1}^{M}$ from $G$:

$$G \mid d, \alpha, G_0 \sim \mathrm{PY}(d, \alpha, G_0) \tag{4}$$

with

$$\Theta_m^* \mid G \sim G, \quad m = 1, \dots, M \tag{5}$$

where $d$ is the discount parameter of the Pitman-Yor process, $\alpha$ is its innovation parameter, and $G_0$ the base distribution. Integrating out $G$, similar to Eq. (3), we now obtain

$$p(\Theta_M^* \mid \{\Theta_m^*\}_{m=1}^{M-1}, d, \alpha, G_0) = \frac{\alpha + dC}{\alpha + M - 1}\, G_0 + \sum_{c=1}^{C} \frac{f_c^{M-1} - d}{\alpha + M - 1}\, \delta_{\Theta_c} \tag{6}$$

As we observe, the PYP yields an expression for $p(\Theta_M^* \mid \{\Theta_m^*\}_{m=1}^{M-1})$ quite similar to that of the DP, also possessing the rich-gets-richer clustering property, i.e., the more samples that have been assigned to a draw from $G_0$, the more likely subsequent samples will be assigned to the same draw. Further, the more we draw from $G_0$, the more likely a new sample will again be assigned to a new draw from $G_0$. These two effects together produce a power-law distribution where many unique values are observed, most of them rarely [6]. In particular, for $d > 0$, the number of unique values scales as $\mathcal{O}(\alpha M^d)$, where $M$ is the total number of draws. Note also that, for $d = 0$, the Pitman-Yor process reduces to the Dirichlet process, in which case the number of unique values grows more slowly, at $\mathcal{O}(\alpha \log M)$ [13].
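The predictive rule of Eq. (6) can be simulated directly to observe these growth rates empirically. The following Python sketch (an illustration under hypothetical parameter values, not part of the original derivation) implements the generalized Pólya urn of the PYP and compares cluster growth for $d > 0$ against the DP case $d = 0$:

```python
import random

def sample_pyp_urn(num_draws, d, alpha, seed=0):
    """Simulate the Pitman-Yor predictive rule of Eq. (6).

    Each draw either starts a new cluster, with probability proportional
    to alpha + d*C, or joins an existing cluster c with probability
    proportional to f_c - d (rich-gets-richer allocation).
    """
    rng = random.Random(seed)
    counts = []  # counts[c] = f_c, number of draws assigned to cluster c
    for m in range(num_draws):
        total = alpha + m  # normalizer alpha + M - 1, with m previous draws
        u = rng.uniform(0.0, total)
        p_new = alpha + d * len(counts)
        if u < p_new:
            counts.append(1)          # new cluster, drawn from G0
        else:
            u -= p_new
            for c in range(len(counts)):
                if u < counts[c] - d:
                    counts[c] += 1    # join existing cluster c
                    break
                u -= counts[c] - d
            else:
                counts[-1] += 1       # guard against float round-off
    return counts

# For d > 0 the number of distinct clusters grows like M^d, while for
# d = 0 (the DP case) it grows only like alpha * log(M).
dp_clusters = len(sample_pyp_urn(5000, d=0.0, alpha=1.0))
pyp_clusters = len(sample_pyp_urn(5000, d=0.5, alpha=1.0))
```

With 5,000 draws and $\alpha = 1$, the run with $d = 0.5$ typically produces an order of magnitude more distinct clusters than the DP run, reflecting the power-law behavior discussed above.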

A characterization of the (unconditional) distribution of the random variable $G$ drawn from a PYP, $\mathrm{PY}(d, \alpha, G_0)$, is provided by the stick-breaking construction of Sethuraman [14]. Consider two infinite collections of independent random variables $(v_c)_{c=1}^{\infty}$ and $(\Theta_c)_{c=1}^{\infty}$, where the $v_c$ are drawn from a Beta distribution, and the $\Theta_c$ are independently drawn from the base distribution $G_0$. The stick-breaking representation of $G$ is then given by [13]

$$G = \sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v})\, \delta_{\Theta_c} \tag{7}$$

where

$$p(v_c) = \mathrm{Beta}(v_c \mid 1 - d, \alpha + dc) \tag{8}$$

$$\boldsymbol{v} = (v_c)_{c=1}^{\infty} \tag{9}$$

$$\varpi_c(\boldsymbol{v}) = v_c \prod_{j=1}^{c-1} (1 - v_j) \in [0, 1] \tag{10}$$

and

$$\sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}) = 1 \tag{11}$$
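As a quick numerical check of (7)-(11), one can draw the stick variables at a finite truncation level and confirm that the resulting weights form a probability vector. The sketch below (the truncation level and parameter values are arbitrary choices for illustration) follows the standard truncation convention of setting the last stick to one:

```python
import random

def pyp_stick_weights(trunc, d, alpha, seed=0):
    """Draw truncated stick-breaking weights for a PY(d, alpha) process.

    Each stick is v_c ~ Beta(1 - d, alpha + d*c), as in Eq. (8); the
    weight of stick c is v_c * prod_{j<c} (1 - v_j), as in Eq. (10).
    The last stick absorbs all remaining mass, so the truncated weights
    sum exactly to one.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for c in range(1, trunc + 1):
        if c == trunc:
            v = 1.0  # truncation: take all remaining stick mass
        else:
            v = rng.betavariate(1.0 - d, alpha + d * c)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

w = pyp_stick_weights(trunc=50, d=0.25, alpha=2.0)
```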

### 2.2 The Kernel Stick-Breaking Process

An alternative to the above approaches, allowing for taking into account additional prior information regarding spatial or temporal dependencies in the modeled datasets, is the kernel stick-breaking process introduced in [9]. The basic notion in the formulation of the KSBP consists in the introduction of a predictor-dependent prior, which promotes clustering of adjacent data points in a prescribed (general) space.

Let us consider that the observed data points are associated with the positions $x \in \mathcal{X}$ where measurement was taken, arranged on a lattice. For example, in cases of sequential data modeling, the observed data points are naturally associated with a one-dimensional lattice that depicts their temporal succession, i.e., the time points these measurements were taken. In cases of computer vision applications, we might be dealing with observations measured at different locations in a two-dimensional or three-dimensional space $\mathcal{X}$.

To take this prior information into account, the KSBP postulates that the random process $G$ in (1) comprises a function of the predictors $x$ related to the observable data points, expressing their location in the prescribed space $\mathcal{X}$. Specifically, it is assumed that

$$G = \sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}(x))\, \delta_{\Theta_c} \tag{12}$$

where

$$\varpi_c(\boldsymbol{v}(x)) = v_c(x, \Gamma_c; \psi_c) \prod_{j=1}^{c-1} \big(1 - v_j(x, \Gamma_j; \psi_j)\big) \in [0, 1] \tag{13}$$

$$\boldsymbol{v}(x) = \big(v_c(x, \Gamma_c; \psi_c)\big)_{c=1}^{\infty} \tag{14}$$

$$v_c(x, \Gamma_c; \psi_c) = V_c\, k(x, \Gamma_c; \psi_c) \tag{15}$$

$$p(V_c) = \mathrm{Beta}(V_c \mid 1, \alpha) \tag{16}$$

and $k(x, \Gamma_c; \psi_c)$ is a kernel function centered at $\Gamma_c$ with hyperparameter $\psi_c$.

By selecting an appropriate form of the kernel function $k$, the KSBP allows for obtaining prior probabilities for the derived clusters that depend on the values of the predictors (spatial or temporal locations) $x$. Indeed, the closer the location $x$ of an observation is to the location $\Gamma_c$ assigned to the $c$th cluster, the higher the prior probability $\varpi_c(\boldsymbol{v}(x))$ becomes. Thus, the KSBP prior promotes by construction clustering of (spatially or temporally) adjacent data points. For example, a typical selection for the kernel $k$ is the radial basis function (RBF) kernel

$$k(x, \Gamma_c; \psi_c) = \exp\!\left[-\frac{\|x - \Gamma_c\|^2}{\psi_c^2}\right] \tag{17}$$
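To illustrate Eqs. (13)-(17), the following sketch computes truncated KSBP stick weights at a given predictor location using the RBF kernel; the one-dimensional locations and parameter values here are hypothetical:

```python
import math
import random

def rbf_kernel(x, center, psi):
    """RBF kernel of Eq. (17): exp(-||x - center||^2 / psi^2)."""
    return math.exp(-((x - center) ** 2) / psi ** 2)

def ksbp_weights(x, centers, psi, alpha, seed=0):
    """Truncated KSBP weights at predictor x, per Eqs. (13)-(16):
    v_c(x) = V_c * k(x, center_c; psi), with V_c ~ Beta(1, alpha)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for center in centers:
        V = rng.betavariate(1.0, alpha)
        v = V * rbf_kernel(x, center, psi)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    # `remaining` is the mass held by sticks beyond the truncation level
    return weights

# Weights concentrate on clusters whose centers lie near x:
w_near = ksbp_weights(x=0.0, centers=[0.0, 5.0, 10.0], psi=1.0, alpha=1.0)
```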

## 3 Proposed Approach

### 3.1 Model Formulation

We aim to obtain a clustering algorithm which takes into account the prior information regarding the (temporal or spatial) adjacencies of the observed data in the locations space $\mathcal{X}$, promoting clustering of data adjacent in the space $\mathcal{X}$, and discouraging clustering of data points relatively near in the feature space but far apart in the locations space $\mathcal{X}$. For this purpose, we seek to provide a location-dependent nonparametric prior for clustering the observed data.

Motivated by the definition and the properties of the Pitman-Yor process discussed in the previous section, to effect these goals, in this work we introduce a random probability measure under which, given the first $M-1$ samples drawn from $G$, a new sample $\Theta_M^*$ associated with a measurement location $x$ is distributed according to

$$\begin{aligned} p(\Theta_M^* \mid {} & \{\Theta_m^*\}_{m=1}^{M-1}; x, k, \alpha, \hat{X}, G_0) \\ = {} & \frac{\alpha + \sum_{c=1}^{C} \big[1 - k(x, \hat{x}_c; \psi_c)\big]}{\alpha + M - 1}\, G_0 \\ & + \sum_{c=1}^{C} \frac{f_c^{M-1} + k(x, \hat{x}_c; \psi_c) - 1}{\alpha + M - 1}\, \delta_{\Theta_c} \end{aligned} \tag{18}$$

where $f_c^{M-1}$ is the number of values in $\{\Theta_m^*\}_{m=1}^{M-1}$ that equal $\Theta_c$, $\{\Theta_c\}_{c=1}^{C}$ is the set of distinct values taken by the variables $\{\Theta_m^*\}_{m=1}^{M-1}$, $G_0$ is the employed base measure, $\hat{x}_c$ is the location assigned to the $c$th cluster, $\hat{X} = \{\hat{x}_c\}_{c=1}^{\infty}$, and $k(x, \hat{x}_c; \psi_c)$ is a bounded kernel function taking values in the interval $[0, 1]$, such that

$$\lim_{x \to \hat{x}} k(x, \hat{x}; \psi) = 1 \tag{19}$$

$$\lim_{\mathrm{dist}(x, \hat{x}) \to \infty} k(x, \hat{x}; \psi) = 0 \tag{20}$$

$\alpha$ is the innovation parameter of the process, conditioned to satisfy $\alpha > 0$, and $\mathrm{dist}(x, \hat{x})$ is the distance metric used by the employed kernel function. We dub this random probability measure the kernel Pitman-Yor process, and we denote

$$\Theta_m^* \mid x; G \sim G(x), \quad m = 1, \dots, M \tag{21}$$

with

$$G(x) \mid k, \alpha, \hat{X}, G_0 \sim \mathrm{KPYP}(x; k, \alpha, \hat{X}, G_0) \tag{22}$$

The stick-breaking construction of the KPYP follows directly from the above definition (18) and the relevant discussions of Section 2. Considering a KPYP with cluster locations set $\hat{X}$, a kernel function $k$ satisfying the constraints (19) and (20), and innovation parameter $\alpha$, we have

$$G(x) = \sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}(x))\, \delta_{\Theta_c} \tag{23}$$

where

$$v_c(x) \sim \mathrm{Beta}\big(k(x, \hat{x}_c; \psi_c),\; \alpha + c\big[1 - k(x, \hat{x}_c; \psi_c)\big]\big) \tag{24}$$

and

$$\varpi_c(\boldsymbol{v}(x)) = v_c(x) \prod_{j=1}^{c-1} \big(1 - v_j(x)\big) \in [0, 1] \tag{25}$$
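The construction (23)-(25) can be sketched numerically as follows; here the RBF kernel (17) stands in for $k$, and the locations and parameter values are hypothetical. Note how the kernel value enters the Beta prior of each stick, as in (24), rather than multiplying the stick as in the KSBP:

```python
import math
import random

def kpyp_weights(x, centers, psi, alpha, seed=0):
    """Truncated KPYP stick weights at predictor x, per Eqs. (23)-(25).

    Each stick is drawn as v_c(x) ~ Beta(k_c, alpha + c*(1 - k_c)),
    with k_c = k(x, x_c; psi) the RBF kernel of Eq. (17).
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for c, center in enumerate(centers, start=1):
        k_c = math.exp(-((x - center) ** 2) / psi ** 2)
        k_c = max(k_c, 1e-12)  # guard against underflow for distant centers
        v = rng.betavariate(k_c, alpha + c * (1.0 - k_c))
        weights.append(v * remaining)
        remaining *= 1.0 - v
    # `remaining` is the mass held by sticks beyond the truncation level
    return weights

w = kpyp_weights(x=0.0, centers=[0.0, 1.0, 5.0], psi=2.0, alpha=1.0)
```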

Proposition 1. The stochastic process defined in (23)-(25) is a valid random probability measure.
Proof. We need to show that

$$\sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}(x)) = 1 \tag{26}$$

For this purpose, we follow an approach similar to [9]. From (25), we have

$$1 - \sum_{c=1}^{C-1} \varpi_c(\boldsymbol{v}(x)) = \prod_{c=1}^{C-1} \big[1 - v_c(x)\big] \tag{27}$$

Then, taking the limit as $C \to \infty$ and taking logs on both sides of (27), we have

$$\sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}(x)) = 1 \quad \text{if and only if} \quad \sum_{c=1}^{\infty} \log\big[1 - v_c(x)\big] = -\infty \tag{28}$$

Based on the Kolmogorov three-series theorem, the summation on the right-hand side is over independent random variables and is equal to $-\infty$ if and only if $\sum_{c=1}^{\infty} \mathbb{E}\big[\log(1 - v_c(x))\big] = -\infty$. However, $v_c(x)$ follows a Beta distribution, which means $v_c(x) \in (0, 1)$, thus $\log(1 - v_c(x)) < 0$, and hence its expectation is negative; thus, the condition is satisfied, and (26) holds true.

### 3.2 Relation to the KSBP

Indeed, the proposed KPYP shares some common ideas with the KSBP of [9]. The KSBP considers that

$$G(x) = \sum_{c=1}^{\infty} \varpi_c(\boldsymbol{v}(x))\, \delta_{\Theta_c} \tag{29}$$

where

$$\varpi_c(\boldsymbol{v}(x)) = v_c(x) \prod_{j=1}^{c-1} \big(1 - v_j(x)\big) \in [0, 1] \tag{30}$$

$$v_c(x) = V_c\, k(x, \hat{x}_c; \psi_c) \tag{31}$$

$$p(V_c) = \mathrm{Beta}(V_c \mid 1, \alpha) \tag{32}$$

From this definition, we observe that there is a key difference between the KPYP and the KSBP: the KSBP multiplies stick variables sharing the same Beta prior with a bounded kernel function centered at a location unique for each stick, to obtain a predictor (location)-dependent random probability measure. Instead, the KPYP considers stick variables with different Beta priors, with the prior of each stick variable employing a different “discount hyperparameter,” defined as a bounded kernel centered at a location unique for each stick. This way, the KPYP controls the assignment of observations to clusters by discounting clusters the centers of which are too far from the clustered data points in the locations space .

It is interesting to compute the mean and variance of the stick variables $v_c(x)$ for these two stochastic processes, for a given observation location $x$ and cluster center $\hat{x}_c$. In the case of the KPYP, we have

$$\mathbb{E}[v_c(x)] = \frac{k(x, \hat{x}_c; \psi_c)}{k(x, \hat{x}_c; \psi_c) + \alpha_c} \tag{33}$$

$$\mathbb{V}[v_c(x)] = \frac{k(x, \hat{x}_c; \psi_c)\, \alpha_c}{\big(k(x, \hat{x}_c; \psi_c) + \alpha_c\big)^2 \big(k(x, \hat{x}_c; \psi_c) + \alpha_c + 1\big)} \tag{34}$$

where

$$\alpha_c \triangleq \alpha + c\big(1 - k(x, \hat{x}_c; \psi_c)\big) \tag{35}$$

On the contrary, for the KSBP we have

$$\mathbb{E}[v_c(x)] = \frac{k(x, \hat{x}_c; \psi_c)}{1 + \alpha} \tag{36}$$

$$\mathbb{V}[v_c(x)] = \frac{k(x, \hat{x}_c; \psi_c)^2\, \alpha}{(1 + \alpha)^2 (\alpha + 2)} \tag{37}$$

From (33) and (36), we observe that, for a given observation location $x$ and cluster center $\hat{x}_c$, the same increase in the value of the kernel function induces a much greater increase in the expected value of the stick variable employed by the KPYP than in the expectation of the stick variable employed by the KSBP. Hence, the predictor (location)-dependent prior probabilities of cluster assignment of the KPYP appear to vary more steeply with the employed kernel function values compared to the KSBP.
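A small numeric illustration of (33)-(36) makes this steeper dependence concrete (the stick index, kernel values, and $\alpha$ below are arbitrary choices): for sticks with $c > 1$, raising the kernel value near its upper end moves the KPYP expectation considerably more than the KSBP one.

```python
def kpyp_mean_stick(k, c, alpha):
    """Eq. (33): E[v_c(x)] for the KPYP, with alpha_c as in Eq. (35)."""
    alpha_c = alpha + c * (1.0 - k)
    return k / (k + alpha_c)

def ksbp_mean_stick(k, alpha):
    """Eq. (36): E[v_c(x)] for the KSBP."""
    return k / (1.0 + alpha)

# Raising the kernel value from 0.8 to 1.0 for stick c = 5, alpha = 1:
kpyp_gain = kpyp_mean_stick(1.0, 5, 1.0) - kpyp_mean_stick(0.8, 5, 1.0)
ksbp_gain = ksbp_mean_stick(1.0, 1.0) - ksbp_mean_stick(0.8, 1.0)
# kpyp_gain ≈ 0.214 versus ksbp_gain = 0.100
```

Both expectations share the same endpoints ($0$ at $k = 0$ and $1/(1+\alpha)$ at $k = 1$), but the KPYP curve is much steeper near $k = 1$, which is where the kernel places observations close to a cluster center.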

### 3.3 Variational Bayesian Inference

Inference for nonparametric models can be conducted under a Bayesian setting, typically by means of variational Bayes (e.g., [15]) or Monte Carlo techniques (e.g., [16]). Here, we prefer a variational Bayesian approach, due to its lower computational cost. For this purpose, we additionally impose a Gamma prior over the innovation parameter $\alpha$, with

$$p(\alpha) = \mathcal{G}(\alpha \mid \eta_1, \eta_2). \tag{38}$$

Let us consider a set of observations $Y = \{y_n\}_{n=1}^{N}$ with corresponding locations $X = \{x_n\}_{n=1}^{N}$. We postulate for our observed data a likelihood function of the form

$$p(y_n \mid z_n = c) = p(y_n \mid \theta_c) \tag{39}$$

where the hidden variables $z_n$ are defined such that $z_n = c$ if the $n$th data point is considered to be derived from the $c$th cluster. We impose a multinomial prior over the hidden variables $z_n$, with

$$p(z_n = c \mid x_n) = \varpi_c(\boldsymbol{v}(x_n)) \tag{40}$$

where the $\varpi_c(\boldsymbol{v}(x_n))$ are given by (25), with the prior over the $v_c(x_n)$ given by (24). We also impose a suitable conjugate exponential prior over the likelihood parameters $\theta_c$.

Our variational Bayesian inference formalism consists in the derivation of a family of variational posterior distributions which approximate the true posterior distribution over the $z_n$, $v_c(x_n)$, and $\theta_c$, and the innovation parameter $\alpha$. Under this infinite-dimensional setting, Bayesian inference is not tractable. For this reason, we fix a truncation threshold $C$ and let the variational posterior over the $v_c(x_n)$ have the property $q(v_C(x_n) = 1) = 1$, i.e., we set $\varpi_c(\boldsymbol{v}(x_n))$ equal to zero for $c > C$, $\forall n$.

Let $W$ be the set of the parameters of our truncated model over which a prior distribution has been imposed, and $\Xi$ be the set of the hyperparameters of the model, comprising the $\psi_c$ and the hyperparameters of the priors imposed over the innovation parameter $\alpha$ and the likelihood parameters $\theta_c$ of the model. Variational Bayesian inference consists in the derivation of an approximate posterior $q(W)$ by maximization (in an iterative fashion) of the variational free energy

$$\mathcal{L}(q) = \int \mathrm{d}W\, q(W) \log \frac{p(X, Y, W \mid \Xi)}{q(W)} \tag{41}$$

Having considered a conjugate exponential prior configuration, the variational posterior $q(W)$ is expected to take the same functional form as the prior [17]. The variational free energy of our model reads

$$\begin{aligned} \mathcal{L}(q) = {} & \int \mathrm{d}\alpha\, q(\alpha) \Bigg\{ \log \frac{p(\alpha \mid \eta_1, \eta_2)}{q(\alpha)} + \sum_{c=1}^{C-1} \sum_{n=1}^{N} \int \mathrm{d}v_c(x_n)\, q(v_c(x_n)) \log \frac{p(v_c(x_n) \mid \alpha)}{q(v_c(x_n))} \Bigg\} \\ & + \sum_{c=1}^{C} \int \mathrm{d}\theta_c\, q(\theta_c) \log \frac{p(\theta_c)}{q(\theta_c)} \\ & + \sum_{c=1}^{C} \sum_{n=1}^{N} q(z_n = c) \Bigg\{ \int \mathrm{d}\boldsymbol{v}(x_n)\, q(\boldsymbol{v}(x_n)) \log p(z_n = c \mid x_n) - \log q(z_n = c) + \int \mathrm{d}\theta_c\, q(\theta_c) \log p(y_n \mid \theta_c) \Bigg\} \end{aligned} \tag{42}$$

### 3.4 Variational Posteriors

Let us denote as $\langle \cdot \rangle$ the posterior expectation of a quantity. We have

$$q(v_c(x_n)) = \mathrm{Beta}(v_c(x_n) \mid \tilde{\beta}_{c,n}, \hat{\beta}_{c,n}) \tag{43}$$

where

$$\tilde{\beta}_{c,n} = k(x_n, \hat{x}_c; \psi_c) + \sum_{m: x_m = x_n} q(z_m = c) \tag{44}$$

$$\hat{\beta}_{c,n} = \langle \alpha \rangle + c\big[1 - k(x_n, \hat{x}_c; \psi_c)\big] + \sum_{m: x_m = x_n} \sum_{c'=c+1}^{C} q(z_m = c') \tag{45}$$

and

$$q(\alpha) = \mathcal{G}(\alpha \mid \hat{\eta}_1, \hat{\eta}_2) \tag{46}$$

where

$$\hat{\eta}_1 = \eta_1 + N(C - 1) \tag{47}$$

$$\hat{\eta}_2 = \eta_2 - \sum_{c=1}^{C-1} \sum_{n=1}^{N} \big[\psi(\hat{\beta}_{c,n}) - \psi(\tilde{\beta}_{c,n} + \hat{\beta}_{c,n})\big] \tag{48}$$

where $\psi(\cdot)$ denotes the Digamma function, and

$$\langle \alpha \rangle = \frac{\hat{\eta}_1}{\hat{\eta}_2} \tag{49}$$

Further, the cluster assignment variables yield

$$q(z_{nc} = 1) \propto \exp\big(\langle \log \varpi_c(\boldsymbol{v}(x_n)) \rangle\big) \exp(\varphi_{nc}) \tag{50}$$

where

$$\langle \log \varpi_c(\boldsymbol{v}(x_n)) \rangle = \sum_{c'=1}^{c-1} \langle \log(1 - v_{c'}(x_n)) \rangle + \langle \log v_c(x_n) \rangle \tag{51}$$

$$\varphi_{nc} = \langle \log p(y_n \mid \theta_c) \rangle_{q(\theta_c)} \tag{52}$$

and

$$\langle \log v_c(x_n) \rangle = \psi(\tilde{\beta}_{c,n}) - \psi(\tilde{\beta}_{c,n} + \hat{\beta}_{c,n}) \tag{53}$$

$$\langle \log(1 - v_c(x_n)) \rangle = \psi(\hat{\beta}_{c,n}) - \psi(\tilde{\beta}_{c,n} + \hat{\beta}_{c,n}) \tag{54}$$
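The expectations (53)-(54) and the assignment update (50)-(52) can be combined into a single responsibility computation. The sketch below is a simplified illustration (it ignores the truncation convention for the last stick, and the Beta parameters and likelihood terms are hypothetical); it assumes SciPy is available for the Digamma function:

```python
import math
from scipy.special import digamma

def stick_log_expectations(beta_tilde, beta_hat):
    """Eqs. (53)-(54): posterior expectations for a Beta(b~, b^) stick."""
    e_log_v = digamma(beta_tilde) - digamma(beta_tilde + beta_hat)
    e_log_1mv = digamma(beta_hat) - digamma(beta_tilde + beta_hat)
    return e_log_v, e_log_1mv

def assignment_posterior(beta_tilde, beta_hat, log_lik):
    """Normalized q(z_n = c) per Eqs. (50)-(52).

    beta_tilde[c], beta_hat[c]: variational Beta parameters of stick c
    at location x_n; log_lik[c] = <log p(y_n | theta_c)>, as in Eq. (52).
    """
    log_q, acc = [], 0.0  # acc holds sum_{c' < c} <log(1 - v_c'(x_n))>
    for bt, bh, ll in zip(beta_tilde, beta_hat, log_lik):
        e_log_v, e_log_1mv = stick_log_expectations(bt, bh)
        log_q.append(acc + e_log_v + ll)  # Eqs. (50)-(51), in log domain
        acc += e_log_1mv
    top = max(log_q)
    norm = top + math.log(sum(math.exp(l - top) for l in log_q))
    return [math.exp(l - norm) for l in log_q]

q = assignment_posterior([2.0, 1.0, 0.5], [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```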

Regarding the likelihood parameters $\theta_c$, we obtain

$$\log q(\theta_c) \propto \log p(\theta_c) + \sum_{n=1}^{N} q(z_n = c) \log p(y_n \mid \theta_c) \tag{55}$$

Finally, regarding the model hyperparameters $\Xi$, we obtain the hyperparameters $\psi_c$ of the employed kernel functions by maximization of the lower bound $\mathcal{L}(q)$, and we heuristically select the values of the rest.

### 3.5 Learning the Cluster Locations $\hat{x}_c$

Regarding the determination of the locations $\hat{x}_c$ assigned to the obtained clusters, these can be obtained either by random selection or by maximization of the variational free energy over them. The latter procedure can be conducted by means of any appropriate iterative maximization algorithm; here, we employ the popular L-BFGS algorithm [18] for this purpose. Both random selection and estimation by means of variational free energy optimization, using the L-BFGS algorithm, shall be evaluated in the experimental section of our paper.
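As a sketch of how such an L-BFGS step might look in practice, the snippet below optimizes cluster locations against a stand-in quadratic objective (the actual target would be the variational free energy (42); the data, responsibilities, and objective here are purely illustrative) using SciPy's L-BFGS-B implementation:

```python
import numpy as np
from scipy.optimize import minimize

x_n = np.array([0.0, 0.2, 4.8, 5.1])   # hypothetical observation locations
resp = np.array([[0.9, 0.1],            # hypothetical responsibilities q(z_n = c)
                 [0.8, 0.2],
                 [0.1, 0.9],
                 [0.2, 0.8]])

def objective(centers):
    """Responsibility-weighted squared distances; minimizing this pulls
    each center toward the locations its cluster is responsible for."""
    d2 = (x_n[:, None] - centers[None, :]) ** 2
    return float(np.sum(resp * d2))

result = minimize(objective, x0=np.zeros(2), method="L-BFGS-B")
centers_hat = result.x  # estimated cluster locations (weighted means here)
```

For this quadratic stand-in the optimum is simply each cluster's responsibility-weighted mean location, which makes the result easy to verify; with the true free energy the same `minimize` call applies, only with a different objective.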

## Acknowledgment

The authors would like to thank Dr. David B. Dunson for the enlightening discussion regarding the correct way to implement the MCMC sampler for the KSBP.

## References

• [1] S. Walker, P. Damien, P. Laud, and A. Smith, “Bayesian nonparametric inference for random distributions and related functions,” J. Roy. Statist. Soc. B, vol. 61, no. 3, pp. 485–527, 1999.
• [2] R. Neal, “Markov chain sampling methods for Dirichlet process mixture models,” J. Comput. Graph. Statist., vol. 9, pp. 249–265, 2000.
• [3] P. Muller and F. Quintana, “Nonparametric Bayesian data analysis,” Statist. Sci., vol. 19, no. 1, pp. 95–110, 2004.
• [4] C. Antoniak, “Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems.” The Annals of Statistics, vol. 2, no. 6, pp. 1152–1174, 1974.
• [5] D. Blei and M. Jordan, “Variational methods for the Dirichlet process,” in 21st Int. Conf. Machine Learning, New York, NY, USA, July 2004, pp. 12–19.
• [6] J. Pitman and M. Yor, “The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator,” in Annals of Probability, vol. 25, 1997, pp. 855–900.
• [7] S. Goldwater, T. Griffiths, and M. Johnson, “Interpolating between types and tokens by estimating power-law generators,” in Advances in Neural Information Processing Systems, vol. 18, 2006.
• [8] E. B. Sudderth and M. I. Jordan, “Shared segmentation of natural scenes using dependent Pitman-Yor processes,” in Advances in Neural Information Processing Systems, 2008, pp. 1585–1592.
• [9] D. B. Dunson and J.-H. Park, “Kernel stick-breaking processes,” Biometrika, vol. 95, pp. 307–323, 2007.
• [10] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, “Sharing clusters among related groups: Hierarchical Dirichlet processes,” in Advances in Neural Information Processing Systems (NIPS), 2005, pp. 1385–1392.
• [11] T. Ferguson, “A Bayesian analysis of some nonparametric problems,” The Annals of Statistics, vol. 1, pp. 209–230, 1973.
• [12] D. Blackwell and J. MacQueen, “Ferguson distributions via Pólya urn schemes,” The Annals of Statistics, vol. 1, no. 2, pp. 353–355, 1973.
• [13] Y. W. Teh, “A hierarchical Bayesian language model based on Pitman-Yor processes,” in Proc. Association for Computational Linguistics, 2006, pp. 985–992.
• [14] J. Sethuraman, “A constructive definition of the Dirichlet prior,” Statistica Sinica, vol. 2, pp. 639–650, 1994.
• [15] D. M. Blei and M. I. Jordan, “Variational inference for Dirichlet process mixtures,” Bayesian Analysis, vol. 1, no. 1, pp. 121–144, 2006.
• [16] Y. Qi, J. W. Paisley, and L. Carin, “Music analysis using hidden Markov mixture models,” IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5209–5224, 2007.
• [17] C. M. Bishop, Pattern Recognition and Machine Learning.   New York: Springer, 2006.
• [18] D. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming B, vol. 45, no. 3, pp. 503–528, 1989.
• [19] R. Caruana, “Multitask learning,” Machine Learning, vol. 28, pp. 41–75, 1997.
• [20] J. Baxter, “Learning internal representations,” in COLT: Proceedings of the Workshop on Computational Learning Theory, 1995.
• [21] T. Evgeniou, C. Micchelli, and M. Pontil, “Learning multiple tasks with kernel methods,” Journal of Machine Learning Research, vol. 6, pp. 615–637, 2005.
• [22] N. Lawrence and J. Platt, “Learning to learn with the informative vector machine,” in In Proceedings of the 21st International Conference on Machine Learning, 2004.
• [23] K. Yu, A. Schwaighofer, V. Tresp, W.-Y. Ma, and H. Zhang, “Collaborative ensemble learning: Combining collaborative and content-based information filtering via hierarchical Bayes,” in Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, 2003.
• [24] K. Yu, A. Schwaighofer, and V. Tresp, “Learning Gaussian processes from multiple tasks,” in Proceedings of the 22nd International Conference on Machine Learning, 2005.
• [25] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proc. 8th Int’l Conf. Computer Vision, Vancouver, Canada, July 2001, pp. 416–423.
• [26] Q. An, C. Wang, I. Shterev, E. Wang, L. Carin, and D. B. Dunson, “Hierarchical kernel stick-breaking process for multi-task image analysis,” in Proceedings of the 25th international conference on Machine learning - ICML ’08, pp. 17–24, 2008.
• [27] R. Unnikrishnan, C. Pantofaru, and M. Hebert, “A measure for objective evaluation of image segmentation algorithms,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, USA, June 2005, pp. 34–41.
• [28] ——, “Toward objective evaluation of image segmentation algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
• [29] G. Mori, “Guiding model search using segmentation.” in Proc. 10th IEEE Int. Conf. on Computer Vision (ICCV), 2005.
• [30] M. Varma and A. Zisserman, “Classifying images of materials: Achieving viewpoint and illumination independence,” in Proc. 7th IEEE European Conf. on Computer Vision (ECCV), 2002.
• [31] M. Kudo, J. Toyama, and M. Shimbo, “Multidimensional curve classification using passing-through regions,” Pattern Recognition Letters, vol. 20, no. 11-13, pp. 1103–1111, 1999.
• [32] A. Asuncion and D. Newman, “UCI machine learning repository,” 2007. [Online]. Available: http://www.ics.uci.edu/mlearn/MLRepository.html