 # Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application

We provide an elementary proof of a simple, efficient algorithm for computing the Euclidean projection of a point onto the probability simplex. We also show an application in Laplacian K-modes clustering.


## 1 Projection onto the probability simplex

Consider the problem of computing the Euclidean projection of a point onto the probability simplex, which is defined by the following optimization problem:

$$\begin{aligned} \min_{x\in\mathbb{R}^D}\ & \tfrac{1}{2}\|x-y\|^2 && \text{(1a)}\\ \text{s.t.}\ & x^\top\mathbf{1} = 1 && \text{(1b)}\\ & x \ge 0. && \text{(1c)} \end{aligned}$$

This is a quadratic program and the objective function is strictly convex, so there is a unique solution, which, with a slight abuse of notation, we denote by $x(y)$.

## 2 Algorithm

The following algorithm finds the solution $x$ to the problem:

**Algorithm 1** Euclidean projection of $y\in\mathbb{R}^D$ onto the probability simplex.

1. Sort $y$ into $u$: $u_1 \ge u_2 \ge \cdots \ge u_D$.
2. Find $\rho = \max\bigl\{1 \le j \le D:\ u_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j u_i\bigr) > 0\bigr\}$.
3. Define $\lambda = \frac{1}{\rho}\bigl(1 - \sum_{i=1}^\rho u_i\bigr)$.
4. Output $x$ with $x_i = \max(y_i + \lambda,\, 0)$, $i = 1,\dots,D$.

The complexity of the algorithm is dominated by the cost of sorting the components of $y$, i.e., $O(D\log D)$. The algorithm is not iterative and identifies the active set exactly after at most $D$ steps. It can be easily implemented (see section 4).

The algorithm has the following geometric interpretation. The solution can be written as $x_i = \max(y_i + \lambda,\, 0)$, $i = 1,\dots,D$, where $\lambda$ is chosen such that $\sum_{i=1}^D x_i = 1$. Place the values $y_1,\dots,y_D$ as points on the X axis. Then the solution is given by a rigid shift of the points by $\lambda$ such that the points that remain to the right of the Y axis sum to 1.
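For concreteness, here is a short NumPy sketch of Algorithm 1 for a single vector (an illustrative implementation; the function name is ours):

```python
import numpy as np

def simplex_proj(y):
    """Project y onto the probability simplex {x : sum(x) = 1, x >= 0}."""
    D = y.shape[0]
    u = np.sort(y)[::-1]                       # components in decreasing order
    css = np.cumsum(u)                         # running sums of sorted components
    j = np.arange(1, D + 1)
    # rho = largest j such that u_j + (1 - sum_{i<=j} u_i)/j > 0
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1] + 1
    lam = (1.0 - css[rho - 1]) / rho           # shift so the positive part sums to 1
    return np.maximum(y + lam, 0.0)
```

The test at $j = 1$ always passes ($u_1 + 1 - u_1 = 1 > 0$), so $\rho$ is well defined.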

The pseudocode above appears in Duchi et al. (2008), although earlier papers (Brucker, 1984; Pardalos and Kovoor, 1990) solved the problem in greater generality.

### Other algorithms

The problem (1) can be solved in many other ways, for example by particularizing QP algorithms such as active-set methods, gradient-projection methods, or interior-point methods. It can also be solved by alternating projection onto the two constraints in a finite number of steps (Michelot, 1986). Another way (Boyd and Vandenberghe, 2004, Exercise 4.1, solution available at http://see.stanford.edu/materials/lsocoee364b/hw4sol.pdf) is to construct a Lagrangian formulation by dualizing the equality constraint and then solve a 1D nonsmooth optimization over the Lagrange multiplier. Algorithm 1 has the advantage of being very simple, not iterative, and of identifying the active set at the solution exactly after at most $D$ steps (each of cost $O(1)$) after sorting.

## 3 A simple proof

A proof of correctness of Algorithm 1 can be found in Shalev-Shwartz and Singer (2006) and Chen and Ye (2011), but we offer a simpler proof which involves only the KKT theorem.

We apply the standard KKT conditions for the problem (Nocedal and Wright, 2006). The Lagrangian of the problem is

$$\mathcal{L}(x,\lambda,\beta) = \tfrac{1}{2}\|x-y\|^2 - \lambda(x^\top\mathbf{1} - 1) - \beta^\top x$$

where $\lambda\in\mathbb{R}$ and $\beta\in\mathbb{R}^D$ are the Lagrange multipliers for the equality and inequality constraints, respectively. At the optimal solution the following KKT conditions hold:

$$\begin{aligned} x_i - y_i - \lambda - \beta_i &= 0, & i &= 1,\dots,D && \text{(2a)}\\ x_i &\ge 0, & i &= 1,\dots,D && \text{(2b)}\\ \beta_i &\ge 0, & i &= 1,\dots,D && \text{(2c)}\\ x_i \beta_i &= 0, & i &= 1,\dots,D && \text{(2d)}\\ \sum_{i=1}^D x_i &= 1. &&&& \text{(2e)} \end{aligned}$$

From the complementarity condition (2d), it is clear that if $x_i > 0$, we must have $\beta_i = 0$ and $x_i = y_i + \lambda > 0$; if $x_i = 0$, we must have $\beta_i \ge 0$ and $x_i = y_i + \lambda + \beta_i = 0$, whence $y_i + \lambda = -\beta_i \le 0$. Obviously, the components of the optimal solution $x$ that are zero correspond to the smaller components of $y$. Without loss of generality, we assume the components of $y$ are sorted and $x$ uses the same ordering, i.e.,

$$y_1 \ge \cdots \ge y_\rho \ge y_{\rho+1} \ge \cdots \ge y_D, \qquad x_1 \ge \cdots \ge x_\rho > x_{\rho+1} = \cdots = x_D = 0,$$

where $x_\rho > 0$ and $x_{\rho+1} = 0$. In other words, $\rho$ is the number of positive components in the solution $x$. Now we apply the last condition (2e) and have

$$1 = \sum_{i=1}^{D} x_i = \sum_{i=1}^{\rho} x_i = \sum_{i=1}^{\rho} (y_i + \lambda)$$

which gives $\lambda = \frac{1}{\rho}\bigl(1 - \sum_{i=1}^{\rho} y_i\bigr)$. Hence $\rho$ is the key to the solution. Once we know $\rho$ (there are only $D$ possible values of it), we can compute $\lambda$, and the optimal solution is obtained by just adding $\lambda$ to each component of $y$ and thresholding, as in the end of Algorithm 1. (It is easy to check that this solution indeed satisfies all KKT conditions.) In the algorithm, we carry out the tests for $j = 1,\dots,D$ of whether $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr) > 0$. We now prove that the number of times this test turns out positive is exactly $\rho$. The following theorem is essentially Lemma 3 of Shalev-Shwartz and Singer (2006).

###### Theorem 1.

Let $\rho$ be the number of positive components in the solution $x$. Then

$$\rho = \max\Bigl\{1 \le j \le D:\ y_j + \tfrac{1}{j}\Bigl(1 - \sum_{i=1}^{j} y_i\Bigr) > 0\Bigr\}.$$
###### Proof.

Recall from the KKT conditions (2) that $y_i + \lambda = x_i > 0$ for $i \le \rho$ and $y_i + \lambda \le 0$ for $i > \rho$. In the sequel, we show that the test will continue to be positive until $j = \rho$ and then stay non-positive afterwards, i.e., $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr) > 0$ for $j \le \rho$, and $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr) \le 0$ for $j > \rho$.

• For $j = \rho$, we have

$$y_\rho + \frac{1}{\rho}\Bigl(1 - \sum_{i=1}^{\rho} y_i\Bigr) = y_\rho + \lambda = x_\rho > 0.$$
• For $j < \rho$, we have

$$\begin{aligned} y_j + \frac{1}{j}\Bigl(1 - \sum_{i=1}^{j} y_i\Bigr) &= \frac{1}{j}\Bigl(j y_j + 1 - \sum_{i=1}^{j} y_i\Bigr) = \frac{1}{j}\Bigl(j y_j + \sum_{i=j+1}^{\rho} y_i + 1 - \sum_{i=1}^{\rho} y_i\Bigr)\\ &= \frac{1}{j}\Bigl(j y_j + \sum_{i=j+1}^{\rho} y_i + \rho\lambda\Bigr) = \frac{1}{j}\Bigl(j(y_j + \lambda) + \sum_{i=j+1}^{\rho} (y_i + \lambda)\Bigr). \end{aligned}$$

Since $y_i + \lambda > 0$ for $i = 1,\dots,\rho$, we have $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr) > 0$.

• For $j > \rho$, we have

$$\begin{aligned} y_j + \frac{1}{j}\Bigl(1 - \sum_{i=1}^{j} y_i\Bigr) &= \frac{1}{j}\Bigl(j y_j + 1 - \sum_{i=1}^{j} y_i\Bigr) = \frac{1}{j}\Bigl(j y_j + 1 - \sum_{i=1}^{\rho} y_i - \sum_{i=\rho+1}^{j} y_i\Bigr)\\ &= \frac{1}{j}\Bigl(j y_j + \rho\lambda - \sum_{i=\rho+1}^{j} y_i\Bigr) = \frac{1}{j}\Bigl(\rho(y_j + \lambda) + \sum_{i=\rho+1}^{j} (y_j - y_i)\Bigr). \end{aligned}$$

Notice that $y_j + \lambda \le 0$ for $j > \rho$, and $y_j - y_i \le 0$ for $i \le j$ since $y$ is sorted; therefore $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr) \le 0$. ∎
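Theorem 1 is also easy to check numerically: over random inputs, the test is positive exactly for $j = 1,\dots,\rho$, and $\rho$ equals the number of positive components of the solution. A small NumPy sketch (ours, for illustration):

```python
import numpy as np

def rho_from_tests(y):
    """Return (rho, x): rho via the tests of Theorem 1, x via Algorithm 1."""
    u = np.sort(y)[::-1]
    j = np.arange(1, y.size + 1)
    tests = u + (1.0 - np.cumsum(u)) / j > 0
    rho = np.nonzero(tests)[0][-1] + 1
    lam = (1.0 - np.cumsum(u)[rho - 1]) / rho
    x = np.maximum(y + lam, 0.0)
    # Theorem 1: tests positive exactly for j <= rho, and rho equals
    # the number of positive components of the solution x.
    assert tests[:rho].all() and not tests[rho:].any()
    assert rho == np.sum(x > 1e-12)
    return rho, x

rng = np.random.default_rng(0)
for _ in range(200):
    rho, x = rho_from_tests(rng.normal(size=rng.integers(2, 12)))
    assert abs(x.sum() - 1.0) < 1e-9 and (x >= 0).all()
```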

### Remarks

1. We denote $\lambda_j = \frac{1}{j}\bigl(1 - \sum_{i=1}^{j} y_i\bigr)$. At the $j$-th test, $\lambda_j$ can be considered as a guess of the true $\lambda$ (indeed, $\lambda_\rho = \lambda$). If we use this guess to compute a tentative solution $\tilde{x}$ with $\tilde{x}_i = y_i + \lambda_j$, then it is easy to see that $\tilde{x}_i > 0$ for $i = 1,\dots,j$, and $\sum_{i=1}^{j} \tilde{x}_i = 1$. In other words, the first $j$ components of $\tilde{x}$ are positive and sum to 1. If we find $\tilde{x}_{j+1} \le 0$ (or $j = D$), then we know we have found the optimal solution and $\rho = j$, because the thresholded $\tilde{x}$ satisfies all KKT conditions.

2. To extend the algorithm to a simplex with a different scale $a$, i.e., the constraint $x^\top\mathbf{1} = a$ for $a > 0$, replace the terms $1 - \sum_{i=1}^{j} u_i$ with $a - \sum_{i=1}^{j} u_i$ in Algorithm 1.
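Following Remark 2, a NumPy sketch of the scaled variant (illustrative; the name `simplex_proj_scaled` is ours), which projects onto $\{x:\ x^\top\mathbf{1} = a,\ x \ge 0\}$:

```python
import numpy as np

def simplex_proj_scaled(y, a=1.0):
    """Project y onto {x : sum(x) = a, x >= 0} for a > 0 (Remark 2)."""
    D = y.shape[0]
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, D + 1)
    # identical to Algorithm 1, with "1 - sum" replaced by "a - sum"
    rho = np.nonzero(u + (a - css) / j > 0)[0][-1] + 1
    lam = (a - css[rho - 1]) / rho
    return np.maximum(y + lam, 0.0)
```

With `a=1.0` this reduces to the projection onto the probability simplex.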

## 4 Matlab code

The following vectorized Matlab code implements Algorithm 1. It projects each row vector of the $N\times D$ matrix $Y$ onto the probability simplex in $D$ dimensions.

```matlab
function X = SimplexProj(Y)
% Project each row of the N x D matrix Y onto the probability simplex.
[N,D] = size(Y);
X = sort(Y,2,'descend');                          % sort each row in decreasing order
Xtmp = (cumsum(X,2)-1)*diag(sparse(1./(1:D)));    % Xtmp(n,j) = -lambda_j for row n
X = max(bsxfun(@minus,Y,Xtmp(sub2ind([N,D],(1:N)',sum(X>Xtmp,2)))),0);
```
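For readers outside Matlab, the same vectorized computation can be sketched in NumPy (an illustrative translation, not from the original paper):

```python
import numpy as np

def simplex_proj_rows(Y):
    """Project each row of an N x D matrix Y onto the probability simplex."""
    N, D = Y.shape
    U = -np.sort(-Y, axis=1)                                  # rows sorted in decreasing order
    T = (np.cumsum(U, axis=1) - 1.0) / np.arange(1, D + 1)    # T[n, j-1] = -lambda_j for row n
    rho = np.sum(U > T, axis=1)                               # number of positive tests per row
    lam = -T[np.arange(N), rho - 1]                           # lambda for each row
    return np.maximum(Y + lam[:, None], 0.0)
```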


## 5 An application: Laplacian K-modes clustering

Consider the Laplacian $K$-modes clustering algorithm of Wang and Carreira-Perpiñán (2013) (which is an extension of the $K$-modes algorithm of Carreira-Perpiñán and Wang, 2013a). Given a dataset $x_1,\dots,x_N\in\mathbb{R}^d$ and suitably defined affinity values $w_{nm}$ between each pair of points $x_n$ and $x_m$ (for example, Gaussian), we define the objective function

$$\begin{aligned} \min_{Z,C}\ & \frac{\lambda}{2}\sum_{n,m=1}^{N} w_{nm}\|z_n - z_m\|^2 - \sum_{n=1}^{N}\sum_{k=1}^{K} z_{nk}\, G\bigl(\|(x_n - c_k)/\sigma\|^2\bigr) && \text{(3a)}\\ \text{s.t.}\ & \sum_{k=1}^{K} z_{nk} = 1, \quad n = 1,\dots,N && \text{(3b)}\\ & z_{nk} \ge 0, \quad n = 1,\dots,N,\ k = 1,\dots,K. && \text{(3c)} \end{aligned}$$

$Z$ is an $N\times K$ matrix with rows $z_1^\top,\dots,z_N^\top$, where $z_n\in\mathbb{R}^K$ contains the soft assignments of point $x_n$ to the clusters $k = 1,\dots,K$, and $c_1,\dots,c_K$ are modes of the kernel density estimates defined for each cluster (where $G$ gives a Gaussian). The problem of projection on the simplex appears in the training problem, i.e., in finding a (local) minimum of (3), and in the out-of-sample problem, i.e., in assigning a new point to the clusters.

### Training

The optimization of (3) is done by alternating minimization over $Z$ and $C$. For fixed $C$, the problem over $Z$ is a quadratic program of $NK$ variables:

$$\begin{aligned} \min_{Z}\ & \lambda\,\mathrm{tr}(Z^\top L Z) - \mathrm{tr}(B^\top Z) && \text{(4a)}\\ \text{s.t.}\ & Z\mathbf{1}_K = \mathbf{1}_N && \text{(4b)}\\ & Z \ge 0, && \text{(4c)} \end{aligned}$$

where $\lambda > 0$ is a trade-off parameter, $B$ is the $N\times K$ matrix with entries $b_{nk} = G(\|(x_n - c_k)/\sigma\|^2)$, and $L$ is the graph Laplacian computed from the pairwise affinities $w_{nm}$. The problem is convex since $L$ is positive semidefinite. One simple and quite efficient way to solve it is to use an (accelerated) gradient projection algorithm. The basic gradient projection algorithm iteratively takes a step in the negative gradient from the current point and projects the new point onto the constraint set. In our case, this projection is simple: it separates over each row of $Z$ (i.e., the soft assignment $z_n$ of each data point), and corresponds to a projection on the probability simplex of a $K$-dimensional row vector, i.e., the problem (1).
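As a concrete illustration, here is a minimal NumPy sketch of the basic (non-accelerated) gradient projection iteration for problem (4), assuming the Laplacian `L`, the matrix `B`, and the trade-off `lam` are given; the function names and the Lipschitz-based step size are our choices, not the paper's:

```python
import numpy as np

def proj_rows(Y):
    """Project each row of Y onto the probability simplex (Algorithm 1, vectorized)."""
    N, D = Y.shape
    U = -np.sort(-Y, axis=1)
    T = (np.cumsum(U, axis=1) - 1.0) / np.arange(1, D + 1)
    rho = np.sum(U > T, axis=1)
    lam = -T[np.arange(N), rho - 1]
    return np.maximum(Y + lam[:, None], 0.0)

def grad_proj_assignments(L, B, lam, steps=300):
    """Gradient projection for min lam*tr(Z'LZ) - tr(B'Z), rows of Z on the simplex."""
    N, K = B.shape
    Z = np.full((N, K), 1.0 / K)                   # start from uniform assignments
    eta = 1.0 / (2.0 * lam * np.linalg.norm(L, 2) + 1e-12)  # step from the gradient's Lipschitz constant
    for _ in range(steps):
        G = 2.0 * lam * (L @ Z) - B                # gradient of the objective (4a)
        Z = proj_rows(Z - eta * G)                 # project each row back onto the simplex
    return Z
```

An accelerated (Nesterov-type) variant only changes where the gradient is evaluated; the per-iteration projection is identical.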

### Out-of-sample mapping

Given a new, test point $x\in\mathbb{R}^d$, we wish to assign it to the clusters found during training. We are given the affinity vectors $w = (w_n)$ and $g = (g_k)$, where $w_n$ is the affinity between $x$ and $x_n$, $n = 1,\dots,N$, and $g_k = G(\|(x - c_k)/\sigma\|^2)$, $k = 1,\dots,K$, respectively. A natural and efficient way to define an out-of-sample mapping is to solve a problem of the form (3) with a dataset consisting of the original training set augmented with $x$, but keeping $Z$ and $C$ fixed to the values obtained during training (this avoids having to solve for all points again). Hence, the only free parameter is the assignment vector $z\in\mathbb{R}^K$ for the new point $x$. After dropping constant terms, the optimization problem (3) reduces to the following quadratic program over $K$ variables:

$$\begin{aligned} \min_{z}\ & \tfrac{1}{2}\|z - (\bar{z} + \gamma g)\|^2 && \text{(5a)}\\ \text{s.t.}\ & z^\top\mathbf{1}_K = 1 && \text{(5b)}\\ & z \ge 0 && \text{(5c)} \end{aligned}$$

where $\gamma = 1/\bigl(2\lambda\sum_{n=1}^{N} w_n\bigr)$ and
$$\bar{z} = \frac{\sum_{n=1}^{N} w_n z_n}{\sum_{m=1}^{N} w_m}$$
is a weighted average of the training points' assignments, so $\bar{z} + \gamma g$ is itself an average between this and the point-to-centroid affinities. Thus, the solution is the projection of the $K$-dimensional vector $\bar{z} + \gamma g$ onto the probability simplex, i.e., the problem (1) again.
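Once $\bar{z}$, $g$, and $\gamma$ are available, the out-of-sample step is just one more call to the simplex projection; a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def proj_simplex(v):
    """Projection onto the probability simplex (Algorithm 1)."""
    u = np.sort(v)[::-1]
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - np.cumsum(u)) / j > 0)[0][-1] + 1
    lam = (1.0 - np.cumsum(u)[rho - 1]) / rho
    return np.maximum(v + lam, 0.0)

def out_of_sample_assignment(zbar, g, gamma):
    """Solve problem (5): project zbar + gamma*g onto the simplex."""
    return proj_simplex(zbar + gamma * g)
```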

### LASS

In general, the optimization problem of eq. (4) is called the Laplacian assignment model (LASS) by Carreira-Perpiñán and Wang (2013b). Here, we consider $N$ items and $K$ categories and want to learn a soft assignment of each item to each category, given an $N\times N$ item-item graph Laplacian matrix $L$ and an $N\times K$ item-category similarity matrix $B$. The training is as in eq. (4) and the out-of-sample mapping as in eq. (5), and the problem of projection on the simplex appears in both cases.

## References

• Boyd and Vandenberghe (2004) S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, U.K., 2004.
• Brucker (1984) P. Brucker. An algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, Aug. 1984.
• Carreira-Perpiñán and Wang (2013a) M. Á. Carreira-Perpiñán and W. Wang. The $K$-modes algorithm for clustering. Unpublished manuscript, arXiv:1304.6478, Apr. 23 2013a.
• Carreira-Perpiñán and Wang (2013b) M. Á. Carreira-Perpiñán and W. Wang. A simple assignment model with Laplacian smoothing. Unpublished manuscript, 2013b.
• Chen and Ye (2011) Y. Chen and X. Ye. Projection onto a simplex. Unpublished manuscript, arXiv:1101.6081, Feb. 10 2011.
• Duchi et al. (2008) J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the $\ell_1$-ball for learning in high dimensions. In A. McCallum and S. Roweis, editors, Proc. of the 25th Int. Conf. Machine Learning (ICML'08), pages 272–279, Helsinki, Finland, July 5–9 2008.
• Michelot (1986) C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of $\mathbb{R}^n$. J. Optimization Theory and Applications, 50(1):195–200, July 1986.
• Nocedal and Wright (2006) J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag, New York, second edition, 2006.
• Pardalos and Kovoor (1990) P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Math. Prog., 46(1–3):321–328, Jan. 1990.
• Shalev-Shwartz and Singer (2006) S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto polyhedra. J. Machine Learning Research, 7:1567–1599, 2006.
• Wang and Carreira-Perpiñán (2013) W. Wang and M. Á. Carreira-Perpiñán. Laplacian $K$-modes clustering. Unpublished manuscript, 2013.