1 Projection onto the probability simplex
Consider the problem of computing the Euclidean projection of a point $\mathbf{y} \in \mathbb{R}^D$ onto the probability simplex, which is defined by the following optimization problem:
(1a)  $\min_{\mathbf{x} \in \mathbb{R}^D}\ \frac{1}{2}\|\mathbf{x} - \mathbf{y}\|^2$
s.t.  (1b)  $\mathbf{x}^T \mathbf{1} = 1$
(1c)  $\mathbf{x} \ge \mathbf{0}.$
This is a quadratic program and the objective function is strictly convex, so there is a unique solution, which, with a slight abuse of notation, we denote by $\mathbf{x}(\mathbf{y})$.
2 Algorithm
The following algorithm finds the solution $\mathbf{x}$ to the problem:

Algorithm 1
Input: $\mathbf{y} \in \mathbb{R}^D$.
1. Sort $\mathbf{y}$ into $\mathbf{u}$: $u_1 \ge u_2 \ge \dots \ge u_D$.
2. Find $\rho = \max\bigl\{1 \le j \le D:\ u_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j u_i\bigr) > 0\bigr\}$.
3. Define $\lambda = \frac{1}{\rho}\bigl(1 - \sum_{i=1}^\rho u_i\bigr)$.
4. Output $\mathbf{x}$ with $x_i = \max(y_i + \lambda,\ 0)$, $i = 1,\dots,D$.
The complexity of the algorithm is dominated by the cost of sorting the components of $\mathbf{y}$, i.e., $\mathcal{O}(D \log D)$. The algorithm is not iterative and identifies the active set exactly after at most $D$ steps. It can be easily implemented (see section 4).
The algorithm has the following geometric interpretation. The solution can be written as $x_i = \max(y_i + \lambda,\ 0)$, $i = 1,\dots,D$, where $\lambda$ is chosen such that $\sum_{i=1}^D x_i = 1$. Place the values $y_1,\dots,y_D$ as points on the X axis. Then the solution is given by a rigid shift of the points by $\lambda$ such that the points that remain to the right of the Y axis sum to 1.
The pseudocode above appears in Duchi et al. (2008),¹ although earlier papers (Brucker, 1984; Pardalos and Kovoor, 1990) solved the problem in greater generality.

¹ http://www.cs.berkeley.edu/~jduchi/projects/DuchiShSiCh08.html
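The steps of the algorithm can be sketched compactly in NumPy (an illustrative implementation of ours, not code from the paper; the function name and variables are our own):

```python
import numpy as np

def simplex_proj(y):
    """Euclidean projection of y onto the probability simplex (Algorithm 1)."""
    D = y.size
    u = np.sort(y)[::-1]                           # u_1 >= u_2 >= ... >= u_D
    css = np.cumsum(u)                             # partial sums of the sorted y
    tests = u + (1.0 - css) / np.arange(1, D + 1)  # u_j + (1 - sum_{i<=j} u_i)/j
    rho = np.nonzero(tests > 0)[0][-1] + 1         # last positive test gives rho
    lam = (1.0 - css[rho - 1]) / rho
    return np.maximum(y + lam, 0.0)
```

By Theorem 1 below, the tests are positive exactly for $j \le \rho$, so taking the index of the last positive test recovers $\rho$.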
Other algorithms
The problem (1) can be solved in many other ways, for example by particularizing QP algorithms such as active-set methods, gradient-projection methods or interior-point methods. It can also be solved by alternating projection onto the two constraints in a finite number of steps (Michelot, 1986). Another way (Boyd and Vandenberghe, 2004, Exercise 4.1, solution available at http://see.stanford.edu/materials/lsocoee364b/hw4sol.pdf) is to construct a Lagrangian formulation by dualizing the equality constraint and then solve a 1D nonsmooth optimization over the Lagrange multiplier. Algorithm 1 has the advantage of being very simple, not iterative, and identifying exactly the active set at the solution after at most $D$ steps (each of cost $\mathcal{O}(1)$) after sorting.
3 A simple proof
A proof of correctness of Algorithm 1 can be found in Shalev-Shwartz and Singer (2006) and Chen and Ye (2011), but we offer a simpler proof which involves only the KKT theorem.
We apply the standard KKT conditions for the problem (Nocedal and Wright, 2006). The Lagrangian of the problem is

$L(\mathbf{x}, \lambda, \boldsymbol{\beta}) = \frac{1}{2}\|\mathbf{x} - \mathbf{y}\|^2 - \lambda(\mathbf{x}^T\mathbf{1} - 1) - \boldsymbol{\beta}^T\mathbf{x},$

where $\lambda$ and $\boldsymbol{\beta} = (\beta_1,\dots,\beta_D)^T$ are the Lagrange multipliers for the equality and inequality constraints, respectively. At the optimal solution $\mathbf{x}$ the following KKT conditions hold:
(2a)  $x_i - y_i - \lambda - \beta_i = 0, \quad i = 1,\dots,D$
(2b)  $x_i \ge 0, \quad i = 1,\dots,D$
(2c)  $\beta_i \ge 0, \quad i = 1,\dots,D$
(2d)  $x_i \beta_i = 0, \quad i = 1,\dots,D$
(2e)  $\sum_{i=1}^D x_i = 1.$
From the complementarity condition (2d), it is clear that if $x_i > 0$, we must have $\beta_i = 0$ and $x_i = y_i + \lambda > 0$; if $x_i = 0$, we must have $\beta_i = -y_i - \lambda \ge 0$, whence $y_i + \lambda \le 0$. Obviously, the components of the optimal solution $\mathbf{x}$ that are zeros correspond to the smaller components of $\mathbf{y}$. Without loss of generality, we assume the components of $\mathbf{y}$ are sorted and $\mathbf{x}$ uses the same ordering, i.e.,

$y_1 \ge y_2 \ge \dots \ge y_D, \qquad x_1 \ge x_2 \ge \dots \ge x_D,$

and that $x_1,\dots,x_\rho > 0$, $x_{\rho+1} = \dots = x_D = 0$. In other words, $\rho$ is the number of positive components in the solution $\mathbf{x}$. Now we apply the last condition (2e) and have

$\sum_{i=1}^\rho x_i = \sum_{i=1}^\rho (y_i + \lambda) = 1,$

which gives $\lambda = \frac{1}{\rho}\bigl(1 - \sum_{i=1}^\rho y_i\bigr)$. Hence $\rho$ is the key to the solution. Once we know $\rho$ (there are only $D$ possible values of it), we can compute $\lambda$, and the optimal solution is obtained by just adding $\lambda$ to each component of $\mathbf{y}$ and thresholding as in the end of Algorithm 1. (It is easy to check that this solution indeed satisfies all KKT conditions.) In the algorithm, we carry out the test for each $j = 1,\dots,D$ of whether $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr) > 0$. We now prove that the number of times this test turns out positive is exactly $\rho$. The following theorem is essentially Lemma 3 of Shalev-Shwartz and Singer (2006).
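As a concrete illustration (a small example of our own, not from the original text), take $\mathbf{y} = (0.5,\ 1.2,\ -0.3)^T$. Sorting gives $\mathbf{u} = (1.2,\ 0.5,\ -0.3)^T$, and the tests evaluate to

```latex
\begin{aligned}
j=1:&\quad 1.2 + \tfrac{1}{1}(1 - 1.2) = 1.0 > 0,\\
j=2:&\quad 0.5 + \tfrac{1}{2}(1 - 1.7) = 0.15 > 0,\\
j=3:&\quad -0.3 + \tfrac{1}{3}(1 - 1.4) \approx -0.43 \le 0,
\end{aligned}
```

so $\rho = 2$, $\lambda = \frac{1}{2}(1 - 1.7) = -0.35$, and $\mathbf{x} = \max(\mathbf{y} + \lambda,\ \mathbf{0}) = (0.15,\ 0.85,\ 0)^T$, which indeed lies on the simplex.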
Theorem 1.
Let $\rho$ be the number of positive components in the solution $\mathbf{x}$, then

$\rho = \max\Bigl\{1 \le j \le D:\ y_j + \frac{1}{j}\Bigl(1 - \sum_{i=1}^j y_i\Bigr) > 0\Bigr\},$

where the components of $\mathbf{y}$ are assumed sorted in decreasing order.
Proof.
Recall from the KKT conditions (2) that $x_j = y_j + \lambda > 0$ for $j = 1,\dots,\rho$, and $x_j = 0$, i.e., $y_j + \lambda \le 0$, for $j = \rho+1,\dots,D$. In the sequel, we show that the test will continue to be positive until $j = \rho$ and then stay nonpositive afterwards, i.e., $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr) > 0$ for $j \le \rho$, and $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr) \le 0$ for $j > \rho$.

For $j = \rho$, we have

$y_\rho + \frac{1}{\rho}\Bigl(1 - \sum_{i=1}^\rho y_i\Bigr) = y_\rho + \lambda = x_\rho > 0.$

For $j < \rho$, we have

$y_j + \frac{1}{j}\Bigl(1 - \sum_{i=1}^j y_i\Bigr) = y_j + \frac{1}{j}\Bigl(\sum_{i=1}^\rho (y_i + \lambda) - \sum_{i=1}^j y_i\Bigr) = y_j + \lambda + \frac{1}{j}\sum_{i=j+1}^\rho (y_i + \lambda) = x_j + \frac{1}{j}\sum_{i=j+1}^\rho x_i.$

Since $x_i > 0$ for $i = 1,\dots,\rho$, we have $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr) \ge x_j > 0$.

For $j > \rho$, we have

$y_j + \frac{1}{j}\Bigl(1 - \sum_{i=1}^j y_i\Bigr) = y_j + \frac{1}{j}\Bigl(\sum_{i=1}^\rho (y_i + \lambda) - \sum_{i=1}^j y_i\Bigr) = y_j + \frac{1}{j}\Bigl(\rho\lambda - \sum_{i=\rho+1}^j y_i\Bigr) = \frac{1}{j}\Bigl(\rho(y_j + \lambda) + \sum_{i=\rho+1}^j (y_j - y_i)\Bigr).$

Notice $y_j + \lambda \le 0$ for $j > \rho$, and $y_j \le y_i$ for $i \le j$ since $\mathbf{y}$ is sorted, therefore $y_j + \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr) \le 0$.
∎
Remarks

We denote $\lambda_j = \frac{1}{j}\bigl(1 - \sum_{i=1}^j y_i\bigr)$. At the $j$th test, $\lambda_j$ can be considered as a guess of the true $\lambda$ (indeed, $\lambda_\rho = \lambda$). If we use this guess to compute a tentative solution $\tilde{\mathbf{x}}$ where $\tilde{x}_i = y_i + \lambda_j$ for $i = 1,\dots,j$ and $\tilde{x}_i = 0$ for $i = j+1,\dots,D$, then it is easy to see that $\tilde{x}_i \ge \tilde{x}_j$ for $i \le j$, and $\sum_{i=1}^j \tilde{x}_i = 1$. In other words, if the $j$th test is positive, the first $j$ components of $\tilde{\mathbf{x}}$ are positive and sum to 1. If we find $\tilde{x}_j > 0$ and $y_{j+1} + \lambda_j \le 0$ (or $j = D$), then we know we have found the optimal solution and $\rho = j$, $\lambda = \lambda_j$, because $\tilde{\mathbf{x}}$ satisfies all KKT conditions.

To extend the algorithm to a simplex with a different scale, i.e., $\{\mathbf{x}:\ \sum_{i=1}^D x_i = a,\ \mathbf{x} \ge \mathbf{0}\}$ for $a > 0$, replace the terms $1 - \sum_{i=1}^j u_i$ with $a - \sum_{i=1}^j u_i$ in Algorithm 1.
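The scaled variant amounts to a one-symbol change; a sketch in NumPy (our own illustrative code, with names of our choosing):

```python
import numpy as np

def simplex_proj_scaled(y, a=1.0):
    """Project y onto {x : sum(x) = a, x >= 0}; a = 1 recovers Algorithm 1."""
    D = y.size
    u = np.sort(y)[::-1]                     # sort in decreasing order
    css = np.cumsum(u)
    # the only change from the unit simplex: 1 - css becomes a - css
    rho = np.nonzero(u + (a - css) / np.arange(1, D + 1) > 0)[0][-1] + 1
    lam = (a - css[rho - 1]) / rho
    return np.maximum(y + lam, 0.0)
```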
4 Matlab code
The following vectorized Matlab code implements Algorithm 1. It projects each row vector in the $N \times D$ matrix $\mathbf{Y}$ onto the probability simplex in $D$ dimensions.
function X = SimplexProj(Y)
  [N,D] = size(Y);
  X = sort(Y,2,'descend');
  Xtmp = (cumsum(X,2)-1)*diag(sparse(1./(1:D)));
  X = max(bsxfun(@minus,Y,Xtmp(sub2ind([N,D],(1:N)',sum(X>Xtmp,2)))),0);
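For readers working in Python, the same vectorized computation can be ported to NumPy as follows (our own translation, not code from the paper):

```python
import numpy as np

def SimplexProjRows(Y):
    """Project each row of the N x D matrix Y onto the probability simplex."""
    N, D = Y.shape
    U = -np.sort(-Y, axis=1)                                   # sort rows descending
    tmp = (np.cumsum(U, axis=1) - 1.0) / np.arange(1, D + 1)   # -lambda_j for each j
    rho = np.count_nonzero(U > tmp, axis=1) - 1                # 0-based index of rho
    lam = -tmp[np.arange(N), rho]                              # per-row shift lambda
    return np.maximum(Y + lam[:, None], 0.0)
```

As in the Matlab version, `tmp` holds $-\lambda_j$ for each test $j$, and counting the positive tests per row gives $\rho$.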
5 An application: Laplacian $K$-modes clustering
Consider the Laplacian $K$-modes clustering algorithm of Wang and Carreira-Perpiñán (2013) (which is an extension of the $K$-modes algorithm of Carreira-Perpiñán and Wang, 2013a). Given a dataset $\mathbf{x}_1,\dots,\mathbf{x}_N \in \mathbb{R}^D$ and suitably defined affinity values $w_{nm} \ge 0$ between each pair of points $\mathbf{x}_n$ and $\mathbf{x}_m$ (for example, Gaussian), we define the objective function
(3a)  $\min_{\mathbf{Z},\mathbf{C}}\ \frac{\lambda}{2} \sum_{n,m=1}^N w_{nm} \|\mathbf{z}_n - \mathbf{z}_m\|^2 - \sum_{n=1}^N \sum_{k=1}^K z_{nk}\, G\bigl(\|(\mathbf{x}_n - \mathbf{c}_k)/\sigma\|^2\bigr)$
s.t.  (3b)  $\mathbf{Z}\mathbf{1}_K = \mathbf{1}_N$
(3c)  $\mathbf{Z} \ge \mathbf{0}.$
$\mathbf{Z}$ is a matrix of $N \times K$, with $\mathbf{z}_n^T$ as its $n$th row, where $z_{nk} \in [0,1]$ are the soft assignments of point $\mathbf{x}_n$ to clusters $k = 1,\dots,K$, and $\mathbf{c}_1,\dots,\mathbf{c}_K$ are modes of the kernel density estimates defined for each cluster (where $G$ gives a Gaussian). The problem of projection on the simplex appears in the training problem, i.e., in finding a (local) minimum of (3), and in the out-of-sample problem, i.e., in assigning a new point to the clusters.

Training
The optimization of (3) is done by alternating minimization over $\mathbf{Z}$ and $\mathbf{C}$. For fixed $\mathbf{C}$, the problem over $\mathbf{Z}$ is a quadratic program of $NK$ variables:
(4a)  $\min_{\mathbf{Z}}\ \lambda\, \mathrm{tr}\bigl(\mathbf{Z}^T \mathbf{L} \mathbf{Z}\bigr) - \mathrm{tr}\bigl(\mathbf{B}^T \mathbf{Z}\bigr)$
s.t.  (4b)  $\mathbf{Z}\mathbf{1}_K = \mathbf{1}_N$
(4c)  $\mathbf{Z} \ge \mathbf{0},$
where $\mathbf{Z} = (z_{nk})$ is of $N \times K$, $\mathbf{B} = (b_{nk})$ with $b_{nk} = G\bigl(\|(\mathbf{x}_n - \mathbf{c}_k)/\sigma\|^2\bigr)$, and $\mathbf{L} = \mathbf{D} - \mathbf{W}$ is the graph Laplacian computed from the pairwise affinities $w_{nm}$, with degree matrix $\mathbf{D} = \mathrm{diag}\bigl(\sum_{m=1}^N w_{nm}\bigr)$. The problem is convex since $\mathbf{L}$ is positive semidefinite. One simple and quite efficient way to solve it is to use an (accelerated) gradient projection algorithm. The basic gradient projection algorithm iteratively takes a step in the negative gradient from the current point and projects the new point onto the constraint set. In our case, this projection is simple: it separates over each row of $\mathbf{Z}$ (i.e., the soft assignment $\mathbf{z}_n$ of each data point), and corresponds to a projection on the probability simplex of a $K$-dimensional row vector, i.e., the problem (1).
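One step of this basic gradient-projection scheme might be sketched as follows (illustrative code of our own; the step size `eta` is ours, and the gradient $2\lambda \mathbf{L}\mathbf{Z} - \mathbf{B}$ follows the quadratic objective of eq. (4) as reconstructed here):

```python
import numpy as np

def simplex_proj_rows(Y):
    # project each row of Y onto the probability simplex (Algorithm 1, vectorized)
    N, D = Y.shape
    U = -np.sort(-Y, axis=1)
    tmp = (np.cumsum(U, axis=1) - 1.0) / np.arange(1, D + 1)
    rho = np.count_nonzero(U > tmp, axis=1) - 1
    return np.maximum(Y - tmp[np.arange(N), rho][:, None], 0.0)

def gradient_projection_step(Z, L, B, lam, eta):
    """One step for min_Z lam*tr(Z'LZ) - tr(B'Z), each row of Z on the simplex.

    Z: N x K assignments, L: N x N graph Laplacian, B: N x K similarities."""
    grad = 2.0 * lam * (L @ Z) - B          # gradient of the quadratic objective
    return simplex_proj_rows(Z - eta * grad)
```

Each iterate stays feasible, since the per-row projection restores the simplex constraints after every gradient step.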
Out-of-sample mapping
Given a new, test point $\mathbf{x} \in \mathbb{R}^D$, we wish to assign it to the clusters found during training. We are given the affinity vectors $\mathbf{w} = (w_n)$ and $\mathbf{g} = (g_k)$, where $w_n$ is the affinity between $\mathbf{x}$ and $\mathbf{x}_n$, $n = 1,\dots,N$, and $g_k = G\bigl(\|(\mathbf{x} - \mathbf{c}_k)/\sigma\|^2\bigr)$, $k = 1,\dots,K$, respectively. A natural and efficient way to define an out-of-sample mapping $\mathbf{z}(\mathbf{x})$ is to solve a problem of the form (3) with a dataset consisting of the original training set augmented with $\mathbf{x}$, but keeping $\mathbf{Z}$ and $\mathbf{C}$ fixed to the values obtained during training (this avoids having to solve for all points again). Hence, the only free parameter is the assignment vector $\mathbf{z}$ for the new point $\mathbf{x}$. After dropping constant terms, the optimization problem (3) reduces to the following quadratic program over $K$ variables:
(5a)  $\min_{\mathbf{z}}\ \frac{1}{2}\|\mathbf{z} - \mathbf{v}\|^2$
s.t.  (5b)  $\mathbf{z}^T\mathbf{1}_K = 1$
(5c)  $\mathbf{z} \ge \mathbf{0},$
where $\mathbf{v} = \bar{\mathbf{z}} + \frac{1}{2\lambda \sum_{n=1}^N w_n}\,\mathbf{g}$ and

$\bar{\mathbf{z}} = \frac{\sum_{n=1}^N w_n \mathbf{z}_n}{\sum_{n=1}^N w_n}$

is a weighted average of the training points' assignments, and so $\mathbf{v}$ is itself an average between this and the point-to-centroid affinities. Thus, the solution is the projection of the $K$-dimensional vector $\mathbf{v}$ onto the probability simplex, i.e., the problem (1) again.
LASS
In general, the optimization problem of eq. (4) is called the Laplacian assignment model (LASS) by Carreira-Perpiñán and Wang (2013b). Here, we consider $N$ items and $K$ categories and want to learn a soft assignment $z_{nk}$ of each item to each category, given an item-item graph Laplacian matrix $\mathbf{L}$ of $N \times N$ and an item-category similarity matrix $\mathbf{B}$ of $N \times K$. The training is as in eq. (4) and the out-of-sample mapping as in eq. (5), and the problem of projection on the simplex appears in both cases.
References
 Boyd and Vandenberghe (2004) S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, U.K., 2004.
 Brucker (1984) P. Brucker. An algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, Aug. 1984.
 Carreira-Perpiñán and Wang (2013a) M. Á. Carreira-Perpiñán and W. Wang. The $K$-modes algorithm for clustering. Unpublished manuscript, arXiv:1304.6478, Apr. 23 2013a.
 Carreira-Perpiñán and Wang (2013b) M. Á. Carreira-Perpiñán and W. Wang. A simple assignment model with Laplacian smoothing. Unpublished manuscript, 2013b.
 Chen and Ye (2011) Y. Chen and X. Ye. Projection onto a simplex. Unpublished manuscript, arXiv:1101.6081, Feb. 10 2011.

 Duchi et al. (2008) J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the $\ell_1$-ball for learning in high dimensions. In A. McCallum and S. Roweis, editors, Proc. of the 25th Int. Conf. Machine Learning (ICML'08), pages 272–279, Helsinki, Finland, July 5–9 2008.
 Michelot (1986) C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of $\mathbb{R}^n$. J. Optimization Theory and Applications, 50(1):195–200, July 1986.
 Nocedal and Wright (2006) J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag, New York, second edition, 2006.
 Pardalos and Kovoor (1990) P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Math. Prog., 46(1–3):321–328, Jan. 1990.
 Shalev-Shwartz and Singer (2006) S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto polyhedra. J. Machine Learning Research, 7:1567–1599, 2006.
 Wang and Carreira-Perpiñán (2013) W. Wang and M. Á. Carreira-Perpiñán. Laplacian $K$-modes clustering. Unpublished manuscript, 2013.