An ADMM algorithm for solving a proximal bound-constrained quadratic program

12/29/2014 · Miguel Á. Carreira-Perpiñán

We consider a proximal operator given by a quadratic function subject to bound constraints and give an optimization algorithm for it using the alternating direction method of multipliers (ADMM). The algorithm is particularly efficient when solving a collection of proximal operators that share the same quadratic form, or when the quadratic program is the relaxation of a binary quadratic problem.


1 Introduction

We consider the convex proximal bound-constrained quadratic program (QP):

$$\min_{\mathbf{x}\in\mathbb{R}^D}\; f(\mathbf{x}) = \frac{\mu}{2}\|\mathbf{x}-\mathbf{v}\|^2 + \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} - \mathbf{b}^T\mathbf{x} \quad \text{s.t.} \quad \mathbf{l}\le\mathbf{x}\le\mathbf{u} \tag{1}$$

where $\mathbf{v},\mathbf{b},\mathbf{l},\mathbf{u}\in\mathbb{R}^D$, $\mu>0$, and the $D\times D$ matrix $\mathbf{A}$ is symmetric positive definite or semidefinite. The problem has the form of a convex proximal operator (Combettes and Pesquet, 2011; Moreau, 1962; Rockafellar, 1976) (with the function being proximated given by the quadratic $\frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x}-\mathbf{b}^T\mathbf{x}$ restricted to the box $[\mathbf{l},\mathbf{u}]$), and a unique solution. We will also be interested in solving a collection of such QPs that all have the same matrix $\mathbf{A}$ but different vectors $\mathbf{v}$, $\mathbf{b}$, $\mathbf{l}$ and $\mathbf{u}$.

It is possible to solve problem (1) in different ways, but we want to take advantage of the structure of our problem, namely the existence of the strongly convex term $\frac{\mu}{2}\|\mathbf{x}-\mathbf{v}\|^2$, and the fact that the problems have the same matrix $\mathbf{A}$. We describe here one algorithm that is very simple, has guaranteed convergence without line searches, and takes advantage of the structure of the problem. It is based on the alternating direction method of multipliers (ADMM), combined with a direct linear solver and caching the Cholesky factorization of $\mathbf{A}+\rho\mathbf{I}$.

Motivation

Problem (1) arises within a step in the binary hashing algorithm of Carreira-Perpiñán and Raziperchikolaei (2015). The step involves $N$ independent subproblems, one per data point. Intuitively, subproblem $n$ tries to map linearly a binary vector $\mathbf{z}_n$ onto a real vector $\mathbf{x}_n$ with minimal error, given a linear mapping whose rectangular matrix (call it $\mathbf{W}$) is common to all $N$ subproblems. Hence, each subproblem is of the form of (1), but where the quadratic part is a least-squares function $\frac{1}{2}\|\mathbf{x}_n-\mathbf{W}\mathbf{z}_n\|^2$ (so that $\mathbf{A}=\mathbf{W}^T\mathbf{W}$ is shared by all subproblems) and the variables $\mathbf{z}_n$ are binary. Relaxing the binary constraints to $\mathbf{z}_n\in[0,1]^D$ turns these subproblems into proximal bound-constrained QPs of the form of (1).

2 Solving a QP using ADMM

We briefly review how to solve a quadratic program (QP) using the alternating direction method of multipliers (ADMM), following Boyd et al. (2011). Consider the QP

$$\min_{\mathbf{x}\in\mathbb{R}^n}\; \frac{1}{2}\mathbf{x}^T\mathbf{P}\mathbf{x} + \mathbf{q}^T\mathbf{x} \tag{2}$$
$$\text{s.t.} \quad \mathbf{C}\mathbf{x}=\mathbf{d}, \quad \mathbf{x}\ge\mathbf{0} \tag{3}$$

where $\mathbf{P}$ is positive (semi)definite. To use ADMM, we first introduce new variables $\mathbf{z}\in\mathbb{R}^n$ so that we can replace the inequalities with an indicator function $g(\mathbf{z})$, which is zero in the nonnegative orthant and $+\infty$ otherwise. Then we write the problem as

$$\min_{\mathbf{x},\mathbf{z}}\; f(\mathbf{x}) + g(\mathbf{z}) \tag{4}$$
$$\text{s.t.} \quad \mathbf{x}-\mathbf{z}=\mathbf{0} \tag{5}$$

where

$$f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{P}\mathbf{x} + \mathbf{q}^T\mathbf{x}, \qquad \mathrm{dom}\,f = \{\mathbf{x}\colon\ \mathbf{C}\mathbf{x}=\mathbf{d}\},$$

is the original objective with its domain restricted to the equality constraint. The augmented Lagrangian is

$$\mathcal{L}_\rho(\mathbf{x},\mathbf{z},\boldsymbol{\lambda}) = f(\mathbf{x}) + g(\mathbf{z}) + \boldsymbol{\lambda}^T(\mathbf{x}-\mathbf{z}) + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}\|^2 \tag{6}$$

and the ADMM iteration has the form:

$$\mathbf{x}^{(k+1)} = \arg\min_{\mathbf{x}}\; \mathcal{L}_\rho(\mathbf{x},\mathbf{z}^{(k)},\boldsymbol{\lambda}^{(k)})$$
$$\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{z}}\; \mathcal{L}_\rho(\mathbf{x}^{(k+1)},\mathbf{z},\boldsymbol{\lambda}^{(k)})$$
$$\boldsymbol{\lambda}^{(k+1)} = \boldsymbol{\lambda}^{(k)} + \rho\,(\mathbf{x}^{(k+1)}-\mathbf{z}^{(k+1)})$$

where $\boldsymbol{\lambda}\in\mathbb{R}^n$ is the dual variable (the Lagrange multiplier estimates for the constraint $\mathbf{x}-\mathbf{z}=\mathbf{0}$), and the updates are applied in order and modify the variables immediately. Here, we use the scaled form of the ADMM iteration, which is simpler. It is obtained by combining the linear and quadratic terms, $\boldsymbol{\lambda}^T(\mathbf{x}-\mathbf{z}) + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}\|^2 = \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}+\mathbf{y}\|^2 - \frac{\rho}{2}\|\mathbf{y}\|^2$, and using a scaled dual variable $\mathbf{y}=\boldsymbol{\lambda}/\rho$:

$$\mathbf{x}^{(k+1)} = \arg\min_{\mathbf{x}}\; f(\mathbf{x}) + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}^{(k)}+\mathbf{y}^{(k)}\|^2$$
$$\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{z}}\; g(\mathbf{z}) + \frac{\rho}{2}\|\mathbf{x}^{(k+1)}-\mathbf{z}+\mathbf{y}^{(k)}\|^2$$
$$\mathbf{y}^{(k+1)} = \mathbf{y}^{(k)} + \mathbf{x}^{(k+1)} - \mathbf{z}^{(k+1)}.$$

Since $g$ is the indicator function for the nonnegative orthant, the solution of the $\mathbf{z}$-update is simply to threshold each entry of $\mathbf{x}^{(k+1)}+\mathbf{y}^{(k)}$ by taking its nonnegative part. Finally, the ADMM iteration is:

$$\mathbf{x}^{(k+1)} = \arg\min_{\mathbf{x}}\; f(\mathbf{x}) + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}^{(k)}+\mathbf{y}^{(k)}\|^2 \tag{7a}$$
$$\mathbf{z}^{(k+1)} = \big(\mathbf{x}^{(k+1)}+\mathbf{y}^{(k)}\big)^+ \tag{7b}$$
$$\mathbf{y}^{(k+1)} = \mathbf{y}^{(k)} + \mathbf{x}^{(k+1)} - \mathbf{z}^{(k+1)} \tag{7c}$$

where the updates are applied in order and modify the variables immediately, $(t)^+ = \max(t,0)$ applies elementwise, and $\|\cdot\|$ is the Euclidean norm. The penalty parameter $\rho>0$ is set by the user, and $\rho\mathbf{y}$ are the Lagrange multiplier estimates for the inequalities. The $\mathbf{x}$-update minimizes a quadratic objective, so its solution is given by a linear system. The ADMM iteration consists of very simple updates to the relevant variables, but its success crucially relies on being able to solve the $\mathbf{x}$-update efficiently. Convergence of the ADMM iteration (7) to the global minimum of problem (2) in value and to a feasible point is guaranteed for any $\rho>0$.
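To make iteration (7) concrete, here is a minimal Matlab sketch for the special case without equality constraints, so that the $\mathbf{x}$-update reduces to a single linear system with the matrix $\mathbf{P}+\rho\mathbf{I}$, whose Cholesky factor can be cached; the function name qp_admm_nn and the fixed iteration count are illustrative choices, not part of the original algorithm.

function [z,x,y] = qp_admm_nn(P,q,r,maxit)
% Minimal sketch of iteration (7) for: min (1/2)x'*P*x + q'*x s.t. x >= 0,
% assuming no equality constraints, so the x-update is one linear system.
n = length(q);
R = chol(P + r*eye(n)); Rt = R';     % cache Cholesky factor of P + r*I
z = zeros(n,1); y = zeros(n,1);
for k = 1:maxit
  x = R \ (Rt \ (r*(z-y) - q));      % x-update (7a): (P+rI)x = r(z-y) - q
  z = max(x + y, 0);                 % z-update (7b): nonnegative part
  y = y + x - z;                     % y-update (7c): scaled dual ascent
end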

3 Application to our QP

In order to apply ADMM to our QP (1), we make the identification $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} - \mathbf{b}^T\mathbf{x}$, defined over the entire $\mathbb{R}^D$, and $g(\mathbf{z}) = \frac{\mu}{2}\|\mathbf{z}-\mathbf{v}\|^2$, defined for $\mathbf{l}\le\mathbf{z}\le\mathbf{u}$, and want to minimize $f(\mathbf{x})+g(\mathbf{z})$ s.t. $\mathbf{x}=\mathbf{z}$. The augmented Lagrangian is

$$\mathcal{L}_\rho(\mathbf{x},\mathbf{z},\boldsymbol{\lambda}) = f(\mathbf{x}) + g(\mathbf{z}) + \boldsymbol{\lambda}^T(\mathbf{x}-\mathbf{z}) + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}\|^2$$

and the (scaled) ADMM iteration is:

$$\mathbf{x}^{(k+1)} = \arg\min_{\mathbf{x}}\; \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} - \mathbf{b}^T\mathbf{x} + \frac{\rho}{2}\|\mathbf{x}-\mathbf{z}^{(k)}+\mathbf{y}^{(k)}\|^2$$
$$\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{l}\le\mathbf{z}\le\mathbf{u}}\; \frac{\mu}{2}\|\mathbf{z}-\mathbf{v}\|^2 + \frac{\rho}{2}\|\mathbf{x}^{(k+1)}-\mathbf{z}+\mathbf{y}^{(k)}\|^2$$
$$\mathbf{y}^{(k+1)} = \mathbf{y}^{(k)} + \mathbf{x}^{(k+1)} - \mathbf{z}^{(k+1)}.$$

We now solve the steps over $\mathbf{x}$ and $\mathbf{z}$. The step over $\mathbf{x}$ minimizes an unconstrained strongly convex quadratic function, whose unique minimizer is given by a linear system. The step over $\mathbf{z}$ separates over each component of $\mathbf{z}$ because both the objective and the constraints are separable. For each component $z_d$, $d=1,\dots,D$, we have to minimize an upwards parabola defined in $[l_d,u_d]$, whose solution is the median of $l_d$, $u_d$ and the parabola vertex (see fig. 1). Finally, we obtain the following updates, which are iterated in order until convergence:

$$\mathbf{x}^{(k+1)} = (\mathbf{A}+\rho\mathbf{I})^{-1}\big(\mathbf{b} + \rho(\mathbf{z}^{(k)}-\mathbf{y}^{(k)})\big) \tag{8a}$$
$$\mathbf{z}^{(k+1)} = \operatorname{median}\Big(\mathbf{l},\; \frac{\mu\mathbf{v} + \rho(\mathbf{x}^{(k+1)}+\mathbf{y}^{(k)})}{\mu+\rho},\; \mathbf{u}\Big) \tag{8b}$$
$$\mathbf{y}^{(k+1)} = \mathbf{y}^{(k)} + \mathbf{x}^{(k+1)} - \mathbf{z}^{(k+1)} \tag{8c}$$

where the median applies elementwise to its vector arguments, $\mathbf{x}$ are the primal variables, $\mathbf{z}$ the auxiliary (consensus) variables, and $\mathbf{y}$ the scaled Lagrange multiplier estimates for the constraint $\mathbf{x}-\mathbf{z}=\mathbf{0}$. Moving the inequalities into the $\mathbf{z}$ subproblem results in a very simple step involving only elementwise thresholding operations.
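In detail, the vertex in (8b) follows by setting to zero the derivative of the scalar objective of the $\mathbf{z}$ step:

$$\frac{d}{dz_d}\Big[\frac{\mu}{2}(z_d-v_d)^2 + \frac{\rho}{2}(x_d-z_d+y_d)^2\Big] = \mu(z_d-v_d) - \rho(x_d-z_d+y_d) = 0 \quad\Longrightarrow\quad z_d = \frac{\mu v_d + \rho(x_d+y_d)}{\mu+\rho},$$

and clipping this vertex to $[l_d,u_d]$ gives exactly the elementwise median in (8b).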

The iteration (8) has a number of advantages. It is very simple to implement. It requires no line searches and has only one user parameter, the penalty parameter $\rho$, for which a good default value can be computed (see below). The algorithm converges for any positive value of $\rho$. The steps are fast and we can stop at any time with a feasible iterate. The algorithm is particularly convenient if we have to solve proximal operators of the form (1) that have the same matrix $\mathbf{A}$, since the Cholesky factor is computed just once and used by all operators. Finally, if the QP is the relaxation of a binary problem (i.e., $\mathbf{x}$ was originally in $\{0,1\}^D$ but was relaxed to $[0,1]^D$), we need not converge to high accuracy because the final result will be binarized a posteriori.
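For instance, one simple a posteriori binarization is to threshold the relaxed solution at $1/2$ (an illustrative choice, using the notation of the Matlab code at the end of the paper; the binarization actually used downstream may differ):

% Illustrative: binarize the relaxed solution Z in [0,1] by thresholding.
Zbin = Z > 0.5;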

Figure 1: Left: three possible cases for the location of the solution of the scalar box-constrained QP in the $\mathbf{z}$ step. Right: the solution is the median of $l_d$, $u_d$ and the parabola vertex.

Computational complexity

Each step in (8) is $\mathcal{O}(D)$ except for the $\mathbf{x}$ step, which is the most costly because of the linear system. This may be solved efficiently in one of the two following ways:

  • Preferably, by caching the Cholesky factorization of $\mathbf{A}+\rho\mathbf{I}$ (i.e., $\mathbf{A}+\rho\mathbf{I} = \mathbf{R}^T\mathbf{R}$, where $\mathbf{R}$ is upper triangular). If $\mathbf{A}$ is dense, computing the Cholesky factor is a one-time cost of $\mathcal{O}(D^3)$, and solving the linear system is $\mathcal{O}(D^2)$ by solving two triangular systems. If $\mathbf{A}$ is large and (sufficiently) sparse, computing the Cholesky factor and solving the linear system are both possibly $\mathcal{O}(D)$. One should use a good permutation to reduce fill-in (see the sketch after this list). This is feasible with relatively small dimensions if using a dense $\mathbf{A}$, or with large dimensions as long as $\mathbf{A}$ is sufficiently sparse that the Cholesky factorization does not add so much fill that it cannot be stored.

  • By using an iterative linear solver such as conjugate gradients (Nocedal and Wright, 2006), initialized with a warm start, preconditioned, and exited before convergence, so as to carry out faster, inexact $\mathbf{x}$-updates. This is better for large problems where the Cholesky factorization adds too much fill.
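The following Matlab sketch illustrates the first option for sparse $\mathbf{A}$; the choice of the amd ordering is ours, for illustration, and the variable names follow the Matlab code at the end of the paper.

% Sketch (assumes sparse psd A): one-time fill-reducing ordering and
% sparse Cholesky factorization, reused in every x-update (8a).
p = amd(A + r*speye(D));             % fill-reducing permutation (one-time)
R = chol(A(p,p) + r*speye(D));       % sparse Cholesky factor (one-time)
% ... then, inside the ADMM loop, the x-update solves:
W = B + r*(Z-Y);                     % right-hand sides of the linear system
X = zeros(D, size(W,2));
X(p,:) = R \ (R' \ W(p,:));          % two sparse triangular solves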

Thus, each iteration of the algorithm is cheap. In practice, for good values of $\rho$, the algorithm quickly approaches the solution in the first iterations and then converges slowly, as is known with ADMM algorithms in general. However, since each iteration is so cheap, we can run a large number of them if high accuracy is needed. As a sample runtime, solving a collection of such problems sharing the same dense matrix $\mathbf{A}$ to high accuracy takes on the order of seconds on a PC.

Initialization

If the QP problem (1) is itself a subproblem in a larger problem, one should warm-start the iteration of eq. (8) from the values of $\mathbf{z}$ and $\mathbf{y}$ in the previous outer-loop iteration. Otherwise, we can simply initialize $\mathbf{y}=\mathbf{0}$ and $\mathbf{z}$ to a feasible point, e.g. $\operatorname{median}(\mathbf{l},\mathbf{v},\mathbf{u})$.

Stopping criterion

We stop when the relative change in $\mathbf{z}$ is less than a set tolerance. We return as approximate solution the value of $\mathbf{z}$, which is feasible by construction, while $\mathbf{x}$ need not be (upon convergence, $\mathbf{x}=\mathbf{z}$).
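The Matlab code at the end of the paper uses an absolute-change test for simplicity; a relative-change test (one reasonable form, with a floor to avoid division by zero) could replace it inside the loop:

% Sketch: relative change in Z w.r.t. the previous iterate Zold;
% eps guards against division by zero on the first iteration.
rel = max(abs(Z(:)-Zold(:))) / max(max(abs(Zold(:))), eps);
if rel < tol, break; end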

Optimal penalty parameter

The speed at which ADMM converges depends significantly on the quadratic penalty parameter $\rho$ (Boyd et al., 2011). Little work exists on how to select $\rho$ so as to achieve fastest convergence. Recently, for QPs, Ghadimi et al. (2013) suggest using $\rho^* = \sqrt{\lambda_{\min}\lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the smallest (nonzero) and largest eigenvalues of the matrix $\mathbf{A}$.
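A sketch of computing this default in Matlab (we assume the relevant matrix is $\mathbf{A}$ from problem (1), and use a small relative threshold to skip numerically zero eigenvalues):

% Sketch: default penalty following Ghadimi et al. (2013):
% rho* = sqrt(lmin*lmax) over the nonzero eigenvalues of A.
ev = eig(full(A));                   % eigenvalues of symmetric psd A
lmax = max(ev);                      % largest eigenvalue
lmin = min(ev(ev > 1e-10*lmax));     % smallest nonzero eigenvalue
r = sqrt(lmin*lmax);                 % default penalty parameter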

Matlab code

A Matlab implementation of the algorithm is available from the author. The following Matlab code implements a basic form of the algorithm, using the cached Cholesky factor in the $\mathbf{x}$-update linear system, and assuming the bounds are the same for all the quadratic programs. It applies the algorithm synchronously to all the QPs, which means each QP runs the same number of iterations. This is inefficient, because the runtime is driven by the slowest QP to converge, but the code vectorizes better in Matlab. An implementation in C should handle each QP separately, particularly in a parallel or distributed implementation.

function [Z,Y,X] = proxbqp(V,m,A,B,l,u,Z,Y,r,maxit,tol)
% Solves N proximal bound-constrained QPs of the form (1), one per column:
%   min (m/2)||z-v||^2 + (1/2)z'*A*z - b'*z   s.t. l <= z <= u.
% In: V,B are DxN (columns v_n,b_n), A is DxD psd, scalar bounds l,u,
%     initial Z,Y (DxN), penalty r>0, iteration cap maxit, tolerance tol.
[D,N] = size(V);
R = chol(A+r*eye(D,D)); Rt = R';               % cache Cholesky factor of A+r*I
Zold = zeros(D,N);
for i=1:maxit
  X = R \ (Rt \ (B + r*(Z-Y)));                % x-update (8a): linear system
  Z = (m*V+r*(X+Y))/(m+r);                     % z-update (8b): parabola vertex...
  Z(Z<l) = l; Z(Z>u) = u;                      % ...clipped to the box [l,u]
  Y = Y + X - Z;                               % y-update (8c)
  if max(abs(Z(:)-Zold(:))) < tol, break; end  % stop when Z stops changing
  Zold = Z;
end
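A usage example on synthetic data (the sizes, random data and parameter values here are purely illustrative):

% Example: N QPs in D variables sharing a random dense psd matrix A.
D = 100; N = 1000;
G = randn(D); A = (G'*G)/D;          % symmetric positive definite A
V = randn(D,N); B = randn(D,N);      % per-QP vectors v_n and b_n
l = 0; u = 1;                        % common bounds for all QPs
Z = zeros(D,N); Y = zeros(D,N);      % cold start (Z feasible since l = 0)
[Z,Y,X] = proxbqp(V,1,A,B,l,u,Z,Y,1,1000,1e-5);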

References

  • Boyd et al. (2011) S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
  • Carreira-Perpiñán and Raziperchikolaei (2015) M. Á. Carreira-Perpiñán and R. Raziperchikolaei. Hashing with binary autoencoders. Unpublished manuscript, Jan. 2015.
  • Combettes and Pesquet (2011) P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Series in Optimization and Its Applications, pages 185–212. Springer-Verlag, 2011.
  • Ghadimi et al. (2013) E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson. Optimal parameter selection for the alternating direction method of multipliers (ADMM): Quadratic problems. arXiv:1306.2454 [math.OC], July 3 2013.
  • Moreau (1962) J.-J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math., 255:2897–2899, 1962.
  • Nocedal and Wright (2006) J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag, New York, second edition, 2006.
  • Rockafellar (1976) R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. Control and Optim., 14(5):877–898, 1976.