We consider a proximal operator given by a quadratic function subject to bound constraints, and give an optimization algorithm for it based on the alternating direction method of multipliers (ADMM). The algorithm is particularly efficient for solving a collection of proximal operators that share the same quadratic form, or when the quadratic program is the relaxation of a binary quadratic problem.


## 1 Introduction

We consider the convex proximal bound-constrained quadratic program (QP):

$$\min_{x\in\mathbb{R}^D}\; f(x)+\frac{\mu}{2}\|x-v\|^2 \quad\text{s.t.}\quad l\le x\le u, \qquad f(x)=\frac{1}{2}x^TAx-b^Tx \tag{1}$$

where $\mu>0$, the vectors $v,l,u,b\in\mathbb{R}^D$, and the matrix $A\in\mathbb{R}^{D\times D}$ is symmetric positive definite or semidefinite. The problem has the form of a convex proximal operator (Combettes and Pesquet, 2011; Moreau, 1962; Rockafellar, 1976) (with $\mu>0$), and a unique solution. We will also be interested in solving a collection of such QPs that all have the same matrix $A$ but different vectors $b$, $v$, $l$ and $u$.

It is possible to solve problem (1) in different ways, but we want to take advantage of the structure of our problem, namely the existence of the strongly convex term and the fact that the problems share the same matrix $A$. We describe here one algorithm that is very simple, has guaranteed convergence without line searches, and takes advantage of the structure of the problem. It is based on the alternating direction method of multipliers (ADMM), combined with a direct linear solver and caching of the Cholesky factorization of $A+\rho I$.

### Motivation

Problem (1) arises within a step of the binary hashing algorithm of Carreira-Perpiñán and Raziperchikolaei (2015). The step involves $N$ independent subproblems

$$\min_{x_n\in\{0,1\}^D}\; \frac{1}{2}\|Cx_n-d_n\|^2+\frac{\mu}{2}\|x_n-v_n\|^2, \qquad n=1,\dots,N.$$

Intuitively, subproblem $n$ tries to map the binary vector $x_n$ linearly onto a real vector $d_n$ with minimal error, given the linear mapping $C$. Hence, each subproblem is of the form of (1), but where $f$ is a least-squares function (in which the rectangular matrix $C$ is common to all subproblems) and the variables are binary. Relaxing these subproblems results in proximal bound-constrained QPs of the form of (1).
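Expanding the least-squares term makes the identification with (1) explicit:

$$\frac{1}{2}\|Cx_n-d_n\|^2 = \frac{1}{2}x_n^T(C^TC)\,x_n-(C^Td_n)^Tx_n+\text{const},$$

so each relaxed subproblem is an instance of (1) with $A=C^TC$ and $b=C^Td_n$, with the binary constraint $x_n\in\{0,1\}^D$ relaxed to the box $0\le x_n\le 1$.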

## 2 Solving a QP using ADMM

We briefly review how to solve a quadratic program (QP) using the alternating direction method of multipliers (ADMM), following Boyd et al. (2011). Consider the QP

$$\begin{aligned} \min_x \quad & \frac{1}{2}x^TPx+q^Tx & (2)\\ \text{s.t.}\quad & Ax=b,\quad x\ge 0 & (3)\end{aligned}$$

over $x\in\mathbb{R}^D$, where $P$ is positive (semi)definite. To use ADMM, we first introduce new variables $z$ so that we can replace the inequalities with an indicator function $g(z)$, which is zero in the nonnegative orthant and $+\infty$ otherwise. Then we write the problem as

$$\begin{aligned} \min_{x,z} \quad & f(x)+g(z) & (4)\\ \text{s.t.}\quad & x=z & (5)\end{aligned}$$

where

$$f(x)=\frac{1}{2}x^TPx+q^Tx, \qquad \mathrm{dom}(f)=\{x\in\mathbb{R}^D:\ Ax=b\}$$

is the original objective with its domain restricted to the equality constraint. The augmented Lagrangian is

$$\mathcal{L}(x,z,y;\rho)=f(x)+g(z)+y^T(x-z)+\frac{\rho}{2}\|x-z\|^2 \tag{6}$$

and the ADMM iteration has the form:

$$\begin{aligned} x &\leftarrow \arg\min_x \mathcal{L}(x,z,y;\rho)\\ z &\leftarrow \arg\min_z \mathcal{L}(x,z,y;\rho)\\ y &\leftarrow y+\rho(x-z) \end{aligned}$$

where $y$ is the dual variable (the Lagrange multiplier estimates for the constraint $x=z$), and the updates are applied in order and modify the variables immediately. Here, we use the scaled form of the ADMM iteration, which is simpler. It is obtained by combining the linear and quadratic terms in $\mathcal{L}$ and using a scaled dual variable $\zeta=y/\rho$:

$$\begin{aligned} x &\leftarrow \arg\min_x \Big(f(x)+\frac{\rho}{2}\|x-z+\zeta\|^2\Big)\\ z &\leftarrow \arg\min_z \Big(g(z)+\frac{\rho}{2}\|x-z+\zeta\|^2\Big)\\ \zeta &\leftarrow \zeta+x-z. \end{aligned}$$

Since $g$ is the indicator function of the nonnegative orthant, the solution of the $z$-update is simply to threshold each entry of $x+\zeta$ by taking its nonnegative part. Finally, the ADMM iteration is:

$$\begin{aligned} x &\leftarrow \arg\min_x \Big(f(x)+\frac{\rho}{2}\|x-z+\zeta\|^2\Big) & \text{(7a)}\\ z &\leftarrow (x+\zeta)_+ & \text{(7b)}\\ \zeta &\leftarrow \zeta+x-z & \text{(7c)} \end{aligned}$$

where the updates are applied in order and modify the variables immediately, $(\cdot)_+=\max(\cdot,0)$ applies elementwise, and $\|\cdot\|$ is the Euclidean norm. The penalty parameter $\rho>0$ is set by the user, and $y=\rho\zeta$ are the Lagrange multiplier estimates for the inequalities. The $x$-update is a quadratic objective whose solution is given by a linear system. The ADMM iteration consists of very simple updates to the relevant variables, but its success crucially relies on being able to solve the $x$-update efficiently. Convergence of the ADMM iteration (7) to the global minimum of problem (2) in value and to a feasible point is guaranteed for any $\rho>0$.
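As a concrete illustration, here is a minimal NumPy sketch of iteration (7) for the QP (2); the function name and problem instance are illustrative, not from the source. The $x$-update is an equality-constrained quadratic, solved via its KKT system.

```python
import numpy as np

def admm_qp(P, q, A, b, rho=1.0, iters=200):
    """Sketch of the scaled ADMM iteration (7) for the QP (2):
    min (1/2) x'Px + q'x  s.t.  Ax = b, x >= 0."""
    D, m = P.shape[0], A.shape[0]
    x = np.zeros(D); z = np.zeros(D); zeta = np.zeros(D)
    # The x-update minimizes f(x) + (rho/2)||x - z + zeta||^2 s.t. Ax = b,
    # an equality-constrained QP solved via its KKT system (built once).
    K = np.block([[P + rho * np.eye(D), A.T],
                  [A, np.zeros((m, m))]])
    for _ in range(iters):
        rhs = np.concatenate([rho * (z - zeta) - q, b])
        x = np.linalg.solve(K, rhs)[:D]        # x-update (7a)
        z = np.maximum(x + zeta, 0.0)          # z-update (7b): nonnegative part
        zeta = zeta + x - z                    # dual update (7c)
    return z

# Tiny example: min (1/2)||x||^2  s.t.  x1 + x2 = 1, x >= 0  ->  x = (0.5, 0.5)
x_star = admm_qp(np.eye(2), np.zeros(2), np.array([[1.0, 1.0]]), np.array([1.0]))
```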

## 3 Application to our QP

In order to apply ADMM to our QP (1), we make the identification $f(x)=\frac{1}{2}x^TAx-b^Tx$, defined over the entire $\mathbb{R}^D$, and $g(z)=\frac{\mu}{2}\|z-v\|^2$, defined for $l\le z\le u$ (and $+\infty$ otherwise), and want to minimize $f(x)+g(z)$ s.t. $x=z$. The scaled ADMM iteration is

$$\begin{aligned} x &\leftarrow \arg\min_x \Big(\frac{1}{2}x^TAx-b^Tx+\frac{\rho}{2}\|x-z+\zeta\|^2\Big)\\ z &\leftarrow \arg\min_z \Big(\frac{\mu}{2}\|z-v\|^2+\frac{\rho}{2}\|x-z+\zeta\|^2\Big) \quad\text{s.t.}\quad l\le z\le u\\ \zeta &\leftarrow \zeta+x-z. \end{aligned}$$

We now solve the steps over $x$ and $z$. The step over $x$ is an unconstrained strongly convex quadratic function whose unique minimizer is given by a linear system. The step over $z$ separates over each component of $z$ because both the objective and the constraints are separable. For each component $d=1,\dots,D$, we have to minimize an upwards parabola defined in $[l_d,u_d]$, whose solution is the median of $l_d$, $u_d$ and the parabola vertex (see fig. 1). Finally, we obtain the following updates, which are iterated in order until convergence:

$$\begin{aligned} x &\leftarrow (A+\rho I)^{-1}\big(b+\rho(z-\zeta)\big) & \text{(8a)}\\ z &\leftarrow \mathrm{median}\Big(l,\,u,\,\frac{\mu v+\rho(x+\zeta)}{\mu+\rho}\Big) & \text{(8b)}\\ \zeta &\leftarrow \zeta+x-z & \text{(8c)} \end{aligned}$$

where the median applies elementwise to its vector arguments, $x$ are the primal variables, $z$ the auxiliary (consensus) variables, and $\zeta$ the scaled Lagrange multiplier estimates for the constraint $x=z$. Moving the inequalities into the $z$ subproblem results in a very simple step involving only elementwise thresholding operations.
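The updates above can be sketched as follows for a single QP; this is a hedged NumPy illustration (names and the test instance are not from the source), with the elementwise median of $l$, $u$ and the vertex written as a clip.

```python
import numpy as np

def prox_bqp(A, b, mu, v, l, u, rho=1.0, iters=300):
    """Sketch of iteration (8) for one proximal bound-constrained QP:
    min (1/2) x'Ax - b'x + (mu/2)||x - v||^2  s.t.  l <= x <= u."""
    D = A.shape[0]
    x = np.zeros(D); zeta = np.zeros(D)
    z = v.copy()                               # simple initialization
    M = A + rho * np.eye(D)                    # factored once in practice
    for _ in range(iters):
        x = np.linalg.solve(M, b + rho * (z - zeta))            # (8a)
        t = (mu * v + rho * (x + zeta)) / (mu + rho)            # parabola vertex
        z = np.clip(t, l, u)     # (8b): elementwise median of l, u and t
        zeta = zeta + x - z                                     # (8c)
    return z

# With A diagonal the problem separates, so the solution is
# x_d = clip((b_d + mu*v_d)/(A_dd + mu), l, u), which ADMM should recover.
A = np.diag([2.0, 2.0]); b = np.array([4.0, -4.0])
z = prox_bqp(A, b, mu=1.0, v=np.zeros(2), l=0.0, u=1.0)
```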

The iteration (8) has a number of advantages. It is very simple to implement. It requires no line searches and has only one user parameter, the penalty parameter $\rho$, for which a good default value can be computed (see below). The algorithm converges for any positive value of $\rho$. The steps are fast, and we can stop at any time with a feasible iterate. The algorithm is particularly convenient if we have to solve proximal operators of the form (1) that have the same matrix $A$, since the Cholesky factor of $A+\rho I$ is computed just once and used by all operators. Finally, if the QP is the relaxation of a binary problem (i.e., $x$ was originally in $\{0,1\}^D$ but was relaxed to $[0,1]^D$), we need not converge to high accuracy, because the final result will be binarized a posteriori.

### Computational complexity

Each step in (8) is $\mathcal{O}(D)$ except for the $x$ step, which is the most costly because of the linear system. This may be solved efficiently in one of the two following ways:

• Preferably, by caching the Cholesky factorization of $A+\rho I$ (i.e., $A+\rho I=R^TR$, where $R$ is upper triangular). If $A$ is dense, computing the Cholesky factor is a one-time cost of $\mathcal{O}(D^3)$, and solving the linear system is $\mathcal{O}(D^2)$ by solving two triangular systems. If $A$ is large and (sufficiently) sparse, computing the Cholesky factor and solving the linear system are both possibly $\mathcal{O}(D)$. One should use a good permutation to reduce fill-in. This is feasible with relatively small dimensions if using a dense $A$, or with large dimensions as long as $A$ is sufficiently sparse that the Cholesky factorization does not add so much fill that it cannot be stored.

• By using an iterative linear solver such as conjugate gradients (Nocedal and Wright, 2006), initialized with a warm start, preconditioned, and exited before convergence, so as to carry out faster, inexact $x$-updates. This is better for large problems where the Cholesky factorization adds too much fill.
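The first option can be sketched as follows with SciPy (the variable names and sizes are illustrative): the factorization of $A+\rho I$ is paid once, after which every $x$-update, for any number of right-hand sides, costs only triangular solves.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
D, N, rho = 50, 200, 1.0
G = rng.standard_normal((D, D))
A = G @ G.T + np.eye(D)                      # symmetric positive definite
# One-time O(D^3) cost: Cholesky-factor A + rho*I once ...
factor = cho_factor(A + rho * np.eye(D))
# ... then each x-update (8a) is O(D^2) per right-hand side:
B = rng.standard_normal((D, N))              # N right-hand sides b + rho*(z - zeta)
X = cho_solve(factor, B)                     # solves (A + rho*I) X = B
```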

Thus, each iteration of the algorithm is cheap. In practice, for good values of $\rho$, the algorithm quickly approaches the solution in the first iterations and then converges slowly, as is known with ADMM algorithms in general. However, since each iteration is so cheap, we can run a large number of them if high accuracy is needed. As a sample runtime, for a collection of problems having the same, dense matrix $A$, solving all problems to high accuracy takes on the order of seconds on a PC.

### Initialization

If the QP problem (1) is itself a subproblem in a larger problem, one should warm-start the iteration of eq. (8) from the values of $z$ and $\zeta$ in the previous outer-loop iteration. Otherwise, we can simply initialize $z=v$ and $\zeta=0$.

### Stopping criterion

We stop when the relative change in $z$ is less than a set tolerance. We return as approximate solution the value of $z$, which is feasible by construction, while $x$ need not be (upon convergence, $x=z$).

### Optimal penalty parameter ρ

The speed at which ADMM converges depends significantly on the quadratic penalty parameter $\rho$ (Boyd et al., 2011). Little work exists on how to select $\rho$ so as to achieve fastest convergence. Recently, for QPs, Ghadimi et al. (2013) suggest using $\rho=\sqrt{\lambda_1\lambda_D}$, where $\lambda_1$ and $\lambda_D$ are the smallest (nonzero) and largest eigenvalues of the matrix $A$.
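A minimal sketch of such a default, assuming the rule $\rho=\sqrt{\lambda_1\lambda_D}$ (the geometric mean of the smallest nonzero and largest eigenvalues of $A$); the function name and tolerance are illustrative.

```python
import numpy as np

def default_rho(A):
    """Penalty parameter as the geometric mean of the smallest nonzero
    and largest eigenvalues of the symmetric matrix A."""
    lam = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
    lam_min = lam[lam > 1e-12][0]          # smallest nonzero eigenvalue
    return float(np.sqrt(lam_min * lam[-1]))

rho = default_rho(np.diag([1.0, 4.0]))     # -> 2.0
```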

### Matlab code

A Matlab implementation of the algorithm is available from the author. The following Matlab code implements a basic form of the algorithm, using the Cholesky factor in the $x$-update linear system, and assuming the bounds are the same for all of the quadratic programs. It applies the algorithm synchronously to all the QPs, which means each QP runs the same number of iterations. This is inefficient, because the runtime is driven by the slowest variable to converge, but the code is better vectorized in Matlab. An implementation in C should handle each QP separately, particularly in a parallel or distributed implementation.

```matlab
% PROXBQP  Solve N proximal bound-constrained QPs sharing the same matrix A:
%   min_x (1/2)*x'*A*x - b'*x + (m/2)*||x - v||^2  s.t.  l <= x <= u
% V, B: D x N matrices whose columns are v_n, b_n; l, u: scalar bounds;
% Z, Y: initial values for z and zeta; r: penalty parameter rho.
function [Z,Y,X] = proxbqp(V,m,A,B,l,u,Z,Y,r,maxit,tol)

[D,N] = size(V);
R = chol(A + r*eye(D,D)); Rt = R';          % cache Cholesky factor of A + r*I
Zold = zeros(D,N);
for i=1:maxit
  X = R \ (Rt \ (B + r*(Z-Y)));             % x-update (8a): two triangular solves
  Z = (m*V + r*(X+Y))/(m+r);                % z-update (8b): vertex, then clip
  Z(Z<l) = l; Z(Z>u) = u;
  Y = Y + X - Z;                            % multiplier update (8c)
  if max(abs(Z(:)-Zold(:))) < tol, break; end
  Zold = Z;
end
```


## References

• Boyd et al. (2011) S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
• Carreira-Perpiñán and Raziperchikolaei (2015) M. Á. Carreira-Perpiñán and R. Raziperchikolaei. Hashing with binary autoencoders. Unpublished manuscript, Jan. 2015.
• Combettes and Pesquet (2011) P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Series in Optimization and Its Applications, pages 185–212. Springer-Verlag, 2011.
• Ghadimi et al. (2013) E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson. Optimal parameter selection for the alternating direction method of multipliers (ADMM): Quadratic problems. arXiv:1306.2454 [math.OC], July 3 2013.
• Moreau (1962) J.-J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math., 255:2897–2899, 1962.
• Nocedal and Wright (2006) J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag, New York, second edition, 2006.
• Rockafellar (1976) R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. Control and Optim., 14(5):877–898, 1976.