Solving Large Scale Quadratic Constrained Basis Pursuit

04/02/2021
by   Jirong Yi, et al.

Inspired by the alternating direction method of multipliers (ADMM) and the idea of operator splitting, we propose an efficient algorithm for solving large-scale quadratically constrained basis pursuit. Experimental results show that the proposed algorithm achieves a 50 to 100 times speedup compared with the baseline interior-point algorithm implemented in CVX.

1 Theoretical guarantees

We reformulate (0.1) as

(1.1)

where , and is an indicator function defined as

(1.2)
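
For concreteness, the standard quadratically constrained basis pursuit problem and the indicator-function splitting that this section appears to use can be written as follows (the notation here is our assumption: A is the measurement matrix, y the observation vector, and epsilon the noise level):

\min_{x} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \epsilon,

and, introducing an auxiliary variable z together with the indicator function of the ball \{z : \|z - y\|_2 \le \epsilon\},

\min_{x, z} \; \|x\|_1 + I_{\{z \,:\, \|z - y\|_2 \le \epsilon\}}(z) \quad \text{subject to} \quad Ax = z.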

The Lagrangian dual of (1.1) is

(1.3)

where is the dual variable, is the convex conjugate of at , and is the convex conjugate of at , i.e.,

(1.4)

and

(1.5)

The dual problem can be formulated as

(1.6)

Assume that Slater's condition holds, i.e., there exists such that . Then the convexity of problem (0.1) implies that the optimal solution achieves zero duality gap, i.e.,

(1.7)

From the KKT conditions, the optimal solution must satisfy (1.7) and

(1.8)

Thus, (1.7) and (1.8) can be used as optimality certificates or stopping criteria in algorithm design. More specifically, we define the primal residual, the dual residual, and the duality gap with respect to a given tuple as

(1.9)
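
As one concrete instance of such certificates, the following NumPy sketch evaluates primal feasibility, dual feasibility, and the duality gap for the standard QCBP form assumed above; the function name, the dual variable nu, and the exact residual definitions are illustrative and may differ from the paper's (1.9).

import numpy as np

def qcbp_certificates(A, y, eps, x, nu):
    # Assumed primal: min ||x||_1  s.t.  ||A x - y||_2 <= eps
    # Assumed dual:   max  -nu'y - eps*||nu||_2  s.t.  ||A' nu||_inf <= 1
    primal_res = max(np.linalg.norm(A @ x - y) - eps, 0.0)             # constraint violation
    dual_res = max(np.linalg.norm(A.T @ nu, np.inf) - 1.0, 0.0)        # dual feasibility violation
    gap = np.linalg.norm(x, 1) - (-(nu @ y) - eps * np.linalg.norm(nu))  # primal minus dual objective
    return primal_res, dual_res, gap

All three quantities are zero (up to numerical tolerance) at an optimal primal-dual pair, so they can serve as the stopping test described above.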

2 Algorithm design based on ADMM

We adopt ideas from alternating projection methods, and reformulate (1.1) as

(2.1)

where is defined as

(2.2)

The augmented Lagrangian of (2.1) becomes

(2.3)

where is the dual variable, and is a parameter. Define

(2.4)

and the resulting ADMM iterations are

(2.5)

More specifically,

(2.6)

or simply

(2.7)

where is the proximator of the function at , which is defined as

(2.8)
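
For reference, a common convention for the proximator of a function f at a point v, which (2.8) presumably instantiates (the exact scaling by the penalty parameter may differ), is

\mathrm{prox}_{\lambda f}(v) \;=\; \arg\min_{x} \; f(x) + \frac{1}{2\lambda}\,\|x - v\|_2^2.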

The and are defined as

(2.9)

More specifically, the proximator of at is

(2.10)

where is the elementwise soft thresholding function, i.e.,

(2.11)
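
A minimal NumPy sketch of this operator follows; the threshold kappa is a generic parameter, and in the algorithm it would be set by the penalty parameter (e.g., 1/rho in a scaled ADMM form).

import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft thresholding: the proximator of kappa * ||.||_1 at v
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

For example, soft_threshold(np.array([-2.0, 0.3, 1.5]), 1.0) returns [-1.0, 0.0, 0.5].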

The proximator of at is

(2.12)

The updating rule for can be specified as

(2.13)

where is the projection of onto , i.e., the solution to

(2.14)

Define

(2.15)

and the updating rule for the dual variable can be written as

(2.16)

2.1 Analytic solution to (2.14)

Since (2.14) is convex, it follows from its KKT conditions that are the optimal solution to (2.14) if and only if there exists such that

(2.17)

which implies that the optimal solution can be obtained by solving the following linear system

(2.18)

Remarks: (1) the matrix is highly sparse, and this sparsity can be combined with other potential structure of to simplify the computation; (2) even simple elimination can be used to simplify the problem, i.e.,

(2.19)

or

(2.20)

Both matrices and are positive definite, so factorization techniques can be used to accelerate the computation; (3) since , (2.19) is more efficient; (4) apply the Cholesky decomposition once to get ; (5) compute once; (6) solve for by back substitution, i.e., ;
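
The following NumPy sketch illustrates remarks (2)-(6) under the assumption that (2.14) is the Euclidean projection of a point (x0, z0) onto the affine set {(x, z) : A x = z}: block elimination reduces the KKT system to an m-by-m positive definite system in I + A A^T (presumably the more efficient of (2.19) and (2.20) when m is much smaller than n), whose Cholesky factor is computed once and reused. Function names are illustrative.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def factor_projection(A):
    # Cholesky factor of I + A A^T, computed once (the smaller system when m << n)
    m = A.shape[0]
    return cho_factor(np.eye(m) + A @ A.T)

def project_affine(A, factor, x0, z0):
    # Euclidean projection of (x0, z0) onto {(x, z) : A x = z}
    nu = cho_solve(factor, A @ x0 - z0)   # multiplier from the eliminated KKT system
    x = x0 - A.T @ nu
    z = z0 + nu                           # equivalently, z = A @ x
    return x, z

Forming and factorizing I + A A^T happens once; every later projection needs only matrix-vector products and two triangular solves, which is consistent with the "only once" accounting in Section 2.2.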

2.2 Algorithm in pseudocode

The algorithm is summarized in Algorithm 1.

Computational complexity - running time: (1) lines 5, 7, and 8 take ; (2) line 6 takes for the Cholesky decomposition of , for computing once, and for the backward solve using (2.19); (3) lines 9 and 10 take . Thus, but only once in total;

Computational complexity - space or memory: ;

Baseline algorithm, CVX using an interior-point method: (1) but multiple times.

1:  Input: , and
2:  Parameters: , , , , and
3:  Initialization: , and
4:  while  do
5:     Solve via (2.7), i.e.,
6:     Solve via (2.14), i.e., via (2.19)
7:     Solve via (2.16), i.e.,
8:     Get via (2.15), i.e.,
9:     Calculate primal residual via (1.9), i.e.,
10:     Calculate dual residual via (1.9), i.e.,
11:     Calculate duality gap via (1.9), i.e.,
12:     if  and and  then
13:        break
14:     else
15:        
16:     end if
17:  end while
18:  if  then
19:     Algorithm does not converge in iterations
20:     Return NOT CONVERGED
21:  else
22:     Algorithm converges within iterations
23:     Return
24:  end if
Algorithm 1 Algorithm for solving large-scale QCBP
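
Putting the steps together, the following NumPy sketch mirrors the structure of Algorithm 1 under the splitting assumed above; the penalty rho, the tolerances, the update order, and the variable names are illustrative rather than the paper's exact (2.5)-(2.16).

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def qcbp_admm(A, y, eps, rho=1.0, max_iter=1000, tol=1e-6):
    # ADMM / operator-splitting sketch for: min ||x||_1  s.t.  ||A x - y||_2 <= eps
    m, n = A.shape
    chol = cho_factor(np.eye(m) + A @ A.T)        # factorize I + A A^T once
    x, z = np.zeros(n), np.zeros(m)               # proximal block (x, z)
    xt, zt = np.zeros(n), np.zeros(m)             # projection block (x~, z~)
    ux, uz = np.zeros(n), np.zeros(m)             # scaled dual variables
    for _ in range(max_iter):
        # proximal step: soft thresholding for x, projection onto the eps-ball for z
        x = np.sign(xt - ux) * np.maximum(np.abs(xt - ux) - 1.0 / rho, 0.0)
        v = zt - uz - y
        z = y + v * min(1.0, eps / max(np.linalg.norm(v), 1e-16))
        # projection step onto {(x, z) : A x = z} using the cached Cholesky factor
        xt_old, zt_old = xt, zt
        nu = cho_solve(chol, A @ (x + ux) - (z + uz))
        xt = (x + ux) - A.T @ nu
        zt = A @ xt
        # dual update
        ux = ux + (x - xt)
        uz = uz + (z - zt)
        # primal and dual residuals used as the stopping criterion
        r = np.sqrt(np.linalg.norm(x - xt) ** 2 + np.linalg.norm(z - zt) ** 2)
        s = rho * np.sqrt(np.linalg.norm(xt - xt_old) ** 2 + np.linalg.norm(zt - zt_old) ** 2)
        if r < tol and s < tol:
            break
    return x

The stopping test above mirrors lines 9, 10, and 12 of Algorithm 1 (the duality gap check of line 11 is omitted here for brevity), and on small instances the output can be checked against CVX.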

3 Numerical experiments

Computational environment: (1) desktop with Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 32.0 GB RAM; (2) OS: Windows 10 Education; (3) MATLAB R2018a; (4) baseline: CVX, which solves (0.1) using an interior-point method;

Computational setup: (1) is assumed to be sparse with cardinality and is generated randomly; (2) , and is generated randomly; (3) the noise is generated randomly and normalized to have magnitude ; (4) is assumed to be generated via ;
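
A NumPy sketch of this data-generation procedure is given below; the dimensions m and n, the cardinality k, the noise magnitude eps, and the seed are placeholders rather than the paper's actual values, which appear in the caption of Table 1.

import numpy as np

def generate_qcbp_data(m, n, k, eps, seed=0):
    # Random QCBP instance: k-sparse ground truth, Gaussian A, noise scaled to magnitude eps
    rng = np.random.default_rng(seed)
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)     # k-sparse ground truth
    A = rng.standard_normal((m, n))              # random measurement matrix
    noise = rng.standard_normal(m)
    noise *= eps / np.linalg.norm(noise)         # normalize noise to have magnitude eps
    y = A @ x_true + noise                       # observations
    return A, y, x_true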

Results: see Table 1 and Figure 1

Time (sec)     100     400     1600    6400    25600
CVX            0.7     1.16    44      NA      NA
Algorithm 1    0.01    0.02    0.31    4.82    104.91
Table 1: Computational performance comparisons: , ,
Figure 1: , , ,
