A Coordinate-wise Optimization Algorithm for Sparse Inverse Covariance Selection

11/19/2017 ∙ by Ganzhao Yuan, et al. ∙ Sun Yat-sen University

Sparse inverse covariance selection is a fundamental problem for analyzing dependencies in high-dimensional data. However, this problem is difficult to solve since it is NP-hard. Existing solutions are primarily based on convex ℓ_1 approximation and iterative hard thresholding, which only lead to sub-optimal solutions. In this work, we propose a coordinate-wise optimization algorithm that is guaranteed to converge to a coordinate-wise minimum point. The algorithm iteratively and greedily selects one variable or swaps two variables to identify the support set, and then solves a reduced convex optimization problem over the support set to achieve the greatest descent. As a side contribution, we propose a Newton-like algorithm to solve the reduced convex sub-problem. Finally, we demonstrate the efficacy of our method on synthetic and real-world data sets, where it achieves state-of-the-art accuracy.


1 Introduction

In this paper, we mainly focus on the following nonconvex optimization problem:

(1)    $\min_{X \succ 0} \;\; \langle \Sigma, X \rangle - \log\det(X) \quad \mathrm{s.t.} \quad \|X\|_0 \le k$

where $\Sigma$ is a given symmetric covariance matrix of the input data set, $\|\cdot\|_0$ counts the number of non-diagonal non-zero elements of a square matrix, and $k$ is a positive integer that specifies the sparsity level of the solution. $X \succ 0$ means $X$ is positive definite, and $\langle \cdot, \cdot \rangle$ stands for the standard inner product.

The optimization problem in (1) is known as sparse inverse covariance selection in the literature [9, 13]. It provides a good way of analyzing dependencies in high-dimensional data and captures a variety of applications in computer vision and machine learning (e.g. biomedical image analysis [10], scene labeling [27], brain functional network classification [38]). The log-determinant function is introduced for maximum likelihood estimation, and the $\ell_0$ norm is used to reduce over-fitting and improve the interpretability of the model. We remark that when the sparsity constraint is absent, one can set the gradient of the objective function to zero (i.e. $\Sigma - X^{-1} = 0$) and output $X = \Sigma^{-1}$ as the optimal solution.
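For illustration, a minimal numerical sketch of this remark is given below; it assumes the objective takes the reconstructed form $f(X) = \langle \Sigma, X \rangle - \log\det(X)$, in which case the gradient $\Sigma - X^{-1}$ indeed vanishes at $X = \Sigma^{-1}$.

import numpy as np

# Illustrative check (not from the paper): with the objective assumed to be
# f(X) = <Sigma, X> - log det(X), the gradient Sigma - X^{-1} vanishes at X = Sigma^{-1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)       # a symmetric positive definite "covariance"

X_star = np.linalg.inv(Sigma)         # candidate unconstrained minimizer
grad = Sigma - np.linalg.inv(X_star)  # gradient of <Sigma, X> - log det(X)
print(np.allclose(grad, 0))           # prints True: X = Sigma^{-1} is a stationary point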

Problem (1) is very challenging due to the combinatorial $\ell_0$ norm. Existing solutions can be categorized into two classes: convex approximation and iterative hard thresholding. Convex approximation simply replaces the $\ell_0$ norm by its tightest convex relaxation, the $\ell_1$ norm. In the past decades, a plethora of approaches have been proposed to solve the $\ell_1$ approximation problem, including the projected sub-gradient method [8], (linearized) alternating direction methods [26, 34], quadratic approximation methods [25, 24, 12, 11], block coordinate descent methods [9, 1], Nesterov's first-order optimal method [18, 19, 6], and primal-dual interior point methods [17]. Despite the popularity of convex methods, they fail to control the sparsity of the solution and often lead to sub-optimal accuracy for the original non-convex problem. Recent attention has therefore turned to solving the original non-convex problem directly [31, 36, 5].

The iterative hard thresholding method iteratively sets the smallest elements (in magnitude) to zero in a gradient descent manner. This strategy controls the sparsity of the solution directly and exactly. Due to its simplicity, it has been widely used and incorporated into the optimization frameworks of the penalty decomposition algorithm [20] and the mean doubly alternating direction method [7]. In [20], it is shown that, for the general sparse optimization problem, any accumulation point of the sequence generated by the penalty decomposition algorithm satisfies the first-order optimality condition of the problem.

Recently, A. Beck and Y. Vaisbourd presented and analyzed a new optimality criterion based on coordinate-wise optimality [3]. They show that coordinate-wise optimality is strictly stronger than the optimality criterion based on hard thresholding. They apply their algorithm to principal component analysis and show that their method consistently outperforms the well-known truncated power method [33]. Inspired by this work, we extend their method to the sparse inverse covariance selection problem. We are also aware of the work [21], where a cyclic coordinate descent algorithm (combined with a randomized initialization strategy) is used to solve the sparse inverse covariance selection problem. However, that method only addresses the norm-regularized formulation and fails to control the sparsity level of the solution.

Contributions: The contributions of this work are four-fold. (i) We propose a new coordinate-wise optimization algorithm for sparse inverse covariance selection. The algorithm iteratively and greedily selects one variable or swaps two variables to identify the support set, and then solves a reduced convex optimization problem over the support set (see Section 2). (ii) We propose an efficient Hessian-free Newton-like algorithm to solve the convex subproblem (Section 3). (iii) We provide theoretical analysis for the proposed Coordinate-Wise Optimization Algorithm (CWOA) and the Newton-Like Optimization Algorithm (NLOA). We prove that CWOA is guaranteed to converge to a coordinate-wise minimum point of the original nonconvex problem, and that NLOA is guaranteed to converge to the global optimal solution of the convex subproblem with a global linear rate and a local quadratic convergence rate (Section 4). (iv) Extensive experiments show that our method consistently outperforms existing solutions in terms of accuracy (Section 5).

Notations:

In this paper, boldfaced lowercase letters denote vectors and uppercase letters denote real-valued matrices. We denote by $\lambda_1(X) \le \dots \le \lambda_n(X)$ the eigenvalues of a matrix $X$ in increasing order. All vectors are column vectors and the superscript $^\top$ denotes transpose. $\mathrm{vec}(X)$ stacks the columns of the matrix $X$ into a column vector and $\mathrm{mat}(x)$ converts the vector $x$ back into a matrix; thus, $\mathrm{vec}(\mathrm{mat}(x)) = x$ and $\mathrm{mat}(\mathrm{vec}(X)) = X$. We use $\langle X, Y \rangle$ and $X \otimes Y$ to denote the Euclidean inner product and the Kronecker product of $X$ and $Y$, respectively. For any matrix $X$ and any indices $i, j$, we denote by $X_{i,j}$ the element of $X$ in row $i$ and column $j$, and use $(j-1)n + i$ to denote the position of $X_{i,j}$ in $\mathrm{vec}(X)$; therefore, we have $\mathrm{vec}(X)_{(j-1)n+i} = X_{i,j}$. We denote by $e_i$ the unit vector with a 1 in the $i$-th entry and 0 in all other entries. We use a pair $(i,j)$ to denote a position in a square matrix of size $n \times n$, where $n$ is known from the context, with $i$ and $j$ the corresponding row and column. We denote by $E_{ij}$ the square symmetric matrix whose entries at positions $(i,j)$ and $(j,i)$ equal 1, with 0 in all other entries; note that when $i = j$, we have $E_{ii} = e_i e_i^{\top}$. Finally, for any matrices $X$ and $Y$, additional shorthand is defined as needed in the analysis.
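To make the notation concrete, the following small snippet (illustrative; the variable names are ours, not the paper's) demonstrates the vec/mat operators, the Euclidean inner product, and the matrix $E_{ij}$, assuming column-major stacking.

import numpy as np

n = 4
vec = lambda X: X.reshape(-1, order='F')     # stack the columns of X into a vector
mat = lambda x: x.reshape(n, n, order='F')   # inverse operation: reshape x back into a matrix

X = np.arange(n * n, dtype=float).reshape(n, n)
assert np.allclose(mat(vec(X)), X)           # mat(vec(X)) = X

def E(i, j, n):
    # Symmetric matrix with ones at positions (i, j) and (j, i), zeros elsewhere.
    M = np.zeros((n, n))
    M[i, j] = M[j, i] = 1.0
    return M

print(np.trace(X.T @ X))                                               # Euclidean inner product <X, X>
print(np.allclose(E(2, 2, n), np.outer(np.eye(n)[2], np.eye(n)[2])))   # E_ii = e_i e_i^T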

2 Coordinate-wise Optimization Algorithm

This section presents our coordinate-wise optimization algorithm, which is guaranteed to converge after a finite number of iterations to a coordinate-wise minimum point [2, 3]. Throughout, we refer to the index set of non-diagonal non-zero elements of the current solution as the support set, and to the index set of its zero elements as its complement.

First of all, we notice that when the support set is known, problem (1) reduces to the following convex optimization problem:

(2)    $\min_{X \succ 0} \;\; \langle \Sigma, X \rangle - \log\det(X) \quad \mathrm{s.t.} \quad X_{i,j} = 0 \;\; \text{for all } (i,j) \text{ outside the support set}$

Our algorithm iteratively and greedily selects one variable or swaps two variables to identify the support set, and then solves the reduced convex sub-problem in (2) to achieve the greatest descent.

  Input: Sparsity level k.
  Output: The solution X.
  Initialization: Set X to the inverse of a diagonal matrix D and initialize the support set.
  while true do
      Greedy Pursuit Stage
     while the sparsity budget is not reached do
        for every coordinate outside the support set do
           evaluate the objective decrease obtained by adding this coordinate      (3)
        end for
        pick the coordinate with the greatest decrease
        if it strictly decreases the objective then
           Solve (2) to get X with the enlarged support set.
        end if
     end while

      Swap Coordinates Stage
     for every coordinate inside the support set do
        for every coordinate outside the support set do
           evaluate the objective decrease obtained by swapping the two coordinates      (4)
        end for
     end for

     if the best swap strictly decreases the objective then
        Solve (2) to get X with the updated support set.
     else
        set the output to the current solution
        break
     end if
  end while
Algorithm 1 CWOA: A Coordinate-wise Optimization Algorithm for Sparse Inverse Covariance Selection.

We summarize the proposed method in Algorithm 1 and offer a few remarks on it below.

Two-stage algorithm. At each iteration of the algorithm, one or two variables of the solution are updated. In the first greedy pursuit stage, the algorithm greedily picks the coordinate outside the support set that leads to the greatest descent. This strategy is also known as forward greedy selection in the literature [28, 37]. In the second swap coordinates stage, the algorithm enumerates all possible pairs consisting of one coordinate inside and one coordinate outside the support set, finds the pair that leads to the greatest descent, and changes the two coordinates from zero/non-zero to non-zero/zero. At both stages, once the support set has been updated, Algorithm 1 solves the convex subproblem (2) over the support set to compute a more 'compact' solution.
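The sketch below (ours, not the authors' implementation) mirrors this two-stage structure. The helpers gain_of_adding, gain_of_swapping and solve_restricted are hypothetical placeholders for the one-dimensional problems (3)-(4) and the convex subproblem (2), and the diagonal starting point is an assumption.

import numpy as np

def cwoa(Sigma, k, gain_of_adding, gain_of_swapping, solve_restricted):
    # Two-stage coordinate-wise loop; the three helpers are hypothetical placeholders.
    n = Sigma.shape[0]
    X = np.diag(1.0 / np.diag(Sigma))       # feasible diagonal starting point (assumption)
    support = set()                         # selected off-diagonal positions (i > j)
    all_pairs = [(i, j) for i in range(n) for j in range(i)]

    while True:
        # Greedy pursuit stage: add coordinates until the sparsity budget is exhausted
        # (each symmetric pair is counted once here).
        while len(support) < k:
            candidates = [p for p in all_pairs if p not in support]
            if not candidates:
                break
            best = max(candidates, key=lambda p: gain_of_adding(X, Sigma, p))   # Eq. (3)
            if gain_of_adding(X, Sigma, best) <= 0:
                break
            support.add(best)
            X = solve_restricted(Sigma, support)                                # problem (2)

        # Swap coordinates stage: exchange one selected and one unselected coordinate.
        swaps = [(p, q) for p in support for q in all_pairs if q not in support]
        if not swaps:
            break
        best_swap = max(swaps, key=lambda pq: gain_of_swapping(X, Sigma, *pq))  # Eq. (4)
        if gain_of_swapping(X, Sigma, *best_swap) <= 0:
            break                                      # coordinate-wise minimum reached
        support.remove(best_swap[0]); support.add(best_swap[1])
        X = solve_restricted(Sigma, support)                                    # problem (2)
    return X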

One-dimensional sub-problem. The problems in (3) and (4) reduce to the following optimization problem:

(5)

where the candidate coordinate comes from outside the support set for (3) and from the swap pair for (4). We now discuss how to simplify problem (5). By exploiting the structure of the rank-two coordinate update and the inverse of the current solution, problem (5) can be simplified to a one-dimensional convex problem:

where the remaining term is a constant. Since the objective of this one-dimensional problem is differentiable, setting its gradient to zero yields a quadratic equation with two solutions; only one of them satisfies the bound constraint, and the optimal solution can therefore be computed in closed form.
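As a sketch of this simplification (a reconstruction under our assumptions, not the paper's exact derivation), suppose the single-coordinate update takes the rank-two form $X \leftarrow X + \theta\,(e_i e_j^{\top} + e_j e_i^{\top})$ with $i \neq j$, and write $W = X^{-1}$. The matrix determinant lemma gives

\[
\det\!\big(X + \theta (e_i e_j^{\top} + e_j e_i^{\top})\big)
  \;=\; \det(X)\,\Big[(1 + \theta W_{ij})^{2} - \theta^{2}\, W_{ii} W_{jj}\Big],
\]

so that, up to an additive constant, the one-dimensional problem becomes

\[
\min_{\theta}\;\; 2\,\Sigma_{ij}\,\theta \;-\; \log\!\Big[(1 + \theta W_{ij})^{2} - \theta^{2}\, W_{ii} W_{jj}\Big]
\quad \mathrm{s.t.} \quad (1 + \theta W_{ij})^{2} - \theta^{2}\, W_{ii} W_{jj} \;>\; 0 .
\]

Differentiating and clearing the denominator yields a quadratic equation in $\theta$, only one root of which keeps the argument of the logarithm positive, in line with the closed-form solution described above.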

Fast matrix computation. In our algorithm, we assume that the inverse of the current solution is available. This can be achieved by the following strategy. We keep a record of the inverse in every iteration. Once the solution is changed by a coordinate update, we quickly update its inverse using the well-known Sherman-Morrison-Woodbury formula. Specifically, we rewrite the coordinate update as a low-rank correction and apply the Sherman-Morrison-Woodbury formula, so that updating the inverse only requires solving a small linear system and a few products involving the corresponding rows and columns of the stored inverse.
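A minimal sketch of this bookkeeping (ours; the helper name is illustrative) keeps $W = X^{-1}$ up to date under the rank-two coordinate update $X \leftarrow X + \theta(e_i e_j^{\top} + e_j e_i^{\top})$ with the Sherman-Morrison-Woodbury formula rather than re-inverting the full matrix; each update costs $O(n^2)$ instead of $O(n^3)$.

import numpy as np

def smw_rank2_update(W, i, j, theta):
    # Return the inverse of X + theta*(e_i e_j^T + e_j e_i^T), given W = X^{-1}.
    n = W.shape[0]
    U = np.zeros((n, 2)); U[i, 0] = 1.0; U[j, 1] = 1.0   # U = [e_i, e_j]
    V = np.zeros((n, 2)); V[j, 0] = 1.0; V[i, 1] = 1.0   # V = [e_j, e_i]
    C_inv = (1.0 / theta) * np.eye(2)
    middle = np.linalg.inv(C_inv + V.T @ W @ U)          # only a 2 x 2 inverse is needed
    return W - W @ U @ middle @ V.T @ W

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
X = A @ A.T + 6 * np.eye(6)
W = np.linalg.inv(X)
i, j, theta = 1, 4, 0.3
X_new = X.copy(); X_new[i, j] += theta; X_new[j, i] += theta
print(np.allclose(smw_rank2_update(W, i, j, theta), np.linalg.inv(X_new)))   # prints True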

Remarks: (i) Algorithm 1 can be viewed as an improved version of the classical greedy pursuit method for solving the sparsity-constrained inverse covariance selection problem. Given that greedy pursuit methods achieve state-of-the-art performance in a variety of non-convex optimization problems (e.g. compressed sensing [28], kernel learning [14], and sensor selection [15]), our proposed method is expected to achieve state-of-the-art performance as well. (ii) Algorithm 1 is also closely related to the forward-backward greedy method in the literature [37]. To obtain the greatest descent, the forward-backward strategy considers the removal step and the adding step sequentially, whereas the swapping strategy (the swap coordinates stage in Algorithm 1) considers these two steps simultaneously. Thus, the swapping strategy is generally stronger than the forward-backward strategy.

3 Convex Optimization Over Support Set

After the support set has been determined, one needs to solve the reduced convex sub-problem in (2), which is equivalent to the following convex composite minimization problem:

(6)    $\min_{X \succ 0} \;\; F(X) \;\triangleq\; f(X) + g(X)$

where $f(X) = \langle \Sigma, X \rangle - \log\det(X)$ is the smooth part and $g$ is the indicator function of the convex set of matrices whose entries outside the support set are zero. In what follows, we present an efficient Newton-Like Optimization Algorithm (NLOA) to tackle this problem. This method has the merits of greedy descent and fast convergence.

Following [29, 35, 16, 32], we develop a quadratic approximation of the objective function around any solution $X$ using a second-order Taylor expansion, where the first-order and second-order derivatives of the smooth part $f$ can be expressed as [12]:

$\nabla f(X) = \Sigma - X^{-1}, \qquad \nabla^2 f(X) = X^{-1} \otimes X^{-1}.$

Then, we keep the non-smooth function $g$ and build a quadratic approximation for the smooth function $f$, which leads to the following search-direction subproblem:

(7)    $\min_{D} \;\; \langle \nabla f(X), D \rangle \;+\; \tfrac{1}{2}\, \mathrm{vec}(D)^{\top}\, \nabla^2 f(X)\, \mathrm{vec}(D) \;+\; g(X + D)$

Once the Newton direction $D$ has been computed, one can employ an Armijo-rule based step size selection to ensure positive definiteness and sufficient descent of the next iterate. We summarize our Newton-like algorithm in Algorithm 2. Note that the initial point has to be a feasible solution, and the positive definiteness of all subsequent iterates is guaranteed by the step size selection procedure (see steps 7-8 in Algorithm 2). For notational convenience, we use shorthand for the objective value, first-order gradient, Hessian matrix and search direction at the current iterate.
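As a point of reference (ours, not the paper's code) and under the reconstructed objective $f(X) = \langle \Sigma, X \rangle - \log\det(X)$, the quantities that Algorithm 2 repeatedly needs can be sketched as follows; note that the Hessian is never formed explicitly.

import numpy as np

def objective(X, Sigma):
    # f(X) = <Sigma, X> - log det(X); the iterate must stay positive definite.
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "iterate must be positive definite"
    return np.sum(Sigma * X) - logdet

def gradient(X, Sigma):
    return Sigma - np.linalg.inv(X)          # grad f(X) = Sigma - X^{-1}

def hessian_vec(X_inv, D):
    # (X^{-1} kron X^{-1}) vec(D) = vec(X^{-1} D X^{-1}); only matrix-matrix products.
    return X_inv @ D @ X_inv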

1:  Input: an initial feasible point that is positive definite and respects the support set.
2:  Output: the solution of (2).
3:  Initialize the iteration counter.
4:  for each outer iteration do
5:     Solve Problem (7) by Algorithm 3 to obtain the search direction.
6:      Perform step-size search to get a step size such that:
7:        (1) the next iterate is positive definite and
8:         (2) there is sufficient decrease in the objective.
9:     Increment the iteration counter by 1.
10:  end for
Algorithm 2 NLOA: A Newton-Like Optimization Algorithm to Solve (2) for Optimization Over the Support Set.

3.1 Computing the Search Direction

This subsection focuses on finding the search direction in (7). With the gradient and Hessian given above, (7) boils down to the following optimization problem:

(8)    $\min_{D} \;\; \langle \Sigma - X^{-1}, D \rangle \;+\; \tfrac{1}{2}\, \mathrm{vec}(D)^{\top} (X^{-1} \otimes X^{-1})\, \mathrm{vec}(D) \quad \mathrm{s.t.} \quad D_{i,j} = 0 \;\; \text{for all } (i,j) \text{ outside the support set}$

At first glance, (8) is very difficult to solve. First, it involves computing and storing an $n^2 \times n^2$ Hessian matrix. Second, it is a constrained optimization program with $n^2$ variables and one equality constraint per entry outside the support set.

We carefully analyze (8) and consider the following solutions. For the first issue, one can exploit the Kronecker product structure of the Hessian matrix to avoid storing it. Recall that $\nabla^2 f(X) = X^{-1} \otimes X^{-1}$. Given any direction $D$, the Hessian-vector product $(X^{-1} \otimes X^{-1})\,\mathrm{vec}(D) = \mathrm{vec}(X^{-1} D X^{-1})$ can be computed efficiently, which only involves matrix-matrix computation. For the second issue, (8) is, in fact, an unconstrained quadratic program over the entries indexed by the support set. To deal with the variables indexed by its complement, one can explicitly force the corresponding entries of the current direction and its gradient to 0, so the equality constraints are always satisfied. Finally, a linear conjugate gradient method can be used to solve (8).
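A quick numerical check (illustrative only) of the identity $(X^{-1} \otimes X^{-1})\,\mathrm{vec}(D) = \mathrm{vec}(X^{-1} D X^{-1})$ confirms that the Hessian-vector product never requires forming the $n^2 \times n^2$ Hessian.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
X = A @ A.T + 4 * np.eye(4)
W = np.linalg.inv(X)
D = rng.standard_normal((4, 4)); D = D + D.T      # a symmetric direction

vec = lambda M: M.reshape(-1, order='F')          # column-major stacking
lhs = np.kron(W, W) @ vec(D)                      # explicit 16 x 16 Hessian, for the check only
rhs = vec(W @ D @ W)                              # matrix-matrix computation used in practice
print(np.allclose(lhs, rhs))                      # prints True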

We summarize our modified linear conjugate gradient method for computing the search direction in Algorithm 3. The algorithm involves a parameter that controls the maximum number of iterations. Empirically, we found that a suitable fixed value of this parameter leads to good overall efficiency.

  Input: the current iterate's inverse, the current gradient, and the maximum number of iterations.
  Output: the Newton direction.
  Initialize the direction to zero and the residual to the projected negative gradient.
  for each conjugate gradient iteration do
     compute the projected Hessian-vector product of the current conjugate direction
     update the step length and the direction estimate
     update the residual and its norm
     update the conjugate direction
  end for
Algorithm 3 A Modified Linear Conjugate Gradient to Find the Newton Direction in (8).
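The sketch below shows one standard way to realize such a support-restricted linear conjugate gradient step for the Newton system (8); it is illustrative rather than a verbatim transcription of Algorithm 3. Here support_mask is a boolean matrix marking the free entries (the selected off-diagonal positions together with the diagonal, which is our assumption), W is the current inverse, and G is the current gradient.

import numpy as np

def newton_direction(W, G, support_mask, max_iter=10, tol=1e-8):
    # Approximately solve (W kron W) vec(D) = -vec(G) while keeping D zero
    # outside support_mask, using conjugate gradient with projected iterates.
    D = np.zeros_like(G)
    R = -(G * support_mask)                # residual, restricted to the free entries
    P = R.copy()
    rs_old = np.sum(R * R)
    if rs_old == 0:
        return D
    for _ in range(max_iter):
        HP = (W @ P @ W) * support_mask    # projected Hessian-vector product
        alpha = rs_old / np.sum(P * HP)
        D += alpha * P
        R -= alpha * HP
        rs_new = np.sum(R * R)
        if np.sqrt(rs_new) < tol:
            break
        P = R + (rs_new / rs_old) * P
        rs_old = rs_new
    return D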

3.2 Computing the Step Size

Once the Newton direction is computed, we need to find a step size that ensures the positive definiteness of the next iterate and a sufficient decrease of the objective function. We use Armijo's rule and try step sizes with a constant decrease rate until we find the smallest trial for which the next iterate is (i) positive definite and (ii) satisfies the following sufficient decrease condition [29]:

where the decrease threshold is proportional to the step size and to the inner product between the gradient and the search direction. In our experiments, we use fixed values for the decrease rate and the sufficient decrease parameter.

We verify positive definiteness of the trial solution while computing its Cholesky factorization (taking $O(n^3)$ flops). We note that the Cholesky factorization dominates the computational cost of the step-size search. To reduce this cost, we can reuse the Cholesky factor from the previous iteration when evaluating the objective function (which requires the computation of $\log\det(X)$) and the gradient (which requires the computation of $X^{-1}$).
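The following backtracking sketch (ours; the constants sigma and beta are illustrative stand-ins for the paper's unspecified values) mirrors the procedure described above: try step sizes $1, \beta, \beta^2, \dots$ and accept the first trial point that passes a Cholesky positive-definiteness check and satisfies an Armijo-type sufficient decrease test. The objective and gradient callbacks can be the ones sketched in Section 3.

import numpy as np

def armijo_step(X, D, Sigma, objective, gradient, sigma=0.25, beta=0.5, max_tries=30):
    # Backtracking line search with a Cholesky-based positive-definiteness check.
    f_x = objective(X, Sigma)
    slope = np.sum(gradient(X, Sigma) * D)      # <grad f(X), D>, expected to be negative
    alpha = 1.0
    for _ in range(max_tries):
        X_trial = X + alpha * D
        try:
            np.linalg.cholesky(X_trial)         # O(n^3) positive-definiteness test
        except np.linalg.LinAlgError:
            alpha *= beta                       # not positive definite: shrink the step
            continue
        if objective(X_trial, Sigma) <= f_x + sigma * alpha * slope:
            return alpha, X_trial               # sufficient decrease achieved
        alpha *= beta
    raise RuntimeError("no acceptable step size found")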

4 Theoretical Analysis

4.1 Convergence Analysis of Algorithm 1

We present the convergence results for Algorithm 1, which are analogous to the results in [3].

Proposition 1.

Let the sequence of iterates be generated by Algorithm 1. Then Algorithm 1 terminates after a finite number of iterations and outputs a coordinate-wise minimum point, i.e. a point whose objective value cannot be decreased by changing any single coordinate or by swapping one coordinate in the support set with one outside it.

Proof.

Note that it takes a finite number of iterations for the convex optimization subproblem to produce an optimal solution on a given support set. Combining this with the monotonicity of Algorithm 1, we conclude that the sequence of function values is monotonically decreasing and that Algorithm 1 stops after a finite number of iterations. Consider the point output by Algorithm 1 and examine how the objective changes when a single coordinate is modified or when a pair of coordinates is swapped.

For the case where the modified coordinate lies in the support set, the output is a global optimal point of the convex optimization subproblem on the given support set, so no such modification can decrease the objective.

For the case of swapping a coordinate in the support set with one outside it, we notice that Algorithm 1 terminates only if the swap coordinates stage produces no strict descent. For any such pair, the corresponding inequality from (4) holds, which implies that no swap between the support set and its complement can decrease the objective value.

For the case where the modified coordinate lies outside the support set, Algorithm 1 must perform the greedy pursuit stage before entering the swap coordinates stage. The greedy stage terminates only if, for every remaining coordinate, the corresponding inequality from (3) holds, which implies that the element leading to the greatest descent has already been selected as a new non-zero element. Hence no feasible change of a single coordinate outside the support set can decrease the objective.

Therefore, the output is a coordinate-wise minimum point, which finishes the proof of this proposition. ∎

4.2 Convergence Analysis of Algorithm 2

This subsection provides convergence analysis for the proposed Newton-like optimization algorithm in Algorithm 2. We consider the sequence of iterates generated by the algorithm and the set of global optimal solutions of the convex problem in (2). Throughout this subsection, we make the following assumption.

Assumption 1.

The objective function is strongly convex with a positive modulus and its gradient is Lipschitz continuous with a finite constant on the bounded set containing all iterates.

Remarks: This assumption is mild and is equivalent to assuming that the iterates are bounded, since the strong convexity modulus and the gradient Lipschitz constant are controlled by the extreme eigenvalues of the iterates (listed in increasing order).
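The following standard inequalities (a sketch, assuming every iterate satisfies $a I \preceq X \preceq b I$ for some $0 < a \le b$) make the remark concrete:

\[
\nabla^{2} f(X) \;=\; X^{-1} \otimes X^{-1},
\qquad
\frac{1}{b^{2}}\, I \;\preceq\; X^{-1} \otimes X^{-1} \;\preceq\; \frac{1}{a^{2}}\, I ,
\]

so that on this bounded set $f$ is strongly convex with modulus $1/b^{2}$ and its gradient is Lipschitz continuous with constant $1/a^{2}$.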

The following lemma characterizes the optimality of the search direction computed from (7). It is nearly identical to Lemma 1 in [30]. For completeness, we present the proof here.

Lemma 1.

It holds that

(9)    $\langle \nabla f(X), D \rangle \;\le\; -\,\mathrm{vec}(D)^{\top}\, \nabla^2 f(X)\, \mathrm{vec}(D),$

where $D$ denotes the minimizer of (7) at the current point $X$.
Proof.

Noticing that $D$ is the minimizer of (7), we have:

Letting $D' = t D$, where $t$ is any constant with $t \in (0, 1)$, we obtain:

where the last inequality uses the convexity of $g$. Rearranging terms yields:

Since $t < 1$, we have $1 - t > 0$. Letting $t \to 1$, we obtain (9).

Theorem 1.

(Global Convergence). We have the following results: (i) There exists a strictly positive step size, bounded below by a constant that is independent of the current solution, such that the positive definiteness and sufficient decrease conditions (refer to steps 7-8 of Algorithm 2) are satisfied. (ii) The sequence of objective values is non-increasing, and any cluster point of the sequence of iterates is a global optimal solution of (6).

Proof.

(i) First, we focus on the positive definiteness condition and bound the norm of the search direction. By Lemma 1, we have:

(10)

where the second step, the third step, and the last step use the stated bounds and definitions. Solving the quadratic inequality in (10) gives a bound on the norm of the search direction. Therefore, we have:

where each step again uses the bounds established above.

Second, we focus on the sufficient decrease condition. For any step size in the admissible range, we have:

(11)

where the first step uses the Lipschitz continuity of the gradient of $f$; the second step uses the lower bound on the Hessian matrix; the third step uses (9); and the last step uses the choice of the step size.

Combining the positive definiteness condition, the sufficient decrease condition, and the bound on the search direction, we finish the proof of the first part of this theorem.

(ii) From (11) and (9), we have:

(12)

Therefore, the sequence of objective values is non-increasing. Summing the inequality in (12) over all iterations and using the fact that the objective is bounded from below, we have:

As the number of iterations goes to infinity, the search directions vanish. Combined with the optimality condition of the subproblem (7) and the strong convexity of the objective, this implies that any cluster point of the sequence is a global optimal solution of the convex optimization problem.

Remarks: Due to the strong convexity and gradient Lipschitz continuity of the objective function, there always exists a strictly positive step size such that both the sufficient decrease condition and the positive definiteness condition are satisfied at every iteration. This is crucial for the global convergence of Algorithm 2.

We now prove the global linear convergence rate of Algorithm 2. The following lemma characterizes the relation between the search direction and the optimality gap at any iterate.

Lemma 2.

If the current iterate is not a global optimal solution of (6), then there exists a positive constant such that the stated bound relating the search direction to the optimality gap holds.

Proof.

First, we prove that the search direction is non-zero. This can be shown by contradiction. Assuming the search direction is zero, the optimality condition of (7) implies that the current iterate is a stationary point. Since (6) is a strongly convex optimization problem, the current iterate would then be the global optimal solution, which contradicts the condition that the current iterate is not a global optimal solution of (6).