Multi-dimensional sparse structured signal approximation using split Bregman iterations

03/21/2013
by Yoann Isaac, et al.

This paper focuses on the sparse approximation of signals using overcomplete representations, in a way that preserves the (prior) structure of multi-dimensional signals. The underlying optimization problem is tackled using a multi-dimensional split Bregman optimization approach. An extensive empirical evaluation shows how the proposed approach compares to the state of the art depending on the signal features.



1 Introduction

Dictionary-based representations proceed by approximating a signal via a linear combination of dictionary elements, referred to as atoms. Sparse dictionary-based representations, where each signal involves few atoms, have been thoroughly investigated for their good properties, as they enable robust transmission (compressed sensing [donoho2006compressed]) or image inpainting [mairal2008sparse]. The dictionary is either given, based on domain knowledge, or learned from the signals [tosic2011dictionary].

The so-called sparse approximation algorithm aims at finding a sparse approximate representation of the considered signals using this dictionary, by minimizing a weighted sum of the approximation loss and the representation sparsity (see [rakotomamonjy2011surveying] for a survey). When available, prior knowledge about the application domain can also be used to guide the search toward “plausible” decompositions.

This paper focuses on sparse approximation enforcing a structured decomposition property, defined as follows. Let the signals be structured (e.g. being recorded in consecutive time steps); the structured decomposition property then requires that the signal structure is preserved in the dictionary-based representation (e.g. the atoms involved in the approximation of consecutive signals have “close” weights). The structured decomposition property is enforced through adding a total variation (TV) penalty to the minimization objective.

In the 1D case, the minimization of the above overall objective can be tackled using the fused-LASSO approach, first introduced in [tibshirani2005sparsity]. In the case of multi-dimensional (also called multi-channel) signals (our motivating application considers electro-encephalogram (EEG) signals, where the number of sensors ranges up to a few tens), however, the minimization problem presents additional difficulties. The first contribution of this paper is to show how this problem can be handled efficiently, by extending the (mono-dimensional) split Bregman fused-LASSO solver presented in [ye2011split] to the multi-dimensional case. The second contribution is a comprehensive experimental study, comparing state-of-the-art algorithms to the presented approach, referred to as Multi-SSSA, and establishing their relative performance depending on diverse features of the structured signals.

This paper is organized as follows. Section 2 introduces the formal background. The proposed optimization approach is described in Section 3. Section 4 presents our experimental setting and reports the results. The presented approach is discussed with respect to related work in Section 5, and the paper concludes with perspectives for further research.

2 Problem statement

Let $Y \in \mathbb{R}^{C \times T}$ be a matrix made of $T$ $C$-dimensional signals, and $D \in \mathbb{R}^{C \times N}$ an overcomplete dictionary of $N$ normalized atoms ($N > C$). We consider the linear model:

$$Y = DX + E,$$

in which $X \in \mathbb{R}^{N \times T}$ stands for the decomposition matrix and $E$ is a Gaussian noise matrix. The sparse structured decomposition problem consists of approximating the signals $y_t$, $t \in \{1, \ldots, T\}$, by decomposing them on the dictionary $D$, such that the structure of the decompositions reflects that of the signals $Y$. This goal is formalized as the minimization of the objective function:

$$\min_{X} \; \frac{1}{2}\|Y - DX\|_2^2 \;+\; \lambda_1 \|X\|_1 \;+\; \lambda_2 \|XP\|_1 \qquad (1)$$

where $\lambda_1$ and $\lambda_2$ are regularization coefficients and $P$ encodes the signal structure (provided by the prior knowledge) as in [chen2010graph]. Here $\|A\|_p$ denotes the entrywise norm $\big(\sum_{i,j} |A_{i,j}|^p\big)^{1/p}$; the case $p = 2$ corresponds to the classical Frobenius norm. In the remainder of the paper, the considered structure is that of the temporal ordering of the signals, i.e. $P$ is a first-order finite-difference matrix such that $(XP)_{\cdot,t} = X_{\cdot,t+1} - X_{\cdot,t}$.
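To fix notations, here is a minimal NumPy sketch of objective (1); the construction of the finite-difference matrix $P$ and the function names are our illustrative assumptions, not code from the paper.

```python
# A minimal sketch of objective (1), assuming the shapes introduced above:
# Y (C x T) signals, D (C x N) dictionary, X (N x T) decomposition matrix.
import numpy as np

def first_difference_matrix(T):
    """P such that (X @ P)[:, t] = X[:, t+1] - X[:, t]."""
    P = np.zeros((T, T - 1))
    P[np.arange(T - 1), np.arange(T - 1)] = -1.0
    P[np.arange(1, T), np.arange(T - 1)] = 1.0
    return P

def objective(Y, D, X, P, lam1, lam2):
    """Weighted sum of approximation loss, sparsity and structure (TV) penalties."""
    fit = 0.5 * np.linalg.norm(Y - D @ X, "fro") ** 2
    sparsity = lam1 * np.abs(X).sum()
    structure = lam2 * np.abs(X @ P).sum()
    return fit + sparsity + structure
```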

3 Optimization strategy

3.1 Algorithm description

Bregman iterations have been shown to be very efficient for $\ell_1$-regularized problems [goldstein2009split]. For convex problems with linear constraints, the split Bregman iteration technique is equivalent to the method of multipliers and to the augmented Lagrangian method [wu2010augmented]. The iteration scheme presented in [ye2011split] adopts the augmented Lagrangian formalism; we have chosen here to present ours with the initial split Bregman formulation.

First, let us restate the sparse approximation problem by introducing the auxiliary variables $A = X$ and $B = XP$:

$$\min_{X,A,B} \; \frac{1}{2}\|Y - DX\|_2^2 + \lambda_1 \|A\|_1 + \lambda_2 \|B\|_1 \quad \text{s.t.} \quad A = X, \; B = XP. \qquad (2)$$

This reformulation is a key step of the split Bregman method: it decouples the three terms and allows them to be optimized separately within the Bregman iterations. To set up this iteration scheme, Eq. (2) must be transformed into an unconstrained problem:

$$\min_{X,A,B} \; \frac{1}{2}\|Y - DX\|_2^2 + \lambda_1 \|A\|_1 + \lambda_2 \|B\|_1 + \frac{\mu_1}{2}\|X - A\|_2^2 + \frac{\mu_2}{2}\|XP - B\|_2^2. \qquad (3)$$

The split Bregman scheme can then be expressed as [goldstein2009split]:

$$(X^{k+1}, A^{k+1}, B^{k+1}) = \operatorname*{arg\,min}_{X,A,B} \; \frac{1}{2}\|Y - DX\|_2^2 + \lambda_1 \|A\|_1 + \lambda_2 \|B\|_1 + \frac{\mu_1}{2}\|X - A + U^k\|_2^2 + \frac{\mu_2}{2}\|XP - B + V^k\|_2^2$$
$$U^{k+1} = U^k + X^{k+1} - A^{k+1}$$
$$V^{k+1} = V^k + X^{k+1}P - B^{k+1}$$

Thanks to the split of the three terms, the minimization in the first step can be performed iteratively, by alternately updating the variables in the system:

$$X^{k+1} = \operatorname*{arg\,min}_{X} \; \frac{1}{2}\|Y - DX\|_2^2 + \frac{\mu_1}{2}\|X - A^k + U^k\|_2^2 + \frac{\mu_2}{2}\|XP - B^k + V^k\|_2^2 \qquad (4)$$
$$A^{k+1} = \operatorname*{arg\,min}_{A} \; \lambda_1 \|A\|_1 + \frac{\mu_1}{2}\|X^{k+1} - A + U^k\|_2^2 \qquad (5)$$
$$B^{k+1} = \operatorname*{arg\,min}_{B} \; \lambda_2 \|B\|_1 + \frac{\mu_2}{2}\|X^{k+1}P - B + V^k\|_2^2 \qquad (6)$$

Only a few iterations of this system are necessary for convergence. In our implementation, this update is performed only once at each iteration of the global optimization algorithm.

Eqs. (5) and (6) can be solved with the entrywise soft-thresholding operator $\mathrm{SoftThreshold}_{\alpha}(x) = \mathrm{sign}(x)\max(|x| - \alpha, 0)$:

$$A^{k+1} = \mathrm{SoftThreshold}_{\lambda_1/\mu_1}\big(X^{k+1} + U^k\big) \qquad (7)$$
$$B^{k+1} = \mathrm{SoftThreshold}_{\lambda_2/\mu_2}\big(X^{k+1}P + V^k\big) \qquad (8)$$
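A possible NumPy implementation of this operator is sketched below; it follows the notations above and is not the authors' reference code.

```python
# Entrywise soft-thresholding operator used in Eqs. (7) and (8).
import numpy as np

def soft_threshold(M, alpha):
    """Entrywise sign(m) * max(|m| - alpha, 0)."""
    return np.sign(M) * np.maximum(np.abs(M) - alpha, 0.0)

# With U, V the Bregman variables, the updates read:
#   A = soft_threshold(X + U, lam1 / mu1)        # Eq. (7)
#   B = soft_threshold(X @ P + V, lam2 / mu2)    # Eq. (8)
```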

Solving Eq. (4) requires the minimization of a convex differentiable function, which can be performed via classical optimization methods. We propose here to solve it deterministically; the main difficulty in extending [ye2011split] to the multi-dimensional case lies in this step. Let us define $F$ from Eq. (4) such that:

$$F(X) = \frac{1}{2}\|Y - DX\|_2^2 + \frac{\mu_1}{2}\|X - A^k + U^k\|_2^2 + \frac{\mu_2}{2}\|XP - B^k + V^k\|_2^2.$$

Differentiating this expression with respect to $X$ yields:

$$\nabla F(X) = (D^T D + \mu_1 I)\,X + X\,(\mu_2 P P^T) - W^k \qquad (9)$$

where

$$W^k = D^T Y + \mu_1 (A^k - U^k) + \mu_2 (B^k - V^k) P^T$$

and $I$ is the identity matrix. The minimum of Eq. (4) is obtained by solving $\nabla F(X) = 0$, which is a Sylvester equation:

$$H X + X G = W^k \qquad (10)$$

with $H = D^T D + \mu_1 I$ and $G = \mu_2 P P^T$. Fortunately, in our case, $H$ and $G$ are real symmetric matrices. Thus, they can be diagonalized as follows:

$$H = Q_1 \Lambda_1 Q_1^T, \qquad G = Q_2 \Lambda_2 Q_2^T,$$

where $Q_1$ and $Q_2$ are orthogonal matrices. Eq. (10) becomes:

$$\Lambda_1 \tilde{X} + \tilde{X} \Lambda_2 = \tilde{W}^k \qquad (11)$$

with $\tilde{X} = Q_1^T X Q_2$ and $\tilde{W}^k = Q_1^T W^k Q_2$. $\tilde{X}$ is then obtained column by column:

$$\tilde{X}_{\cdot,j} = \big(\Lambda_1 + (\Lambda_2)_{j,j} I\big)^{-1} \tilde{W}^k_{\cdot,j},$$

where the notation $\cdot,j$ indices the columns of matrices. Going back to $X$ is performed with $X = Q_1 \tilde{X} Q_2^T$. Since $H$ and $G$ are independent of the iteration $k$, their diagonalizations are computed once and for all, as are the diagonal inverses $\big(\Lambda_1 + (\Lambda_2)_{j,j} I\big)^{-1}$ and the term $D^T Y$. Thus, this update does not require heavy computations. The full algorithm is summarized in the next subsection.
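The column-wise solve above amounts to an entrywise division in the eigenbases of $H$ and $G$. A hedged NumPy sketch is given below; function and variable names are ours, not the paper's.

```python
# X-update of Eq. (4): the Sylvester equation (10) is solved through the
# eigendecompositions of the symmetric matrices H and G, computed only once.
import numpy as np

def sylvester_solver_factory(D, P, mu1, mu2):
    """Precompute the iteration-independent quantities and return a solver."""
    H = D.T @ D + mu1 * np.eye(D.shape[1])     # H = D^T D + mu1 I   (N x N)
    G = mu2 * (P @ P.T)                        # G = mu2 P P^T       (T x T)
    lam_H, Q1 = np.linalg.eigh(H)              # H = Q1 diag(lam_H) Q1^T
    lam_G, Q2 = np.linalg.eigh(G)              # G = Q2 diag(lam_G) Q2^T
    denom = lam_H[:, None] + lam_G[None, :]    # denom[i, j] = lam_H_i + lam_G_j

    def solve(W):
        """Return X such that H X + X G = W."""
        W_tilde = Q1.T @ W @ Q2
        X_tilde = W_tilde / denom              # column-wise diagonal solve, Eq. (11)
        return Q1 @ X_tilde @ Q2.T

    return solve
```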

3.2 Multi-SSSA summary

Inputs: $Y$, $D$, $P$.    Parameters: $\lambda_1$, $\lambda_2$, $\mu_1$, $\mu_2$, $N_{in}$, $\epsilon$, $k_{max}$

1:  Init $X^0$, $A^0$, $B^0$ and set $U^0 = 0$, $V^0 = 0$.
2:  $H \leftarrow D^T D + \mu_1 I$ and $G \leftarrow \mu_2 P P^T$.
3:  Compute $Q_1$, $\Lambda_1$, $Q_2$ and $\Lambda_2$ from $H$ and $G$.
4:  Precompute $M_j \leftarrow (\Lambda_1 + (\Lambda_2)_{j,j} I)^{-1}$ ($j = 1, \ldots, T$) and $D^T Y$.
5:  $k \leftarrow 0$
6:  while $\|X^k - X^{k-1}\|_2 > \epsilon$ and $k < k_{max}$ do
7:      $X \leftarrow X^k$; $A \leftarrow A^k$; $B \leftarrow B^k$
8:      for $i = 1, \ldots, N_{in}$ do
9:          $W \leftarrow D^T Y + \mu_1 (A - U^k) + \mu_2 (B - V^k) P^T$
10:         $\tilde{W} \leftarrow Q_1^T W Q_2$
11:         for $j = 1, \ldots, T$ do
12:             $\tilde{X}_{\cdot,j} \leftarrow M_j \tilde{W}_{\cdot,j}$
13:         end for
14:         $X \leftarrow Q_1 \tilde{X} Q_2^T$
15:         $A \leftarrow \mathrm{SoftThreshold}_{\lambda_1/\mu_1}(X + U^k)$
16:         $B \leftarrow \mathrm{SoftThreshold}_{\lambda_2/\mu_2}(XP + V^k)$
17:     end for
18:     $X^{k+1} \leftarrow X$; $A^{k+1} \leftarrow A$; $B^{k+1} \leftarrow B$
19:     $U^{k+1} \leftarrow U^k + X^{k+1} - A^{k+1}$
20:     $V^{k+1} \leftarrow V^k + X^{k+1}P - B^{k+1}$
21:     $k \leftarrow k + 1$
22: end while
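A self-contained sketch of this loop is given below, assuming the helpers from the previous sketches (first_difference_matrix, soft_threshold, sylvester_solver_factory) are in scope. Parameter names, initializations and the stopping rule are illustrative choices, not the authors' code.

```python
# Illustrative Multi-SSSA loop following the summary above.
import numpy as np

def multi_sssa(Y, D, P, lam1, lam2, mu1, mu2, n_inner=1, eps=1e-6, k_max=500):
    N, T = D.shape[1], Y.shape[1]
    X = np.zeros((N, T))
    A = np.zeros((N, T))
    B = np.zeros((N, P.shape[1]))      # B approximates X @ P
    U = np.zeros_like(A)               # Bregman variable for A = X
    V = np.zeros_like(B)               # Bregman variable for B = XP
    solve_sylvester = sylvester_solver_factory(D, P, mu1, mu2)
    DtY = D.T @ Y                      # iteration-independent, precomputed once

    for k in range(k_max):
        X_prev = X
        for _ in range(n_inner):       # alternating updates (4)-(6)
            W = DtY + mu1 * (A - U) + mu2 * (B - V) @ P.T
            X = solve_sylvester(W)                      # Eq. (4): Sylvester solve
            A = soft_threshold(X + U, lam1 / mu1)       # Eq. (5) via Eq. (7)
            B = soft_threshold(X @ P + V, lam2 / mu2)   # Eq. (6) via Eq. (8)
        U = U + X - A                  # Bregman updates
        V = V + X @ P - B
        if np.linalg.norm(X - X_prev) <= eps * max(np.linalg.norm(X_prev), 1.0):
            break
    return X
```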

4 Experimental evaluation

The following experiment aims at assessing the efficiency of our approach in decomposing signals built with particular regularities. We compare it both to algorithms coding each signal separately, the orthogonal matching pursuit (OMP) [pati1993orthogonal] and the LARS [efron2004least] (a LASSO solver), and to methods performing the decompositions simultaneously, the simultaneous OMP (SOMP) and a proximal method solving the group-LASSO problem (FISTA [beck2009fast]).

4.1 Data generation

From a fixed random overcomplete dictionary $D$, a set of signals having piecewise-constant structures has been created. Each signal is synthesized from the dictionary and a built decomposition matrix. The TV penalization of the fused-LASSO regularization makes it more suitable for data having abrupt changes. Thus, the decomposition matrices of the signals have been built as linear combinations of specific activities, each involving a single atom active with constant unit weight over a time interval:

$$S_{a,c,d}(t) = H\big(t - (c - d/2)\big) - H\big(t - (c + d/2)\big),$$

where $H$ is the Heaviside step function, $a$ is the index of an atom, $c$ is the center of the activity and $d$ its duration. Each decomposition matrix can then be written:

$$X = \sum_{i=1}^{n} w_i \, S_{a_i, c_i, d_i},$$

where $n$ is the number of activities appearing in one signal and the $w_i$ stand for the activation weights. An example of such a signal is given in Figure 1 below.

Figure 1: Example of a built signal.
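The generation procedure can be sketched as follows; the weight distribution, the noise level and the function names are illustrative assumptions, not the paper's exact experimental settings.

```python
# Synthetic data generation with the boxcar activities defined above.
import numpy as np

rng = np.random.default_rng(0)

def make_activity(N, T, a, c, d):
    """N x T matrix: atom a active with unit weight on [c - d/2, c + d/2)."""
    S = np.zeros((N, T))
    t = np.arange(T)
    S[a, (t >= c - d / 2) & (t < c + d / 2)] = 1.0
    return S

def make_signal(D, T, n_activities, d_range, noise_std=0.01):
    """Build X as a weighted sum of activities and synthesize Y = D X + E."""
    C, N = D.shape
    X = np.zeros((N, T))
    for _ in range(n_activities):
        a = int(rng.integers(N))           # atom index
        d = int(rng.integers(*d_range))    # activity duration
        c = int(rng.integers(T))           # activity center
        w = rng.standard_normal()          # activation weight
        X += w * make_activity(N, T, a, c, d)
    Y = D @ X + noise_std * rng.standard_normal((C, T))
    return Y, X
```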

4.2 Experimental setting

Each method has been applied to the previously created signals. Then the distances between the estimated decomposition matrices and the true ones have been computed. The goal was to understand the influence of the number of activities and of the range of their durations on the efficiency of the fused-LASSO regularization compared to other sparse coding algorithms. The experiment described above has been carried out on a grid over these two parameters.

For each point in the parameter grid, two sets of signals have been created: a train set allowing to determine, for each method, the best regularization coefficients, and a test set designed to evaluate the methods with these coefficients.
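As a hedged sketch of the evaluation step: the exact distance used in the paper is not recoverable from this text, so a normalized Frobenius error between the estimated and true decomposition matrices is assumed below for illustration.

```python
# Assumed evaluation metric (illustrative only).
import numpy as np

def decomposition_distance(X_hat, X_true):
    """Normalized Frobenius distance between estimated and true matrices."""
    return np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
```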

The remaining model and activity parameters have been kept fixed. Dictionaries have been randomly generated using independent Gaussian distributions on individual elements, and present low coherence.

4.3 Results and discussion

In order to evaluate the proposed algorithm, for each point in the above grid of parameters, the mean (among test signals) of the previously defined distance has been computed for each method and compared to the mean obtained by the Multi-SSSA. A paired t-test has then been performed to check the significance of these differences.

Results are displayed in Figure 2. On the ordinate axis, the number of patterns increases from top to bottom; on the abscissa axis, the duration grows from left to right. The left image displays the mean distances obtained by the Multi-SSSA. Unsurprisingly, the difficulty of finding the ideal decomposition increases with the number of patterns and their durations. The middle and right images present its performance compared to the other methods, by displaying the point-to-point differences of mean distances in grayscale. These differences are computed such that negative values (darker blocks) mean that our method outperforms the other one. The white diamonds correspond to non-significant differences of mean distances. Results of the OMP and the LARS are very similar, as are those of the SOMP and the group-LASSO solver; we therefore only display the matrices comparing our method to the LARS and to the group-LASSO solver.

Figure 2: Left: mean distances obtained with the Multi-SSSA. Middle: difference between the mean distances obtained with the Multi-SSSA and those obtained with the LARS. Right: difference between the mean distances obtained with the Multi-SSSA and those obtained with the group-LASSO solver. The white diamonds correspond to non-significant differences between the mean distances.

Compared to the OMP and the LARS, our method obtains the same results when only a few atoms are active at the same time. This happens in our artificial signals when only a few patterns have been used to create the decomposition matrices and/or when the pattern durations are small. On the contrary, when many atoms are active simultaneously, the OMP and the LARS are outperformed by our algorithm, which uses inter-signal prior information to find better decompositions.
Compared to the SOMP and the group-LASSO solver, the results depend more on the duration of the patterns. When patterns are long and not too numerous, their performance is similar to that of the fused-LASSO. The SOMP is outperformed in all other cases, whereas the group-LASSO solver is outperformed only when patterns have short or medium durations.

5 Relation to prior work

The simultaneous sparse approximation of multi-dimensional signals has been widely studied in recent years [chen2006theoretical], and numerous methods have been developed [tropp2006algorithms1, tropp2006algorithms2, gribonval2008atoms, cotter2005sparse, rakotomamonjy2011surveying]. More recently, the concept of structured sparsity has considered the encoding of priors in complex regularizations [huang2011learning, jenatton00377732]. Our problem belongs to this last category, with a regularization combining a classical sparsity term and a total variation one. The latter term has been studied intensively for image denoising, as in the ROF model [rudin1992nonlinear, darbon2005fast].
The combination of these terms has been introduced as the fused-LASSO [tibshirani2005sparsity]. Despite its convexity, the two non-differentiable terms make it difficult to solve. The initial paper [tibshirani2005sparsity] transforms it into a quadratic problem and uses standard optimization tools (SQOPT); because this increases the number of variables, the approach cannot deal with large-scale problems. A path algorithm has been developed, but it is limited to the particular case of the fused-LASSO signal approximator [hoefling2010path]. More recently, scalable approaches based on proximal sub-gradient methods [liu2010efficient], ADMM [wahlberg2012admm] and split Bregman iterations [ye2011split] have been proposed for the general fused-LASSO.
To the best of our knowledge, the multi-dimensional fused-LASSO in the context of overcomplete representations has never been studied. The closest work we found considers a problem of multi-task regression [chen2010graph]. The final paper was published under a different title [chen2010efficient]; it proposes a method based on the approximation of the fused-LASSO TV penalty by a smooth convex function, as described in [nesterov2005smooth].

6 Conclusion and Perspectives

This paper has shown the efficiency of the proposed Multi-SSSA, based on a split Bregman approach, in achieving the sparse structured approximation of multi-dimensional signals under general conditions. Specifically, the extensive validation has considered different regimes in terms of signal complexity and dynamicity (number of patterns simultaneously involved and average duration thereof), and it has established a relative competence map of the proposed Multi-SSSA approach compared to the state of the art. Further work will apply the approach to the motivating application domain, namely the representation of EEG signals.

References