# General Convolutional Sparse Coding with Unknown Noise

Convolutional sparse coding (CSC) can learn representative shift-invariant patterns from multiple kinds of data. However, existing CSC methods can only model noise drawn from a Gaussian distribution, which is restrictive and unrealistic. In this paper, we propose a general CSC model capable of dealing with complicated unknown noise. The noise is modeled by a Gaussian mixture model, which can approximate any continuous probability density function. We use the expectation-maximization algorithm to solve the problem and design an efficient method for the weighted CSC problem in the maximization step. The crux is to speed up the convolution in the frequency domain while keeping the other computation involving the weight matrix in the spatial domain. Besides, we update the dictionary and codes simultaneously by a nonconvex accelerated proximal gradient algorithm, without bringing in extra alternating loops. The resultant method has time and space complexity comparable to existing CSC methods. Extensive experiments on synthetic and real noisy biomedical data sets validate that our method can model noise effectively and obtain high-quality filters and representations.

## I Introduction

Given a set of samples, sparse coding tries to learn an over-complete dictionary and then represent each sample as a sparse combination (code) of the dictionary atoms. It has been used in various signal processing [1, 2] and computer vision applications [3, 4]. Despite its popularity, sparse coding cannot capture shifted local patterns in the data. Hence, pre-processing (such as the extraction of sample patches) and post-processing (such as aggregating patch representations back into sample representations) are needed; otherwise, redundant representations will be learned.

Convolutional sparse coding (CSC) [5] is a recent method which improves sparse coding by learning a shift-invariant dictionary. This is done by replacing the multiplication between codes and dictionary with the convolution operation, which can capture local patterns at shifted locations in the data. Therefore, no pre-processing or post-processing is needed, and each sample can be optimized as a whole and represented as the sum of a set of filters from the dictionary convolved with the corresponding codes. CSC has been successfully used on various data types, including trajectories [6], images [7], audio [8], videos [9], multi-spectral and light field images [10], and biomedical data [11, 12, 13, 14]. It also succeeds on a variety of applications accompanying these data, such as recovering non-rigid structure from motion [6], image super-resolution [15], image denoising and inpainting [10], music transcription [8], video deblurring [9], neuronal assembly detection [16], and so on.

All the above CSC works use the square loss, and thus assume that the noise in the data follows a Gaussian distribution. However, this can be restrictive and does not suit many real-world problems. For example, although CSC is popularly used on biomedical data sets [11, 13, 14, 16], where shifting patterns abound due to cell division, it cannot handle the various complicated noises in such data. In fact, biomedical data sets usually contain artifacts introduced during recording, e.g., biomedical heterogeneities, large variations in luminance and contrast, and disturbance from other small living animals [11, 17]. Moreover, as the target biomedical structures are often tiny and delicate, the existence of noise heavily interferes with the quality of the learned filters and representations [14].

Many algorithms have been proposed for CSC with the square loss. While the objective is not jointly convex, it is convex when either the codes or the dictionary is fixed. Thus, CSC is mainly solved by alternately updating the codes and dictionary via block coordinate descent (BCD) [18]. The methods differ mainly in how the two subproblems (code update and dictionary update) are solved. The pioneering deconvolutional network [5] uses gradient descent for both subproblems. ConvCoD [19] uses stochastic gradient descent for the dictionary update and additionally learns an encoder to output the codes. More recent works [7, 9, 20, 21, 22, 23] use the alternating direction method of multipliers (ADMM) [24]. ADMM is favored since it can decompose each subproblem into smaller ADMM subproblems which usually have closed-form solutions. The decomposition allows solving CSC by performing the faster convolution in the frequency domain while enforcing the translation-variant constraints and regularizers in the spatial domain. However, one needs another alternating loop between these ADMM subproblems so as to coordinate on the solution of the original subproblem.

Recently, one work has emerged which models the noise in CSC as non-Gaussian. Jas et al. [14] proposed alpha-stable CSC (αCSC), which models the noise in 1D signals by the symmetric alpha-stable distribution [25]. This family includes a range of heavy-tailed distributions, and is known to be more robust to noise and outliers. However, the probability density function of the alpha-stable distribution does not have an analytical form, and its inference needs to be approximated by a Markov chain Monte Carlo (MCMC) [26] procedure, which is known to be computationally expensive. Moreover, as shown in Figure 1, the alpha-stable distribution still restricts the noise to one particular type chosen in advance, which is not appropriate when the ground-truth noise type is unknown.

In this paper, we propose a general CSC model (GCSC) which enables CSC to deal with complicated unknown noise. Specifically, we model the noise in CSC by a Gaussian mixture model (GMM), which can approximate any continuous probability density function. The proposed model is then solved by the expectation-maximization (EM) algorithm. However, the maximization step becomes a weighted variant of the CSC problem which cannot be efficiently solved by existing algorithms such as BCD and ADMM, since they bring extra inner loops into the M-step. Besides, the weight matrix prevents us from solving the whole objective in the frequency domain to speed up the convolution. In our proposed method, we develop a new solver which updates the dictionary and codes together by a nonconvex accelerated proximal algorithm, without alternating loops. Moreover, we speed up the convolution in the frequency domain while calculating the part involving the weight matrix in the spatial domain. The resultant algorithm achieves time and space complexity comparable to state-of-the-art CSC algorithms (for the square loss). Extensive experiments are performed on both synthetic data and real-world biomedical data such as local field potential signals and retinal scan images. Results show that the proposed method can model the complex underlying data noise, and obtain high-quality filters and representations.

The rest of the paper is organized as follows. Section II briefly reviews CSC and the proximal algorithm. Section III describes the proposed method. Experimental results are presented in Section IV, and the last section gives some concluding remarks.

Notations: For a vector $x\in\mathbb{R}^P$, its $p$th element is denoted $x(p)$, its $\ell_1$-norm is $\|x\|_1=\sum_p|x(p)|$, its $\ell_2$-norm is $\|x\|_2=(\sum_p x(p)^2)^{1/2}$, and $\mathrm{Diag}(x)$ reshapes $x$ to a diagonal matrix with elements $x(p)$'s. Given another vector $z$, the convolution $x*z$ produces a vector $u$ with $u(p)=\sum_m x(m)\,z(p-m+1)$. For a matrix $A$, $A^\top$ denotes its transpose, $A^\star$ denotes its complex conjugate, and $A^H$ is its conjugate transpose (i.e., $A^H=(A^\star)^\top$). $\odot$ denotes the pointwise product. The identity matrix is denoted $I$. $\mathcal{F}(\cdot)$ is the fast Fourier transform (FFT) that maps a vector from the spatial domain to the frequency domain, while $\mathcal{F}^{-1}(\cdot)$ is the inverse operator which maps it back.

## II Related Work

### II-A Convolutional Sparse Coding

Given $N$ samples $\{x_i\}$, where each $x_i\in\mathbb{R}^P$, CSC learns a dictionary of $K$ filters $\{d_k\}$, each of length $M$, such that each $x_i$ can be well represented as

$$\tilde{x}_i=\sum_{k=1}^K d_k*z_{ik}. \tag{1}$$

Here, $*$ is the (spatial) convolution operation, and the $z_{ik}$'s are the codes for $x_i$, each of length $P$. The filters and codes are obtained by solving the following optimization problem

$$\min_{\{d_k\}\in\mathcal{D},\{z_{ik}\}}\ \sum_{i=1}^N\Big(\frac{1}{2}\Big\|x_i-\sum_{k=1}^K d_k*z_{ik}\Big\|_2^2+\beta\sum_{k=1}^K\|z_{ik}\|_1\Big), \tag{2}$$

where $\mathcal{D}=\{\{d_k\}:\|d_k\|_2^2\le 1,\ \forall k\}$ ensures that the filters are normalized, and the $\ell_1$-regularizer encourages the codes to be sparse.
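As a concrete illustration of model (1), the following NumPy sketch builds a sample as a sum of filters convolved with sparse codes. The sizes, variable names, and the circular-convolution boundary handling here are our own assumptions for illustration, not specifics from the paper:

```python
import numpy as np

# Build one sample x = sum_k d_k * z_ik from K filters and sparse codes.
# Circular convolution is used for simplicity; filters are zero-padded to length P.
rng = np.random.default_rng(0)
P, K, M = 64, 3, 8
d = rng.normal(size=(K, M))                      # K filters of length M
z = np.zeros((K, P))
for k in range(K):                               # sparse codes: two spikes each
    z[k, rng.choice(P, size=2, replace=False)] = rng.normal(size=2)

d_pad = np.zeros((K, P))
d_pad[:, :M] = d                                 # zero-pad filters to length P
x = sum(np.fft.ifft(np.fft.fft(d_pad[k]) * np.fft.fft(z[k])).real
        for k in range(K))

# the same signal via direct (spatial) circular convolution
x_direct = np.zeros(P)
for k in range(K):
    for p in range(P):
        x_direct[p] += sum(d_pad[k, m] * z[k, (p - m) % P] for m in range(P))

assert np.allclose(x, x_direct)
```

Because the codes have few nonzeros, the sample is a superposition of shifted copies of the filters, which is exactly the shift-invariance CSC exploits.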

To solve (2), block coordinate descent (BCD) [18] is typically used [5, 7, 19, 20, 21, 22]. The codes and dictionary are updated in an alternating manner as follows.

#### II-A1 Code Update

Given the $d_k$'s, the corresponding $z_{ik}$'s are obtained as

$$\min_{\{z_{ik}\}}\ \frac{1}{2}\Big\|x_i-\sum_{k=1}^K d_k*z_{ik}\Big\|_2^2+\beta\sum_{k=1}^K\|z_{ik}\|_1. \tag{3}$$

Convolution can be performed much faster in the frequency domain via the convolution theorem ($\mathcal{F}(d_k*z_{ik})=\mathcal{F}(d_k)\odot\mathcal{F}(z_{ik})$, where $d_k$ is zero-padded to be $P$-dimensional) [27]. Combining this with Parseval's theorem (for $x\in\mathbb{R}^P$, $\|x\|_2^2=\frac{1}{P}\|\mathcal{F}(x)\|_2^2$) [27] and the linearity of the FFT, problem (3) is reformulated in [20, 7, 21] as:

$$\min_{\{z_{ik}\}}\ \frac{1}{2P}\Big\|\mathcal{F}(x_i)-\sum_{k=1}^K\mathcal{F}(d_k)\odot\mathcal{F}(z_{ik})\Big\|_2^2+\beta\sum_{k=1}^K\|z_{ik}\|_1. \tag{4}$$
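The equivalence of the spatial objective (3) and its frequency-domain form (4) can be checked numerically. This sketch assumes circular convolution and a single filter already padded to length $P$; NumPy's unnormalized FFT convention gives exactly the $1/P$ Parseval factor:

```python
import numpy as np

rng = np.random.default_rng(1)
P = 32
x = rng.normal(size=P)
d = rng.normal(size=P)                           # filter already padded to length P
z = rng.normal(size=P)
beta = 0.1

# spatial-domain objective (3): circular convolution computed via FFT
conv = np.fft.ifft(np.fft.fft(d) * np.fft.fft(z)).real
obj_spatial = 0.5 * np.sum((x - conv) ** 2) + beta * np.sum(np.abs(z))

# frequency-domain objective (4): residual of the FFTs, scaled by 1/P
freq_res = np.fft.fft(x) - np.fft.fft(d) * np.fft.fft(z)
obj_freq = 0.5 / P * np.sum(np.abs(freq_res) ** 2) + beta * np.sum(np.abs(z))

assert np.isclose(obj_spatial, obj_freq)
```

Note that only the squared-error term moves to the frequency domain; the $\ell_1$ regularizer stays on the spatial codes, which is why ADMM-style splitting is popular for this problem.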

#### II-A2 Dictionary Update

Given the $z_{ik}$'s, the $d_k$'s are updated by solving

$$\min_{\{d_k\}\in\mathcal{D}}\ \frac{1}{2}\sum_{i=1}^N\Big\|x_i-\sum_{k=1}^K d_k*z_{ik}\Big\|_2^2.$$

Similar to the code update, it is more efficient to perform convolution in the frequency domain, as:

$$\min_{\{d_k\}}\ \frac{1}{2P}\sum_{i=1}^N\Big\|\mathcal{F}(x_i)-\sum_{k=1}^K\mathcal{F}(d_k)\odot\mathcal{F}(z_{ik})\Big\|_2^2\quad\text{s.t.}\quad\|C\mathcal{F}^{-1}(\mathcal{F}(C^\top d_k))\|_2^2\le 1,\ \forall k, \tag{5}$$

where $C\in\{0,1\}^{M\times P}$ is a cropping matrix (with $C(m,m)=1$ and zeros elsewhere) which removes the extra dimensions to recover the original spatial support of length $M$, while $C^\top$ zero-pads $d_k$ to be $P$-dimensional. The constraint scales all filters to have at most unit norm.

The alternating direction method of multipliers (ADMM) [24] has been commonly used for the code update and dictionary update subproblems (4) and (5) [7, 20, 21, 22]. Each ADMM subproblem has a closed-form solution that is easy to compute. Besides, with the introduction of auxiliary variables, ADMM separates computations in the frequency domain (involving convolutions) and the spatial domain (involving the regularizer and unit-norm constraint).

### Ii-B Proximal Algorithm

The proximal algorithm [28] is used to solve composite optimization problems of the form

$$\min_x F(x)\equiv f(x)+r(x),$$

where $f$ is smooth, $r$ is nonsmooth, and both are convex. To make the proximal algorithm efficient, its underlying proximal step (with stepsize $\eta$)

$$\mathrm{prox}_{\eta r}(z)=\arg\min_x\ \frac{1}{2}\|x-z\|_2^2+\eta r(x)$$

has to be inexpensive.
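For instance, the proximal step of the $\ell_1$-norm is the well-known soft-thresholding operator. A brute-force grid minimization of the prox objective confirms the closed form (the stepsize, regularization weight, and test values below are arbitrary choices of ours):

```python
import numpy as np

# prox_{eta*beta*||.||_1}(v) should equal soft-thresholding sign(v) * max(|v| - eta*beta, 0);
# check against brute-force minimization of 0.5*(x - v)^2 + eta*beta*|x| on a fine grid.
eta, beta = 0.5, 0.4
v = np.array([1.3, -0.1, -2.0, 0.15])
prox = np.sign(v) * np.maximum(np.abs(v) - eta * beta, 0.0)

grid = np.linspace(-3.0, 3.0, 600001)            # grid step 1e-5
for vi, pi in zip(v, prox):
    obj = 0.5 * (grid - vi) ** 2 + eta * beta * np.abs(grid)
    assert abs(grid[np.argmin(obj)] - pi) < 1e-3  # grid minimizer matches closed form

assert np.allclose(prox, [1.1, 0.0, -1.8, 0.0])
```

Because this prox is a cheap elementwise operation, each proximal-gradient iteration costs little more than a gradient evaluation.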

Recently, the proximal algorithm has been extended to nonconvex problems where both $f$ and $r$ can be nonconvex. A state-of-the-art method is the nonconvex inexact accelerated proximal gradient (niAPG) algorithm [29], shown in Algorithm 1. It converges efficiently to a critical point of the objective.

## III Proposed Method

The square loss in (2) implicitly assumes that the noise is normally distributed. In this section, we relax this assumption, and assume the noise to be generated from a Gaussian mixture model (GMM). It is well known that a GMM can approximate any continuous probability density function [30].

As in other applications of GMM, we will use the EM algorithm for inference. However, as will be shown, the M-step involves a difficult weighted CSC problem. In Section III-C, we design an efficient solver based on the nonconvex accelerated proximal algorithm, with time and space complexity comparable to state-of-the-art CSC algorithms (for the square loss).

### III-A GMM Noise

We assume that the noise $\epsilon_i$ associated with $x_i$ follows the GMM distribution:

$$p(\epsilon_i)=\sum_{g=1}^G p(\epsilon_i\,|\,\phi_i=g)\,p(\phi_i=g),$$

where $G$ is the number of Gaussian components, $\phi_i$ is the latent variable denoting which Gaussian component $\epsilon_i$ belongs to, and the mixing coefficients $\pi_g=p(\phi_i=g)$ satisfy $\sum_{g=1}^G\pi_g=1$. The variable $\phi_i$ follows the multinomial distribution $\mathrm{Mult}(\pi_1,\dots,\pi_G)$, and the conditional distribution of $\epsilon_i$ given $\phi_i=g$ is normal with mean $\mu_g$ and diagonal covariance matrix $\Sigma_g$. The $\ell_1$-regularizer in (2) corresponds to a Laplace prior on each $p$th element of $z_{ik}$: $p(z_{ik}(p))\propto\exp(-\beta|z_{ik}(p)|)$.
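Sampling from this noise model is straightforward: draw the component $\phi_i$ from $\mathrm{Mult}(\pi_1,\dots,\pi_G)$, then draw $\epsilon_i$ from the selected Gaussian. A minimal sketch, with component parameters chosen arbitrarily for illustration:

```python
import numpy as np

# Draw N noise vectors eps_i from a G-component GMM with diagonal covariances.
rng = np.random.default_rng(2)
G, P, N = 3, 20, 5000
pi_ = np.array([0.6, 0.3, 0.1])                  # mixing coefficients, sum to 1
mu = np.stack([np.zeros(P), 0.5 * np.ones(P), -1.0 * np.ones(P)])
sigma = np.stack([0.1 * np.ones(P), 0.5 * np.ones(P), 2.0 * np.ones(P)])  # std devs

phi = rng.choice(G, size=N, p=pi_)               # latent component per sample
eps = mu[phi] + sigma[phi] * rng.normal(size=(N, P))

assert eps.shape == (N, P)
# the empirical mean should be close to the mixture mean sum_g pi_g mu_g (= 0.05 here)
assert abs(eps.mean() - 0.05) < 0.05
```

Mixing a narrow, a medium, and a wide component like this already yields heavy-tailed noise, which illustrates why a GMM can mimic distributions far from a single Gaussian.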

### Iii-B Using Expectation Maximization (EM) Algorithm

Let $\Theta$ denote the collection of all parameters: the $d_k$'s, $z_{ik}$'s, $\pi_g$'s, $\mu_g$'s and $\Sigma_g$'s. The log posterior probability for $\Theta$ is:

$$\log P=\sum_{i=1}^N\Big(\log\sum_{g=1}^G p(x_i\,|\,\phi_i=g)\,p(\phi_i=g)+\sum_{k=1}^K\sum_{p=1}^P\log p(z_{ik}(p))\Big). \tag{6}$$

This can be maximized by the Expectation Maximization (EM) algorithm [31].

The E-step computes $\gamma_{gi}\equiv p(\phi_i=g\,|\,x_i)$, the posterior probability that $\epsilon_i$ belongs to the $g$th Gaussian given $x_i$. Using Bayes rule, we have

$$\gamma_{gi}=p(\phi_i=g\,|\,x_i)=\frac{\pi_g\,p(x_i\,|\,\phi_i=g)}{\sum_{g'=1}^G\pi_{g'}\,p(x_i\,|\,\phi_i=g')}, \tag{7}$$

where $p(x_i\,|\,\phi_i=g)=\mathcal{N}(x_i;\tilde{x}_i+\mu_g,\Sigma_g)$, and $\tilde{x}_i$ is as defined in (1).
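The E-step (7) is a standard responsibility computation. A minimal sketch with diagonal covariances, working in log-space for numerical stability (all shapes and values are illustrative assumptions of ours):

```python
import numpy as np

# E-step: responsibilities gamma_gi proportional to pi_g * N(x_i; x_tilde_i + mu_g, Sigma_g).
rng = np.random.default_rng(3)
G, N, P = 2, 4, 10
x = rng.normal(size=(N, P))
x_tilde = rng.normal(size=(N, P))                # current reconstructions sum_k d_k * z_ik
pi_ = np.array([0.7, 0.3])
mu = rng.normal(size=(G, P))
var = rng.uniform(0.5, 2.0, size=(G, P))         # diagonals of the Sigma_g's

res = x[None] - x_tilde[None] - mu[:, None]      # residuals, shape (G, N, P)
log_pdf = -0.5 * (np.log(2 * np.pi * var)[:, None, :]
                  + res ** 2 / var[:, None, :]).sum(-1)
log_post = np.log(pi_)[:, None] + log_pdf        # unnormalized log p(phi_i = g | x_i)
gamma = np.exp(log_post - log_post.max(axis=0))  # stabilized exponentiation
gamma /= gamma.sum(axis=0)                       # normalize over components: Bayes rule (7)

assert gamma.shape == (G, N)
assert np.allclose(gamma.sum(axis=0), 1.0)
```

Subtracting the per-sample maximum before exponentiating avoids underflow when $P$ is large, since the log-densities scale with the dimension.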

The M-step obtains $\Theta$ by maximizing the following lower bound of $\log P$ in (6):

$$\arg\max_\Theta\ \sum_{i=1}^N\Big(\sum_{g=1}^G\gamma_{gi}\log\frac{p(x_i,\phi_i=g)}{\gamma_{gi}}+\sum_{k=1}^K\sum_{p=1}^P\log p(z_{ik}(p))\Big)$$
$$=\arg\max_\Theta\ \sum_{i=1}^N\Big(\sum_{g=1}^G\gamma_{gi}\log\big(\pi_g\,p(x_i\,|\,\phi_i=g)\big)-\beta\sum_{k=1}^K\|z_{ik}\|_1\Big)$$
$$=\arg\max_\Theta\ \sum_{i=1}^N\Big(-\beta\sum_{k=1}^K\|z_{ik}\|_1+\sum_{g=1}^G\Big(\gamma_{gi}\log\pi_g-\frac{\gamma_{gi}}{2}\log|\Sigma_g|-\frac{\gamma_{gi}}{2}\Big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\Big)^\top\Sigma_g^{-1}\Big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\Big)\Big)\Big), \tag{9}$$

where constant terms not depending on $\Theta$ have been dropped.

#### III-B1 Updating the $\pi_g$'s, $\mu_g$'s, $\Sigma_g$'s in $\Theta$

Given the $d_k$'s, $z_{ik}$'s and $\gamma_{gi}$'s, let $\tilde{x}_i$ be as in (1). We obtain the $\pi_g$'s, $\mu_g$'s and $\Sigma_g$'s by optimizing (9) as:

$$\max_{\{\pi_g,\mu_g,\Sigma_g\}}\ \sum_{i=1}^N\sum_{g=1}^G\Big(\gamma_{gi}\log\pi_g-\frac{\gamma_{gi}}{2}\log|\Sigma_g|-\frac{\gamma_{gi}}{2}(x_i-\tilde{x}_i-\mu_g)^\top\Sigma_g^{-1}(x_i-\tilde{x}_i-\mu_g)\Big).$$

Setting the derivative of the objective to zero, the following closed-form solutions can be easily obtained:

$$\pi_g=\frac{1}{N}\sum_{i=1}^N\gamma_{gi}, \tag{10}$$
$$\mu_g=\frac{\sum_{i=1}^N\gamma_{gi}(x_i-\tilde{x}_i)}{\sum_{i=1}^N\gamma_{gi}}, \tag{11}$$
$$\Sigma_g=\frac{\sum_{i=1}^N\gamma_{gi}(x_i-\tilde{x}_i-\mu_g)(x_i-\tilde{x}_i-\mu_g)^\top}{\sum_{i=1}^N\gamma_{gi}}. \tag{12}$$
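Given the responsibilities and current reconstructions, the closed forms (10)-(12) are responsibility-weighted averages. A minimal NumPy sketch keeping only the diagonal of each $\Sigma_g$ (consistent with the diagonal-covariance model); the sizes and the Dirichlet-sampled responsibilities are illustrative assumptions:

```python
import numpy as np

# M-step closed forms (10)-(12) with diagonal covariances.
rng = np.random.default_rng(4)
G, N, P = 2, 100, 6
x = rng.normal(size=(N, P))
x_tilde = rng.normal(size=(N, P))                # current reconstructions
gamma = rng.dirichlet(np.ones(G), size=N).T      # (G, N); each column sums to 1

Ng = gamma.sum(axis=1)                           # effective counts per component
pi_ = Ng / N                                     # (10)
mu = gamma @ (x - x_tilde) / Ng[:, None]         # (11): weighted mean of residuals
res = x[None] - x_tilde[None] - mu[:, None]      # (G, N, P)
var = (gamma[:, :, None] * res ** 2).sum(axis=1) / Ng[:, None]  # diagonal of (12)

assert np.isclose(pi_.sum(), 1.0)
assert mu.shape == (G, P) and var.shape == (G, P)
assert (var > 0).all()
```

Each component's mean and variance are thus fit to the residuals $x_i-\tilde{x}_i$, softly weighted by how likely each sample's noise came from that component.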

#### III-B2 Updating the $d_k$'s and $z_{ik}$'s in $\Theta$

Given the $\pi_g$'s, $\mu_g$'s, $\Sigma_g$'s and $\gamma_{gi}$'s, we obtain the $d_k$'s and $z_{ik}$'s from (9) as:

$$\min_{\{d_k\}\in\mathcal{D},\{z_{ik}\}}\ \sum_{i=1}^N\Big(\beta\sum_{k=1}^K\|z_{ik}\|_1+\sum_{g=1}^G\frac{\gamma_{gi}}{2}\Big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\Big)^\top\Sigma_g^{-1}\Big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\Big)\Big).$$

This can be rewritten as

$$\min_{\{d_k\},\{z_{ik}\}}\ F(\{d_k\},\{z_{ik}\})\equiv f(\{d_k\},\{z_{ik}\})+r(\{d_k\},\{z_{ik}\}), \tag{13}$$

where

$$f(\{d_k\},\{z_{ik}\})\equiv\frac{1}{2}\sum_{i=1}^N\sum_{g=1}^G\Big\|w_{gi}\odot\Big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\Big)\Big\|_2^2, \tag{14}$$

with $w_{gi}(p)=\sqrt{\gamma_{gi}/\Sigma_g(p,p)}$, and

$$r(\{d_k\},\{z_{ik}\})\equiv\beta\sum_{k=1}^K\|z_{ik}\|_1+I_\mathcal{D}(\{d_k\}). \tag{15}$$

Here, $I_\mathcal{D}$ is the indicator function of $\mathcal{D}$ (i.e., $I_\mathcal{D}(\{d_k\})=0$ if $\{d_k\}\in\mathcal{D}$, and $\infty$ otherwise).

Compared with the standard CSC problem in (2), problem (13) can be viewed as a weighted CSC problem (with weights $w_{gi}$). A CSC variant [7] puts weights on the samples $x_i$, while ours are on the residuals. In [14], the model also leads to a weighted CSC problem. However, the authors there mention that it is not clear how to solve a weighted CSC problem in the frequency domain. Instead, they resorted to solving it in the spatial domain, which is less efficient as discussed in Section II-A.

### Iii-C Solving the Weighted CSC Subproblem (13)

In this section, we solve the weighted CSC problem in (13) using niAPG [29] (Algorithm 1). Note that the weights $w_{gi}$ in (14) are what prevents us from transforming the whole objective (13) into the frequency domain. Recall that (3) is transformed into (4) by first moving everything inside the norm to the frequency domain by Parseval's theorem, separately computing the transforms of the terms by the linearity of the FFT, and then replacing the convolution with a pointwise product using the convolution theorem. However, with the weights $w_{gi}$, Parseval's theorem no longer applies to the whole norm, so the convolution theorem cannot be used directly for speedup. Therefore, the key to designing an efficient solver is to transform only the terms involving convolutions to the frequency domain, while leaving the weights in the spatial domain. Hence, we replace the term $\sum_{k=1}^K d_k*z_{ik}$ in (14) by $\mathcal{F}^{-1}\big(\sum_{k=1}^K\mathcal{F}(d_k)\odot\mathcal{F}(z_{ik})\big)$.

The core steps in the niAPG algorithm are computing (i) the gradients of $f$ w.r.t. the $d_k$'s and $z_{ik}$'s, and (ii) the proximal step $\mathrm{prox}_{\eta r}(\cdot)$. We first introduce the following lemmas.

###### Lemma 1.

Let $h(x)=\frac{1}{2}\|c\odot x\|_2^2$ for $c,x\in\mathbb{R}^P$. Then $\frac{\partial h(x)}{\partial x}=\mathrm{Diag}(c\odot c)\,x$, where $\mathrm{Diag}(c\odot c)$ reshapes $c\odot c$ to a diagonal matrix with elements $c(p)^2$'s.

###### Proof.

$h(x)=\frac{1}{2}x^\top\mathrm{Diag}(c\odot c)\,x$, where $\mathrm{Diag}(c\odot c)$ is symmetric. Then $\frac{\partial h(x)}{\partial x}=\mathrm{Diag}(c\odot c)\,x$. ∎

###### Lemma 2.

[32] For $x\in\mathbb{R}^P$, $\mathcal{F}(x)=Fx$ and $\mathcal{F}^{-1}(x)=\frac{1}{P}F^\star x$, where $F\in\mathbb{C}^{P\times P}$ is the DFT matrix with $F(m,n)=\omega^{(m-1)(n-1)}$, $\omega=e^{-2\pi j/P}$ is the $P$th root of unity, and $j=\sqrt{-1}$. Moreover, $F^\top=F$ and $F^\star F=PI$.

###### Proposition 3.

For $f$ in (14),

$$\frac{\partial f(\{d_k\},\{z_{ik}\})}{\partial d_k}=-C\mathcal{F}^{-1}\Big(\sum_{i=1}^N(\mathcal{F}(z_{ik}))^\star\odot\mathcal{F}(u_i)\Big),$$
$$\frac{\partial f(\{d_k\},\{z_{ik}\})}{\partial z_{ik}}=-\mathcal{F}^{-1}\big((\mathcal{F}(d_k))^\star\odot\mathcal{F}(u_i)\big),$$

where $u_i=\sum_{g=1}^G w_{gi}\odot w_{gi}\odot\big(x_i-\sum_{k=1}^K d_k*z_{ik}-\mu_g\big)$.

###### Proof.

By Lemma 1, the gradient of $f$ w.r.t. the reconstruction $\tilde{x}_i=\mathcal{F}^{-1}\big(\sum_{k=1}^K\mathcal{F}(d_k)\odot\mathcal{F}(z_{ik})\big)$ is $-\sum_{g=1}^G\mathrm{Diag}(w_{gi}\odot w_{gi})(x_i-\tilde{x}_i-\mu_g)=-u_i$. By Lemma 2, $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the linear maps $F$ and $\frac{1}{P}F^\star$, and the pointwise product with $\mathcal{F}(z_{ik})$ (resp. $\mathcal{F}(d_k)$) is multiplication by the diagonal matrix $\mathrm{Diag}(\mathcal{F}(z_{ik}))$ (resp. $\mathrm{Diag}(\mathcal{F}(d_k))$); for $d_k$, the zero-padding $C^\top$ additionally contributes the cropping $C$. Back-propagating $-u_i$ through these maps with the chain rule (in denominator layout), and using $F^\top=F$ and $F^\star F=PI$ so that the $\frac{1}{P}$ factors cancel, we obtain

$$\frac{\partial f(\{d_k\},\{z_{ik}\})}{\partial d_k}=-C\mathcal{F}^{-1}\Big(\sum_{i=1}^N(\mathcal{F}(z_{ik}))^\star\odot\mathcal{F}(u_i)\Big), \tag{17}$$

$$\frac{\partial f(\{d_k\},\{z_{ik}\})}{\partial z_{ik}}=-\mathcal{F}^{-1}\big((\mathcal{F}(d_k))^\star\odot\mathcal{F}(u_i)\big). \tag{19}$$

∎

Note that $r$ in (15) is separable ($r$ is separable if $r(x)=\sum_p r_p(x(p))$). This simplifies the associated proximal step, as shown by the following lemma.

###### Lemma 4.

[28] If $r(x)=\sum_p r_p(x(p))$ is separable, then $(\mathrm{prox}_r(z))(p)=\mathrm{prox}_{r_p}(z(p))$.

Using Lemma 4, the component proximal steps can be easily computed in closed form as [28]: $\mathrm{prox}_{\eta\beta\|\cdot\|_1}(z)=\mathrm{sign}(z)\odot\max(|z|-\eta\beta,0)$ for the codes, and $\mathrm{prox}_{I_\mathcal{D}}(d_k)=d_k/\max(\|d_k\|_2,1)$ for the filters. In the sequel, we avoid tuning $\eta$ by using line search [33], which also speeds up convergence empirically. The procedure for solving the weighted CSC subproblem (13) is shown in Algorithm 2. The whole algorithm, which will be called general CSC (GCSC), is shown in Algorithm 3.
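The two component proximal steps above can be sketched directly (values below are arbitrary test inputs of ours):

```python
import numpy as np

# Soft-thresholding: proximal step of eta*beta*||.||_1 for the codes.
def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Projection onto the unit L2 ball: proximal step of the indicator I_D for a filter.
def prox_unit_ball(d):
    return d / max(np.linalg.norm(d), 1.0)

z = np.array([0.5, -1.5, 0.05])
assert np.allclose(prox_l1(z, 0.1), [0.4, -1.4, 0.0])

d = np.array([3.0, 4.0])                          # norm 5 -> projected to norm 1
assert np.allclose(prox_unit_ball(d), [0.6, 0.8])
assert np.allclose(prox_unit_ball(np.array([0.3, 0.4])), [0.3, 0.4])  # inside: unchanged
```

Both operations are elementwise or per-filter, so the proximal step costs only $O(KNP)$ per iteration and never dominates the FFT-based gradient computation.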

### Iii-D Complexity Analysis

In each EM iteration, the E-step in (7) takes $O(GNP)$ time. The M-step is dominated by the gradient computations in (17) and (19). These take $O(KNP\log P)$ time for the underlying FFT and inverse FFT operations, and $O((K+G)NP)$ time for the pointwise products. Thus, each EM iteration takes a total of $O(T(KNP\log P+GNP))$ time, where $T$ is the number of niAPG iterations; empirically, $T$ is around 50. As for space, this is dominated by the $P$-dimensional codes for each of the $N$ samples and $K$ filters, leading to a space complexity of $O(KNP)$.

In comparison, the state-of-the-art batch CSC method (which uses the square loss) [7] takes per-iteration time of the same order and also $O(KNP)$ space. Usually $G$ is small relative to $K$, so the overhead of modeling GMM noise is minor.

### Iii-E Discussion with Existing CSC Works

Table I compares GCSC with existing CSC algorithms. The key differences are in noise modeling and algorithm design. First, all methods except GCSC and αCSC model the noise by a Gaussian distribution, while αCSC uses the symmetric alpha-stable distribution. Since a GMM can approximate any continuous distribution, the noise models considered previously are all special cases of GMM noise. Second, all algorithms except GCSC use BCD, which alternately updates the codes and dictionary, and a majority of them then solve the code and dictionary subproblems by ADMM separately. As GCSC already has one alternating loop between the E-step and M-step, using BCD and then ADMM would bring in two more alternating loops, resulting in a much slower algorithm compared with existing CSC algorithms. Therefore, we use niAPG to directly update the codes and dictionary together. Empirical results in the next section validate the efficiency of solving the weighted CSC problem (13) in GCSC by niAPG rather than BCD.

## IV Experiments

In this section, we perform experiments on both synthetic and real-world data sets. Experiments are performed on a PC with an Intel i7 4GHz CPU and 32GB memory.

### IV-A Baseline Algorithms

The proposed GCSC is compared with the following state-of-the-art CSC methods:

1. CSC-$\ell_2$ [7], which models noise by the Gaussian distribution.

2. Alpha-stable CSC (αCSC) [14], which models noise by the symmetric alpha-stable distribution. The parameters of the alpha-stable distribution (with stability parameter $\alpha$, skewness parameter $\beta$, scale parameter $\sigma$ and position parameter $\mu$) are set as $\beta=0$ and $\mu=\tilde{x}_i$, where $\tilde{x}_i$ is defined as in (1). In [14], $\alpha$ is simply set to 1.2; here, we choose $\alpha$ using a validation set, which is obtained by randomly sampling 20% of the samples.

As a further baseline, we also compare with CSC-$\ell_1$, a CSC variant which models noise by the Laplace distribution. It is formulated as the following optimization problem:

$$\min_{\{d_k\}\in\mathcal{D},\{z_{ik}\}}\ \sum_{i=1}^N\Big(\frac{1}{2}\Big\|x_i-\sum_{k=1}^K d_k*z_{ik}\Big\|_1+\beta\sum_{k=1}^K\|z_{ik}\|_1\Big). \tag{20}$$

Details are in Appendix A.

We follow the automatic pruning strategy in [39] to select the number of mixture components $G$ in GCSC. We start with a relatively large $G$. At each EM iteration, the relative differences between the variances of all pairs of Gaussians are computed:

$$\sum_{p=1}^P\frac{|\sigma_a^2(p)-\sigma_b^2(p)|}{\sigma_a^2(p)+\sigma_b^2(p)},\quad\forall a,b\in\{1,\dots,G\}.$$

Here, $\sigma_g^2(p)$ denotes the $p$th diagonal element of $\Sigma_g$. For the Gaussian pair with the smallest relative difference, if this value is small (less than 0.1), the two components are merged as

$$\mu_a\leftarrow\frac{\pi_a\mu_a+\pi_b\mu_b}{\pi_a+\pi_b},\qquad \Sigma_a\leftarrow\mathrm{Diag}(\sigma_a^2(1),\dots,\sigma_a^2(P))\ \text{with}\ \sigma_a^2(p)\leftarrow\frac{\pi_a\sigma_a^2(p)+\pi_b\sigma_b^2(p)}{\pi_a+\pi_b},\qquad \pi_a\leftarrow\pi_a+\pi_b.$$
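The pruning step can be sketched as follows (toy parameters of our own choosing; the weighted averages use the old mixing coefficients before $\pi_a$ is updated, which we take to be the intended order):

```python
import numpy as np

# Merge the Gaussian pair (a, b) with the smallest relative variance difference
# when that difference falls below 0.1.
def rel_diff(va, vb):
    return np.sum(np.abs(va - vb) / (va + vb))

pi_ = np.array([0.5, 0.3, 0.2])
mu = np.array([[0.0], [0.01], [2.0]])
var = np.array([[1.0], [1.02], [4.0]])

pairs = [(a, b) for a in range(3) for b in range(a + 1, 3)]
a, b = min(pairs, key=lambda ab: rel_diff(var[ab[0]], var[ab[1]]))
assert (a, b) == (0, 1)                          # components 0 and 1 are nearly identical

if rel_diff(var[a], var[b]) < 0.1:
    mu[a] = (pi_[a] * mu[a] + pi_[b] * mu[b]) / (pi_[a] + pi_[b])
    var[a] = (pi_[a] * var[a] + pi_[b] * var[b]) / (pi_[a] + pi_[b])
    pi_[a] = pi_[a] + pi_[b]
    pi_, mu, var = np.delete(pi_, b), np.delete(mu, b, 0), np.delete(var, b, 0)

assert np.isclose(pi_.sum(), 1.0) and len(pi_) == 2
```

Repeating this check at each EM iteration lets an over-specified mixture shrink automatically, so $G$ need not be tuned precisely in advance.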

#### Stopping Criteria

The optimization problems in αCSC and GCSC are solved by the EM algorithm. We stop the EM iterations when the relative change of the log posterior in consecutive iterations is smaller than a given tolerance. In the M-step, we stop the weighted CSC solver (Algorithm 2 for GCSC, and the algorithm in Appendix B of the αCSC paper [14]) when the relative change of the respective objective value is smaller than a given tolerance.

The optimization problems in CSC-$\ell_2$ and CSC-$\ell_1$ are solved by BCD. The alternating minimization is stopped when the relative change of the objective value ((2) for CSC-$\ell_2$ and (20) for CSC-$\ell_1$) in consecutive iterations is smaller than a given tolerance. As for the subproblems of updating the $z_{ik}$'s (given the $d_k$'s) and the $d_k$'s (given the $z_{ik}$'s), we stop when the relative change of the subproblem objective value is smaller than a given tolerance.

### Iv-B Synthetic Data

In this experiment, we first demonstrate the performance on synthetic data. Following [14], we use $K=3$ filters $d_k$'s (triangle, square, and sine), each of length $M$ (Figure 2(a)). Each $d_k$ is normalized to have zero mean and unit variance. Each $z_{ik}$ has only one nonzero entry, whose magnitude is drawn uniformly at random (Figure 2(b)). $N$ clean samples, each of length $P$, are generated as: