
Local Region Sparse Learning for Image-on-Scalar Regression

by   Yao Chen, et al.

Identification of regions of interest (ROI) associated with certain diseases has a great impact on public health. Imposing sparsity of pixel values and extracting active regions simultaneously greatly complicate the image analysis. We address these challenges by introducing a novel region-selection penalty in the framework of image-on-scalar regression. Our penalty combines the Smoothly Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the SCAD of total variation (TV) regularization, enforcing spatial contiguity, into one group, which segments contiguous spatial regions against a zero-valued background. An efficient algorithm is based on the alternating direction method of multipliers (ADMM), which decomposes the non-convex problem into two iterative optimization problems with explicit solutions. Another virtue of the proposed method is that a divide and conquer learning algorithm is developed, thereby allowing scaling to large images. Several examples are presented, and the experimental results are compared with other state-of-the-art approaches.





1 Introduction

There has been significant research activity aimed at the association between image data and other scalar variables (e.g., cognitive score, diagnostic status) in the study of neurodegenerative and neuropsychiatric diseases, such as Alzheimer's disease (AD) [10]. The growing public threat of AD has raised the urgency to discover ROI of magnetic resonance images (MRI) that may identify subjects at greatest risk for future cognitive decline and accelerate the testing of preventive strategies. Machine learning methods have been developed, and penalized optimization, in the framework of empirical risk minimization plus a penalty, is popular. However, spatially heterogeneous smoothness and local region selection greatly complicate the image analysis. To address these challenges, several regularization methods have been proposed to impose sparsity on both pixel values and their spatial derivatives. For instance, GraphNet [5] combines the Lasso penalty and a squared ℓ2 penalty on image gradients, and TV-ℓ1 [4, 9] uses a weighted combination of the Lasso penalty and the TV penalty.

It is well known that both Lasso and TV models have an inherent bias and often lead to less stable predictions [16]. For example, the spatially adaptive TV model [15] was proposed to remove the inherent bias in the TV model by utilizing a spatially varying weight function that is inversely proportional to the magnitude of image derivatives. It is a two-step procedure, where the weight function obtained from the first step using standard TV is then used to guide smoothing in the second step. It is of interest to note that, in the statistical literature, the Smoothly Clipped Absolute Deviation (SCAD) penalty [6] has been proposed in the context of high-dimensional linear regression to address the shortcomings of Lasso (which is not consistent in variable selection). SCAD yields an estimator with desirable properties such as continuity, asymptotic unbiasedness, sparsity, and the so-called oracle property (it behaves the same as when the zero coefficients are known in advance). There are a few papers on the use of SCAD for image analysis [3, 9]; none of them considers local region learning. We will adapt the SCAD penalty for our local region selection problem in the framework of image-on-scalar regression.

In this paper, we propose a novel regularization method called SCAD2TV, which combines the SCAD regularization, enforcing sparsity, and the SCAD of TV regularization, enforcing spatial contiguity, into one penalty that segments contiguous spatial regions against a zero-valued background. This paper makes three main contributions:

  • The new penalty, SCAD2TV, forces zeros on coordinates and spatial derivatives jointly, which makes it easy to identify ROI for the image-on-scalar regression model. It solves the bias issue inherent in Lasso and TV methods.

  • Our proposed algorithms are based on ADMM, which decomposes a non-convex problem with the non-convex penalty into two iterative optimization problems with explicit solutions. The divide and conquer learning algorithm is also developed, thereby allowing scaling to large images.

  • Compared with GraphNet and TV-ℓ1, SCAD2TV has better or competitive performance in both prediction and selection errors.

2 Image-on-Scalar Regression and SCAD2TV

2.1 Image-on-Scalar Regression

Regression models with image responses and scalar predictors are routinely encountered in many applications [2, 7]. Consider an image-on-scalar regression model with varying coefficients:

y_i(v) = x_i^T β(v) + η_i(v) + ε_i(v),

where x_i = (x_i1, ..., x_ip)^T is the covariate vector, y_i(v) is the image response at pixel v ∈ V (a 2D or 3D domain), and β(v) = (β_1(v), ..., β_p(v))^T is the coefficient image vector. Here η_i(v) is a zero-mean spatial field which characterizes the spatial correlation, and ε_i(v) is the white noise with mean zero and variance σ². In this paper, we focus on 2D images; the extension to 3D images is straightforward. The objective is to identify ROI in the response image which are associated with the corresponding covariate by estimating the coefficient images β_j(v). The available data are image and covariate pairs (y_i, x_i), i = 1, ..., n, for n subjects. We obtain the estimator by minimizing

(1/2n) Σ_{i=1}^n Σ_{v∈V} ( y_i(v) − x_i^T β(v) )² + P(β),   (1)

where P(β) is a penalty function which favors estimators according to certain criteria. Our purpose is to recover the nonzero active regions of β_j(v). The main challenge is that we need to impose sparsity of pixel values and extract active regions simultaneously.

2.2 Existing Regularizers

TV and SCAD. The TV analysis plays a fundamental role in various image analyses since the path-breaking works [13, 12]. We focus on the anisotropic version of TV. For a 2D image β and pixel v = (i, j), the discrete gradient is defined by

∇β(v) = ( β(i+1, j) − β(i, j), β(i, j+1) − β(i, j) )^T,

with differences across the image boundary set to zero. The anisotropic TV norm is ||β||_TV = Σ_v ||∇β(v)||_1. The isotropic TV norm is Σ_v ||∇β(v)||_2, which is equivalent to the anisotropic TV norm up to a factor of √2.
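As a concrete check of these definitions, both TV norms can be computed from forward differences (a minimal NumPy sketch; differences across the boundary are dropped rather than zero-padded, which gives the same sum):

```python
import numpy as np

def tv_anisotropic(beta):
    """Anisotropic TV norm: sum of absolute forward differences in both directions."""
    dx = np.abs(np.diff(beta, axis=0))  # beta(i+1, j) - beta(i, j)
    dy = np.abs(np.diff(beta, axis=1))  # beta(i, j+1) - beta(i, j)
    return dx.sum() + dy.sum()

def tv_isotropic(beta):
    """Isotropic TV norm: sum of Euclidean norms of the discrete gradient."""
    dx = np.diff(beta, axis=0)[:, :-1]  # crop so the two difference maps align
    dy = np.diff(beta, axis=1)[:-1, :]
    return np.sqrt(dx ** 2 + dy ** 2).sum()
```

For a piecewise-constant image, the anisotropic norm simply adds up the jump heights along both axes.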

The SCAD penalty is more conveniently defined via its derivative

p'_λ(t) = λ { I(t ≤ λ) + (aλ − t)_+ / ((a − 1)λ) I(t > λ) },   t > 0,

and p_λ(0) = 0. We use a = 3.7 by convention [6]. Consider a penalized least squares problem: minimize (1/2)(z − θ)² + p_λ(|θ|) over θ. The solution is unique, explicit, and given by θ̂ = S_SCAD(z; λ), where S_SCAD is the SCAD thresholding function. Figure 1 displays the thresholding function for SCAD and the soft thresholding function for Lasso. The SCAD penalty shrinks small coefficients to zero while keeping the large coefficients without shrinkage.

Figure 1: Thresholding functions for the Lasso penalty (left) and the SCAD penalty (right).
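The SCAD thresholding function has a well-known piecewise closed form (a minimal NumPy sketch of the standard rule from the SCAD literature: soft-thresholding for |z| ≤ 2λ, a linear interpolation zone up to aλ, and no shrinkage beyond):

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Minimizer of 0.5*(z - theta)^2 + p_lam(|theta|) for the SCAD penalty."""
    z = np.asarray(z, dtype=float)
    absz = np.abs(z)
    soft = np.sign(z) * np.maximum(absz - lam, 0.0)           # |z| <= 2*lam
    mid = ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0)  # 2*lam < |z| <= a*lam
    return np.where(absz <= 2.0 * lam, soft,
                    np.where(absz <= a * lam, mid, z))        # |z| > a*lam: keep z
```

Small inputs are soft-thresholded exactly as for Lasso, while large inputs pass through unshrunk, which is the source of SCAD's asymptotic unbiasedness.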

GraphNet and TV-ℓ1. GraphNet and TV-ℓ1 have been successfully applied to medical images. GraphNet is the weighted average of an ℓ1 penalty on all coordinates and a squared ℓ2 penalty on the discrete gradient, while TV-ℓ1 is the weighted average of an ℓ1 penalty and a TV penalty:

P_GN(β) = λ [ α ||β||_1 + (1 − α)/2 ||∇β||_2² ],   P_TVℓ1(β) = λ [ α ||β||_1 + (1 − α) ||β||_TV ].

For both penalties, λ is the smoothing parameter which controls the strength of regularization, and α is another smoothing parameter controlling the trade-off between pixel sparsity and spatial regularity.
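Under the λ/α parameterization above (our reading of the standard formulation of these penalties), both can be evaluated directly from forward differences:

```python
import numpy as np

def graphnet_penalty(beta, lam, alpha):
    """GraphNet: l1 on pixel values plus squared l2 on the discrete gradient."""
    dx, dy = np.diff(beta, axis=0), np.diff(beta, axis=1)
    return lam * (alpha * np.abs(beta).sum()
                  + (1.0 - alpha) * 0.5 * ((dx ** 2).sum() + (dy ** 2).sum()))

def tv_l1_penalty(beta, lam, alpha):
    """TV-l1: l1 on pixel values plus anisotropic total variation."""
    dx, dy = np.diff(beta, axis=0), np.diff(beta, axis=1)
    return lam * (alpha * np.abs(beta).sum()
                  + (1.0 - alpha) * (np.abs(dx).sum() + np.abs(dy).sum()))
```

Setting α = 1 recovers plain Lasso in both cases; α = 0 recovers a pure gradient penalty.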

2.3 A New Penalty: SCAD2TV

For each coordinate v = (i, j), the discrete gradient involves three coordinate values: β(i, j), β(i+1, j), and β(i, j+1). Let θ(v) = ( β(i+1, j) − β(i, j), β(i, j+1) − β(i, j), β(i, j) )^T. The SCAD2TV penalty is defined by

P(β) = Σ_v [ p_{λ1}( |β(i+1, j) − β(i, j)| ) + p_{λ1}( |β(i, j+1) − β(i, j)| ) + p_{λ2}( |β(i, j)| ) ],   (2)

where λ1 and λ2 are two tuning parameters and p_λ is the SCAD function. The first term in the penalty allows adaptive estimation of the coefficient image, and the second one enforces sparsity on coordinate values. One may also consider the functional version of (2). After some rescaling, (2) is equivalent to a parameterization with a single smoothing parameter λ controlling the overall strength of regularization and a parameter α controlling the trade-off between pixel sparsity and spatial regularity, paralleling GraphNet and TV-ℓ1.
SCAD2TV solves the bias problem inherent in the TV and Lasso models. Note that this penalty function, unlike the ℓ1 penalty used in Lasso, is not convex, so (1) is a non-convex objective function. We solve this problem based on ADMM and convert it into two sub-problems with closed-form solutions. In general, ADMM has had successful applications to convex problems, while its behavior on non-convex problems has been a mystery. Recently, the global convergence of ADMM in non-convex optimization was discussed in [17], which shows that several ADMM algorithms, including those with the SCAD penalty, are guaranteed to converge.
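Putting the pieces together, the SCAD2TV penalty (2) can be evaluated by applying the SCAD function to the forward differences and to the pixel values (a sketch; a = 3.7 and the closed-form SCAD value below are the standard choices from [6]):

```python
import numpy as np

def scad_value(t, lam, a=3.7):
    """Closed-form SCAD penalty value p_lam(|t|), elementwise."""
    t = np.abs(np.asarray(t, dtype=float))
    flat = lam * t                                                      # t <= lam
    quad = -(t ** 2 - 2.0 * a * lam * t + lam ** 2) / (2.0 * (a - 1.0))  # lam < t <= a*lam
    const = (a + 1.0) * lam ** 2 / 2.0                                  # t > a*lam
    return np.where(t <= lam, flat, np.where(t <= a * lam, quad, const))

def scad2tv_penalty(beta, lam1, lam2, a=3.7):
    """SCAD2TV (2): SCAD of both forward differences plus SCAD of pixel values."""
    dx, dy = np.diff(beta, axis=0), np.diff(beta, axis=1)
    return (scad_value(dx, lam1, a).sum()
            + scad_value(dy, lam1, a).sum()
            + scad_value(beta, lam2, a).sum())
```

Unlike the TV-ℓ1 penalty, the contribution of a large jump or a large pixel value is capped at (a + 1)λ²/2, which is why large coefficients are not biased downward.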

3 Local Region Learning by SCAD2TV

3.1 Algorithm based on ADMM

Our proposed algorithm is based on ADMM [1]. We may write (1) in matrix form, with an abuse of notation, as

minimize_β  (1/2n) Σ_{i=1}^n ||y_i − X_i β||²_2 + P(β),   (4)

where y_i is the vectorized response image for subject i, X_i is the fixed extended design matrix related to the covariate for subject i, and β is the concatenated vectorized unknown coefficient image. Furthermore, one of the advantages of SCAD2TV in (2) is that we can write P(β) as p(Dβ) for a fixed matrix D depending only on the pixel grid, which greatly facilitates the efficiency of our algorithm. This fact can be easily seen since the elements involved in the v-th term in (2) are θ(v) = D_v β for a fixed matrix D_v; D is then the concatenated version of the D_v.

Problem (4) is equivalent to the constrained problem

minimize_{β, z}  (1/2n) Σ_{i=1}^n ||y_i − X_i β||²_2 + p(z)   subject to   Dβ = z.

We form the (scaled) augmented Lagrangian as

L_ρ(β, z, u) = (1/2n) Σ_{i=1}^n ||y_i − X_i β||²_2 + p(z) + (ρ/2) ||Dβ − z + u||²_2 − (ρ/2) ||u||²_2.

The ADMM consists of the iterations

β^{k+1} = argmin_β L_ρ(β, z^k, u^k),   (5)
z^{k+1} = argmin_z L_ρ(β^{k+1}, z, u^k),   (6)
u^{k+1} = u^k + D β^{k+1} − z^{k+1}.   (7)

It should be emphasized that both (5) and (6) have explicit solutions. Specifically, (5) is a ridge regression problem and (6) is a penalized least squares problem with the identity design matrix. The closed-form solutions for (5) and (6) are, respectively,

β^{k+1} = ( (1/n) Σ_i X_i^T X_i + ρ D^T D )^{-1} ( (1/n) Σ_i X_i^T y_i + ρ D^T (z^k − u^k) ),
z^{k+1} = S_SCAD( D β^{k+1} + u^k ),

where S_SCAD denotes the coordinate-wise SCAD thresholding associated with (6). The details of the algorithm are summarized in Algorithm 1.

Input : Training samples (y_i, x_i), i = 1, ..., n; tuning parameters λ1, λ2, ρ; stopping criteria parameters ε_pri and ε_dual.
1 Initialize β, z, u as random uniform numbers; initialize primal and dual residuals r and s; while ||r|| > ε_pri or ||s|| > ε_dual do
2       Update z from (6): for given β and u, z ← argmin_z L_ρ(β, z, u); Update β from (5): for given z and u, β ← argmin_β L_ρ(β, z, u); Update u by (7): for given β and z, u ← u + Dβ − z; Update r and s.
3 end while
Output :  β̂ and ẑ
Algorithm 1 Local Region Learning by SCAD2TV

The output is either β̂ or ẑ. Note that ẑ is a sparse solution while β̂ may not be sparse. In practice, we extract the coefficient image estimator from the sparse output ẑ to obtain the sparse estimator.
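To make the two closed-form steps concrete, here is a minimal 1D instance of the same splitting: denoising with a SCAD-of-differences penalty. This is an illustrative sketch, not the paper's full image-on-scalar solver, and we fix ρ = 1 so that the z-update is exactly the SCAD thresholding:

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Minimizer of 0.5*(z - theta)^2 + p_lam(|theta|) for the SCAD penalty."""
    z = np.asarray(z, dtype=float)
    absz = np.abs(z)
    soft = np.sign(z) * np.maximum(absz - lam, 0.0)
    mid = ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0)
    return np.where(absz <= 2.0 * lam, soft, np.where(absz <= a * lam, mid, z))

def admm_scad_tv(y, D, lam, n_iter=200):
    """ADMM for min 0.5*||y - beta||^2 + sum_j p_lam(|(D beta)_j|), splitting z = D beta."""
    m, p = D.shape
    beta, z, u = np.zeros(p), np.zeros(m), np.zeros(m)
    ridge = np.linalg.inv(np.eye(p) + D.T @ D)  # system for step (5), precomputed once
    for _ in range(n_iter):
        beta = ridge @ (y + D.T @ (z - u))      # (5): ridge regression step
        z = scad_threshold(D @ beta + u, lam)   # (6): SCAD thresholding step
        u = u + D @ beta - z                    # (7): dual update
    return beta, z

# Usage: denoise a constant signal with the first-difference operator D.
p = 20
D = np.diff(np.eye(p), axis=0)                  # (p-1) x p forward differences
beta, z = admm_scad_tv(np.full(p, 3.0), D, lam=0.5)
```

A constant signal is a fixed point: its differences are all thresholded to zero, so the estimate reproduces the input exactly.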

3.2 Divide and Conquer Learning Algorithm for Large Image Size

To address the big data issue, a divide and conquer (D&C) algorithm recursively breaks a problem down into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. In the above discussion, we assumed that, for each subject i, all image coordinate values are used together to infer the coefficient images. However, in many applications the image size may be large, and such a "batch" procedure is undesirable. To solve this issue, we develop a D&C algorithm for large image sizes.

Figure 2: Partition the image into overlapped sub-images.

Image data have their intrinsic structure, and we need to carry out the divide step with extra caution. For example, we may simply partition each image into non-overlapping sub-images, with the data processed sequentially. However, due to the TV term in SCAD2TV, we would lose the boundary information for all sub-images, which gives poor estimates near the sub-image boundaries. We instead propose to partition with overlapped sub-images. For instance, Figure 2 displays an image partitioned into nine non-overlapping sub-images; for our purpose, we extend each sub-image in all four directions. Specifically, in the upper left corner, we include both the light blue part and the light yellow part, which enlarges our first sub-image. In the center, we take the inside orange part together with the surrounding brown part. After this partition, we obtain nine overlapped sub-images. We perform Algorithm 1 on each sub-image and update the coefficient images; for each update, we only keep the estimate for the original sub-image.
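The overlapped partition can be sketched as follows (tile size and halo width are illustrative parameters we introduce, not the paper's choices):

```python
import numpy as np

def overlapped_tiles(shape, tile, halo):
    """Yield (core, extended) slice pairs: `core` tiles the image without overlap,
    `extended` adds a halo of `halo` pixels per side, clipped at the boundary."""
    height, width = shape
    for i in range(0, height, tile):
        for j in range(0, width, tile):
            core = (slice(i, min(i + tile, height)),
                    slice(j, min(j + tile, width)))
            ext = (slice(max(i - halo, 0), min(i + tile + halo, height)),
                   slice(max(j - halo, 0), min(j + tile + halo, width)))
            yield core, ext

# Run the solver on image[ext], then keep only the estimate over image[core].
```

Because the cores tile the image exactly once, stitching the kept estimates back together reconstructs a full coefficient image with no gaps or double counting.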

The above D&C algorithm can be executed sequentially in a single machine. This algorithm is naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors.

4 Empirical Results

4.1 Synthetic data

We design a synthetic data example to compare the performance of three approaches, SCAD2TV, GraphNet, and TV-ℓ1, in terms of both prediction and selection errors.

Data Generation. In our setting, each true coefficient image β_j is displayed in the left panel of Figure 3. Each covariate x_ij is generated from a uniform distribution. The spatial field η_i is generated from a zero-mean Gaussian random field. The error process ε_i is white noise with mean zero and variance σ². Two noise levels σ are adopted, with a common sample size n.

Applying SCAD2TV. To examine the performance of the three methods, SCAD2TV, GraphNet, and TV-ℓ1, we generated replicated datasets for each setting. For each dataset, we obtain the coefficient image estimates from the three methods. The selection rate is defined as

SR = (1/N) Σ_v I{ I(β̂(v) ≠ 0) = I(β(v) ≠ 0) },

i.e., the proportion of pixels whose zero/non-zero status is correctly recovered, and the mean squared error is defined as

MSE = (1/(nN)) Σ_i Σ_v ( y_i(v) − ŷ_i(v) )²,

where N is the total number of pixels and the ŷ_i are the predicted images.
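Both criteria are straightforward to compute (a sketch matching the definitions above; the zero tolerance `tol` is an implementation detail we introduce):

```python
import numpy as np

def selection_rate(beta_hat, beta_true, tol=1e-8):
    """Fraction of pixels whose zero/non-zero status matches the truth."""
    return float(np.mean((np.abs(beta_hat) > tol) == (np.abs(beta_true) > tol)))

def prediction_mse(y_pred, y_true):
    """Mean squared prediction error, averaged over subjects and pixels."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return float(np.mean((y_pred - y_true) ** 2))
```

A selection rate of 1 means the estimated support coincides exactly with the true active region.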

Practical Consideration. Smoothing parameters can be selected by K-fold cross-validation (CV). However, its computational time can be long even with current computing facilities. In our experiment, we tested a few different values of the tuning parameters and chose a combination that gives a good balance for the estimation. The weight on the TV part reflects our expectation of the ROI: if the ROI has a sharp boundary and the values do not change much inside the ROI, we can use a larger weight; otherwise, a smaller weight would be preferred.

Figure 3: The coefficient images and the estimates. Left: true images; 2nd column: SCAD2TV; 3rd column: TV-ℓ1; Right: GraphNet.

Results. For each setting, the experiments using SCAD2TV, GraphNet, and TV-ℓ1 are repeated over the replicated datasets. Figure 3 displays the estimates of the coefficient images from one realization. We note that SCAD2TV provides a solution almost exactly the same as the truth. TV-ℓ1 keeps the sharp boundary but provides biased estimates inside the active zone. GraphNet displays blurred estimates for both the active zone and the zero sub-regions. The averages of the selection rates and the MSEs are reported in Table 1. In terms of the selection rate, SCAD2TV consistently performs better than the other two methods. In terms of the prediction error, SCAD2TV and TV-ℓ1 are similar to each other, and GraphNet gives the highest MSE.

              SCAD2TV    TV-ℓ1      GraphNet
σ = 1    SR   0.820      0.783      0.534
              0.728      0.740      0.583
              0.677      0.672      0.508
         MSE  1.97       1.98       1.98
σ = 0.1  SR   0.9995     0.9675     0.8188
              0.9983     0.9417     0.8210
              0.9990     0.9070     0.8472
         MSE  0.020      0.024      0.044
Table 1: Comparison results of SR (three rows per noise level) and MSE for SCAD2TV, TV-ℓ1, and GraphNet.

4.2 Hippocampus Data

Dataset. To illustrate the usefulness of our proposed model, consider anatomical MRI data collected at baseline by the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, a large-scale multi-site study collecting clinical, imaging, and laboratory data at multiple time points from healthy controls, individuals with amnestic mild cognitive impairment, and subjects with Alzheimer's disease. Given the MRI scans, hippocampal substructures were segmented with FSL FIRST [11] and hippocampal surfaces were automatically reconstructed with the marching cube method [8]. We adopted a surface fluid registration based hippocampal subregional analysis package [14], which uses isothermal coordinates and fluid registration to generate one-to-one hippocampal surface registration for surface statistics computation.

In the dataset, we have around 400 observations in total. For each subject, the data include a 2D representation of the left hippocampus and four covariates: gender (female = 0, male = 1), age (55-92), disease status (control = 0, AD = 1), and behavior score (1-36). The goal is to identify local regions of the response image associated with each covariate.

Applying SCAD2TV. We have applied our D&C learning algorithm to this dataset, dividing each response image into overlapped sub-images. We execute the algorithm sequentially on a single machine; it spends about 10 seconds on each sub-image and takes about 25 minutes to obtain the final estimates of the coefficient images. The estimated coefficient images are presented in the top panel of Figure 4.

Figure 4: The estimates of coefficient images from three methods. Top: SCAD2TV; Middle: TV-ℓ1; Bottom: GraphNet.

Results. Our purpose is to identify the local regions where the response hippocampus image is associated with each individual covariate. Among all of the covariates, we are particularly interested in the association between the response hippocampus image and the disease status (control vs. AD). From the top panel of Figure 4, it is interesting to notice that gender has no effect on the response image, and for the other three covariates SCAD2TV successfully identifies the local active regions.

Figure 5: Left: estimated coefficient image for disease status; Middle: mean response image for healthy controls; Right: mean response image for AD.

We take a closer look at the coefficient image corresponding to the disease status. The left panel of Figure 5 displays its estimate. The sub-regions in red indicate the active zone, and the region in orange is the zero sub-region. In general, the AD patients have lower pixel values in the response hippocampus image. The right two panels are the mean response images for healthy controls and AD, respectively. The mean difference is consistent with our estimate of the coefficient image.

We extract the pixels within the ROI for healthy controls and AD, and apply hypothesis testing to check whether their difference is significant. Testing each pixel in the ROI, all pixels differ between healthy controls and AD at the nominal significance level, and a large fraction of the pixels in the ROI remain different at a more stringent level. The result confirms that the selected ROI is indeed a region that differentiates healthy controls from AD.

Comparison with TV-ℓ1 and GraphNet. We obtain the estimates of the coefficient images for TV-ℓ1 and GraphNet, which are presented in the middle and bottom panels of Figure 4. The three methods overall detect similar regions. Both TV-ℓ1 and GraphNet display more blocky active regions, whereas SCAD2TV keeps the active zones with sharp boundaries. We also divide our dataset into 5 parts, each containing around 80 observations, to compare prediction performance. Each time we use 4 of them as the training data and make predictions on the remaining testing data. The averages of the MSEs are computed for each method and reported in Table 2. The MSEs are similar to each other, and SCAD2TV displays slightly better prediction power.

        SCAD2TV   TV-ℓ1     GraphNet
MSE     0.5476    0.5482    0.5491

Table 2: The MSEs for the three methods.

5 Conclusion

We have introduced a new region-selecting sparse non-convex penalty, SCAD2TV, which enforces large regions of zero sub-images and extracts non-zero active zones simultaneously. An efficient ADMM algorithm and a divide and conquer algorithm have been developed. Numerical examples are presented, and the experimental results are superior or competitive with other state-of-the-art approaches such as GraphNet and TV-ℓ1. We have so far focused on 2D images; it should be noted that our method works for 3D images as well. We are currently implementing our algorithm on distributed platforms such as Apache Spark. We have discussed the application to image-on-scalar regression models. This new framework may also be applied to image clustering and image classification problems, which assume that only small regions of the images have significant effects on clustering and classification.


  • Boyd et al. [2011] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
  • Chen et al. [2015] Y. Chen, J. Goldsmith, and T. Ogden. Variable selection in function-on-scalar regression. 2015.
  • Chopra and Lian [2010] A. Chopra and H. Lian. Total variation, adaptive total variation and nonconvex smoothly clipped absolute deviation penalty for denoising blocky images. Pattern Recognition, 43(8):2609–2619, 2010.
  • Dohmatob et al. [2014] E. D. Dohmatob, A. Gramfort, B. Thirion, and G. Varoquaux. Benchmarking solvers for TV-ℓ1 least-squares and logistic regression in brain imaging. In Pattern Recognition in Neuroimaging, 2014 International Workshop on, pages 1–4. IEEE, 2014.
  • Eickenberg et al. [2015] M. Eickenberg, E. Dohmatob, B. Thirion, and G. Varoquaux. Grouping total variation and sparsity: Statistical learning with segmenting penalties. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, pages 685–693. Springer, 2015.
  • Fan and Li [2001] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
  • Goldsmith et al. [2015] J. Goldsmith, V. Zipunnikov, and J. Schrack. Generalized multilevel function-on-scalar regression and principal component analysis. Biometrics, 71(2):344–353, 2015.
  • Lorensen and Cline [1987] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In ACM SIGGRAPH Computer Graphics, volume 21, pages 163–169. ACM, 1987.
  • Mehranian et al. [2013] A. Mehranian, H. S. Rad, A. Rahmim, M. R. Ay, and H. Zaidi. Smoothly clipped absolute deviation (SCAD) regularization for compressed sensing MRI using an augmented Lagrangian scheme. Magnetic Resonance Imaging, 31(8):1399–1411, 2013.
  • Mu and Gage [2011] Y. Mu and F. H. Gage. Adult hippocampal neurogenesis and its role in alzheimer’s disease. Molecular neurodegeneration, 6(1):1–9, 2011.
  • Patenaude et al. [2011] B. Patenaude, S. M. Smith, D. N. Kennedy, and M. Jenkinson. A bayesian model of shape and appearance for subcortical brain segmentation. Neuroimage, 56(3):907–922, 2011.
  • Rudin et al. [1992] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.
  • Rudin and Osher [1994] L. I. Rudin and S. Osher. Total variation based image restoration with free local constraints. In Image Processing, 1994. Proceedings. ICIP-94., IEEE International Conference, volume 1, pages 31–35. IEEE, 1994.
  • Shi et al. [2013] J. Shi, P. M. Thompson, B. Gutman, Y. Wang, A. D. N. Initiative, et al. Surface fluid registration of conformal representation: Application to detect disease burden and genetic influence on hippocampus. Neuroimage, 78:111–134, 2013.
  • Strong et al. [1997] D. M. Strong, P. Blomgren, and T. F. Chan. Spatially adaptive local-feature-driven total variation minimizing image restoration. In Optical Science, Engineering and Instrumentation’97, pages 222–233. International Society for Optics and Photonics, 1997.
  • Tibshirani [1996] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  • Wang et al. [2015] Y. Wang, W. Yin, and J. Zeng. Global convergence of admm in nonconvex nonsmooth optimization. arXiv preprint arXiv:1511.06324, 2015.