1 Introduction
Interpretability is paramount to communicate classification and regression results to the domain expert. Especially in high-dimensional problems, sparsity of the solution is often assumed to increase interpretability. As a consequence, various previous works in statistics focus on covariate selection, with the lasso ($\ell_1$) penalty being a particularly popular choice [11].
However, even after successful covariate selection, the remaining solution might still contain several hundred or thousand covariates. We therefore suggest further clustering the covariates. In particular, we propose a clustering method that uses a priori similarity information between covariates while respecting the response variable of the classification problem. The resulting covariate clusters can help to identify meaningful groups of covariates and, in this way, arguably increase the interpretability of the solution.
As a simple motivating example, consider the situation of document classification, where the covariates are words in a document. Without further context, we might want to cluster the words “apple” and “cherry” together. This cluster might be appropriate for classifying a document into either category “cooking” or category “sports”. However, this cluster might be inappropriate for classifying a document as either “sports” or “computer”, due to the ambiguity of “apple”, which can also refer to “Macintosh”.
This demonstrates that a simple two-step approach (first step: covariate clustering; second step: classification) might lead to clusters that are difficult to interpret.
On the other hand, performing classification first and covariate clustering afterwards requires an additional heuristic to determine which covariate is associated with which class. The final clustering result is then highly dependent on this fixed choice.
Therefore, we propose to formulate the clustering problem as a joint sample classification and covariate clustering problem. In particular, we use a logistic regression loss with a pairwise group lasso penalty for each pair of covariate weight vectors. Our formulation leads to a convex optimization problem, like convex clustering [6, 12]. As a consequence, we have the desirable properties of a unique global minimum and the ability to plot a clustering hierarchy instead of just a single clustering.

Our proposed method is conceptually related to joint convex covariate clustering and linear regression as proposed in [19]. However, the change from linear regression to logistic regression is computationally challenging. The main reason is that the objective function is no longer decomposable for each covariate. Furthermore, the logistic regression loss is convex, but not, in general, strictly convex, which prohibits the application of the alternating minimization algorithm (AMA) as in [6]. Therefore, we propose a specialized alternating direction method of multipliers (ADMM) with an efficient gradient descent step for the non-decomposable primal update. Our solution allows us to scale the covariate clustering to problems with several thousand covariates.
Since we often want to decide on one clustering result, we also propose a model selection criterion using an approximate marginal likelihood. The motivation for this criterion is similar to that of the Bayesian Information Criterion (BIC) [18], but it avoids the issue of a singular likelihood function. The proposed criterion circumvents the need for hyperparameter selection with cross-validation, which is computationally infeasible here.
The outline of this article is as follows. In the next section, we introduce our proposed objective function and describe an efficient ADMM algorithm for solving the convex optimization problem. In Section 3, we describe our model selection criterion for selecting a plausible clustering result out of all clustering results found with the proposed method. Next, in our experiments in Sections 4 and 5, we compare the proposed method to k-means clustering and convex clustering [6]. In particular, in Section 4, we evaluate the proposed method and the model selection criterion on several synthetic data sets. In Section 4.2, we evaluate the computational runtime of the proposed ADMM algorithm and compare it to standard convex optimization solvers. In Section 5, we evaluate the quality of the clustering result on two real text data sets. Finally, in Section 6, we summarize our conclusions.

A note on our notation: we denote a matrix by a capital letter, e.g., $A$, and a column vector in bold font, e.g., $\mathbf{a}$. Furthermore, the $i$th row of $A$ is denoted by $A_{i,:}$ and is a row vector. The $j$th column of $A$ is denoted by $A_{:,j}$, or simply $\mathbf{a}_j$, and is a column vector.
2 Proposed Method
Let $W \in \mathbb{R}^{k \times d}$, where $k$ is the number of classes and $d$ is the number of covariates. The row $W_{y,:}$ is the weight vector for class $y$. Furthermore, $\mathbf{b} \in \mathbb{R}^{k}$ contains the intercepts. We now assume the multiclass logistic regression classifier defined by
$$p(y \mid \mathbf{x}, W, \mathbf{b}) = \frac{\exp\big(W_{y,:}\,\mathbf{x} + b_y\big)}{\sum_{c=1}^{k} \exp\big(W_{c,:}\,\mathbf{x} + b_c\big)}.$$
We propose the following formulation for jointly classifying samples and clustering the covariates:
$$\min_{W, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) \;+\; \lambda \sum_{j < l} s_{j,l} \, \lVert \mathbf{w}_j - \mathbf{w}_l \rVert_2 \qquad (1)$$
where $s_{j,l} \geq 0$ defines a similarity measure between covariates $j$ and $l$ and is assumed to be given a priori. The last term is a group lasso penalty on the class weights for any pair of covariates $j$ and $l$. The penalty is large for similar covariates, and therefore encourages that $\mathbf{w}_j - \mathbf{w}_l$ is $\mathbf{0}$, that is, that $\mathbf{w}_j$ and $\mathbf{w}_l$ are equal. The clustering of the covariates can then be found by grouping two covariates $j$ and $l$ together if $\mathbf{w}_j$ and $\mathbf{w}_l$ are equal.
The advantage of this formulation is that the problem is convex, and we are therefore guaranteed to find a global minimum.
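To make the objective concrete, the following is a minimal Python/NumPy sketch of evaluating objective (1) for given parameters. It is an illustration under our notation, not the released implementation; the function name and the quadratic-time loop over all covariate pairs are ours and only for clarity.

```python
import numpy as np

def objective(W, b, X, y, S, lam):
    """Sketch of objective (1): multiclass logistic loss plus the pairwise
    group lasso penalty weighted by the covariate similarities S.
    W: (k, d) class weight matrix, b: (k,) intercepts,
    X: (n, d) design matrix, y: (n,) integer labels in {0, ..., k-1},
    S: (d, d) symmetric similarity matrix, lam: regularization strength."""
    scores = X @ W.T + b                              # (n, k) class scores
    log_norm = np.logaddexp.reduce(scores, axis=1)    # log-sum-exp per sample
    log_lik = scores[np.arange(len(y)), y] - log_norm
    loss = -log_lik.sum()                             # negative log-likelihood

    penalty = 0.0
    d = W.shape[1]
    for j in range(d):                                # group lasso over column pairs of W
        for l in range(j + 1, d):
            penalty += S[j, l] * np.linalg.norm(W[:, j] - W[:, l])
    return loss + lam * penalty
```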
Note that this penalty shares some similarity to convex clustering as in [6, 12]. However, one major difference is that we do not introduce latent vectors for each data point, and our method can jointly learn the classifier and the clustering.
We remark that it is straightforward to additionally add a sparsity penalty on the columns of $W$ to jointly select and cluster all covariates. In the following, we omit such extensions and focus on the computationally difficult part, the clustering. Our implementation (released at https://github.com/andradestats/convexCovariateClusteringClassification) allows performing joint or sequential covariate selection and clustering.
Finally, we remark that the similarity matrix $S$ can be represented as an undirected weighted graph with an edge between covariates $j$ and $l$ iff $s_{j,l} > 0$. In practice, $S$ might be manually crafted by a domain expert (e.g., given by some ontology), or learned from a different data set (as we do in Section 5).
2.1 Optimization using ADMM
Here, we focus on the optimization problem from Equation (1), which we can rewrite as
$$\min_{W, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) \;+\; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{t=1}^{m_j} s_{j, c_j(t)} \, \lVert \mathbf{w}_j - \mathbf{w}_{c_j(t)} \rVert_2,$$
where $m_j := |\mathcal{N}(j)|$ denotes the number of adjacent covariates of $j$ in the similarity graph, with $\mathcal{N}(j)$ the set of its adjacent covariates. Assuming that the adjacent covariates of $j$ are ordered from $1$ to $m_j$, the function $c_j(t)$ returns the global covariate id of the $t$th adjacent covariate of $j$. We can then formulate the problem as
$$\min_{W, \mathbf{b}, Z} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) \;+\; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{t=1}^{m_j} s_{j, c_j(t)} \, \lVert \mathbf{z}_{j, c_j(t)} - \mathbf{z}_{c_j(t), j} \rVert_2$$
subject to
$$\mathbf{z}_{j,l} = \mathbf{w}_j \quad \text{for all } j \in \{1, \ldots, d\} \text{ and } l \in \mathcal{N}(j),$$
where we denote by $Z$ the collection of all auxiliary variables $\mathbf{z}_{j,l} \in \mathbb{R}^{k}$. Therefore, $\mathbf{z}_{j,l}$ can be read as “a copy of $\mathbf{w}_j$ for the comparison with $\mathbf{w}_l$”, where $l$ is an adjacent node of $j$.
Using ADMM, this can be optimized with the following sequence of updates:
$$W^{(t+1)}, \mathbf{b}^{(t+1)} = \operatorname*{arg\,min}_{W, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{w}_j - \mathbf{z}^{(t)}_{j,l} + \mathbf{u}^{(t)}_{j,l} \rVert_2^2,$$
$$Z^{(t+1)} = \operatorname*{arg\,min}_{Z} \; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} s_{j,l} \lVert \mathbf{z}_{j,l} - \mathbf{z}_{l,j} \rVert_2 + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{w}^{(t+1)}_j - \mathbf{z}_{j,l} + \mathbf{u}^{(t)}_{j,l} \rVert_2^2,$$
$$\mathbf{u}^{(t+1)}_{j,l} = \mathbf{u}^{(t)}_{j,l} + \mathbf{w}^{(t+1)}_j - \mathbf{z}^{(t+1)}_{j,l} \quad \text{for all } j \text{ and } l \in \mathcal{N}(j),$$
where $t$ denotes the current iteration, $\rho > 0$ is the ADMM penalty parameter, $\mathbf{u}_{j,l}$ denotes the scaled dual variable for the constraint $\mathbf{z}_{j,l} = \mathbf{w}_j$, and $U$ denotes the set of variables $\{\mathbf{u}_{j,l}\}$.
Update of primal variables
The update of $W$ and $\mathbf{b}$ can be solved with an approximate (quasi-)Newton method [5], where fast calculation of the gradient and function evaluation is key to an efficient implementation. Let us define
$$h(W, \mathbf{b}) := -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{w}_j - \mathbf{z}_{j,l} + \mathbf{u}_{j,l} \rVert_2^2.$$
Due to the second term, the naive calculation of $h$ is in $O\big(k \sum_{j=1}^{d} m_j\big)$, and therefore in $O(k d^2)$ for dense graphs. However, the double sum in the second term can be expressed as follows:
$$\sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{w}_j - \mathbf{z}_{j,l} + \mathbf{u}_{j,l} \rVert_2^2 = \sum_{j=1}^{d} \Big( m_j \lVert \mathbf{w}_j \rVert_2^2 - 2\,\mathbf{w}_j^{\top} \mathbf{a}_j + c_j \Big),$$
where we defined $\mathbf{a}_j := \sum_{l \in \mathcal{N}(j)} (\mathbf{z}_{j,l} - \mathbf{u}_{j,l})$, and $c_j := \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{z}_{j,l} - \mathbf{u}_{j,l} \rVert_2^2$. Since $\mathbf{a}_j$ and $c_j$ can be precalculated, each repeated calculation of this term is in $O(kd)$.
Furthermore, we note that
$$\nabla_{\mathbf{w}_j} h(W, \mathbf{b}) = \nabla_{\mathbf{w}_j} \Big( -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, W, \mathbf{b}) \Big) + \rho \big( m_j \mathbf{w}_j - \mathbf{a}_j \big),$$
so the gradient can likewise be evaluated without the double sum.
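The following sketch illustrates this primal update with L-BFGS, using the precomputed quantities $m_j$ and $\mathbf{a}_j$ so that each function and gradient evaluation avoids the double sum over edges. It is a simplified illustration under our notation, not the released implementation; the data structures (dictionaries keyed by directed edges, a neighbor list) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def primal_update(W0, b0, X, y, Z, U, neighbors, rho):
    """Sketch of the (W, b) update via L-BFGS.  The quadratic ADMM term is
    evaluated with the precomputed degrees m_j and sums a_j.
    Z, U: dicts mapping directed edge (j, l) -> vector of length k.
    neighbors: dict mapping covariate j -> list of adjacent covariates."""
    k, d = W0.shape
    m = np.array([len(neighbors[j]) for j in range(d)], dtype=float)  # degree of each covariate
    A = np.zeros((k, d))                                              # a_j = sum_l (z_jl - u_jl)
    for j in range(d):
        for l in neighbors[j]:
            A[:, j] += Z[(j, l)] - U[(j, l)]

    def f_and_grad(theta):
        W = theta[:k * d].reshape(k, d)
        b = theta[k * d:]
        scores = X @ W.T + b                                          # (n, k)
        scores -= scores.max(axis=1, keepdims=True)                   # numerical stability
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True)                             # class probabilities
        Y = np.zeros_like(P)
        Y[np.arange(len(y)), y] = 1.0
        loss = -np.log(P[np.arange(len(y)), y] + 1e-12).sum()
        loss += 0.5 * rho * ((m * (W ** 2).sum(axis=0)).sum()
                             - 2.0 * (W * A).sum())                   # quadratic term, up to a constant
        G_W = (P - Y).T @ X + rho * (m * W - A)                       # gradient w.r.t. W
        G_b = (P - Y).sum(axis=0)                                     # gradient w.r.t. b
        return loss, np.concatenate([G_W.ravel(), G_b])

    theta0 = np.concatenate([W0.ravel(), b0])
    res = minimize(f_and_grad, theta0, jac=True, method="L-BFGS-B")
    return res.x[:k * d].reshape(k, d), res.x[k * d:]
```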
Update of auxiliary variables
The update of $\mathbf{z}_{j,l}$ and $\mathbf{z}_{l,j}$, for each unordered pair $\{j, l\}$, can be performed independently, i.e.:
$$\mathbf{z}_{j,l}^{(t+1)}, \mathbf{z}_{l,j}^{(t+1)} = \operatorname*{arg\,min}_{\mathbf{z}_{j,l},\, \mathbf{z}_{l,j}} \; \lambda\, s_{j,l} \lVert \mathbf{z}_{j,l} - \mathbf{z}_{l,j} \rVert_2 + \frac{\rho}{2} \Big( \lVert \mathbf{w}_j^{(t+1)} - \mathbf{z}_{j,l} + \mathbf{u}_{j,l}^{(t)} \rVert_2^2 + \lVert \mathbf{w}_l^{(t+1)} - \mathbf{z}_{l,j} + \mathbf{u}_{l,j}^{(t)} \rVert_2^2 \Big).$$
This optimization problem has a closed-form solution, which was proven in a different context in [10]:
$$\mathbf{z}_{j,l}^{(t+1)} = \theta\, \big(\mathbf{w}_j^{(t+1)} + \mathbf{u}_{j,l}^{(t)}\big) + (1 - \theta)\, \big(\mathbf{w}_l^{(t+1)} + \mathbf{u}_{l,j}^{(t)}\big),$$
$$\mathbf{z}_{l,j}^{(t+1)} = (1 - \theta)\, \big(\mathbf{w}_j^{(t+1)} + \mathbf{u}_{j,l}^{(t)}\big) + \theta\, \big(\mathbf{w}_l^{(t+1)} + \mathbf{u}_{l,j}^{(t)}\big),$$
where
$$\theta = \max\!\left( 1 - \frac{\lambda\, s_{j,l}}{\rho\, \lVert (\mathbf{w}_j^{(t+1)} + \mathbf{u}_{j,l}^{(t)}) - (\mathbf{w}_l^{(t+1)} + \mathbf{u}_{l,j}^{(t)}) \rVert_2},\; 0.5 \right). \qquad (2)$$
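A small sketch of this auxiliary update for a single unordered pair, following the closed form above (block soft-thresholding toward the average, as in the network lasso [10]); the function name and data layout are illustrative only.

```python
import numpy as np

def z_update(w_j, w_l, u_jl, u_lj, s_jl, lam, rho):
    """Sketch of the closed-form update of (z_jl, z_lj) for one pair {j, l}."""
    p = w_j + u_jl
    q = w_l + u_lj
    gap = np.linalg.norm(p - q)
    if gap == 0.0:                       # copies already agree
        theta = 0.5
    else:
        theta = max(1.0 - lam * s_jl / (rho * gap), 0.5)
    z_jl = theta * p + (1.0 - theta) * q
    z_lj = (1.0 - theta) * p + theta * q
    return z_jl, z_lj
```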
2.2 Stopping criteria
As stopping criteria we use, as suggested in [4], the norms of the primal and dual residuals at iteration $t$, defined here as
$$r^{(t)} = \Big( \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{w}^{(t)}_j - \mathbf{z}^{(t)}_{j,l} \rVert_2^2 \Big)^{1/2}, \qquad s^{(t)} = \rho \Big( \sum_{j=1}^{d} \sum_{l \in \mathcal{N}(j)} \lVert \mathbf{z}^{(t)}_{j,l} - \mathbf{z}^{(t-1)}_{j,l} \rVert_2^2 \Big)^{1/2}.$$
We stop after iteration $t$ if
$$r^{(t)} \le \epsilon \sqrt{2ek} \quad \text{and} \quad s^{(t)} \le \epsilon \sqrt{2ek},$$
where $e$ is the total number of edges in our graph (so that $2ek$ is the total dimension of the constraint vector), and $\epsilon$ is set to $0.00001$. We also stop once a prespecified maximum number of iterations is reached.
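A small illustrative sketch of computing these residual norms, assuming the same dictionary-of-copies layout as in the sketches above:

```python
import numpy as np

def residuals(W, Z, Z_prev, rho):
    """Sketch of the primal and dual residual norms used for stopping,
    following the generic ADMM recommendations of Boyd et al. [4].
    Z, Z_prev: dicts mapping directed edge (j, l) -> copy vector of length k."""
    r = np.sqrt(sum(np.sum((W[:, j] - z) ** 2) for (j, _), z in Z.items()))
    s = rho * np.sqrt(sum(np.sum((Z[e] - Z_prev[e]) ** 2) for e in Z))
    return r, s
```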
2.3 Identifying the covariate clusters
Although in theory the optimization problem ensures that certain columns of $W$ are exactly equal, due to the use of ADMM this is not true in practice. We therefore follow the strategy proposed in [6]: after convergence of the ADMM, we investigate the auxiliary variables $Z$, and place an edge between nodes $j$ and $l$ iff
$$\mathbf{z}_{j,l} = \mathbf{z}_{l,j}. \qquad (3)$$
The equality (3) can be checked exactly (without numerical difficulties) due to the thresholding of $\theta$ to $0.5$ in Equation (2).
In the resulting graph, there are two possible ways to identify clusters:

1. identify the connected components as clusters (note that a connected component is in general not fully connected), or

2. consider only fully connected components as clusters.
Of course, in theory, since the optimization problem is convex, after complete convergence the two ways must result in the same clustering. However, in practice, we find that the latter leads to too many covariates not being clustered (i.e., many covariates end up as singleton clusters). The latter is also computationally difficult, since identifying the largest fully connected component is NP-hard. Therefore, we proceed here with the former strategy.
We denote the identified clusters as $C_1, \ldots, C_g$, where $g$ is the number of clusters and $\{C_1, \ldots, C_g\}$ is a partition of the set of covariates.
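The following sketch illustrates this cluster-identification step with an exact equality check and connected components from SciPy; the data structures are assumptions for illustration, not the released implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def identify_clusters(Z, edges, d):
    """Sketch: place an edge between j and l iff the two ADMM copies agree
    exactly (Equation (3)), then take connected components as clusters.
    Z: dict mapping directed edge (j, l) -> copy vector; edges: unordered pairs."""
    rows, cols = [], []
    for (j, l) in edges:
        if np.array_equal(Z[(j, l)], Z[(l, j)]):   # exact check, possible due to thresholding at 0.5
            rows += [j, l]
            cols += [l, j]
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(d, d))
    n_clusters, labels = connected_components(adj, directed=False)
    return [np.flatnonzero(labels == c) for c in range(n_clusters)]
```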
3 Approximate Bayesian Model Selection
Note that different settings of the hyperparameter $\lambda$ will result in different clusterings. For our experiments, we consider a grid of values for $\lambda$, and this way obtain a range of clusterings. Like convex clustering, this allows us to plot a clustering hierarchy. However, in most situations, we are interested in finding the most plausible clustering, or in ranking the clusterings according to some criterion.
One obvious way is to use cross-validation on the training data to estimate held-out classification accuracy for each setting of $\lambda$. However, the computational costs are enormous. Another, more subtle issue is that the group lasso terms in Equation (1) jointly perform clustering and shrinkage, controlled by only one parameter $\lambda$. However, the optimal clustering and the optimal shrinkage might not be achieved by the same value of $\lambda$, an issue well known for the lasso penalty [14]. Therefore, we consider here a different model selection strategy: an approximate Bayesian model selection with low computational costs.
After we have found a clustering, we train a new logistic regression classifier with the parameter space for $W$ limited to the clustering. Assuming some prior over the parameters, the marginal likelihood provides a trade-off between model fit and the number of parameters (= number of clusters). A popular choice is the Bayesian information criterion (BIC) [18], which assumes that the prior can be asymptotically ignored (that is, the prior term does not grow with the number of samples $n$). However, BIC requires that the unpenalized maximum-likelihood estimate is defined. Unfortunately, this is not the case for logistic regression, where linearly separable data will lead to infinite weights. For this reason, we suggest here to use a Gaussian prior on the weights and then estimate the marginal likelihood with a Laplace approximation.
In detail, in order to evaluate the quality of a clustering, we suggest to use the marginal likelihood $p(\mathbf{y} \mid X, \mathbf{b})$, where we treat the intercept vector $\mathbf{b}$ as a hyperparameter. Here, $X$ denotes the design matrix and $\mathbf{y}$ the responses.
We define the following Bayesian model:
$$V \sim p(V), \qquad y_i \mid \mathbf{x}_i, V, \mathbf{b} \;\sim\; p\big(y_i \mid M\mathbf{x}_i, V, \mathbf{b}\big), \qquad (4)$$
where $p(V)$ denotes our prior on the cluster weight matrix $V \in \mathbb{R}^{k \times g}$, and $M\mathbf{x}_i$ denotes the projection of the covariates on the clustering defined by $C_1, \ldots, C_g$. This means
$$p(y \mid M\mathbf{x}, V, \mathbf{b}) = \frac{\exp\big(V_{y,:}\,M\mathbf{x} + b_y\big)}{\sum_{c=1}^{k} \exp\big(V_{c,:}\,M\mathbf{x} + b_c\big)},$$
where we define the matrix $M \in \{0, 1\}^{g \times d}$ as follows:
$$M_{r,j} = \begin{cases} 1 & \text{if covariate } j \in C_r, \\ 0 & \text{otherwise}. \end{cases}$$
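For illustration, a short sketch of constructing $M$ and projecting a sample; we assume here that the projection simply sums the covariates within each cluster (an averaged variant would only rescale the corresponding weights in $V$).

```python
import numpy as np

def cluster_projection_matrix(clusters, d):
    """Sketch of the projection matrix M mapping d covariates onto g clusters:
    row r of M indicates the covariates belonging to cluster C_r, so the
    projected covariates are x_tilde = M @ x."""
    g = len(clusters)
    M = np.zeros((g, d))
    for r, members in enumerate(clusters):
        M[r, members] = 1.0
    return M

# usage: x_tilde = cluster_projection_matrix(clusters, d) @ x
```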
Assuming a Gaussian prior $\mathcal{N}(0, \sigma^2)$ on each entry of $V$, i.e., $\log p(V) = -\frac{1}{2\sigma^2}\lVert V \rVert_F^2 - \frac{kg}{2}\log(2\pi\sigma^2)$, the log joint distribution of the observed data and $V$ is
$$\ell(V) := \sum_{i=1}^{n} \ell_i(V) + \log p(V), \qquad \ell_i(V) := V_{y_i,:}\,\tilde{\mathbf{x}}_i + b_{y_i} - \log \sum_{c=1}^{k} \exp\big(V_{c,:}\,\tilde{\mathbf{x}}_i + b_c\big),$$
where $\ell_i$ is the log likelihood of one data point and, for simplicity, we denote $\tilde{\mathbf{x}}_i := M\mathbf{x}_i$. Note that $\tilde{\mathbf{x}}_i \in \mathbb{R}^{g}$. Writing $p_{i,c} := p(c \mid \tilde{\mathbf{x}}_i, V, \mathbf{b})$, the gradient of $\ell$ with respect to $V_{c,:}$ is then
$$\nabla_{V_{c,:}}\, \ell(V) = \sum_{i=1}^{n} \big( \mathbb{1}[y_i = c] - p_{i,c} \big)\, \tilde{\mathbf{x}}_i^{\top} - \frac{1}{\sigma^2} V_{c,:},$$
with $\mathbb{1}[\cdot]$ denoting the indicator function.
The Hessian of $\ell$ with respect to $(V_{c,:}, V_{c',:})$ is
$$\nabla^2_{V_{c,:},\,V_{c',:}}\, \ell(V) = -\sum_{i=1}^{n} p_{i,c}\,\big( \mathbb{1}[c = c'] - p_{i,c'} \big)\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_i^{\top} \;-\; \frac{\mathbb{1}[c = c']}{\sigma^2}\, I_g.$$
Let us denote by $H$ the Hessian of the log joint probability of the observed data and prior, i.e., the $kg \times kg$ matrix with blocks
$$H_{c,c'} := \nabla^2_{V_{c,:},\,V_{c',:}}\, \ell(V),$$
where $H_{c,c'}$ denotes the $(c, c')$th block of $H$. By the Bayesian central limit theorem, we then have that the posterior is approximately normal with covariance matrix $(-H)^{-1}$. Then, applying the Laplace approximation (see e.g. [2]), we get the following approximation for the marginal likelihood $p(\mathbf{y} \mid X, \mathbf{b})$:
$$\log p(\mathbf{y} \mid X, \mathbf{b}) \;\approx\; \ell(\hat{V}) \;+\; \frac{kg}{2}\log(2\pi) \;-\; \frac{1}{2}\log\det\big(-H(\hat{V})\big),$$
where $\hat{V}$ is the maximum a posteriori (MAP) estimate from model (4), with $\mathbf{b}$ set to its estimate (see below). For a large number of clusters, the calculation of $\det\big(-H\big)$ is computationally expensive. Instead, we suggest to use only the diagonal of the Hessian, which can be calculated for one sample $i$ as follows:
$$\big[\nabla^2 \ell_i(V)\big]_{(c,r),(c,r)} = -\,p_{i,c}\,(1 - p_{i,c})\, \tilde{x}_{i,r}^{2},$$
with $p_{i,c} := p(c \mid \tilde{\mathbf{x}}_i, V, \mathbf{b})$, and $\tilde{x}_{i,r}$ denoting the $r$th entry of $\tilde{\mathbf{x}}_i$.
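Putting the pieces together, the following is a minimal sketch of the resulting approximate log marginal likelihood with the diagonal-Hessian simplification (Gaussian prior with variance sigma2 on each entry of V); it is an illustration of the criterion, not the released implementation.

```python
import numpy as np

def approx_log_marginal_likelihood(V_map, b, X_tilde, y, sigma2):
    """Sketch of the Laplace approximation with a diagonal Hessian:
    log p(y | X, b) ~ log-joint at the MAP + (kg/2) log(2 pi)
                      - 0.5 * log det of the (negative) diagonal Hessian.
    V_map: (k, g) MAP cluster weights, X_tilde: (n, g) projected covariates."""
    n, g = X_tilde.shape
    k = V_map.shape[0]
    scores = X_tilde @ V_map.T + b
    scores -= scores.max(axis=1, keepdims=True)
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)                     # (n, k) class probabilities
    log_lik = np.log(P[np.arange(n), y] + 1e-12).sum()
    log_prior = (-0.5 * (V_map ** 2).sum() / sigma2
                 - 0.5 * k * g * np.log(2 * np.pi * sigma2))

    # diagonal of the negative Hessian of the log joint, entry (c, r):
    #   sum_i p_ic * (1 - p_ic) * x_tilde_ir^2  +  1 / sigma2
    neg_hess_diag = (P * (1 - P)).T @ (X_tilde ** 2) + 1.0 / sigma2   # (k, g)

    return (log_lik + log_prior
            + 0.5 * k * g * np.log(2 * np.pi)
            - 0.5 * np.log(neg_hess_diag).sum())
```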
We consider the intercept terms as hyperparameters and use empirical Bayes to estimate them, i.e., we set them to the estimate $\hat{\mathbf{b}}$ obtained when maximizing the joint probability of model (4). The hyperparameter $\sigma^2$ cannot be estimated with empirical Bayes, since the unpenalized maximum likelihood might not be defined, and this would lead to $\sigma^2 \to \infty$. Therefore, we estimate $\sigma^2$ using cross-validation on the full model, i.e., with no clustering. This is computationally feasible since the MAP estimate is an ordinary logistic regression with an $\ell_2$ penalty, and cross-validation needs to be done only once and not for every clustering.
4 Synthetic Data Experiments
In this section, we investigate the performance of our proposed method for covariate clustering, as well as that of our proposed model selection criterion, on synthetic data.
For all synthetic datasets, we set the number of classes to 4. The intercept vector is set to the zero vector. We group the covariates evenly into 10 clusters. The weight vector $\mathbf{w}_j$ for covariate $j$ is set to the all-zero vector except for one position, which is set to a fixed non-zero value. That means each covariate is associated with exactly one class. If two covariates $j$ and $l$ belong to the same cluster, then $\mathbf{w}_j$ and $\mathbf{w}_l$ are set to be equal. Finally, we generate samples from a multivariate normal distribution as follows: given a positive definite covariate similarity matrix $S$, we generate a sample from class $y$ from a multivariate normal distribution with covariance $S$ and a class-dependent mean. For each class we generate the same number of samples. For $S$ we consider two scenarios:
$S$ agrees with the class label information
If covariates $j$ and $l$ are in different clusters, then $s_{j,l}$ is set to 0; otherwise, it is set to a positive constant. An example, where each cluster has four covariates, is shown in Figure 1.
$S$ partly disagrees with the class label information
This setting considers the case when the prior information from $S$ partly disagrees with the class label information. For that purpose, we modify the clustering from before. In particular, a new cluster is introduced that evenly covers covariates from two different clusters that are associated with different class labels. This is repeated once more for two other clusters, so that we get two new clusters. $S$ is then defined as follows. If covariates $j$ and $l$ belong to the same new cluster, then $s_{j,l}$ is set to a positive value. If covariates $j$ and $l$ belong to the same old cluster, and neither belongs to a new cluster, then $s_{j,l}$ is also set to a positive value. In all other cases, $s_{j,l}$ is set to 0. An example is shown in Figure 2.
4.1 Clustering Evaluation
Table 1: ANMI scores of the proposed method, convex clustering, and k-means clustering on synthetic data, for the case where the prior covariate similarity agrees with the class information (“agreeing”) and where it partly disagrees (“disagreeing”). Standard deviation in brackets. Where a second value is given in a cell, it is the highest ANMI score among all clusterings found (oracle model selection performance).

Proposed method
                          n = 40                  n = 400                 n = 4000
  agreeing,    d = 40     1.0 (0.0)               1.0 (0.0)               1.0 (0.0)
  agreeing,    d = 200    1.0 (0.0)               1.0 (0.0)               1.0 (0.0)
  disagreeing, d = 40     1.0 (0.0)               1.0 (0.0)               1.0 (0.0)
  disagreeing, d = 200    1.0 (0.0)               1.0 (0.0)               1.0 (0.0)

Convex clustering
                          n = 40                  n = 400                 n = 4000
  agreeing,    d = 40     1.0 (0.0)               1.0 (0.0)               1.0 (0.0)
  agreeing,    d = 200    1.0 (0.0)               1.0 (0.0)               1.0 (0.0)
  disagreeing, d = 40     0.88 (0.0), 0.88 (0.0)  0.88 (0.0), 0.88 (0.0)  0.88 (0.0), 0.88 (0.0)
  disagreeing, d = 200    0.94 (0.0), 0.94 (0.0)  0.94 (0.0), 0.94 (0.0)  0.94 (0.0), 0.94 (0.0)

k-means clustering
                          n = 40                  n = 400                 n = 4000
  agreeing,    d = 40     0.67 (0.16), 1.0 (0.0)  0.57 (0.14), 1.0 (0.0)  0.77 (0.0), 1.0 (0.0)
  agreeing,    d = 200    0.87 (0.17), 1.0 (0.0)  1.0 (0.0)               1.0 (0.0)
  disagreeing, d = 40     0.86 (0.07), 0.88 (0.0) 0.88 (0.0), 0.88 (0.0)  0.88 (0.0), 0.88 (0.0)
  disagreeing, d = 200    0.94 (0.0), 0.94 (0.0)  0.94 (0.0), 0.94 (0.0)  0.94 (0.0), 0.94 (0.0)
We consider $n \in \{40, 400, 4000\}$ and $d \in \{40, 200\}$. We compare the proposed method to k-means with the initialization proposed in [3] and to convex clustering [6]. For k-means, we consider a range of values for the number of clusters. For convex clustering, the hyperparameter is tested in the range 0 to 5, evenly spaced, and the weights are set using an exponential kernel. (For the larger problems, the convex clustering method led to memory overflow; therefore, we needed to limit the weights to the 5 nearest neighbors.)
We repeat each experiment 10 times and report average and standard deviation of the adjusted normalized mutual information (ANMI) [21] when compared to the true clustering. The ANMI score ranges from 0.0 (agreement with true clustering at pure chance level) to 1.0 (complete agreement with true clustering). All results are summarized in Table 1. For selecting a clustering we use the approximate marginal likelihood selection criterion described in Section 3. If the correct clustering was not found with the marginal likelihood model selection criterion, we also show the oracle model selection performance, i.e. the highest ANMI score among all the clusterings found (small font in Table 1). This allows us to distinguish the failure of creating the correct clustering from the failure of selecting the correct clustering.
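For reference, a small sketch of the evaluation step, assuming scikit-learn's adjusted_mutual_info_score as an implementation of the adjusted mutual information of [21]; the label arrays below are hypothetical.

```python
from sklearn.metrics import adjusted_mutual_info_score

# true_labels[j] and found_labels[j] give the cluster id of covariate j.
true_labels  = [0, 0, 1, 1, 2, 2]          # hypothetical ground-truth clustering
found_labels = [1, 1, 0, 0, 2, 2]          # hypothetical estimated clustering
score = adjusted_mutual_info_score(true_labels, found_labels)   # 1.0 here: labels only permuted
```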
Agreement and disagreement of similarity measure with class-label information
As expected, for the agreeing case, both k-means and convex clustering can find the correct clustering. At least for k-means, this is true when focusing on the oracle model selection performance. However, when the a priori similarity measure disagrees with the class-label information (disagreeing case), then k-means and convex clustering fail to create the correct clustering. The proposed method finds the correct clustering in all cases.
Quality of marginal likelihood model selection criterion
For the disagreeing case, we see a discrepancy between the selected clustering and the best clustering when clustering with k-means. However, in all other cases, we find that the marginal likelihood model selection criterion proposed in Section 3 always selects the best clustering. In particular, for the proposed method, we see that the correct clustering is always selected.
4.2 Runtime Experiments
In order to check the efficiency of our proposed ADMM solution, we compare it to two standard solvers for convex optimization, namely ECOS [9] and SCS [16], using the CVXPY interface [8, 1]. Note that SCS uses a generic ADMM method, for which we use the same stopping criteria as described in Section 2.2. We run all experiments on a cluster with 32 Intel(R) Xeon(R) CPUs at 3.20 GHz.
The results for the synthetic data set (disagreement of similarity measure with class-label information) for various $n$ and $d$ are listed in Table 2. We repeat each experiment 10 times and report the average runtime (wall-clock time) in minutes. In cases where one experiment did not finish after 12 hours, we stopped the evaluation.
Our evaluation shows that the proposed method scales well with the number of samples $n$, but less well with the number of variables $d$. One reason is that the number of auxiliary variables grows quadratically with the number of variables $d$ for dense similarity graphs, and therefore increases the cost per iteration accordingly. (For sparse graphs, the number of auxiliary variables grows only linearly in $d$, and therefore we can expect further speedups.) Nevertheless, our proposed method is considerably faster than the standard solvers. Importantly, our method scales to several thousand variables, which can be crucial when applying it to real data.
Table 2: Average runtime in minutes (standard deviation in brackets). NA indicates that a run did not finish within 12 hours.

Proposed method with proposed ADMM
            n = 40            n = 400           n = 4000
  d = 40    2.224 (0.127)     2.515 (0.118)     4.654 (0.141)
  d = 200   5.012 (0.162)     5.148 (0.29)      7.2 (0.217)
  d = 1000  37.968 (1.073)    35.886 (1.581)    42.118 (2.898)

Proposed method with ECOS
            n = 40            n = 400           n = 4000
  d = 40    3.86 (0.032)      25.439 (0.397)    625.116 (10.744)
  d = 200   40.164 (0.217)    284.789 (2.175)   NA
  d = 1000  NA                NA                NA

Proposed method with SCS
            n = 40            n = 400           n = 4000
  d = 40    59.844 (5.671)    249.591 (7.799)   NA
  d = 200   147.38 (4.952)    NA                NA
  d = 1000  NA                NA                NA
5 Experiments on Real Data
For our experiments on real data, we used the movie review corpus IMDB [13], which consists of in total 100k (labeled and unlabeled) documents. IMDB is a balanced corpus with 50% of the movies being reviewed as “good movie” and the remaining as “bad movie”. As the second dataset, we used the 20 Newsgroups corpus (Newsgroup20, http://people.csail.mit.edu/jrennie/20newsgroups/) with around 19k documents categorized into 20 classes (topics like “hockey”, “computer”, …). In order to check whether our model selection criterion correlates well with accuracy on held-out data, we use 10000 documents for training and clustering selection, and the remaining documents as held-out data (39k and 9k held-out documents for IMDB and Newsgroup20, respectively). We removed all duplicate documents, and performed tokenization and stemming with Senna [7].
5.1 Covariate Selection
First, for each dataset, we extract the 10000 most frequent words as covariates, and represent each document by a vector where each dimension contains the tf-idf score of the corresponding word in the document. (We also perform scaling of each document vector, which is known to improve classification performance; this is done before the normalization of the covariates.) Finally, we normalize the covariates to have mean 0 and variance 1.
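A sketch of this preprocessing with scikit-learn is given below; the per-document l2 scaling is an assumption on our part, and docs stands for the list of raw (tokenized and stemmed) document strings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler, normalize

# Sketch of the covariate construction: 10000 most frequent words as tf-idf
# features, per-document scaling, then standardization of each covariate.
vectorizer = TfidfVectorizer(max_features=10000)
X = vectorizer.fit_transform(docs).toarray()       # (n_documents, 10000)
X = normalize(X, norm="l2")                        # per-document scaling (assumed l2)
X = StandardScaler().fit_transform(X)              # mean 0, variance 1 per covariate
```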
For such high-dimensional problems, many covariates are only negligibly relevant. Therefore, in order to further reduce the dimension, we apply a group lasso penalty on the columns of $W$ as in [20], and select the model with the highest approximate marginal likelihood. (We also use an ADMM algorithm to perform this optimization; we omit the details, since the objective function is the same as in Definition 1 of [20].) This leaves us with 2604 and 1538 covariates for IMDB and Newsgroup20, respectively.
5.2 Covariate Similarity Measure
From additional unlabeled documents, we determine the similarity between two covariates $j$ and $l$ as follows.

First, using the unlabeled datasets, we create for each covariate $j$ a word embedding $\mathbf{v}_j$. For IMDB, we create 50-dimensional word embeddings with word2vec [15] using 75k documents from the IMDB corpus (https://code.google.com/p/word2vec/; word2vec was used with the default settings). For Newsgroup20, since the number of samples is rather small, we use the 300-dimensional word embeddings from GloVe [17] that were trained on Wikipedia + Gigaword 5.

Finally, the similarity $s_{j,l}$ between covariates $j$ and $l$ is calculated from the word embeddings $\mathbf{v}_j$ and $\mathbf{v}_l$.
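For illustration, a sketch of building $S$ from the embeddings; the exponential kernel on squared Euclidean distances used here, and its bandwidth gamma, are assumptions on our part and may differ from the exact similarity function used.

```python
import numpy as np

def similarity_matrix(V, gamma=1.0):
    """Sketch of a covariate similarity matrix built from word embeddings.
    V: (d, dim) array with one embedding per covariate."""
    sq_norms = (V ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (V @ V.T)
    S = np.exp(-gamma * np.maximum(sq_dists, 0.0))   # clip tiny negatives from round-off
    np.fill_diagonal(S, 0.0)                          # no self-edges in the similarity graph
    return S
```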
5.3 Quantitative Results
As a quantitative measure of interpretability, we suggest using the number of effective parameters, which here is just the number of clusters. The ideal covariate clustering should lead to similar, or even better, accuracy on held-out data than the full model (no clustering), while being considerably more compact (a small number of effective parameters).
For selecting a clustering, we use the proposed marginal likelihood criterion (Section 3), and the held-out data is only used for the final evaluation. The results for IMDB and Newsgroup20 are shown in Tables 3 and 5, respectively. We find that the marginal likelihood criterion tends to select models whose accuracy on held-out data is similar to the full model, but with fewer effective parameters. Furthermore, for IMDB, we find that the proposed method leads to considerably more compact models than convex clustering and k-means, while having similar held-out accuracy. On the other hand, for Newsgroup20, the accuracy of the proposed method and k-means clustering is similar, indicating that there is good agreement between the similarity measure and the classification task.

In order to confirm that these conclusions hold independently of the model selection criterion, we also show the held-out accuracy of clusterings with the number of effective parameters around 100, 500, and 1000. Note that, in contrast to k-means, the proposed method and convex clustering can only control the number of clusters indirectly through their regularization hyperparameter. The results for IMDB and Newsgroup20, shown in Tables 4 and 6, confirm that the proposed method can lead to better covariate clusterings than k-means and convex clustering.
Method  Marginal Likelihood  Effective Parameters  Holdout Acc  
No Clustering  3637.6  2604  0.85  

578.4  140  0.85  
582.5  142  0.85  
593.7  132  0.85  
594.0  93  0.85  
601.6  99  0.85  

3637.6  2604  0.85  
3653.7  2602  0.85  
3678.6  2588  0.85  
3684.2  2568  0.85  
3695.4  2535  0.85  

3589.7  2528  0.85  
3589.8  2534  0.85  
3590.0  2532  0.85  
3590.3  2529  0.85  
3590.3  2527  0.85 
Effective Parameters  Proposed  Convex Clustering  k-means Clustering
99  0.85  0.59  0.80 
491  0.84  0.76  0.82 
1010  0.84  0.80  0.83 
Method  Marginal Likelihood  Effective Parameters  Holdout Acc  
No Clustering  10695.8  1538  0.88  

9491.1  1174  0.86  
9712.1  880  0.84  
10064.8  1354  0.87  
10316.2  1432  0.87  
10422.5  1463  0.88  

10695.8  1538  0.88  
10700.0  1534  0.88  
10706.7  1532  0.88  
10707.1  1531  0.88  
10729.8  1528  0.87  

10152.4  1036  0.85  
10153.3  1037  0.85  
10155.2  1035  0.85  
10157.3  1029  0.85  
10157.3  1034  0.85 
Effective Parameters  Proposed  Convex Clustering  k-means Clustering
99  0.65  0.44  0.64 
457  0.8  0.7  0.79 
880  0.84  0.82  0.83 
5.4 Qualitative Results
Like convex clustering, our proposed method can be used to create a hierarchical clustering structure by varying the hyperparameter $\lambda$. In particular, for large enough $\lambda$, all covariates collapse into one cluster. On the other side, for $\lambda = 0$, each covariate is assigned to a separate cluster. In order to ease interpretability, we limit the visualization of the hierarchical clustering structure to all clusterings with $\lambda \geq \lambda^{*}$, where $\lambda^{*}$ corresponds to the clustering that was selected with the marginal likelihood criterion. Furthermore, for the visualization of IMDB, we only show the covariates with odds ratios larger than 1.1. Part of the clustering results of the proposed method, convex clustering, and k-means are shown in Figures 5, 6, 7, 8, and Figures 9, 10, for IMDB and Newsgroup20, respectively. (For comparison, for k-means and convex clustering we choose around the same number of clusters as the proposed method.) In the supplement material, we give the clustering results of all methods for all classes (the clustering result for method m of dataset d corresponding to class c is shown in file “d_m_c.pdf” in the supplement material).

Each node represents one cluster. If the cluster is a leaf node, we show the covariate; otherwise, we show the size of the cluster (in the following called the cluster description). From the logistic regression model of each clustering, we associate each cluster with the class that has the highest entry in the weight matrix, and then calculate the odds ratio with respect to the second largest weight. The color of a node represents its associated class. All classes with their coloring are explained in Figures 3 and 4 for IMDB and Newsgroup20, respectively. The odds ratio of each cluster is shown below the cluster description. Further details of the creation of the hierarchical structure and visualization are given in the supplement material (see file “joint_covariate_clustering_and_classification_visualization.pdf”).
For IMDB, the qualitative differences between the proposed method and k-means are quite obvious. K-means clustering tends to produce clusters that include covariates associated with different classes. An example is shown in Figure 5, where all number ratings are clustered together. However, these have very different meanings, since, for example, 7/10, 8/10, 9/10, and 10/10 rate good movies, whereas 0, 1/10, 2/10, … rate bad movies. We observe a similar result for convex clustering, as shown in Figure 6. In contrast, our proposed method distinguishes them correctly by assigning them to different clusters associated with different classes (see Figures 7 and 8).

For Newsgroup20, the qualitative differences of the proposed method with k-means are more subtle, but we can still identify some interesting differences. For example, k-means assigns “apple” and “cherry” to the same cluster, which is associated with the class “Macintosh” (comp_sys_mac_hardware) (see Figure 9). In contrast, our proposed method correctly distinguishes these two concepts in the context of “Macintosh” (see Figure 10).
6 Conclusions
We presented a new method for covariate clustering that uses an a priori similarity measure between two covariates and, additionally, samples with class label information. In contrast to k-means and convex clustering, the proposed method creates clusters that are a posteriori plausible in the sense that they help to explain the observed data (samples with class label information). Like convex clustering [6], the proposed objective function is convex, and therefore insensitive to initialization heuristics (as needed by k-means clustering).
Solving the convex objective function is computationally challenging. Therefore, we proposed an efficient ADMM algorithm which allows us to scale to several thousand variables. Furthermore, in order to avoid computationally expensive cross-validation, we proposed a marginal likelihood criterion similar to BIC. On synthetic and real data, we confirmed the usefulness of our proposed clustering method and the marginal likelihood criterion.
Supplement Material
Our implementation of the proposed ADMM algorithm and the marginal likelihood criterion is available here
https://github.com/andradestats/convexCovariateClusteringClassification.
References
 [1] Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.
 [2] Tomohiro Ando. Bayesian model selection and statistical modeling. CRC Press, 2010.
 [3] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, 2007.

 [4] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 [5] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
 [6] Eric C Chi and Kenneth Lange. Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, 24(4):994–1013, 2015.
 [7] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537, 2011.
 [8] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
 [9] A. Domahidi, E. Chu, and S. Boyd. ECOS: An SOCP solver for embedded systems. In European Control Conference (ECC), pages 3071–3076, 2013.
 [10] David Hallac, Jure Leskovec, and Stephen Boyd. Network lasso: Clustering and optimization in large graphs. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 387–396. ACM, 2015.
 [11] Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC press, 2015.
 [12] Toby Dylan Hocking, Armand Joulin, Francis Bach, and Jean-Philippe Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, 2011.

 [13] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 142–150, 2011.
 [14] Nicolai Meinshausen. Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393, 2007.
 [15] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
 [16] Brendan O’Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Conic optimization via operator splitting and homogeneous selfdual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, 2016.
 [17] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, 2014.
 [18] Gideon Schwarz. Estimating the dimension of a model. The annals of statistics, 6(2):461–464, 1978.
 [19] Yiyuan She. Sparse regression with exact clustering. Electronic Journal of Statistics, 4:1055–1096, 2010.
 [20] Martin Vincent and Niels Richard Hansen. Sparse group lasso and high dimensional multinomial classification. Computational Statistics & Data Analysis, 71:771–786, 2014.
 [21] Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct):2837–2854, 2010.