Convex Covariate Clustering for Classification

03/05/2019
by Daniel Andrade, et al.

Clustering, like covariate selection for classification, is an important step to understand and interpret the data. However, clustering of covariates is often performed independently of the classification step, which can lead to undesirable clustering results. Therefore, we propose a method that can cluster covariates while taking into account the class label information of samples. We formulate the problem as a convex optimization problem which uses both a-priori similarity information between covariates and information from class-labeled samples. Like convex clustering [Chi and Lange, 2015], the proposed method has a unique global minimum, making it insensitive to initialization. In order to solve the convex problem, we propose a specialized alternating direction method of multipliers (ADMM), which scales up to several thousand variables. Furthermore, in order to circumvent computationally expensive cross-validation, we propose a model selection criterion based on approximate marginal likelihood estimation. Experiments on synthetic and real data confirm the usefulness of the proposed clustering method and the selection criterion.


1 Introduction

Interpretability is paramount to communicate classification and regression results to the domain expert. Especially in high-dimensional problems, sparsity of the solution is often assumed to increase interpretability. As a consequence, much previous work in statistics focuses on covariate selection, with the $\ell_1$-penalty being a particularly popular choice [11].

However, even after successful covariate selection, the remaining solution might still contain several hundreds or thousands of covariates. We therefore suggest clustering the covariates further. In particular, we propose a clustering method that uses a-priori similarity information between covariates while respecting the response variable of the classification problem. The resulting covariate clusters can help to identify meaningful groups of covariates and, in this way, arguably increase the interpretability of the solution.

As a simple motivating example, consider the situation of document classification, where the covariates are words in a document. Without further context, we might want to cluster the words “apple” and “cherry” together. This cluster might be appropriate for classifying a document into either category “cooking” or category “sports”. However, this cluster might be inappropriate for classifying a document as either “sports” or “computer”, due to the ambiguity of “apple” also meaning “Macintosh”.

This demonstrates that a simple two-step approach (first step: covariate clustering; second step: classification) might lead to clusters that are difficult to interpret.

On the other hand, performing classification first and covariate clustering second requires an additional heuristic to determine which covariate is associated with which class. The final clustering result is then highly dependent on this fixed choice.

Therefore, we propose to formulate the clustering problem as a joint sample classification and covariate clustering problem. In particular, we use a logistic regression loss with a pair-wise group lasso penalty on each pair of covariate weight vectors. Our formulation leads to a convex optimization problem, like convex clustering [6, 12]. As a consequence, we have the desirable properties of a unique global minimum and the ability to plot a clustering hierarchy, instead of just a single clustering.

Our proposed method is conceptually related to the joint convex covariate clustering and linear regression proposed in [19]. However, the change from linear regression to logistic regression is computationally challenging. The main reason is that the objective function is no longer decomposable for each covariate. Furthermore, the logistic regression loss is convex, but not, in general, strictly convex, and therefore prohibits the application of the alternating minimization algorithm (AMA) as in [6].

Therefore, we propose a specialized alternating direction method of multipliers (ADMM) with an efficient gradient descent step for the non-decomposable primal update. Our solution allows us to scale the covariate clustering to problems with several thousand covariates.

Since we often want to decide on one clustering result, we also propose a model selection criterion based on an approximated marginal likelihood. The motivation for this criterion is similar to the Bayesian Information Criterion (BIC) [18], but it avoids the issue of a singular likelihood function. The proposed criterion circumvents the need for hyper-parameter selection with cross-validation, which is computationally infeasible here.

The outline of this article is as follows. In the next section, we introduce our proposed objective function and describe an efficient ADMM algorithm for solving the convex optimization problem. In Section 3, we describe our model selection criterion for selecting a plausible clustering result out of all clustering results found with the proposed method. Next, in our experiments (Sections 4 and 5), we compare the proposed method to k-means clustering and convex clustering [6]. In particular, in Section 4, we evaluate the proposed method and the model selection criterion on several synthetic data sets. In Section 4.2, we evaluate the computational runtime of the proposed ADMM algorithm and compare it to standard convex optimization solvers. In Section 5, we evaluate the quality of the clustering result on two real text data sets. Finally, in Section 6, we summarize our conclusions.

A note on our notation: we denote a matrix by a capital letter, e.g. $B$, and a column vector by bold font, e.g. $\mathbf{b}$. Furthermore, the $i$-th row of $B$ is denoted by $B_{i,:}$ and is a row vector. The $j$-th column of $B$ is denoted by $B_{:,j}$, or simply $\boldsymbol{\beta}_j$, and is a column vector.

2 Proposed Method

Let $B \in \mathbb{R}^{c \times d}$, where $c$ is the number of classes and $d$ is the number of covariates. The row $B_{y,:}$ is the weight vector for class $y$. Furthermore, $\mathbf{b} \in \mathbb{R}^{c}$ contains the intercepts. We now assume the multi-class logistic regression classifier defined by
$$p(y \mid \mathbf{x}, B, \mathbf{b}) = \frac{\exp(B_{y,:}\, \mathbf{x} + b_y)}{\sum_{y'=1}^{c} \exp(B_{y',:}\, \mathbf{x} + b_{y'})}.$$

We propose the following formulation for jointly classifying $n$ labeled samples $(\mathbf{x}_i, y_i)$ and clustering the covariates:
$$\min_{B, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, B, \mathbf{b}) \;+\; \lambda \sum_{1 \leq i < j \leq d} s_{ij}\, \|\boldsymbol{\beta}_i - \boldsymbol{\beta}_j\|_2, \qquad (1)$$
where $s_{ij} \geq 0$ defines a similarity measure between covariates $i$ and $j$ and is assumed to be given a-priori. The last term is a group lasso penalty on the class weights for any pair of two covariates $i$ and $j$. The penalty is large for similar covariates, and therefore encourages $\|\boldsymbol{\beta}_i - \boldsymbol{\beta}_j\|_2$ to be $0$, that is, $\boldsymbol{\beta}_i$ and $\boldsymbol{\beta}_j$ to be equal. The clustering of the covariates can then be found by grouping two covariates $i$ and $j$ together if $\boldsymbol{\beta}_i$ and $\boldsymbol{\beta}_j$ are equal.

The advantage of this formulation is that the problem is convex, and we are therefore guaranteed to find a global minimum.

Note that this penalty shares some similarity to convex clustering as in [6, 12]. However, one major difference is that we do not introduce latent vectors for each data point, and our method can jointly learn the classifier and the clustering.

We remark that it is straightforward to additionally add a sparsity penalty on the columns of $B$ to jointly select and cluster all covariates. In the following, we omit such extensions and focus on the computationally difficult part, the clustering. Our implementation (released at https://github.com/andrade-stats/convexCovariateClusteringClassification) allows joint or sequential covariate selection and clustering.

Finally, we remark that the similarity matrix $S$ can be represented as an undirected weighted graph with an edge between covariates $i$ and $j$ iff $s_{ij} > 0$. In practice, $S$ might be manually crafted by a domain expert (e.g. given by some ontology), or learned from a different data set (as we do in Section 5).
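For small problems, the objective in Equation (1) can be written down directly in a generic convex modeling tool. The following is a minimal CVXPY sketch of this formulation; all sizes, data, and variable names are purely illustrative, and for realistic problem sizes the specialized ADMM of the next section is used instead.

```python
import cvxpy as cp
import numpy as np

# toy problem sizes and data (illustrative only)
n, d, c = 60, 8, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                       # samples x covariates
y = rng.integers(0, c, size=n)                    # class labels
Y = np.eye(c)[y]                                  # one-hot encoding, n x c
S = np.abs(rng.normal(size=(d, d)))               # assumed a-priori similarities s_ij
S = (S + S.T) / 2
lam = 0.5

B = cp.Variable((c, d))                           # one column beta_j per covariate
b = cp.Variable(c)                                # intercepts

scores = X @ B.T + np.ones((n, 1)) @ cp.reshape(b, (1, c))            # n x c
log_loss = cp.sum(cp.log_sum_exp(scores, axis=1)) - cp.sum(cp.multiply(Y, scores))

# pairwise, similarity-weighted group lasso on the columns of B
fusion = sum(S[i, j] * cp.norm(B[:, i] - B[:, j], 2)
             for i in range(d) for j in range(i + 1, d))

cp.Problem(cp.Minimize(log_loss + lam * fusion)).solve()
print(np.round(B.value, 3))                       # (near-)equal columns indicate clusters
```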

2.1 Optimization using ADMM

Here, we focus on the optimization problem from Equation (1), which we can rewrite as
$$\min_{B, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, B, \mathbf{b}) \;+\; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} s_{j, e_j(t)}\, \|\boldsymbol{\beta}_j - \boldsymbol{\beta}_{e_j(t)}\|_2,$$
where $n_j$ denotes the number of adjacent covariates of $j$, and the factor $\frac{1}{2}$ accounts for each unordered pair appearing twice in the double sum. Assuming that the adjacent covariates of $j$ are ordered from $1$ to $n_j$, the function $e_j(t)$ returns the global covariate id of the $t$-th adjacent covariate of $j$. We can then formulate the problem as
$$\min_{B, \mathbf{b}, Z} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, B, \mathbf{b}) \;+\; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} s_{j, e_j(t)}\, \|\mathbf{z}_{j, e_j(t)} - \mathbf{z}_{e_j(t), j}\|_2$$

subject to

$$\mathbf{z}_{j, e_j(t)} = \boldsymbol{\beta}_j \quad \text{for all } j \in \{1, \ldots, d\},\; t \in \{1, \ldots, n_j\},$$
where we denote by $Z$ the set of all auxiliary variables $\mathbf{z}_{j,l}$. Therefore, $\mathbf{z}_{j,l}$ can be read as "a copy of $\boldsymbol{\beta}_j$ for the comparison with $l$", where $l$ is an adjacent node of $j$.

Using ADMM, this can be optimized with the following sequence of updates:
$$(B, \mathbf{b})^{k+1} = \operatorname*{argmin}_{B, \mathbf{b}} \; -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, B, \mathbf{b}) + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} \big\|\boldsymbol{\beta}_j - \mathbf{z}^{k}_{j, e_j(t)} + \mathbf{u}^{k}_{j, e_j(t)}\big\|_2^2,$$
$$Z^{k+1} = \operatorname*{argmin}_{Z} \; \frac{\lambda}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} s_{j, e_j(t)} \big\|\mathbf{z}_{j, e_j(t)} - \mathbf{z}_{e_j(t), j}\big\|_2 + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} \big\|\boldsymbol{\beta}^{k+1}_j - \mathbf{z}_{j, e_j(t)} + \mathbf{u}^{k}_{j, e_j(t)}\big\|_2^2,$$
$$\mathbf{u}^{k+1}_{j, e_j(t)} = \mathbf{u}^{k}_{j, e_j(t)} + \boldsymbol{\beta}^{k+1}_j - \mathbf{z}^{k+1}_{j, e_j(t)},$$
where $k$ denotes the current iteration; $\mathbf{u}_{j,l}$ denotes the scaled dual variable for the constraint $\mathbf{z}_{j,l} = \boldsymbol{\beta}_j$; $Z$ denotes the set of all auxiliary variables $\{\mathbf{z}_{j,l}\}$; and $\rho > 0$ is the ADMM penalty parameter.

Update of primal variables

The update of $(B, \mathbf{b})$ can be solved with an approximate (quasi-)Newton method such as L-BFGS [5], where fast calculation of the gradient and of function evaluations is key to an efficient implementation. Let us define
$$g(B, \mathbf{b}) := -\sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i, B, \mathbf{b}) + \frac{\rho}{2} \sum_{j=1}^{d} \sum_{t=1}^{n_j} \big\|\boldsymbol{\beta}_j - \mathbf{z}^{k}_{j, e_j(t)} + \mathbf{u}^{k}_{j, e_j(t)}\big\|_2^2.$$
Due to the second term, the calculation of $g$ requires $O(c \sum_j n_j)$ operations, and therefore $O(c d^2)$ for dense graphs. However, the double sum in the second term can be expressed as follows:
$$\sum_{j=1}^{d} \sum_{t=1}^{n_j} \big\|\boldsymbol{\beta}_j - \mathbf{z}^{k}_{j, e_j(t)} + \mathbf{u}^{k}_{j, e_j(t)}\big\|_2^2 = \sum_{j=1}^{d} \Big( n_j \|\boldsymbol{\beta}_j\|_2^2 - 2\, \boldsymbol{\beta}_j^{\top} \mathbf{m}_j + r_j \Big),$$
where we defined $\mathbf{v}^{k}_{j,t} := \mathbf{z}^{k}_{j, e_j(t)} - \mathbf{u}^{k}_{j, e_j(t)}$, and $\mathbf{m}_j := \sum_{t=1}^{n_j} \mathbf{v}^{k}_{j,t}$, and $r_j := \sum_{t=1}^{n_j} \|\mathbf{v}^{k}_{j,t}\|_2^2$. Since $\mathbf{m}_j$ and $r_j$ can be precalculated, each repeated calculation of the second term is in $O(c d)$.

Furthermore, we note that the gradient of the second term with respect to $\boldsymbol{\beta}_j$ is $\rho\,(n_j \boldsymbol{\beta}_j - \mathbf{m}_j)$, which can likewise be evaluated in $O(c d)$ after precalculation.

Update of auxiliary variables

The update of $\mathbf{z}_{ij}$ and $\mathbf{z}_{ji}$ for each unordered pair $\{i, j\}$ can be performed independently, i.e.:
$$(\mathbf{z}_{ij}, \mathbf{z}_{ji})^{k+1} = \operatorname*{argmin}_{\mathbf{z}_{ij}, \mathbf{z}_{ji}} \; \lambda\, s_{ij} \|\mathbf{z}_{ij} - \mathbf{z}_{ji}\|_2 + \frac{\rho}{2} \Big( \|\boldsymbol{\beta}^{k+1}_i - \mathbf{z}_{ij} + \mathbf{u}^{k}_{ij}\|_2^2 + \|\boldsymbol{\beta}^{k+1}_j - \mathbf{z}_{ji} + \mathbf{u}^{k}_{ji}\|_2^2 \Big).$$
This optimization problem has a closed-form solution, which was proven in a different context in [10], with
$$\mathbf{z}^{k+1}_{ij} = \theta\, \mathbf{a}_i + (1 - \theta)\, \mathbf{a}_j, \qquad \mathbf{z}^{k+1}_{ji} = (1 - \theta)\, \mathbf{a}_i + \theta\, \mathbf{a}_j,$$
where
$$\theta = \max\!\left( 1 - \frac{\lambda\, s_{ij}}{\rho\, \|\mathbf{a}_i - \mathbf{a}_j\|_2},\; 0.5 \right), \qquad (2)$$
with $\mathbf{a}_i := \boldsymbol{\beta}^{k+1}_i + \mathbf{u}^{k}_{ij}$ and $\mathbf{a}_j := \boldsymbol{\beta}^{k+1}_j + \mathbf{u}^{k}_{ji}$.
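As a minimal sketch of this pairwise update, following the closed form of Hallac et al. [10] stated above, the function below computes the new pair of copies; the inputs a_i and a_j are the quantities $\boldsymbol{\beta}^{k+1}_i + \mathbf{u}^{k}_{ij}$ and $\boldsymbol{\beta}^{k+1}_j + \mathbf{u}^{k}_{ji}$, and all names are illustrative.

```python
import numpy as np

def pair_update(a_i, a_j, s_ij, lam, rho):
    """Closed-form update of (z_ij, z_ji) for one unordered pair {i, j}.

    a_i = beta_i + u_ij and a_j = beta_j + u_ji are the updated primal
    columns plus scaled duals; theta is thresholded at 0.5, so the two
    copies become exactly equal once the pair is fused."""
    gap = np.linalg.norm(a_i - a_j)
    theta = 0.5 if gap == 0.0 else max(1.0 - lam * s_ij / (rho * gap), 0.5)
    z_ij = theta * a_i + (1.0 - theta) * a_j
    z_ji = (1.0 - theta) * a_i + theta * a_j
    return z_ij, z_ji
```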

2.2 Stopping criteria

As stopping criteria we use, as suggested in [4], the norms of the primal residual $r^{k}$ (collecting the deviations $\boldsymbol{\beta}^{k}_j - \mathbf{z}^{k}_{j,l}$ over all copies) and of the dual residual $s^{k}$ (collecting the changes $\rho\,(\mathbf{z}^{k}_{j,l} - \mathbf{z}^{k-1}_{j,l})$ over all copies) at iteration $k$. We stop after iteration $k$ if
$$\|r^{k}\|_2 \leq \epsilon \sqrt{2\,|E|\,c} \quad \text{and} \quad \|s^{k}\|_2 \leq \epsilon \sqrt{2\,|E|\,c},$$
where $|E|$ is the total number of edges in our graph, and $\epsilon$ is set to $0.00001$. We also stop once a maximum number of iterations is reached.
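A minimal sketch of this check, assuming the copy-wise residuals are stacked into flat arrays; the tolerance scaling follows the rule above.

```python
import numpy as np

def converged(primal_residual, dual_residual, num_edges, c, eps=1e-5):
    """primal_residual / dual_residual: flat arrays stacking all copy-wise residuals."""
    tol = eps * np.sqrt(2 * num_edges * c)   # one c-dimensional residual per copy, two copies per edge
    return (np.linalg.norm(primal_residual) <= tol and
            np.linalg.norm(dual_residual) <= tol)
```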

2.3 Identifying the covariate clusters

Although in theory the optimization problem ensures that certain columns of $B$ are exactly equal, due to the finite precision of ADMM this is not true in practice. We therefore follow the strategy proposed in [6]: after convergence of the ADMM, we investigate the auxiliary variables $Z$, and place an edge between nodes $i$ and $j$ iff
$$\mathbf{z}_{ij} = \mathbf{z}_{ji}. \qquad (3)$$
The equality (3) can be checked exactly (without numerical difficulties) due to the thresholding of $\theta$ to $0.5$ in Equation (2).

In the resulting graph, there are two possible ways to identify clusters:

  • identify the connected components as clusters (note that a connected component is in general not fully connected);

  • consider only fully connected components as clusters.

Of course, in theory, since the optimization problem is convex, after complete convergence the two ways must result in the same clustering. However, in practice, we find that the latter leads to too many covariates not being clustered (i.e. many covariates end up in singleton clusters). The latter is also computationally difficult, since identifying the largest fully connected component is NP-hard. Therefore, we proceed here with the former strategy.

We denote the identified clusters as $C_1, \ldots, C_k$, where $k$ is the number of clusters and $\{C_1, \ldots, C_k\}$ is a partition of the set of covariates.
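The following is a minimal sketch of this extraction step using networkx; the data layout (a dictionary of auxiliary copies indexed by ordered covariate pairs) is an illustrative assumption.

```python
import networkx as nx
import numpy as np

def extract_clusters(d, z):
    """d: number of covariates; z: dict mapping ordered pairs (i, j) to the copy z_ij
    (c-dimensional arrays), with both (i, j) and (j, i) present for every edge."""
    G = nx.Graph()
    G.add_nodes_from(range(d))
    for (i, j) in {tuple(sorted(pair)) for pair in z}:   # each unordered pair once
        if np.array_equal(z[(i, j)], z[(j, i)]):          # Eq. (3): copies are exactly equal
            G.add_edge(i, j)
    return [sorted(component) for component in nx.connected_components(G)]
```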

3 Approximate Bayesian Model Selection

Note that different settings of the hyper-parameter $\lambda$ will result in different clusterings. For our experiments, we consider a grid of values for $\lambda$. This way, we get a range of clusterings. Like convex clustering, this allows us to plot a clustering hierarchy. However, in most situations we are interested in finding the most plausible clustering, or in ranking the clusterings according to some criterion.

One obvious way is to use cross-validation on the training data to estimate the held-out classification accuracy for each setting of $\lambda$. However, the computational costs are enormous. Another, more subtle issue is that the group lasso terms in Equation (1) jointly perform clustering and shrinkage, controlled by only one parameter $\lambda$. However, the optimal clustering and the optimal shrinkage might not be achieved by the same value of $\lambda$, an issue well known for the lasso penalty [14]. Therefore, we consider here a different model selection strategy: an approximate Bayesian model selection with low computational costs.

After we have found a clustering, we train a new logistic regression classifier with the parameter space for $B$ limited to the clustering. Assuming some prior over the weights, the marginal likelihood provides a trade-off between model fit and the number of parameters (= number of clusters). A popular choice is the Bayesian information criterion (BIC) [18], which assumes that the prior can be asymptotically ignored (that is, the prior does not grow with the number of samples $n$). However, BIC requires that the unpenalized maximum-likelihood estimate is defined. Unfortunately, this is not the case for logistic regression, where linearly separable data leads to infinite weights. For this reason, we suggest using a Gaussian prior on the weights and then estimating the marginal likelihood with a Laplace approximation.

In detail, in order to evaluate the quality of a clustering $\mathcal{C} = \{C_1, \ldots, C_k\}$, we suggest using the marginal likelihood $p(\mathbf{y} \mid X, \mathbf{b}, \mathcal{C})$, where we treat the intercept vector $\mathbf{b}$ as a hyper-parameter. Here, $X$ denotes the design matrix and $\mathbf{y}$ the responses.

We define the following Bayesian model:
$$\Theta \sim p(\Theta), \qquad y_i \mid \mathbf{x}_i \sim p\big(y \mid \pi_{\mathcal{C}}(\mathbf{x}_i), \Theta, \mathbf{b}\big), \qquad (4)$$
where $p(\Theta)$ denotes our prior on the cluster weight matrix $\Theta \in \mathbb{R}^{c \times k}$, and $\pi_{\mathcal{C}}$ denotes the projection of the covariates on the clustering defined by $\mathcal{C}$. This means
$$\pi_{\mathcal{C}}(\mathbf{x}) = M_{\mathcal{C}}^{\top} \mathbf{x},$$
where we define the matrix $M_{\mathcal{C}} \in \{0, 1\}^{d \times k}$ as follows: $(M_{\mathcal{C}})_{j, l} = 1$ iff covariate $j$ belongs to cluster $C_l$, and $0$ otherwise.

Assuming a Gaussian prior with variance $\sigma^2$ on each entry of $\Theta$, the log joint distribution of the observed data and $\Theta$ is given by
$$\log p(\mathbf{y}, \Theta \mid X, \mathbf{b}) = \sum_{i=1}^{n} \log p\big(y_i \mid \pi_{\mathcal{C}}(\mathbf{x}_i), \Theta, \mathbf{b}\big) \;-\; \frac{1}{2\sigma^2}\|\Theta\|_F^2 \;-\; \frac{ck}{2}\log(2\pi\sigma^2).$$

For simplicity, let us denote $\tilde{\mathbf{x}}_i := \pi_{\mathcal{C}}(\mathbf{x}_i)$ and $p_{iy} := p(y \mid \tilde{\mathbf{x}}_i, \Theta, \mathbf{b})$. Note that $\tilde{\mathbf{x}}_i \in \mathbb{R}^{k}$. Then the gradient with respect to the weight vector $\boldsymbol{\theta}_y$ of class $y$ is
$$\nabla_{\boldsymbol{\theta}_y} \log p(\mathbf{y}, \Theta \mid X, \mathbf{b}) = \sum_{i=1}^{n} \big(\mathbb{1}[y_i = y] - p_{iy}\big)\, \tilde{\mathbf{x}}_i \;-\; \frac{1}{\sigma^2}\, \boldsymbol{\theta}_y,$$
with $\mathbb{1}[\cdot]$ denoting the indicator function.

The Hessian with respect to $(\boldsymbol{\theta}_y, \boldsymbol{\theta}_{y'})$ is
$$\nabla_{\boldsymbol{\theta}_y} \nabla_{\boldsymbol{\theta}_{y'}}^{\top} \log p(\mathbf{y}, \Theta \mid X, \mathbf{b}) = -\sum_{i=1}^{n} p_{iy}\big(\mathbb{1}[y = y'] - p_{iy'}\big)\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_i^{\top} \;-\; \frac{1}{\sigma^2}\, \mathbb{1}[y = y']\, I_k.$$

Let us denote by $H(\Theta) \in \mathbb{R}^{ck \times ck}$ the Hessian of the log joint probability of the observed data and prior, i.e.
$$H(\Theta)_{y, y'} = \nabla_{\boldsymbol{\theta}_y} \nabla_{\boldsymbol{\theta}_{y'}}^{\top} \log p(\mathbf{y}, \Theta \mid X, \mathbf{b}),$$
where $H(\Theta)_{y, y'}$ denotes the $(y, y')$-th block of $H(\Theta)$. By the Bayesian central limit theorem, we then have that the posterior is approximately normal with covariance matrix $-H(\Theta_{\text{MAP}})^{-1}$. Then, applying the Laplace approximation (see e.g. [2]), we get the following approximation for the marginal likelihood $p(\mathbf{y} \mid X, \mathbf{b})$:
$$\log p(\mathbf{y} \mid X, \mathbf{b}) \approx \log p(\mathbf{y} \mid X, \Theta_{\text{MAP}}, \mathbf{b}) + \log p(\Theta_{\text{MAP}}) + \frac{ck}{2}\log(2\pi) - \frac{1}{2}\log \det\big(-H(\Theta_{\text{MAP}})\big),$$
where $\Theta_{\text{MAP}}$ and $\mathbf{b}$ are the maximum a-posteriori (MAP) estimates from model (4). For a large number of clusters, the calculation of $\det\big(-H(\Theta_{\text{MAP}})\big)$ is computationally expensive. Instead, we suggest using only the diagonal of the Hessian, which can be calculated for one sample $i$ as
$$\big[-H_i(\Theta)\big]_{(y, l), (y, l)} = p_{iy}\,(1 - p_{iy})\, \tilde{x}_{il}^2,$$
with $\tilde{x}_{il}$ denoting the $l$-th entry of $\tilde{\mathbf{x}}_i$, and the prior contributing an additional $1/\sigma^2$ to each diagonal entry.

We consider the intercept terms as hyper-parameters and use empirical Bayes to estimate them, i.e. we set them to the values that maximize the (approximate) marginal likelihood. The hyper-parameter $\sigma^2$ cannot be estimated with empirical Bayes, since the unpenalized maximum likelihood might not be defined, which would lead to $\sigma^2 \to \infty$. Therefore, we estimate $\sigma^2$ using cross-validation on the full model, i.e. with no clustering. This is computationally feasible since the MAP estimate is an ordinary logistic regression with an $\ell_2$ penalty, and the cross-validation needs to be done only once and not for every clustering.
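The following is a minimal numerical sketch of this criterion under assumptions noted in the comments (cluster projection by summing the covariates of a cluster, MAP estimation via scikit-learn's L2-penalized logistic regression); it illustrates the diagonal-Hessian Laplace approximation and is not the implementation released with the paper.

```python
import numpy as np
from scipy.special import softmax
from sklearn.linear_model import LogisticRegression


def project_to_clusters(X, clusters):
    """Collapse covariates to clusters (assumed here: by summing the columns of each cluster)."""
    return np.column_stack([X[:, list(cl)].sum(axis=1) for cl in clusters])


def approx_log_marginal_likelihood(X_proj, y, W, b, sigma2):
    """Diagonal-Hessian Laplace approximation of log p(y | X, b) at the MAP (W, b)."""
    n, k = X_proj.shape
    c = W.shape[0]                                    # number of classes
    P = softmax(X_proj @ W.T + b, axis=1)             # n x c class probabilities

    loglik = np.log(P[np.arange(n), y] + 1e-300).sum()                       # fit at the MAP
    logprior = -0.5 * (W ** 2).sum() / sigma2 - 0.5 * c * k * np.log(2 * np.pi * sigma2)

    # diagonal of the negative Hessian: sum_i p_iy (1 - p_iy) x_il^2 + 1 / sigma2
    h_diag = (P * (1.0 - P)).T @ (X_proj ** 2) + 1.0 / sigma2                # c x k

    return (loglik + logprior
            + 0.5 * c * k * np.log(2 * np.pi)
            - 0.5 * np.log(h_diag).sum())


# toy usage: random data and an assumed clustering of six covariates into three clusters
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 3, size=200)
clusters = [{0, 1}, {2, 3}, {4, 5}]
X_proj = project_to_clusters(X, clusters)
sigma2 = 1.0
# rough MAP via L2-penalized logistic regression (sklearn's C plays the role of the prior variance)
clf = LogisticRegression(C=sigma2, max_iter=1000).fit(X_proj, y)
print(approx_log_marginal_likelihood(X_proj, y, clf.coef_, clf.intercept_, sigma2))
```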

4 Synthetic Data Experiments

In this section, we investigate the performance of our proposed method for covariate clustering, as well as the performance of our proposed model selection criterion, on synthetic data.

For all synthetic datasets, we set the number of classes to 4. The intercept vector is set to the zero vector. We group the covariates evenly into 10 clusters. The weight vector $\boldsymbol{\beta}_j$ for covariate $j$ is set to the all-zero vector, except for one position, which is set to a fixed non-zero constant. That means each covariate is associated with exactly one class. If two covariates $i$ and $j$ belong to the same cluster, then $\boldsymbol{\beta}_i$ and $\boldsymbol{\beta}_j$ are set to be equal. Finally, we generate samples from a multivariate normal distribution as follows: given a positive definite covariate similarity matrix $S$, we generate a sample from class $y$ by drawing from a multivariate normal distribution with a class-dependent mean and covariance matrix $S$. For each class we generate the same number of samples.

For $S$ we consider two scenarios:

$S$ agrees with class label information

If covariates $i$ and $j$ are in different clusters, then $s_{ij} = 0$; otherwise, $s_{ij}$ is set to a fixed positive constant. An example, where each cluster has four covariates, is shown in Figure 1.

Figure 1: Shows the first 20 covariates with their associated cluster labels (ground truth), associated classes, and similarities between the covariates. A grey bar at the same height means that the covariates are similar to each other, i.e. $s_{ij} > 0$. Here, the similarity agrees with the class information.

$S$ partly disagrees with class label information

This setting considers the case when the prior information from $S$ partly disagrees with the class label information. For that purpose, we modify the clustering from before. In particular, a new cluster is introduced that evenly covers covariates from two different clusters that are associated with different class labels. This is repeated once more for two other clusters. This way we get two new clusters, which we denote $\tilde{C}_1$ and $\tilde{C}_2$. $S$ is then defined as follows. If covariates $i$ and $j$ belong to the same new cluster, then $s_{ij}$ is set to the fixed positive constant from before. If covariates $i$ and $j$ belong to the same old cluster, and neither belongs to a new cluster, then $s_{ij}$ is also set to that constant. In all other cases $s_{ij}$ is set to 0. An example is shown in Figure 2.

Figure 2: Shows the first 20 covariates with their associated cluster labels (ground truth), associated classes, and similarities between the covariates. A grey bar at the same height means that the covariates are similar to each other, i.e. $s_{ij} > 0$. Here, the similarity partly disagrees with the class information.
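A minimal sketch of this data generation for the "agreeing" scenario is given below; the nonzero weight magnitude, the within-cluster similarity value, the round-robin assignment of clusters to classes, and the use of the ground-truth weight rows as class means are assumptions made for illustration.

```python
import numpy as np

def make_synthetic_agreeing(n_per_class, d, n_classes=4, n_clusters=10,
                            weight=1.0, within_sim=0.9, seed=0):
    """Generate one synthetic data set for the 'agreeing' scenario (d divisible by n_clusters)."""
    rng = np.random.default_rng(seed)
    cluster_of = np.repeat(np.arange(n_clusters), d // n_clusters)   # even ground-truth clusters
    class_of_cluster = np.arange(n_clusters) % n_classes             # each cluster -> one class (assumed)
    B_true = np.zeros((n_classes, d))
    for j in range(d):
        B_true[class_of_cluster[cluster_of[j]], j] = weight          # one nonzero entry per covariate
    # covariate similarity / covariance: block structure agreeing with the clusters
    S = np.eye(d)
    same_cluster = cluster_of[:, None] == cluster_of[None, :]
    S[same_cluster] = within_sim
    np.fill_diagonal(S, 1.0)
    # samples of class y drawn from N(mean_y, S); mean_y taken as the y-th row of B_true (assumed)
    X = np.vstack([rng.multivariate_normal(B_true[c], S, size=n_per_class)
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y, cluster_of

X, y, true_clusters = make_synthetic_agreeing(n_per_class=10, d=40)
```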

4.1 Clustering Evaluation

Proposed method
  agreeing                n=40                      n=400                     n=4000
    d=40                  1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)
    d=200                 1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)
  disagreeing
    d=40                  1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)
    d=200                 1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)

Convex clustering
  agreeing                n=40                      n=400                     n=4000
    d=40                  1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)
    d=200                 1.0 (0.0)                 1.0 (0.0)                 1.0 (0.0)
  disagreeing
    d=40                  0.88 (0.0) [0.88 (0.0)]   0.88 (0.0) [0.88 (0.0)]   0.88 (0.0) [0.88 (0.0)]
    d=200                 0.94 (0.0) [0.94 (0.0)]   0.94 (0.0) [0.94 (0.0)]   0.94 (0.0) [0.94 (0.0)]

k-means clustering
  agreeing                n=40                      n=400                     n=4000
    d=40                  0.67 (0.16) [1.0 (0.0)]   0.57 (0.14) [1.0 (0.0)]   0.77 (0.0) [1.0 (0.0)]
    d=200                 0.87 (0.17) [1.0 (0.0)]   1.0 (0.0)                 1.0 (0.0)
  disagreeing
    d=40                  0.86 (0.07) [0.88 (0.0)]  0.88 (0.0) [0.88 (0.0)]   0.88 (0.0) [0.88 (0.0)]
    d=200                 0.94 (0.0) [0.94 (0.0)]   0.94 (0.0) [0.94 (0.0)]   0.94 (0.0) [0.94 (0.0)]

Table 1: Shows the ANMI score of the proposed method, convex clustering, and k-means clustering on synthetic data, with the prior covariate similarity agreeing and disagreeing with the class information. Standard deviation in round brackets. Where given, the value in square brackets shows the highest ANMI score among all clusterings found (oracle model selection performance).

We consider $d \in \{40, 200\}$ and $n \in \{40, 400, 4000\}$. We compare the proposed method to k-means with the initialization proposed in [3] and to convex clustering [6]. For k-means, we consider a range of values for the number of clusters. For convex clustering, the hyper-parameter is tested in the range 0 to 5, evenly spaced, and the weights are set with an exponential kernel. (For the larger problem sizes, the convex clustering method led to memory overflow; therefore, we needed to limit the weights to the 5 nearest neighbors.)

We repeat each experiment 10 times and report the average and standard deviation of the adjusted normalized mutual information (ANMI) [21] when compared to the true clustering. The ANMI score ranges from 0.0 (agreement with the true clustering at pure chance level) to 1.0 (complete agreement with the true clustering). All results are summarized in Table 1. For selecting a clustering we use the approximate marginal likelihood selection criterion described in Section 3. If the correct clustering was not found with the marginal likelihood model selection criterion, we also show the oracle model selection performance, i.e. the highest ANMI score among all the clusterings found (shown in square brackets in Table 1). This allows us to distinguish the failure of creating the correct clustering from the failure of selecting the correct clustering.
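The ANMI comparison against the ground-truth clustering can be computed, for example, with scikit-learn's implementation of the adjusted mutual information of Vinh et al. [21]; the label vectors below are illustrative.

```python
from sklearn.metrics import adjusted_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]    # ground-truth cluster id per covariate
found_labels = [1, 1, 0, 0, 2, 2]   # cluster id assigned by a method
print(adjusted_mutual_info_score(true_labels, found_labels))  # 1.0: identical partition
```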

Agreement and disagreement of similarity measure with class-label information

As expected, for the agreeing case, both k-means and convex clustering can find the correct clustering; for k-means, this is true at least when focusing on the oracle model selection performance. However, when the a-priori similarity measure disagrees with the class-label information (disagreeing case), k-means and convex clustering fail to create the correct clustering. The proposed method finds the correct clustering in all cases.

Quality of marginal likelihood model selection criterion

For the disagreeing case, we see a discrepancy between the selected clustering and the best clustering, when clustering with k-means. However, in all other cases, we find that the marginal likelihood model selection criterion proposed in Section 3 always selects the best clustering. In particular, for the proposed method, we see that the correct clustering is always selected.

4.2 Runtime Experiments

In order to check the efficiency of our proposed ADMM solution, we compare it to two standard solvers for convex optimization, namely ECOS [9] and SCS [16], using the CVXPY interface [8, 1]. Note that SCS uses a generic ADMM method, for which we use the same stopping criteria described in Section 2.2. We run all experiments on a cluster with 32 Intel(R) Xeon(R) CPUs at 3.20GHz.

The results for the synthetic data set (disagreement of the similarity measure with the class-label information) with $d \in \{40, 200, 1000\}$ and $n \in \{40, 400, 4000\}$ are listed in Table 2. We repeat each experiment 10 times and report the average runtime (wall-clock time) in minutes. In cases where one experiment did not finish after 12 hours, we stopped the evaluation.

Our evaluation shows that the proposed method scales well with the number of samples $n$, but less well with the number of variables $d$. One reason is that the number of auxiliary variables grows quadratically with the number of variables $d$, and therefore the cost per iteration increases accordingly. (For sparse graphs, the number of auxiliary variables grows only linearly in $d$, and therefore we can expect further speed-ups.) Nevertheless, our proposed method is considerably faster than the standard solvers. Importantly, our method scales to several thousand variables, which can be crucial when applying it to real data.

Proposed method with proposed ADMM
            n=40               n=400              n=4000
  d=40      2.224 (0.127)      2.515 (0.118)      4.654 (0.141)
  d=200     5.012 (0.162)      5.148 (0.29)       7.2 (0.217)
  d=1000    37.968 (1.073)     35.886 (1.581)     42.118 (2.898)

Proposed method with ECOS
            n=40               n=400              n=4000
  d=40      3.86 (0.032)       25.439 (0.397)     625.116 (10.744)
  d=200     40.164 (0.217)     284.789 (2.175)    NA
  d=1000    NA                 NA                 NA

Proposed method with SCS
            n=40               n=400              n=4000
  d=40      59.844 (5.671)     249.591 (7.799)    NA
  d=200     147.38 (4.952)     NA                 NA
  d=1000    NA                 NA                 NA

Table 2: Average runtime (in minutes, with standard deviation in brackets) of the proposed ADMM solution and of the two standard solvers ECOS [9] and SCS [16]. NA (not available) marks experiments where one run did not stop after 12 hours.

5 Experiments on Real Data

For our experiments on real data, we used the movie review corpus IMDB [13], which consists of in total 100k (labeled and unlabeled) documents. IMDB is a balanced corpus, with 50% of the movies being reviewed as “good movie” and the remaining as “bad movie”. As the second dataset, we used the 20 Newsgroups corpus (Newsgroup20, http://people.csail.mit.edu/jrennie/20newsgroups/) with around 19k documents categorized into 20 classes (topics like “hockey”, “computer”, ...). In order to check whether our model selection criterion correlates well with accuracy on held-out data, we use 10000 documents for training and clustering selection, and the remaining documents as held-out data (39k and 9k held-out documents for IMDB and Newsgroup20, respectively). We removed all duplicate documents, and performed tokenization and stemming with Senna [7].

5.1 Covariate Selection

First, for each dataset we extract the 10000 most frequent words as covariates, and represent each document by a vector where each dimension contains the tf-idf score of the corresponding word in the document. (We also perform scaling of each document vector, which is known to improve classification performance; this is done before the normalization of the covariates.)

Finally, we normalize the covariates to have mean 0 and variance 1.
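A minimal sketch of this preprocessing with scikit-learn (tf-idf features for the most frequent words, then standardization); the toy documents are illustrative, and the Senna-based tokenization/stemming and the per-document scaling mentioned above are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler

docs = ["a great movie , truly great", "a bad movie", "bad acting , bad plot"]
vectorizer = TfidfVectorizer(max_features=10000)   # keep the most frequent words
X = vectorizer.fit_transform(docs).toarray()       # documents x words, tf-idf scores
X = StandardScaler().fit_transform(X)              # each covariate: mean 0, variance 1
```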

For such high-dimensional problems, many covariates are only negligibly relevant. Therefore, in order to further reduce the dimension, we apply a group lasso penalty on the columns of $B$, as in [20], and select the model with the highest approximate marginal likelihood. (We also use an ADMM algorithm to perform this optimization; we omit the details, since the objective function is the same as in Definition 1 in [20].) This leaves us with 2604 and 1538 covariates for IMDB and Newsgroup20, respectively.

5.2 Covariate Similarity Measure

From additional unlabeled documents, we determine the similarity between two covariates $i$ and $j$ as follows.

First, using the unlabeled datasets, we create a word embedding for each covariate. For IMDB, we create 50-dimensional word embeddings with word2vec [15] (https://code.google.com/p/word2vec/, used with its default settings) using 75k documents from the IMDB corpus. For Newsgroup20, since the number of samples is rather small, we use the 300-dimensional word embeddings from GloVe [17] that were trained on Wikipedia + Gigaword 5.

Finally, the similarity between covariates $i$ and $j$ is calculated from their word embeddings.
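The exact similarity formula is not fully recoverable from this text; as one plausible and commonly used choice, the following sketch computes a clipped cosine similarity between the two word embeddings.

```python
import numpy as np

def covariate_similarity(e_i, e_j):
    """Clipped cosine similarity between the embeddings of covariates i and j (assumed choice)."""
    cos = np.dot(e_i, e_j) / (np.linalg.norm(e_i) * np.linalg.norm(e_j))
    return max(cos, 0.0)   # assumed: negative similarities are treated as 0
```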

5.3 Quantitative Results

As a quantitative measure of interpretability, we suggest using the number of effective parameters, which here is just the number of clusters. The ideal covariate clustering should lead to similar, or even better, accuracy on held-out data than the full model (no clustering), while being considerably more compact (fewer effective parameters).

For selecting a clustering, we use the proposed marginal likelihood criterion (Section 3), and the held-out data is only used for the final evaluation. The results for IMDB and Newsgroup20 are shown in Tables 3 and 5, respectively. We find that the marginal likelihood criterion tends to select models whose accuracy on held-out data is similar to that of the full model, but with a smaller number of effective parameters. Furthermore, for IMDB, we find that the proposed method leads to considerably more compact models than convex clustering and k-means, while having similar held-out accuracy. On the other hand, for Newsgroup20, the accuracy of the proposed method and of k-means clustering is similar, indicating that there is good agreement between the similarity measure and the classification task.

In order to confirm that these conclusions hold independently of the model selection criterion, we also show the held-out accuracy of clusterings with the number of effective parameters being around 100, 500, and 1000. Note that, in contrast to k-means, the proposed method and convex clustering can control the number of clusters only indirectly through their regularization hyper-parameter. The results for IMDB and Newsgroup20, shown in Tables 4 and 6, confirm that the proposed method can lead to better covariate clusterings than k-means and convex clustering.

Method                 Marginal Likelihood   Effective Parameters   Held-out Acc
No Clustering          -3637.6               2604                   0.85
Proposed Clustering    -578.4                140                    0.85
                       -582.5                142                    0.85
                       -593.7                132                    0.85
                       -594.0                93                     0.85
                       -601.6                99                     0.85
Convex Clustering      -3637.6               2604                   0.85
                       -3653.7               2602                   0.85
                       -3678.6               2588                   0.85
                       -3684.2               2568                   0.85
                       -3695.4               2535                   0.85
k-means Clustering     -3589.7               2528                   0.85
                       -3589.8               2534                   0.85
                       -3590.0               2532                   0.85
                       -3590.3               2529                   0.85
                       -3590.3               2527                   0.85
Table 3: Results for the IMDB dataset. Shows the top 5 clustering results for each method, measured by the marginal likelihood, together with the number of effective parameters and the accuracy on held-out data. Sorted according to marginal likelihood.
Effective Parameters   Proposed   Convex Clustering   k-means Clustering
99                     0.85       0.59                0.80
491                    0.84       0.76                0.82
1010                   0.84       0.80                0.83
Table 4: Results for the IMDB dataset. Comparison of the held-out accuracy of the proposed method, convex clustering, and k-means for around the same number of effective parameters: 99, 491, and 1010 (for convex clustering: 85, 509, and 1008).
Method                 Marginal Likelihood   Effective Parameters   Held-out Acc
No Clustering          -10695.8              1538                   0.88
Proposed Clustering    -9491.1               1174                   0.86
                       -9712.1               880                    0.84
                       -10064.8              1354                   0.87
                       -10316.2              1432                   0.87
                       -10422.5              1463                   0.88
Convex Clustering      -10695.8              1538                   0.88
                       -10700.0              1534                   0.88
                       -10706.7              1532                   0.88
                       -10707.1              1531                   0.88
                       -10729.8              1528                   0.87
k-means Clustering     -10152.4              1036                   0.85
                       -10153.3              1037                   0.85
                       -10155.2              1035                   0.85
                       -10157.3              1029                   0.85
                       -10157.3              1034                   0.85
Table 5: Results for the Newsgroup20 dataset. Shows the top 5 clustering results for each method, measured by the marginal likelihood, together with the number of effective parameters and the accuracy on held-out data. Sorted according to marginal likelihood.
Effective Parameters   Proposed   Convex Clustering   k-means Clustering
99                     0.65       0.44                0.64
457                    0.8        0.7                 0.79
880                    0.84       0.82                0.83
Table 6: Results for the Newsgroup20 dataset. Comparison of the held-out accuracy of the proposed method, convex clustering, and k-means for around the same number of effective parameters: 99, 457, and 880 (for convex clustering: 189, 552, and 1000).

5.4 Qualitative Results

Like convex clustering, our proposed method can be used to create a hierarchical clustering structure by varying the hyper-parameter $\lambda$. In particular, for large enough $\lambda$, all covariates collapse into one cluster. On the other side, for $\lambda = 0$, each covariate is assigned to a separate cluster. In order to ease interpretability, we limit the visualization of the hierarchical clustering structure to all clusterings with $\lambda \geq \lambda^{*}$, where $\lambda^{*}$ corresponds to the clustering that was selected with the marginal likelihood criterion. Furthermore, for the visualization of IMDB, we only show the covariates with odds ratios larger than 1.1. Parts of the clustering results of the proposed method, convex clustering, and k-means are shown in Figures 5, 6, 7, 8 and Figures 9, 10 for IMDB and Newsgroup20, respectively. (For comparison, for k-means and convex clustering we choose around the same number of clusters as for the proposed method.) In the supplement material, we give the clustering results of all methods for all classes (the clustering result for method m of dataset d corresponding to class c is shown in the file “d_m_c.pdf”).

Each node represents one cluster. If the cluster is a leaf node, we show the covariate; otherwise, we show the size of the cluster (in the following called the cluster description). From the logistic regression model of each clustering, we associate each cluster to the class with the highest entry in the weight matrix, and then calculate the odds ratio to the second-largest weight. The color of a node represents its associated class. All classes with their coloring are explained in Figures 3 and 4 for IMDB and Newsgroup20, respectively. The odds ratio of each cluster is shown below the cluster description. Further details on the creation of the hierarchical structure and the visualization are given in the supplement material (see the file “joint_covariate_clustering_and_classification_visualization.pdf”).
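A minimal sketch of this cluster-to-class assignment follows; interpreting the odds ratio as the exponentiated difference of the two largest class weights is an assumption made for illustration.

```python
import numpy as np

def cluster_class_and_odds(theta_cluster):
    """theta_cluster: weight vector of one cluster, one entry per class.

    Returns the class with the largest weight and an odds ratio relative to the
    second-largest weight, taken here as exp(largest - second largest) (assumed)."""
    order = np.argsort(theta_cluster)[::-1]
    best, second = theta_cluster[order[0]], theta_cluster[order[1]]
    return int(order[0]), float(np.exp(best - second))
```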

For IMDB, the qualitative differences between the proposed method and k-means are quite obvious. K-means clustering tends to produce clusters that include covariates associated with different classes. An example is shown in Figure 5, where all number ratings are clustered together. However, these have very different meanings, since, for example, 7/10, 8/10, 9/10, 10/10 rate good movies and 0, 1/10, *1/, 2/10, ... rate bad movies. We observe a similar result for convex clustering, as shown in Figure 6. In contrast, our proposed method distinguishes them correctly by assigning them to different clusters associated with different classes (see Figures 7 and 8).

For Newsgroup20, the qualitative differences between the proposed method and k-means are more subtle, but we can still identify some interesting differences. For example, k-means assigns “apple” and “cherry” to the same cluster, which is associated with the class “Macintosh” (comp_sys_mac_hardware) (see Figure 9). In contrast, our proposed method correctly distinguishes these two concepts in the context of “Macintosh” (see Figure 10).

Figure 3: Coloring of each class in IMDB.
Figure 4: Coloring of each class in Newsgroup20.
Figure 5: Shows part of the clustering result of k-means clustering on IMDB for the class bad movie.
Figure 6: Shows part of the clustering result of convex clustering on IMDB for the class bad movie.
Figure 7: Shows part of the clustering result of the proposed method on IMDB for the class bad movie.
Figure 8: Shows part of the clustering result of the proposed method on IMDB for the class good movie.
Figure 9: Shows part of the clustering result of the k-means clustering on Newsgroup20 for the class Macintosh (hardware).
Figure 10: Shows part of the clustering result of the proposed method on Newsgroup20 for the class Macintosh (hardware).

6 Conclusions

We presented a new method for covariate clustering that uses an a-priori similarity measure between two covariates, and additionally, samples with class label information. In contrast to k-means and convex clustering, the proposed method creates clusters that are a-posteriori plausible in the sense that they help to explain the observed data (samples with class label information). Like convex clustering [6], the proposed objective function is convex, and therefore insensitive to heuristics for initializations (as needed by k-means clustering).

Solving the convex objective function is computationally challenging. Therefore, we proposed an efficient ADMM algorithm which allows us to scale to several thousand variables. Furthermore, in order to avoid computationally expensive cross-validation, we proposed a marginal likelihood criterion similar to BIC. On synthetic and real data, we confirmed the usefulness of our proposed clustering method and of the marginal likelihood criterion.

Supplement Material

Our implementation of the proposed ADMM algorithm and the marginal likelihood criterion is available here
https://github.com/andrade-stats/convexCovariateClusteringClassification.

References

  • [1] Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.
  • [2] Tomohiro Ando. Bayesian model selection and statistical modeling. CRC Press, 2010.
  • [3] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, 2007.
  • [4] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
  • [5] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
  • [6] Eric C Chi and Kenneth Lange. Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, 24(4):994–1013, 2015.
  • [7] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537, 2011.
  • [8] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
  • [9] A. Domahidi, E. Chu, and S. Boyd. ECOS: An SOCP solver for embedded systems. In European Control Conference (ECC), pages 3071–3076, 2013.
  • [10] David Hallac, Jure Leskovec, and Stephen Boyd. Network lasso: Clustering and optimization in large graphs. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 387–396. ACM, 2015.
  • [11] Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC press, 2015.
  • [12] Toby Dylan Hocking, Armand Joulin, Francis Bach, and Jean-Philippe Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, 2011.
  • [13] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 142–150, 2011.
  • [14] Nicolai Meinshausen. Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393, 2007.
  • [15] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
  • [16] Brendan O’Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, 2016.
  • [17] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, 2014.
  • [18] Gideon Schwarz. Estimating the dimension of a model. The annals of statistics, 6(2):461–464, 1978.
  • [19] Yiyuan She. Sparse regression with exact clustering. Electronic Journal of Statistics, 4:1055–1096, 2010.
  • [20] Martin Vincent and Niels Richard Hansen. Sparse group lasso and high dimensional multinomial classification. Computational Statistics & Data Analysis, 71:771–786, 2014.
  • [21] Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct):2837–2854, 2010.