We consider the semi-supervised clustering problem where crowdsourcing provides noisy information about the pairwise comparisons on a small subset of data, i.e., whether a sample pair is in the same cluster. We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset. The two parts share the latent variables. To let the model automatically trade off between its complexity and fitting the data, we also develop its fully Bayesian variant. The challenge of inference is addressed by fast (natural-gradient) stochastic variational inference algorithms, where we effectively combine variational message passing for the relational part and amortized learning of the DGM under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.
Clustering is a classic data analysis problem that arises when the taxonomy of the data is unknown in advance. Its main goal is to divide samples into disjoint clusters based on the similarity between them. Clustering is useful in various application areas including computer vision shi2000normalized, bioinformatics wiwie2015comparing, anomaly detection chandola2009anomaly, etc. When the feature vectors of samples are observed, most clustering algorithms require a similarity or distance metric defined in the feature space, so that the optimization objective can be built. Since different metrics may result in entirely different clustering results, and general geometric metrics may not match the intentions of the task designer, many clustering approaches learn the metric from side-information provided by domain experts xing2003distance; the manual labeling procedure of experts can thus become a bottleneck of the learning pipeline.

Crowdsourcing is an efficient way to collect human feedback howe2006rise. It distributes micro-tasks to a group of ordinary web workers in parallel, so the whole task can be done quickly at relatively low cost. It has been used to annotate large-scale machine learning datasets such as ImageNet
deng2009imagenet, and it can also be used to collect side-information for clustering. However, directly collecting labels from crowds may lead to low-quality results due to the workers' lack of expertise. Consider an example of labeling a set of images of flowers from different species. One could show images to the web workers and ask them to identify the corresponding species, but such tasks require the workers to be experts in identifying the flowers and to have all the species in mind, which is not always possible. A more reasonable and easier task is to ask the workers to compare pairs of flower images and to answer whether they are of the same species or not. Specific clustering methods are then required to discover the clusters from the noisy feedback.

To solve such clustering problems with pairwise similarity labels between samples from the crowds, crowdclustering gomes2011crowdclustering discovers the clusters within the dataset using a Bayesian hierarchical model. By explicitly modeling the mistakes and preferences of web workers, its outputs match human judgments of the clustering task. This method greatly reduces the labeling cost compared with expert labeling. However, the cost still grows quadratically with the dataset size, so it remains suitable only for small datasets. In this work, we move one step further and consider the semi-supervised crowdclustering problem that jointly models the feature vectors and the crowdsourced pairwise labels for only a subset of samples. By controlling the size of the subset labeled by crowds, the total labeling budget and time can be controlled. A similar problem has been discussed by yi2012semi, but the authors use a linear similarity function defined on low-level object features and ignore the noise and inter-worker variation in the manual annotations.
Different from existing approaches, we propose a semi-supervised deep Bayesian model to jointly model the generation of the labels and the raw features for both crowd-labeled and unlabeled samples. Instead of the direct usage of low-level features, we build a flexible deep generative model (DGM) to capture the latent representation of data, which is more suitable to express the semantic similarity than the low-level features. The crowdsourced pairwise labels are modeled by a statistical relational model, and the two parts (i.e., DGM and the relational model) share the same latent variables. We also investigate the fully Bayesian variant of this model so that it can automatically control its complexity. Due to the intractability of exact inference, we develop fast (natural-gradient) stochastic variational inference algorithms. To address the challenges in fully Bayesian inference over model parameters, we effectively combine variational message passing and natural gradient updates for the conjugate part (i.e., the relational model and the mixture model) and amortized learning of the nonconjugate part (i.e., DGM) under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.
In this section, we propose semi-crowdsourced clustering with deep generative models for directly modeling the raw data, which enables end-to-end training. We call the model Semi-crowdsourced Deep Clustering (SCDC), whose graphical model is shown in Figure 1. This model is composed of two parts: the raw data model handles the generative process of the observations $\mathbf{X}$; the crowdsourcing behavior model on labels describes the labeling procedure of the workers. The details of each part are introduced below.

We denote the raw data observations by $\mathbf{X} = \{\mathbf{x}_n\}_{n=1}^N$. For images, $\mathbf{x}_n$ denotes the pixel values. For each data point $\mathbf{x}_n$ we have a corresponding latent variable $\mathbf{z}_n$, and $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$ is a flexible neural network density model parametrized by $\theta$. $p(\mathbf{z}_n)$ is a Gaussian mixture, where $\mathbf{y}_n$ comprises a 1-of-$K$ binary vector with elements $y_{nk}$ for $k = 1, \dots, K$. Here $K$ denotes the number of clusters. We denote the local latent variables by $\mathbf{Z} = \{\mathbf{z}_n\}_{n=1}^N$, $\mathbf{Y} = \{\mathbf{y}_n\}_{n=1}^N$. When real-valued observations are given, the generative process is as follows:

$$\mathbf{y}_n \sim \mathrm{Cat}(\pi), \qquad \mathbf{z}_n \mid y_{nk} = 1 \sim \mathcal{N}(\mu_k, \Sigma_k), \qquad \mathbf{x}_n \mid \mathbf{z}_n \sim \mathcal{N}\big(\mu_\theta(\mathbf{z}_n), \mathrm{diag}(\sigma^2_\theta(\mathbf{z}_n))\big),$$

where $\mu_\theta(\cdot)$ and $\sigma_\theta(\cdot)$ are two neural networks parameterized by $\theta$. For other types of observations $\mathbf{x}_n$, $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$ can be other distributions, e.g., a Bernoulli distribution for binary observations. In general, our model is a deep generative model with structured latent variables.
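To make the generative process concrete, the following NumPy sketch samples from a model of this form. It is illustrative only: the dimensions are hypothetical and a toy linear-tanh decoder stands in for the real neural networks $\mu_\theta$ and $\sigma_\theta$.

```python
# Toy sample from the structured generative model (hypothetical sizes / decoder).
import numpy as np

K, D_z, D_x, N = 5, 2, 10, 100           # clusters, latent dim, data dim, samples
pi = np.full(K, 1.0 / K)                 # mixing coefficients
mu = np.random.randn(K, D_z)             # per-cluster Gaussian means
Sigma = np.stack([np.eye(D_z)] * K)      # per-cluster covariances

W1 = np.random.randn(D_z, D_x)           # toy "decoder" weights (stand-in for mu_theta / sigma_theta)

def decoder(z):
    h = np.tanh(z @ W1)
    return h, 0.1 * np.ones(D_x)         # mean and std of p(x | z)

X, Y, Z = [], [], []
for _ in range(N):
    y = np.random.choice(K, p=pi)                        # cluster indicator y_n
    z = np.random.multivariate_normal(mu[y], Sigma[y])   # latent code z_n ~ N(mu_k, Sigma_k)
    m, s = decoder(z)
    x = m + s * np.random.randn(D_x)                     # observation x_n ~ N(mu_theta(z), sigma_theta(z)^2)
    X.append(x); Y.append(y); Z.append(z)
X = np.stack(X)
```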
We collect pairwise annotations provided by $M$ workers. A partially observed $\mathbf{W}^{(m)}$ is the annotation matrix of the $m$-th worker, where $N_o$ is the number of annotated data points. For an observation pair $(\mathbf{x}_i, \mathbf{x}_j)$, $W^{(m)}_{ij} = 1$ represents that the $m$-th worker provides a must-link (ML) constraint, which means observations $\mathbf{x}_i$ and $\mathbf{x}_j$ belong to the same cluster; $W^{(m)}_{ij} = 0$ represents a cannot-link (CL) constraint, which means observations $\mathbf{x}_i$ and $\mathbf{x}_j$ belong to different clusters; and NULL represents that $W^{(m)}_{ij}$ is not observed. It is obvious that $\mathbf{W}^{(m)}$ is symmetric, i.e., $W^{(m)}_{ij} = W^{(m)}_{ji}$. Self-edges are not allowed, i.e., $W^{(m)}_{ii} = \text{NULL}$.
Among all the data observations $\mathbf{X}$, we only crowdsource pairwise annotations for a small portion of $\mathbf{X}$, denoted by $\mathbf{X}_o$. Each worker only provides annotations for a small number of items in $\mathbf{X}_o$, and the annotation accuracies of non-expert workers may vary with the observations and their levels of expertise. We adopt the two-coin Dawid-Skene model for annotators from raykar2010learning and develop a probabilistic model by explicitly modeling the uncertainty of each worker. Specifically, the uncertainty of the $m$-th worker is characterized by accuracy parameters $(\alpha_m, \beta_m)$, where $\alpha_m = p(W^{(m)}_{ij} = 1 \mid \mathbf{y}_i^\top \mathbf{y}_j = 1)$ represents sensitivity, i.e., the probability of providing an ML constraint for a sample pair belonging to the same cluster, and $\beta_m = p(W^{(m)}_{ij} = 0 \mid \mathbf{y}_i^\top \mathbf{y}_j = 0)$ is the $m$-th worker's specificity, i.e., the probability of providing a CL constraint for a sample pair from different clusters. Let $\alpha = \{\alpha_m\}_{m=1}^M$ and $\beta = \{\beta_m\}_{m=1}^M$. The likelihood is defined as

$$p\big(W^{(m)}_{ij} \mid \mathbf{y}_i, \mathbf{y}_j, \alpha_m, \beta_m\big) =
\begin{cases}
\alpha_m^{W^{(m)}_{ij}} (1 - \alpha_m)^{1 - W^{(m)}_{ij}}, & \mathbf{y}_i^\top \mathbf{y}_j = 1, \\
(1 - \beta_m)^{W^{(m)}_{ij}} \beta_m^{1 - W^{(m)}_{ij}}, & \mathbf{y}_i^\top \mathbf{y}_j = 0,
\end{cases} \tag{1}$$

or equivalently, $p\big(W^{(m)}_{ij} \mid \mathbf{y}_i, \mathbf{y}_j, \alpha_m, \beta_m\big) = \big[\alpha_m^{W^{(m)}_{ij}} (1 - \alpha_m)^{1 - W^{(m)}_{ij}}\big]^{\delta_{ij}} \big[(1 - \beta_m)^{W^{(m)}_{ij}} \beta_m^{1 - W^{(m)}_{ij}}\big]^{1 - \delta_{ij}}$, where, to simplify the notation, we define $\delta_{ij} = \mathbf{y}_i^\top \mathbf{y}_j$. Using the symmetry of $\mathbf{W}^{(m)}$, the total likelihood of the annotations can be written as

$$p(\mathbf{W} \mid \mathbf{Y}, \alpha, \beta) = \prod_{m=1}^M \prod_{i < j:\, W^{(m)}_{ij} \neq \text{NULL}} p\big(W^{(m)}_{ij} \mid \mathbf{y}_i, \mathbf{y}_j, \alpha_m, \beta_m\big). \tag{2}$$
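The two-coin likelihood in eqs. (1)–(2) can be evaluated directly. The sketch below computes one worker's log-likelihood under the stated coding (1 for ML, 0 for CL); `same_cluster` is a hypothetical boolean array standing in for the indicator $\mathbf{y}_i^\top \mathbf{y}_j$.

```python
# Log-likelihood of one worker's annotations under the two-coin Dawid-Skene model.
import numpy as np

def annotation_loglik(w, same_cluster, alpha_m, beta_m):
    """w: observed annotations (1 = must-link, 0 = cannot-link)."""
    w = np.asarray(w, dtype=float)
    same = np.asarray(same_cluster, dtype=bool)
    ll_same = w * np.log(alpha_m) + (1 - w) * np.log(1 - alpha_m)   # pairs truly in the same cluster
    ll_diff = w * np.log(1 - beta_m) + (1 - w) * np.log(beta_m)     # pairs truly in different clusters
    return np.where(same, ll_same, ll_diff).sum()

# toy usage
print(annotation_loglik(w=[1, 0, 1], same_cluster=[True, False, False],
                        alpha_m=0.8, beta_m=0.7))
```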
As described above, the parameters in the semi-crowdsourced deep clustering model include $\pi$, $\{\mu_k, \Sigma_k\}_{k=1}^K$, $\alpha$, $\beta$, and the parameters $\theta$ of the neural networks. Let $\mathbf{W} = \{\mathbf{W}^{(m)}\}_{m=1}^M$ denote all annotations; the overall joint likelihood of the model is

$$p(\mathbf{X}, \mathbf{W}, \mathbf{Y}, \mathbf{Z}) = p(\mathbf{W} \mid \mathbf{Y}, \alpha, \beta) \prod_{n=1}^N p(\mathbf{y}_n \mid \pi)\, p(\mathbf{z}_n \mid \mathbf{y}_n, \mu, \Sigma)\, p_\theta(\mathbf{x}_n \mid \mathbf{z}_n). \tag{3}$$

For this model, the learning objective is to maximize the variational lower bound of the marginal log-likelihood of the entire dataset $(\mathbf{X}, \mathbf{W})$:

$$\log p(\mathbf{X}, \mathbf{W}) \geq \mathcal{L} = \mathbb{E}_{q(\mathbf{Y}, \mathbf{Z} \mid \mathbf{X})}\big[\log p(\mathbf{X}, \mathbf{W}, \mathbf{Y}, \mathbf{Z}) - \log q(\mathbf{Y}, \mathbf{Z} \mid \mathbf{X})\big]. \tag{4}$$
To deal with the non-conjugate likelihood $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$, we introduce inference networks for each of the latent variables $\mathbf{y}_n$ and $\mathbf{z}_n$. The inference networks are assumed to have a factorized form $q_\phi(\mathbf{y}_n, \mathbf{z}_n \mid \mathbf{x}_n) = q_\phi(\mathbf{y}_n \mid \mathbf{x}_n)\, q_\phi(\mathbf{z}_n \mid \mathbf{x}_n, \mathbf{y}_n)$, which are Categorical and Normal distributions, respectively:

$$q_\phi(\mathbf{y}_n \mid \mathbf{x}_n) = \mathrm{Cat}\big(\mathbf{y}_n \mid \pi_\phi(\mathbf{x}_n)\big), \qquad q_\phi(\mathbf{z}_n \mid \mathbf{x}_n, \mathbf{y}_n) = \mathcal{N}\big(\mathbf{z}_n \mid \mu_\phi(\mathbf{x}_n, \mathbf{y}_n), \mathrm{diag}(\sigma^2_\phi(\mathbf{x}_n, \mathbf{y}_n))\big),$$

where $\sigma_\phi(\cdot)$ is a vector of standard deviations and $\phi$ denotes the inference network parameters. Similar to the approach in kingma2014semi, we can analytically sum over the discrete variables in the lower bound and use the reparameterization trick to compute gradients w.r.t. $\theta$ and $\phi$.

The above objective sums over all data and annotations. For large datasets, we can conveniently use a stochastic version by approximating the lower bound with subsampled minibatches of data. Specifically, the variational lower bound is decomposed into two terms, $\mathcal{L} = \mathcal{L}_{\mathbf{X}} + \mathcal{L}_{\mathbf{W}}$, where $\mathcal{L}_{\mathbf{X}}$ collects the terms that depend only on individual data points and $\mathcal{L}_{\mathbf{W}}$ collects the annotation terms. It is easy to derive an unbiased stochastic approximation of $\mathcal{L}_{\mathbf{X}}$:

$$\mathcal{L}_{\mathbf{X}} \approx \frac{N}{|\mathcal{B}|} \sum_{n \in \mathcal{B}} \mathbb{E}_{q_\phi(\mathbf{y}_n, \mathbf{z}_n \mid \mathbf{x}_n)}\big[\log p(\mathbf{x}_n, \mathbf{y}_n, \mathbf{z}_n) - \log q_\phi(\mathbf{y}_n, \mathbf{z}_n \mid \mathbf{x}_n)\big],$$

where $\mathcal{B}$ is the sampled minibatch. For $\mathcal{L}_{\mathbf{W}}$, we can similarly randomly sample a minibatch $\mathcal{B}_a$ of annotations and rescale their contribution by $N_a / |\mathcal{B}_a|$, where $N_a$ denotes the total number of annotations.
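The minibatch rescaling is the only ingredient needed for unbiasedness. A minimal sketch follows, where `elbo_x_term` and `elbo_w_term` are hypothetical callables standing in for the per-datapoint and per-annotation ELBO contributions (e.g., Monte Carlo estimates).

```python
# Unbiased minibatch estimate of the two ELBO terms (hypothetical callables).

def stochastic_elbo(data_batch, ann_batch, N, N_a, elbo_x_term, elbo_w_term):
    L_X = (N / len(data_batch)) * sum(elbo_x_term(n) for n in data_batch)   # data term, rescaled by N / |B|
    L_W = (N_a / len(ann_batch)) * sum(elbo_w_term(a) for a in ann_batch)   # annotation term, rescaled by N_a / |B_a|
    return L_X + L_W

# toy usage with constant per-term contributions
print(stochastic_elbo([0, 1, 2], [0], N=100, N_a=50,
                      elbo_x_term=lambda n: -1.0, elbo_w_term=lambda a: -0.5))
```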
In the previous section, the global parameters are assumed to be deterministic and are directly optimized by gradient descent. In this section, we propose a fully Bayesian variant of our model (BayesSCDC), which has an automatic trade-off between its complexity and fitting the data. There is no overfitting if we choose a large number of components in the mixture, in which case the variational treatment below can automatically determine the optimal number of mixture components. We develop fast natural-gradient stochastic variational inference algorithms for BayesSCDC, which effectively combines variational message passing for the conjugate structures (i.e., the relational part and the mixture part) and amortized learning of deep components (i.e., the deep generative model).
For the mixture model, we choose a Dirichlet prior over the mixing coefficients $\pi$ and an independent Normal-Inverse-Wishart (NIW) prior governing the mean and covariance of each Gaussian component, given by

$$p(\pi) = \mathrm{Dir}(\pi \mid \alpha_0), \qquad p(\mu_k, \Sigma_k) = \mathrm{NIW}(\mu_k, \Sigma_k \mid m_0, \kappa_0, S_0, \nu_0), \quad k = 1, \dots, K, \tag{5}$$

where $m_0$ is the location parameter, $\kappa_0$ is the concentration, $S_0$ is the scale matrix (positive definite), and $\nu_0$ is the degrees of freedom. The densities of these priors can be written in the standard exponential-family form

$$p(\theta) = \exp\{\langle \eta, t(\theta) \rangle - A(\eta)\},$$

where $\eta$ denotes the natural parameters, $t(\theta)$ denotes the sufficient statistics (detailed expressions of each distribution can be found in Appendix A), and $A(\eta)$ denotes the log partition function.
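As an illustration of this exponential-family bookkeeping, the snippet below instantiates the natural parameters, log partition function, and expected sufficient statistics for the Dirichlet prior; the other conjugate factors follow the same pattern. The prior counts are hypothetical.

```python
# Exponential-family quantities for Dir(pi | alpha_0).
import numpy as np
from scipy.special import gammaln, digamma

alpha0 = np.ones(5)                                  # hypothetical prior counts

def dirichlet_natural_params(alpha):
    return alpha - 1.0                               # eta = alpha - 1

def dirichlet_log_partition(alpha):
    return gammaln(alpha).sum() - gammaln(alpha.sum())   # A(eta)

def dirichlet_expected_suff_stats(alpha):
    # E[t(pi)] = E[log pi] under Dir(alpha), used in the message-passing updates
    return digamma(alpha) - digamma(alpha.sum())

print(dirichlet_natural_params(alpha0), dirichlet_expected_suff_stats(alpha0))
```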
For the relational model, we assume the accuracy parameters of all workers ($\alpha_m$, $\beta_m$) are drawn independently from common priors. We choose conjugate Beta priors for them:

$$p(\alpha_m) = \mathrm{Beta}(\tau_{\alpha_1}, \tau_{\alpha_2}), \qquad p(\beta_m) = \mathrm{Beta}(\tau_{\beta_1}, \tau_{\beta_2}), \quad m = 1, \dots, M. \tag{6}$$

We write the exponential-family form of $p(\alpha_m)$ as $p(\alpha_m) = \exp\{\langle \eta_\alpha, t(\alpha_m) \rangle - A(\eta_\alpha)\}$ ($p(\beta_m)$ is similar), where $\eta_\alpha = [\tau_{\alpha_1} - 1, \tau_{\alpha_2} - 1]^\top$ and $t(\alpha_m) = [\log \alpha_m, \log(1 - \alpha_m)]^\top$.
The overall joint distribution of all of the hidden and observed variables takes the form:
$$p(\mathbf{X}, \mathbf{W}, \mathbf{Y}, \mathbf{Z}, \pi, \mu, \Sigma, \alpha, \beta) = p(\pi)\, p(\mu, \Sigma)\, p(\alpha)\, p(\beta)\, p(\mathbf{W} \mid \mathbf{Y}, \alpha, \beta) \prod_{n=1}^N p(\mathbf{y}_n \mid \pi)\, p(\mathbf{z}_n \mid \mathbf{y}_n, \mu, \Sigma)\, p_\theta(\mathbf{x}_n \mid \mathbf{z}_n). \tag{7}$$
Our learning objective is to maximize the marginal likelihood of the observed data $\mathbf{X}$ and pairwise annotations $\mathbf{W}$. Exact posterior inference for this model is intractable, thus we consider a mean-field variational family

$$q(\mathbf{Y}, \mathbf{Z}, \pi, \mu, \Sigma, \alpha, \beta) = q(\pi)\, q(\mu, \Sigma)\, q(\alpha)\, q(\beta)\, q(\mathbf{Y})\, q(\mathbf{Z}).$$

To simplify the notation, we write each variational distribution in its exponential-family form, e.g., $q(\pi) = \exp\{\langle \tilde{\eta}_\pi, t(\pi) \rangle - A(\tilde{\eta}_\pi)\}$. The evidence lower bound (ELBO) of $\log p(\mathbf{X}, \mathbf{W})$ is

$$\mathcal{L}(q) = \mathbb{E}_q\big[\log p(\mathbf{X}, \mathbf{W}, \mathbf{Y}, \mathbf{Z}, \pi, \mu, \Sigma, \alpha, \beta) - \log q(\mathbf{Y}, \mathbf{Z}, \pi, \mu, \Sigma, \alpha, \beta)\big]. \tag{8}$$
In traditional mean-field variational inference for conjugate models, the optimal solution of maximizing eq. (8) over each variational parameter can be derived analytically with the other parameters fixed, so coordinate ascent can be applied as an efficient message-passing algorithm winn2005variational ; hoffman2013stochastic . However, it is not directly applicable to our model due to the non-conjugate observation likelihood $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$. Inspired by johnson2016composing , we handle the non-conjugate likelihood by introducing recognition networks $r_\phi(\cdot)$. Different from SCDC in Section 2.3, the recognition networks here are used to form conjugate graphical model potentials:

$$\psi(\mathbf{z}_n; \mathbf{x}_n, \phi) = \langle r_\phi(\mathbf{x}_n), t(\mathbf{z}_n) \rangle. \tag{9}$$

By replacing the non-conjugate likelihood in the original ELBO with the conjugate term defined by $\psi(\mathbf{z}_n; \mathbf{x}_n, \phi)$, we have the following surrogate objective $\hat{\mathcal{L}}$:

$$\hat{\mathcal{L}}(q, \phi) = \mathbb{E}_q\Big[\log p(\mathbf{W}, \mathbf{Y}, \mathbf{Z}, \pi, \mu, \Sigma, \alpha, \beta) + \sum_{n=1}^N \psi(\mathbf{z}_n; \mathbf{x}_n, \phi) - \log q\Big]. \tag{10}$$
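A minimal sketch of the recognition-potential idea follows, assuming diagonal Gaussian potentials on $\mathbf{z}_n$ and a toy linear map in place of the real recognition network $r_\phi$; dimensions are hypothetical.

```python
# Toy recognition network producing conjugate (diagonal Gaussian) potentials on z_n.
import numpy as np

D_x, D_z = 10, 2
W_r = np.random.randn(D_x, 2 * D_z)       # toy recognition-network weights

def recognition_potential(x):
    out = x @ W_r
    # natural parameters of a diagonal Gaussian potential on z:
    # eta1 = precision * mean, eta2 = -1/2 * precision (elementwise)
    eta1 = out[:D_z]
    eta2 = -np.exp(out[D_z:])              # keep the precision term negative
    return eta1, eta2

eta1, eta2 = recognition_potential(np.random.randn(D_x))
```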
As we shall see, the surrogate objective helps us exploit the conjugate structure in the model, thus enables a fast message-passing algorithm for these parts. Specifically, we can view eq. 10 as the ELBO of a conjugate graphical model with the same structure as in Fig. 1 (up to a constant). Similar to coordinate-ascent mean-field variational inference hoffman2013stochastic , we can derive the local partial optimizers of individual variational parameters as below.
The optimal solution for $q(\mathbf{Z})$ factorizes over $n$, i.e., $q(\mathbf{Z}) = \prod_{n=1}^N q(\mathbf{z}_n)$, and depends on the expected sufficient statistics of $q(\mu, \Sigma)$ and $q(\mathbf{y}_n)$:

$$q^*(\mathbf{z}_n) \propto \exp\big\{\langle \eta^*_{\mathbf{z}_n}, t(\mathbf{z}_n) \rangle\big\}, \tag{11}$$
$$\eta^*_{\mathbf{z}_n} = \mathbb{E}_{q(\mathbf{y}_n)\, q(\mu, \Sigma)}\big[\eta_{\mathbf{z}}(\mathbf{y}_n, \mu, \Sigma)\big] + r_\phi(\mathbf{x}_n), \tag{12}$$

where $\eta_{\mathbf{z}}(\mathbf{y}_n, \mu, \Sigma)$ denotes the natural parameters of the Gaussian $p(\mathbf{z}_n \mid \mathbf{y}_n, \mu, \Sigma)$. By further assuming a mean-field structure over $q(\mathbf{Y})$: $q(\mathbf{Y}) = \prod_{n=1}^N q(\mathbf{y}_n)$, we have the local partial optimizer for each single $q(\mathbf{y}_n)$ as

$$q^*(\mathbf{y}_n) \propto \exp\big\{\langle \eta^*_{\mathbf{y}_n}, \mathbf{y}_n \rangle\big\}, \tag{13}$$
$$[\eta^*_{\mathbf{y}_n}]_k = \mathbb{E}_{q(\pi)}[\log \pi_k] + \mathbb{E}_{q(\mathbf{z}_n)\, q(\mu_k, \Sigma_k)}\big[\log \mathcal{N}(\mathbf{z}_n \mid \mu_k, \Sigma_k)\big] + \sum_{m, j:\, W^{(m)}_{nj} \neq \text{NULL}} c^{(m)}_{nj}\, \mathbb{E}_{q(\mathbf{y}_j)}[y_{jk}], \tag{14}$$

where $c^{(m)}_{nj}$ is the weight of the message from $\mathbf{y}_j$ to $\mathbf{y}_n$. Using a block coordinate ascent algorithm that applies eqs. (14) and (12) alternately, we can find the joint local partial optimizers of $q(\mathbf{Y})$ and $q(\mathbf{Z})$ w.r.t. their local variational parameters, given the other (global) parameters fixed, i.e.,
(15)
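For intuition, here is a deliberately simplified, single-data-point sketch of the alternating local updates in the spirit of eqs. (11)–(14). It assumes diagonal precisions, uses point values in place of the expected sufficient statistics of the global factors, and omits the relational messages of eq. (14).

```python
# Toy block coordinate ascent between q(y_n) and q(z_n) for one data point.
import numpy as np

def local_block_coordinate_ascent(eta1, eta2, mu_k, prec_k, log_pi, n_iters=10):
    """eta1, eta2: recognition potentials for z_n (diagonal Gaussian).
    mu_k, prec_k: per-cluster means / diagonal precisions; log_pi: log mixing weights."""
    K, D = mu_k.shape
    resp = np.full(K, 1.0 / K)                         # q(y_n), initialized uniformly
    for _ in range(n_iters):
        # update q(z_n): add the recognition potential to the responsibility-
        # weighted Gaussian prior (natural-parameter addition)
        prior_prec = resp @ prec_k
        prior_mean_times_prec = resp @ (prec_k * mu_k)
        post_prec = prior_prec - 2.0 * eta2
        post_mean = (prior_mean_times_prec + eta1) / post_prec
        # update q(y_n): responsibilities from the expected log-joint under q(z_n)
        Ezz = post_mean ** 2 + 1.0 / post_prec
        logits = (log_pi + (mu_k * prec_k) @ post_mean
                  - 0.5 * (prec_k @ Ezz) - 0.5 * (prec_k * mu_k ** 2).sum(1)
                  + 0.5 * np.log(prec_k).sum(1))
        resp = np.exp(logits - logits.max())
        resp /= resp.sum()
    return resp, post_mean, post_prec

# toy usage
K, D = 3, 2
resp, m, p = local_block_coordinate_ascent(
    eta1=np.zeros(D), eta2=-0.5 * np.ones(D),
    mu_k=np.random.randn(K, D), prec_k=np.ones((K, D)),
    log_pi=np.log(np.full(K, 1.0 / K)))
```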
Plugging the local partial optimizers $q^*(\mathbf{Y})$ and $q^*(\mathbf{Z})$ back in, we define the final objective

(16)
As shown in johnson2016composing , this final objective lower-bounds the partially optimized mean-field objective and thus can serve as a variational objective itself. We compute its natural gradients w.r.t. the global variational parameters:

(17)
Note that the first term in eq. (17) is the same as the natural-gradient formula in SVI hoffman2013stochastic , which is easy to compute, while the second term originates from the dependence of the locally optimized factors on the global variational parameters and can be computed using the reparameterization trick. For the remaining parameters ($\theta$, $\phi$), we can also obtain the gradients using the reparameterization trick.
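For intuition, here is a sketch of one stochastic natural-gradient step for a single conjugate global factor, the Dirichlet over mixing weights, following the SVI recipe of hoffman2013stochastic. It is written in standard Dirichlet parameters (which differ from the natural parameters only by a constant); the other conjugate factors are updated analogously.

```python
# One stochastic natural-gradient step for the Dirichlet over mixing weights.
import numpy as np

def nat_grad_step_dirichlet(dir_posterior, dir_prior, resp_batch, N, step_size):
    """dir_posterior, dir_prior: Dirichlet parameters (K,); resp_batch: q(y_n) for a minibatch."""
    batch_size = resp_batch.shape[0]
    # expected sufficient statistics from the minibatch, rescaled to the full dataset
    suff_stats = (N / batch_size) * resp_batch.sum(axis=0)
    # the natural gradient points toward (prior + rescaled stats); a step of size
    # rho moves the posterior parameters toward that target
    target = dir_prior + suff_stats
    return (1.0 - step_size) * dir_posterior + step_size * target

# toy usage
post = nat_grad_step_dirichlet(np.ones(5), 0.1 * np.ones(5),
                               np.random.dirichlet(np.ones(5), size=32),
                               N=500, step_size=0.1)
```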
Computing the full natural gradient in eq. (17) requires scanning over all data and annotations, which is time-consuming. Similar to Section 2.3, we can approximate the variational lower bound with unbiased estimates using mini-batches of data and annotations, thus obtaining a stochastic natural gradient. Several sampling strategies have been developed for relational models gopalan2012scalable to keep the stochastic gradient unbiased. Here we choose the simplest one: we sample annotated data pairs uniformly from the annotations to form a subsample of the relational model, do local message passing (eqs. (14) and (12)), and then perform the global update using the stochastic natural gradient calculated on the subsample. Besides, for all the unannotated data, we also subsample mini-batches and perform local and global steps without the relational terms. The algorithm of BayesSCDC is shown in Algorithm 1; a sketch of the pair subsampling appears below.

BayesSCDC differs from SCDC in two aspects: (a) fully Bayesian treatment of global parameters; (b) the variational inference algorithm. As we shall see in the experiments, the result of (a) is that BayesSCDC can automatically determine the number of mixture components during training. As for (b), note that the variational family used in SCDC is not more flexible but more restricted compared to BayesSCDC. In BayesSCDC, the mean-field assumption does not imply that $q(\mathbf{y}_n)$ and $q(\mathbf{z}_n)$ are independent; instead, they implicitly influence each other through the message passing in eqs. (12) and (14). More importantly, in BayesSCDC the variational posterior over cluster assignments gathers information from $\mathbf{W}$ through message passing in the relational model. In contrast, the amortized form used in SCDC ignores the effect of the observed annotations $\mathbf{W}$. Another advantage of the inference algorithm in BayesSCDC is its computational cost. As shown in Algorithm 1, the number of passes through the $\mathbf{z}$-to-$\mathbf{x}$ network is no longer linear in $K$, because we get rid of summing over $\mathbf{y}_n$ in the observation term as in Section 2.3.
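The following sketch shows the simple uniform subsampling of annotated pairs described above; the $N_a / |\mathcal{B}_a|$ factor keeps the resulting stochastic gradient unbiased. The tuple layout of `annotations` is a hypothetical choice.

```python
# Uniform subsampling of annotated pairs for the stochastic natural-gradient update.
import numpy as np

def sample_annotation_minibatch(annotations, batch_size, rng=np.random):
    """annotations: list of (worker m, item i, item j, label w) tuples."""
    idx = rng.choice(len(annotations), size=batch_size, replace=False)
    scale = len(annotations) / batch_size          # N_a / |B_a| rescaling factor
    return [annotations[k] for k in idx], scale

# toy usage
ann = [(0, 1, 2, 1), (0, 3, 4, 0), (1, 1, 4, 1), (1, 2, 3, 0)]
batch, scale = sample_annotation_minibatch(ann, batch_size=2)
```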
Most previous works on learning from crowds are about aggregating noisy crowdsourced labels from several predefined classes dawid1979maximum ; raykar2010learning ; welinder2010multidimensional ; zhouaggregating ; Tian2015Max . A common approach is to simultaneously estimate the workers' behavior models and the ground truths. Different from this line of work, crowdclustering gomes2011crowdclustering collects pairwise labels, including must-links and cannot-links, from the crowds, and then discovers the items' affiliations as well as the category structure from these noisy labels, so it can be applied to a broader range of applications than the classification methods. Recent work Vinayak2016Crowdsourced also developed a crowdclustering algorithm based on triplet annotations.
One shortcoming of crowdclustering is that it can only cluster objects for which manual annotations are available. For large-scale problems, it is not feasible to have each object manually annotated by multiple workers. Similar problems have been extensively discussed in the semi-supervised clustering area, where we are given the features of all items and constraints on only a small portion of them. Metric learning methods, including Information-Theoretic Metric Learning (ITML) davis2007information and Metric Pairwise Constrained KMeans (MPCKMeans) bilenko2004integrating , have been used for this problem: they first learn a similarity metric between items, mainly based on the supervised portion of the data, and then cluster the remaining items using this metric. Semi-crowdsourced clustering (SemiCrowd) yi2012semi combines the ideas of crowdclustering and semi-supervised clustering; it aims to learn a pairwise similarity measure from the crowdsourced labels of a subset of objects and the features of all objects. Unlike crowdclustering, the number of clusters in SemiCrowd is assumed to be given a priori, and it does not estimate the behavior of different workers. Multiple Clustering Views from the Crowd (MCVC) chang2017multiple extends the idea to discover several different clustering results from the noisy labels provided by uncertain experts. A common shortcoming of these semi-crowdsourced clustering methods is that they cannot make good use of unlabeled items when measuring the similarities; our model is a step in this direction.
As shown in Section 2.1, our model is a deep generative model (DGM) with relational latent structure. DGMs are a kind of probabilistic graphical model that uses neural networks to parameterize the conditional distributions between random variables. Unlike traditional probabilistic models, DGMs can directly model high-dimensional outputs with complex structures, which enables end-to-end training on real data. They have shown success in image generation kingma2013auto ; kingma2014semi and one-shot classification rezende2016one . Typical inference algorithms for DGMs are in the amortized form, like that in Section 2.3. However, this approach cannot leverage the conjugate structure in the latent variables, so few works have addressed fully Bayesian treatment of global parameters in DGMs; johnson2016composing ; lin2018variational are two exceptions. In johnson2016composing the authors propose using recognition networks to produce conjugate graphical model potentials, so that traditional variational message passing algorithms and natural-gradient updates can be easily combined with amortized learning of the network parameters. Our work extends their algorithm to relational observations, which has not been investigated before.

In this section, we demonstrate the effectiveness of the proposed methods on synthetic and real-world datasets with simulated or crowdsourced noisy annotations. Code is available at https://github.com/xinmei9322/semicrowd. Part of the implementation is based on ZhuSuan (zhusuan2017, ).
Simulating noisy annotations from workers. Suppose we have $M$ workers with accuracy parameters $\{(\alpha_m, \beta_m)\}_{m=1}^M$. We randomly sample pairs of items $\mathbf{x}_i$ and $\mathbf{x}_j$ and generate the annotation provided by worker $m$ based on the true clustering labels of $\mathbf{x}_i$ and $\mathbf{x}_j$ as well as the worker's accuracy parameters $(\alpha_m, \beta_m)$. If $\mathbf{x}_i$ and $\mathbf{x}_j$ belong to the same cluster, the worker has probability $\alpha_m$ of providing the ML constraint $W^{(m)}_{ij} = 1$. If not, the worker has probability $\beta_m$ of providing the CL constraint $W^{(m)}_{ij} = 0$.
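A sketch of this simulation procedure, assuming annotations are coded as 1 (ML) and 0 (CL) as in eq. (1); the tuple layout of the output is a hypothetical choice.

```python
# Simulate noisy pairwise annotations from workers with given accuracies.
import numpy as np

def simulate_annotations(labels, workers, n_pairs_per_worker, rng=np.random):
    """labels: true cluster labels; workers: list of (alpha_m, beta_m) tuples."""
    annotations = []                                   # (worker, i, j, response)
    n = len(labels)
    for m, (alpha_m, beta_m) in enumerate(workers):
        for _ in range(n_pairs_per_worker):
            i, j = rng.choice(n, size=2, replace=False)
            if labels[i] == labels[j]:                 # true must-link pair
                w = int(rng.rand() < alpha_m)
            else:                                      # true cannot-link pair
                w = int(rng.rand() >= beta_m)          # mistaken ML with prob. 1 - beta_m
            annotations.append((m, i, j, w))
    return annotations

# toy usage: 5 clusters of 100 points, 20 workers, 49 pairs each
ann = simulate_annotations(np.repeat(np.arange(5), 100),
                           workers=[(0.9, 0.9)] * 20, n_pairs_per_worker=49)
```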
Evaluation metrics. The clustering performance is evaluated by the commonly used normalized mutual information (NMI) score strehl2002cluster , which measures the similarity between two partitions. Following recent work xie2016unsupervised , we also report the unsupervised clustering accuracy, which requires computing the best mapping between predicted and true cluster labels; this can be done efficiently with the Hungarian algorithm.
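For reference, the unsupervised clustering accuracy can be computed with SciPy's Hungarian-algorithm implementation; a self-contained sketch:

```python
# Unsupervised clustering accuracy via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    K = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((K, K), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                                # co-occurrence counts
    row, col = linear_sum_assignment(-cost)            # maximize matched counts
    return cost[row, col].sum() / len(y_true)

print(clustering_accuracy([0, 0, 1, 1, 2], [1, 1, 0, 0, 0]))   # 0.8
```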
[Figure 2: Clustering results on the pinwheel dataset.]
First we apply our method to a toy example, the pinwheel dataset in Fig. 2(a), following johnson2016composing ; lin2018variational . It has 5 clusters and each cluster has 100 data points, so there are 500 data points in total. We compare with unsupervised clustering to understand the benefit of noisy annotations. The clustering results are shown in Fig. 2. We randomly sample 100 data points for annotation and simulate 20 workers; each worker gives 49 pairs of annotations, 980 in total. We set equal accuracy parameters for all workers.

We use the fully Bayesian model (BayesSCDC) described in Section 3. The initial number of clusters is set to a larger value, since the hyper-priors are intrinsically sparsity-inducing and the model can learn the number of clusters automatically. Unsupervised clustering is sensitive to initialization: with a good initialization it achieves 95.6% accuracy and an NMI score of 0.91, as shown in Fig. 2(b), and after training it converges to the correct number of clusters. However, with a bad initialization, the accuracy and NMI score of unsupervised clustering drop to 75.6% and 0.806, respectively, as shown in Fig. 2(c). With noisy annotations on the 100 randomly sampled data points, our model improves the accuracy to 96.6% and the NMI score to 0.94, and it converges to the correct number of clusters. Our model avoids the bad result in Fig. 2(c) by making use of the annotations.
In this subsection, we compare the proposed SCDC with the competing methods on the UCI benchmarks. The baselines include MCVC chang2017multiple , SemiCrowd yi2012semi , semi-supervised clustering methods such as ITML davis2007information , MPCKMeans bilenko2004integrating and Cluster-based Similarity Partitioning Algorithm (CSPA) strehl2002cluster .
Crowdsourced annotations are not available for the UCI datasets. Following the experimental protocol in MCVC chang2017multiple , we generate noisy annotations from simulated workers with different sensitivities and specificities, which is more challenging than equal accuracy parameters. The number of annotations provided by each worker varies, and the number of ML constraints equals the number of CL constraints.
We test on the Face dataset Dua:2017 , containing 640 face images of 20 people with different poses (straight, left, right, up). The ground-truth clustering is based on the poses. Each original image has 960 pixels. To speed up training, the baseline methods apply Principal Component Analysis (PCA) and keep 20 components. For a fair comparison, we test the proposed SCDC on the features after PCA. Fig. 3 plots the mean and standard deviation of NMI scores over 10 different runs for each fixed number of constraints. In Fig. 3(a), the annotations are randomly generated on the whole dataset. We observe that our method consistently outperforms all competing methods, demonstrating that the clustering benefits from the joint generative modeling of inputs and annotations.
[Figure 3: NMI scores on the Face dataset: (a) annotations on the whole dataset, (b) annotations on a 100-image subset, (c) estimated worker weights.]
Annotations on a subset. To illustrate the benefits of our method when only a small part of the data points are annotated, we simulate noisy annotations on only 100 images. Fig. 3(b) shows the results with 100 annotated images. Our method exploits more structural information in the unlabeled data and shows notable improvements over all competing methods.
Recover worker behaviors. For each worker $m$, our model estimates the individual accuracies $\alpha_m$ and $\beta_m$. We can derive from eq. (2) that the annotations of each worker are weighted according to these estimated accuracies, which means that workers with higher accuracies are more reliable and are weighted higher. We plot the weights of 5 workers in the Face experiments in Fig. 3(c).
As mentioned earlier, an important feature of DGMs is that they can directly model raw data, such as images. To verify this, we experiment with the MNIST dataset of digit images, which includes 60k training images of the handwritten digits 0-9. We collect crowdsourced annotations from web workers and get 3276 annotations in total. The two variants of our model (SCDC, BayesSCDC) are tested with and without annotations. For BayesSCDC, a non-informative prior is placed over the worker accuracy parameters. For a fair comparison, we also randomly sample the initial accuracy parameters from this prior for SCDC. We average the results of 5 runs. In each run we randomly initialize the model 10 times and pick the best result. All models are trained for 200 epochs with a minibatch size of 128 for each random initialization.
The results are shown in Table 1. We can see that both models effectively combine the information from the raw data and the annotations, i.e., they work reasonably well with only unlabeled data, and better when given noisy annotations on a subset of the data. In terms of clustering accuracy and NMI, BayesSCDC outperforms SCDC. We believe this is because the variational message passing algorithm used in BayesSCDC can effectively gather information from the crowdsourced annotations to form better variational approximations, as explained in Section 3.2. Besides being more accurate, BayesSCDC is much faster because the computational cost of the neural networks does not scale linearly with the number of clusters (50 in this case). In Fig. 4(a) we show that BayesSCDC is more flexible and automatically determines the number of mixture components during training.

| Method | Accuracy (w/o annotations) | NMI (w/o annotations) | Time (w/o annotations) | Accuracy (with annotations) | NMI (with annotations) | Time (with annotations) |
|---|---|---|---|---|---|---|
| SCDC | 65.92 ± 3.47% | 0.6953 ± 0.0167 | 177.3s | 81.87 ± 3.86% | 0.7657 ± 0.0233 | 201.7s |
| BayesSCDC | 77.64 ± 3.97% | 0.7944 ± 0.0178 | 11.2s | 84.24 ± 5.52% | 0.8120 ± 0.0210 | 16.4s |
We also conduct experiments with real crowdsourced labels on more complex natural images, i.e., CIFAR-10. Using the same crowdsourcing scheme, we collect 8640 noisy annotations from 32 web workers on a subset of 4000 randomly sampled images. We apply SCDC with and without annotations for 5 runs with random initializations. SCDC without annotations fails, with an NMI score of 0.0424 ± 0.0119 and an accuracy of 14.23 ± 0.69% over the 5 runs, while the NMI score achieved by SCDC with noisy annotations is 0.5549 ± 0.0028 and the accuracy is 50.09 ± 0.08%. The clustering results on the test set are shown in Fig. 4(b), where we plot the 10 test samples with the largest probability for each cluster. More experimental details and discussion can be found in the supplementary material.
In this paper, we proposed a semi-crowdsourced clustering model based on deep generative models and its fully Bayesian version. We developed fast (natural-gradient) stochastic variational inference algorithms for them. The resulting method can jointly model the crowdsourced labels, worker behaviors, and the (un)annotated items. Experiments have demonstrated that the proposed method outperforms previous competing methods on standard benchmark datasets. Our work also provides general guidelines on how to incorporate DGMs to statistical relational models, where the proposed inference algorithm can be applied under a broader context.
Yucen Luo would like to thank Matthew Johnson for helpful discussions on the SVAE algorithm (johnson2016composing, ), and Yale Chang for sharing the code of the UCI benchmark experiments. We thank the anonymous reviewers for feedback that greatly improved the paper. This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, 61621136008, 61332007), Beijing NSF Project (No. L172037), Tiangong Institute for Intelligent Computing, NVIDIA NVAIL Program, and the projects from Siemens, NEC and Intel.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.

J. Xie, R. Girshick, and A. Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pages 478–487, 2016.

Note the natural parameters and sufficient statistics of each prior, which is where the conjugacy comes from. In practice, we parameterize the unnormalized version of the natural parameters since it is unconstrained. The update rule for it is the same as for the normalized version due to their constant difference.
where $\psi(\cdot)$ is the digamma function.
where $\Gamma(\cdot)$ is the Gamma function.
The local update for :
(18)
The local update for :
where .
The final objective:
(19)
The annotation likelihood term:
(20)
The local KL divergence term: