Semi-crowdsourced Clustering with Deep Generative Models

10/29/2018
by   Yucen Luo, et al.
Tsinghua University

We consider the semi-supervised clustering problem where crowdsourcing provides noisy information about pairwise comparisons on a small subset of data, i.e., whether a sample pair is in the same cluster. We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset. The two parts share the latent variables. To let the model automatically trade off between its complexity and fitting the data, we also develop its fully Bayesian variant. The challenge of inference is addressed by fast (natural-gradient) stochastic variational inference algorithms, where we effectively combine variational message passing for the relational part and amortized learning of the DGM under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.


1 Introduction

Clustering is a classic data analysis problem that arises when the taxonomy of data is unknown in advance. Its main goal is to divide samples into disjoint clusters based on the similarity between them. Clustering is useful in various application areas including computer vision shi2000normalized , bioinformatics wiwie2015comparing , anomaly detection chandola2009anomaly , etc. When the feature vectors of samples are observed, most clustering algorithms require a similarity or distance metric defined in the feature space so that an optimization objective can be built. Since different metrics may result in entirely different clustering results, and generic geometric metrics may not match the intention of the task's designer, many clustering approaches learn the metric from side-information provided by domain experts xing2003distance . The manual labeling procedure of experts can therefore become a bottleneck of the learning pipeline.

Crowdsourcing is an efficient way to collect human feedback howe2006rise . It distributes micro-tasks to a group of ordinary web workers in parallel, so the whole task can be finished quickly at relatively low cost. It has been used to annotate large-scale machine learning datasets such as ImageNet deng2009imagenet , and can also be used to collect side-information for clustering. However, directly collecting labels from crowds may lead to low-quality results due to the workers' lack of expertise. Consider the example of labeling a set of images of flowers from different species. One could show images to the web workers and ask them to identify the corresponding species, but such a task requires the workers to be experts in identifying flowers and to have all the species in mind, which is rarely the case. A more reasonable and easier task is to ask the workers to compare pairs of flower images and to answer whether they belong to the same species. Specific clustering methods are then required to discover the clusters from the noisy feedback.

To solve the above clustering problem with pairwise similarity labels from the crowd, crowdclustering gomes2011crowdclustering discovers the clusters within the dataset using a Bayesian hierarchical model. By explicitly modeling the mistakes and preferences of web workers, its outputs match human intuition about the clustering task. This method greatly reduces the labeling cost compared with expert labeling. However, the cost still grows quadratically with the dataset size, so it remains suitable only for small datasets. In this work, we move one step further and consider the semi-supervised crowdclustering problem that jointly models the feature vectors of all samples and the crowdsourced pairwise labels of only a subset of samples. By controlling the size of the subset to be labeled by the crowd, the total labeling budget and time can be controlled. A similar problem has been discussed by yi2012semi , but the authors use a linear similarity function defined on the low-level object features and ignore the noise and inter-worker variation in the manual annotations.

Different from existing approaches, we propose a semi-supervised deep Bayesian model that jointly models the generation of the labels and of the raw features for both crowd-labeled and unlabeled samples. Instead of directly using low-level features, we build a flexible deep generative model (DGM) to capture the latent representation of the data, which expresses semantic similarity better than the low-level features. The crowdsourced pairwise labels are modeled by a statistical relational model, and the two parts (i.e., the DGM and the relational model) share the same latent variables. We also investigate the fully Bayesian variant of this model so that it can automatically control its complexity. Because exact inference is intractable, we develop fast (natural-gradient) stochastic variational inference algorithms. To address the challenges of fully Bayesian inference over model parameters, we effectively combine variational message passing and natural-gradient updates for the conjugate part (i.e., the relational model and the mixture model) with amortized learning of the nonconjugate part (i.e., the DGM) under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.

2 Semi-crowdsourced deep clustering


Figure 1: Semi-crowdsourced Deep Clustering (SCDC).

In this section, we propose a semi-crowdsourced clustering model with deep generative models that models the raw data directly, which enables end-to-end training. We call the model Semi-crowdsourced Deep Clustering (SCDC); its graphical model is shown in Figure 1. The model is composed of two parts: the raw data model handles the generative process of the observations, and the crowdsourcing behavior model describes how the workers produce the noisy pairwise labels. The details of each part are introduced below.

2.1 Model the raw data – deep generative models

We denote the raw data observations by $\mathbf{X} = \{\mathbf{x}_n\}_{n=1}^N$. For images, $\mathbf{x}_n$ denotes the pixel values. For each data point we have a corresponding latent variable $\mathbf{z}_n$, and $p_\theta(\mathbf{x} \mid \mathbf{z})$ is a flexible neural network density model parametrized by $\theta$. $p(\mathbf{z} \mid \mathbf{y})$ is a Gaussian mixture, where $\mathbf{y}_n$ comprises a 1-of-$K$ binary vector with elements $y_{nk}$ for $k = 1, \dots, K$. Here $K$ denotes the number of clusters. We denote the local latent variables by $\mathbf{Y} = \{\mathbf{y}_n\}_{n=1}^N$ and $\mathbf{Z} = \{\mathbf{z}_n\}_{n=1}^N$. When real-valued observations are given, the generative process is as follows:

$$p(\mathbf{y}_n) = \mathrm{Cat}(\mathbf{y}_n \mid \boldsymbol{\pi}), \quad p(\mathbf{z}_n \mid \mathbf{y}_n) = \prod_{k=1}^K \mathcal{N}(\mathbf{z}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)^{y_{nk}}, \quad p_\theta(\mathbf{x}_n \mid \mathbf{z}_n) = \mathcal{N}\big(\mathbf{x}_n \mid \mu_\theta(\mathbf{z}_n), \mathrm{diag}(\sigma^2_\theta(\mathbf{z}_n))\big),$$

where $\mu_\theta(\cdot)$ and $\sigma_\theta(\cdot)$ are two neural networks parameterized by $\theta$. For other types of observations, $p_\theta(\mathbf{x} \mid \mathbf{z})$ can be another distribution, e.g., a Bernoulli distribution for binary observations. In general, our model is a deep generative model with structured latent variables.
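To make the generative process concrete, here is a minimal NumPy sketch of ancestral sampling from such a mixture-of-Gaussians DGM. The toy affine-plus-tanh "decoder" stands in for the neural networks $\mu_\theta$ and $\sigma_\theta$ and is purely illustrative, not the architecture used in the paper.

```python
import numpy as np

def sample_scdc_generative(N, K=5, d=2, x_dim=10, seed=0):
    """Ancestral sampling: y_n ~ Cat(pi); z_n | y_n=k ~ N(mu_k, I);
    x_n | z_n ~ N(decoder_mean(z_n), 0.1^2 I).  The affine+tanh decoder is a
    toy stand-in for the neural networks mu_theta, sigma_theta."""
    rng = np.random.RandomState(seed)
    pi = np.ones(K) / K                            # mixing coefficients
    mu = rng.randn(K, d) * 3.0                     # Gaussian component means
    W, b = rng.randn(d, x_dim), rng.randn(x_dim)   # toy "decoder" weights (illustrative)
    y = rng.choice(K, size=N, p=pi)                # cluster assignments (1-of-K, stored as ids)
    z = mu[y] + rng.randn(N, d)                    # z_n ~ N(mu_{y_n}, I)
    x = np.tanh(z @ W + b) + 0.1 * rng.randn(N, x_dim)
    return x, z, y

x, z, y = sample_scdc_generative(N=500)            # 500 points drawn from 5 clusters
```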

2.2 Model the behavior of each worker – two-coin Dawid-Skene model

We collect pairwise annotations provided by $M$ workers. A partially observed matrix $\mathbf{L}^{(m)}$ is the annotation matrix of the $m$-th worker, defined over the $N_a$ annotated data points. For an observation pair $(i, j)$, $L^{(m)}_{ij} = 1$ represents that the $m$-th worker provides a must-link (ML) constraint, which means observations $i$ and $j$ belong to the same cluster; $L^{(m)}_{ij} = 0$ represents a cannot-link (CL) constraint, which means observations $i$ and $j$ belong to different clusters; and NULL represents that $L^{(m)}_{ij}$ is not observed. Clearly $\mathbf{L}^{(m)}$ is symmetric, i.e., $L^{(m)}_{ij} = L^{(m)}_{ji}$. Self-edges are not allowed, i.e., $L^{(m)}_{ii} = \text{NULL}$.

Among all the data observations $\mathbf{X}$, we only crowdsource pairwise annotations for a small portion of them. Each worker only provides annotations for a small number of pairs, and the annotation accuracies of non-expert workers may vary with the observations and their levels of expertise. We adopt the two-coin Dawid-Skene model for annotators from raykar2010learning and develop a probabilistic model that explicitly represents the uncertainty of each worker. Specifically, the uncertainty of the $m$-th worker is characterized by accuracy parameters $(\alpha_m, \beta_m)$, where the sensitivity $\alpha_m$ is the probability of providing an ML constraint for a sample pair belonging to the same cluster, and the specificity $\beta_m$ is the probability of providing a CL constraint for a sample pair from different clusters. Let $\boldsymbol{\alpha} = \{\alpha_m\}_{m=1}^M$ and $\boldsymbol{\beta} = \{\beta_m\}_{m=1}^M$. The likelihood is defined as

$$p(L^{(m)}_{ij} \mid \mathbf{y}_i, \mathbf{y}_j, \alpha_m, \beta_m) = \Big[\alpha_m^{L^{(m)}_{ij}} (1 - \alpha_m)^{1 - L^{(m)}_{ij}}\Big]^{\mathbf{y}_i^\top \mathbf{y}_j} \Big[(1 - \beta_m)^{L^{(m)}_{ij}} \beta_m^{1 - L^{(m)}_{ij}}\Big]^{1 - \mathbf{y}_i^\top \mathbf{y}_j}. \qquad (1)$$

Using the symmetry of $\mathbf{L}^{(m)}$ and collecting the observed pairs from all workers, the total likelihood of the annotations can be written as

$$p(\mathbf{L} \mid \mathbf{Y}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \prod_{m=1}^M \prod_{\substack{i < j \\ L^{(m)}_{ij} \neq \text{NULL}}} p(L^{(m)}_{ij} \mid \mathbf{y}_i, \mathbf{y}_j, \alpha_m, \beta_m). \qquad (2)$$
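As an illustration of the two-coin likelihood described above, the following sketch evaluates one worker's log-likelihood contribution. The function name and data layout (a dense float matrix with NaN marking unobserved pairs) are assumptions made for this example, not the released implementation.

```python
import numpy as np

def pairwise_log_likelihood(L_m, y, alpha_m, beta_m):
    """Log-likelihood of one worker's annotations under the two-coin model.
    L_m: float matrix with entries 1.0 (must-link), 0.0 (cannot-link), np.nan (unobserved).
    y: integer cluster ids; alpha_m: sensitivity; beta_m: specificity."""
    logp = 0.0
    n = L_m.shape[0]
    for i in range(n):
        for j in range(i + 1, n):            # use symmetry: count each pair once
            l = L_m[i, j]
            if np.isnan(l):
                continue
            if y[i] == y[j]:                 # same cluster: governed by sensitivity
                p = alpha_m if l == 1 else 1.0 - alpha_m
            else:                            # different clusters: governed by specificity
                p = 1.0 - beta_m if l == 1 else beta_m
            logp += np.log(p)
    return logp
```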

2.3 Amortized variational inference

As described above, the parameters of the semi-crowdsourced deep clustering model include the mixture parameters $\boldsymbol{\pi}$, $\{\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\}_{k=1}^K$, the worker accuracies $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and the parameters $\theta$ of the neural networks. The overall joint likelihood of the model is

$$p(\mathbf{X}, \mathbf{L}, \mathbf{Y}, \mathbf{Z}) = \prod_{n=1}^N p(\mathbf{y}_n)\, p(\mathbf{z}_n \mid \mathbf{y}_n)\, p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)\; p(\mathbf{L} \mid \mathbf{Y}, \boldsymbol{\alpha}, \boldsymbol{\beta}). \qquad (3)$$

For this model, the learning objective is to maximize the variational lower bound of the marginal log-likelihood of the entire dataset, $\log p(\mathbf{X}, \mathbf{L})$:

$$\mathcal{L} = \mathbb{E}_{q_\phi(\mathbf{Y}, \mathbf{Z} \mid \mathbf{X})}\big[\log p(\mathbf{X}, \mathbf{L}, \mathbf{Y}, \mathbf{Z}) - \log q_\phi(\mathbf{Y}, \mathbf{Z} \mid \mathbf{X})\big] \leq \log p(\mathbf{X}, \mathbf{L}). \qquad (4)$$

To deal with the non-conjugate likelihood $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$, we introduce inference networks for each of the latent variables $\mathbf{y}_n$ and $\mathbf{z}_n$. The inference networks are assumed to have the factorized form $q_\phi(\mathbf{y}_n, \mathbf{z}_n \mid \mathbf{x}_n) = q_\phi(\mathbf{y}_n \mid \mathbf{x}_n)\, q_\phi(\mathbf{z}_n \mid \mathbf{x}_n, \mathbf{y}_n)$, which are Categorical and Normal distributions, respectively:

$$q_\phi(\mathbf{y}_n \mid \mathbf{x}_n) = \mathrm{Cat}\big(\mathbf{y}_n \mid \pi_\phi(\mathbf{x}_n)\big), \qquad q_\phi(\mathbf{z}_n \mid \mathbf{x}_n, \mathbf{y}_n) = \mathcal{N}\big(\mathbf{z}_n \mid \mu_\phi(\mathbf{x}_n, \mathbf{y}_n), \mathrm{diag}(\sigma^2_\phi(\mathbf{x}_n, \mathbf{y}_n))\big),$$

where $\sigma_\phi(\cdot)$ is a vector of standard deviations and $\phi$ denotes the inference network parameters. Similar to the approach in kingma2014semi , we can analytically sum over the discrete variables $\mathbf{y}_n$ in the lower bound and use the reparameterization trick to compute gradients w.r.t. $\theta$ and $\phi$.

The above objective sums over all data and annotations. For large datasets, we can conveniently use a stochastic version that approximates the lower bound with subsampled minibatches. Specifically, the variational lower bound decomposes into two terms, $\mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2$, where $\mathcal{L}_1$ collects the per-datapoint terms and $\mathcal{L}_2$ collects the annotation terms. It is easy to derive an unbiased stochastic approximation of $\mathcal{L}_1$ by summing over a sampled minibatch $\mathcal{B}$ of data and rescaling by $N / |\mathcal{B}|$. For $\mathcal{L}_2$, we can similarly sample a minibatch $\mathcal{S}$ of annotations and rescale by $M_a / |\mathcal{S}|$, where $M_a$ denotes the total number of annotations.
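A schematic of this minibatch rescaling is sketched below; `elbo_data_term` and `elbo_annotation_term` are hypothetical callbacks that return the per-datapoint and per-annotation contributions to $\mathcal{L}_1$ and $\mathcal{L}_2$, and are not functions from the released code.

```python
import numpy as np

def stochastic_elbo(data, annotations, elbo_data_term, elbo_annotation_term,
                    batch_size=128, ann_batch_size=256, rng=None):
    """Unbiased minibatch estimate of L = L1 + L2: subsample data points for the
    per-datapoint term and annotations for the relational term, then rescale
    each partial sum by the inverse sampling fraction."""
    rng = rng or np.random.RandomState(0)
    N, M = len(data), len(annotations)
    idx = rng.choice(N, size=min(batch_size, N), replace=False)
    ann_idx = rng.choice(M, size=min(ann_batch_size, M), replace=False)
    L1 = (N / len(idx)) * sum(elbo_data_term(data[i]) for i in idx)
    L2 = (M / len(ann_idx)) * sum(elbo_annotation_term(annotations[i]) for i in ann_idx)
    return L1 + L2
```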

3 Natural gradient inference for the fully Bayesian model

In the previous section, the global parameters are assumed to be deterministic and are directly optimized by gradient descent. In this section, we propose a fully Bayesian variant of our model (BayesSCDC), which automatically trades off between model complexity and fitting the data. There is no overfitting if we choose a large number of components in the mixture, in which case the variational treatment below can automatically determine the optimal number of mixture components. We develop a fast natural-gradient stochastic variational inference algorithm for BayesSCDC, which effectively combines variational message passing for the conjugate structures (i.e., the relational part and the mixture part) with amortized learning of the deep components (i.e., the deep generative model).

3.1 Fully Bayesian semi-crowdsourced deep clustering (BayesSCDC)

For the mixture model, we choose a Dirichlet prior over the mixing coefficients and an independent Normal-Inverse-Wishart (NIW) prior governing the mean and covariance of each Gaussian component, given by

$$p(\boldsymbol{\pi}) = \mathrm{Dir}(\boldsymbol{\pi} \mid \mathbf{a}_0), \qquad p(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) = \mathrm{NIW}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k \mid \mathbf{m}_0, \kappa_0, \mathbf{S}_0, \nu_0), \qquad (5)$$

where $\mathbf{m}_0$ is the location parameter, $\kappa_0$ is the concentration, $\mathbf{S}_0$ is the scale matrix (positive definite), and $\nu_0$ is the degrees of freedom. The densities of these priors can be written in the standard exponential-family form

$$p(w) = \exp\big\{\langle \eta, t(w) \rangle - A(\eta)\big\},$$

where $\eta$ denotes the natural parameters, $t(\cdot)$ denotes the sufficient statistics (detailed expressions of each distribution can be found in Appendix A), and $A(\cdot)$ denotes the log-partition function.

For the relational model, we assume the accuracy parameters of all workers ($\alpha_m$, $\beta_m$) are drawn independently from common priors. We choose conjugate Beta priors for them:

$$p(\alpha_m) = \mathrm{Beta}(\alpha_m \mid \tau_{\alpha_1}, \tau_{\alpha_2}), \qquad p(\beta_m) = \mathrm{Beta}(\beta_m \mid \tau_{\beta_1}, \tau_{\beta_2}). \qquad (6)$$

In exponential-family form, $p(\alpha_m) = \exp\{\langle \eta^0_\alpha, t(\alpha_m) \rangle - A(\eta^0_\alpha)\}$ (the form of $p(\beta_m)$ is similar), where $\eta^0_\alpha = (\tau_{\alpha_1} - 1, \tau_{\alpha_2} - 1)$ and $t(\alpha_m) = (\log \alpha_m, \log(1 - \alpha_m))$.
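Because the Beta priors are conjugate to the Bernoulli annotation likelihood given the cluster indicators, the optimal variational factors for each worker's accuracies are again Beta distributions whose parameters accumulate expected pair counts. The following is a minimal sketch of this coordinate update, assuming soft pair-agreement probabilities $\mathbb{E}_q[\mathbf{y}_i^\top \mathbf{y}_j]$ are available; the names and data layout are illustrative.

```python
import numpy as np

def update_worker_posterior(L_m, q_same, prior_a=(1.0, 1.0), prior_b=(1.0, 1.0)):
    """Coordinate update for q(alpha_m), q(beta_m) as Beta distributions.
    L_m[i, j] in {1.0, 0.0, nan}; q_same[i, j] = E_q[ y_i^T y_j ], the posterior
    probability that pair (i, j) lies in the same cluster."""
    obs = ~np.isnan(L_m)
    iu = np.triu_indices_from(L_m, k=1)          # each unordered pair once
    mask = obs[iu]
    l, s = L_m[iu][mask], q_same[iu][mask]
    # alpha_m (sensitivity): driven by pairs believed to be in the same cluster
    a_alpha = prior_a[0] + np.sum(s * (l == 1))
    b_alpha = prior_a[1] + np.sum(s * (l == 0))
    # beta_m (specificity): driven by pairs believed to be in different clusters
    a_beta = prior_b[0] + np.sum((1 - s) * (l == 0))
    b_beta = prior_b[1] + np.sum((1 - s) * (l == 1))
    return (a_alpha, b_alpha), (a_beta, b_beta)
```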

3.2 Natural-gradient stochastic variational inference

The overall joint distribution of all hidden and observed variables takes the form

$$p(\mathbf{X}, \mathbf{L}, \mathbf{Y}, \mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = p(\boldsymbol{\pi}) \prod_{k=1}^K p(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \prod_{m=1}^M p(\alpha_m)\, p(\beta_m) \prod_{n=1}^N p(\mathbf{y}_n \mid \boldsymbol{\pi})\, p(\mathbf{z}_n \mid \mathbf{y}_n, \boldsymbol{\mu}, \boldsymbol{\Sigma})\, p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)\; p(\mathbf{L} \mid \mathbf{Y}, \boldsymbol{\alpha}, \boldsymbol{\beta}). \qquad (7)$$

Our learning objective is to maximize the marginal likelihood of the observed data $\mathbf{X}$ and pairwise annotations $\mathbf{L}$. Exact posterior inference for this model is intractable, so we consider a mean-field variational family $q(\boldsymbol{\pi})\, q(\boldsymbol{\mu}, \boldsymbol{\Sigma})\, q(\boldsymbol{\alpha})\, q(\boldsymbol{\beta})\, q(\mathbf{Y})\, q(\mathbf{Z})$. To simplify notation, we write each variational distribution in its exponential-family form. The evidence lower bound (ELBO) of $\log p(\mathbf{X}, \mathbf{L})$ is

$$\mathcal{L}[q] = \mathbb{E}_q\big[\log p(\mathbf{X}, \mathbf{L}, \mathbf{Y}, \mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\alpha}, \boldsymbol{\beta}) - \log q\big]. \qquad (8)$$

In traditional mean-field variational inference for conjugate models, the optimal solution that maximizes eq. 8 over each variational parameter can be derived analytically with the other parameters fixed, so coordinate ascent can be applied as an efficient message passing algorithm winn2005variational ; hoffman2013stochastic . However, this is not directly applicable to our model due to the non-conjugate observation likelihood $p_\theta(\mathbf{x}_n \mid \mathbf{z}_n)$. Inspired by johnson2016composing , we handle the non-conjugate likelihood by introducing recognition networks. Different from SCDC in Section 2.3, the recognition networks here are used to form conjugate graphical model potentials:

$$\psi(\mathbf{z}_n; \mathbf{x}_n, \phi) = \exp\big\{\langle r_\phi(\mathbf{x}_n), t(\mathbf{z}_n) \rangle\big\}, \qquad (9)$$

where $r_\phi(\cdot)$ is a recognition network that outputs Gaussian natural parameters. By replacing the non-conjugate likelihood in the original ELBO with the conjugate term defined by $\psi$, we have the following surrogate objective $\hat{\mathcal{L}}$:

$$\hat{\mathcal{L}}[q, \phi] = \mathbb{E}_q\Big[\log p(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{Y}, \mathbf{Z}, \mathbf{L}) + \sum_{n=1}^N \log \psi(\mathbf{z}_n; \mathbf{x}_n, \phi) - \log q\Big]. \qquad (10)$$

As we shall see, the surrogate objective helps us exploit the conjugate structure in the model and thus enables a fast message-passing algorithm for these parts. Specifically, we can view eq. 10 as the ELBO of a conjugate graphical model with the same structure as in Fig. 1 (up to a constant). Similar to coordinate-ascent mean-field variational inference hoffman2013stochastic , we can derive the local partial optimizers of the individual variational parameters as below.

The optimal solution for $q(\mathbf{Z})$ factorizes over $n$, i.e., $q(\mathbf{Z}) = \prod_n q(\mathbf{z}_n)$, and depends on the expected sufficient statistics of $q(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and $q(\mathbf{y}_n)$:

(11)
(12)

By further assuming a mean-field structure over $\mathbf{Y}$, $q(\mathbf{Y}) = \prod_n q(\mathbf{y}_n)$, we have the local partial optimizer for each single $q(\mathbf{y}_n)$ as

(13)
(14)

where the expected log-odds of a worker's accuracy parameters acts as the weight of the message passed from $q(\mathbf{y}_j)$ to $q(\mathbf{y}_i)$ along an observed annotation. Using a block coordinate ascent algorithm that applies eqs. 14 and 12 alternately, we can find joint local partial optimizers of $q(\mathbf{Y})$ and $q(\mathbf{Z})$ given the other parameters fixed, i.e.,

(15)

Plugging the local partial optimizers $q^*(\mathbf{Y})$ and $q^*(\mathbf{Z})$ back into $\hat{\mathcal{L}}$, we define the final objective

(16)

As shown in johnson2016composing , this objective lower-bounds the partially-optimized mean-field objective and thus can serve as a variational objective itself. We compute the natural gradients of the objective w.r.t. the global variational parameters:

(17)

Note that the first term in eq. 17 has the same form as the natural gradient in SVI hoffman2013stochastic , which is easy to compute, and the second term originates from the dependence of the local partial optimizers on the global variational parameters and can be computed using the reparameterization trick. For the other parameters ($\theta$ and $\phi$), we can also obtain gradients using the reparameterization trick.

Stochastic approximation:

Computing the full natural gradient in eq. 17 requires a scan over all data and annotations, which is time-consuming. Similar to Section 2.3, we can approximate the variational lower bound with unbiased estimates using mini-batches of data and annotations, giving a stochastic natural gradient. Several sampling strategies have been developed for relational models gopalan2012scalable to keep the stochastic gradient unbiased. Here we choose the simplest one: we sample annotated data pairs uniformly from the annotations to form a subsample of the relational model, perform local message passing (eqs. 14 and 12), and then perform the global update using the stochastic natural gradient computed on the subsample. For the unannotated data, we also subsample mini-batches and perform the local and global steps without the relational terms. The full procedure of BayesSCDC is shown in Algorithm 1.
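For the conjugate global factors, the stochastic natural-gradient update reduces to interpolating the current natural parameters toward the prior plus the rescaled minibatch sufficient statistics. A generic sketch follows, assuming a fixed step size `rho` (an actual implementation may use a decaying schedule); it applies equally to the Dirichlet, NIW, and Beta factors.

```python
def natural_gradient_step(eta_global, eta_prior, minibatch_suff_stats, scale, rho=0.01):
    """One stochastic natural-gradient step for a conjugate global factor.
    eta_global: current natural parameters of q (e.g. Dirichlet/NIW/Beta), as an array.
    minibatch_suff_stats: expected sufficient statistics summed over the minibatch.
    scale: N / |minibatch| (or total #annotations / |sampled annotations|)."""
    target = eta_prior + scale * minibatch_suff_stats   # full-batch coordinate-ascent target
    return (1.0 - rho) * eta_global + rho * target      # step of length rho along the natural gradient
```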

Comparison with SCDC

BayesSCDC differs in two aspects: (a) fully Bayesian treatment of the global parameters; (b) the variational algorithm. As we shall see in the experiments, the result of (a) is that BayesSCDC can automatically determine the number of mixture components during training. As for (b), note that the variational family used in SCDC is not more flexible, but more restricted compared to BayesSCDC. In BayesSCDC, the mean-field assumption does not imply that $q(\mathbf{Y})$ and $q(\mathbf{Z})$ are independent; instead they implicitly influence each other through message passing in eqs. (12) and (14). More importantly, in BayesSCDC the variational posterior over $\mathbf{Y}$ gathers information from the observed annotations $\mathbf{L}$ through message passing in the relational model. In contrast, the amortized form $q_\phi(\mathbf{y}_n \mid \mathbf{x}_n)$ used in SCDC ignores the effect of the observed annotations. Another advantage of the inference algorithm in BayesSCDC is its computational cost. As seen in Algorithm 1, the number of passes through the $\mathbf{z}$-to-$\mathbf{x}$ decoder network is no longer linear in the number of clusters, because we avoid the explicit sum over $\mathbf{y}_n$ in the observation term that is required in Section 2.3.

  Input: observations $\mathbf{X}$, annotations $\mathbf{L}$, initial variational parameters
  repeat
     Sample a mini-batch of data and annotations
     for each local variational parameter of $q(\mathbf{y}_n)$ and $q(\mathbf{z}_n)$ do
        Update alternately using eq. 12 and eq. 14
     end for
     Sample $\mathbf{z}_n \sim q(\mathbf{z}_n)$
     Use the samples to approximate the expectations in the lower bound eq. 16
     Update the global variational parameters using the natural gradient in eq. 17
     Update $\theta$ and $\phi$ using their (reparameterization) gradients
  until convergence
Algorithm 1 Semi-crowdsourced clustering with DGMs (BayesSCDC)

4 Related work

Most previous work on learning-from-crowds focuses on aggregating noisy crowdsourced labels from several predefined classes dawid1979maximum ; raykar2010learning ; welinder2010multidimensional ; zhouaggregating ; Tian2015Max . A common approach is to simultaneously estimate the workers' behavior models and the ground truths. Different from this line of work, crowdclustering gomes2011crowdclustering collects pairwise labels from the crowd, including must-links and cannot-links, and then discovers the items' affiliations as well as the category structure from these noisy labels, so it can be used on a broader range of applications than the classification methods. Recent work Vinayak2016Crowdsourced also developed a crowdclustering algorithm based on triplet annotations.

One shortcoming of crowdclustering is that it can only cluster objects with available manual annotations. For large-scale problems, it is not feasible to have each object manually annotated by multiple workers. Similar problems have been extensively discussed in the semi-supervised clustering literature, where we are given the features of all items but constraints on only a small portion of them. Metric learning methods, including Information-Theoretic Metric Learning (ITML) davis2007information and Metric Pairwise Constrained KMeans (MPCKMeans) bilenko2004integrating , are used on this problem: they first learn a similarity metric between items, mainly based on the supervised portion of the data, and then cluster the remaining items using this metric. Semi-crowdsourced clustering (SemiCrowd) yi2012semi combines the ideas of crowdclustering and semi-supervised clustering; it aims to learn a pairwise similarity measure from the crowdsourced labels on a subset of objects and the features of all objects. Unlike crowdclustering, the number of clusters in SemiCrowd is assumed to be given a priori, and it does not estimate the behavior of different workers. Multiple Clustering Views from the Crowd (MCVC) chang2017multiple extends the idea to discover several different clustering results from the noisy labels provided by uncertain experts. A common shortcoming of these semi-crowdsourced clustering methods is that they cannot make good use of unlabeled items when measuring similarities; our model is a step in this direction.

As shown in Section 2.1, our model is a deep generative model (DGM) with relational latent structure. DGMs are probabilistic graphical models that use neural networks to parameterize the conditional distributions between random variables. Unlike traditional probabilistic models, DGMs can directly model high-dimensional outputs with complex structures, which enables end-to-end training on real data. They have shown success in image generation kingma2013auto , semi-supervised learning kingma2014semi , and one-shot classification rezende2016one . Typical inference algorithms for DGMs are of the amortized form, like that in Section 2.3. However, this approach cannot leverage the conjugate structure in the latent variables, so little work has been done on the fully Bayesian treatment of global parameters in DGMs; johnson2016composing ; lin2018variational are two exceptions. In johnson2016composing the authors propose using recognition networks to produce conjugate graphical model potentials, so that traditional variational message passing algorithms and natural gradient updates can easily be combined with amortized learning of network parameters. Our work extends their algorithm to relational observations, which has not been investigated before.

5 Experiments

In this section, we demonstrate the effectiveness of the proposed methods on synthetic and real-world datasets with simulated or crowdsourced noisy annotations. Code is available at https://github.com/xinmei9322/semicrowd. Part of the implementation is based on ZhuSuan zhusuan2017 .

5.1 Toy Pinwheel dataset

Simulating noisy annotations from workers. Suppose we have $M$ workers with accuracy parameters $\{(\alpha_m, \beta_m)\}_{m=1}^M$. We randomly sample pairs of items $(i, j)$ and generate the annotation provided by worker $m$ based on the true clustering labels of $i$ and $j$ as well as the worker's accuracy parameters. If $i$ and $j$ belong to the same cluster, the worker provides an ML constraint ($L^{(m)}_{ij} = 1$) with probability $\alpha_m$. If not, the worker provides a CL constraint ($L^{(m)}_{ij} = 0$) with probability $\beta_m$.
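A small sketch of this simulation, assuming equal accuracies for all workers and uniform sampling of item pairs as described above (the function name and output format are illustrative):

```python
import numpy as np

def simulate_annotations(y_true, n_workers=20, pairs_per_worker=49,
                         alpha=0.9, beta=0.9, seed=0):
    """Simulate noisy pairwise annotations from two-coin workers.
    Returns a list of (worker, i, j, label) tuples, label 1 = must-link, 0 = cannot-link."""
    rng = np.random.RandomState(seed)
    n = len(y_true)
    annotations = []
    for m in range(n_workers):
        for _ in range(pairs_per_worker):
            i, j = rng.choice(n, size=2, replace=False)
            if y_true[i] == y_true[j]:
                label = int(rng.rand() < alpha)        # sensitivity: P(ML | same cluster)
            else:
                label = int(rng.rand() < 1.0 - beta)   # 1 - specificity: P(ML | different clusters)
            annotations.append((m, i, j, label))
    return annotations
```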

Evaluation metrics. The clustering performance is evaluated by the commonly used normalized mutual information (NMI) score strehl2002cluster , which measures the similarity between two partitions. Following recent work xie2016unsupervised , we also report the unsupervised clustering accuracy, which requires computing the best mapping between predicted clusters and ground-truth labels; this can be done efficiently with the Hungarian algorithm.
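Both metrics can be computed with standard tools; the following sketch uses scikit-learn's NMI and SciPy's Hungarian solver (`linear_sum_assignment`) for the best cluster-to-label mapping.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Unsupervised clustering accuracy: find the assignment of predicted cluster
    ids to ground-truth labels that maximizes matches (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    D = max(y_pred.max(), y_true.max()) + 1
    cost = np.zeros((D, D), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                                 # co-occurrence counts
    row, col = linear_sum_assignment(-cost)             # maximize matched counts
    return cost[row, col].sum() / y_pred.size

def nmi(y_true, y_pred):
    return normalized_mutual_info_score(y_true, y_pred)
```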

Figure 2: Clustering results on the Pinwheel dataset, with each color representing one cluster. (a) The Pinwheel dataset; (b) without annotations, good initialization; (c) without annotations, bad initialization; (d) with noisy annotations on a subset of data.

First we apply our method to a toy example, the Pinwheel dataset in Fig. 2(a), following johnson2016composing ; lin2018variational . It has 5 clusters and each cluster has 100 data points, so there are 500 data points in total. We compare with unsupervised clustering to understand how much the noisy annotations help. The clustering results are shown in Fig. 2. We randomly sampled 100 data points for annotation and simulated 20 workers; each worker gives 49 pairs of annotations, 980 in total. We set equal accuracy parameters for all workers.

We use the fully Bayesian model (BayesSCDC) described in Section 3. The initial number of clusters is set to a larger value, since the hyper-priors are intrinsically sparsity-inducing and the number of clusters can be learned automatically. Unsupervised clustering is sensitive to initialization: with good initialization it achieves 95.6% accuracy and an NMI score of 0.91, as shown in Fig. 2(b), and converges to a reduced number of clusters after training. However, with bad initialization the accuracy and NMI score of unsupervised clustering are 75.6% and 0.806, respectively, as shown in Fig. 2(c). With noisy annotations on the 100 randomly sampled data points, our model improves the accuracy to 96.6% and the NMI score to 0.94, again converging to a reduced number of clusters. Our model avoids the bad result in Fig. 2(c) by making use of the annotations.

5.2 UCI benchmark experiments

In this subsection, we compare the proposed SCDC with competing methods on the UCI benchmarks. The baselines include MCVC chang2017multiple , SemiCrowd yi2012semi , and semi-supervised clustering methods such as ITML davis2007information , MPCKMeans bilenko2004integrating , and the Cluster-based Similarity Partitioning Algorithm (CSPA) strehl2002cluster .

Crowdsourced annotations are not available for the UCI datasets. Following the experimental protocol in MCVC chang2017multiple , we generate noisy annotations from simulated workers with different sensitivities and specificities, which is more challenging than equal accuracy parameters. The number of annotations provided by each worker varies, and the number of ML constraints equals the number of CL constraints.

We test on the Face dataset Dua:2017 , containing 640 face images of 20 people with different poses (straight, left, right, up). The ground-truth clustering is based on the poses. Each original image has 960 pixels. To speed up training, the baseline methods apply Principal Component Analysis (PCA) and keep 20 components. For a fair comparison, we test the proposed SCDC on the same PCA features. Fig. 3 plots the mean and standard deviation of NMI scores over 10 different runs for each fixed number of constraints. In Fig. 3(a), the annotations are randomly generated on the whole dataset. We observe that our method consistently outperforms all competing methods, demonstrating that the clustering benefits from the joint generative modeling of inputs and annotations.

Figure 3: Comparison to baselines: (a) Face: all data points are annotated; (b) Face: only 100 data points are annotated; (c) recovered worker weights: the green line is the true weight of each worker and the red line is the weight estimated by our model.

Annotations on a subset. To illustrate the benefit of our method when only a small part of the data is annotated, we simulate noisy annotations on only 100 images. Fig. 3(b) shows the results with 100 annotated images. Our method exploits more of the structural information in the unlabeled data and shows notable improvements over all competing methods.

Recover worker behaviors. For each worker $m$, our model estimates the accuracies $\alpha_m$ and $\beta_m$. It can be derived from eq. 2 that the annotations of each worker are weighted by a function of $\alpha_m$ and $\beta_m$, which means workers with higher accuracies are more reliable and receive larger weights. We plot the weights of 5 workers in the Face experiments in Fig. 3(c).

5.3 End-to-end training with raw images

MNIST

As mentioned earlier, an important feature of DGMs is that they can directly model raw data, such as images. To verify this, we experiment with the MNIST dataset of digit images, which includes 60k training images of handwritten digits 0-9. We collect crowdsourced annotations from web workers, 3276 annotations in total. The two variants of our model (SCDC, BayesSCDC) are tested with and without annotations. For BayesSCDC, a non-informative prior is placed over the worker accuracy parameters; for a fair comparison, we also randomly sample the initial accuracy parameters of SCDC from the same distribution. We average the results of 5 runs. In each run we randomly initialize the model 10 times and pick the best result. All models are trained for 200 epochs with a minibatch size of 128 for each random initialization. The results are shown in Table 1. We can see that both models effectively combine the information from the raw data and the annotations, i.e., they work reasonably well with only unlabeled data, and better when given noisy annotations on a subset of the data. In terms of clustering accuracy and NMI, BayesSCDC outperforms SCDC. We believe this is because the variational message passing algorithm used in BayesSCDC can effectively gather information from the crowdsourced annotations to form better variational approximations, as explained in Section 3.2. Besides being more accurate, BayesSCDC is much faster because the computational cost of the neural networks does not scale linearly with the number of clusters (50 in this case). In Fig. 4(a) we show that BayesSCDC is more flexible and automatically determines the number of mixture components during training.

Method      | without annotations: Accuracy / NMI / Time          | with annotations: Accuracy / NMI / Time
SCDC        | 65.92 ± 3.47% / 0.6953 ± 0.0167 / 177.3s            | 81.87 ± 3.86% / 0.7657 ± 0.0233 / 201.7s
BayesSCDC   | 77.64 ± 3.97% / 0.7944 ± 0.0178 / 11.2s             | 84.24 ± 5.52% / 0.8120 ± 0.0210 / 16.4s
Table 1: Clustering performance on MNIST. The average time per epoch is reported.
Figure 4: (a) MNIST: visualization of generated random samples from the 50 clusters during training of BayesSCDC, with snapshots at epochs 1, 7, 25, and 200. Each column represents a cluster, whose inferred mixing proportion is reflected by brightness; (b) Clustering results on CIFAR-10: (top) unsupervised; (bottom) with noisy annotations.

CIFAR-10

We also conduct experiments with real crowdsourced labels on more complex natural images, i.e., CIFAR-10. Using the same crowdsourcing scheme, we collect 8640 noisy annotations from 32 web workers on a subset of 4000 randomly sampled images. We apply SCDC with and without annotations for 5 runs with random initializations. SCDC without annotations fails, with an NMI score of 0.0424 ± 0.0119 and accuracy of 14.23 ± 0.69% over the 5 runs, while SCDC with noisy annotations achieves an NMI score of 0.5549 ± 0.0028 and accuracy of 50.09 ± 0.08%. The clustering results on the test set are shown in Fig. 4(b), where we plot the 10 test samples with the largest probability for each cluster. More experimental details and discussion can be found in the supplementary material.

6 Conclusion

In this paper, we proposed a semi-crowdsourced clustering model based on deep generative models, together with its fully Bayesian version, and developed fast (natural-gradient) stochastic variational inference algorithms for them. The resulting method can jointly model the crowdsourced labels, worker behaviors, and the (un)annotated items. Experiments demonstrate that the proposed method outperforms previous competing methods on standard benchmark datasets. Our work also provides general guidelines on how to incorporate DGMs into statistical relational models, where the proposed inference algorithm can be applied in a broader context.

Acknowledgement

Yucen Luo would like to thank Matthew Johnson for helpful discussions on the SVAE algorithm johnson2016composing , and Yale Chang for sharing the code of the UCI benchmark experiments. We thank the anonymous reviewers for feedback that greatly improved the paper. This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, 61621136008, 61332007), Beijing NSF Project (No. L172037), Tiangong Institute for Intelligent Computing, the NVIDIA NVAIL Program, and projects from Siemens, NEC and Intel.

References

  • [1] Mikhail Bilenko, Sugato Basu, and Raymond J Mooney. Integrating constraints and metric learning in semi-supervised clustering. In Proceedings of the twenty-first international conference on Machine learning, page 11. ACM, 2004.
  • [2] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM computing surveys (CSUR), 41(3):15, 2009.
  • [3] Yale Chang, Junxiang Chen, Michael H Cho, Peter J Castaldi, Edwin K Silverman, and Jennifer G Dy. Multiple clustering views from multiple uncertain experts. In International Conference on Machine Learning, pages 674–683, 2017.
  • [4] Jason V Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S Dhillon. Information-theoretic metric learning. In Proceedings of the 24th international conference on Machine learning, pages 209–216. ACM, 2007.
  • [5] Alexander Philip Dawid and Allan M Skene. Maximum likelihood estimation of observer error-rates using the em algorithm. Applied Statistics, pages 20–28, 1979.
  • [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
  • [7] Dua Dheeru and Efi Karra Taniskidou. UCI machine learning repository, 2017.
  • [8] Ryan G Gomes, Peter Welinder, Andreas Krause, and Pietro Perona. Crowdclustering. In Advances in neural information processing systems, pages 558–566, 2011.
  • [9] Prem K Gopalan, Sean Gerrish, Michael Freedman, David M Blei, and David M Mimno. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems, pages 2249–2257, 2012.
  • [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [11] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • [12] Jeff Howe. The rise of crowdsourcing. Wired magazine, 14(6):1–4, 2006.
  • [13] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in neural information processing systems, pages 2946–2954, 2016.
  • [14] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
  • [15] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [16] Wu Lin, Mohammad Emtiyaz Khan, and Nicolas Hubacher. Variational message passing with structured inference networks. In International Conference on Learning Representations, 2018.
  • [17] Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, and Bo Zhang. Smooth neighbors on teacher graphs for semi-supervised learning. In The IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [18] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. JMLR, 11:1297–1322, 2010.
  • [19] Danilo J Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pages 1521–1529. JMLR. org, 2016.
  • [20] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888–905, 2000.
  • [21] Jiaxin Shi, Jianfei. Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, and Yuhao Zhou. ZhuSuan: A library for Bayesian deep learning. arXiv preprint arXiv:1709.05870, 2017.
  • [22] Alexander Strehl and Joydeep Ghosh. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. Journal of machine learning research, 3(Dec):583–617, 2002.
  • [23] Tian Tian and Jun Zhu. Max-margin majority voting for learning from crowds. In Advances in Neural Information Processing Systems, pages 1621–1629, 2015.
  • [24] Ramya Korlakai Vinayak and Babak Hassibi. Crowdsourced clustering: Querying edges vs triangles. In Neural Information Processing System, 2016.
  • [25] Peter Welinder, Steve Branson, Pietro Perona, and Serge J Belongie. The multidimensional wisdom of crowds. In Advances in neural information processing systems, pages 2424–2432, 2010.
  • [26] John Winn and Christopher M Bishop. Variational message passing. Journal of Machine Learning Research, 6(Apr):661–694, 2005.
  • [27] Christian Wiwie, Jan Baumbach, and Richard Röttger. Comparing the performance of biomedical clustering methods. Nature methods, 12(11):1033–1038, 2015.
  • [28] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pages 478–487, 2016.
  • [29] Eric P Xing, Michael I Jordan, Stuart J Russell, and Andrew Y Ng. Distance metric learning with application to clustering with side-information. In Advances in neural information processing systems, pages 521–528, 2003.
  • [30] Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, and Anil K Jain. Semi-crowdsourced clustering: Generalizing crowd labeling by robust distance metric learning. In Advances in neural information processing systems, pages 1772–1780, 2012.
  • [31] Dengyong Zhou, Qiang Liu, John Platt, and Christopher Meek. Aggregating ordinal labels from crowds by minimax conditional entropy. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 262–270, 2014.

Appendix A Derivations

A.1 Natural parameters and sufficient statistics

Note the correspondence between the natural parameters and the standard parameters of each distribution, from which the conjugacy follows. In practice, we parameterize the unnormalized version of the natural parameters since it is unconstrained. The update rule is the same due to the constant difference between the two parameterizations.

A.2 Expected sufficient statistics

where $\psi(\cdot)$ is the digamma function.

A.3 Log partition function

where $\Gamma(\cdot)$ is the Gamma function.

A.4 Variational message passing for local parameters

The local update for $q(\mathbf{z}_n)$:

(18)

The local update for $q(\mathbf{y}_n)$:

where the expectations are taken under the current variational factors.

A.5 The final objective

The final objective:

(19)

The annotation likelihood term:

(20)

The local KL divergence term: