Unsupervised Learning with Imbalanced Data via Structure Consolidation Latent Variable Model

06/30/2016 · Fariba Yousefi et al. · The University of Sheffield and the University of Bristol

Unsupervised learning on imbalanced data is challenging because, when given imbalanced data, current models are often dominated by the majority category and ignore the categories with only small amounts of data. We develop a latent variable model that can cope with imbalanced data by dividing the latent space into a shared space and a private space. Based on Gaussian Process Latent Variable Models, we propose a new kernel formulation that enables the separation of the latent space, and we derive an efficient variational inference method. The performance of our model is demonstrated on an imbalanced medical image dataset.


1 Introduction

In many medical applications, e.g. pathology, negatively labelled data is extremely easy to obtain (e.g. healthy cells). Positive labels, on the other hand, can be much harder to acquire (e.g. particular disease morphologies). These massively imbalanced problems are challenging for most algorithms because the negative class tends to dominate the objective function and the resulting model performs poorly. In practice it is often better to throw away much of the negative data and rebalance the dataset.

Unsupervised learning has been attracting a lot of attention as it has the potential to serve as an underpinning technology for a range of challenges such as generative modelling, missing data imputation and coping with multiple data modalities. Unsupervised learning can also be applied to a wider range of datasets, because it does not rely on the availability of carefully labelled data.

In this paper we explore the possibility of using a variant of unsupervised learning to address the problem of label imbalance. We build latent variable models that can accommodate a very large number of negative examples, sharing their characteristics appropriately with the positive class, while simultaneously allowing the model to characterise how the positive class differs through preserved (or private) latent spaces that are learned separately for each class. The resulting model does not suffer from the domination of the majority class described above. We compare with a variant of the discriminative GP-LVM (the model that underpinned GaussianFace) and show significantly improved performance.

Our probabilistic latent variable model divides its latent space into a shared space for all the categories and a private space for each category (Damianou et al., 2012). The shared space captures the common regularities among categories (e.g. the positive and negative classes), while the private space is dedicated to modelling the variance specific to each individual category. Because the modelling of a private space is category specific, its characteristics cannot be dominated by the larger category. Thus the data in each category can be modelled appropriately while the common regularities are still exploited.

We implement the idea of shared and private spaces in the framework of Gaussian Process Latent Variable Models (GPLVM, Lawrence, 2005) by deriving a particular covariance function (kernel) that enables such a separation. We derive a closed-form variational lower bound of the log marginal likelihood of the proposed model, which provides an efficient approximate inference method.

The performance of our model is evaluated on a real image dataset in which the positive and negative data are extremely imbalanced. We show that our model can still learn from the imbalanced data and performs well in both generative and discriminative tasks.

2 Structure Consolidation Latent Variable Model

We assume the dataset is represented as a set of fixed-length vectors $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_N]^\top \in \mathbb{R}^{N \times D}$, where $N$ is the number of data points and $D$ is the dimensionality of individual data points. Additionally, a category label $c_n \in \{1, \ldots, C\}$ is associated with each data point $\mathbf{y}_n$, where $C$ indicates the number of categories in the dataset. We aim to build a probabilistic model that is robust when the numbers of data points in different categories are highly imbalanced.

We assume the data are associated with a set of latent representations $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_N]^\top \in \mathbb{R}^{N \times Q}$, where $Q$ is the dimensionality of the latent space. The latent representations are related to the observed data through an unknown mapping function $f$, which follows a Gaussian process prior,

$y_{nd} = f_d(\mathbf{x}_n) + \epsilon_{nd}, \qquad \epsilon_{nd} \sim \mathcal{N}(0, \sigma^2), \qquad f_d \sim \mathcal{GP}(0, k(\cdot, \cdot)),$   (1)

where $\epsilon_{nd}$ denotes the observation noise with variance $\sigma^2$ and $k(\cdot, \cdot)$ is the kernel function. Given the observed data $\mathbf{Y}$, we wish to obtain a posterior estimate for both the latent representation $\mathbf{X}$ and the unknown mapping function $f$. In our model we separate the latent space into a shared space with dimensionality $Q_s$ and a private space with dimensionality $Q_p$, with $Q = Q_s + Q_p$. Therefore, a latent representation can be denoted as $\mathbf{x}_n = (\mathbf{x}_n^s, \mathbf{x}_n^p)$, where $\mathbf{x}_n^s$ and $\mathbf{x}_n^p$ are the latent representations in the shared and private space respectively. With the separated latent representation, we define the kernel function in our model as

$k\big((\mathbf{x}_n, c_n), (\mathbf{x}_m, c_m)\big) = k_s(\mathbf{x}_n^s, \mathbf{x}_m^s) + k_p\big((\mathbf{x}_n^p, c_n), (\mathbf{x}_m^p, c_m)\big),$   (2)

where $k_s$ is the kernel function for the shared space and $k_p$ is the kernel function for the private space. The shared kernel $k_s$ can be any kernel function built on a vector space from the literature. However, the private kernel is defined to take the following form:

$k_p\big((\mathbf{x}_n^p, c_n), (\mathbf{x}_m^p, c_m)\big) = \delta(c_n, c_m)\, \hat{k}(\mathbf{x}_n^p, \mathbf{x}_m^p),$   (3)

where $\hat{k}$ is the kernel function chosen to calculate the covariance, $c_n$ is the category label of the data point $\mathbf{y}_n$, and the Kronecker delta $\delta(c_n, c_m)$ is one when the two labels agree and zero otherwise, so the private kernel contributes covariance only within a category. We give a unit Gaussian prior distribution to the latent representations, $\mathbf{x}_n \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
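
To make the construction concrete, the following is a minimal NumPy sketch of the composite covariance in Eqs. (2)-(3), assuming exponentiated quadratic kernels for both spaces (as used in our experiments); the function and variable names are illustrative rather than part of the model specification.

    import numpy as np

    def exp_quad(A, B, variance=1.0, lengthscale=1.0):
        # Exponentiated quadratic kernel between the rows of A and B.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return variance * np.exp(-0.5 * sq / lengthscale**2)

    def sclvm_kernel(Xs, Xp, c, Ys=None, Yp=None, d=None):
        # Composite SCLVM covariance, Eqs. (2)-(3): a shared kernel on the
        # shared coordinates plus a private kernel that is non-zero only
        # between points belonging to the same category.
        if Ys is None:
            Ys, Yp, d = Xs, Xp, c
        same_category = (c[:, None] == d[None, :]).astype(float)  # delta(c_n, c_m)
        return exp_quad(Xs, Ys) + same_category * exp_quad(Xp, Yp)

    # Toy usage: six points with two shared and two private dimensions.
    rng = np.random.default_rng(0)
    Xs, Xp = rng.standard_normal((6, 2)), rng.standard_normal((6, 2))
    c = np.array([0, 0, 0, 0, 1, 1])
    K = sclvm_kernel(Xs, Xp, c)  # 6 x 6 covariance matrix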

The log marginal likelihood of the proposed model can be derived as $\log p(\mathbf{Y}) = \log \int p(\mathbf{Y} \,|\, \mathbf{X})\, p(\mathbf{X})\, \mathrm{d}\mathbf{X}$. There is no analytical solution for this marginal likelihood. We apply variational inference and derive a closed-form lower bound of the log marginal likelihood by following a sparse Gaussian process approximation (Titsias & Lawrence, 2010):

$\log p(\mathbf{Y}) \geq \sum_{d=1}^{D}\Big[\log\frac{\sigma^{-N}\,|\mathbf{K}_{uu}|^{1/2}}{(2\pi)^{N/2}\,|\sigma^{-2}\Psi_2+\mathbf{K}_{uu}|^{1/2}}\exp\big(-\tfrac{1}{2}\mathbf{y}_d^\top\mathbf{W}\mathbf{y}_d\big)-\frac{\psi_0}{2\sigma^2}+\frac{1}{2\sigma^2}\operatorname{tr}\big(\mathbf{K}_{uu}^{-1}\Psi_2\big)\Big]-\operatorname{KL}\big(q(\mathbf{X})\,\|\,p(\mathbf{X})\big),$   (4)
$\mathbf{W}=\sigma^{-2}\mathbf{I}_N-\sigma^{-4}\Psi_1\big(\sigma^{-2}\Psi_2+\mathbf{K}_{uu}\big)^{-1}\Psi_1^\top,$   (5)

where $\mathbf{y}_d$ denotes the $d$-th dimension of the observed data, $\mathbf{K}_{uu}$ is the covariance matrix on the inducing points, and $\psi_0 = \langle\operatorname{tr}(\mathbf{K}_{ff})\rangle$, $\Psi_1 = \langle\mathbf{K}_{fu}\rangle$, $\Psi_2 = \langle\mathbf{K}_{uf}\mathbf{K}_{fu}\rangle$ are the expectations of covariance matrices w.r.t. the variational posterior $q(\mathbf{X})$. In our model, these expectations are derived as

$\psi_0=\sum_{n=1}^{N}\Big(\big\langle k_s(\mathbf{x}_n^s,\mathbf{x}_n^s)\big\rangle_{q(\mathbf{x}_n)}+\big\langle \hat{k}(\mathbf{x}_n^p,\mathbf{x}_n^p)\big\rangle_{q(\mathbf{x}_n)}\Big),$   (6)
$(\Psi_1)_{nm}=\big\langle k_s(\mathbf{x}_n^s,\mathbf{z}_m^s)\big\rangle_{q(\mathbf{x}_n)}+\delta(c_n,c_m^z)\big\langle \hat{k}(\mathbf{x}_n^p,\mathbf{z}_m^p)\big\rangle_{q(\mathbf{x}_n)},$   (7)
$(\Psi_2)_{mm'}=\sum_{n=1}^{N}\Big\langle\big(k_s(\mathbf{x}_n^s,\mathbf{z}_m^s)+\delta(c_n,c_m^z)\,\hat{k}(\mathbf{x}_n^p,\mathbf{z}_m^p)\big)\big(k_s(\mathbf{x}_n^s,\mathbf{z}_{m'}^s)+\delta(c_n,c_{m'}^z)\,\hat{k}(\mathbf{x}_n^p,\mathbf{z}_{m'}^p)\big)\Big\rangle_{q(\mathbf{x}_n)},$   (8)

where $\mathbf{Z}=[(\mathbf{z}_1^s,\mathbf{z}_1^p),\ldots,(\mathbf{z}_M^s,\mathbf{z}_M^p)]$ and $\mathbf{c}^z=(c_1^z,\ldots,c_M^z)$ are the variational parameters known as inducing inputs and inducing labels.
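
The closed-form expressions for these statistics under the exponentiated quadratic kernel follow Titsias & Lawrence (2010). As a sanity check on an implementation, they can also be estimated by simple Monte Carlo under the Gaussian variational posterior; the sketch below (illustrative names, reusing sclvm_kernel from the sketch above) estimates Eqs. (6)-(8) by sampling.

    def psi_stats_mc(mu_s, var_s, mu_p, var_p, c, Zs, Zp, cz, S=2000, seed=0):
        # Monte Carlo estimates of psi0, Psi1, Psi2 in Eqs. (6)-(8) under
        # q(x_n) = N(mu_n, diag(var_n)), with inducing inputs (Zs, Zp) and
        # inducing labels cz.
        rng = np.random.default_rng(seed)
        N, M = mu_s.shape[0], Zs.shape[0]
        psi0, Psi1, Psi2 = 0.0, np.zeros((N, M)), np.zeros((M, M))
        for _ in range(S):
            Xs = mu_s + np.sqrt(var_s) * rng.standard_normal(mu_s.shape)
            Xp = mu_p + np.sqrt(var_p) * rng.standard_normal(mu_p.shape)
            Knm = sclvm_kernel(Xs, Xp, c, Zs, Zp, cz)   # a draw of K_fu
            psi0 += np.trace(sclvm_kernel(Xs, Xp, c))   # a draw of tr(K_ff)
            Psi1 += Knm
            Psi2 += Knm.T @ Knm                         # a draw of K_uf K_fu
        return psi0 / S, Psi1 / S, Psi2 / S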

3 Experiment

Mitosis detection is a stage in tumour assessment that involves determining whether individual cells are in mitosis (dividing to reproduce). These cells are rare. We used data from the publicly available Assessment of Mitosis Detection Algorithms 2013 (AMIDA13, Veta et al., 2015) challenge. The main goal of the challenge is to find accurate mitosis detection methods, automatic or semi-automatic. We use the training set from the challenge, which consists of tissue images from a set of patients, annotated by human experts. We preprocess the tissue images with the algorithm by Snell (2013) and focus on the generated candidate image patches. The resulting image set contains 146,562 fixed-size grey-scale image patches, of which 550 are positive (mitosis) according to the manual annotation. We randomly take 80% of the positive images and 5,000 negative images as the training data, giving 5,440 images in total. Some examples of the training data are shown in Figure 2(a). We applied the proposed structure consolidation latent variable model (SCLVM) to this dataset, using an exponentiated quadratic kernel for both the shared and private space and setting the dimensionality of each space to five.
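
As a concrete illustration, the construction of this imbalanced training set might look as follows; this is a hypothetical NumPy sketch with synthetic stand-ins for the candidate patches and their annotations.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-ins for the candidate patches and their annotations
    # (in reality: 146,562 flattened grey-scale patches, 550 positive).
    patches = rng.random((146_562, 100))
    labels = np.zeros(146_562, dtype=int)
    labels[rng.choice(146_562, 550, replace=False)] = 1

    pos = rng.permutation(np.flatnonzero(labels == 1))
    neg = np.flatnonzero(labels == 0)
    train_idx = np.r_[pos[:440],                    # 80% of the positives
                      rng.choice(neg, 5_000, replace=False)]
    Y_train, c_train = patches[train_idx], labels[train_idx]  # 5,440 images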

Both the latent representations and the kernel parameters are optimized until convergence. The resulting latent space is visualized in Figure 1. The positive and negative images present similar structures in the shared space, which demonstrates the discovered common regularities, while their private spaces are clearly different from each other. To demonstrate the ability of SCLVM to balance modelling capability across imbalanced categories, we draw samples from the learned latent space for both the positive and negative categories (see Figure 2(b)). The generated samples from the two categories are clearly different from each other and capture some characteristics of their respective categories.

We further evaluate the learned latent space by performing classification on a test set (the remaining positive examples plus randomly sampled negative examples, 1,000 images in total). We compare SCLVM with BGPLVM (Titsias & Lawrence, 2010) and DGPLVM (Urtasun & Darrell, 2007; due to its computational complexity we train it on only 1,000 images: 440 positive and 560 negative) over ten test sets. We apply a weighted SVM with an exponentiated quadratic kernel to the latent spaces from BGPLVM and DGPLVM. The results are shown in Table 1. Note that BGPLVM requires an additional classification model to be learned, and this provides probabilities over the classes only in the ad-hoc manner that an SVM does. Similarly, DGPLVM learns a space that reflects the class information, but it provides no means of obtaining a posterior over the classes. Our model is the only one that learns the classification jointly with the generative model and offers a principled way of obtaining probabilities over the classes.
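
For reference, the weighted SVM applied to the baseline latent spaces can be reproduced along the following lines with scikit-learn; the latent coordinates here are synthetic stand-ins, and class_weight='balanced' reweights the two classes inversely to their frequencies.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import precision_recall_fscore_support

    rng = np.random.default_rng(0)
    # Stand-ins for latent coordinates learned by BGPLVM/DGPLVM.
    X_latent = rng.standard_normal((5440, 10))
    y = np.r_[np.ones(440), np.zeros(5000)]

    # Weighted SVM with an exponentiated quadratic (RBF) kernel; the
    # 'balanced' weighting keeps the 5,000 negatives from swamping
    # the 440 positives during training.
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X_latent, y)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y, clf.predict(X_latent), average="binary")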

Figure 1: Visualization of the training data in the learned latent spaces. The first panel shows the positive and negative data in two of the shared dimensions. The second and third panels show two of the private dimensions for the negative and positive data respectively. The fourth panel shows the latent space learned by BGPLVM.

Figure 2: (a) Examples from the dataset. (b) Samples generated from the trained SCLVM. In both figures, the first two rows correspond to positive images and the last two rows to negative images.
Table 1: Classification performance (precision, recall and F1 score) of SCLVM, BGPLVM (SVM) and DGPLVM (SVM). The mean and standard deviation over ten test sets are shown.

4 Conclusion

We presented a probabilistic latent variable model that can cope with imbalanced data. We developed a kernel that separates the latent space into a shared space and a private space, and we proposed an efficient variational inference method by deriving a closed-form lower bound of the marginal likelihood. Beyond the example shown, the ability to jointly model multiple data categories and handle imbalanced datasets can be linked to many other areas such as transfer learning.

References

  • Damianou et al. (2012) Andreas Damianou, Carl Henrik Ek, Michalis K. Titsias, and Neil D. Lawrence. Manifold relevance determination. In John Langford and Joelle Pineau (eds.), Proceedings of the International Conference on Machine Learning, volume 29, San Francisco, CA, 2012. Morgan Kaufmann.
  • Lawrence (2005) Neil D. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005.
  • Snell (2013) Violet Snell. Shape and Texture Recognition for Automated Analysis of Pathology Images. PhD thesis, Centre for Vision, Speech and Signal Processing, University of Surrey, Surrey, UK, 2013.
  • Titsias & Lawrence (2010) Michalis K. Titsias and Neil D. Lawrence. Bayesian Gaussian process latent variable model. In Yee Whye Teh and D. Michael Titterington (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of JMLR W&CP, pp. 844–851, Chia Laguna Resort, Sardinia, Italy, 2010.
  • Urtasun & Darrell (2007) Raquel Urtasun and Trevor Darrell. Discriminative Gaussian process latent variable model for classification. In Proceedings of the 24th International Conference on Machine Learning, pp. 927–934. ACM, 2007.
  • Veta et al. (2015) Mitko Veta, Paul J. Van Diest, Stefan M. Willems, Haibo Wang, Anant Madabhushi, Angel Cruz-Roa, Fabio Gonzalez, Anders B. L. Larsen, Jacob S. Vestergaard, Anders B. Dahl, et al. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Medical Image Analysis, 20(1):237–248, 2015.