Macau: Scalable Bayesian Multi-relational Factorization with Side Information using MCMC

09/15/2015 · by Jaak Simm, et al. · UHasselt · Johnson & Johnson Services, Inc.

We propose Macau, a powerful and flexible Bayesian factorization method for heterogeneous data. Our model can factorize any set of entities and relations that can be represented by a relational model, including tensors and multiple relations per entity. Macau can also incorporate side information, specifically entity and relation features, which are crucial for predicting sparsely observed relations. Macau scales to millions of entity instances, hundreds of millions of observations, and sparse entity features with millions of dimensions. To achieve this scale-up, we specially designed a sampling procedure for entity and relation features that relies primarily on noise injection in linear regressions. We show the performance and advanced features of Macau in a set of experiments, including a challenging drug–protein activity prediction task.


1 Introduction

Matrix factorization (MF) has a long history and a wide range of applications in data science, engineering, and many fields of scientific research. While classical approaches, such as SVD, factorize fully observed matrices, earlier work proposed matrix factorization for partially observed matrices [8]. This enabled the direct use of MF in predictive machine learning problems (e.g., in collaborative filtering). However, the original formulation [8] can easily overfit the data, and it was improved in Probabilistic Matrix Factorization (PMF) [4].

The main idea in these two papers and in subsequent research has been to represent each row and each column by a latent vector of size $D$ and find the best match to the observed elements of the matrix:

$$\min_{U,V} \sum_{(i,j)\in I_{\mathrm{obs}}} \big(Y_{ij} - u_i^\top v_j\big)^2 + \lambda_u \|U\|_F^2 + \lambda_v \|V\|_F^2, \qquad (1)$$

where $u_i$ and $v_j$ are the latent vectors for the $i$th row and $j$th column, $I_{\mathrm{obs}}$ is the set of matrix cells whose value has been observed, $Y_{ij}$ are the observed values, $\|\cdot\|_F$ is the Frobenius norm, and $\lambda_u, \lambda_v$ are regularization parameters. The last two terms in optimization problem (1), introduced in PMF [4], are derived from zero-mean Gaussian priors on $u_i$ and $v_j$ and a Gaussian noise model on the observed values $Y_{ij}$.
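To make (1) concrete, here is a minimal Julia sketch (Julia being the language of our package) that evaluates the objective for a partially observed matrix; the function and variable names are illustrative, not part of any package API.

```julia
using LinearAlgebra

# Evaluate objective (1): squared error over observed cells plus
# Frobenius-norm regularization of the latent matrices.
# U is D×N (one latent column per row of Y), V is D×M (one per column),
# obs is a vector of (i, j, y) triples for the observed cells.
function pmf_objective(U::Matrix{Float64}, V::Matrix{Float64},
                       obs::Vector{Tuple{Int,Int,Float64}},
                       λu::Float64, λv::Float64)
    err = 0.0
    for (i, j, y) in obs
        err += (y - dot(U[:, i], V[:, j]))^2   # (Y_ij - u_i' v_j)^2
    end
    return err + λu * norm(U)^2 + λv * norm(V)^2
end
```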

1.1 Bayesian PMF

Bayesian PMF (BPMF) [6] extends PMF to full Bayesian inference by introducing common multivariate Gaussian priors for the latent variables, one for the rows and one for the columns. To infer these two priors from the data, BPMF places fixed uninformative Normal-Wishart hyperpriors on them. Let $\mu_u$ and $\Lambda_u$ ($\mu_v$ and $\Lambda_v$) be the mean and precision matrix of the Gaussian prior for rows (columns); then the model used by BPMF is

$$u_i \sim \mathcal{N}\big(\mu_u, \Lambda_u^{-1}\big), \qquad v_j \sim \mathcal{N}\big(\mu_v, \Lambda_v^{-1}\big), \qquad (2)$$
$$(\mu_u, \Lambda_u) \sim \mathcal{NW}(\mu_0, \beta_0, W_0, \nu_0), \qquad (\mu_v, \Lambda_v) \sim \mathcal{NW}(\mu_0, \beta_0, W_0, \nu_0), \qquad (3)$$

where $\mathcal{N}$ and $\mathcal{NW}$ are Normal and Normal-Wishart distributions and $\mu_0, \beta_0, W_0, \nu_0$ are the fixed hyperparameters of the Normal-Wishart hyperprior. Similarly to PMF, the noise model of BPMF is Gaussian:

$$Y_{ij} \sim \mathcal{N}\big(u_i^\top v_j, \alpha^{-1}\big), \qquad (4)$$

where $\alpha$ is the precision parameter and is assumed to be known. From (2)–(4) it is straightforward to derive a block Gibbs sampler for each latent vector $u_i$ and $v_j$, and for the parameters of the Gaussian priors $\mu_u$, $\Lambda_u$, $\mu_v$, $\Lambda_v$.
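As an illustration of how one such block Gibbs step looks in practice, the following hedged Julia sketch draws a single row latent vector $u_i$ from its Gaussian conditional, assuming the standard BPMF update $\Lambda_* = \Lambda_u + \alpha \sum_j v_j v_j^\top$ and $\mu_* = \Lambda_*^{-1}(\Lambda_u \mu_u + \alpha \sum_j Y_{ij} v_j)$; names are illustrative.

```julia
using LinearAlgebra, Random

# One Gibbs draw of u_i | rest in BPMF: the conditional is Gaussian with
#   Λ* = Λu + α Σ_j v_j v_j'   and   μ* = Λ*⁻¹ (Λu μu + α Σ_j Y_ij v_j).
# Vs holds the column latent vectors v_j observed in row i, rs the ratings.
function sample_latent(μu::Vector{Float64}, Λu::Matrix{Float64},
                       Vs::Vector{Vector{Float64}}, rs::Vector{Float64},
                       α::Float64)
    D  = length(μu)
    Λs = copy(Λu)
    b  = Λu * μu
    for (v, r) in zip(Vs, rs)
        Λs .+= α .* (v * v')
        b  .+= α .* r .* v
    end
    C = cholesky(Symmetric(Λs))        # Λ* = U'U
    μ = C \ b                          # posterior mean μ*
    return μ + C.U \ randn(D)          # μ* + U⁻¹ z has covariance Λ*⁻¹
end
```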

BPMF generally shows improved predictive performance compared to PMF (e.g., in the collaborative filtering task of Netflix [6]). Another advantage of BPMF is that it provides credibility intervals for its estimates. It should also be noted that BPMF can be easily parallelized and can handle large-scale data sets, such as the Netflix challenge data, which contains about 100 million observations.

1.2 Proposed Method

In this paper we propose Macau, a powerful and flexible method for the factorization of heterogeneous data. Its essential features are:

  • The ability to factorize a wide range of data models, which we represent by a hypergraph where entities are nodes and relations are hyperedges. Supported models include ordinary graphs and tensor relations; see Appendix A.1.

  • The incorporation of features (side information) for any entity and for any relation.

  • Scalability up to millions of entity instances, hundreds of millions of observations, and sparse entity features with millions of dimensions.

We follow the approach of BPMF by proposing a Gibbs sampling scheme that includes a specially designed noise injection step for entity and relation features. This enables scaling the method to millions of sparse features or to tens of thousands of dense features.

The novelty of the proposed method is the combination of all of the above-mentioned functionality into a unified Bayesian framework (see Section 3 for a detailed overview). Moreover, in the context of matrix factorization with entity features, Macau carries out MCMC inference rather than approximate variational inference, such as the Variational Bayes approach proposed in previous research [5].

We apply our method to the standard MovieLens matrix factorization benchmark, outperforming state-of-the-art MF approaches. Additionally, we explore the performance of Macau on a challenging biochemistry task of drug–protein activity prediction, where we demonstrate the effectiveness of the aforementioned characteristics of the method. This task is based on publicly available data from ChEMBL [2]. Finally, we report runtime information for a private industrial data set from Janssen Pharmaceutica containing millions of drug candidates with millions of sparse features and tens of millions of observed activity values.

Our contribution includes an open source package (https://github.com/jaak-s/BayesianDataFusion.jl) implementing all of the above-mentioned features, together with multi-core and multi-node parallelization, in the Julia language.

2 Macau

In this section we outline the probabilistic model for Macau; an overview of related research follows in Section 3. We then present the details of the Gibbs sampler (Section 2.4), including the crucial noise-injection-based scheme for sampling the weight variables linking the entity and relation features (Section 2.5).

2.1 Multiple relations and tensor relations

In practice, data sets often contain multiple relations between entities (e.g., drugs and proteins; see Fig. 3). To handle this in Macau, we consider a relational model with a set of entities $\mathcal{E}$ and a set of relations $\mathcal{R}$ such that each relation $R \in \mathcal{R}$ can link together two or more entities, i.e., $R$ is a tensor. Each relation maps the instances of its entities to a real number, denoted by $R_{\mathbf{i}}$, where $\mathbf{i} = (i_1, \ldots, i_{d_R})$ is the index vector and $d_R$ is the degree of the relation (i.e., the number of entities connected by $R$). Formally, $R$ is a map $R : \mathbb{N}^{d_R} \to \mathbb{R}$. As in the case of a partially observed matrix, the values of $R$ are partially observed. We denote the latent vector of instance $i$ of entity $e$ by $u^e_i \in \mathbb{R}^D$.

Each relation has a Gaussian noise model with precision $\alpha_R$:

$$R_{\mathbf{i}} \sim \mathcal{N}\Big( \mathbf{1}^\top \textstyle\bigodot_{e \in E_R} u^e_{i_e},\; \alpha_R^{-1} \Big), \quad \mathbf{i} \in I_R, \qquad (5)$$

where $I_R$ is the set of index vectors for which $R_{\mathbf{i}}$ is observed, $E_R$ is the ordered list of entities connected by relation $R$ (the order is the same as in the index vectors $\mathbf{i}$), $\mathbf{1}$ is the vector of ones, and $\bigodot$ denotes the element-wise product of the latent vectors. The conditional probability of the observations of all relations is then

$$p\big(\mathcal{D} \mid \{u^e_i\}, \{\alpha_R\}\big) = \prod_{R \in \mathcal{R}} \prod_{\mathbf{i} \in I_R} \mathcal{N}\Big( R_{\mathbf{i}} \,\Big|\, \mathbf{1}^\top \textstyle\bigodot_{e \in E_R} u^e_{i_e},\; \alpha_R^{-1} \Big). \qquad (6)$$

Equation (6) allows Macau to simultaneously factorize more than two entities and multiple relations with possibly different degrees.
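For concreteness, the mean of (5) generalizes the inner product $u_i^\top v_j$ to an element-wise product over all participating latent vectors; a minimal Julia sketch (illustrative names, not the package API):

```julia
# Predicted value of relation entry R_i in (5)-(6): take the element-wise
# (Hadamard) product of the latent vectors of the linked instances and sum.
# latents[k] is the D×N_k latent matrix of the k-th entity in E_R and
# idx[k] the instance index i_k.
function predict_entry(latents::Vector{Matrix{Float64}}, idx::Vector{Int})
    D = size(latents[1], 1)
    p = ones(D)                       # the 1-vector in 1' ⊙_e u^e_{i_e}
    for (U, i) in zip(latents, idx)
        p .*= U[:, i]                 # accumulate element-wise product
    end
    return sum(p)                     # for degree 2 this reduces to dot(u, v)
end
```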

2.2 Entity Features

Entity features are extra information available about the instances of entities, often referred to as side information. For example, in the case of movie ratings, they could be the genre and release year for movies, or the age and gender for users. In the example of drug–protein activity modeling, it is possible to use substructure information of the drug candidate, represented by a large sparse binary vector. The idea exploited in Macau is that we can use this extra information to predict the latent vector of an instance and, thus, obtain a more accurate factorization, especially for entities that have few or no observations.

First, let us write the standard Gaussian prior (2), used in BPMF, for the latent variable of instance $i$ of entity $e$:

$$u^e_i \sim \mathcal{N}\big(\mu_e, \Lambda_e^{-1}\big), \qquad (7)$$

where $\mu_e$ and $\Lambda_e$ are the common prior mean and precision matrix for entity $e$, respectively. To incorporate the instance's feature vector $x^e_i \in \mathbb{R}^{F_e}$ we add a term to the Gaussian mean:

$$u^e_i \sim \mathcal{N}\big(\mu_e + \beta_e^\top x^e_i,\; \Lambda_e^{-1}\big), \qquad (8)$$

where $\beta_e \in \mathbb{R}^{F_e \times D}$ is the weight matrix for the entity features and $F_e$ is the dimensionality of the features. Equation (8) can be interpreted as a linear model for the latent vectors. If an instance does not have any observations, then the distribution of its latent variable is fully determined by (8), because there are no terms involving its latent variable in (6). On the other hand, if the instance has many observations, its features will have only a minor impact.

To have a full Bayesian treatment of $\beta_e$, we introduce a zero-mean multivariate normal as its prior:

$$\mathrm{vec}(\beta_e) \sim \mathcal{N}\big(0, \tilde{\Lambda}_e^{-1}\big), \qquad (9)$$
$$\tilde{\Lambda}_e = \Lambda_e \otimes \lambda_{\beta_e} I_{F_e}, \qquad (10)$$

where $\mathrm{vec}(\beta_e)$ is the vectorization of $\beta_e$, $\otimes$ denotes the Kronecker product, and $\lambda_{\beta_e}$ is the diagonal element of the precision matrix. The inclusion of $\Lambda_e$ (the precision matrix of the latent vectors) in (9) is crucial for deriving a computationally efficient noise injection sampler, described in detail in Section 2.5.

As the choice of $\lambda_{\beta_e}$ is problem dependent, we set a gamma distribution as its hyperprior, as used in a similar context for neural networks [3]:

$$\lambda_{\beta_e} \sim \mathcal{G}(a_0, b_0), \qquad (11)$$

where $a_0$ and $b_0$ are fixed hyperparameters, which are both set to 1 in the experiments.
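The Kronecker structure of (10) also makes the prior easy to sample without ever forming the $F_e D \times F_e D$ precision matrix. The following Julia sketch draws $\beta_e$ from (9)–(10); the Cholesky-based matrix-normal draw is our assumption about one reasonable implementation, not the package's code:

```julia
using LinearAlgebra, Random

# Draw β_e from prior (9)-(10): vec(β_e) ~ N(0, (Λ_e ⊗ λ I_F)⁻¹).
# Because the precision is a Kronecker product, a matrix-normal draw
# suffices: β = Z U⁻ᵀ / √λ where Λ_e = U'U and Z is i.i.d. standard normal.
function sample_beta_prior(Λ::Matrix{Float64}, λ::Float64, F::Int)
    D = size(Λ, 1)
    U = cholesky(Symmetric(Λ)).U      # Λ_e = U'U
    Z = randn(F, D)
    return (Z / U') ./ sqrt(λ)        # F×D sample with covariance Λ⁻¹/λ per row
end
```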

2.3 Relation Features

Often there is extra information regarding the observations themselves (e.g., the day, counted from the release, when the user went to see the movie, or the temperature of the chemical experiment). However, this data is not linked to a single entity instance but instead to the observation (e.g., a particular user–movie pair). If these features are fully observed for a particular relation $R$, then Macau can incorporate them directly into the observation model. Let $x^R_{\mathbf{i}} \in \mathbb{R}^{F_R}$ be the relation feature for observation $\mathbf{i}$; then the previous observation model (5) for relation $R$ is replaced by

$$R_{\mathbf{i}} \sim \mathcal{N}\Big( \mathbf{1}^\top \textstyle\bigodot_{e \in E_R} u^e_{i_e} + \beta_R^\top x^R_{\mathbf{i}},\; \alpha_R^{-1} \Big), \qquad (12)$$

where $\beta_R \in \mathbb{R}^{F_R}$ is the weight vector. The treatment of $\beta_R$ is similar to that of $\beta_e$, i.e., Macau uses a zero-mean Gaussian prior on $\beta_R$ whose precision $\lambda_{\beta_R}$ has a gamma hyperprior:

$$\beta_R \sim \mathcal{N}\big(0, (\lambda_{\beta_R} I_{F_R})^{-1}\big), \qquad (13)$$
$$\lambda_{\beta_R} \sim \mathcal{G}(a_0, b_0), \qquad (14)$$

where, as before, $a_0$ and $b_0$ are fixed hyperparameters, set to 1 in the experiments.
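A hedged one-line extension of the illustrative `predict_entry` helper from the Section 2.1 sketch shows where the relation-feature term of (12) enters:

```julia
using LinearAlgebra

# Observation mean under (12): the multilinear term of (5) plus the
# relation-feature contribution β_R' x^R_i (illustrative helper, reusing
# predict_entry from the earlier sketch).
predict_with_features(latents, idx, βR::Vector{Float64}, x::Vector{Float64}) =
    predict_entry(latents, idx) + dot(βR, x)
```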

2.4 Gibbs Sampler

Gibbs sampling is used to sample from the posterior of the model variables. In this section we present the conditional distributions of the Gibbs sampler for all variables, except $\beta_e$ and $\beta_R$, for which we propose a specially designed sampler in Section 2.5.

2.4.1 Latent vectors

Based on (8) and (12) the conditional probability for $u^e_i$ is

$$u^e_i \mid \mathrm{rest} \sim \mathcal{N}\big(\mu_*, \Lambda_*^{-1}\big), \qquad (15)$$

where

$$\Lambda_* = \Lambda_e + \sum_{R \in \mathcal{R}_e} \alpha_R \sum_{\mathbf{i} \in I^R_{e,i}} q^R_{\mathbf{i}} \big(q^R_{\mathbf{i}}\big)^\top, \qquad \mu_* = \Lambda_*^{-1} \Big( \Lambda_e \big(\mu_e + \beta_e^\top x^e_i\big) + \sum_{R \in \mathcal{R}_e} \alpha_R \sum_{\mathbf{i} \in I^R_{e,i}} \big(R_{\mathbf{i}} - \beta_R^\top x^R_{\mathbf{i}}\big)\, q^R_{\mathbf{i}} \Big), \qquad (16)$$

where $q^R_{\mathbf{i}} = \big(\bigodot_{e' \in E_R} u^{e'}_{i_{e'}}\big) \oslash u^e_i$ is the element-wise product of the latent vectors of the other instances participating in the observation, $\oslash$ denotes element-wise division (the formula assumes that $e$ is present only once in $E_R$; to handle cases where, e.g., $R$ links $e$ to itself and there are observations on the diagonal, the equations should be modified), $\mathcal{R}_e$ is the set of relations that entity $e$ is linked to, and $I^R_{e,i}$ is the set of index vectors of observations to which instance $i$ of entity $e$ is linked, i.e., all the observed data in relation $R$ for instance $i$. The above equations use the shorthand that if relation $R$ does not have features then $\beta_R^\top x^R_{\mathbf{i}}$ is $0$, and similarly if entity $e$ does not have features then $\beta_e^\top x^e_i$ is $0$.
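For the matrix case, the update (15)–(16) differs from the BPMF draw only in the feature-adjusted prior mean and the residualized observations; a hedged Julia sketch with illustrative names:

```julia
using LinearAlgebra, Random

# One Gibbs draw of u_i^e | rest in Macau (matrix case of (15)-(16)):
# identical to the BPMF update except that (a) the prior mean is shifted
# by the entity-feature term β_e' x_i^e and (b) each observation is
# residualized by its relation-feature term β_R' x_i^R.
function sample_latent_macau(μe, Λe, βe, xi, Vs, rs, xsR, βR, α)
    m  = μe .+ βe' * xi                      # feature-adjusted prior mean (8)
    Λs = copy(Λe)
    b  = Λe * m
    for (v, r, xR) in zip(Vs, rs, xsR)
        Λs .+= α .* (v * v')
        b  .+= α .* (r - dot(βR, xR)) .* v   # residualize observation (12)
    end
    C = cholesky(Symmetric(Λs))
    return (C \ b) + C.U \ randn(length(μe))
end
```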

2.4.2 Gaussian priors

Macau uses the same Normal-Wishart hyperprior for $\mu_e$ and $\Lambda_e$ as BPMF [6]:

$$(\mu_e, \Lambda_e) \sim \mathcal{NW}(\mu_0, \beta_0, W_0, \nu_0), \qquad (17)$$

where the hyperparameters are set to the uninformative values $\mu_0 = 0$, $\beta_0 = 2$, $W_0 = I$ (the identity matrix), and $\nu_0 = D$. Combining the hyperprior (17) with (8) we get the conditional probability

$$(\mu_e, \Lambda_e) \mid \{u^e_i\}, \beta_e \sim \mathcal{NW}(\mu^*, \beta^*, W^*, \nu^*). \qquad (18)$$

Because of space constraints the formulas for $\mu^*$, $\beta^*$, $W^*$, $\nu^*$ are presented in Appendix A.2. One of the essential differences compared to BPMF is that in BPMF the Gaussian priors model the latent vectors, whereas in Macau they model the residuals $u^e_i - \beta_e^\top x^e_i$.

2.4.3 Precision parameter for the weight vector

From (9) and (11) we can derive the conditional probability for $\lambda_{\beta_e}$ as

$$\lambda_{\beta_e} \mid \beta_e, \Lambda_e \sim \mathcal{G}(a^*, b^*), \qquad (19)$$

where

$$a^* = a_0 + \frac{F_e D}{2}, \qquad b^* = b_0 + \frac{1}{2}\, \mathrm{tr}\big(\beta_e^\top \beta_e \Lambda_e\big).$$

The conditional probability for $\lambda_{\beta_R}$ is analogous and is described in Appendix A.3.
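Under the shape/rate form given above, this conditional is a one-line draw. A hedged Julia sketch using Distributions.jl (which parameterizes Gamma by shape and scale, hence the reciprocal rate):

```julia
using LinearAlgebra, Distributions

# Gibbs draw of λ_{β_e} from (19): Gamma with shape a0 + F·D/2 and
# rate b0 + tr(β_e' β_e Λ_e)/2. Distributions.Gamma takes (shape, scale).
function sample_lambda_beta(β::Matrix{Float64}, Λ::Matrix{Float64},
                            a0::Float64, b0::Float64)
    F, D = size(β)
    a = a0 + F * D / 2
    b = b0 + tr(β' * β * Λ) / 2
    return rand(Gamma(a, 1 / b))
end
```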

2.5 Noise Injection Sampler

From (8) and (9) we can write out the conditional probability for $\beta_e$:

$$p\big(\beta_e \mid \{u^e_i\}, \mu_e, \Lambda_e, \lambda_{\beta_e}\big) \propto \prod_{i=1}^{N_e} \mathcal{N}\big(u^e_i \mid \mu_e + \beta_e^\top x^e_i, \Lambda_e^{-1}\big)\; \mathcal{N}\big(\mathrm{vec}(\beta_e) \mid 0, \tilde{\Lambda}_e^{-1}\big). \qquad (20)$$

Let us denote by $\tilde{U}$ the $N_e \times D$ matrix of residuals $u^e_i - \mu_e$ and by $X$ the $N_e \times F_e$ matrix of features $x^e_i$; then, because both the likelihood and the prior contain $\Lambda_e$, we can factorize it out:

$$p\big(\beta_e \mid \cdot\big) \propto \exp\Big( -\tfrac{1}{2}\, \mathrm{tr}\Big[ \big( (\tilde{U} - X\beta_e)^\top (\tilde{U} - X\beta_e) + \lambda_{\beta_e} \beta_e^\top \beta_e \big) \Lambda_e \Big] \Big), \qquad (21)$$

and the Gaussian mean and precision can be derived (for details see Appendix A.4):

$$\mathrm{vec}(\beta_e) \mid \cdot \sim \mathcal{N}\big( \mathrm{vec}(\hat{\beta}_e),\; (\Lambda_e \otimes K)^{-1} \big), \quad K = X^\top X + \lambda_{\beta_e} I, \quad \hat{\beta}_e = K^{-1} X^\top \tilde{U}, \qquad (22)$$

where $\mathrm{vec}(\hat{\beta}_e)$ is the mean and $\Lambda_e \otimes K$ is the precision of the posterior. However, even for moderate feature dimensions the standard sampling of this multivariate Gaussian is computationally intractable, because the size of the precision matrix is $F_e D \times F_e D$.

By exploiting the Kronecker product structure of the precision matrix and the presence of $\Lambda_e$ in both the mean and the precision, we derive an alternative approach. A sample of $\beta_e$ from (22) can be obtained by solving a linear system for $B$:

$$\big(X^\top X + \lambda_{\beta_e} I\big)\, B = X^\top \big(\tilde{U} + E_1\big) + \sqrt{\lambda_{\beta_e}}\, E_2, \qquad (23)$$

where each row of the matrices $E_1 \in \mathbb{R}^{N_e \times D}$ and $E_2 \in \mathbb{R}^{F_e \times D}$ is sampled from $\mathcal{N}(0, \Lambda_e^{-1})$. The correctness of (23) is proven in Appendix A.5. The derivation of the noise injection sampler for the weight vector $\beta_R$ of relation features is analogous, with the difference that the linear system has only a single right-hand side.

Thus, to sample $\beta_e$ we need to solve a linear system of size $F_e \times F_e$ with $D$ different right-hand sides. If $F_e$ is of medium size (up to 20,000) we propose to use direct solvers (on a system with 2 Intel Xeon E5-2699 v3 CPUs using 8 cores, Julia takes 25 seconds to solve a 20,000 × 20,000 system with 60 right-hand sides, i.e., 60 latent dimensions). If $X$ is sparse we can tackle high-dimensional systems by solving each right-hand side separately with the iterative conjugate gradient (CG) method. CG only requires multiplication of $X$ (and $X^\top$) with a vector and can handle cases where $F_e$ is in the order of millions.
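A hedged Julia sketch of the noise injection step (23) for dense features, using a direct solve (for sparse, very high-dimensional $X$ one would instead run CG per right-hand side); names are illustrative, not the BayesianDataFusion.jl API:

```julia
using LinearAlgebra, Random

# Noise injection sampler for β_e, following (23): solve
#   (X'X + λ I) B = X'(Ũ + E1) + √λ E2,
# where each row of E1 (N×D) and E2 (F×D) is drawn from N(0, Λ⁻¹).
# X is the N×F feature matrix and Ũ the N×D matrix of residuals u_i - μ_e.
function sample_beta(X::Matrix{Float64}, Ũ::Matrix{Float64},
                     Λ::Matrix{Float64}, λ::Float64)
    N, F = size(X)
    D = size(Λ, 1)
    U = cholesky(Symmetric(Λ)).U          # Λ = U'U, so rows of Z/U' ~ N(0, Λ⁻¹)
    E1 = randn(N, D) / U'
    E2 = randn(F, D) / U'
    K  = Symmetric(X' * X + λ * I)
    return K \ (X' * (Ũ + E1) + sqrt(λ) * E2)   # F×D sample of β_e
end
```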

3 Related Research

There are several extensions of BPMF that are related to our work. Bayesian Probabilistic Tensor Factorization [9] extends BPMF to the factorization of a single 3-way relation (tensor), without entity or relation features. Like BPMF, their approach uses a Gibbs sampler.

Singh and Gordon [7] propose a Bayesian MF method that can link together more than one 2-way relation (matrix). Their sampling approach is analogous to BPMF, except that each latent vector is sampled separately using Hamiltonian Monte Carlo within Gibbs. Their method does not support tensors, entity features, or relation features. The lack of support for tensors also means their method cannot support multiple relations between two entities, which is useful, for example, in drug–protein activity modeling (see Section 4.2 for the experiment on ChEMBL data with two relations between potential drugs and proteins).

Hierarchical Bayesian Matrix Factorization with Side Information (HBMFSI) [5] is a method for the special case of factorizing a single matrix with entity features, based on Variational Bayes. HBMFSI does not allow the model to use relation features as in Macau. However, they propose to add the concatenation of the row entity features and the column entity features in the same way as relation features in Macau, i.e., $x^R_{(i,j)} = [x^{\mathrm{row}}_i, x^{\mathrm{col}}_j]$.

4 Experiments

This section gives results for 1) the standard MF benchmark MovieLens and 2) a challenging biochemical problem based on the ChEMBL data set [2], and reports runtimes of Macau on a large-scale industrial drug–protein activity data set. The performance reported is the mean RMSE, and the error bars in the figures represent standard deviations. All experiments are repeated 10 times.

4.1 MovieLens Benchmark

The MovieLens data set consists of a single matrix of movie–user ratings from 6,040 users and 3,952 movies. There are a total of 1,000,209 ratings taking values from 1.0 to 5.0. Recent research [1] has investigated in detail the noise level in movie ratings; Amatriain et al. made a conservative estimate of the between-trial RMSE, and based on that estimate we chose the noise precision $\alpha$ used in our experiments. The data set contains 29- and 18-dimensional entity features for users and movies, respectively. We compare Macau against HBMFSI (in the experiments we used the MATLAB implementation of HBMFSI provided by the authors), which is the state-of-the-art MF approach with entity features, using the same evaluation setup as in their paper [5]. Namely, one half of the ratings is randomly set aside as the test set and the other half as the training set. The methods compared are 1) Macau-E: Macau with entity features, 2) Macau-ER: Macau with entity features and relation features constructed as in HBMFSI (see Section 3), 3) HBMFSI, and 4) BPMF. All methods use the same latent dimension $D$; the relative performance between the methods was similar for other values of $D$ (data not shown). The results show that both Macau setups outperform HBMFSI and BPMF; see Figure 1.

Figure 1: Results for the MovieLens experiments. The p-value of the two-sided t-test between Macau-ER and HBMFSI is lower than 0.0001.

Figure 2: Results for the ChEMBL experiments. BPMF and Macau use $D = 30$.

4.2 ChEMBL drug–protein activity prediction

The prediction of drug–protein interactions is crucial for the development of new drugs. In this case study we focus on the interaction measure IC50, the concentration of the drug necessary to inhibit the activity of the protein by 50%. We prepared a data set from the public bioactivity database ChEMBL [2], Version 19. First, we selected proteins that had at least 200 IC50 measurements, and then we kept drugs with 3 or more IC50 measurements. Finally, we filtered out some measurements with clear data errors (these were also reported to ChEMBL). The final numbers of small molecules and proteins are 15,073 and 346, respectively, with a total of 59,280 IC50 measurements. In all of the ChEMBL experiments we model the logarithm of IC50 and set the noise precision $\alpha$ to a value corresponding to a reasonable standard deviation of the activities (it is possible to enhance the performance further by tuning or sampling $\alpha$). For drugs we use sparse features (substructure fingerprints); for proteins we use dense features (based on the protein sequence).

In the first experiment, we compare Macau with entity features for drugs and proteins to BPMF, as well as to individual ridge regressions based on drug features, one for each protein. Macau and BPMF use 30 latent dimensions, as we observed this to be sufficient for good performance; see Appendix A.6. To tune the regularization parameter of the ridge regression for each protein, we used 5-fold inner cross-validation. A test set containing 20% of the observations is chosen at random. The strong performance of Macau over BPMF, seen in Figure 2, is expected because Macau gives the most advantage when the relation is sparsely observed.

Figure 3: The IC50+Ki model has two types of relations between the entity Drug and the entity Protein. Figure 4: Comparison of the IC50 and IC50+Ki Macau models.

In the second experiment we want to improve the IC50 predictions by introducing a new relation, Ki, between drugs and proteins. Ki measures the binding affinity of the drug for a protein. While related to IC50, it measures a different biochemical aspect of the interaction. We thus expect it to contribute additional information for our task. In Macau, multiple relations between two entities are represented as a tensor by creating a third entity denoting the type of interaction; see Figure 3. The dimensions of the tensor are 15,073 × 346 × 2, and the Ki part contains 5,121 observations. As before, the test set contains measurements only from the IC50 part. As can be seen from Figure 4, the tensor model IC50+Ki significantly outperforms the single-relation model with only IC50.

Figure 5: The IC50+Pheno model has three entities and two relations. Figure 6: Comparison of IC50 and IC50+Pheno models.

In the final experiment, we explore the effect of connecting drugs to two relations. For the drugs in our IC50 data set, we compiled all cancer assays from ChEMBL that had at least 20 compounds in that set. From these, we created a new relation Pheno with observations measuring the phenotypic effect of the drugs in the assays, as depicted in Figure 5. Because the effects of the assays are not directly linked to any specific protein, we expected a weaker effect than from the Ki data. Therefore, for the comparison of the IC50 and IC50+Pheno models, the test set of IC50 measurements is selected only from the 1,678 compounds that have Pheno observations. In Figure 6 we can observe that the IC50+Pheno model outperforms the IC50 model when an appropriately large latent dimension is used. It is interesting to note that with the smallest latent dimension the IC50+Pheno model slightly loses in performance, which can be evidence that, with too small a $D$, adding more relations can result in an overcrowded latent space.

4.3 Runtime on Large-scale Industrial Data Set

For large-scale problems, our implementation has multi-core and multi-node parallelization. The sampling of the latent vectors can be parallelized straightforwardly, as the latent vectors of a single entity are, in our use cases, conditionally independent of each other and can be sampled in parallel. The only difference here compared to BPMF is that, for entities that have features, Macau additionally requires computing the feature term $X^e \beta_e$, for which our implementation provides parallelization as well.

In the parallelization of (23), if $F_e$ is of medium size, the direct solver is fast enough not to require additional parallelization. As mentioned above, in the case of sparse features Macau uses CG to solve (23) for each right-hand side separately. For each CG, our implementation parallelizes the matrix product operations in a multi-core way, and the CGs can be distributed across multiple processes and thus parallelized over multiple nodes.
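As a rough illustration of the multi-core strategy (not the package's actual implementation), the per-instance draws can be spread over threads; `sample_latent_macau_for_instance` is a hypothetical helper standing in for the conditional draw of (15)–(16):

```julia
using Base.Threads

# Multi-core sketch: conditioned on the priors and on the other entities'
# latent matrices, the per-instance draws of Section 2.4.1 are independent,
# so they can be distributed over threads.
# sample_latent_macau_for_instance is a hypothetical per-instance sampler.
function sample_all_latents!(Unew::Matrix{Float64}, rest...)
    @threads for i in 1:size(Unew, 2)
        Unew[:, i] = sample_latent_macau_for_instance(i, rest...)
    end
    return Unew
end
```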

The large-scale data set is a subset of a proprietary data set from Janssen Pharmaceutica containing millions of compounds. The subset has more than 1.8M compounds and more than 1,000 proteins, for a total of several tens of millions of compound–protein measurements. Here we report the computation times of Macau on two types of features for the compounds, using systems with 2 Intel Xeon E5-2699 v3 CPUs. Firstly, for the dense feature set, the computation of the full Gibbs step using 8 cores of a single node takes about 40 seconds (using a direct solver for (23)). Secondly, for the sparse feature set with dimensionality in the millions, the computation of the full Gibbs step using 10 cores per CG and a total of 15 nodes (2 CGs per node) takes about 600 seconds. We observed that 1,000 Gibbs iterations (of which 800 were discarded as burn-in) were sufficient to reach a stable posterior.

5 Conclusion

To the best of our knowledge, this paper proposes the first Bayesian factorization method that can handle tensors, multiple relations, and both entity and relation features.

Acknowledgments

Jaak Simm, Adam Arany, Pooya Zakeri and Yves Moreau are funded by the Research Council KU Leuven (CoE PFV/10/016 SymBioSys) and by the Flemish Government (IOF, Hercules Stichting, iMinds Medical Information Technologies SBO 2015, IWT: O&O ExaScience Life Pharma; ChemBioBridge, PhD grants).

References

  • [1] X. Amatriain, J. M. Pujol, and N. Oliver. I like it… I like it not: Evaluating user ratings noise in recommender systems. In User Modeling, Adaptation, and Personalization, pages 247–258. Springer, 2009.
  • [2] A. P. Bento, A. Gaulton, A. Hersey, L. J. Bellis, J. Chambers, M. Davies, F. A. Krüger, Y. Light, L. Mak, S. McGlinchey, et al. The ChEMBL bioactivity database: an update. Nucleic Acids Research, 42(D1):D1083–D1090, 2014.
  • [3] D. Husmeier, W. D. Penny, and S. J. Roberts. An empirical evaluation of Bayesian sampling with hybrid Monte Carlo for training neural network classifiers. Neural Networks, 12(4):677–705, 1999.
  • [4] A. Mnih and R. Salakhutdinov. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, pages 1257–1264, 2007.
  • [5] S. Park, Y.-D. Kim, and S. Choi. Hierarchical Bayesian matrix factorization with side information. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pages 1593–1599. AAAI Press, 2013.
  • [6] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, pages 880–887. ACM, 2008.
  • [7] A. Singh and G. Gordon. A Bayesian matrix factorization model for relational data. In Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), pages 556–563, Corvallis, Oregon, 2010. AUAI Press.
  • [8] N. Srebro, T. Jaakkola, et al. Weighted low-rank approximations. In ICML, volume 3, pages 720–727, 2003.
  • [9] L. Xiong, X. Chen, T.-K. Huang, J. G. Schneider, and J. G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, volume 10, pages 211–222. SIAM, 2010.

Appendix A Appendices

A.1 Data Models Supported by Macau

Let $M$ be a Macau model (i.e., a hypergraph) specifying the entities and their relations. We say $M$ is factorizable if, given a large enough latent dimension $D$, we can choose latent vectors for every entity in such a way that they can fit any relation data with arbitrarily small error.

However, it is clear that some models are not factorizable. For example, consider two entities $e_1$ and $e_2$, with latent vectors $u_i$ and $v_j$, respectively, and two relations, $R_1$ and $R_2$, between them. Since both relations are modelled by the same formula $u_i^\top v_j$, it is not possible to fit arbitrary data; we can only fit the case where the two relations are equal (i.e., $R_1 = R_2$).

Let us define a relation $R$ in model $M$ to be factorizable if, in the single latent dimensional case ($D = 1$), it is possible to assign arbitrary values to the latent variables of its entities, $E_R$, while keeping the predictions for all other relations in the model equal to $0$. It is straightforward to see that if $R$ is factorizable we can fit any observed data of the relation by adding new latent dimensions without affecting the predictions for the other relations. Additionally, the fact that all relations of $M$ are factorizable implies that $M$ is factorizable, because we can always add new latent dimensions that only affect a specific relation and, thus, fit all relations as accurately as needed.

It is easy to show that if all pairs of entities in the hypergraph (Macau model) have at most one hyperedge (relation) between them, the model is factorizable. To see that this is true, consider a relation $R$ in such a model. $R$ is factorizable because if we set the latent variables of the non-participating entities, $e \notin E_R$, equal to $0$, then the predictions of the other relations will be zero, as they each contain at least one non-participating entity.

From this we can see that Macau can factorize any

  • ordinary undirected graph,

  • acyclic hypergraph.

Additionally, it is also possible to tensorize simple cases where there are multiple edges between two entities, as shown in the sketch below. For example, the model IC50+Ki in our paper has two relations, IC50 and Ki, between the entities Drug and Protein. To handle this in Macau, we represent it as a tensor with three modes, Drug × Protein × Type, where the third mode specifies the relation (either IC50 or Ki).
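A minimal sketch of this tensorization for observation lists (illustrative data layout, not the package API):

```julia
# Represent two relations between Drug and Protein as one 3-mode tensor
# (Drug, Protein, Type), as in the IC50+Ki model: append a type index
# (1 = IC50, 2 = Ki) to each observed (drug, protein, value) triple.
function tensorize(ic50::Vector{Tuple{Int,Int,Float64}},
                   ki::Vector{Tuple{Int,Int,Float64}})
    obs = Tuple{Int,Int,Int,Float64}[]
    for (d, p, y) in ic50
        push!(obs, (d, p, 1, y))
    end
    for (d, p, y) in ki
        push!(obs, (d, p, 2, y))
    end
    return obs          # indices into an N_drug × N_prot × 2 tensor
end
```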

A.2 Sampling of the Gaussian Priors of the Latent Variables

In Macau, $\mu_e$ and $\Lambda_e$ are interpreted as the model for the residuals $s^e_i = u^e_i - \beta_e^\top x^e_i$. The conditional joint probability $p(\mu_e, \Lambda_e \mid \{u^e_i\}, \beta_e)$, used in the sampler, is

$$(\mu_e, \Lambda_e) \mid \{u^e_i\}, \beta_e \sim \mathcal{NW}(\mu^*, \beta^*, W^*, \nu^*), \qquad (24)$$

where

$$\bar{s} = \frac{1}{N_e} \sum_{i=1}^{N_e} s^e_i, \qquad (25)$$
$$S = \frac{1}{N_e} \sum_{i=1}^{N_e} \big(s^e_i - \bar{s}\big)\big(s^e_i - \bar{s}\big)^\top, \qquad (26)$$
$$(W^*)^{-1} = W_0^{-1} + N_e S + \frac{\beta_0 N_e}{\beta_0 + N_e} \big(\bar{s} - \mu_0\big)\big(\bar{s} - \mu_0\big)^\top + \lambda_{\beta_e}\, \beta_e^\top \beta_e, \qquad (27)$$
$$\nu^* = \nu_0 + N_e + F_e, \qquad (28)$$
$$\mu^* = \frac{\beta_0 \mu_0 + N_e \bar{s}}{\beta_0 + N_e}, \qquad (29)$$
$$\beta^* = \beta_0 + N_e, \qquad (30)$$

where $N_e$ is the number of instances of entity $e$. Note that the term $F_e$ in (28) and the term $\lambda_{\beta_e} \beta_e^\top \beta_e$ in (27) arise from the dependence of the prior of $\beta_e$ on $\Lambda_e$; see (9).

A.3 Gibbs Sampling of the Precision Parameter of the Weight Vector of Relation Features

Recall that $\lambda_{\beta_R}$ is the diagonal value of the precision in the prior for the weight vector $\beta_R$, and that the hyperprior of $\lambda_{\beta_R}$ is a gamma distribution with fixed parameters $a_0$ and $b_0$:

$$\beta_R \sim \mathcal{N}\big(0, (\lambda_{\beta_R} I)^{-1}\big), \qquad (31)$$
$$\lambda_{\beta_R} \sim \mathcal{G}(a_0, b_0), \qquad (32)$$

where

$$\mathcal{G}(\lambda \mid a, b) = \frac{b^a}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda}. \qquad (33)$$

The conditional probability is then

$$p\big(\lambda_{\beta_R} \mid \beta_R\big) \propto p\big(\beta_R \mid \lambda_{\beta_R}\big)\, p\big(\lambda_{\beta_R}\big) \qquad (34)$$
$$= \mathcal{N}\big(\beta_R \mid 0, (\lambda_{\beta_R} I)^{-1}\big)\; \mathcal{G}\big(\lambda_{\beta_R} \mid a_0, b_0\big) \qquad (35)$$
$$\propto \lambda_{\beta_R}^{F_R/2} \exp\big(-\tfrac{1}{2} \lambda_{\beta_R}\, \beta_R^\top \beta_R\big)\; \lambda_{\beta_R}^{a_0 - 1} \exp\big(-b_0 \lambda_{\beta_R}\big) \qquad (36)$$
$$= \lambda_{\beta_R}^{a_0 + F_R/2 - 1} \exp\Big(-\big(b_0 + \tfrac{1}{2}\beta_R^\top \beta_R\big)\, \lambda_{\beta_R}\Big) \qquad (37)$$
$$\propto \mathcal{G}\big(\lambda_{\beta_R} \mid a^*, b^*\big), \qquad (38)$$

where

$$a^* = a_0 + \frac{F_R}{2}, \qquad (39)$$
$$b^* = b_0 + \frac{1}{2}\, \beta_R^\top \beta_R, \qquad (40)$$

where $F_R$ is the dimensionality of the relation features of $R$.

A.4 Derivation of the Gaussian Mean and Precision for the Weight Matrix of the Entity Features

The conditional probability for $\beta_e$ is

$$p\big(\beta_e \mid \{u^e_i\}, \mu_e, \Lambda_e, \lambda_{\beta_e}\big) \propto p\big(\{u^e_i\} \mid \beta_e, \mu_e, \Lambda_e\big)\; p\big(\beta_e \mid \Lambda_e, \lambda_{\beta_e}\big) \qquad (41)$$
$$\propto \exp\Big( -\tfrac{1}{2} \Big[ \sum_{i=1}^{N_e} \big(\tilde{u}_i - \beta_e^\top x^e_i\big)^\top \Lambda_e \big(\tilde{u}_i - \beta_e^\top x^e_i\big) + \lambda_{\beta_e}\, \mathrm{tr}\big(\beta_e^\top \beta_e \Lambda_e\big) \Big] \Big). \qquad (42)$$

Next we use the link $\tilde{u}_i = u^e_i - \mu_e$ (from the definition of $\tilde{U}$) and expand the inner part of (42):

$$\sum_{i=1}^{N_e} \big(\tilde{u}_i - \beta_e^\top x^e_i\big)^\top \Lambda_e \big(\tilde{u}_i - \beta_e^\top x^e_i\big) + \lambda_{\beta_e}\, \mathrm{tr}\big(\beta_e^\top \beta_e \Lambda_e\big) \qquad (43)$$
$$= \mathrm{tr}\big[ (\tilde{U} - X\beta_e)\, \Lambda_e\, (\tilde{U} - X\beta_e)^\top \big] + \lambda_{\beta_e}\, \mathrm{tr}\big(\beta_e^\top \beta_e \Lambda_e\big) \qquad (44)$$
$$= \mathrm{tr}\big[ \tilde{U} \Lambda_e \tilde{U}^\top \big] - 2\, \mathrm{tr}\big[ \beta_e^\top X^\top \tilde{U} \Lambda_e \big] + \mathrm{tr}\big[ \beta_e^\top X^\top X \beta_e \Lambda_e \big] + \lambda_{\beta_e}\, \mathrm{tr}\big[ \beta_e^\top \beta_e \Lambda_e \big] \qquad (45)$$
$$= \mathrm{tr}\big[ \beta_e^\top \big( X^\top X + \lambda_{\beta_e} I \big) \beta_e\, \Lambda_e \big] - 2\, \mathrm{tr}\big[ \beta_e^\top X^\top \tilde{U} \Lambda_e \big] + \mathrm{const} \qquad (46)$$
$$= \mathrm{vec}(\beta_e)^\top \big( \Lambda_e \otimes K \big)\, \mathrm{vec}(\beta_e) - 2\, \mathrm{vec}(\beta_e)^\top \big( \Lambda_e \otimes K \big)\, \mathrm{vec}\big(\hat{\beta}_e\big) + \mathrm{const}, \qquad (47)$$

where

$$K = X^\top X + \lambda_{\beta_e} I, \qquad \hat{\beta}_e = K^{-1} X^\top \tilde{U}. \qquad (48)$$

Next we plug the non-constant part back into (42):

$$p\big(\beta_e \mid \cdot\big) \propto \exp\Big( -\tfrac{1}{2} \Big[ \mathrm{vec}(\beta_e)^\top \big(\Lambda_e \otimes K\big) \mathrm{vec}(\beta_e) - 2\, \mathrm{vec}(\beta_e)^\top \big(\Lambda_e \otimes K\big) \mathrm{vec}(\hat{\beta}_e) \Big] \Big) \qquad (49)$$
$$\propto \exp\Big( -\tfrac{1}{2} \big( \mathrm{vec}(\beta_e) - \mathrm{vec}(\hat{\beta}_e) \big)^\top \big(\Lambda_e \otimes K\big) \big( \mathrm{vec}(\beta_e) - \mathrm{vec}(\hat{\beta}_e) \big) \Big) \qquad (50)$$
$$\propto \mathcal{N}\Big( \mathrm{vec}(\beta_e) \,\Big|\, \mathrm{vec}(\hat{\beta}_e),\; \big(\Lambda_e \otimes K\big)^{-1} \Big), \qquad (51)$$

where we can clearly see the precision $\Lambda_e \otimes K$ and the mean $\mathrm{vec}(\hat{\beta}_e)$ of $\mathrm{vec}(\beta_e)$.

A.5 Correctness of the Noise Injection Sampler

Let $X$, $\tilde{U}$, and $B$ be the matrices described in Section 2.5, and let $K = X^\top X + \lambda I$. Here we prove a more general version of the sampler where, instead of the precision matrix $\Lambda_e$, we allow any covariance matrix $\Sigma$ whose inverse $\Sigma^{-1}$ is positive definite.

Lemma A.1.

Let $E_1 \in \mathbb{R}^{N \times D}$ and $E_2 \in \mathbb{R}^{F \times D}$ be matrices where each row is independently generated from $\mathcal{N}(0, \Sigma)$, and let the variable $B$ be the solution to the linear system

$$\big(X^\top X + \lambda I\big)\, B = X^\top \big(\tilde{U} + E_1\big) + \sqrt{\lambda}\, E_2; \qquad (52)$$

then $\mathrm{vec}(B)$ is distributed by a multivariate Gaussian distribution with mean $\mathrm{vec}\big(K^{-1} X^\top \tilde{U}\big)$ and precision $\Sigma^{-1} \otimes K$.

Proof.

From

$$B = K^{-1} X^\top \tilde{U} + K^{-1} \big( X^\top E_1 + \sqrt{\lambda}\, E_2 \big) \qquad (53)$$

it is clear that $B$ is distributed by a Gaussian, as it is constructed by affine transformations and sums of Gaussian variables. As $E_1$ and $E_2$ have zero mean, we get

$$\mathbb{E}[B] = K^{-1} X^\top \tilde{U} + K^{-1} \big( X^\top \mathbb{E}[E_1] + \sqrt{\lambda}\, \mathbb{E}[E_2] \big) \qquad (54)$$
$$= K^{-1} X^\top \tilde{U}, \qquad (55)$$

proving the correctness of the mean.

For the precision we investigate the covariance between the $i$th and $j$th columns of $B$. In what follows we use the notation $B_{:i}$ to denote the $i$th column of matrix $B$. Let us also denote $Z = X^\top E_1 + \sqrt{\lambda}\, E_2$, giving us $B_{:i} = K^{-1} X^\top \tilde{U}_{:i} + K^{-1} Z_{:i}$; then

$$\mathrm{Cov}\big(B_{:i}, B_{:j}\big) = \mathbb{E}\big[ \big(B_{:i} - \mathbb{E} B_{:i}\big) \big(B_{:j} - \mathbb{E} B_{:j}\big)^\top \big] \qquad (56)$$
$$= K^{-1}\, \mathbb{E}\big[ Z_{:i} Z_{:j}^\top \big]\, K^{-1} \qquad (57)$$
$$= K^{-1}\, \mathbb{E}\Big[ \big( X^\top (E_1)_{:i} + \sqrt{\lambda}\, (E_2)_{:i} \big) \big( X^\top (E_1)_{:j} + \sqrt{\lambda}\, (E_2)_{:j} \big)^\top \Big]\, K^{-1} \qquad (58)$$
$$= K^{-1} \Big( X^\top \mathbb{E}\big[ (E_1)_{:i} (E_1)_{:j}^\top \big] X + \sqrt{\lambda}\, X^\top \mathbb{E}\big[ (E_1)_{:i} (E_2)_{:j}^\top \big] + \sqrt{\lambda}\, \mathbb{E}\big[ (E_2)_{:i} (E_1)_{:j}^\top \big] X + \lambda\, \mathbb{E}\big[ (E_2)_{:i} (E_2)_{:j}^\top \big] \Big) K^{-1}. \qquad (59)$$

The expectations in the first and last terms of equation (59) give

$$\mathbb{E}\big[ (E_1)_{:i} (E_1)_{:j}^\top \big] = \Sigma_{ij}\, I_N, \qquad (60)$$
$$\mathbb{E}\big[ (E_2)_{:i} (E_2)_{:j}^\top \big] = \Sigma_{ij}\, I_F, \qquad (61)$$

where $I_N$ is the $N$-dimensional identity matrix (the rows of $E_1$ are i.i.d. with covariance $\Sigma$, so entries from different rows are uncorrelated). The middle two terms are equal to zero because $E_1$ and $E_2$ are zero mean and independent of each other. Thus, we get

$$\mathrm{Cov}\big(B_{:i}, B_{:j}\big) = K^{-1} \big( \Sigma_{ij}\, X^\top X + \lambda\, \Sigma_{ij}\, I \big) K^{-1} \qquad (62)$$
$$= \Sigma_{ij}\, K^{-1} \big( X^\top X + \lambda I \big) K^{-1} \qquad (63)$$
$$= \Sigma_{ij}\, K^{-1} K K^{-1} \qquad (64)$$
$$= \Sigma_{ij}\, K^{-1}. \qquad (65)$$

This means the covariance matrix of $\mathrm{vec}(B)$ is $\Sigma \otimes K^{-1}$ and, consequently, its precision is $\Sigma^{-1} \otimes K$, which completes the proof. ∎
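The lemma can also be checked numerically. A hedged Julia sketch that compares the empirical mean and one column covariance of repeated draws against the values given by the lemma (illustrative, small dense problem; both returned norms should be close to zero for large S):

```julia
using LinearAlgebra, Random, Statistics

# Empirical check of Lemma A.1: across repeated draws, B should have
# mean K⁻¹X'Ũ and each column i should have covariance Σ_ii·K⁻¹ (eq. (65)).
function check_sampler(; N = 50, F = 4, D = 3, λ = 0.7, S = 20_000)
    X, Ũ = randn(N, F), randn(N, D)
    Σ = let A = randn(D, D); A * A' + I end       # arbitrary positive definite
    L = cholesky(Symmetric(Σ)).L                  # Σ = L L', rows of Z*L' ~ N(0, Σ)
    K = Symmetric(X' * X + λ * I)
    draws = [K \ (X' * (Ũ + randn(N, D) * L') + sqrt(λ) * randn(F, D) * L')
             for _ in 1:S]
    Bm   = mean(draws)
    cols = Matrix(reduce(hcat, (b[:, 1] for b in draws))')  # S×F draws of B[:,1]
    return norm(Bm - K \ (X' * Ũ)),
           norm(cov(cols) - Σ[1, 1] * inv(Matrix(K)))
end
```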