Comparing Aggregators for Relational Probabilistic Models

Seyed Mehran Kazemi, et al. · July 25, 2017

Relational probabilistic models have the challenge of aggregation, where one variable depends on a population of other variables. Consider the problem of predicting gender from movie ratings; this is challenging because the number of movies per user and users per movie can vary greatly. Surprisingly, aggregation is not well understood. In this paper, we show that existing relational models (implicitly or explicitly) either use simple numerical aggregators that lose great amounts of information, or correspond to naive Bayes, logistic regression, or noisy-OR that suffer from overconfidence. We propose new simple aggregators and simple modifications of existing models that empirically outperform the existing ones. The intuition we provide on different (existing or new) models and their shortcomings plus our empirical findings promise to form the foundation for future representations.


1 Introduction

Much research in machine learning and reasoning under uncertainty is application-driven; the aim is to predict a distribution as well as possible given all of the information available. In science, it is normal to have idealized experiments where research concentrates on one aspect, and extraneous variables (confounders) are eliminated as much as possible. This paper examines the problem of aggregation in relational probabilistic models, which we try to study without confounders such as other properties or relations.

Probabilistic reasoning and learning about objects and relations have been studied under the umbrella of statistical relational learning [Getoor and Taskar, 2007] or statistical relational AI [De Raedt et al., 2016]. The models, which we refer to generically as relational probabilistic models, are characterized by weights over first-order formulae (where not all variables are quantified), and their meaning is typically given by the grounding, in which unquantified (free) variables are substituted by the constants in the language in all possible ways. The quantification allows for parameter sharing or weight tying, where the different instances share the same weights/parameters.

These models can be directed (such as various instances of probabilistic logic programming [Poole, 1993, Sato and Kameya, 1997, De Raedt et al., 2007]) or undirected (such as Markov logic networks [Domingos and Lowd, 2009]).

Relational models are typically defined in terms of the grounding of the model, where there is a random variable instance for each assignment of an object to each logical variable. The neighbors of an instance are those instances that appear with it in a (weighted) formula. Seen this way, relational models look just like graphical models (with some parameter sharing / weight tying). One way in which relational models differ from standard graphical models is that a variable can have an unbounded number of neighbors in the grounding, and the number of neighbors can vary from instance to instance. This problem of aggregation is a major subproblem in relational models and occurs when a variable, property or relation is in a formula with a property or relation that has extra logical variables. This paper investigates the simplest non-trivial form of such aggregation, where a property depends on a relation with one extra logical variable, and other confounders are deliberately ignored.

More than a decade ago, Perlich and Provost [2003] made a first step toward a better understanding of the aggregation problem for relational models. They concluded that the relational models existing at that date relied on poor aggregators, and that a better understanding of why each model performs well or badly on different domains is required. Since then, several relational models have been developed [Neville et al., 2003, De Raedt et al., 2007, Natarajan et al., 2008, Domingos and Lowd, 2009, Natarajan et al., 2011, Kimmig et al., 2012, Kazemi et al., 2014]. The types of aggregators used in these models, as well as the learning issues arising when aggregating, are however still poorly understood.

In this paper, we investigate the types of aggregation used in current relational models. All these models (explicitly or implicitly) rely on either numerical aggregators such as mean, count, proportion, etc. or correspond to a naive Bayes, logistic regression, or noisy-OR model. We discuss the problems with relying on any of these and show as a consequence that none of the existing relational models work entirely satisfactorily for aggregation. We then propose new (simple) aggregators that work better and have the potential to form the central component of future relational representations.

Note that this is an experimental paper; we deliberately create subproblems that only include aggregation. This is not an application paper; we are not trying to do as well as we can in those domains, which would entail using all information, but would then interfere with testing methods for aggregation. Aggregation is only one of the tools needed for applications, and it is important to understand its characteristics in isolation as well as combined with other methods. This is also an exploratory paper in that we try many qualitatively different methods to see which ones work; we are not trying to find the definitively best answer, which can only be done when the space of methods is better defined.

2 A Motivating/Running Example

As a motivating/running example, consider the problem of predicting gender from movie ratings. The MovieLens movie rating dataset(s) [Harper and Konstan, 2015] contain a ratings relation over users, items (here, items are movies), ratings and timestamps; we ignore the timestamp during training and testing. In some of the datasets, the gender of each user is recorded, which we treat as a property of the user. In popular culture there is a hypothesis that ratings depend on gender (there are some movies that appeal to females and some to males); this hypothesis is also supported by Weinsberg et al. [2012]. It aligns with our results as well, but many of the existing methods cannot extract that signal. In a probabilistic model, because users rate multiple movies, there needs to be some aggregation over the movies a user has rated in order to predict the gender of the user.

In particular, we initially use the first 60,000 ratings of the ml-100k dataset, which we call MovieLens 60K. We use the genders of the users who gave the first 40,000 ratings as the training set, and predict the genders of the users of the remaining 20,000 ratings (see https://grouplens.org/datasets/movielens/; the ratings used are those with a timestamp of 884673930 or less, and the training genders are those of the users who rated on or before timestamp 880845177). This subset contains 1451 movies, 419 users used for training and 171 test users. This is more challenging (and more interesting) than the full MovieLens dataset because it includes users with few ratings (whereas the full dataset has these removed). We use this as an ongoing example to understand and debug various aggregation schemes.

The challenge arises mainly because the number of ratings per person varies enormously. In this dataset the number of ratings per user ranges from 1 to 559, the mode is 20 (with 18 users having rated exactly 20 movies), and the mean number of ratings per user is 102. Two medians are of interest: half of the users rated 66 or fewer movies, but half of the ratings were by users who rated 163 or fewer movies. Similarly, the number of ratings per movie ranges from 1 to 375, with 126 movies having just 1 rating. There are 297 movies that no females rated, and 28 movies that no males rated.
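For concreteness, statistics like these can be recomputed with a few lines of Python. This sketch is ours, not the authors' code; it assumes the standard tab-separated ml-100k u.data format (user id, item id, rating, timestamp) and takes "the first 60,000 ratings" to mean the 60,000 earliest by timestamp, consistent with the timestamp cutoff mentioned above.

```python
from collections import Counter
from statistics import mean, median

# Load ratings from the standard ml-100k file
# (tab-separated: user id, item id, rating, timestamp).
ratings = []
with open("u.data") as f:
    for line in f:
        user, item, rating, ts = line.split("\t")
        ratings.append((int(user), int(item), int(rating), int(ts)))
ratings.sort(key=lambda r: r[3])     # order by timestamp
ratings = ratings[:60000]            # the "MovieLens 60K" subset

per_user = Counter(u for u, _, _, _ in ratings)
per_movie = Counter(m for _, m, _, _ in ratings)

counts = sorted(per_user.values())
print("ratings per user: min", counts[0], "max", counts[-1],
      "mean", round(mean(counts), 1))
print("most common count:", Counter(counts).most_common(1))
# "half of the users rated X or fewer movies"
print("median over users:", median(counts))
# "half of the ratings were by users who rated Y or fewer movies"
print("median over ratings:", median([c for c in counts for _ in range(c)]))
print("movies with a single rating:",
      sum(1 for c in per_movie.values() if c == 1))
```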

To keep things simple, we only consider whether a rating was greater than or equal to 4, or less than 4. There is a subtle issue about what to do with ratings that do not appear in the database: some of the methods treat these as a third value, and some as being unobserved (although many of them can be coerced one way or the other). We will be explicit about how unobserved ratings are treated.

There are two main challenges that arise:

  • Regularization: with many low counts, we need to regularize. This needs to be taken into account in the inference as well as in the learning, and so needs to be explicitly part of the representations.

  • Independence: movies do not provide independent evidence. For example, one might imagine that ratings for Godfather 1 and Godfather 2 are strongly correlated. With many movies rated by some people, determining the dependence is important. With low counts (for each user and for each movie), determining dependence is difficult.

The challenges do not disappear with more data; larger datasets have more movies and more users. It is not that users watch more movies, although larger datasets do capture a higher proportion of the viewers of popular movies.

3 Background and Notation

Relational models are not just big non-relational models, because of parameter tying and because variables can depend on varying numbers of other variables. In this section, we define some basic terminology, present the subset of first-order logic and logic programs that relational models build on, and briefly describe the relational models used in the rest of the paper.

3.1 Finite-domain, Function-free, First-order logic

A population is a finite set of objects. The size of a population is the number of objects in it. A logical variable (logvar) is typed with a population and is written in upper-case; constants, denoting objects, are written in lower-case. A term is a logvar or a constant. An atom is of the form F(t_1, ..., t_k), where F is a predicate or a functor and each t_i is a term. An atom is ground if all of its terms are constants. A literal is an atom or its negation. A formula is a literal, the negation of a formula, a disjunction of formulae, or a conjunction of formulae. A theory is a set of formulae that implicitly form a conjunction. A world is a value assignment to all ground atoms in a theory.

3.2 Logic programs

A rule is of the form h ← b, where h is an atom called the head and b is a formula called the body of the rule. A fact is a rule whose body is true. A logic program (LP) is a set of rules. Every ground atom that is not implied to be true by an LP is assumed to be false; this assumption is known as the closed-world assumption (CWA). Every logical variable appearing in the body but not in the head of a rule is assumed to be existentially quantified. For instance, a rule q(X) ← r(X, Y) is interpreted as q(X) ← ∃Y r(X, Y), meaning q(X) is true if r(X, Y) is true for at least one object Y.

3.3 Markov logic networks

Markov logic networks (MLNs) [Domingos and Lowd, 2009] are one of the most well-known and widely-used relational models. MLNs use weighted formulae to define joint distributions over ground atoms. A weighted formula (WF) is a pair consisting of a formula and a real-valued weight. An MLN is a set of WFs, where the probability of any world is proportional to the exponential of the sum of the weights of the groundings of the formulae that are true in that world.
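As a reference (this display is ours, not recovered from the original), the standard MLN distribution over worlds can be written as

```latex
P(\omega) \;=\; \frac{1}{Z}\exp\Big(\sum_i w_i\, n_i(\omega)\Big),
\qquad
Z \;=\; \sum_{\omega'} \exp\Big(\sum_i w_i\, n_i(\omega')\Big)
```

where n_i(ω) is the number of groundings of the i-th formula that are true in world ω and w_i is its weight.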

3.4 Relational logistic regression

Relational logistic regression (RLR) [Kazemi et al., 2014] is the directed analogue of MLNs. RLR defines a conditional probability distribution over a child atom using WFs consisting of atoms from the child's parents. The formulae of the WFs can be viewed as features whose values are computed by counting the number of instances of each formula that are true given the parent values. The probability of the child being true is the sigmoid of the weighted sum of these features. Fatemi et al. [2016] show how hidden object features with continuous values can also be included in WFs and learned directly for each object during training. Kazemi et al. [2014] show that not only several well-known explicit aggregators (such as noisy-OR and noisy-AND), but any other aggregator that is a polynomial of counts, can be modelled or approximated using RLR.
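Again as a display of our own for reference, the RLR conditional probability of a child atom has the logistic form

```latex
P(\mathit{child} = \mathrm{true} \mid \mathit{parents})
  \;=\; \sigma\Big(\sum_i w_i \cdot \mathrm{count}_i(\mathit{parents})\Big),
\qquad
\sigma(x) = \frac{1}{1+e^{-x}}
```

where count_i is the number of instances of the i-th weighted formula that are true given the parent values.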

3.5 Problog

Problog [De Raedt et al., 2007] is a probabilistic logic programming language in which a probability is assigned to each fact. These probabilities define a probability distribution over (non-probabilistic) logic programs. The probability of a ground atom being true in a Problog program is the sum of the probabilities of the logic programs that entail the ground atom.

3.6 RDN-Boost

Natarajan et al. [2011] learn multiple relational probability trees using gradient boosting and use all learned trees to make predictions about unseen cases. Their boosting method is known as RDN-Boost.

Relational probability trees [Neville et al., 2003] can be viewed as standard decision trees in which the decision nodes may include thresholds on aggregation functions such as count, average and proportion. In our running example, for instance, a node may split the users based on whether they have rated more than some number of movies.
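A toy sketch (ours, with made-up thresholds and leaf probabilities) of the kind of aggregate-threshold split a relational probability tree node makes for the running example:

```python
def rpt_predict(user_ratings):
    """Toy relational-probability-tree style prediction for one user.

    user_ratings: list of (movie_id, rating) pairs for the user.
    The thresholds and leaf probabilities here are made up for illustration.
    """
    n = len(user_ratings)                                   # aggregate: count
    avg = (sum(r for _, r in user_ratings) / n) if n else 0  # aggregate: average
    if n > 50:                        # split on a count threshold
        return 0.35 if avg >= 3.5 else 0.45   # leaf: predicted probability
    return 0.30
```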

4 The Aggregators

In this section we describe the aggregators and show how they work for the running example. The relevant atoms are one giving the gender of a user, one indicating that the rating of a user for a movie is greater than or equal to 4, and one indicating that the rating is less than 4.

4.1 Existing Explicit Aggregators

Existing explicit aggregators in the literature (see e.g., Horsch and Poole [1990], Friedman et al. [1999], Neville et al. [2003], Neville and Jensen [2007], Kisynski and Poole [2009], Kazemi et al. [2014]) include OR, AND, noisy-OR, noisy-AND, logistic regression and numerical functions such as average, count, median, mode, max/min and proportion.

OR and AND (corresponding respectively to whether the user has rated at least one movie and whether a user has rated all movies in our running example) will obviously perform badly. Noisy-OR and noisy-AND are both monotonic functions: with noisy-OR, more observations can only increase the probability of a certain class, and with noisy-AND, more observations can only decrease the probability of a certain class. These two aggregators may be expected to perform badly on our running example as each observation (each movie rated by a user) may only increase (or only decrease) the probability of a class.

Logistic regression solves the monotonicity problem of the noisy-OR and noisy-AND functions, as each observation may increase or decrease the probability of a certain class. For instance, if one movie has been mostly rated by males and another mostly by females, then observing that a new user has rated the first movie may increase the probability that this user is male, and observing they have rated the second may decrease it. However, as pointed out by Poole et al. [2014], as the number of ratings increases, a logistic regression model tends to become over-confident about its predictions (predicting with probabilities near zero or one), which results in poor performance in terms of log loss.

Numerical functions such as average, count, median, mode, max/min and proportion may provide small hints. In our running example, for instance, on average males may have rated more movies than females, thus thresholding count may provide a small hint for prediction. However, all of these functions lose great amounts of information as they do not consider which movies females or males like better.

4.2 Existing Implicit Aggregators

Here we consider some well-known relational learning models and how they do aggregation.

4.2.1 MLNs and RLR

An MLN model for our running example consists of the following weighted formulae:

a formula containing only the gender atom, with weight w0; a formula conjoining the gender atom with a rating of 4 or more, with weight w1; and a formula conjoining the gender atom with a rating of less than 4, with weight w2. (1)

We assume the closed-world assumption (having no formulae for the negations of the rating atoms means that unobserved ratings have no effect on the probability in these models). With this assumption, the MLN model is equivalent to an RLR model with the same weighted formulae. The value of w0 reflects the prediction when no ratings are observed; w1 weights the number of ratings that are 4 or greater, and w2 the number of ratings that are less than 4. Note that replacing the conjunction with a disjunction or an implication, or adding formulae containing the negations of these atoms, does not change the model or what can be represented (but the weights change).

It can be shown that from this model the conditional probability of the gender given the observed ratings of user u is

sigmoid(w0 + w1 * n+(u) + w2 * n-(u))    (2)

where n+(u) is the number of positive ratings (4 or greater) by user u in the rating set, and n-(u) is the number of negative ratings (less than 4) by user u in the rating set.

We would not expect this model to work well. As shown by Poole et al. [2014], as the number of ratings for a user increases (n+(u) and n-(u) increase), the probability of the gender atom will approach 0 or 1, unless w1 and w2 are approximately zero (or the two terms happen to cancel out). Thus we would expect that, to avoid extreme predictions, w1 and w2 go to zero, which means that the model effectively ignores the ratings when predicting gender.
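A small numerical illustration (ours, with made-up weights) of this overconfidence: in the form of Equation (2) the logit grows linearly with the number of ratings, so for fixed non-zero weights the prediction saturates at 0 or 1.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w0, w1, w2 = 0.0, 0.05, -0.03   # made-up weights for illustration
for n in (1, 10, 100, 500):     # total number of ratings for a user
    n_pos = int(0.7 * n)        # suppose 70% of the ratings are >= 4
    n_neg = n - n_pos
    p = sigmoid(w0 + w1 * n_pos + w2 * n_neg)
    print(f"{n:4d} ratings -> prediction {p:.4f}")
# With 500 ratings the prediction is essentially 1, even though the
# per-rating evidence is weak and the ratings are far from independent.
```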

We also tested models with a hidden variable between the gender and the ratings, where, again, adding extra formulae over these atoms does not affect the model. In this case, the MLN model and the RLR model are different (in how the hidden variable is marginalized). We expected the hidden variable to act as a regularizer, making the model give non-extreme predictions for the gender even if the learned weights become extreme. This model, however, did not perform better than the one with no hidden variables.

Another model we tested has different weights for each movie, as in the following schema:

for each movie m, one formula conjoining the gender atom with a rating of 4 or more for m, and one conjoining the gender atom with a rating of less than 4 for m, each with its own weight, together with a bias formula containing only the gender atom. (3)

Assuming all ratings are observed (the closed-world assumption), this is equivalent to an RLR model for the gender with weighted formulae in which the per-movie weights appear as continuous hidden atoms (4), one for a rating of 4 or more and one for a rating of less than 4, whose values for each movie can be directly learned during training. We report the results for this model in our experiments.

4.2.2 Problog

For predicting the gender of users given their ratings in our running example, a Problog program attaches a probability to a rule that predicts the gender from each movie a user has rated; this is equivalent to a program in which probabilities are only assigned to facts, with noise variables for the base case and for each rating. Again, this model corresponds to taking into account only the number of movies rated by a user (i.e. the explicit aggregator count), and building a model using just this feature, with the noise probability acting as a weight.

In this case, the predicted probability has the noisy-or form

1 - (1 - p0) * (1 - p)^n(u)    (5)

where n(u) is the number of movies rated by user u, p0 is the probability of the base noise fact, and p is the probability of each per-rating noise fact.
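A small illustration (ours, with made-up probabilities) of this count-based noisy-or: the prediction can only increase as a user rates more movies, regardless of which movies they are.

```python
def noisy_or(p0, p, n):
    """Noisy-or over n rated movies with base probability p0
    and per-rating probability p, as in Equation (5)."""
    return 1.0 - (1.0 - p0) * (1.0 - p) ** n

for n in (0, 1, 10, 100, 500):
    print(n, round(noisy_or(0.2, 0.01, n), 3))
# 0 -> 0.2, 10 -> 0.277, 100 -> 0.707, 500 -> 0.995: monotone in the count.
```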

4.2.3 RDN-Boost

RDN-Boost [Natarajan et al., 2011] learns regression trees whose intermediate nodes may contain thresholds over the explicit aggregation functions mentioned in Subsection 4.1. The type of aggregation in RDN-Boost is therefore a combination of thresholded aggregation functions. We use RDN-Boost as a test of the standard explicit aggregators: if the standard aggregators are useful, then RDN-Boost should perform well. Our test results show that RDN-Boost does not perform well, which we take as evidence that the underlying aggregators are not extracting useful information (as one might expect).

4.3 New Aggregators

In this section, we propose some new aggregators. We tried and rejected many aggregators; the ones presented here are the best of each type.

4.3.1 Treating each movie as a dataset

One intuition is that the movies can be seen as datasets in themselves. A movie can be seen as a collection of data points about the users who rated it. The idea behind this approach is similar to the rule combining of Natarajan et al. [2008], where each parent independently predicts the target and then these independent predictions are combined. However, we do not tie the parameters for the parents, and we do not consider intermediate hidden variables for each parent's prediction that would have to be learned using EM. One way to use this is to pool all of the people who rated the movies that the target user rated:

(6) the proportion of known female users among all the raters of the movies that the target user rated, pooled together and regularized by a pseudo-count,

where the denominator only includes ratings by users whose gender is known, and the pseudo-count (a beta/Dirichlet prior) is set in our experiments by 5-fold cross validation (and was often quite large).

Counting only the users that gave the same rating as the target user worked slightly worse (presumably because there were fewer data points for each user). Using more informed prior counts (which predict the training average for users that rated no movies) also worked worse. We will not consider these variants here.

The second variant is to average over all the movies that the target user rated:

(7) the per-movie proportion of known female raters, regularized by a pseudo-count, averaged over the movies rated by the target user.

The value of the pseudo-count was set by 5-fold cross validation, but was often around 1 (which corresponds to Laplace smoothing).
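A sketch (ours) of the two estimators described above, assuming pre-computed per-movie counts of raters with known gender; the exact placement of the pseudo-counts in the paper's Equations (6) and (7) may differ slightly from this reconstruction.

```python
def pooled_estimate(movies, females, totals, c):
    """Eq. (6)-style estimate: pool the raters of all movies the user rated.

    movies:  movie ids rated by the target user
    females: dict movie -> number of known female raters
    totals:  dict movie -> number of raters with known gender
    c:       pseudo-count (beta/Dirichlet prior)
    """
    f = sum(females[m] for m in movies)
    t = sum(totals[m] for m in movies)
    return (f + c) / (t + 2 * c)

def averaged_estimate(movies, females, totals, c):
    """Eq. (7)-style estimate: average a regularized per-movie proportion."""
    props = [(females[m] + c) / (totals[m] + 2 * c) for m in movies]
    return sum(props) / len(props)
```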

4.3.2 Limiting the number of neighbors

One “obvious” solution is a generative model where the gender produces the ratings, which results in a Naive Bayes model under the assumption that ratings are independent given the gender. Naive Bayes does not work well because the independence assumption is not appropriate; the ratings are not independent given the gender.

Naive Bayes is optimal if there is a single rating (assuming the others are missing at random). However, the model becomes extremely overconfident (probabilities approach 0 or 1) as the number of ratings increases. The independence assumption is approximately correct; it works when there are a few ratings, just not hundreds. One way to fix this is to limit the number of movies considered for each user; instead of using all of the ratings, a subset can be used. The movies with very few ratings do not provide a useful signal (as they need to be regularized too much), and the movies with very many ratings also turn out to be not very useful, because they are popular independently of gender. We tested selecting a fixed number of ratings at random for each user, with that number selected by 5-fold cross validation. The problem with this method is that the answer is sensitive to which movies are selected, and so the prediction is the average over many selections, which makes predicting slow or noisy.
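A sketch (ours) of naive Bayes restricted to a fixed number of randomly chosen ratings and averaged over several draws. The likelihoods like_f and like_m would be smoothed estimates of how likely a female or male user is to have rated each movie; all names and defaults here are hypothetical.

```python
import random

def nb_limited(user_movies, like_f, like_m, prior_f, k=10, draws=30):
    """Naive Bayes over k randomly selected rated movies, averaged over draws.

    user_movies: list of movies rated by the user
    like_f[m], like_m[m]: smoothed P(rated m | female), P(rated m | male)
    prior_f: prior probability of female
    """
    preds = []
    for _ in range(draws):
        sample = random.sample(user_movies, min(k, len(user_movies)))
        pf, pm = prior_f, 1.0 - prior_f
        for m in sample:
            pf *= like_f[m]
            pm *= like_m[m]
        preds.append(pf / (pf + pm))
    return sum(preds) / len(preds)
```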

4.3.3 MLN/RLR with relational dropout

Dropout [Srivastava et al., 2014] is a regularization technique designed mainly to avoid overfitting in deep neural networks. In training a deep network with dropout, each iteration randomly selects a proportion (e.g., 30%) of the neurons and turns them off, forcing the network to make predictions using only the remaining neurons. When making a prediction about an unseen case, the network is used several times (each time a proportion of the neurons is selected randomly and turned off) and the average is reported. This allows the network to average over multiple configurations.

We utilize the idea of dropout to avoid making over-confident predictions in MLN/RLR. During training of the MLN/RLR model of Equations (3) and (4), in each iteration we turn off a random subset of the observations and make predictions using only the rest. However, instead of keeping a proportion of the observations on (as in standard dropout), we keep a fixed number of them on (e.g., only considering 10 random movies rated by a user). The reason we keep a fixed number instead of a proportion is that the number of observations (and thus any proportion of them) varies per object; in our running example, each user has rated a different number of movies. At test time, we report the average over multiple predictions of the model, where each time a fixed number of observations are selected randomly and kept on.

Note that, similar to naive Bayes with limited neighbors, averaging over multiple fixed-size subsets of the movies makes prediction slow or noisy.
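A sketch (ours) of training the per-movie-weight logistic model of Equations (3)/(4) with relational dropout: each update keeps only a fixed number of a user's ratings on. Variable names and hyperparameters are hypothetical, and the paper's actual optimizer may differ.

```python
import math
import random
from collections import defaultdict

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_relational_dropout(data, keep=10, epochs=50, lr=0.01):
    """data: list of (rated_movies, high_rated_set, label) per training user.

    Learns a bias plus two weights per movie (one for a rating >= 4,
    one for a rating < 4), using only `keep` randomly chosen ratings
    per user in each update to avoid overconfident predictions."""
    w_hi = defaultdict(float)   # weight when the movie was rated >= 4
    w_lo = defaultdict(float)   # weight when the movie was rated < 4
    bias = 0.0
    for _ in range(epochs):
        for movies, high, y in data:
            kept = random.sample(movies, min(keep, len(movies)))
            z = bias + sum(w_hi[m] if m in high else w_lo[m] for m in kept)
            err = sigmoid(z) - y            # gradient of the log loss
            bias -= lr * err
            for m in kept:
                if m in high:
                    w_hi[m] -= lr * err
                else:
                    w_lo[m] -= lr * err
    return bias, w_hi, w_lo
```

At test time the same fixed-size sampling would be repeated several times and the predictions averaged, as described above.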

Dataset Target-type #Train #Test Other-type Other-count #Ratings
MovieLens-60K User 419 171 Movie 1511 60,000
MovieLens-600K User 2665 1416 Movie 3575 600,000
Yelp Business 3525 882 User 2108 38,327
Table 1: Summary of the datasets used for experiments.

4.3.4 Matrix Factorization

Matrix factorization techniques have proved effective, especially in recommendation systems [Koren et al., 2009]. The idea is to factorize a relation matrix over two types of objects (e.g., users and movies) into latent features for each type of object such that the relation matrix can be approximated using the latent features. Koren et al. [2009] use matrix factorization for movie recommendation and factorize the rating matrix as

r̂(u, m) = μ + b_u + b_m + Σ_f p_{u,f} q_{m,f}

where r̂(u, m) is the predicted rating of user u for movie m, μ is a global bias, b_u and b_m are user and movie biases, and p_{u,f} and q_{m,f} are the f-th latent properties of the user and the movie respectively.

When the ratings are binarized (e.g., greater than or equal to 4 versus less than 4), the model can be trained to predict the probability that the rating of user u for movie m is 4 or greater as

sigmoid(μ + b_u + b_m + Σ_f p_{u,f} q_{m,f})

The same training algorithm as Koren et al. [2009] can be used here to minimize log loss.

While matrix factorization itself is not new, it has not been used explicitly for aggregation.

One way to use matrix factorization for aggregation is to factorize the relation matrix into latent features, then use the latent features of the objects of interest as input features for a model that makes predictions about those objects. For our running example, we train a logistic regression on the latent features of each user u to predict the gender, which takes the form

sigmoid(w_0 + Σ_f w_f p_{u,f})

where p_{u,f} is, as previously mentioned, the f-th latent property of the user u. This is the method reported as “matrix factorization” in the results.
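A compact sketch (ours) of the two-stage procedure described above: factorize the binarized rating matrix by minimizing log loss with SGD, then fit a logistic regression on the users' latent features. The latent dimension, learning rates, and regularization constants are placeholders, not the paper's settings.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def factorize(ratings, n_users, n_movies, k=20, epochs=30, lr=0.02, reg=0.05):
    """ratings: list of (u, m, y) with y = 1 if the rating is >= 4 else 0.
    Returns the user latent vectors P (n_users x k)."""
    P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_movies)]
    bu, bm, mu = [0.0] * n_users, [0.0] * n_movies, 0.0
    for _ in range(epochs):
        random.shuffle(ratings)
        for u, m, y in ratings:
            z = mu + bu[u] + bm[m] + sum(p * q for p, q in zip(P[u], Q[m]))
            err = sigmoid(z) - y                  # gradient of the log loss
            mu -= lr * err
            bu[u] -= lr * (err + reg * bu[u])
            bm[m] -= lr * (err + reg * bm[m])
            for f in range(k):
                pu, qm = P[u][f], Q[m][f]
                P[u][f] -= lr * (err * qm + reg * pu)
                Q[m][f] -= lr * (err * pu + reg * qm)
    return P

def fit_gender_lr(P, labelled, epochs=100, lr=0.1, reg=0.01):
    """labelled: list of (u, y) pairs with the known genders as 0/1 labels.
    Logistic regression on the user latent features."""
    k = len(P[0])
    w = [0.0] * (k + 1)                           # last entry is the bias
    for _ in range(epochs):
        for u, y in labelled:
            z = w[-1] + sum(wf * pf for wf, pf in zip(w, P[u]))
            err = sigmoid(z) - y
            w[-1] -= lr * err
            for f in range(k):
                w[f] -= lr * (err * P[u][f] + reg * w[f])
    return w
```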

Note that we also tried a number of other methods, such as forcing one of the features to be the gender, but none of them worked as well as what is described here. We also tried Bayesian probabilistic matrix factorization, a graphical-model variant of matrix factorization proposed by Salakhutdinov and Mnih [2008] that provides automatic complexity control, but it mostly performed similarly to regular matrix factorization, so we did not include it in our final comparison.

One thing that works well is to initialize one of the latent features to correlate with gender (e.g., to initialize that feature to one value for the females and another for the males in the training set, and to a random value in (-0.1, 0.1) for the other objects and for the other features).

Instead of learning the latent features over the rating matrices, and using them to predict gender, we also tried learning latent features that predict both ratings and gender simultaneously. This, however, did not do well. We identified two reasons for the poor performance of this model:

  • Since there are many more ratings than genders, the latent features tend to be more inclined toward predicting ratings.

  • One of the latent features of the users may become identical to the gender of the user (thus completely over-fitting to the training data), while the rest of the latent features predict the ratings.

Learning latent features that jointly predict multiple targets has been used extensively for knowledge base completion (see e.g., [Nickel et al., 2012, Bordes et al., 2013, Nickel et al., 2016, Trouillon et al., 2016, Nguyen, 2017]). However, to the best of our knowledge only Nickel et al. [2012] consider the properties of objects; the others only consider relationships and ignore object properties. We believe knowledge base completion tools should take the two issues we identified into account if they are to include object properties along with object relations.

5 Empirical Evaluation

5.1 Evaluation Measures

We compare the algorithms on the test set using two measures that are suited for probabilistic predictions for Boolean variables.

  • mean squared error (MSE):

    MSE = (1/|E|) Σ_{e∈E} (p_e − a_e)²

    where E is the set of test examples, p_e is the prediction on example e and a_e is the actual value for example e. Predicting 0.5 always has an error of 0.25.

  • log loss (LL), the negative of the log likelihood divided by the number of examples:

    LL = −(1/|E|) Σ_{e∈E} [ a_e log₂ p_e + (1 − a_e) log₂(1 − p_e) ]

    which is undefined if either of the logarithms is given a number less than or equal to 0. We use base-2 logarithms so the answer can be interpreted in bits. Predicting 0.5 has a log loss of 1.

Log loss puts a larger penalty on extreme predictions that are incorrect, which is appropriate if using the predictions for making decisions.
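For concreteness, a minimal computation of the two measures (our sketch):

```python
import math

def mse(preds, actuals):
    return sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(preds)

def log_loss(preds, actuals):
    # base-2 so the result can be read in bits; undefined if a log gets <= 0
    return -sum(a * math.log2(p) + (1 - a) * math.log2(1 - p)
                for p, a in zip(preds, actuals)) / len(preds)

# Predicting 0.5 everywhere: MSE 0.25, log loss 1 bit.
print(mse([0.5, 0.5], [1, 0]), log_loss([0.5, 0.5], [1, 0]))
```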

5.2 Datasets

The datasets used in our experiments are:

  1. Our running example: MovieLens 60K.

  2. A subset of the larger MovieLens 1M dataset [Harper and Konstan, 2015] that we call MovieLens 600K. The 1M dataset is the largest MovieLens dataset that contains gender. As in the 60K data, we used only the first 60% of the ratings; the users who had rated movies in the first 40% form the training set and the remaining users form the test set. This is more challenging than using the whole dataset because it is less cleaned (the complete dataset removes users with few ratings, but we include them), and because predicting future users is an extrapolation task, which should be more challenging than interpolation.

  3. A business prediction task extracted from the Yelp challenge dataset (https://www.yelp.com/dataset_challenge) for predicting the type of food offered in a restaurant. We considered all restaurants offering either Chinese or Mexican food (but not both), and restricted the users to those who have rated at least 10 of these restaurants. The number of ratings per restaurant varies widely.

Table 1 summarizes the three datasets.

5.3 Results

For many of these methods we tried to use the standard software, but often it did not work for our datasets. For the software we wrote, unless otherwise specified, we used gradient descent, trained long enough to reach a local minimum on the test set. Hyperparameters (typically regularization parameters) were chosen by 5-fold cross validation.

The results on the three datasets are shown in Table 2.

MovieLens 60K MovieLens 600K Yelp
Method MSE LL MSE LL MSE LL
Predict 0.5 0.250 1.000 0.250 1.000 0.250 1.000
Training average 0.216 0.900 0.204 0.864 0.236 0.960
MLN/RLR (no hidden) (Eq (1)) 0.211 0.881 0.204 0.863 0.234 0.953
Alchemy (with model of Eq (3)) 0.220 1.149 0.159 0.853 0.198 0.871
Problog (noisy-or) 0.211 0.882 0.203 0.861 0.233 0.952
RDN-Boost 0.215 0.899 0.204 0.864 0.234 0.953
Movies as a dataset (Eq (6)) 0.207 0.868 0.195 0.831 0.198 0.833
Average of each movie as dataset (Eq (7)) 0.209 0.875 0.190 0.811 0.197 0.831
MLN/RLR with relational dropout 0.199 0.849 0.148 0.668 0.198 0.837
Matrix Factorization 0.200 0.844 0.193 0.824 0.236 0.959
Table 2: Results of various aggregators (lower is better)

For the MLN and RLR results, we tried Alchemy 2.0 [Kok et al., 2009]. To keep the evaluations consistent with the other methods, we directly optimized Equation (2) by stochastic gradient descent. Our results are consistent with Alchemy's, but were much faster to obtain (although admittedly our programs are not as general as Alchemy).

The weights found for model (1) indicate that ratings of 4 and over were positive evidence for being female, and ratings of less than 4 were negative evidence for being female. The weights on the rating formulae are very small in magnitude because they are multiplied by the number of ratings. The model with a hidden variable did not perform better (but found many different parameterizations, all with essentially the same error).

Problog ran for a few days to train the parameters and then crashed (even for the 60K dataset). To judge the results it would have given, we directly optimized Equation (5) by stochastic gradient descent, which works well for this particular problem but may not be a general solution. The optimization found a negative probability for one of the probabilistic facts, for the same reason that the MLN found a negative weight for the corresponding formula. This could indicate that this model is not appropriate, but note that negative probabilities have been advocated as being needed for probabilistic logic programs [Buchman and Poole, 2016].

RDN-Boost did not perform very well for this task. This is because the aggregation primitives used in its code could not extract useful features. We did not try using RDN-Boost with other aggregation methods as its base learners; it would be interesting to try RDN-Boost with the other aggregators that we have found to be useful.

The next model is the MLN/RLR model using a bias and two weights per item, following the formulae of (3). The table shows the results for Alchemy (using an equivalent formulation with disjunctions, as Alchemy seems to work better with disjunctive formulae). The log likelihood is terrible for the 60K dataset because the model becomes overconfident in predictions that are wrong, which are penalized much more in log likelihood than in squared error.

The use of the movies as datasets improves over the previous methods. These results just considered whether the movie was rated, and not what the rating was. Using the actual rating (or whether it was 4 or more versus less than 4) performed slightly worse.

Limiting the number of neighbors with relational dropout makes these models competitive, which gives evidence that the problem is overconfidence. The overconfidence problem, and the ability of relational dropout to overcome it to a certain extent, is especially visible for MovieLens 600K, where relational dropout outperforms all other methods. Limiting the number of parents is a simple idea for solving the problem, and so we think that there should be ways to improve such methods.

6 Conclusion

One aspect of relational models that differs from standard graphical models is the need for aggregation when the number of neighbors in the grounding varies from atom to atom; the difference can be orders of magnitude. We showed empirically that current models cannot handle this diversity, and that with simple methods (or simple modifications of existing methods) we can do better than the existing relational models. We discussed the reasons why the current models perform poorly.

In this paper we have made an attempt at understanding a major subproblem that arises in statistical relational learning. We hope that building on good solutions to such subproblems will provide the foundations for the next generation of representations.

References

  • Bordes et al. [2013] Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. (2013), Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787–2795.
  • Buchman and Poole [2016] Buchman, D. and Poole, D. (2016), Negation without negation in probabilistic logic programming. In Proc. 15th International Conference on Principles of Knowledge Representation and Reasoning.
  • De Raedt et al. [2016] De Raedt, L., Kersting, K., Natarajan, S., and Poole, D. (2016), Statistical Relational Artificial Intelligence: Logic, Probability, and Computation. Morgan and Claypool Publishers.
  • De Raedt et al. [2007] De Raedt, L., Kimmig, A., and Toivonen, H. (2007), ProbLog: A probabilistic Prolog and its application in link discovery. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-2007), pages 2462–2467.
  • Domingos and Lowd [2009] Domingos, P. and Lowd, D. (2009), Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan and Claypool.
  • Fatemi et al. [2016] Fatemi, B., Kazemi, S. M., and Poole, D. (2016), A learning algorithm for relational logistic regression: Preliminary results. arXiv preprint arXiv:1606.08531.
  • Friedman et al. [1999] Friedman, N., Getoor, L., Koller, D., and Pfeffer, A. (1999), Learning probabilistic relational models. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI-99), pages 1300–1309, Morgan Kaufman.
  • Getoor and Taskar [2007] Getoor, L. and Taskar, B. (Eds.) (2007), Introduction to Statistical Relational Learning. MIT Press.
  • Harper and Konstan [2015] Harper, F. M. and Konstan, J. A. (2015), The movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4).
  • Horsch and Poole [1990] Horsch, M. and Poole, D. (1990), A dynamic approach to probabilistic inference using Bayesian networks. In Proc. Sixth Conference on Uncertainty in AI, pages 155–161.
  • Kazemi et al. [2014] Kazemi, S. M., Buchman, D., Kersting, K., Natarajan, S., and Poole, D. (2014), Relational logistic regression. In Proc. 14th International Conference on Principles of Knowledge Representation and Reasoning (KR-2014).
  • Kimmig et al. [2012] Kimmig, A., Bach, S., Broecheler, M., Huang, B., and Getoor, L. (2012), A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications, pages 1–4.
  • Kisynski and Poole [2009] Kisynski, J. and Poole, D. (2009), Lifted aggregation in directed first-order probabilistic models. In Proc. Twenty-first International Joint Conference on Artificial Intelligence (IJCAI-09), pages 1922–1929.
  • Kok et al. [2009] Kok, S., Sumner, M., Richardson, M., Singla, P., Poon, H., Lowd, D., Wang, J., and Domingos, P. (2009), The alchemy system for statistical relational AI.
  • Koren et al. [2009] Koren, Y., Bell, R., and Volinsky, C. (2009), Matrix factorization techniques for recommender systems. IEEE Computer, 42(8):30–37.
  • Natarajan et al. [2011] Natarajan, S., Khot, T., Kersting, K., Gutmann, B., and Shavlik, J. (2011), Gradient–based boosting for statistical relational learning: The relational dependency network case. Machine Learning Journal, (Online First).
  • Natarajan et al. [2008] Natarajan, S., Tadepalli, P., Dietterich, T. G., and Fern, A. (2008), Learning first-order probabilistic models with combining rules. Annals of Mathematics and Artificial Intelligence, 54(1-3):223–256.
  • Neville and Jensen [2007] Neville, J. and Jensen, D. (2007), Relational dependency networks. Journal of Machine Learning Research (JMLR), 8:653–692.
  • Neville et al. [2003] Neville, J., Jensen, D., Friedland, L., and Hay, M. (2003), Learning relational probability trees. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 625–630, ACM.
  • Nguyen [2017] Nguyen, D. Q. (2017), An overview of embedding models of entities and relationships for knowledge base completion. arXiv preprint arXiv:1703.08098.
  • Nickel et al. [2016] Nickel, M., Rosasco, L., Poggio, T. A., et al. (2016), Holographic embeddings of knowledge graphs. In AAAI, pages 1955–1961.
  • Nickel et al. [2012] Nickel, M., Tresp, V., and Kriegel, H.-P. (2012), Factorizing yago: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, pages 271–280, ACM.
  • Perlich and Provost [2003] Perlich, C. and Provost, F. (2003), Aggregation-based feature invention and relational concept classes. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 167–176, ACM.
  • Poole [1993] Poole, D. (1993), Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence, 64(1):81–129.
  • Poole et al. [2014] Poole, D., Buchman, D., Kazemi, S. M., Kersting, K., and Natarajan, S. (2014), Population size extrapolation in relational probabilistic modelling. In International Conference on Scalable Uncertainty Management, pages 292–305, Springer.
  • Salakhutdinov and Mnih [2008] Salakhutdinov, R. and Mnih, A. (2008), Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, pages 880–887, ACM.
  • Sato and Kameya [1997] Sato, T. and Kameya, Y. (1997), PRISM: A symbolic-statistical modeling language. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 1330–1335.
  • Srivastava et al. [2014] Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014), Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
  • Trouillon et al. [2016] Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., and Bouchard, G. (2016), Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071–2080.
  • Weinsberg et al. [2012] Weinsberg, U., Bhagat, S., Ioannidis, S., and Taft, N. (2012), Blurme: Inferring and obfuscating user gender based on ratings. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys ’12, pages 195–202.