1 Synonyms
Relational learning, statistical relational models, statistical relational learning, relational data mining
2 Glossary
 Entities

are (abstract) objects. We denote an entity by a lowercase letter, e.g., $e$. An actor in a social network can be modelled as an entity. There can be multiple types of entities in a domain (e.g., individuals, cities, companies), entity attributes (e.g., income, gender) and relationships between entities (e.g., knows, likes, brother, sister). Entities, relationships and attributes are defined in the entity-relationship model, which is used in the design of a formal relational model.
 Relation

A relation or relation instance $R$ is a set of tuples. A tuple is an ordered list of elements $(e_1, \ldots, e_n)$, which, in the context of this discussion, represent entities. The arity of a relation is the number of elements in each of its tuples, e.g., a relation might be unary, binary or higher order. $R$ is the name or type of the relation. For example, (Jack, Mary) might be a tuple of the relation instance knows, indicating that Jack knows Mary. A database instance (or world) is a set of relation instances. For example, a database instance might contain instances of the unary relations student, teacher, male, female, and instances of the binary relations knows, likes, brother, sister (see Figure 1).
 Predicate

A predicate is a mapping of tuples to true or false. $R(e_1, \ldots, e_n)$ is a ground predicate and is true when $(e_1, \ldots, e_n) \in R$, otherwise it is false. Note that we do not distinguish between the relation name and the predicate name. Example: knows is a predicate and knows(Jack, Mary) returns True if it is true that Jack knows Mary, i.e., that $(\text{Jack}, \text{Mary}) \in \textit{knows}$. The convention is that relations and predicates are written in lowercase and entities in uppercase.
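As an illustration, a relation instance can be encoded as a Python set of tuples, and a predicate then reduces to a membership test. The relation names and entities follow the running example; the encoding itself is only a sketch.

```python
# A relation instance as a set of tuples; a predicate is a membership test.
knows = {("Jack", "Mary"), ("Mary", "Jack")}   # binary relation
student = {("Jack",), ("Mary",)}               # unary relation

def predicate(relation):
    """Map a relation instance to a predicate: tuples -> True/False."""
    return lambda *entities: tuple(entities) in relation

knows_p = predicate(knows)
print(knows_p("Jack", "Mary"))  # True:  (Jack, Mary) is in knows
print(knows_p("Jack", "Jack"))  # False: that tuple is not in the relation
```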
 Probabilistic Database

A (possible) world corresponds to a database instance. In a probabilistic database, a probability distribution is defined over all possible worlds under consideration. Probabilistic databases with potentially complex dependencies can be described by probabilistic graphical models. In a canonical representation, one assigns a binary random variable $X_t^R$ to each possible tuple $t$ in each relation $R$. Then $X_t^R = 1$ if $t \in R$, and $X_t^R = 0$ otherwise. The probability for a world is written as $P(X = x)$, where $X$ is the set of random variables and $x$ denotes their values in the world (see Figure 1).
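The canonical representation can be illustrated for a tuple-independent probabilistic database, in which each candidate tuple carries an independent Bernoulli variable; the tuples and probabilities below are toy values, not from the text.

```python
from itertools import product

# Tuple-independent probabilistic database (sketch): each candidate tuple t
# has an independent Bernoulli variable X_t with marginal probability p_t.
marginals = {("Jack", "knows", "Mary"): 0.9,
             ("Mary", "likes", "Jack"): 0.4}

tuples = list(marginals)

def world_probability(world):
    """P(world) = product over tuples of p_t if present, else (1 - p_t)."""
    p = 1.0
    for t in tuples:
        p *= marginals[t] if t in world else 1.0 - marginals[t]
    return p

# The probabilities of all 2^n possible worlds sum to one.
worlds = [frozenset(t for t, bit in zip(tuples, bits) if bit)
          for bits in product([0, 1], repeat=len(tuples))]
total = sum(world_probability(w) for w in worlds)
print(round(total, 10))                              # 1.0
print(round(world_probability(frozenset(tuples)), 10))  # both tuples true: 0.36
```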
 Triple Database

A triple database consists of binary relations represented as subject-predicate-object triples. An example of a triple is: (Jack, knows, Mary). A triple database can be represented as a knowledge graph with entities as nodes and predicates as directed links, pointing from the subject node to the object node. The Resource Description Framework (RDF) is triple based and is the basic data model of the Semantic Web’s Linked Open Data. In social network analysis, nodes would be individuals or actors and links would correspond to ties.
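A triple database and its induced knowledge graph can be sketched with plain dictionaries; the triples below extend the running example and are illustrative only.

```python
from collections import defaultdict

# A triple database as subject-predicate-object triples; the induced
# knowledge graph maps each predicate to a set of directed edges.
triples = [("Jack", "knows", "Mary"),
           ("Mary", "knows", "Jack"),
           ("Jack", "livesIn", "Munich")]

graph = defaultdict(set)   # predicate -> set of (subject, object) edges
entities = set()
for s, p, o in triples:
    graph[p].add((s, o))
    entities.update([s, o])

print(sorted(entities))        # nodes of the knowledge graph
print(sorted(graph["knows"]))  # directed 'knows' links
```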
 Linked Data

Linked Open Data describes a method for publishing structured data so that it can be interlinked and can be exploited by machines. Linked Open Data uses the RDF data model
 Collective learning

refers to the effect that an entity’s relationships, attributes or class membership can be predicted not only from its attributes but also from its (social) network environment
 Collective classification

A special case of collective learning: The class membership of entities can be predicted from the class memberships of entities in their (social) network environment. Example: Individuals’ income classes can be predicted from those of their friends
 Relationship prediction

The prediction of the existence of a relationship between entities, for example friendship between individuals. A relationship is typically modelled as a binary relation
 Entity resolution

The task of predicting if two constants refer to the same entity
 Homophily

The tendency of an individual to associate with similar others
 Graphical models

A graphical description of a probabilistic domain where nodes represent random variables and edges represent direct probabilistic dependencies
 Latent Variables

Latent variables are quantities which are not measured directly and whose states are inferred from data
3 Definition
Relational models are machine-learning models that are able to truthfully represent some or all distinguishing features of a relational domain such as long-range dependencies over multiple relationships. Typical examples for relational domains include social networks and knowledge bases. Relational models concern nontrivial relational domains with at least one relation with an arity of two or larger that describes the relationship between entities, e.g., knows, likes, dislikes. In the following we will focus on nontrivial relational domains.
4 Introduction
Social networks can be modelled as graphs, where actors correspond to nodes and where relationships between actors such as friendship, kinship, organizational position, or sexual relationships are represented by directed labelled links (or ties) between the respective nodes. Typical machine learning tasks concern the prediction of unknown relationship instances between actors, as well as the prediction of actors' attributes and class labels. In addition, one might be interested in a clustering of actors. To obtain best results, machine learning should take an actor's network environment into account. Thus two individuals might appear in the same cluster because they have common friends.
Relational learning is a branch of machine learning that is concerned with these tasks, i.e., to learn efficiently from data where information is represented in the form of relationships between entities.
Relational models are machine learning models that truthfully model some or all distinguishing features of relational data such as long-range dependencies propagated via relational chains and homophily, i.e., the fact that entities with similar attributes are neighbors in the relationship structure. In addition to social network analysis, relational models are used to model knowledge graphs, preference networks, citation networks, and biomedical networks such as gene-disease networks or protein-protein interaction networks. Relational models can be used to solve the aforementioned machine learning tasks, i.e., classification, attribute prediction, clustering. Moreover, relational models can be used to solve additional relational learning tasks such as relationship prediction and entity resolution. Relational models are derived from directed and undirected graphical models or latent variable models and typically define a probability distribution over a relational domain.
5 Key Points
Statistical relational learning is a subfield of machine learning. Relational models learn a probabilistic model of a complete networked domain by taking into account global dependencies in the data. Relational models can lead to more accurate predictions if compared to nonrelational machine learning approaches. Relational models typically are based on probabilistic graphical models, e.g., Bayesian networks, Markov networks, or latent variable models.
6 Historical Background
Inductive logic programming (ILP) was perhaps the first machine learning effort that seriously focussed on a relational representation. It gained attention in the early 1990s and focusses on learning deterministic or close-to-deterministic dependencies, with representations derived from first-order logic. As a field, ILP was introduced in a seminal paper by Muggleton
[21]. A very early and still very influential algorithm is Quinlan’s FOIL [29]. ILP will not be a focus in the following, since social networks exhibit primarily statistical dependencies. Statistical relational learning started around the beginning of the millennium with the work by Koller, Pfeffer, Getoor and Friedman [17, 9]. Since then many combinations of ILP and relational learning have been explored. The Semantic Web and Linked Open Data are producing vast quantities of relational data, and [39, 27] describe the application of statistical relational learning to these emerging fields. Relational learning has been applied to the learning of knowledge graphs, which model large domains as triple databases. [24] is a recent review on the application of relational learning to knowledge graphs. An interesting application is the semi-automatic completion of knowledge graphs by analysing information from the Web and other sources, in combination with relational learning, which exploits the information already present in the knowledge graph [5].
7 Machine Learning in Relational Domains
7.1 Relational Domains
Relational domains are domains that can truthfully be represented by relational databases. The glossary defines the key terms such as a relation, a predicate, a tuple and a database. Nontrivial relational domains contain at least one relation with an arity of two or larger that describes the relationship between entities, e.g., knows, likes, dislikes. The main focus here is on nontrivial relational domains.
Social networks are typical relational domains, where information is represented by multiple types of relationships (e.g., knows, likes, dislikes) between entities (here: actors), as well as through the attributes of entities.
7.2 Generative Models for a Relational Database
Typically, relational models can exploit long-range or even global dependencies and have principled ways of dealing with missing data. Relational models are often displayed as probabilistic graphical models and can be thought of as relational versions of regular graphical models, e.g., Bayesian networks, Markov networks, and latent variable models. The approaches often have a “Bayesian flavor” but a fully Bayesian statistical treatment is not always performed.
The following section describes common relational graphical models.
7.3 Nonrelational Learning
Although we are mostly concerned with relational learning, it is instructive to analyse the special case of nonrelational learning. Consider a database with a key entity class actor with elements $a_1, \ldots, a_N$ and with only unary relations; thus we are considering a trivial relational domain. Then one can partition the random variables into independent disjoint sets according to the entities, and the joint distribution factorizes as

$$P(X = x) = \prod_{i=1}^{N} P(x_{a_i})$$

where $x_{a_i}$ denotes the values of the binary random variables $X_{(a_i)}^R$, and the binary random variable $X_{(a_i)}^R$ is assigned to tuple $(a_i)$ in unary relation $R$ (see glossary).
Thus the set of random variables can be reduced to nonoverlapping independent sets of random variables. This is the common nonrelational learning setting with i.i.d. instances, corresponding to the different actors.
7.4 Nonrelational Learning in a Relational Domain
A common approximation to a relational model is to model unary relations of key entities in a similar way as in a nonrelational model, as $P(x_a \mid f_a)$, where $f_a$ is a vector of relational features that are derived from the relational network environment of the actor $a$. Relational features provide additional information to support learning and prediction tasks. For instance, the average income of an individual's friends might be a good covariate to predict an individual's income in a social network. The underlying mechanism that forms these patterns might be homophily, the tendency of individuals to associate with similar others. The goal of this approach is to be able to use i.i.d. machine learning by exploiting some of the relational information. This approach is commonly used in applications where probabilistic models are computationally too expensive. The application of nonrelational machine learning to relational domains is sometimes referred to as propositionalization.

Relational features are often high-dimensional and sparse (e.g., there are many people, but only a small number of them are an individual's friends; there are many items but an individual has only bought a small number of them) and in some domains it can be easier to define useful kernels than to define useful features. Relational kernels often reflect the similarity of entities with regard to the network topology. For example, a kernel can be defined based on counting the substructures of interest in the intersection of two graphs defined by the neighborhoods of the two entities [20] (see also the discussion on RDF graphs further down).
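The friends'-income covariate mentioned above can be sketched as a simple relational feature extractor; the toy incomes and friendship relation are hypothetical.

```python
# Propositionalization sketch: derive a relational feature for each actor
# from the network environment -- here the mean income of their friends.
incomes = {"Jack": 40.0, "Mary": 60.0, "Tom": 80.0}
knows = {("Jack", "Mary"), ("Jack", "Tom"), ("Mary", "Tom")}

def friend_income_feature(actor):
    friends = [o for s, o in knows if s == actor]
    if not friends:
        return 0.0                  # fallback for isolated actors
    return sum(incomes[f] for f in friends) / len(friends)

# The feature for 'Jack' could now be fed into any i.i.d. learner.
print(friend_income_feature("Jack"))   # (60 + 80) / 2 = 70.0
```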
7.5 Learning Rule Premises in Inductive Logic Programming
Some researchers apply a systematic search for good features and consider this as an essential distinction between relational learning and nonrelational learning: in nonrelational learning features are essentially defined prior to the training phase whereas relational learning includes a systematic and automatic search for features in the relational context of the involved entities. Inductive logic programming (ILP) is a form of relational learning with the goal of finding deterministic or closetodeterministic dependencies, which are described in logical form such as Horn clauses. Traditionally, ILP involves a systematic search for sensible relational features that form the rule premises [6].
8 Relational Models
In this section we describe the most important relational models in some detail. These are based on probabilistic graphical models, which efficiently model highdimensional probability distributions by exploiting independencies between random variables. In particular, we consider Bayesian networks, Markov networks and latent variable models. We start with a more detailed discussion on possible world models for relational domains and with a discussion on the dual structures of the triple graph and the probabilistic graph.
8.1 Random Variables for Relational Models
As mentioned before, a probabilistic database defines a probability distribution over the possible worlds under consideration. The goal of relational learning is to derive a model of this probability distribution.
In a canonical representation, we assign a binary random variable $X_t^R$ to each possible tuple $t$ in each relation $R$. Then $X_t^R = 1$ if $t \in R$, and $X_t^R = 0$ otherwise.
The probability for a world is written as $P(X = x)$, where $X$ is the set of random variables and $x$ denotes their values in the world (see Figure 1). What we have just described corresponds to a closed-world assumption, where all tuples which are not part of the database instance map to false and thus $X_t^R = 0$. In contrast, in an open-world assumption, we would consider the corresponding truth values and states as being unknown and the database instance as being only partially observed. Often in machine learning some form of a local closed-world assumption is applied, with a mixture of true, false and unknown ground predicates [5, 18]. For example, one might assume that, if at least one child of an individual is specified, all children are specified (closed world), whereas if no child is specified, children are considered unknown (open world). Another aspect is that type constraints imply that certain ground predicates are false. For example, only individuals can get married, but neither cities nor buildings. Other types of background knowledge might materialize tuples that are not explicitly specified. For example, if individuals live in Munich, by simple reasoning one can conclude that they also live in Bavaria and Germany. The corresponding tuples can be added to the database.
Based on background knowledge, one might want to modify the canonical representation, which uses only binary random variables. For example, discrete random variables with $K$ states are often used to implement the constraint that exactly one out of $K$ ground predicates is true, e.g., that an individual belongs to exactly one of $K$ income classes or age classes. It is also possible to extend the model towards continuous variables.

So far we have considered an underlying probabilistic model and an observed world. In probabilistic databases one often assumes a noise process between the actual database instance and the observed database instance by specifying a conditional probability $P(\tilde{X} \mid X)$. Thus only $\tilde{X}$ is observed, whereas the real interest is in $X$: one observes $\tilde{x}$, from which one can infer $P(x \mid \tilde{x})$ for the database instance. With an observed $\tilde{x} = 1$ there is a certain probability that $x = 0$ (error in the database), and with an observed $\tilde{x} = 0$ there is a certain probability that $x = 1$ (missing tuples).
The theory of probabilistic databases has focussed on the issues of complex query answering under a probabilistic model. In probabilistic databases [36] the canonical representation is used in tuple-independent databases, while multi-state random variables are used in block-independent-disjoint (BID) databases.
Most relational models assume that all entities (or constants) and all predicates are known and fixed (domain closure assumption). In general these constraints can be relaxed, for example if one needs to include new individuals in the model. Also, latent variables derived from a cluster or a factor analysis can be interpreted as new “invented” predicates.
8.2 Triple Graphs and Probabilistic Graphical Networks
A triple database consists of binary relations represented as subject-predicate-object triples. An example of a triple is: (Jack, knows, Mary). A triple database can be represented as a knowledge graph with entities as nodes and predicates as directed links, pointing from the subject node to the object node. Triple databases are able to represent web-scale knowledge bases and sociograms that allow multiple types of directed links. Relations of higher order can be reduced to binary relations by introducing auxiliary entities (“blank nodes”). Figure 2 shows an example of a triple graph. The Resource Description Framework (RDF) is triple based and is the basic data model of the Semantic Web’s Linked Open Data. In social network analysis, nodes would be individuals or actors and links would correspond to ties.
For each triple a random variable is introduced. In Figure 2 these random variables are represented as elliptical red nodes. The binary random variable associated with the triple $(s, p, o)$ will be denoted as $X_{s,p,o}$.
8.3 Directed Relational Models
The probability distribution of a directed relational model, i.e. a relational Bayesian model, can be written as

$$P(X = x) = \prod_i P(x_i \mid \mathrm{par}(x_i)) \qquad (1)$$

Here $X$ refers to the set of random variables in the directed relational model, while $X_i$ denotes a particular random variable. In a graphical representation, directed arcs point from all parent nodes $\mathrm{par}(X_i)$ to the node $X_i$ (Figure 2). As Equation 1 indicates, the model requires the specification of the parents of a node and the specification of the probabilistic dependency of a node, given the states of its parent nodes. In specifying the former, one often follows a causal ordering of the nodes, i.e., one assumes that parent nodes causally influence child nodes and their descendants. An important constraint is that the resulting directed graph is not permitted to have directed loops, i.e. that it is a directed acyclic graph. A major challenge is to specify $P(x_i \mid \mathrm{par}(x_i))$, which might require the calculation of complex aggregational features as intermediate steps.
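Equation 1 can be illustrated on a tiny ground Bayesian network; the two nodes, their parent structure and the conditional probability tables below are hypothetical toy values.

```python
# Directed factorization sketch: P(X = x) = prod_i P(x_i | par(x_i)),
# here for two binary nodes with a toy parent structure and toy CPTs.
parents = {"smokes_jack": [], "cancer_jack": ["smokes_jack"]}
cpt = {
    "smokes_jack": {(): 0.3},                # P(smokes = 1)
    "cancer_jack": {(0,): 0.05, (1,): 0.25}, # P(cancer = 1 | smokes)
}

def joint_probability(x):
    """Joint probability of a full assignment x over binary nodes."""
    p = 1.0
    for node, pa in parents.items():
        pa_states = tuple(x[q] for q in pa)
        p1 = cpt[node][pa_states]
        p *= p1 if x[node] == 1 else 1.0 - p1
    return p

print(joint_probability({"smokes_jack": 1, "cancer_jack": 0}))  # 0.3 * 0.75
```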
8.3.1 Probabilistic Relational Models
Probabilistic relational models (PRMs) were one of the first published directed relational models and found great interest in the statistical machine learning community [17, 10]. An example of a PRM is shown in Figure 3. PRMs combine a frame-based (i.e., object-oriented) logical representation with probabilistic semantics based on directed graphical models. The PRM provides a template for specifying the graphical probabilistic structure and the quantification of the probabilistic dependencies for any ground PRM. In the basic PRM models only the entities’ attributes are uncertain whereas the relationships between entities are assumed to be known. Naturally, this assumption greatly simplifies the model. Subsequently, PRMs have been extended to also consider the case that relationships between entities are unknown, which is called structural uncertainty in the PRM framework [10].
In PRMs one can distinguish parameter learning and structural learning. In the simplest case the dependency structure is known and the truth values of all ground predicates are known as well in the training data. In this case, parameter learning consists of estimating parameters in the conditional probabilities. If the dependency structure is unknown, structural learning is applied, which optimizes an appropriate cost function and typically uses a greedy search strategy to find the optimal dependency structure. In structural learning, one needs to guarantee that the ground Bayesian network does not contain directed loops.
In general the data will contain missing information, i.e., not all truth values of all ground predicates are known in the available data. For some PRMs, regularities in the PRM structure can be exploited (encapsulation) and even exact inference to estimate the missing information is possible. Large PRMs require approximate inference; commonly, loopy belief propagation is being used.
8.3.2 More Directed Relational Graphical Models
A Bayesian logic program is defined as a set of Bayesian clauses [16]. A Bayesian clause specifies the conditional probability distribution of a random variable given its parents. A special feature is that, for a given random variable,
several such conditional probability distributions might be given and combined based on various combination rules (e.g., noisy-or). In a Bayesian logic program, for each clause there is one conditional probability distribution and for each random variable there is one combination rule. Relational Bayesian networks [14] are related to Bayesian logic programs and use probability formulae for specifying conditional probabilities. The probabilistic entity-relationship (PER) models [12] are related to the PRM framework and use the entity-relationship model as a basis, which is often used in the design of a relational database. Relational dependency networks [22] also belong to the family of directed relational models and learn the dependency of a node given its Markov blanket (the smallest node set that makes the node of interest independent of the remaining network). Relational dependency networks are generalizations of dependency networks as introduced by [11, 13]. A relational dependency network typically contains directed loops and thus is not a proper Bayesian network.
8.4 Undirected Relational Graphical Models
The probability distribution of an undirected graphical model, i.e. a Markov network, is written as a log-linear model in the form

$$P(X = x) = \frac{1}{Z} \exp \left( \sum_i w_i f_i(x) \right)$$

where the feature functions $f_i$ can be any real-valued functions on the set $x$ and where $w_i \in \mathbb{R}$. In a probabilistic graphical representation one forms undirected edges between all nodes that jointly appear in a feature function. Consequently, all nodes that appear jointly in a function will form a clique in the graphical representation. $Z$ is the partition function normalizing the distribution.
A major advantage is that undirected graphical models can elegantly model symmetrical dependencies, which are common in social networks.
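A minimal log-linear Markov network over two binary variables illustrates the model form; the single "agreement" feature and its weight are toy choices that mimic a symmetric dependency.

```python
import math
from itertools import product

# Log-linear sketch: P(x) = exp(sum_i w_i * f_i(x)) / Z over all joint
# states of two binary variables, with one symmetric toy feature.
def f_agree(x):                 # feature: both variables take the same value
    return 1.0 if x[0] == x[1] else 0.0

w = 1.5                         # weight favouring agreement
states = list(product([0, 1], repeat=2))
scores = {x: math.exp(w * f_agree(x)) for x in states}
Z = sum(scores.values())        # partition function

probs = {x: s / Z for x, s in scores.items()}
print(round(sum(probs.values()), 10))  # 1.0
print(probs[(0, 0)] > probs[(0, 1)])   # agreement states are more likely: True
```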
8.4.1 Markov Logic Network (MLN)
A Markov logic network (MLN) is a probabilistic logic which combines Markov networks with first-order logic. In MLNs the random variables, representing ground predicates, are part of a Markov network, whose dependency structure is derived from a set of first-order logic formulae (Figure 4).
Formally, a MLN $L$ is defined as follows: Let $F_i$ be a first-order formula (i.e., a logical expression containing constants, variables, functions and predicates) and let $w_i$ be a weight attached to each formula. Then $L$ is defined as a set of pairs $(F_i, w_i)$ [32, 4].
From $L$ the ground Markov network is generated as follows. First, one generates nodes (random variables) by introducing a binary node for each possible grounding of each predicate appearing in $L$, given a set of constants $C$ (see the discussion on the canonical probabilistic representation). The state of a node is equal to one if the ground predicate is true, and zero otherwise. The feature functions, which define the probabilistic dependencies in the Markov network, are derived from the formulae by grounding them in a domain. For formulae that are universally quantified, a grounding is an assignment of constants to the variables in the formula. If a formula contains $k$ variables, then there are $|C|^k$ such assignments. The feature function is equal to one if the ground formula is true, and zero otherwise. The probability distribution of the ground MLN can then be written as

$$P(X = x) = \frac{1}{Z} \exp \left( \sum_i w_i n_i(x) \right)$$

where $n_i(x)$ is the number of formula groundings that are true for $F_i$ and where the weight $w_i$ is associated with formula $F_i$ in $L$.
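The counting of true groundings can be sketched for a single universally quantified formula, smokes(x) implies cancer(x); the constants, weight and world below are hypothetical.

```python
import math

# MLN sketch: ground the formula  smokes(x) => cancer(x)  over a constant
# set and compute the unnormalized weight exp(w * n(x)), where n(x) counts
# the true groundings in a world.
constants = ["Jack", "Mary"]
w = 2.0

world = {("smokes", "Jack"): 1, ("cancer", "Jack"): 0,
         ("smokes", "Mary"): 0, ("cancer", "Mary"): 0}

def n_true_groundings(x):
    count = 0
    for c in constants:          # one grounding per constant
        implication = (not x[("smokes", c)]) or x[("cancer", c)]
        count += int(implication)
    return count

n = n_true_groundings(world)     # Mary satisfies it, Jack violates it
print(n)                         # 1
print(math.exp(w * n))           # unnormalized world weight
```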
The joint distribution will be maximized when large weights are assigned to formulae that are frequently true. In fact, the larger the weight, the higher is the confidence that a formula is true for many groundings. Learning in MLNs consists of estimating the weights from data. In learning, MLN makes a closed-world assumption and employs a pseudolikelihood cost function, which is the product of the probabilities of each node given its Markov blanket. Optimization is performed using a limited-memory BFGS algorithm.
The simplest form of inference in a MLN concerns the prediction of the truth value of a ground predicate given the truth values of other ground predicates. For this task an efficient algorithm can be derived: In the first phase of the algorithm, the minimal subset of the ground Markov network is computed that is required to calculate the conditional probability of the queried ground predicate. It is essential that this subset is small since in the worst case, inference could involve all nodes. In the second phase, the conditional probability is then computed by applying Gibbs sampling to the reduced network.
Finally, there is the issue of structural learning, which, in this context, means the learning of first-order formulae. Formulae can be learned by directly optimizing the pseudolikelihood cost function or by using ILP algorithms. For the latter, the authors use CLAUDIEN [30], which can learn arbitrary first-order clauses (not just Horn clauses, as in many other ILP approaches).
An advantage of MLNs is that the features and thus the dependency structure is defined using a wellestablished logical representation. On the other hand, many people are unfamiliar with logical formulae and might consider the PRM framework to be more intuitive.
8.4.2 Relational Markov Networks (RMNs)
RMNs generalize many concepts of PRMs to undirected relational models [37]. RMNs use conjunctive database queries as clique templates, where a clique in an undirected graph is a subset of its nodes such that every two nodes in the subset are connected by an edge. RMNs are mostly trained discriminatively. In contrast to MLNs and similarly to PRMs, RMNs do not make a closed-world assumption during learning.
8.5 Relational Latent Variable Models
In the approaches described so far, the structures in the graphical models were either defined using expert knowledge or were learned directly from data using some form of structural learning. Both can be problematic since appropriate expert domain knowledge might not be available, while structural learning can be very time consuming and possibly results in local optima which are difficult to interpret. In this context, the advantage of relational latent variable models is that the structure in the associated graphical models is purely defined by the entities and relations in the domain.
The additional complexity of working with a latent representation is counterbalanced by the great simplification by avoiding structural learning. In the following discussion, we assume that data is in triple format; generalizations to relational databases have been described [41, 19].
8.5.1 The IHRM: A Latent Class Model
The infinite hidden relational model (IHRM) [41] (a.k.a. infinite relational model [15]) is a relational generalization of a probabilistic mixture model, where a latent variable $H_e$ with $K$ states is assigned to each entity $e$. If the latent variable for subject $s$ is in state $k$ and the latent variable for object $o$ is in state $l$, then the triple $(s, p, o)$ exists with probability $\theta_{k,p,l}$. Since the latent states are unobserved, we obtain

$$P(X_{s,p,o} = 1) = \sum_{k=1}^{K} \sum_{l=1}^{K} P(H_s = k)\, P(H_o = l)\, \theta_{k,p,l}$$

which can be implemented as the sum-product network of Figure 5.
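The marginalization over latent classes can be sketched directly; the class priors and the theta parameters below are toy values, and for brevity they are shared across relation types.

```python
# IHRM sketch: the probability of a triple marginalizes over the latent
# classes of subject and object (toy priors and theta, one relation type).
K = 2
p_class = {"Jack": [0.7, 0.3], "Mary": [0.2, 0.8]}  # P(H_e = k)
theta = [[0.9, 0.1],                                 # P(triple | k, l)
         [0.2, 0.5]]

def triple_probability(subj, obj):
    return sum(p_class[subj][k] * p_class[obj][l] * theta[k][l]
               for k in range(K) for l in range(K))

print(round(triple_probability("Jack", "Mary"), 4))  # 0.314
```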
In the IHRM the number of states (latent classes) in each latent variable is allowed to be infinite and fully Bayesian learning is performed based on a Dirichlet process mixture model. For inference Gibbs sampling is employed where only a small number of the infinite states are occupied in sampling, leading to a clustering solution where the number of states in the latent variables is automatically determined. Models with a finite number of states have been studied as stochastic block models [28].
Since the dependency structure in the ground Bayesian network is local, one might get the impression that only local information influences prediction. This is not true, since latent representations are shared and, in the ground Bayesian network, the latent variables are parents of the random network variables $X_{s,p,o}$. Common children with evidence lead to interactions between the parent latent variables, so information can propagate in the network of latent variables.
The IHRM has a number of key advantages. First, no structural learning is required, since the directed arcs in the ground Bayesian network are directly given by the structure of the triple graph. Second, the IHRM model can be thought of as an infinite relational mixture model, realizing hierarchical Bayesian modeling. Third, the mixture model can be used for a cluster analysis providing insight into the relational domain.
The IHRM has been applied to social networks, recommender systems, for gene function prediction and to develop medical recommender systems. The IHRM was the first relational model applied to trust learning [31].
In [1] the IHRM is generalized to a mixed-membership stochastic block model, where entities can belong to several classes.
8.5.2 RESCAL: A Latent Factor Model
The RESCAL model was introduced in [26] and follows a similar dependency structure as the IHRM, as shown in Figure 5. The main differences are that, first, the latent variables do not describe entity classes but are latent entity factors $a_e$ and that, second, there are no nonnegativity or normalization constraints on the factors. The probability of a triple is calculated with

$$P(X_{s,p,o} = 1) = \mathrm{sig}(f(s, p, o)) \qquad (2)$$

as

$$f(s, p, o) = a_s^\top R_p\, a_o$$

where $\mathrm{sig}(x) = 1/(1 + e^{-x})$.
As in the IHRM, factors are unique to entities, which leads to interactions between the factors in the ground Bayesian network, enabling the propagation of information in the network of latent factors. The relation-specific matrix $R_p$ encodes the factor interactions for a specific relation, and its asymmetry permits the representation of directed relationships.
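The RESCAL score of Equation 2 can be computed in a few lines of plain Python; the two-dimensional factors and the relation matrix are illustrative values.

```python
import math

# RESCAL sketch: score a triple as f(s,p,o) = a_s^T R_p a_o and squash it
# with the logistic function (toy factors and relation matrix).
a = {"Jack": [1.0, 0.0], "Mary": [0.5, 0.5]}  # latent entity factors
R_knows = [[2.0, 0.0],                         # relation-specific matrix
           [0.0, 1.0]]

def score(subj, pred_matrix, obj):
    a_s, a_o = a[subj], a[obj]
    # a_s^T R_p a_o for 2-dimensional factors
    return sum(a_s[i] * pred_matrix[i][j] * a_o[j]
               for i in range(2) for j in range(2))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

f = score("Jack", R_knows, "Mary")
print(f)                     # 1.0
print(round(sigmoid(f), 4))  # P(X_{s,p,o} = 1)
```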
The calculation of the latent factors is based on the factorization of a multi-relational adjacency tensor, where two modes represent the entities in the domain and the third mode represents the relation type (Figure 6). With a closed-world assumption and a squared-error cost function, an efficient alternating least squares (ALS) algorithm can be used; for local closed-world assumptions and open-world assumptions, stochastic gradient descent is used.
The relational learning capabilities of the RESCAL model have been demonstrated on classification tasks and entity resolution tasks, i.e., the mapping of entities between knowledge bases. One of the great advantages of the RESCAL model is its scalability: RESCAL has been applied to the YAGO ontology [35] with several million entities and 40 relation types [27]! The YAGO ontology, closely related to DBpedia [2] and the Google Knowledge Graph [33], contains formalized knowledge from Wikipedia and other sources.
RESCAL is part of a tradition of relation prediction using factorization of matrices and tensors. [42] describes a Gaussian-process-based approach for predicting a single relation type, which has been generalized to a multi-relational setting in [40].
A number of variations and extensions exist. The SUNS approach [39] is based on a Tucker-1 decomposition of the adjacency tensor, which can be computed by a singular value decomposition (SVD). The Neural Tensor Network [34] combines several tensor decompositions. Approaches with a smaller memory footprint are TransE [3] and HolE [25]. The multi-way neural network in the Knowledge Vault project [5] combines the strengths of latent factor models and neural networks and was successfully used for the semi-automatic completion of knowledge graphs. [24] is a recent review of the application of relational learning to knowledge graphs.
9 Key Applications
Typical applications of relational models are in social network analysis, knowledge graphs, bioinformatics, recommendation systems, natural language processing, medical decision support, and Linked Open Data.
10 Future Directions
As a number of publications have shown, the best results can be achieved by committee solutions that integrate factorization approaches with user-defined or learned rule patterns [23, 5]. The most prominent applications in recent years were in projects involving large knowledge graphs, where performance and scalability could clearly be demonstrated [5, 24]. The application of relational learning to sequential data and time series opens up new application areas, for example in clinical decision support and in sensor networks [7, 8]. [38] studies the relevance of relational learning to cognitive brain functions.
References
 [1] Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
 [2] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. DBpedia: A nucleus for a web of open data. In ISWC/ASWC, pages 722–735, 2007.
 [3] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013.
 [4] Pedro Domingos and Matthew Richardson. Markov logic: A unifying framework for statistical relational learning. In Lise Getoor and Benjamin Taskar, editors, Introduction to Statistical Relational Learning, pages 339–369. MIT Press, 2007.
 [5] Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: A webscale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601–610. ACM, 2014.
 [6] Saso Dzeroski. Inductive logic programming in a nutshell. In Lise Getoor and Benjamin Taskar, editors, Introduction to Statistical Relational Learning, pages 57–92. MIT Press, 2007.
 [7] Cristóbal Esteban, Danilo Schmidt, Denis Krompaß, and Volker Tresp. Predicting sequences of clinical events by using a personalized temporal latent embedding model. In Healthcare Informatics (ICHI), 2015 International Conference on, pages 130–139. IEEE, 2015.
 [8] Cristóbal Esteban, Volker Tresp, Yinchong Yang, Stephan Baier, and Denis Krompaß. Predicting the co-evolution of event and knowledge graphs. In International Conference on Information Fusion, 2016.
 [9] Nir Friedman, Lise Getoor, Daphne Koller, and Avi Pfeffer. Learning probabilistic relational models. In IJCAI, pages 1300–1309, 1999.
 [10] Lise Getoor, Nir Friedman, Daphne Koller, Avi Pfeffer, and Benjamin Taskar. Probabilistic relational models. In Lise Getoor and Benjamin Taskar, editors, Introduction to Statistical Relational Learning, pages 129–174. MIT Press, 2007.

 [11] David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Myers Kadie. Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1:49–75, 2000.
 [12] David Heckerman, Christopher Meek, and Daphne Koller. Probabilistic entity-relationship models, PRMs, and plate models. In Lise Getoor and Benjamin Taskar, editors, Introduction to Statistical Relational Learning, pages 201–238. MIT Press, 2007.
 [13] Reimar Hofmann and Volker Tresp. Nonlinear Markov networks for continuous variables. In NIPS, 1997.
 [14] Manfred Jaeger. Relational bayesian networks. In UAI, pages 266–273, 1997.
 [15] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, pages 381–388, 2006.
 [16] Kristian Kersting and Luc De Raedt. Bayesian logic programs. CoRR, cs.AI/0111058, 2001.
 [17] Daphne Koller and Avi Pfeffer. Probabilistic framebased systems. In AAAI/IAAI, pages 580–587, 1998.
 [18] Denis Krompaß, Stephan Baier, and Volker Tresp. Typeconstrained representation learning in knowledge graphs. In International Semantic Web Conference, pages 640–655. Springer, 2015.
 [19] Denis Krompaß, Xueyian Jiang, Maximilian Nickel, and Volker Tresp. Probabilistic latentfactor database models. Linked Data for Knowledge Discovery, page 74, 2014.
 [20] Uta Lösch, Stephan Bloehdorn, and Achim Rettinger. Graph kernels for RDF data. In ESWC, pages 134–148, 2012.
 [21] Stephen Muggleton. Inductive logic programming. New Generation Comput., 8(4):295–318, 1991.
 [22] Jennifer Neville and David Jensen. Dependency networks for relational data. In ICDM, pages 170–177, 2004.
 [23] Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In Advances in Neural Information Processing Systems, pages 1179–1187, 2014.
 [24] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
 [25] Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. Holographic embeddings of knowledge graphs. arXiv preprint arXiv:1510.04935, 2015.
 [26] Maximilian Nickel, Volker Tresp, and HansPeter Kriegel. A threeway model for collective learning on multirelational data. In ICML, pages 809–816, 2011.
 [27] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing YAGO: scalable machine learning for linked data. In WWW, pages 271–280, 2012.
 [28] Krzysztof Nowicki and Tom A B Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001.
 [29] J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5:239–266, 1990.
 [30] Luc De Raedt and Luc Dehaspe. Clausal discovery. Machine Learning, 26(2–3):99–146, 1997.
 [31] Achim Rettinger, Matthias Nickles, and Volker Tresp. A statistical relational model for trust learning. In AAMAS (2), pages 763–770, 2008.
 [32] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1–2):107–136, 2006.
 [33] Amit Singhal. Introducing the knowledge graph: things, not strings. Technical report, Official Google Blog, May 2012. http://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.html.
 [34] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934, 2013.
 [35] Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of semantic knowledge. In WWW, pages 697–706, 2007.
 [36] Dan Suciu, Dan Olteanu, Christopher Ré, and Christoph Koch. Probabilistic Databases. Synthesis Lectures on Data Management. Morgan & Claypool Publishers, 2011.
 [37] Benjamin Taskar, Pieter Abbeel, and Daphne Koller. Discriminative probabilistic models for relational data. In UAI, pages 485–492, 2002.
 [38] Volker Tresp, Cristóbal Esteban, Yinchong Yang, Stephan Baier, and Denis Krompaß. Learning with memory embeddings. arXiv preprint arXiv:1511.07972, 2015.
 [39] Volker Tresp, Yi Huang, Markus Bundschus, and Achim Rettinger. Materializing and querying learned knowledge. In First ESWC Workshop on Inductive Reasoning and Machine Learning on the Semantic Web (IRMLeS 2009), 2009.
 [40] Zhao Xu, Kristian Kersting, and Volker Tresp. Multirelational learning with gaussian processes. In IJCAI, pages 1309–1314, 2009.
 [41] Zhao Xu, Volker Tresp, Kai Yu, and HansPeter Kriegel. Infinite hidden relational models. In UAI, 2006.
 [42] Kai Yu, Wei Chu, Shipeng Yu, Volker Tresp, and Zhao Xu. Stochastic relational models for discriminative link prediction. In NIPS, pages 1553–1560, 2006.