While successful, deep networks have a few important limitations. Apart from the key issue of interpretability, the other major limitation is the requirement of flat inputs (vectors, matrices, tensors), which limits applications to tabular, propositional representations. On the other hand, symbolic and structured representations [14, 7, 13, 38, 1] have the advantage of being interpretable, while also allowing for rich representations that support learning and reasoning at multiple levels of abstraction. This representability allows them to model complex data structures such as graphs far more easily and interpretably than basic propositional representations. While expressive, these models do not incorporate or discover latent relationships between features as effectively as deep networks.
Consequently, there has been a focus on achieving the dream team of logical and statistical learning methods, such as relational neural networks [19, 42]. While specific architectures differ, these methods generally employ hand-coded relational rules or Inductive Logic Programming (ILP) to identify the domain’s structural rules; these rules are then used with the observed data to unroll and learn a neural network. We improve upon these methods in two specific ways: (1) we employ a recently successful rule learner to automatically extract interpretable rules, which are then employed as the hidden layers of the neural network; (2) we exploit the notion of parameter tying from statistical relational learning, which allows multiple instances of the same rule to share the same parameters. These two extensions significantly improve the adaptation of neural networks (NNs) to relational data.
We employ Relational Random Walks to extract relational rules from a database, which are then used as the first layer of the NN. These random walks have the advantages of being learned from data (instead of being hand-coded, which is time-consuming) and of being interpretable (as the walks are rules over the database schema). Given evidence (facts), relational random walks are instantiated (grounded); parameter tying ensures that groundings of the same random walk share the same parameters, so far fewer network parameters need to be learned during training.
For combining outputs from different groundings of the same clause, we employ combination functions [30, 16]. For instance, given a coauthorship rule, one pair of entities may have coauthored several papers, while another pair may have coauthored a different number of publications. Combination functions are a natural way to compare such relational features arising from rules. Our network handles this in two steps: first, by ensuring that all instances (papers) of a particular pair share the same weights; second, by combining the predictions from each of these instances (papers) using a combination function. We explore the use of Or, Max and Average
combination functions. Once the network weights are appropriately constrained by parameter tying and combination functions, they can be learned using standard techniques such as backpropagation.
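To make the combining step concrete, the three combination functions can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; it assumes each grounding's output lies in [0, 1], and the example values are made up.

```python
# Sketch of the combination functions discussed above, applied to the
# outputs of several groundings of a single rule (values in [0, 1]).

def combine_average(values):
    """Average combining rule: mean over all grounding outputs."""
    return sum(values) / len(values)

def combine_max(values):
    """Max combining rule: the strongest grounding dominates."""
    return max(values)

def combine_noisy_or(values):
    """Noisy-Or: probability that at least one grounding 'fires',
    treating the grounding outputs as independent probabilities."""
    result = 1.0
    for v in values:
        result *= (1.0 - v)
    return 1.0 - result

# Three groundings of one rule (e.g., three coauthored papers):
outputs = [0.9, 0.5, 0.2]
```

Note the qualitative differences: Average dilutes a single strong grounding, Max ignores all but the strongest, and Noisy-Or grows monotonically with every additional supporting grounding.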
We make the following contributions: (1) we learn a NN that can be fully trained from data and with no significant engineering, unlike previous approaches; (2) we combine the successful paradigms of relational random walks and parameter tying from SRL methods; this allows the resulting NN to faithfully model relational data while being fully learnable; (3) we evaluate the proposed approach against recent relational NN approaches and demonstrate its efficacy.
2 Related Work
Lifted Relational Neural Networks. Our work is closest in architecture to Lifted Relational Neural Networks (LRNN) due to Šourek et al. LRNN uses expert hand-crafted relational rules as input, which are then instantiated (based on data) and rolled out as a ground network. While our approach appears similar to the LRNN framework at a high level, there are significant differences. First, while Šourek et al. tie parameters across examples within the same rule, there is no parameter tying across multiple instances; our model, in contrast, ties parameters across multiple ground instances of the same rule (in our case, a relational random walk). Second, since they adopt a fuzzy-logic semantics, their system supports weighted facts (called ground atoms in the logic literature); we take a more standard approach, and our observations are Boolean. Third, while this difference may appear limiting, note that it leads to a reduction in the number of network weight parameters.
Šourek et al. have extended their work to learn network structure using predicate invention; our work instead learns relational random walks as rules for the network structure. As we show in our experiments, NNs can not only easily handle a large number of such random walks, but can also use them effectively as a bag of weakly predictive intermediate layers capturing local features. This allows for learning a more robust model than induced rules, which take a more global view of the domain. Another recent approach is due to Kazemi and Poole, who proposed a relational neural network by adding hidden layers to their Relational Logistic Regression model. A key limitation of their work is that it is restricted to unary relation predictions, that is, it can only predict attributes of objects rather than relations between objects. In contrast, ours is a more general framework that can be used to predict relations between objects.
Much of this recent work is closely related to a significant body of research called neural-symbolic integration 
, which aims to combine (arguably) two of the oldest formalisms in machine learning: symbolic representations and neural learning architectures. Some of the earliest systems, such as KBANN, date back to the early 90s; KBANN also rolls out the network architecture from rules, though it only supports propositional rules. Current work, including ours, instead explores relational rules, which serve as templates to roll out more complex architectures. Other recent approaches, such as CILP++ and Deep Relational Machines, incorporate relational information as network layers. However, such models propositionalize relational data into flat feature vectors and hence cannot be seen as truly relational models. A rather distinctive approach in this vein is due to Hu et al., where two independent networks incorporating rules and data are trained together. Finally, NNs have also been trained to approximate ILP clause evaluation, perform SLD-resolution in first-order logic, and approximate entailment operators in propositional logic.
Random walks have also been combined with neural models: one line of work considered random walks between query entities, composing embeddings of the relations on each walk with recurrent neural networks. DeepWalk performs random walks on graphs by treating each node as a word, which results in learning embeddings for each node of the graph. Kaur et al.
consider relational random walks to generate count and existential features to train a relational restricted Boltzmann machine. This feature transformation amounts to propositionalization, which can result in a loss of information, as we show in our experiments.
Tensor Based Models. Recently, several tensor-based models [31, 4, 41, 3, 47] have been proposed to learn embeddings of objects and relations. Such models have been very effective for large-scale knowledge-base construction. However, they are computationally expensive, as they learn parameters for each object and relation in the knowledge base. Furthermore, the embedding into some ambient vector space makes the models more difficult to interpret. Though rule distillation can yield human-readable rules, it is another computationally intensive post-processing step, which limits the size of the interpreted rules.
Other Models. Several NNs have been applied to relational database schemas [2, 37]. These models differ in how they handle one-to-many joins, cyclicity, and indirect relationships between relations. However, they all learn one network per relation, which makes them computationally expensive. In the same vein, graph-based models take graph structure into consideration during training. Pham et al. perform collective classification via a deep neural network where connections between adjacent layers are established according to the given graph structure. Niepert et al. proposed an algorithm that prepares relational data to be input directly to a standard convolutional network by assigning an ordering to enable feature convolution. Scarselli et al. proposed Graph Neural Networks, in which one neural network is installed at each node of the graph and trained on input from all the incoming edges of the graph. One neural network per node makes the model computationally very expensive. Finally, with the rapid growth of deep learning, relational counterparts of most existing connectionist models have also been proposed [40, 33, 46, 49].
3 Neural Networks with Relational Parameter Tying
We first introduce some notation for relational logic, which we use for relational representation; the domain is represented using constants, variables and predicates. We adopt the following conventions: (1) constants used to represent entities in the domain are written in lower-case; (2) variables and entity types are capitalized; and (3) predicate symbols represent relations between entities and attributes. A grounding is a predicate applied to a tuple of terms, i.e., either a full or partial instantiation.
Rules are constructed from atoms using logical connectives and quantifiers. Due to the use of relational random walks, the relational rules that we employ are universally quantified conjunctions, where the head is the target of prediction and the body corresponds to the conditions that make up the rule (that is, each literal in the body is a predicate). We do not consider negations in this work.
An example rule could state that if a student is part of a project that a professor works on, then the student is advised by that professor. The body of such a rule is learned as a random walk that starts with the student and ends with the professor. Such a random walk represents a chain of relations that could connect a student to a professor and is a relational feature that could help in prediction. The rule head is the target that we are interested in predicting. Since these rules are essentially “soft” rules, we can also associate clauses with weights, i.e., weighted rules.
A relational neural network is a set of weighted rules describing interactions in the domain. We are given a set of atomic facts known to be true (the evidence) and labeled relational training examples. In general, labels can take multiple values, corresponding to a multi-class problem. We seek to learn a relational neural network model to predict a relation, given relational examples.
Given: a set of instances, a target relation, and a relational data set; Construct (structure learning): relational random walk rules (relational features describing the network structure); Train (parameter learning): rule weights via gradient descent with rule-based parameter tying, to identify a sparse set of network weights.
The movie domain contains entity types (variables) such as persons and movies. In addition, there are relations (features) among these entities, as well as relations for entity resolution. The task is to predict whether an actor worked under a director, with the worked-under relation as the target predicate (label).
3.1 Generating Lifted Random Walks
The core component of a neural network model is the architecture, which determines how the various neurons are connected to each other, and ultimately how all the input features interact with each other. In a relational neural network, the architecture is determined by the domain structure, or the set of relational rules that determine how various relations, entities and attributes interact in the domain, as shown earlier with the example. While previous approaches employed carefully hand-crafted rules, we instead use relational random walks to define the network architecture and model the local relational structure of the domain. A similar approach was also used by Kaur et al., though there the random walk features were used to instantiate a restricted Boltzmann machine, which has a far more limited architecture; moreover, their work is not lifted, since it instantiates the entire network before learning.
Relational data is often represented using a lifted graph, which defines the domain’s schema; in such a representation, a relation is a predicate edge between two type nodes. A relational random walk through this graph is a chain of such edges, corresponding to a conjunction of predicates. For a random walk to be semantically sound, we must ensure that the input type (argument domain) of each predicate in the chain is the same as the output type (argument range) of the preceding predicate.
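The soundness condition can be checked mechanically. Below is a minimal sketch, assuming the schema is stored as a map from predicate name to its (domain type, range type) pair; the predicate names are illustrative, taken from the movie-domain example.

```python
# Hypothetical movie-domain schema: predicate -> (domain type, range type).
SCHEMA = {
    "WorkedUnder": ("Person", "Person"),
    "ActedIn":     ("Person", "Movie"),
    "_ActedIn":    ("Movie", "Person"),   # inverse predicate (arguments reversed)
    "DirectedBy":  ("Movie", "Person"),
}

def is_sound(walk):
    """A chain of predicates is semantically sound iff the range type of
    each predicate equals the domain type of its successor."""
    for p, q in zip(walk, walk[1:]):
        if SCHEMA[p][1] != SCHEMA[q][0]:
            return False
    return True

sound = is_sound(["ActedIn", "DirectedBy"])   # Person -> Movie -> Person: sound
unsound = is_sound(["ActedIn", "ActedIn"])    # Movie does not match Person
```

A random-walk generator would apply a check like this at every extension step, so only type-consistent chains are ever produced.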
The body of the rule above can be represented graphically as a chain of predicate edges. This is a lifted random walk between the two entities in the target predicate. It is semantically sound, as it is possible to chain the second argument of each predicate to the first argument of the succeeding predicate. Such a walk may also contain an inverse predicate, which is distinct from the original predicate (since the argument types are reversed).
We use the path-constrained random walks approach to generate lifted random walks. These random walks form the backbone of the lifted neural network, as they are templates for various feature combinations in the domain. They can also be interpreted as domain rules, since they impart localized structure to the domain model, that is, they provide a qualitative description of the domain. When these rules, or lifted random walks, have weights associated with them, we are able to endow them with a quantitative influence on the target predicate. We now describe a novel approach to network instantiation using these random-walk-based relational features. A key component of the proposed instantiation is rule-based parameter tying, which significantly reduces the number of network parameters to be learned, while still effectively maintaining the quantitative influences described by the relational random walks.
3.2 Network Instantiation
The relational random walks generated in the previous subsection are the relational features of the lifted relational neural network. Our goal is to unroll and ground a network with several intermediate layers that capture the relationships expressed by the random walks. A key difference in network construction between our proposed work and recent approaches such as that of Šourek et al. is that we do not perform an exhaustive grounding to generate all possible instances before constructing the network. Instead, we ground only as needed, leading to a much more compact network. We unroll the network in the following manner (cf. Figure 1).
Output Layer: For the target predicate, which is also the head of all the rules, introduce an output neuron called the target neuron. With one-hot encoding of the target labels, this architecture can handle multi-class problems. The target neuron uses the softmax activation function. Without loss of generality, we describe the rest of the network unrolling assuming a single output neuron.
Combining Rules Layer: The target neuron is connected to lifted rule neurons, each corresponding to one of the lifted relational random walks. Each rule is a conjunction of predicates defined by a random walk and corresponds to a lifted rule neuron. This layer of neurons is fully connected to the output layer to ensure that all the lifted random walks (which capture the domain structure) influence the output. The extent of their influence is determined by learnable weights between each rule neuron and the output neuron.
In Fig. 1, we see that each rule neuron is connected to neurons corresponding to instantiations of its random walk. The lifted rule neuron aims to combine the influence of the groundings/instantiations of the random-walk feature that are true in the evidence. Thus, each lifted rule neuron can also be viewed as a rule combination neuron. The activation function of a rule combination neuron can be any aggregator or combining rule. This includes value aggregators such as weighted mean or max, and distribution aggregators (if the inputs to this layer are probabilities) such as Noisy-Or. Many such aggregators can be incorporated into the combining rules layer with appropriate weights and activation functions of the rule neurons. For instance, combining rule instantiations with a weighted mean requires learning the corresponding weights, with the nodes using unit functions for activation. The formulation of this layer is more general than, and subsumes, the approach of Šourek et al., which uses a max combination layer.
Grounding Layer: For each instantiated (ground) random walk, we introduce a ground rule neuron. This ground rule neuron represents a particular instantiation (grounding) of the body of a rule (cf. eqn 1). The activation function of a ground rule neuron is a logical AND; it is only activated when all its constituent inputs are true (that is, only when the entire instantiation is true in the evidence). This requires all the constituent facts to be in the evidence. Thus, each ground rule neuron is connected to all the fact neurons that appear in its corresponding instantiated rule body. A key novelty of our approach concerns relational parameter tying: the weights of the connections between the fact and grounding layers are tied by the rule in which these facts appear together. This is described in detail below.
Input Layer: Each instantiated (ground) predicate that appears as part of an instantiated rule body is a fact. For each such instantiated fact, we create a fact neuron, ensuring that each unique fact in evidence has only a single neuron associated with it. Every example is a collection of facts; thus, an example is input into the system by simply activating its constituent facts in the input layer.
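The bottom two layers of this construction can be summarized in a small sketch. This is an illustrative sketch, not the paper's implementation: it assumes groundings arrive as lists of ground-fact strings grouped by the lifted rule that generated them, and all names are hypothetical.

```python
def unroll(groundings_per_rule, evidence):
    """Unroll the network bottom-up:
    input layer : one fact neuron per unique fact in evidence;
    grounding   : one AND neuron per instantiation fully true in evidence
                  (the combining-rules and output layers then attach one
                  neuron per lifted rule on top of these)."""
    fact_neurons = {f: i for i, f in enumerate(sorted(evidence))}
    grounding_layer = []  # (rule index, fact-neuron ids) per true grounding
    for j, groundings in enumerate(groundings_per_rule):
        for body in groundings:
            if all(f in fact_neurons for f in body):  # logical-AND activation
                grounding_layer.append((j, [fact_neurons[f] for f in body]))
    return fact_neurons, grounding_layer

evidence = {"ActedIn(a1, m1)", "DirectedBy(m1, d1)", "ActedIn(a1, m2)"}
rules = [  # groundings of one lifted random walk
    [["ActedIn(a1, m1)", "DirectedBy(m1, d1)"],   # fully true in evidence
     ["ActedIn(a1, m2)", "DirectedBy(m2, d1)"]],  # second fact missing
]
facts, ground = unroll(rules, evidence)
```

Only the first grounding produces a ground rule neuron, since its entire body is present in the evidence; partially supported instantiations are never unrolled.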
Relational Parameter Tying: The most important aspect of this construction is that we employ rule-based parameter tying for the weights between the grounding layer and the input (facts) layer. Parameter tying ensures that instances corresponding to an example all share the same weight if they occur in the same lifted rule. The shared weights are propagated through the network in a bottom-up fashion, ensuring that weights in the succeeding hidden layers are influenced by them.
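The parameter-count effect of tying can be sketched as follows (all sizes and weights are illustrative, not from the paper): every fact-to-grounding edge produced by the same lifted rule looks up one shared weight, so the layer has as many free parameters as there are lifted rules, not as there are ground edges.

```python
def grounding_preactivations(groundings, tied_w, fact_values):
    """Pre-activation of each ground rule neuron: the rule's single
    shared weight multiplies every incoming fact value (tying)."""
    out = []
    for rule_idx, fact_ids in groundings:
        w = tied_w[rule_idx]            # one shared weight per lifted rule
        out.append(sum(w * fact_values[i] for i in fact_ids))
    return out

tied_w = [0.5, -1.0]                    # 2 lifted rules -> only 2 parameters
groundings = [(0, [0, 1]), (0, [2, 3]), (1, [1, 3])]  # 6 ground edges total
fact_values = [1, 1, 1, 0]
preact = grounding_preactivations(groundings, tied_w, fact_values)
```

An untied layer would instead learn one weight per ground edge (six here); the gap widens rapidly as the number of groundings per rule grows.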
Our approach to parameter tying is in sharp contrast to that of Šourek et al., who learn the weights of the network edges between the output layer and the combining rules layer. Furthermore, they also use fuzzy facts (weighted instances), whereas in our case the facts/instances are Boolean and their edge weights are tied. Our approach also differs from that of Kaur et al., who also use relational random walks. From a parametric standpoint, Kaur et al. used relational random walks as features for a restricted Boltzmann machine, where the instance neurons and the rule neurons form a bipartite graph. Thus, the relational RBM formulation has significantly more edges, and commensurately more parameters to optimize during learning.
Example (continued, see Fig. 2)
Consider two lifted random walks for the target predicate.
Note that while an inverse predicate is syntactically different from the original predicate (the argument order is reversed), the two are semantically the same. The output layer consists of a single neuron corresponding to the binary target. The lifted rule layer (also known as the combining rules layer) has two lifted rule nodes, one corresponding to each rule. These rule nodes combine inputs corresponding to instantiations that are true in the evidence. The network is unrolled based on the specific training example. For this example, the first rule has two instantiations that are true in the evidence, and we introduce a ground rule node for each such instantiation.
The second rule has only one instantiation, and consequently only one node.
The grounding layer consists of ground rule nodes corresponding to instantiations of rules that are true in the evidence. The edges have weights that depend on the combining rule implemented in the rule layer; in this example, the combining rule is average. The input layer consists of the atomic facts in evidence. The fact nodes that appear in a grounding are connected to the corresponding ground rule neuron. Finally, parameters are tied on the edges between the facts layer and the grounding layer. This ensures that all facts that ultimately contribute to a rule are pooled together, which increases the influence of the rule during weight learning. This, in turn, ensures that a rule that holds strongly in the evidence gets a higher weight.
Once the network is instantiated, its weights can be learned using standard techniques such as backpropagation. We denote our approach Neural Networks with Relational Parameter Tying (NNRPT). The tied parameters incorporate the structure captured by the relational features (lifted random walks), leading to a network with significantly fewer weights, while also endowing it with semantic interpretability regarding the discriminative power of the relational features. We now demonstrate the importance of parameter tying and the use of relational random walks as compared to previous frameworks.
4 Experiments
Our empirical evaluation aims to answer the following questions (implementation available at https://github.com/navdeepkjohal/NNRPT): Q1: How does NNRPT compare to state-of-the-art SRL models, i.e., what is the value of learning a neural network over standard models? Q2: How does NNRPT compare to propositionalization models, i.e., what is the need for parameterization of standard neural networks? Q3: How does NNRPT compare to other relational neural networks in the literature?
We use five standard data sets to evaluate our algorithm (see Table 1). Uw-Cse is a standard data set containing information about professors, students and courses from different areas of computer science; the task is to predict the advising relationship between a professor and a student. Imdb was first created by Mihalkova and Mooney and contains nine predicates. We predict whether an actor worked under a director. Cora is a citation-matching data set modified by Poon and Domingos. The task is to predict whether one venue is the same as another.
Mutagenesis was originally used to predict whether a compound is mutagenic or not. It consists of properties of compounds, their constituent atoms, and the types of bonds that exist between atoms. We perform relation prediction of whether an atom is a constituent of a given molecule or not. Sports consists of facts from the sports domain crawled by the Never-Ending Language Learner (NELL), including details of players, sports, individual plays, league information, etc. The goal is to predict which sport a particular team plays.
Baselines and Experimental Details:
To answer Q1, we compare NNRPT with recent state-of-the-art relational gradient-boosting methods and relational restricted Boltzmann machines. As the random walks chain binary predicates in our model, we convert unary and ternary predicates into binary predicates for all data sets. Further, to maintain consistency in experimentation, we use the same resulting predicates across all our baselines. We run the gradient-boosting baselines with their default settings, learning trees for each model, and train the relational RBM baselines according to their recommended settings.
For NNRPT, we generate random walks by considering each predicate and its inverse to be two distinct predicates. We also avoid loops in the random walks by enforcing sanity constraints during random walk generation. We use the number of random walks for each data set suggested by Kaur et al. (see Table 1). Since we use a large number of random walks, exhaustive grounding becomes prohibitively expensive; to overcome this, we sample a fixed number of groundings per random walk per example for Cora, Sports, Mutagenesis and Uw-Cse (see Table 1).
For all experiments, we fix the positive-to-negative example ratio for training, use average as the combination function, and perform cross-validation. For NNRPT, we tune the learning rate, batch size, and number of epochs, and train the model with L2-regularized AdaGrad. Since these are relational data sets where the data is skewed, AUC-PR and AUC-ROC are better measures than likelihood or accuracy.
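For reference, an L2-regularized AdaGrad update can be sketched as follows. The learning rate, regularization strength, and epsilon below are illustrative defaults, not the settings used in our experiments.

```python
import math

def adagrad_step(w, grad, accum, lr=0.1, l2=1e-4, eps=1e-8):
    """One AdaGrad step, with the L2 penalty folded into the gradient:
    each coordinate is scaled by the root of its accumulated squared
    gradients, giving per-parameter adaptive learning rates."""
    new_w, new_accum = [], []
    for wi, gi, ai in zip(w, grad, accum):
        g = gi + l2 * wi                 # gradient of loss + (l2/2)*w^2
        a = ai + g * g                   # accumulate squared gradient
        new_w.append(wi - lr * g / (math.sqrt(a) + eps))
        new_accum.append(a)
    return new_w, new_accum

# One step from w=1.0 with gradient 0.5 and no regularization:
w, accum = adagrad_step([1.0], [0.5], [0.0], lr=0.1, l2=0.0)
```

Because each weight accumulates its own squared-gradient history, weights tied across many groundings (which receive frequent gradient signal) are damped more aggressively than rarely updated weights.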
To answer Q2, we generate flat feature vectors by Bottom Clause Propositionalization (BCP), in which one bottom clause is generated for each example. BCP treats each predicate in the body of the bottom clause as a unique feature when it propositionalizes bottom clauses into flat feature vectors. We use Progol to generate these bottom clauses. After propositionalization, we train two connectionist models: a propositionalized restricted Boltzmann machine and a propositionalized neural network. The NN has two hidden layers in our experiments, which makes the propositionalized NN a modified version of CILP++, which had one hidden layer. The hyper-parameters of both models were optimized by line search on a validation set.
To answer Q3, we compare our model with Lifted Relational Neural Networks (LRNN). To ensure fairness, we perform structure learning using Progol and input the same clauses to both models. Progol learned a small number of clauses for each of Cora, Imdb, Sports, Uw-Cse and Mutagenesis in our experiments.
Table 2 compares NNRPT with the SRL baselines to answer Q1. NNRPT is significantly better than some baselines on Cora and Sports on both AUC-ROC and AUC-PR, better than others on Imdb and Cora, and comparable on the remaining data sets. Broadly, Q1 can be answered affirmatively: NNRPT performs comparably to or better than state-of-the-art SRL models.
Table 3 compares NNRPT with the two propositionalization models to answer Q2. NNRPT performs better than the propositionalized RBM on all data sets except Mutagenesis, where the two models have similar performance. NNRPT also performs better than the propositionalized NN on all data sets. It should be noted that BCP feature generation sometimes introduces a large positive-to-negative example skew (for example, on the Imdb data set), which can gravely affect the performance of a propositional model, as we observe in Table 3. This emphasizes the need for models that can handle relational data directly, without propositionalization; our proposed model is an effort in this direction. Q2 can now be answered affirmatively: NNRPT performs better than propositionalization models.
Table 4 compares the performance of NNRPT and LRNN when both use clauses learned by Progol. NNRPT performs better on Uw-Cse and Sports when evaluated using AUC-PR. This result is especially significant because these data sets are considerably skewed. NNRPT also outperforms LRNN on Cora and Mutagenesis. Lastly, the two models have comparable performance on Imdb on both AUC-ROC and AUC-PR. The large performance gap between the two models on Cora is likely because LRNN could not build effective models with the small number of clauses (i.e., four) typically learned by Progol. In contrast, even with very few clauses, NNRPT is able to outperform LRNN. This helps us answer Q3 affirmatively: NNRPT offers many advantages over state-of-the-art relational neural networks.
In summary, our experiments clearly show the benefits of parameter tying as well as the expressivity of relational random walks when tightly integrated with a neural network model, across a wide variety of domains and settings. The key strengths of NNRPT are that it can (1) efficiently incorporate a large number of relational features, (2) capture local qualitative structure through relational random walk features, and (3) tie feature weights (parameter tying) in a manner that captures global quantitative influences.
NNRPT can be considered a special instance of a convolutional network in relational domains, where the edges between the fact and grounding layers are the equivalent of convolution, the combining rules layer represents pooling, and the softmax layer is the fully-connected layer. Suppose we perform a full and exhaustive grounding of the neural network, with a given number of lifted random walks (template rules), grounded random walks (instances of a template rule), and facts (atomic instances). The data can then be represented as a three-dimensional tensor over these three dimensions, whose elements are the fact activations (see the discussion of the Input Layer in Section 3.2). If we view the rule layer as a tensor whose parameters are tied across groundings, it constitutes the convolving filter that is repeatedly applied to each ground instance. The resulting tensor, representing the output of the grounding layer, passes through a pooling layer (here, the rule-combination layer) to downsample the data and produce a new tensor. This tensor, when composed with the fully-connected non-linear layer of our model, produces an output representing the probability of each class.
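Under this reading, a fully grounded forward pass can be sketched with plain Python lists. All sizes and weight values below are illustrative: the tied rule weights act as filters over fact activations, average-combining acts as pooling, and a final dense layer scores the target.

```python
def forward(facts, w_rule, w_out):
    """facts  : R x G x F binary tensor of fact activations,
    w_rule    : R x F filters, tied across the G groundings of each rule,
    w_out     : R weights from the combining layer to the output neuron."""
    R, G, F = len(facts), len(facts[0]), len(facts[0][0])
    # 'Convolution': apply each rule's shared filter to each grounding.
    conv = [[sum(w_rule[r][f] * facts[r][g][f] for f in range(F))
             for g in range(G)] for r in range(R)]
    # 'Pooling': average-combine the groundings of each rule.
    pooled = [sum(conv[r]) / G for r in range(R)]
    # Fully-connected output (pre-softmax score for the target).
    return sum(w_out[r] * pooled[r] for r in range(R))

facts = [[[1, 0], [1, 1]],      # rule 0: two groundings over two facts
         [[0, 1], [0, 0]]]      # rule 1
w_rule = [[0.5, 0.5], [1.0, -1.0]]
w_out = [1.0, 2.0]
score = forward(facts, w_rule, w_out)
```

The same R x F filter is reused for every one of the G groundings of a rule, which is exactly the weight reuse a convolutional filter performs across spatial positions.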
5 Conclusion and Future Work
We considered the problem of learning neural networks from relational data. Our proposed architecture exploits parameter tying, i.e., different instances of the same rule share the same parameters within the same training example. In addition, we explored the use of relational random walks to create relational features for training these neural networks. Further experiments on larger data sets could yield insights into the scalability of this approach. Integration with an approximate-counting method could potentially reduce training time. Given the relation to CNNs, stacking could allow our method to be deeper. Finally, understanding the use of such random-walk-based neural networks as function approximators could allow for efficient and interpretable learning in relational domains with minimal feature engineering.
-  Bach, S., Broecheler, M., Huang, B., Getoor, L.: Hinge-loss Markov random fields and probabilistic soft logic. JMLR (2017)
-  Blockeel, H., Uwents, W.: Using neural networks for relational learning. In: ICML Workshop (2004)
-  Bordes, A., Glorot, X., Weston, J., Bengio, Y.: Joint learning of words and meaning representations for open-text semantic parsing. In: AISTATS (2012)
-  Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NeurIPS (2013)
-  Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka, Jr., E.R., Mitchell, T.M.: Toward an architecture for never-ending language learning. In: AAAI (2010)
-  Das, R., Neelakantan, A., Belanger, D., McCallum, A.: Chains of reasoning over entities, relations, and text using recurrent neural networks. In: EACL (2017)
-  De Raedt, L., Kersting, K., Natarajan, S., Poole, D.: Statistical Relational Artificial Intelligence: Logic, Probability, and Computation. Morgan & Claypool (2016)
-  DiMaio, F., Shavlik, J.: Learning an approximation to inductive logic programming clause evaluation. In: ILP (2004)
-  Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. JMLR (2011)
-  Evans, R., et al.: Can neural networks understand logical entailment? In: ICLR (2018)
-  França, M.V.M., Zaverucha, G., d’Avila Garcez, A.S.: Fast relational learning using bottom clause propositionalization with artificial neural networks. MLJ (2014)
-  Garcez, A.S.d., Gabbay, D.M., Broda, K.B.: Neural-Symbolic Learning Systems: Foundations and Applications. Springer-Verlag (2002)
-  Getoor, L., Friedman, N., Koller, D., Pfeffer, A.: Learning probabilistic relational models. RDM (2001)
-  Getoor, L., Taskar, B.: Introduction to Statistical Relational Learning. MIT Press (2007)
-  Hu, Z., Ma, X., Liu, Z., Hovy, E.H., Xing, E.P.: Harnessing deep neural networks with logic rules. In: ACL (2016)
-  Jaeger, M.: Parameter learning for relational Bayesian networks. In: ICML (2007)
-  Kaur, N., Kunapuli, G., Khot, T., Kersting, K., Cohen, W., Natarajan, S.: Relational restricted boltzmann machines: A probabilistic logic learning approach. In: ILP (2017)
-  Kazemi, S.M., Buchman, D., Kersting, K., Natarajan, S., Poole, D.: Relational logistic regression. In: KR (2014)
-  Kazemi, S.M., Poole, D.: RelNN: A deep neural model for relational learning. In: AAAI (2018)
-  Khot, T., Natarajan, S., Kersting, K., Shavlik, J.: Learning Markov logic networks via functional gradient boosting. In: ICDM (2011)
-  Komendantskaya, E.: First-order deduction in neural networks. In: LATA (2007)
-  Lao, N., Cohen, W.: Relational retrieval using a combination of path-constrained random walks. MLJ (2010)
-  Larochelle, H., Bengio, Y.: Classification using discriminative restricted boltzmann machines. In: ICML (2008)
-  Lavrac, N., Džeroski, S.: Inductive Logic Programming: Techniques and Applications. Prentice Hall (1993)
-  Lodhi, H., Muggleton, S.: Is mutagenesis still challenging? In: ILP (2005)
-  Lodhi, H.: Deep relational machines. In: ICONIP (2013)
-  Mihalkova, L., Mooney, R.: Bottom-up learning of Markov logic network structure. In: ICML (2007)
-  Muggleton, S.: Inverse entailment and Progol. New Generation Computing (1995)
-  Natarajan, S., Khot, T., Kersting, K., Guttmann, B., Shavlik, J.: Gradient-based boosting for statistical relational learning: Relational dependency network case. MLJ (2012)
-  Natarajan, S., Tadepalli, P., Dietterich, T.G., Fern, A.: Learning first-order probabilistic models with combining rules. ANN MATH ARTIF INTEL (2008)
-  Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multirelational data. In: ICML (2011)
-  Niepert, M., Ahmed, M., Kutzkov, K.: Learning convolutional neural networks for graphs. In: ICML (2016)
-  Palm, R.B., Paquet, U., Winther, O.: Recurrent relational networks for complex relational reasoning. In: ICLR (2018)
-  Perozzi, B., Al-Rfou’, R., Skiena, S.: Deepwalk: online learning of social representations. In: KDD (2014)
-  Pham, T., Tran, T., Phung, D.Q., Venkatesh, S.: Column networks for collective classification. In: AAAI (2016)
-  Poon, H., Domingos, P.: Joint inference in information extraction. In: AAAI (2007)
-  Ramon, J., Raedt, L.D.: Multi instance neural network. In: ICML Workshop (2000)
-  Richardson, M., Domingos, P.: Markov logic networks. MLJ (2006)
-  Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Transactions on Neural Networks (2009)
-  Schlichtkrull, M., Kipf, T.N., Bloem, P., van den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: ESWC (2018)
-  Socher, R., Chen, D., Manning, C., Ng, A.: Reasoning with neural tensor networks for knowledge base completion. In: NeurIPS (2013)
-  Šourek, G., Manandhar, S., Železný, F., Schockaert, S., Kuželka, O.: Learning predictive categories using lifted relational neural networks. In: ILP (2016)
-  Towell, G.G., Shavlik, J.W., Noordewier, M.O.: Refinement of approximate domain theories by knowledge-based neural networks. In: AAAI (1990)
-  Šourek, G., Aschenbrenner, V., Železny, F., Kuželka, O.: Lifted relational neural networks. In: NeurIPS Workshop (2015)
-  Šourek, G., Svatoš, M., Železný, F., Schockaert, S., Kuželka, O.: Stacked structure learning for lifted relational neural networks. In: ILP (2017)
-  Wang, H., Shi, X., Yeung, D.: Relational stacked denoising autoencoder for tag recommendation. In: AAAI (2015)
-  Yang, B., Yih, W.T., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. In: ICLR (2015)
-  Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: COLING (2014)