SLDR-DL is a general-purpose framework for SLD-resolution with deep learning. The name “SLD-resolution” is the abbreviation of SL-resolution for definite clauses [1, 2], while the name “SL-resolution” is the abbreviation of linear resolution with selection function [3]. In the SLDR-DL framework, computers can reason and learn to reason by using definite clauses [4] and deep neural networks [5]. The core concept of this framework is to train neural networks on successful resolution processes and to use the trained neural networks to guide new resolution processes heuristically.
The SLDR-DL framework has two aims. The first is to simulate the interaction between learning and reasoning: systems are expected to learn from reasoning processes and to use the learnt experience to guide new reasoning processes. The second is to address the problem of combinatorial explosion in automated reasoning [6]: when a problem becomes complex, its search tree often grows rapidly, and many complex problems fail to be resolved because it is difficult to find true answers in huge search trees.
Specifically, the framework includes:

- A Prolog library of deep neural networks.
- An implementation of SLD-resolution with deep learning.
- Some worked examples.
The source code will be updated continuously.
2 Related Work
SLD-resolution is a fundamental technique of automated reasoning. It has been used in many fields of artificial intelligence. For instance, Prolog is a programming language based on this technique [7, 8]. Mathematical reasoning processes, such as pattern matching, variable substitution and implication, can be simulated in Prolog. Also, belief-desire-intention (BDI) agents can be developed with this technique [9, 10].
Recently, many researchers have explored how to use deep learning to realise reasoning. For instance, Irving et al. [11] have developed DeepMath, which uses deep neural networks to select possible premises in automated theorem proving. Also, Serafini and Garcez [12] have proposed Real Logic for the integration of learning and reasoning. In the field of reinforcement learning, Garnelo et al. [13] have tried to teach deep neural networks to generate symbols and build representations. In addition, Cai et al. [14] have explored the possibility of using deep feedforward neural networks to guide algebraic reasoning processes.
3 SLD-Resolution with Deep Learning
SLD-resolution with deep learning is a fundamental technique of the SLDR-DL framework. It enables deep neural networks to guide new resolution processes after learning from earlier successful ones.
3.1 SLD-Resolution

SLD-resolution [1, 2] is a process deciding whether a goal is satisfiable with respect to a set of definite clauses. It is based on unification, definite clauses and resolution. In this section, we assume that readers are familiar with these techniques; only essential definitions and simple examples are provided to aid readability.
3.1.1 Unification

Unification [15] is one of the core algorithms of logic programming. It can make two terms equivalent by substitution:
Definition 1 (Term). A term is a constant, a variable, or a functor followed by a sequence of terms. Formally, it is defined as:

t ::= c | v | f(t_1, t_2, ..., t_n)

where c is a constant, v is a variable, f is a functor and t_1, t_2, ..., t_n are terms.
A term can be used to represent facts. For instance, if loves(X, Y) means “X loves Y”, and knows(X, Y) means “X knows Y”, then “Haibara knows that Conan loves Ran” can be represented as knows(haibara, loves(conan, ran)).
Definition 2 (Unification). Unification is a process deciding whether or not two terms can be made equivalent by substituting their variables with other variables or constants. The standard unification algorithm unifies two terms by computing the most general unifier (MGU). A unifier of two terms is a set of substitutions which makes the two terms equivalent, and the MGU is the unifier from which every other unifier of the two terms can be obtained by further substitution. Formally, unification produces the MGU σ of two terms t_1 and t_2 such that:

t_1 σ = t_2 σ
For instance, f(X, b) and f(a, Y) can be unified by applying the MGU {X/a, Y/b}, where “/” is the substitution operation.
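The MGU computation can be illustrated with a short sketch. The following Python code is an illustrative implementation of syntactic unification, not the framework's Prolog code; the term encoding (upper-case strings as variables, tuples as compound terms) is an assumption made for this example, and the occurs check is omitted for brevity.

```python
# Illustrative syntactic unification computing an MGU.
# Encoding (assumed for this sketch): variables are strings starting with an
# upper-case letter, constants are other strings, and compound terms are
# tuples of the form (functor, arg1, ..., argN).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return the MGU of t1 and t2 as a dict of substitutions, or None."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and \
       t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, b) and f(a, Y) unify with the MGU {X/a, Y/b}:
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
```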
3.1.2 Definite Clauses
Definite clauses [4] are used to represent relations between terms, especially their implication relations:
Definition 3 (Definite Clause). A definite clause is an implication relation between multiple premises and a single conclusion. Formally, it is defined as:

p_1 ∧ p_2 ∧ ... ∧ p_n → c    (3)

where p_1, p_2, ..., p_n are premises, “∧” is the logical AND, “→” is the implication symbol, and c is a conclusion. All the premises and the conclusion are terms.
Definition 4 (Disjunction Form). The disjunction form of the definite clause is:

c ∨ ¬p_1 ∨ ¬p_2 ∨ ... ∨ ¬p_n    (4)

where “∨” is the logical OR, and “¬” is the logical NOT. In this formula, c is called a positive literal, and ¬p_1, ¬p_2, ..., ¬p_n are called negative literals. Formula (3) and Formula (4) can be proved to be logically equivalent [16].
For instance, “if X is bigger than Y, and Y is bigger than Z, then X is bigger than Z” can be represented as: bigger(X, Y) ∧ bigger(Y, Z) → bigger(X, Z), and its disjunction form is bigger(X, Z) ∨ ¬bigger(X, Y) ∨ ¬bigger(Y, Z).
3.1.3 Resolution

The resolution algorithm [1, 2] can decide whether or not a goal is satisfiable:
Definition 5 (Goal). A goal is a definite clause with an empty conclusion, written p_1 ∧ p_2 ∧ ... ∧ p_n → , and its disjunction form is ¬p_1 ∨ ¬p_2 ∨ ... ∨ ¬p_n.
Definition 6 (Rule). A rule is a definite clause with a conclusion c, and its disjunction form is c ∨ ¬p_1 ∨ ¬p_2 ∨ ... ∨ ¬p_n. In particular, a rule is called an assertion if its premise is empty. In this case, it becomes → c, and its disjunction form is simply c.
Definition 7 (SLD-Resolution). SLD-resolution is a process analysing goals by applying rules: Assume that a goal is ¬g_1 ∨ ... ∨ ¬g_i ∨ ... ∨ ¬g_m and a rule is c ∨ ¬p_1 ∨ ... ∨ ¬p_n. Firstly, a negative literal ¬g_i is selected from the goal. Secondly, unification is used to compute the MGU σ such that g_i σ = c σ. Lastly, if the unification process is successful, then ¬g_i is replaced by ¬p_1 ∨ ... ∨ ¬p_n, and the goal becomes (¬g_1 ∨ ... ∨ ¬p_1 ∨ ... ∨ ¬p_n ∨ ... ∨ ¬g_m)σ. In particular, if the rule is an assertion, then the goal becomes (¬g_1 ∨ ... ∨ ¬g_{i-1} ∨ ¬g_{i+1} ∨ ... ∨ ¬g_m)σ, as ¬g_i is eliminated. The above process is run iteratively until the goal is empty, and backtracking is used to select new rules when unification fails.
For instance, given three rules:

bigger(a, b)
bigger(b, c)
bigger(X, Z) ∨ ¬bigger(X, Y) ∨ ¬bigger(Y, Z)

and a goal:

¬bigger(a, c)

SLD-resolution can prove bigger(a, c) by resolving the goal step by step (“□” is used to represent “empty”):

¬bigger(a, c)
⟹ ¬bigger(a, Y) ∨ ¬bigger(Y, c)    (by the third rule, with the MGU {X/a, Z/c})
⟹ ¬bigger(b, c)                    (by the first rule, with the MGU {Y/b})
⟹ □                                (by the second rule)
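The worked example above can be reproduced with a small illustrative resolver. The following Python sketch is not the framework's implementation; the term encoding (upper-case strings as variables, tuples as compound terms) and the rule list are assumptions made for this example.

```python
# Illustrative SLD-resolution with backtracking for the bigger/2 example:
# rules bigger(a, b), bigger(b, c) and bigger(X, Z) <- bigger(X, Y), bigger(Y, Z).
# (Like Prolog, this depth-first search may loop on some unprovable goals.)

def is_var(t): return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s: t = s[t]
    return t

def unify(x, y, s):
    # Syntactic unification; returns an extended substitution or None.
    x, y = walk(x, s), walk(y, s)
    if x == y: return s
    if is_var(x): return {**s, x: y}
    if is_var(y): return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and \
       len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, s)
            if s is None: return None
        return s
    return None

def rename(t, n):
    # Rename variables apart for each rule application.
    if is_var(t): return t + "_" + str(n)
    if isinstance(t, tuple): return (t[0],) + tuple(rename(a, n) for a in t[1:])
    return t

RULES = [
    (("bigger", "a", "b"), []),    # assertion
    (("bigger", "b", "c"), []),    # assertion
    (("bigger", "X", "Z"), [("bigger", "X", "Y"), ("bigger", "Y", "Z")]),
]

def resolve(goals, s, depth=0):
    """SLD-resolution with backtracking; True iff the goal becomes empty."""
    if not goals: return True
    first, rest = goals[0], goals[1:]
    for head, body in RULES:
        head, body = rename(head, depth), [rename(b, depth) for b in body]
        s2 = unify(first, head, dict(s))
        if s2 is not None and resolve(body + rest, s2, depth + 1):
            return True
    return False

print(resolve([("bigger", "a", "c")], {}))  # True: bigger(a, c) is proved
```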
3.2 Deep Neural Networks
Deep neural networks are used to select rules during the process of SLD-resolution. In this section, we assume that readers are familiar with deep neural networks, and only essential definitions are provided.
Definition 8 (Deep Feedforward Neural Network). A deep feedforward neural network (DFNN) [5] is a neural network satisfying: (1) it has 5 or more hidden layers; (2) two neighbouring layers are fully connected; (3) it does not have any recurrent connections. A DFNN can map an input vector to an output vector.
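As a concrete illustration of Definition 8, the following Python sketch computes the forward pass of a small fully connected network. The weights, shapes and the ReLU activation are illustrative choices, not values prescribed by the framework.

```python
# Illustrative forward pass of a fully connected feedforward network,
# using plain lists (no external libraries).

def dense(x, weights, biases):
    # One fully connected layer: y_j = sum_i x_i * weights[i][j] + biases[j].
    return [sum(xi * wij for xi, wij in zip(x, col)) + b
            for col, b in zip(zip(*weights), biases)]

def relu(x):
    return [max(0.0, v) for v in x]

def forward(x, layers):
    # layers is a list of (weights, biases) pairs; ReLU between layers,
    # linear output at the top.
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# A toy 2-layer network mapping a 2-dim input to a 1-dim output:
layers = [
    ([[1.0, -1.0], [0.5, 2.0]], [0.0, 0.0]),  # 2 -> 2
    ([[1.0], [1.0]], [0.5]),                  # 2 -> 1
]
print(forward([1.0, 2.0], layers))  # [5.5]
```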
Definition 9 (Back-Propagation). Back-propagation [17] is a supervised learning method for neural networks. Given an input vector, a feedforward neural network maps it to an output vector; an error between the output vector and a target vector is then computed, and back-propagation propagates this error back through the layers to update the weights of the network.
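Definition 9 can be made concrete with a minimal sketch. The code below performs one back-propagation update for a single linear layer with a softmax output and a cross-entropy loss; deeper networks repeat the same error propagation layer by layer. All names and values here are illustrative, not the framework's own.

```python
# Illustrative back-propagation step for one linear layer with a softmax
# output and a one-hot target (cross-entropy loss).

import math

def softmax(z):
    m = max(z)                              # subtract max for stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [x / s for x in e]

def backprop_step(w, b, x, target, lr=0.1):
    """One gradient step on weights w[i][j] and biases b[j]; returns the
    output computed before the update."""
    z = [sum(x[i] * w[i][j] for i in range(len(x))) + b[j]
         for j in range(len(b))]
    y = softmax(z)
    err = [y[j] - target[j] for j in range(len(b))]  # dLoss/dz for softmax+CE
    for i in range(len(x)):
        for j in range(len(b)):
            w[i][j] -= lr * x[i] * err[j]
    for j in range(len(b)):
        b[j] -= lr * err[j]
    return y

# Repeated updates push the output towards the one-hot target:
w, b = [[0.0, 0.0]], [0.0, 0.0]
for _ in range(50):
    y = backprop_step(w, b, [1.0], [1.0, 0.0])
print(y[0] > 0.9)  # True
```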
3.3 The SLDR-DL Framework
The SLDR-DL framework is the combination of SLD-resolution and DFNNs. It enables deep neural networks to guide resolution processes and to learn to guide them.
3.3.1 The Framework Structure
The core part of the SLDR-DL framework is an implementation of SLD-resolution with DFNNs.
Definition 10 (SLD-Resolution with DFNNs). SLD-resolution with DFNNs is adapted from standard SLD-resolution (see Definition 7). When resolving a goal, the following strategy is used: Firstly, a goal literal is encoded to an input vector. Secondly, a trained neural network is used to map the input vector to an output vector. Thirdly, the output vector is decoded to a ranking list of rules. Finally, rules are applied to the goal according to the ranking list. The methods of encoding and decoding will be discussed in Section 3.3.2.
In the above process, the neural network is used to predict the ranking list of rules for the given literal. Therefore, the neural network must learn to rank the rules before it is used for prediction.
Definition 11 (Learning by SLD-Resolution). Learning by SLD-resolution is a technique which trains neural networks on successful resolution processes. Before learning, a goal must be successfully resolved, and records of the resolution must be produced. Each record consists of a selected literal and the name of the rule which was applied to that literal. These records are used to train the neural network: Firstly, the selected literal is encoded to an input vector. Then the name of the rule is encoded to a target vector. Finally, the input vector and the target vector are used to train the neural network with the back-propagation algorithm [17]. The methods of encoding and decoding will be discussed in Section 3.3.2.
Based on the resolution function and the learning function discussed above, an SLDR-DL system usually consists of:
- A deep neural network.
- A rule set for resolution and the encoding and decoding of rules.
- A symbol set for the encoding of literals.
Definition 12 (Rule Set). A rule set contains logical rules with unique names and unique IDs. These rules are definite clauses written in disjunction form. Their IDs should be positive integers.
For instance, a rule set can contain the three rules of the example in Section 3.1, each paired with a unique name and a unique ID.
Definition 13 (Symbol Set). A symbol set contains symbols with unique IDs. The IDs should be positive integers.
For instance, a symbol set can assign the IDs 1, 2, 3 and 4 to the symbols bigger, a, b and c.
3.3.2 Encoding and Decoding
To enable neural networks to guide resolution processes, encoding and decoding are required, as discussed in Section 3.3.1: (1) selected literals should be encoded to input vectors; (2) rules should be encoded to target vectors; (3) output vectors should be decoded to ranking lists of rules. In the SLDR-DL framework, we have implemented the following encoding and decoding methods:
Given a symbol set S, a predefined depth d and a predefined breadth b, a negative literal ¬g is encoded to a vector via the following steps: Firstly, all variables of g are replaced by a placeholder notation (written “*” here); let g′ denote the resulting expression. Secondly, g′ is rewritten to a completed term with the depth d and the breadth b: all positions exceeding the depth or the breadth are omitted, and empty positions are filled with an empty-position notation (written “#” here). Thirdly, the completed term is flattened to a list. Finally, the list is represented as a vector by using one-hot encoding, where the activated positions are decided by the IDs of the symbols in S. In particular, “#” is encoded to a zero block.
A rule is encoded to a vector via one-hot encoding [18], according to its unique ID in the rule set.
An output vector is decoded to a ranking list via the following steps: Firstly, IDs are attached to all elements, so that the vector becomes a list of (ID, value) pairs. Secondly, the list is sorted by value in descending order. Finally, the order of IDs is read off from the sorted list, and this order decides the ranking list of rules.
3.3.3 The Education of SLDR-DL Systems
We use the word “education” instead of “training” because the process of optimising an SLDR-DL system usually proceeds from simple problems to complex problems and requires the interaction between learning and reasoning; this is similar to the process of educating a human. In other words, resolution in SLDR-DL is a heuristic search process which can optimise its search strategy via learning. Before learning, the system can resolve simple goals, but the resolution of complex goals may fail, because the search space may be huge. After learning in proper ways, the search space can be reduced, so that complex goals can be resolved successfully. Therefore, the education of an SLDR-DL system usually requires a schedule in which problems are sorted from simplest to hardest. Following the schedule, the system tries to resolve simple problems at the beginning, works out resolution records and learns from these records. Then the system proceeds to more complex problems and continues learning until all problems are resolved.
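The education schedule described above can be sketched schematically. The resolve and train functions below are hypothetical stand-ins for the framework's resolution and learning procedures; the problem list and its difficulty values are likewise illustrative.

```python
# Schematic curriculum loop: try problems from simplest to hardest and learn
# from each successful resolution. resolve and train are hypothetical
# stand-ins, injected as callables.

def educate(network, problems, resolve, train):
    """problems is a list of (difficulty, goal) pairs; returns solved goals."""
    solved = []
    for difficulty, goal in sorted(problems):
        records = resolve(network, goal)   # None on failure
        if records is not None:
            train(network, records)        # learn from the resolution records
            solved.append(goal)
    return solved

# Toy stand-ins: every goal "resolves" and training just logs the records.
log = []
solved = educate(
    network={},
    problems=[(2, "hard_goal"), (1, "easy_goal")],
    resolve=lambda net, g: [(g, "some_rule")],
    train=lambda net, recs: log.append(recs),
)
print(solved)  # ['easy_goal', 'hard_goal']
```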
3.3.4 A Prolog Library of Deep Neural Networks
The SLDR-DL framework also provides a Prolog library which supports essential neural network computations. Specifically, the library now supports:
- Matrix addition and multiplication.
- The back-propagation algorithm of feedforward neural networks.
- The Softmax classifier.
Details of the above functions can be found in [19]. To expand the use of the framework, more functions will be added to the library in the future.
4 A Practical Guide
To build and use an SLDR-DL system, users need to define a rule set, a symbol set and a neural network. These definitions should be coded in Prolog (preferably SWI-Prolog [7]).
4.1 Defining a Rule Set
A rule set is defined as a list of rules (definite clauses) written in disjunction form with their unique IDs and names. A rule is defined in the following format:
The disjunction form of a rule is defined as:
where each negative literal denotes a premise, and the positive literal denotes the conclusion. In particular, the number of negative literals can be zero, in which case the rule becomes an assertion. Figure 1 provides an example of a rule set, where N is the maximum ID of rules. It is important to note that when defining a rule set, we use the Prolog convention: a symbol is a constant if it is a number or its first letter is in lower case, and a symbol is a variable if its first letter is in upper case. For instance, the rule father(X, Y) ∨ ¬child(Y, X) ∨ ¬male(X) means that for any X and Y, if Y is a child of X, and X is a male, then X is the father of Y.
4.2 Defining a Symbol Set
A symbol set is defined as a list of symbols with their unique IDs. A symbol is defined in the following format:
Figure 2 provides an example of a symbol set, where N is the maximum ID of symbols.
4.3 Defining a Neural Network
A neural network can be defined as a list of layers, and each layer can be initialised separately. Figure 3 provides an example of the definition of a neural network.
4.4 Learning and Reasoning
The framework provides a core function on which both learning and reasoning processes are based. Figure 4 provides an example of how to use the core function: four goals are defined, of which the first three are used for learning and the last for testing; one predicate defines the number of learning epochs and the learning rate, another defines the breadth and the depth of encodings, and a third defines the dimension of decodings. When running the process, the neural network learns from the resolution processes of the first three goals and then tries to resolve the last goal. Figure 5 shows a result of running, including a record of cross-entropy losses and a resolution process of the final goal.
5 Conclusion

The SLDR-DL framework enables the interaction between resolution and deep learning. In the framework, users can define logical rules in the form of definite clauses, define neural networks and teach the neural networks to use the logical rules. The neural networks can learn from successful resolution processes and then use the learnt experience to guide new resolution processes. To expand the use of this framework, we will add more functions to it and refine it in the future.
-  Robert A. Kowalski. Predicate logic as programming language. In IFIP Congress, pages 569–574, 1974.
-  Krzysztof R. Apt and Maarten H. van Emden. Contributions to the theory of logic programming. J. ACM, 29(3):841–862, 1982.
-  Robert A. Kowalski and Donald Kuehner. Linear resolution with selection function. Artif. Intell., 2(3/4):227–260, 1971.
-  Alfred Horn. On sentences which are true of direct unions of algebras. J. Symb. Log., 16(1):14–21, 1951.
-  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7533):436–444, May 2015.
-  Alan Bundy. The computer modelling of mathematical reasoning. Academic Press, 1983.
-  Jan Wielemaker, Tom Schrijvers, Markus Triska, and Torbjörn Lager. SWI-Prolog. TPLP, 12(1-2):67–96, 2012.
-  Alain Colmerauer and Philippe Roussel. The birth of prolog. In History of Programming Languages Conference (HOPL-II), Preprints, Cambridge, Massachusetts, USA, April 20-23, 1993, pages 37–52, 1993.
-  Anand S. Rao. AgentSpeak(L): BDI agents speak out in a logical computable language. In Agents Breaking Away, 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, Eindhoven, The Netherlands, January 22-25, 1996, Proceedings, pages 42–55, 1996.
-  Rafael H. Bordini, Jomi Fred Hübner, and Michael Wooldridge. Programming Multi-Agent Systems in AgentSpeak using Jason (Wiley Series in Agent Technology). John Wiley and Sons, 2007.
-  Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas Eén, François Chollet, and Josef Urban. Deepmath - deep sequence models for premise selection. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2235–2243, 2016.
-  Luciano Serafini and Artur S. d’Avila Garcez. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. CoRR, abs/1606.04422, 2016.
-  Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards deep symbolic reinforcement learning. CoRR, abs/1609.05518, 2016.
-  Chenghao Cai, Dengfeng Ke, Yanyan Xu, and Kaile Su. Learning of human-like algebraic reasoning using deep feedforward neural networks. CoRR, abs/1704.07503, 2017.
-  Franz Baader and Wayne Snyder. Unification theory. In Handbook of Automated Reasoning (in 2 volumes), pages 445–532. 2001.
-  Michael Huth and Mark Dermot Ryan. Logic in computer science - modelling and reasoning about systems (2. ed.). Cambridge University Press, 2004.
-  Robert Hecht-Nielsen. Theory of the backpropagation neural network. Neural Networks, 1(Supplement-1):445–448, 1988.
-  Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. Word representations: A simple and general method for semi-supervised learning. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, pages 384–394, 2010.
-  Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., 2006.