Extending Logical Neural Networks using First-Order Theories

by Aidan Evans, et al.
Yale University

Logical Neural Networks (LNNs) are an architecture that combines a neural network's ability to learn with a formal logic system's ability to perform symbolic reasoning. LNNs let programmers implicitly modify the underlying structure of the neural network via logical formulae. In this paper, we take advantage of this abstraction to extend LNNs with support for equality and function symbols via first-order theories. This extension increases the power of LNNs by significantly broadening the range of problems they can tackle. As a proof of concept, we add support for the first-order theory of equality to IBM's LNN library and demonstrate how this allows the library to reason about expressions without making the unique-names assumption.
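To make the unique-names point concrete, the following is a minimal sketch (plain Python, not the IBM LNN API) of the first-order theory of equality realized as a union-find over constant symbols. Without the unique-names assumption, two distinct names such as `"superman"` and `"clark_kent"` may denote the same individual, so a fact asserted about one name must hold for the other; all names and predicates here are illustrative.

```python
class EqualityTheory:
    """Union-find over constant symbols: reflexivity, symmetry, and
    transitivity of equality fall out of the shared-root representation."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Find the representative of x's equivalence class, with path halving.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def assert_equal(self, a, b):
        # Merge the equivalence classes of a and b.
        self.parent[self._find(a)] = self._find(b)

    def equal(self, a, b):
        return self._find(a) == self._find(b)


eq = EqualityTheory()
facts = set()

# Two names for the same individual: no unique-names assumption.
eq.assert_equal("superman", "clark_kent")
facts.add(("Flies", "superman"))

def holds(pred, arg):
    # A fact about one name holds for every name equal to it.
    return any(p == pred and eq.equal(a, arg) for (p, a) in facts)
```

With this in place, `holds("Flies", "clark_kent")` is true even though the fact was asserted about `"superman"`, whereas a reasoner making the unique-names assumption would treat the two constants as distinct individuals.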

