FF-NSL: Feed-Forward Neural-Symbolic Learner

06/24/2021
by Daniel Cunnington et al.

Inductive Logic Programming (ILP) aims to learn generalised, interpretable hypotheses in a data-efficient manner. However, current ILP systems require training examples to be specified in a structured logical form. This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL) that integrates state-of-the-art ILP systems based on the Answer Set semantics with neural networks, in order to learn interpretable hypotheses from labelled unstructured data. FF-NSL uses a pre-trained neural network to extract symbolic facts from unstructured data, and an ILP system to learn a hypothesis that performs a downstream classification task. To evaluate the applicability of our approach to real-world problems, the framework is evaluated on tasks where a distributional shift is introduced into the unstructured input data, on which pre-trained neural networks are likely to predict incorrectly and with high confidence. Experimental results show that FF-NSL outperforms baseline approaches such as random forests and deep neural networks, learning more accurate and more interpretable hypotheses from fewer examples.
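The pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch, not FF-NSL's actual interface: the function names, the `digit`/`valid` predicates, the toy score-dictionary inputs, and the ILASP-style `#pos` example syntax are all assumptions made for the example.

```python
# Hypothetical sketch of an FF-NSL-style pipeline: a pre-trained network
# extracts symbolic facts from unstructured inputs, and those facts plus
# the downstream task label are assembled into an example for an
# ASP-based ILP system. All names below are illustrative assumptions.

def pretrained_net(image):
    """Stand-in for a pre-trained classifier mapping raw input to a symbol.

    In practice this would be e.g. a CNN's argmax over class scores; here
    the "image" is simply a dict of per-class confidence scores.
    """
    return max(image, key=image.get)

def to_ilp_example(inputs, label):
    """Turn neural predictions into an ILASP-style context-dependent example."""
    # One symbolic fact per unstructured input, extracted by the network.
    facts = [f"digit({i},{pretrained_net(x)})." for i, x in enumerate(inputs)]
    # Pair the task label (inclusion) with the extracted facts (context).
    return "#pos({%s}, {}, {%s})." % (label, " ".join(facts))

# Toy unstructured input: per-class confidence scores for two "images".
imgs = [{"0": 0.1, "8": 0.9}, {"3": 0.7, "5": 0.3}]
print(to_ilp_example(imgs, "valid"))
# prints: #pos({valid}, {}, {digit(0,8). digit(1,3).}).
```

An ILP system would then search for a hypothesis covering such examples; because the facts come from a network's (possibly wrong) predictions, the learner must tolerate some noise in the examples.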

Related research

- NSL: Hybrid Interpretable Learning From Noisy Raw Data (12/09/2020)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic (10/21/2021)
- NeurASP: Embracing Neural Networks into Answer Set Programming (07/15/2023)
- Learning First-Order Rules with Differentiable Logic Program Semantics (04/28/2022)
- When and where do feed-forward neural networks learn localist representations? (06/11/2018)
- Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model (03/18/2021)
- Extrapolation and learning equations (10/10/2016)
