NSL: Hybrid Interpretable Learning From Noisy Raw Data

12/09/2020
by Daniel Cunnington, et al.

Inductive Logic Programming (ILP) systems learn generalised, interpretable rules in a data-efficient manner by utilising existing background knowledge. However, current ILP systems require training examples to be specified in a structured logical format. Neural networks, by contrast, learn from unstructured data, although the resulting models can be difficult to interpret and are vulnerable to data perturbations at run-time. This paper introduces NSL, a hybrid neural-symbolic learning framework that learns interpretable rules from labelled unstructured data. NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics. Features extracted by the neural components define the structured context of each labelled example, and the confidence of the neural predictions determines the example's level of noise. Using the FastLAS scoring function, NSL searches for short, interpretable rules that generalise over such noisy examples. We evaluate the framework on propositional and first-order classification tasks using the MNIST dataset as raw data, and demonstrate that NSL learns robust rules from perturbed MNIST data and achieves accuracy comparable or superior to neural network and random forest baselines, while producing rules that are more general and interpretable.
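The abstract describes a pipeline in which a pre-trained network extracts symbolic features from raw images and its prediction confidence sets the noise penalty of each labelled example passed to FastLAS. The sketch below shows one plausible way to build such weighted, context-dependent examples. It is only illustrative: the function names (predict_digit, make_las_example), the even-sum task, and the confidence-to-penalty mapping are assumptions, and the #pos(id@penalty, ...) syntax follows the general ILASP/FastLAS weighted-example convention rather than the authors' exact encoding.

```python
# Illustrative sketch of the NSL pipeline described above (not the authors' code).
# Assumptions: a pre-trained MNIST classifier is available, and weighted FastLAS
# examples follow the ILASP-style "#pos(id@penalty, {incl}, {excl}, {ctx})." format;
# the paper's exact encoding may differ.
import torch
import torch.nn.functional as F


def predict_digit(model, image):
    """Run a pre-trained MNIST classifier and return (predicted digit, confidence)."""
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
    conf, digit = probs.max(dim=0)
    return digit.item(), conf.item()


def make_las_example(example_id, label, digit_preds):
    """Turn the neural predictions for one labelled example into a weighted,
    context-dependent FastLAS example string.

    digit_preds: list of (predicted_digit, confidence) pairs, one per raw image.
    """
    # The joint confidence of the neural predictions sets the example's penalty:
    # low-confidence examples cost little to leave uncovered, so the ILP search
    # is not forced to explain likely mislabelled feature extractions.
    joint_conf = 1.0
    for _, conf in digit_preds:
        joint_conf *= conf
    penalty = max(1, int(round(100 * joint_conf)))

    # The extracted features form the structured context of the example.
    context = " ".join(f"digit({i},{d})." for i, (d, _) in enumerate(digit_preds))
    return f"#pos(eg{example_id}@{penalty}, {{{label}}}, {{}}, {{{context}}})."


if __name__ == "__main__":
    # Hypothetical task: decide whether a pair of handwritten digits sums to an
    # even number, given only the two images and the label.
    preds = [(3, 0.97), (5, 0.62)]  # (digit, softmax confidence) per image
    print(make_las_example(1, "even_sum", preds))
    # -> #pos(eg1@60, {even_sum}, {}, {digit(0,3). digit(1,5).}).
```

In such a setup, the generated example strings would be written to a FastLAS task file alongside background knowledge and mode declarations; because less confident neural predictions receive lower penalties, the learner can leave them uncovered rather than overfit its rules to noisy feature extractions.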


