NSL: Hybrid Interpretable Learning From Noisy Raw Data

12/09/2020
by Daniel Cunnington, et al.

Inductive Logic Programming (ILP) systems learn generalised, interpretable rules in a data-efficient manner utilising existing background knowledge. However, current ILP systems require training examples to be specified in a structured logical format. Neural networks learn from unstructured data, although their learned models may be difficult to interpret and are vulnerable to data perturbations at run-time. This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data. NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics. Features extracted by the neural components define the structured context of labelled examples and the confidence of the neural predictions determines the level of noise of the examples. Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples. We evaluate our framework on propositional and first-order classification tasks using the MNIST dataset as raw data. Specifically, we demonstrate that NSL is able to learn robust rules from perturbed MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines whilst being more general and interpretable.
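The pipeline the abstract describes — neural feature extraction feeding weighted examples to an ILP learner — can be sketched minimally in Python. This is an illustrative assumption-laden sketch, not NSL's actual code: it assumes softmax confidences from a pre-trained digit classifier and uses the `#pos(id@penalty, {inclusions}, {exclusions}, {context}).` example syntax of the LAS family of ILP systems (ILASP/FastLAS); scaling confidence to an integer penalty is a simplification of NSL's weighting scheme.

```python
def to_las_example(eg_id, label, digits, confidences, max_penalty=100):
    """Convert a neural prediction into a weighted LAS-style ILP example.

    The penalty (example weight) scales with the product of the softmax
    confidences of the extracted digits, so low-confidence (noisy)
    examples are cheaper for the ILP solver to leave uncovered.
    """
    confidence = 1.0
    for c in confidences:
        confidence *= c
    penalty = max(1, round(confidence * max_penalty))
    # Context: structured facts produced by the neural component.
    ctx = ", ".join(f"digit({i}, {d})" for i, d in enumerate(digits))
    return f"#pos({eg_id}@{penalty}, {{{label}}}, {{}}, {{{ctx}}})."

# A confident prediction yields a high-penalty example (costly to ignore):
print(to_las_example("eg1", "valid", [3, 5], [0.98, 0.95]))
# A low-confidence prediction yields a low-penalty, noise-tolerant example:
print(to_las_example("eg2", "valid", [8, 1], [0.40, 0.35]))
```

The ILP system's scoring function can then trade off rule length against the total penalty of uncovered examples, which is how short rules that generalise over noisy predictions are preferred.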


Related papers:

06/24/2021 · FF-NSL: Feed-Forward Neural-Symbolic Learner
Inductive Logic Programming (ILP) aims to learn generalised, interpretab...

12/06/2021 · Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks
Recent work on neuro-symbolic inductive logic programming has led to pro...

05/25/2022 · Inductive Learning of Complex Knowledge from Raw Data
One of the ultimate goals of Artificial Intelligence is to learn general...

12/15/2020 · Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation
Most deep neural networks are considered to be black boxes, meaning thei...

08/11/2022 · CORNET: A neurosymbolic approach to learning conditional table formatting rules by example
Spreadsheets are widely used for table manipulation and presentation. St...

11/13/2017 · Learning Explanatory Rules from Noisy Data
Artificial Neural Networks are powerful function approximators capable o...

03/31/2021 · Neuro-Symbolic Constraint Programming for Structured Prediction
We propose Nester, a method for injecting neural networks into constrain...