
Inductive Logic Programming via Differentiable Deep Neural Logic Networks
We propose a novel paradigm for solving Inductive Logic Programming (ILP...

Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation
Most deep neural networks are considered to be black boxes, meaning thei...

Learn to Explain Efficiently via Neural Logic Inductive Learning
The capability of making interpretable and self-explanatory decisions is...

Neuro-Symbolic Constraint Programming for Structured Prediction
We propose Nester, a method for injecting neural networks into constrain...

Learning Explanatory Rules from Noisy Data
Artificial Neural Networks are powerful function approximators capable o...

Harnessing Deep Neural Networks with Logic Rules
Combining deep neural networks with structured logic rules is desirable ...

Improving Scalability of Inductive Logic Programming via Pruning and Best-Effort Optimisation
Inductive Logic Programming (ILP) combines rule-based and statistical ar...
NSL: Hybrid Interpretable Learning From Noisy Raw Data
Inductive Logic Programming (ILP) systems learn generalised, interpretable rules in a data-efficient manner utilising existing background knowledge. However, current ILP systems require training examples to be specified in a structured logical format. Neural networks learn from unstructured data, although their learned models may be difficult to interpret and are vulnerable to data perturbations at runtime. This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data. NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics. Features extracted by the neural components define the structured context of labelled examples, and the confidence of the neural predictions determines the level of noise of the examples. Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples. We evaluate our framework on propositional and first-order classification tasks using the MNIST dataset as raw data. Specifically, we demonstrate that NSL is able to learn robust rules from perturbed MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines, whilst being more general and interpretable.
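The abstract describes a pipeline in which neural predictions supply both the structured context of a labelled example and a confidence-derived noise penalty for FastLAS. The sketch below illustrates one plausible way that translation step could look, formatting a labelled example into a LAS-style weighted `#pos` example. The function name, the `digit/2` context atoms, and the confidence-to-penalty scheme are all illustrative assumptions, not the actual NSL interface.

```python
def to_las_example(example_id, label, digit_preds):
    """Render one labelled example as a FastLAS-style weighted positive example.

    digit_preds: list of (predicted_digit, confidence) pairs, one per image slot.
    Illustrative assumption: the example penalty grows with the minimum neural
    confidence, so low-confidence (noisy) examples are cheaper to misclassify.
    """
    confidence = min(conf for _, conf in digit_preds)
    penalty = max(1, round(10 * confidence))  # scale confidence to an integer weight
    # Neural features become the structured context of the example.
    context = " ".join(
        f"digit({slot},{digit})." for slot, (digit, _) in enumerate(digit_preds)
    )
    return f"#pos(ex{example_id}@{penalty}, {{label({label})}}, {{}}, {{{context}}})."


# A confident prediction yields a high-penalty (trusted) example.
print(to_las_example(1, "even", [(4, 0.97)]))
# A low-confidence prediction yields a low-penalty (noisy) example.
print(to_las_example(2, "odd", [(3, 0.05)]))
```

Under this scheme, FastLAS's scoring function can trade off rule length against the summed penalties of uncovered examples, which is how the abstract says NSL generalises over noisy neural predictions.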