Explanatory Learning: Beyond Empiricism in Neural Networks

01/25/2022

by Antonio Norelli, et al.

We introduce Explanatory Learning (EL), a framework that lets machines use existing knowledge buried in symbolic sequences – e.g. explanations written in hieroglyphics – by autonomously learning to interpret them. In EL, the burden of interpreting symbols is not left to humans or to rigid human-coded compilers, as is done in Program Synthesis. Rather, EL calls for a learned interpreter, built upon a limited collection of symbolic sequences paired with observations of several phenomena. This interpreter can be used to make predictions about a novel phenomenon given its explanation, and even to find that explanation using only a handful of observations, as human scientists do. We formulate the EL problem as a simple binary classification task, so that common end-to-end approaches aligned with the dominant empiricist view of machine learning could, in principle, solve it. To these models we oppose Critical Rationalist Networks (CRNs), which instead embrace a rationalist view of the acquisition of knowledge. CRNs express several desired properties by construction: they are truly explainable, can adjust their processing at test time for harder inferences, and can offer strong confidence guarantees on their predictions. As a final contribution, we introduce Odeen, a basic EL environment that simulates a small flatland-style universe full of phenomena to explain. Using Odeen as a testbed, we show how CRNs outperform empiricist end-to-end approaches of similar size and architecture (Transformers) in discovering explanations for novel phenomena.
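The binary classification formulation mentioned above can be illustrated with a toy sketch. The idea, as the abstract describes it, is that an interpreter maps an (explanation, observation) pair to a validity label, and a candidate explanation for a new phenomenon can be scored by its agreement with that phenomenon's observations. All names below (`toy_interpreter`, `score`, the rule strings) are hypothetical; a real EL interpreter would be learned from data rather than hand-coded:

```python
# Toy sketch of the EL binary-classification formulation (hypothetical names).
# An interpreter maps an (explanation, observation) pair to a validity label.

def toy_interpreter(explanation: str, observation: int) -> bool:
    """Stand-in for a learned interpreter: here, a hand-coded rule lookup.
    In EL this mapping would be learned from explanation/observation pairs."""
    rules = {
        "even": lambda x: x % 2 == 0,
        "positive": lambda x: x > 0,
    }
    return rules[explanation](observation)


def score(explanation: str, observations) -> int:
    """Score a candidate explanation by how many labeled observations of a
    new phenomenon it accounts for (higher = better fit)."""
    return sum(toy_interpreter(explanation, obs) == label
               for obs, label in observations)


# A handful of labeled observations of a "novel phenomenon":
observations = [(4, True), (7, False), (-2, True)]

# Pick the explanation that best fits the observations.
best = max(["even", "positive"], key=lambda e: score(e, observations))
print(best)  # "even" fits all three observations; "positive" fits only one
```

This mirrors the search described in the abstract only schematically: in a CRN, the interpreter is a trained network and the space of candidate explanations is a space of symbolic sequences, not a two-element list.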


