Composition of Relational Features with an Application to Explaining Black-Box Predictors

06/01/2022
by   Ashwin Srinivasan, et al.
Relational machine learning programs like those developed in Inductive Logic Programming (ILP) offer several advantages: (1) the ability to model complex relationships amongst data instances; (2) the use of domain-specific relations during model construction; and (3) models that are human-readable, which is often one step closer to being human-understandable. However, these ILP-like methods have not been able to capitalise fully on the rapid hardware, software and algorithmic advances fuelling current progress in deep neural networks. In this paper, we treat relational features as functions and use the notion of generalised composition of functions to derive complex functions from simpler ones. We formulate the notion of a set of M-simple features in a mode language M and identify two composition operators (ρ_1 and ρ_2) from which all possible complex features can be derived. We use these results to implement a form of "explainable neural network" called Compositional Relational Machines, or CRMs, which are labelled directed acyclic graphs. The vertex-label for any vertex j in the CRM contains a feature-function f_j and a continuous activation function g_j. If j is a "non-input" vertex, then f_j is the composition of features associated with the direct predecessors of j. Our focus is on CRMs in which all input vertices (those without any direct predecessors) have M-simple features in their vertex-labels. We provide a randomised procedure for constructing and learning such CRMs. Using a notion of explanations based on the compositional structure of features in a CRM, we provide empirical evidence on synthetic data of the ability to identify appropriate explanations, and demonstrate the use of CRMs as "explanation machines" for black-box models that do not provide explanations for their predictions.
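The DAG structure described above can be sketched in a few lines. The following is a minimal, illustrative reading of the abstract, not the paper's implementation: each vertex j carries a feature-function f_j and a continuous activation g_j; input vertices evaluate an M-simple feature directly on a data instance, while a non-input vertex applies its feature-function to the outputs of its direct predecessors. All names (`Vertex`, `value`, the toy relational features) are assumptions for the sketch.

```python
import math

class Vertex:
    """One labelled vertex of a CRM-like DAG (illustrative sketch)."""

    def __init__(self, name, f, g=math.tanh, predecessors=()):
        self.name = name
        self.f = f                          # feature-function f_j
        self.g = g                          # continuous activation g_j
        self.predecessors = list(predecessors)

    def value(self, instance, cache):
        """Evaluate this vertex on a data instance, memoising shared sub-DAGs."""
        if self.name in cache:
            return cache[self.name]
        if not self.predecessors:
            # Input vertex: an M-simple relational feature of the instance.
            v = self.g(self.f(instance))
        else:
            # Non-input vertex: compose the outputs of direct predecessors.
            inputs = [p.value(instance, cache) for p in self.predecessors]
            v = self.g(self.f(inputs))
        cache[self.name] = v
        return v

# Toy relational instance: a small labelled graph.
instance = {"atoms": ["a", "b", "c"], "edges": {("a", "b"), ("b", "c")}}

# Two simple Boolean features (identity activations keep values readable).
f1 = lambda x: 1.0 if ("a", "b") in x["edges"] else 0.0
f2 = lambda x: 1.0 if ("a", "c") in x["edges"] else 0.0
v1 = Vertex("v1", f1, g=lambda z: z)
v2 = Vertex("v2", f2, g=lambda z: z)

# A composed feature: a disjunction-like max over its predecessors.
v3 = Vertex("v3", max, g=lambda z: z, predecessors=[v1, v2])

print(v3.value(instance, {}))  # 1.0
```

In the paper the composed feature-functions are derived from the operators ρ_1 and ρ_2 over a mode language, and the CRM is trained like a neural network; the sketch only shows the compositional evaluation scheme.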


