Functional Rule Extraction Method for Artificial Neural Networks

In this paper I propose a method, based on comprehensive functions, for directed and undirected rule extraction from artificial neural network operations. First, I define comprehensive functions and then construct a comprehensive multilayer network (denoted 𝛮), each of whose activation functions is parametrized by a comprehensive function. After constructing 𝛮, I extract rules from the network by observing that the network output depends on the probabilities of composite functions that are themselves comprehensive functions. This functional rule extraction method applies to both the perceptron and the multilayer neural network. For any 𝛮 model trained to predict some outcome given some event, the model's behaviour can be expressed, using the functional rule extraction method, as a formal or informal rule that the network obeys in predicting that outcome. As an example, Figure 1 shows a comprehensive physics function that parametrizes one of the network's hidden activation functions. Using the functional rule extraction method, I deduce that the prediction of the comprehensive multilayer network depends on the probability of that physics function and on the probabilities of the other composite comprehensive functions in 𝛮. Additionally, the functional rule extraction method can aid applied settings by generating equations of learned phenomena: first train an 𝛮 model to predict the outcome of a phenomenon, then extract the rules while assuming that the probability values of the network's comprehensive functions are constants. Finally, to simplify the generated equation, comprehensive functions with probability 𝑝 = 0 can be omitted.
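The equation-generation step described above can be sketched in code. This is a minimal, hypothetical illustration only: the abstract does not reproduce the formal definition of a comprehensive function, so here each one is simply assumed to be a named callable paired with a constant probability 𝑝, and rule extraction is assumed to mean keeping the terms with 𝑝 ≠ 0. All function names and values below are invented for illustration.

```python
# Hedged sketch of functional rule extraction, under the assumptions stated above.
# Each "comprehensive function" is modeled as (callable, probability p).
comprehensive_functions = {
    "kinetic_energy": (lambda m, v: 0.5 * m * v**2, 0.7),  # physics-style term (cf. Figure 1)
    "identity":       (lambda m, v: v, 0.3),
    "unused_term":    (lambda m, v: m * v, 0.0),           # p = 0: omitted from the rule
}

def network_output(m, v):
    # The network output depends on the probabilities of its composite
    # comprehensive functions (here: a probability-weighted sum).
    return sum(p * f(m, v) for f, p in comprehensive_functions.values())

def extract_rule():
    # Treat the probabilities as constants and drop terms with p = 0,
    # yielding a simplified symbolic rule for the learned phenomenon.
    return " + ".join(
        f"{p}*{name}"
        for name, (f, p) in comprehensive_functions.items()
        if p != 0
    )

print(extract_rule())            # p = 0 terms are dropped
print(network_output(2.0, 3.0))  # 0.7*(0.5*2*9) + 0.3*3 = 7.2
```

Omitting the 𝑝 = 0 term reduces the extracted rule to two components, matching the simplification step in the abstract.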
