Automating the Design and Development of Gradient Descent Trained Expert System Networks

07/04/2022
by Jeremy Straub, et al.

Prior work introduced a gradient descent trained expert system that conceptually combines the learning capabilities of neural networks with the understandability and defensible logic of an expert system. This system was shown to be able to learn patterns from data and to perform decision-making at levels rivaling those reported for neural network systems. The principal limitation of the approach, however, was the need to manually develop the rule-fact network (which is then trained using backpropagation). This paper proposes a technique for overcoming this significant limitation, relative to neural networks. Specifically, it proposes the use of rule-fact networks that are larger and denser than the application requires, which are trained, pruned, manually reviewed, and then re-trained for use. Multiple types of networks are evaluated under multiple operating conditions, and these results are presented and assessed. Based on these individual experimental condition assessments, the proposed technique is evaluated. The data presented shows that error rates as low as 3.9% (median) can be obtained, demonstrating the efficacy of this technique for many applications.
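
To make the train, prune, and re-train pipeline concrete, below is a minimal Python sketch. It is an illustration, not the authors' implementation: the `Rule` class, the two-input weighted rule form (a reading of the prior gradient descent expert system work), the finite-difference weight update, the pruning threshold, and the rewiring logic are all assumptions made for this example.

```python
# Illustrative sketch of a gradient descent trained rule-fact network with a
# train / prune / re-train loop. All structural choices here are assumptions,
# not the paper's actual implementation.
import random


class Rule:
    """Blend two input facts into an output fact: out = w * f1 + (1 - w) * f2."""

    def __init__(self, f1, f2, out):
        self.f1, self.f2, self.out = f1, f2, out
        self.w = random.random()  # weight on f1; (1 - w) applies to f2


def forward(facts, rules):
    """Propagate base fact values through the rules, in order."""
    vals = dict(facts)
    for r in rules:
        vals[r.out] = r.w * vals[r.f1] + (1.0 - r.w) * vals[r.f2]
    return vals


def train(rules, samples, target_fact, lr=0.05, epochs=200, eps=1e-4):
    """Gradient descent on each rule's weight via a finite-difference estimate."""
    for _ in range(epochs):
        for facts, target in samples:
            for r in rules:
                base = forward(facts, rules)[target_fact] - target
                r.w += eps
                bumped = forward(facts, rules)[target_fact] - target
                r.w -= eps
                grad = (bumped ** 2 - base ** 2) / eps  # d(error^2)/dw
                r.w = min(1.0, max(0.0, r.w - lr * grad))


def prune(rules, threshold=0.05):
    """Drop rules in which one input barely contributes, rewiring downstream
    rules to read the dominant input directly -- an illustrative stand-in for
    the paper's pruning of low-contribution network elements."""
    kept, alias = [], {}
    for r in rules:
        r.f1 = alias.get(r.f1, r.f1)
        r.f2 = alias.get(r.f2, r.f2)
        if r.w < threshold:
            alias[r.out] = r.f2  # f1 is nearly ignored
        elif r.w > 1.0 - threshold:
            alias[r.out] = r.f1  # f2 is nearly ignored
        else:
            kept.append(r)
    return kept


if __name__ == "__main__":
    random.seed(0)
    # Deliberately denser than needed: fact "b" is irrelevant to the target.
    rules = [Rule("a", "b", "c"), Rule("c", "d", "out")]
    samples = []
    for _ in range(20):
        f = {"a": random.random(), "b": random.random(), "d": random.random()}
        samples.append((f, 0.7 * f["a"] + 0.3 * f["d"]))
    train(rules, samples, "out")  # initial training
    rules = prune(rules)          # manual review would happen at this step
    train(rules, samples, "out")  # re-train the pruned network
    print("rules remaining:", len(rules),
          "weights:", [round(r.w, 2) for r in rules])
```

In the full technique described in the abstract, the pruned network would also be manually reviewed before re-training; the comment in the example marks where that step would fall.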

