A Polynomial Neural Network with Controllable Precision and Human-Readable Topology for Prediction and System Identification

04/08/2020
by Gang Liu, et al.

Despite the success of artificial neural networks (ANNs), many remain concerned about their "black box" nature. Why do they work? Could we design a "transparent" network? This paper presents a controllable and readable polynomial neural network (CR-PNN) for approximation, prediction, and system identification. CR-PNN is simple enough to be written as a single "small" formula, so we can control the approximation precision and explain the internal structure of the network. In essence, CR-PNN is the Taylor expansion in the form of a network: the number of layers determines the precision, and the derivatives in the Taylor expansion are imitated exactly by the error back-propagation algorithm. First, we demonstrated on ten noisy synthetic datasets that CR-PNN analyzes "black box" systems effectively; the results were also compared against the synthetic data to substantiate that the search converges toward the global optimum. Second, we verified on ten real-world applications that CR-PNN generalizes better than typical ANNs, whose approximation depends on nonlinear activation functions. Finally, 200,000 repeated experiments on 4,898 samples demonstrated that CR-PNN is five times faster than a typical ANN per training epoch and ten times faster per forward propagation. In short, compared with traditional neural networks, the novelties and advantages of CR-PNN include a readable internal structure, a guaranteed globally optimal solution, lower computational complexity, and likely better robustness in real-world approximation. (We are strong believers in open source and provide the CR-PNN code on GitHub: https://github.com/liugang1234567/CR-PNN#cr-pnn)
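Since only the abstract is reproduced here, the following is a minimal sketch of the "Taylor expansion as a network" idea it describes, not the authors' implementation (that is in the linked GitHub repository). All names and shapes (PolyNet, W, depth, etc.) are illustrative assumptions: each layer multiplies an affine map of the input into the running hidden state, so a depth-k network can express polynomial terms of the input up to degree k, with depth playing the role of the truncation order (precision) of a Taylor expansion.

```python
import numpy as np

class PolyNet:
    """Illustrative polynomial network in the spirit of CR-PNN.

    After layer 1 the hidden state h is affine in the input x (degree <= 1);
    each subsequent elementwise multiplication by another affine map of x
    raises the maximum polynomial degree by 1, so the scalar output is a
    polynomial of degree <= depth in the inputs.
    """

    def __init__(self, in_dim, hidden_dim, depth, rng=None):
        rng = rng or np.random.default_rng(0)
        # One affine map (W, b) per layer, plus a final linear readout v.
        self.W = [rng.normal(0, 0.1, (hidden_dim, in_dim)) for _ in range(depth)]
        self.b = [rng.normal(0, 0.1, hidden_dim) for _ in range(depth)]
        self.v = rng.normal(0, 0.1, hidden_dim)

    def forward(self, x):
        h = self.W[0] @ x + self.b[0]          # degree-1 (affine) terms
        for W, b in zip(self.W[1:], self.b[1:]):
            h = h * (W @ x + b)                # raise max degree by 1 per layer
        return self.v @ h                      # scalar polynomial output


net = PolyNet(in_dim=2, hidden_dim=8, depth=3)
y = net.forward(np.array([0.5, -1.0]))        # polynomial of degree <= 3 in x
```

In this reading, training such a network with back-propagation adjusts weights that play the role of Taylor coefficients, which is what would make the fitted model readable as an explicit polynomial rather than a black box.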
