Squashing activation functions in benchmark tests: towards eXplainable Artificial Intelligence using continuous-valued logic

10/17/2020
by Daniel Zeltner, et al.

Over the past few years, deep neural networks have shown excellent results on multiple tasks; however, there is a growing need to address the problem of interpretability in order to improve model transparency, performance, and safety. Achieving eXplainable Artificial Intelligence (XAI) by combining neural networks with continuous logic and multi-criteria decision-making tools is one of the most promising ways to approach this problem: this combination reduces the black-box nature of neural models. The continuous logic-based neural model uses so-called Squashing activation functions, a parametric family of functions that satisfy natural invariance requirements and contain rectified linear units as a particular case. This work presents the first benchmark tests that measure the performance of Squashing functions in neural networks. Three experiments were carried out to examine their usability, and the Squashing functions were compared with the most popular activation functions on five different network types. Performance was measured in terms of accuracy, loss, and time per epoch. The experiments and benchmarks show that Squashing functions are practical to use and perform similarly to conventional activation functions. In a further experiment, nilpotent logical gates were implemented to demonstrate that simple classification tasks can be solved successfully and with high performance. The results indicate that, owing to the embedded nilpotent logical operators and the differentiability of the Squashing function, it is possible to solve classification problems where other commonly used activation functions fail.
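For intuition, the following is a minimal NumPy sketch, not the authors' implementation, of the basic form commonly given for a squashing function: a smooth, everywhere-differentiable approximation of the cutting function clamp(x, 0, 1) that hardens as the parameter beta grows. The exact parametrization used in the paper (offsets, an additional scale parameter) may differ; treat the form below as an illustrative assumption.

import numpy as np

def squashing(x, beta=10.0):
    # Smooth approximation of the unit ramp clamp(x, 0, 1):
    #   S(x) = (1/beta) * log((1 + exp(beta*x)) / (1 + exp(beta*(x - 1))))
    # As beta -> infinity, S converges to the hard cutting function,
    # and suitable rescaling yields a ReLU-like limit.
    x = np.asarray(x, dtype=float)
    # logaddexp(0, z) = log(1 + exp(z)), numerically stable for large beta*x.
    return (np.logaddexp(0.0, beta * x) - np.logaddexp(0.0, beta * (x - 1.0))) / beta

For example, squashing([-1.0, 0.5, 2.0], beta=50.0) returns values close to [0.0, 0.5, 1.0], yet the function remains differentiable for any finite beta, which is the property exploited when embedding nilpotent logical operators in a network trained by gradient descent.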


Related research

02/17/2020 · Evolutionary Optimization of Deep Learning Activation Functions
The choice of activation function can have a large effect on the perform...

05/22/2019 · Effect of shapes of activation functions on predictability in the echo state network
We investigate prediction accuracy for time series of Echo state network...

05/13/2022 · Uninorm-like parametric activation functions for human-understandable neural models
We present a deep learning model for finding human-understandable connec...

10/15/2020 · Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks
The primary neural networks decision-making units are activation functio...

05/17/2021 · How to Explain Neural Networks: A perspective of data space division
Interpretability of intelligent algorithms represented by deep learning ...

10/06/2019 · Semantic Interpretation of Deep Neural Networks Based on Continuous Logic
Combining deep neural networks with the concepts of continuous logic is ...

01/23/2023 · Topological Understanding of Neural Networks, a survey
We look at the internal structure of neural networks which is usually tr...
