## 1 Introduction

In the last few years there has been revolutionary progress in the fields of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). In particular, DL has achieved state-of-the-art results in image classification (e.g., Dieleman et al., 2015; Nguyen, Clune, Bengio, Dosovitskiy & Yosinski, 2016; Katebi et al., 2018) and natural language processing (e.g., Young et al., 2017), while both ML and DL are now commonly used in physics, biology, and astronomy (e.g., Baldeschi et al., 2017; Schawinski et al., 2017; Zingales & Waldmann, 2018; Leung & Bovy, 2019; Reiman & Göhre, 2019), typically to tackle complex classification problems. DL enables models composed of several hidden layers to learn representations of data with several degrees of abstraction. DL discovers hidden patterns in large datasets by adjusting the weights, which are commonly found by minimizing a loss function through the method of Gradient Descent (GD) or one of its variants, such as Adam (Kingma & Ba, 2014) or Adadelta.

Standard neural networks (NNs) consist of artificial neurons: mathematical functions that receive one or more inputs and sum them to produce an output. Usually each input is separately weighted, and the sum is passed through a non-linear activation function.

In this paper we present a novel neuron architecture in which each input value is weighted twice, and we demonstrate that this new paradigm yields an improvement over ordinary neurons. In particular, we show that a NN built from double-weight neurons achieves higher classification accuracy than an ordinary NN with the same hyperparameters.

## 2 The double-weight neuron

A standard artificial neuron is a function that receives inputs and produces an output according to the formula:

$$y = f\Big(\sum_{i=1}^{N} w_i x_i + b\Big) \qquad (1)$$

where $y$ is the output of the neuron, $x_i$ represents the inputs, $w_i$ and $b$ are the learnable weights and bias, respectively, $N$ is the number of inputs, and $f$ is an activation function.
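For concreteness, equation 1 maps directly onto a few lines of NumPy; the input values, weights, and sigmoid activation below are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, f=sigmoid):
    # weighted sum of the inputs plus a bias, passed through the activation f
    return f(np.dot(w, x) + b)

y = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, -0.2]), 0.05)
```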

The idea to add additional weights was already discussed in Arora et al. (2018) for linear NN. Here we add additional weights within a nonlinear NN.

We define the double-weight neuron as:

$$y = f\Big(\sum_{i=1}^{N} w_i v_i x_i + b\Big) \qquad (2)$$

where $v_i$ is a set of additional learnable weights. From equation 2, the output values of a NN layer can be expressed in matrix notation:

$$\vec{y} = f\big((W \circ V)\,\vec{x} + \vec{b}\big) \qquad (3)$$

where $\vec{x}$ is the input vector of size ($N \times 1$), $\vec{y}$ is the output vector of size ($M \times 1$), $\vec{b}$ is the bias vector of size ($M \times 1$), while $W$ and $V$ are weight matrices of size ($M \times N$), where $N$ is the number of inputs and $M$ is the number of neurons in the considered layer of the NN. Here $\circ$ represents the element-wise product between the two weight matrices.

Adding a second weight matrix adds complexity to the NN architecture and increases the training time. However, as we demonstrate in Section 3

, the double-weight neuron leads to improved performance in terms of classification accuracy. A NN with double-weight neurons can be trained by minimizing a loss function with the method of GD. The gradient can be calculated analytically using the backpropagation algorithm implemented in tensorflow (https://www.tensorflow.org/). In Appendix B we derive the back-propagation equations for a double-weight feed-forward neural network.
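As an illustration, the layer-level formula of equation 3 can be sketched in NumPy; the layer sizes below are arbitrary, and setting $V$ to all ones recovers a standard layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def double_weight_layer(x, W, V, b, f=sigmoid):
    # element-wise (Hadamard) product of the two weight matrices,
    # followed by the usual affine map and activation
    return f((W * V) @ x + b)

rng = np.random.default_rng(0)
N, M = 4, 3                       # number of inputs, neurons in the layer
x = rng.normal(size=N)
W, V = rng.normal(size=(M, N)), rng.normal(size=(M, N))
b = np.zeros(M)

y = double_weight_layer(x, W, V, b)
# with V = 1 the double-weight layer reduces to a standard layer
y_std = double_weight_layer(x, W, np.ones((M, N)), b)
```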

## 3 Data analysis

In this section we test a Feed-Forward Neural Network (FNN) and a Convolutional Neural Network (CNN) consisting of double-weight neurons on the MNIST dataset (https://www.kaggle.com/c/digit-recognizer), a large database of handwritten digits commonly used for training image processing systems. MNIST contains 60000 training images and 10000 testing images. Furthermore, we tested a CNN on the CIFAR-10 dataset (https://www.kaggle.com/c/cifar-10), which consists of 50000 training images, 10000 testing images, and 10 classes.

The scope of this section is to show that the introduction of double-weight neurons leads to improved classification accuracy for the same set of NN hyperparameters. We emphasize that we are not trying to achieve state-of-the-art results on either MNIST or CIFAR-10; hence, the hyperparameters of the NNs are not optimized.

We start by comparing a Standard Feed-Forward Neural Network (SFNN; a FNN consisting of neurons as in equation 1) with a Double-Weight Feed-Forward Neural Network (DWFNN; a FNN consisting of neurons as in equation 2) on the MNIST dataset. A FNN is not well suited to image classification problems, but here the scope is simply to compare a SFNN with a DWFNN under the same set of hyperparameters. In figure 1 we show the classification accuracy on the test set as a function of the iteration number (one hundred images were processed per iteration) for the SFNN and the DWFNN with the same set of hyperparameters.

Since the cost function of a NN is not convex, different initializations of the weights lead the GD minimization procedure to different local minima. We therefore tested a SFNN and a DWFNN for 150 different random initial weight configurations. For each realization we estimate the average classification accuracy (excluding the first 1500 iterations), and then we build the distribution of the classification accuracy over the 150 realizations (see Fig. 2

). The average values of the classification accuracy distribution for the SFNN and the DWFNN are 0.894 and 0.930, respectively. We also performed the Welch t-test for the significance of the difference between the mean values of the two classification accuracy distributions, obtaining a highly significant p-value. Figure 2, combined with the results of the Welch t-test, demonstrates that the classification accuracy of a DWFNN is significantly better than that of a SFNN. The training time of a DWFNN is on average 1.46 times that of a SFNN for the MNIST dataset. A detailed description of the hyperparameters of the considered networks is presented in Appendix A.

We also compared a CNN consisting of double-weight neurons in the fully connected layers (DWCNN) with a standard CNN (SCNN), repeating the same procedure as above. In Fig. 3 we show an example of the classification accuracy as a function of the iteration number for a DWCNN and a SCNN. Figure 4 displays the classification accuracy distributions, suggesting that a DWCNN performs better than a SCNN. The mean values of the classification accuracy distributions of the SCNN and the DWCNN are 0.978 and 0.984, respectively, and the Welch t-test again yields a significant p-value. The training time of a DWCNN is on average 1.11 times that of a SCNN for MNIST.

We also compared a DWCNN with a SCNN on CIFAR-10. In Fig. 5 we show an example of the classification accuracy as a function of the iteration number for a DWCNN and a SCNN. Figure 6 shows the classification accuracy distributions, suggesting that a DWCNN performs better than a SCNN. The mean values of the classification accuracy distributions of the DWCNN and the SCNN are 0.570 and 0.455, respectively, and the Welch t-test again yields a significant p-value. The training time of a DWCNN is on average 1.07 times that of a SCNN for CIFAR-10.

The results of comparison between the networks are summarized in Table 1, and a detailed description of the CNN architectures is presented in Appendix A.
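The Welch comparison used above is straightforward to reproduce. Below is a minimal NumPy sketch on two hypothetical accuracy samples; the spread of 0.01 is a made-up value for illustration, since the paper does not quote the standard deviations of the distributions:

```python
import numpy as np

def welch_t(a, b):
    # Welch t statistic and effective degrees of freedom for samples
    # with possibly unequal variances
    va, vb = np.var(a, ddof=1) / len(a), np.var(b, ddof=1) / len(b)
    t = (np.mean(a) - np.mean(b)) / np.sqrt(va + vb)
    dof = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, dof

# hypothetical accuracy distributions over 150 weight initializations
rng = np.random.default_rng(1)
acc_sfnn = rng.normal(0.894, 0.01, size=150)
acc_dwfnn = rng.normal(0.930, 0.01, size=150)

t, dof = welch_t(acc_dwfnn, acc_sfnn)
```

In practice, `scipy.stats.ttest_ind(acc_dwfnn, acc_sfnn, equal_var=False)` performs the same test and returns the p-value directly.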

|  | MNIST | CIFAR-10 |
|---|---|---|
| SFNN | 0.894 | |
| DWFNN | 0.930 | |
| SCNN | 0.978 | 0.455 |
| DWCNN | 0.984 | 0.570 |

Table 1: Average value of the classification accuracy distribution for each of the considered NN architectures on MNIST and CIFAR-10. The distributions were taken over 150 different random configurations of the initial weights. The Welch t-test was used to test the null hypothesis that the standard NN and the double-weight NN have equal means, obtaining highly significant p-values for both MNIST and CIFAR-10.

## 4 Conclusions

In this paper we presented a new type of artificial neuron, the double-weight neuron, which adds a second set of learnable weights to the standard definition. We showed that a Feed-Forward Neural Network (FNN) built with these neurons and trained on the MNIST dataset has, on average, a larger mean classification accuracy than a standard FNN with the same hyperparameters. We also tested a convolutional neural network (CNN) with double-weight neurons in the fully connected layers only, on MNIST and CIFAR-10, and found improved average classification accuracy (from 0.978 to 0.984 on MNIST and from 0.455 to 0.570 on CIFAR-10) with respect to a standard CNN with the same set of hyperparameters.

These results suggest that a neural network built with double-weight neurons may be considered as a valuable alternative to the standard networks.

This work has been supported by the Heising-Simons Foundation under grant # 2018-0911 (PI: Margutti).

## Appendix A.

In this appendix we describe the structure of the NNs presented in Section 3. The FNN for the MNIST dataset consists of four hidden layers and an output layer. The hidden layers contain, in order, 200, 100, 60, and 30 neurons, while the output layer has 10 neurons. The learning rate is 0.003 and we used the Adam optimizer to minimize the cross-entropy cost function. We fed 100 flattened images per iteration during training and testing. We used a sigmoid activation function for the hidden layers and a softmax function for the output layer. We initialized the weights with a truncated normal of mean 0 and standard deviation 0.1, and the biases with zeros. The only difference between the DWFNN and the SFNN is the presence of double-weight neurons in the DWFNN.
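A minimal NumPy sketch of the forward pass of the FNN just described; the truncated-normal initialization is approximated here by clipping a normal draw at two standard deviations, and the input is a random stand-in for a flattened 28×28 image:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 200, 100, 60, 30, 10]   # input, four hidden layers, output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())           # shift for numerical stability
    return e / e.sum()

# weights ~ truncated normal (mean 0, std 0.1); biases initialized to zero
weights = [np.clip(rng.normal(0.0, 0.1, size=(m, n)), -0.2, 0.2)
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def fnn_forward(x):
    # sigmoid hidden layers, softmax output layer
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigmoid(W @ x + b)
    return softmax(weights[-1] @ x + biases[-1])

probs = fnn_forward(rng.normal(size=784))
```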

The CNN for the MNIST dataset consists of 3 convolutional layers, 2 fully connected layers, and an output layer. The output depths of the convolutional and fully connected layers are 4, 8, 12, 200, and 80, respectively. The convolutional window is the same for the first and second convolutional layers, while a different window is used for the third convolutional layer. The adopted stride for the first convolutional layer is 1, while a value of 2 was adopted for the second and third convolutional layers. We used the keyword 'SAME' for the padding in tensorflow. The weights were initialized with a truncated normal of mean 0 and standard deviation 0.1. The learning rate is 0.0008 and we used the Adam optimizer to minimize the cross-entropy cost function. We fed 100 images per iteration during training and testing. We used a rectified linear unit (ReLU) activation function for the hidden layers and a softmax function for the output layer. The only difference between the DWCNN and the SCNN is the presence of double-weight neurons in the fully connected layers of the DWCNN. The CNN architecture for CIFAR-10 is the same as that used for MNIST, with the only differences being the learning rate (set to 0.0006) and the number of images processed per iteration (set to 200).

## Appendix B.

In this appendix we derive the back-propagation formulas for a DWFNN. We consider a DWFNN with one hidden layer and a sum of square residuals cost function.

Considering the following equations:

$$C = \frac{1}{2}\sum_j \big(t_j - y_j^{L}\big)^2 \qquad (4)$$

$$y_j^{L} = f\big(z_j^{L}\big) \qquad (5)$$

$$z_j^{L} = \sum_k w_{jk}^{L}\, v_{jk}^{L}\, y_k^{L-1} + b_j^{L} \qquad (6)$$

$$y_k^{L-1} = f\big(z_k^{L-1}\big) \qquad (7)$$

$$z_k^{L-1} = \sum_i w_{ki}^{L-1}\, v_{ki}^{L-1}\, x_i + b_k^{L-1} \qquad (8)$$

where $C$ is the sum of square residuals cost function, $t_j$ represents the target value of the $j$-th neuron, $y_j^{L}$ represents the output value of the $j$-th neuron in the $L$-th layer of the NN (we assume that the $L$-th layer is the output layer), $w_{jk}^{L}$ and $v_{jk}^{L}$ are the weights between the hidden layer and the output layer, $y_k^{L-1}$ is the output of the $k$-th neuron in the $(L-1)$-th layer, while $w_{ki}^{L-1}$ and $v_{ki}^{L-1}$ are the weights between the input and the $(L-1)$-th layer. These equations define the forward propagation in the NN.

The gradient of $C$ with respect to the weights between the $(L-1)$-th and $L$-th layer is:

$$\frac{\partial C}{\partial w_{jk}^{L}} = -\big(t_j - y_j^{L}\big)\, f'\big(z_j^{L}\big)\, v_{jk}^{L}\, y_k^{L-1} \qquad (9)$$

and

$$\frac{\partial C}{\partial v_{jk}^{L}} = -\big(t_j - y_j^{L}\big)\, f'\big(z_j^{L}\big)\, w_{jk}^{L}\, y_k^{L-1} \qquad (10)$$

The estimate of the gradient for the weights between the input and the $(L-1)$-th layer is:

$$\frac{\partial C}{\partial w_{ki}^{L-1}} = \sum_j \frac{\partial C}{\partial y_j^{L}}\, \frac{\partial y_j^{L}}{\partial z_j^{L}}\, \frac{\partial z_j^{L}}{\partial y_k^{L-1}}\, \frac{\partial y_k^{L-1}}{\partial z_k^{L-1}}\, \frac{\partial z_k^{L-1}}{\partial w_{ki}^{L-1}} \qquad (11)$$

Eq. 11 can be expressed as:

$$\frac{\partial C}{\partial w_{ki}^{L-1}} = -\sum_j \big(t_j - y_j^{L}\big)\, f'\big(z_j^{L}\big)\, w_{jk}^{L}\, v_{jk}^{L}\, f'\big(z_k^{L-1}\big)\, v_{ki}^{L-1}\, x_i \qquad (12)$$

Similarly, we obtain:

$$\frac{\partial C}{\partial v_{ki}^{L-1}} = -\sum_j \big(t_j - y_j^{L}\big)\, f'\big(z_j^{L}\big)\, w_{jk}^{L}\, v_{jk}^{L}\, f'\big(z_k^{L-1}\big)\, w_{ki}^{L-1}\, x_i \qquad (13)$$
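The back-propagation equations above can be verified numerically. The sketch below implements them for a small, randomly initialized DWFNN with a sigmoid activation (so $f'(z) = f(z)\,(1 - f(z))$) and checks the analytic gradient of the first-layer weights against central finite differences; the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, v1, b1, w2, v2, b2):
    z1 = (w1 * v1) @ x + b1          # eq. (8): Hadamard-weighted hidden layer
    y1 = sigmoid(z1)                 # eq. (7)
    z2 = (w2 * v2) @ y1 + b2         # eq. (6): output layer
    y2 = sigmoid(z2)                 # eq. (5)
    return z1, y1, z2, y2

def cost(t, y2):
    return 0.5 * np.sum((t - y2) ** 2)   # eq. (4)

def backprop(x, t, w1, v1, b1, w2, v2, b2):
    z1, y1, z2, y2 = forward(x, w1, v1, b1, w2, v2, b2)
    d2 = -(t - y2) * y2 * (1 - y2)       # -(t_j - y_j^L) f'(z_j^L)
    dw2 = np.outer(d2, y1) * v2          # eq. (9)
    dv2 = np.outer(d2, y1) * w2          # eq. (10)
    d1 = ((w2 * v2).T @ d2) * y1 * (1 - y1)
    dw1 = np.outer(d1, x) * v1           # eq. (12)
    dv1 = np.outer(d1, x) * w1           # eq. (13)
    return dw1, dv1, dw2, dv2

n_in, n_hid, n_out = 4, 3, 2
x, t = rng.normal(size=n_in), rng.normal(size=n_out)
w1, v1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_in))
b1 = rng.normal(size=n_hid)
w2, v2 = rng.normal(size=(n_out, n_hid)), rng.normal(size=(n_out, n_hid))
b2 = rng.normal(size=n_out)

dw1, dv1, dw2, dv2 = backprop(x, t, w1, v1, b1, w2, v2, b2)

# central finite differences on every entry of w1
eps, num = 1e-6, np.zeros_like(w1)
for i in range(n_hid):
    for j in range(n_in):
        wp, wm = w1.copy(), w1.copy()
        wp[i, j] += eps
        wm[i, j] -= eps
        num[i, j] = (cost(t, forward(x, wp, v1, b1, w2, v2, b2)[3])
                     - cost(t, forward(x, wm, v1, b1, w2, v2, b2)[3])) / (2 * eps)

assert np.allclose(dw1, num, atol=1e-8)
```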

## References

- Leung & Bovy (2019) Leung, H. W., & Bovy, J. 2019, MNRAS, 483, 3255
- Baldeschi et al. (2017) Baldeschi, A., Elia, D., Molinari, S., et al. 2017, MNRAS, 466, 3682
- Sabour et al. (2017) Sabour, S., Frosst, N., & E Hinton, G. 2017, arXiv:1710.09829
- Kingma & Ba (2014) Kingma, D. P., & Ba, J. 2014, arXiv:1412.6980
- Dieleman et al. (2015) Dieleman, S., Willett, K. W., & Dambre, J. 2015, MNRAS, 450, 1441
- Katebi et al. (2018) Katebi, R., Zhou, Y., Chornock, R., & Bunescu, R. 2018, arXiv:1809.08377
- Young et al. (2017) Young, T., Hazarika, D., Poria, S., & Cambria, E. 2017, arXiv:1708.02709
- Schawinski et al. (2017) Schawinski, K., Zhang, C., Zhang, H., Fowler, L., & Santhanam, G. K. 2017, MNRAS, 467, L110
- Nguyen, Clune, Bengio, Dosovitskiy & Yosinski (2016) Nguyen, A., Clune, J., Bengio, Y., Dosovitskiy, A., & Yosinski, J. 2016, arXiv:1612.00005
- Zingales & Waldmann (2018) Zingales, T., & Waldmann, I. P. 2018, AJ, 156, 268
- Reiman & Göhre (2019) Reiman, D. M., & Göhre, B. E. 2019, MNRAS, 485, 2617
- Arora et al. (2018) Arora, S., Cohen, N., & Hazan, E. 2018, arXiv:1802.06509
