
Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks

09/20/2021
by   Alexander Kovalenko, et al.
Czech Technical University in Prague

Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks, where excessively large models are currently used. However, such models face several problems during the learning process, mainly due to the redundancy of individual neurons, which results in sub-optimal accuracy or the need for additional training steps. Here, we explore the diversity of the neurons within the hidden layer during the learning process, and analyze how the diversity of the neurons affects the predictions of the model. Following this, we introduce several techniques to dynamically reinforce diversity between neurons during training. These decorrelation techniques improve learning at early stages and occasionally help to overcome local minima faster. Additionally, we describe a novel weight initialization method to obtain decorrelated, yet stochastic weight initialization for fast and efficient neural network training. Decorrelated weight initialization in our case shows about a 40% relative increase in accuracy during the first 5 epochs.



1 Introduction

Over the last decade, machine learning algorithms have achieved vast progress in various fields. Namely, a general approach called deep neural networks (DNNs) with multiple hidden layers [16] has enabled machine learning algorithms to perform at an acceptable level in many areas, in some cases outperforming human accuracy [7]. Such progress, in no small measure, has become possible due to modern hardware computational capabilities, enabling the training of large DNNs on an immense amount of data.

On the other hand, even though large models perform very well on complex tasks, we cannot endlessly rely on an infinite increase in computational resources and dataset size. Training large neural networks is an energy-, time- and memory-demanding task. Recently, researchers have started questioning the energy consumption of machine learning algorithms and their carbon footprint [24]. Thus it is worthwhile to develop a strategy for models that have a constrained number of parameters, sufficient for a given task, and can be trained quickly, rather than chasing higher accuracy by enlarging the number of parameters and using more complex hardware.

The universal approximation theorem [8] states that a feed-forward artificial neural network with a single hidden layer can approximate any continuous, well-behaved function of an arbitrary number of variables with any accuracy. The conditions are a sufficient number of neurons in the hidden layer and a correct weight selection. The above-mentioned theorem for the arbitrary-width case was originally proved by Cybenko [8] and Hornik [19] and later extended to the arbitrary-depth case (DNNs) in [27].

In this paper we take a deeper look at the practical application of Cybenko's theorem in order to train a neural network in which all hidden neurons are used efficiently. Therefore, we have to pay attention to the two following aspects: the number of neurons and correct weight selection.

The number of neurons in a hidden layer is a quite straightforward parameter that became trendy with the availability of multi-threaded parallel computing on GPUs [30]. Models with a vast number of trainable parameters are not devoid of logic, as they generalize better and can be so-called 'universal learners'. For example, GPT-3, having 175 billion parameters, is a perfect example of a universal learner [5]. Thus, the community has been experimenting with model architectures, increasing the width [27] or depth [36] of neural networks. Issues such as vanishing gradients [17, 18] were resolved by applying methods including second-order Hessian-free optimization [29], training schedules using greedy layer-wise training [34, 15, 39], the sparse rectifier activation function widely known as ReLU [11], layer-size-dependent initialization such as Xavier [10] and Kaiming [14], and skip connections [13]. Even though we can make arbitrarily large models produce good predictions, achieving computational sustainability by expanding the number of trainable parameters up to infinity would not be the best option for tasks of lower complexity. The community has already been trying to address this problem, and several solutions dealing with this issue have appeared. For example, the widely used ReLU activation function, saturated only in one direction, helps with the vanishing gradient problem but, on the other hand, results in so-called 'dying neurons' [26]; modified activation functions such as Leaky ReLU [41], adaptive convolutional ReLU [9], Swish [32], Antirectifier [28] and many others were introduced to solve the problem of the 'neural graveyard'. Resource-efficient solutions such as pooling operations [33], LightLayers [21], and depth-wise separable convolutions [6] were developed to reduce the complexity of the models.

Correct weight selection, at first sight, depends on training parameters such as the loss function, number of epochs, learning rate, etc. However, to train the neural network competently, these weights have to be initialized stochastically. There are several ways to initialize weights, mainly aimed at avoiding vanishing gradients. Nevertheless, stochastic weight initialization can result in neuron redundancy, when different neurons are trained in a similar manner. This is not crucial if the neural network is excessively large; however, in computationally sustainable models, neuron redundancy and 'neural graveyards' are undesirable. Moreover, there are numerous applications where a memory-efficient model is required (e.g. autonomous devices such as sensors, detectors, mobile or portable devices). Such devices require memory- and performance-efficient solutions that learn spontaneously and improve from experience. In this case, adding excessive parameters to the model can be rather questionable for the model's application.

Therefore, once we consider each neuron of the model as an individual learner, the neural network can be seen as an ensemble. It is known that for ensembles, diversity of the learners is desirable to some extent [4]. Thus, we can assume that diversity between neurons, or reinforced diversification during training, can be beneficial for the model.

In this paper we first explore how the diversity between neurons evolves during training and then suggest methods for diversification of the neurons during model training. This is especially relevant in resource-constrained models, where neuron redundancy means reducing the number of predictors. Additionally, we show how weight pre-initialization can affect neural network training at the early steps.

2 Our Approach

Let us start with the term negative correlation (NC) learning [4], which is a simple, yet elegant technique to diversify individual base models in an ensemble and reduce their correlations. Ambiguity decomposition [12] of the loss function raises the possibility of controlling the trade-off between bias, variance, and covariance [38] using a strength parameter to reduce the covariance. In turn, the concept of NC learning originates from the bias-variance decomposition [3, 20] of ensemble learning. In this case, the bias is the output shift from the true value, and the variance is the measure of ensemble ambiguity, which simply means dispersion around the mean output value.

As first demonstrated by Krogh and Vedelsby [23], the quadratic error of the ensemble prediction is always less than or equal to the weighted average quadratic error of the individual estimators of the ensemble:

(1)

Later, Brown [4] demonstrated the decomposition of the ensemble error into three components - bias, variance and covariance - and showed the connection between ambiguity and covariance:

(2)

The ensemble ambiguity is nothing less than the variance of the weighted ensemble around the weighted mean. Therefore, higher ambiguity, i.e. decorrelation between the ensemble outputs, is desirable up to some measure.
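For reference, the two decompositions can be written compactly. The notation below is ours, a sketch in standard ensemble notation following [23], [38] and [4], and not necessarily the exact form of Eq. 1 and 2: f_i are the individual estimators with convex weights w_i, \bar{f} is the weighted ensemble output, d is the target, and M is the ensemble size.

% Ambiguity decomposition (Krogh and Vedelsby [23]) and the
% bias-variance-covariance decomposition (Ueda and Nakano [38], Brown [4]),
% written in standard ensemble notation.
\begin{align}
  (\bar{f} - d)^2 &= \sum_i w_i (f_i - d)^2 - \sum_i w_i (f_i - \bar{f})^2, \qquad \bar{f} = \sum_i w_i f_i, \\
  \mathbb{E}\big[(\bar{f} - d)^2\big] &= \overline{\mathrm{bias}}^{\,2}
      + \frac{1}{M}\,\overline{\mathrm{var}}
      + \Big(1 - \frac{1}{M}\Big)\,\overline{\mathrm{covar}}.
\end{align}

The subtracted ambiguity term in the first line is non-negative, which is why the ensemble error never exceeds the weighted average individual error, and the covariance term in the second line is what NC learning, and in our case neuron diversification, aims to reduce.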

Our first trial was to decorrelate the neurons in the hidden layer by penalizing the difference between the mean weight of the neurons and each individual neuron:

(3)

where the penalty is scaled by the regularization strength parameter and n is the number of neurons in a layer.
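As a minimal PyTorch sketch of this idea (our own illustration, not the authors' published code): standard NC learning adds a penalty of the form -(f_i - \bar{f})^2 per learner, and here the same construction is assumed to apply to neuron weight vectors instead of ensemble outputs. The function name and the exact functional form are assumptions based on the description above.

import torch

def nc_weight_penalty(weight: torch.Tensor) -> torch.Tensor:
    # weight: (n_neurons, n_inputs) weight matrix of one hidden layer.
    # Assumed reading of Eq. 3: reward deviation of each neuron from the
    # layer's mean neuron, in the spirit of NC learning applied to weights.
    mean_neuron = weight.mean(dim=0, keepdim=True)         # (1, n_inputs)
    deviation = ((weight - mean_neuron) ** 2).sum(dim=1)   # per-neuron spread
    # Negative sign: adding this term (scaled by the regularization strength)
    # to the loss pushes neurons away from the mean, i.e. decorrelates them.
    return -deviation.mean()

The regularization strength multiplies this term when it is added to the training loss; that combination is sketched in Section 3 below.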

However, it is likely more profitable to compare not only single weights, but weight matrices or, e.g., kernels in convolutional neural networks (CNNs), as trainable kernels represent patterns. Thus, the second way to define diversity is comparing neurons by cosine similarity:

(4)

where the cosine similarity is computed between the weights of the individual neurons and summed to obtain the diversity measure.

In this technique we compare each neuron in the layer with every other neuron and define a diversity measure. However, such an expression has quadratic complexity, which would oppose the idea of the current work, as our intent is fast and efficient training of resource-constrained neural networks.
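A sketch of this pairwise measure (again our own illustration; the use of the absolute value and the normalization by the number of pairs are assumptions):

import torch
import torch.nn.functional as F

def pairwise_cosine_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Assumed reading of Eq. 4: mean absolute pairwise cosine similarity
    # between all neurons of a layer; low values indicate a diverse layer.
    w = F.normalize(weight, dim=1)          # unit-norm neuron vectors
    sim = w @ w.t()                         # (n, n) cosine similarity matrix
    n = weight.shape[0]
    off_diag = sim.abs().sum() - float(n)   # remove the diagonal of self-similarities
    return off_diag / (n * (n - 1))         # O(n^2) pairs, hence quadratic complexity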

Therefore, combining the first two approaches, we introduce and explore another method to define diversity in neural networks:

(5)
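A sketch of this combined measure, under the same assumptions as above: each neuron is compared, by cosine similarity, to the mean neuron of the layer, which keeps the pattern comparison but requires only one pass over the neurons.

import torch
import torch.nn.functional as F

def cosine_to_mean_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Assumed reading of Eq. 5: mean absolute cosine similarity between each
    # neuron and the layer's mean neuron; linear in the number of neurons.
    mean_neuron = weight.mean(dim=0, keepdim=True)           # (1, n_inputs)
    sim = F.cosine_similarity(weight, mean_neuron, dim=1)    # (n_neurons,)
    return sim.abs().mean()                                  # low value = diverse layer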

After observing the training process and the evolution of the diversity measure in the models, we explored the possibility of weight pre-optimization using diversification. In this case, we used Kaiming weight initialization, with further optimization to enlarge the diversity between the weights while keeping the mean and standard deviation of the weight matrix close to the initial values:

(6)

where the terms are the loss, the initial weight mean, the weight mean at training step k, the standard deviation of the initial weight array, and the standard deviation of the weight array at training step k.
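A minimal sketch of this pre-optimization objective, assuming Eq. 6 combines one of the similarity penalties above with quadratic anchors on the weight statistics (the equal weighting of the two anchor terms is our assumption):

import torch

def init_diversification_loss(weight, init_mean, init_std, penalty_fn):
    # Assumed reading of Eq. 6: lower the neuron-similarity penalty while keeping
    # the weight mean and standard deviation close to their initial values.
    stat_drift = (weight.mean() - init_mean) ** 2 + (weight.std() - init_std) ** 2
    return penalty_fn(weight) + stat_drift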

3 Experiments

We perform initial experiments using a DNN in order to study the diversity evolution during model training and demonstrate the effectiveness of the proposed diversification mechanisms.

The experiments were performed on the publicly available benchmark dataset Fashion MNIST [40]. This dataset was chosen as it is suitable for DNN training and has higher variance than the traditional hand-written digits dataset MNIST [25]. We implemented a one-hidden-layer neural network with 16, 32, 64, 128, and 256 neurons in the hidden layer (see Table 1), using the PyTorch library [31]. Otherwise, we used standard parameters for training, including the Adam optimizer [22] with a learning rate of 0.01 and a cross entropy loss function with penalization terms (Eq. 3-5):

(7)

where the terms are the training set, the true distribution, the predicted distribution, the standard deviation of the weight array at training step k, the probability of each event estimated from the training set, and the diversity measure obtained using Eq. 3, 4, or 5.
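The following sketch shows how this setup could look in PyTorch. The network layout matches the description above (one hidden layer, Fashion MNIST input of 28x28 pixels, 10 classes); the additive form of the penalty, the argument names and the default strength value are our assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OneHidden(nn.Module):
    # One-hidden-layer network as described (hidden sizes 16-256 were tested).
    def __init__(self, n_hidden: int = 64, n_in: int = 28 * 28, n_out: int = 10):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.out(F.relu(self.hidden(x.flatten(1))))

def training_step(model, x, y, optimizer, penalty_fn, strength=0.01):
    # Assumed reading of Eq. 7: cross entropy plus the diversity penalty of the
    # hidden layer (Eq. 3, 4 or 5) scaled by the regularization strength.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + strength * penalty_fn(model.hidden.weight)
    loss.backward()
    optimizer.step()
    return loss.item()

With the optimizer from the described setup, torch.optim.Adam(model.parameters(), lr=0.01), any of the penalty sketches from Section 2 can be passed as penalty_fn; the strength value itself should be tuned as discussed in Section 4.3.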

4 Results and Discussion

4.1 Evolving Diversity and Symmetry Breaking

During model training, one can notice sub-optimal accuracy stagnation for several epochs; this can be associated with the existence of local minima on the loss function surface [2, 35]. It can also be associated with symmetry in the neural network layer, which is shown to be a critical point, especially for small neural networks [1, 37]. We found that the model naturally tends to decrease the correlation between the neurons; however, when the model converges to a local minimum with sub-optimal accuracy, the similarity between the neurons rises until the moment when the optimization process surpasses the local minimum and the accuracy increases (see Figure 1). This correlates with the existence of symmetry in the weights: once the weights are symmetrical (correlated) and the number of neurons is constrained, the overall output of the model is likely to be inefficient.

Figure 1: Training curve and Diversity measure (Eq. 4) for the first 50 epochs on Fashion MNIST dataset. DNN with 1 hidden layer of 32 neurons.

4.2 Negative Correlation Learning

Strength    16       32       64       128      256
0.0         54.19    58.25    62.46    69.62    72.10
            55.17    60.17    62.45    68.64    70.65
            56.41    61.25    64.13    70.32    72.27
            54.48    60.81    65.04    70.45    72.83
            53.54    60.04    63.19    70.26    72.36
            54.22    59.46    62.23    70.20    71.64
1.0         50.09    57.49    60.53    69.84    71.65
Table 1: First 10 epochs average test accuracy (%) of the neural network training for various numbers of neurons; hidden layer diversified according to Eq. 5. Rows correspond to different values of the regularization strength, from 0.0 to 1.0.

The experiment above inspired us to study certain ways to decorrelate the neurons in the hidden layer and thus break the symmetry that can appear during the learning process. As we discussed earlier, we consider the output of the neural network as the output of an ensemble. Thus, first, we applied simple NC learning to the individual neurons, rather than to an ensemble of classifiers. The logic behind this experiment was rather straightforward: once the model has a constrained number of parameters to generalize the data, higher variance should help to eliminate redundant neurons, and the overall prediction should be more accurate. As can be seen from Figure 2, the decorrelation mechanism helps to avoid local minima at the early stage of model learning. Nevertheless, decorrelation using NC learning generally did not result in higher accuracy overall. We attribute this to several factors, such as Kaiming weight initialization, which helps to avoid vanishing gradients, and the Adam optimizer, which can handle sparse gradients on noisy data and is thus able to efficiently overcome local minima due to adaptive learning rates for each parameter. Even though these widely used techniques deal with the above-mentioned problem of neuron redundancy, our proposed method can help at the early stages of model training.

Figure 2: Validation accuracy training curves of the model for various values of the regularization strength.

Moreover, with an increasing number of neurons the influence of decorrelation diminishes. This can be explained by the fact that an excessively large NN performs well on low-variance data, and not every neuron is needed for a good prediction. However, in the present work we consider computationally sustainable DNNs, where all the neurons are forced to contribute to the prediction; on the other hand, for complex data a larger number of neurons would be needed to generalize the dataset. Therefore, for more sophisticated problems, neuron diversification may be efficient for a larger number of neurons. In the present case we performed further experiments on the model with 64 neurons in the hidden layer, which we consider sufficient for the given dataset. All the models were trained 10 times to calculate the mean and standard deviation. Table 2 shows the average testing accuracy over the first 10 epochs for the DNN with 64 neurons in the hidden layer trained using negative correlation learning (Eq. 3).

Strength    Train Acc., %    Test Acc., %    Test Acc. STD
0.0         61.46            62.46           2.34
            62.26            62.45           2.34
            63.65            64.13           1.59
            63.12            65.04           1.76
            62.54            63.19           1.03
            64.23            62.23           0.95
1.0         59.56            60.53           1.57
Table 2: First 10 epochs average of the neural network training; hidden layer diversified according to Eq. 3. Rows correspond to different values of the regularization strength, from 0.0 to 1.0.

4.3 Pairwise Cosine Similarity Diversification

It has to be noted that, unlike in [4], where a universal diversification strength parameter was found for ensembles of all sizes, in our case the optimal value depends on the size of the hidden layer and should rather be considered per neuron. On the other hand, it is loss-dependent, which means that, ideally, it has to be of the same or one order of magnitude smaller than the output of the loss function during training; otherwise, the reciprocal diversity measure will be optimized rather than the model loss (e.g. cross entropy). Thus, the reader should consider tuning this value for each particular neural network and loss function. The optimal strength can be approximately estimated as:

(8)

where n is the number of neurons in the hidden layer and the remaining quantity is the order of magnitude of the loss function.
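Assuming Eq. 8 simply divides the order of magnitude of the loss by the number of hidden neurons, which is our reading of the sentence above, a rough estimate could look like this:

import math

def estimate_strength(loss_value: float, n_neurons: int) -> float:
    # Assumed form of Eq. 8: order of magnitude of the current loss, divided by
    # the number of neurons in the hidden layer.
    loss_magnitude = 10.0 ** math.floor(math.log10(abs(loss_value)))
    return loss_magnitude / n_neurons

For example, a cross entropy around 2.3 with 64 hidden neurons gives roughly 1/64, i.e. about 0.016.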

In addition to NC learning, we introduced a diversity measure based on the cosine similarity between the neurons (Eq. 4). Such a technique seems promising for several reasons: first, rather than mean values, we compare patterns, which can be useful for more complex models such as CNNs or transformers; moreover, each neuron is compared with every other neuron, so the method is intended to be more robust. Nevertheless, at least for the DNN, the results were comparable with NC learning (see Table 3); additionally, the method has quadratic complexity, which opposes our initial aim to train small models faster and more efficiently.

Strength    Train Acc., %    Test Acc., %    Test Acc. STD
0.0         55.94            62.83           1.18
            56.52            64.20           1.11
            57.98            65.76           0.92
            55.48            59.96           0.71
            56.47            49.20           1.52
            56.47            44.60           1.06
            42.66            38.61           1.10
Table 3: First 10 epochs average of the neural network training; hidden layer diversified according to Eq. 4. Rows correspond to different values of the regularization strength.

4.4 Reaching Linear Complexity

To enable our diversification method to compare patterns while avoiding quadratic complexity, we combined the first concept of NC learning with the second one and implemented a diversity measure based on penalization of the cosine similarity between each neuron in the hidden layer and the layer's mean neuron (Eq. 5). The overhead of the algorithm (see Table 4) is comparable to that of standard regularization. Moreover, it has shown the highest accuracy gain among the three.

Strength    Train Acc., %    Test Acc., %    Test Acc. STD
0.0         61.54            63.15           2.08
            62.37            63.30           1.63
            63.60            64.54           1.25
            64.87            64.95           1.66
            60.36            62.14           1.66
            52.54            55.86           0.45
            41.26            42.71           1.32
Table 4: First 10 epochs average of the neural network training; hidden layer diversified according to Eq. 5. Rows correspond to different values of the regularization strength.

4.5 Iterative Diversified Weight Initialization

However, it can be noticed that, occasionally, the model does not behave exactly as expected during training, producing outlying learning curves. This is most likely associated with stochastic weight initialization; in this case Kaiming initialization is used [13]. Kaiming initialization is widely used for neural networks with ReLU activation functions and is related to the nonlinearity of the ReLU activation function, which makes it non-differentiable at zero. The weights in this case are initialized stochastically with a variance that depends on the number of neurons n:

Var(w) = 2/n    (9)

It is fair to suggest that the correlation between the initialized weights can play a significant role in the model's learning process. Indeed, in Figure 1 it is clearly seen that the model gained most of its accuracy while reducing the correlation between neurons during the first few epochs. However, the aim of weight initialization is to prevent layer activation outputs from exploding or vanishing during the course of a forward pass through a deep neural network. Usually weights are initialized stochastically with small values to avoid vanishing gradients, especially if saturating activation functions (e.g. sigmoid or tanh) are used. Thus, to obtain stochastically initialized, yet decorrelated, weights we introduced iteratively diversified weight initialization, using a custom loss function based on Eq. 6. The logic behind such initialization is to reduce the diversity measure between the weights and at the same time keep the weight mean and weight standard deviation close to those originally obtained from Kaiming initialization.
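A minimal sketch of this procedure, assuming the pre-optimization runs a few gradient steps on an Eq. 6-style objective directly on the layer's weight tensor (the choice of optimizer and the learning rate are illustrative assumptions; the five iterations echo the 'pre-optimized for 5 epochs' setting in Table 5):

import torch
import torch.nn as nn

def diversified_kaiming_init(layer: nn.Linear, penalty_fn, steps: int = 5, lr: float = 1e-3):
    # Start from Kaiming-initialized weights, then take a few gradient steps that
    # lower the neuron-similarity penalty (Eq. 3-5 style) while anchoring the
    # weight mean and standard deviation to their freshly initialized values (Eq. 6).
    nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')
    init_mean = layer.weight.mean().detach()
    init_std = layer.weight.std().detach()
    opt = torch.optim.Adam([layer.weight], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        drift = (layer.weight.mean() - init_mean) ** 2 + (layer.weight.std() - init_std) ** 2
        loss = penalty_fn(layer.weight) + drift   # Eq. 6-style objective
        loss.backward()
        opt.step()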

Strength    Train Acc., %    Test Acc., %    Test Acc. STD
0.0         29.54            34.23           2.04
            42.43            43.43           1.81
            43.92            45.65           1.53
            44.65            47.01           1.24
            38.32            39.83           1.06
1.0         36.64            38.94           1.37
10.0        32.50            37.57           1.41
Table 5: First 5 epochs average of the neural network training initialized with decorrelated weights according to Eq. 6, pre-optimized for 5 epochs. Rows correspond to different values of the regularization strength, from 0.0 to 10.0.

5 Conclusion

In this paper we showed how to explore and tame the diversity of neurons in the hidden layer. We studied how the correlation between the neurons evolves during training and what its effect on prediction accuracy is. It appears that once the model converges to a local minimum on the loss landscape, the correlation between the neurons increases up to the point when the optimization process overcomes the local minimum. Thus, we introduced three methods to dynamically reinforce diversification and thus decorrelate a neural network layer. The concept of negative correlation suggested by Brown [4] was reviewed and expanded. Instead of decorrelating individual neural networks in an ensemble, we diversified the neurons in the hidden layer using three techniques: negative correlation learning, pairwise cosine similarity, and cosine similarity around the mean.

The first technique originates from neural network ensembles and shows decent performance in our example using a DNN; however, for more sophisticated models, such as CNNs and transformers, the second and third techniques are likely to be more advantageous insofar as they can compare patterns. Additionally, to reach correct weight selection, we introduced iterative weight optimization using weight diversification. It was shown that such techniques are suitable for the fast training of small models and notably affect their accuracy at the early stage, which is a small, yet important step towards the development of a strategy for energy-efficient training of neural networks.

Our future plans for neural network diversification primarily consist in applying the above-described diversification techniques to more sophisticated models in order to explore the possibility of improving training speed and reducing the number of training parameters. Popular architectures such as transformers can benefit from individual head diversification in the multi-head attention block, as multiple heads are intended to learn various representations. Furthermore, we are planning to explore more pattern-oriented techniques for defining diversity between neurons to enable efficient application of diversification in CNNs.

Acknowledgment

This research is supported by the Czech Ministry of Education, Youth and Sports from the Czech Operational Programme Research, Development, and Education, under grant agreement No. CZ.02.1.01/0.0/0.0/15003/0000421, and by the Czech Science Foundation (GAČR 18-18080S).

References

  • [1] Y. Arjevani and M. Field (2020) Symmetry and critical points for a model shallow neural network. External Links: 2003.10576 Cited by: §4.1.
  • [2] A. Atakulreka and D. Sutivong (2007) Avoiding local minima in feedforward neural networks by simultaneous learning. In AI 2007: Advances in Artificial Intelligence, M. A. Orgun and J. Thornton (Eds.), Berlin, Heidelberg, pp. 100–109. External Links: ISBN 978-3-540-76928-6 Cited by: §4.1.
  • [3] Y. Bian and H. Chen (2021) When does diversity help generalization in classification ensembles?. External Links: 1910.13631 Cited by: §2.
  • [4] G. Brown (2004) Diversity in neural network ensembles. Technical report . Cited by: §1, §2, §2, §4.3, §5.
  • [5] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei (2020) Language models are few-shot learners. External Links: 2005.14165 Cited by: §1.
  • [6] F. Chollet (2017) Xception: deep learning with depthwise separable convolutions. External Links: 1610.02357 Cited by: §1.
  • [7] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber (2011) Flexible, high performance convolutional neural networks for image classification. In Twenty-second international joint conference on artificial intelligence, Cited by: §1.
  • [8] G. Cybenko (1989) Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2 (4), pp. 303–314. Cited by: §1.
  • [9] H. Gao, L. Cai, and S. Ji (2020) Adaptive convolutional relus. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 3914–3921. Cited by: §1.
  • [10] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §1.
  • [11] X. Glorot, A. Bordes, and Y. Bengio (2011) Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 315–323. Cited by: §1.
  • [12] L. K. Hansen and P. Salamon (1990) Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence 12 (10), pp. 993–1001. Cited by: §2.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. External Links: 1512.03385 Cited by: §1, §4.5.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034. Cited by: §1.
  • [15] G. E. Hinton, S. Osindero, and Y. Teh (2006) A fast learning algorithm for deep belief nets. Neural computation 18 (7), pp. 1527–1554. Cited by: §1.
  • [16] G. E. Hinton and R. R. Salakhutdinov (2006) Reducing the dimensionality of data with neural networks. science 313 (5786), pp. 504–507. Cited by: §1.
  • [17] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber (2001) Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamical Recurrent Neural Networks, S. C. Kremer and J. F. Kolen (Eds.), Cited by: §1.
  • [18] S. Hochreiter (1998) The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6 (02), pp. 107–116. Cited by: §1.
  • [19] K. Hornik (1991) Approximation capabilities of multilayer feedforward networks. Neural networks 4 (2), pp. 251–257. Cited by: §1.
  • [20] P. Izmailov, D. Podoprikhin, T. Garipov, D. Vetrov, and A. G. Wilson (2019) Averaging weights leads to wider optima and better generalization. External Links: 1803.05407 Cited by: §2.
  • [21] D. Jha, A. Yazidi, M. A. Riegler, D. Johansen, H. D. Johansen, and P. Halvorsen (2021) LightLayers: parameter efficient dense and convolutional layers for image classification. External Links: 2101.02268 Cited by: §1.
  • [22] D. P. Kingma and J. Ba (2017) Adam: a method for stochastic optimization. External Links: 1412.6980 Cited by: §3.
  • [23] A. Krogh and J. Vedelsby (1995) Neural network ensembles, cross validation, and active learning. Advances in Neural Information Processing Systems 7, pp. 231. Cited by: §2.
  • [24] A. Lacoste, A. Luccioni, V. Schmidt, and T. Dandres (2019) Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Cited by: §1.
  • [25] Y. LeCun and C. Cortes (2010) MNIST handwritten digit database. Note: http://yann.lecun.com/exdb/mnist/ External Links: Link Cited by: §3.
  • [26] L. Lu (2020-06) Dying relu and initialization: theory and numerical examples. Communications in Computational Physics 28 (5), pp. 1671–1706. External Links: ISSN 1991-7120, Link, Document Cited by: §1.
  • [27] Z. Lu, H. Pu, F. Wang, Z. Hu, and L. Wang (2017) The expressive power of neural networks: a view from the width. External Links: 1709.02540 Cited by: §1, §1.
  • [28] B. Luijten, R. Cohen, F. J. de Bruijn, H. A. Schmeitz, M. Mischi, Y. C. Eldar, and R. J. van Sloun (2019) Deep learning for fast adaptive beamforming. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1333–1337. Cited by: §1.
  • [29] J. Martens and I. Sutskever (2011) Learning recurrent neural networks with hessian-free optimization. In ICML, Cited by: §1.
  • [30] L. Marziale, G. G. Richard, and V. Roussev (2007) Massive threading: using gpus to increase the performance of digital forensics tools. Digital Investigation 4, pp. 73 – 81. External Links: ISSN 1742-2876, Document, Link Cited by: §1.
  • [31] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. External Links: Link Cited by: §3.
  • [32] P. Ramachandran, B. Zoph, and Q. V. Le (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941. Cited by: §1.
  • [33] D. Scherer, A. Müller, and S. Behnke (2010) Evaluation of pooling operations in convolutional architectures for object recognition. In International conference on artificial neural networks, pp. 92–101. Cited by: §1.
  • [34] J. Schmidhuber (1992) Learning to control fast-weight memories: an alternative to dynamic recurrent networks. Neural Computation 4 (1), pp. 131–139. Cited by: §1.
  • [35] G. Swirszcz, W. M. Czarnecki, and R. Pascanu (2017) Local minima in training of neural networks. External Links: 1611.06310 Cited by: §4.1.
  • [36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2014) Going deeper with convolutions. External Links: 1409.4842 Cited by: §1.
  • [37] K. Tayal, C. Lai, V. Kumar, and J. Sun (2020) Inverse problems, deep learning, and symmetry breaking. External Links: 2003.09077 Cited by: §4.1.
  • [38] N. Ueda and R. Nakano (1996) Generalization error of ensemble estimators. In Proceedings of International Conference on Neural Networks (ICNN’96), Vol. 1, pp. 90–95. Cited by: §2.
  • [39] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol (2008) Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. Cited by: §1.
  • [40] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §3.
  • [41] B. Xu, N. Wang, T. Chen, and M. Li (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853. Cited by: §1.