Deep neural networks are a powerful tool for feature learning and extraction given their ability to represent and model high-level abstractions in highly complex data. They have shown considerable capability in producing features that enable state-of-the-art performance on complex tasks such as speech recognition [1, 2] and object recognition [3, 4, 5, 6, 7, 8]. Recent advances in improving the performance of deep neural networks for feature learning and extraction have focused on areas such as network regularization [9, 10, 11, 12, 13] and deeper architectures [6, 14, 15], where the goal is to learn more representative features that increase task accuracy.
Despite the powerful feature learning and extraction capabilities of deep neural networks, they are rarely employed on embedded devices such as video surveillance cameras, smartphones, and wearable devices. This difficult migration of deep neural networks into embedded applications for feature extraction stems largely from the fact that, unlike the highly powerful distributed computing systems and GPUs often leveraged for deep learning, the low-power CPUs commonly used in embedded systems simply do not have the computational power to make deep neural networks a feasible solution for feature extraction.
Much of the focus on migrating deep neural networks for feature learning and extraction to embedded systems has been on designing custom embedded processing units dedicated to accelerating deep neural networks [16, 17, 18]. However, such an approach greatly limits the flexibility of the deep neural network architectures that can be used. Furthermore, it requires additional hardware, which adds to the cost and complexity of the embedded system. Improving the efficiency of the deep neural networks themselves, on the other hand, is much less explored, with considerably fewer strategies proposed so far. In particular, very little exploration has been conducted into efficient neural connectivity formation for feature learning and extraction, which holds considerable promise in achieving highly efficient deep neural network architectures that can be used in embedded applications.
One way to address this challenge is to draw inspiration from the brain, which has an uncanny ability to represent information efficiently. In particular, we are inspired by the way the brain develops synaptic connectivity between neurons. Recently, in a pivotal paper, Hill et al. [20] collected data from living brain tissue of Wistar rats and used it to construct a partial map of a rat brain. Based on this map, Hill et al. came to a very surprising conclusion: the formation of specific functional synaptic connectivity in neocortical neural microcircuits is stochastic in nature. This is in sharp contrast to the way deep neural networks are formed, where connectivity is largely deterministic and pre-defined.
Motivated by these findings of random neural connectivity formation and by the efficient information representation capabilities of the brain, we proposed the learning of efficient feature representations via StochasticNets [21], where the key idea is to leverage random graph theory [22, 23] to form sparsely-connected deep neural networks via stochastic connectivity between neurons. The connection sparsity, particularly in deep convolutional networks, allows for more efficient feature learning and extraction, since sparse receptive fields require less computation and memory access. We will show that these sparsely-connected deep neural networks, while computationally efficient, can still maintain the same accuracies as traditional deep neural networks. Furthermore, the StochasticNet architecture for feature learning and extraction presented in this work can also benefit from all of the approaches used for traditional deep neural networks, such as data augmentation and stochastic pooling, to further improve performance.
The paper is organized as follows. First, a review of random graph theory is presented in Section 2. The theory and design considerations behind forming StochasticNets as random graph realizations are discussed in Section 3, and feature learning via deep convolutional StochasticNets is described in Section 4. Experimental results, in which deep convolutional StochasticNets are trained to learn abstract features on the CIFAR-10 dataset [24] and the learned features are extracted from images to perform classification on the SVHN [25] and STL-10 [26] datasets, are presented in Section 5. Finally, conclusions are drawn in Section 6.
2 Review of Random Graph Theory
In this work, we leverage random graph theory to form the neural connectivity of deep neural networks in a stochastic manner, such that the resulting networks are sparsely connected while maintaining feature representation capabilities. As such, it is important to first provide a general overview of random graph theory for context. In random graph theory, a random graph is defined as a probability distribution over graphs [27]. A number of different random graph models have been proposed in the literature.
A commonly studied random graph model is that proposed by Gilbert [22], in which a random graph can be expressed as $\mathcal{G}(n,p)$: a graph on $n$ vertices in which each possible edge is said to occur independently with a probability of $p$, where $0 < p < 1$. This random graph model was generalized by Kovalenko [28], in which a random graph can be expressed as $\mathcal{G}(V, p(i,j))$, where $V$ is a set of vertices and the edge connectivity between two vertices $v_i$ and $v_j$ in the graph is said to occur with a probability of $p(i,j)$, where $0 < p(i,j) < 1$.
Therefore, based on this generalized random graph model, realizations of random graphs can be obtained by starting with a set of vertices $V$ and randomly adding edges between vertex pairs $(v_i, v_j)$ independently, each with probability $p(i,j)$. A number of realizations of a random graph are provided in Figure 2 for illustrative purposes. It is worth noting that, because of the underlying probability distribution, the generated realizations of the random graph often exhibit differing edge connectivity.
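As a concrete illustration, sampling one realization of this generalized random graph model takes only a few lines. The sketch below is ours, not from the paper: the function name is hypothetical, and the edge-probability model is passed in as a callable `p(i, j)`.

```python
import random

def realize_random_graph(n, p, seed=None):
    """Sample one realization of a generalized random graph.

    n: number of vertices; p: callable p(i, j) giving the probability
    that an edge forms between vertices i and j (a hypothetical helper
    standing in for the paper's edge-probability model).
    """
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p(i, j):   # each edge drawn independently
                edges.add((i, j))
    return edges

# Two realizations drawn with different seeds generally differ in
# their edge sets, mirroring the differing realizations noted above.
g1 = realize_random_graph(8, lambda i, j: 0.5, seed=1)
g2 = realize_random_graph(8, lambda i, j: 0.5, seed=2)
```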
Given that deep neural networks can be fundamentally expressed and represented as graphs, where the neurons are vertices and the neural connections are edges, one intriguing idea for introducing stochastic connectivity into the formation of deep neural networks is to treat their formation as particular realizations of random graphs, which we describe in greater detail in the next section.
3 StochasticNets: Deep Neural Networks as Random Graph Realizations
Figure 3. Example random graph representing a section of a deep feed-forward neural network: (a) RG representation and (b) FFNN graph realization. Every neuron $v_i$ may be connected to neuron $v_j$ with probability $p(i,j)$ based on random graph theory. To enforce the properties of a general deep feed-forward neural network, there is no neural connectivity between nodes that are not in adjacent layers. As shown in (b), the neural connectivity of nodes in the realization varies, since connections are drawn from a probability distribution.
Let us represent a deep neural network as a random graph $\mathcal{G}(V, p(i,j))$, where $V$ is the set of neurons $\{v_1, \ldots, v_n\}$, with $v_i$ denoting the $i$-th neuron and $n$ denoting the total number of neurons in the deep neural network, and $p(i,j)$ is the probability that a neural connection occurs between neurons $v_i$ and $v_j$. It is worth noting that, since neural networks are directed graphs, the probability $p(i,j)$ is represented in a directed way, from source node $v_i$ to destination node $v_j$. As such, one can form a deep neural network as a realization of the random graph by starting with the set of neurons $V$ and randomly adding neural connections between neurons independently, each with probability $p(i,j)$ as defined above.
While one can form practically any type of deep neural network as a random graph realization, an important design consideration is that different types of deep neural networks have fundamental properties in their network architecture that must be taken into account and preserved in the random graph realization. Therefore, to ensure that the fundamental properties of the network architecture of a given type of deep neural network are preserved, the probability $p(i,j)$ must be designed in such a way that these properties are enforced in the resultant random graph realization. Let us consider a general deep feed-forward neural network. First, in a deep feed-forward neural network, there can be no neural connections between non-adjacent layers. Second, there can be no neural connections between neurons in the same layer. Therefore, to enforce these two properties, $p(i,j) = 0$ when $l(j) \neq l(i) + 1$, where $l(i)$ encodes the layer number associated with node $v_i$. An example random graph based on this random graph model for representing general deep feed-forward neural networks is shown in Figure 3(a), with an example feed-forward neural network graph realization shown in Figure 3(b). Note that the neural connectivity for each neuron may differ due to the stochastic nature of neural connection formation.
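The layer constraint just described can be sketched directly: edges are only ever sampled between adjacent layers, so both feed-forward properties hold by construction. The helper below is ours (a minimal sketch, assuming a constant connection probability `p` for simplicity rather than a full per-edge distribution).

```python
import random

def realize_feedforward(layer_sizes, p, seed=None):
    """Realize a feed-forward network as a directed random graph.

    Connections are sampled only between adjacent layers, which
    enforces the two feed-forward properties: no edges within a layer
    and no edges between non-adjacent layers.
    """
    rng = random.Random(seed)
    conns = []
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):          # source neuron in layer l
            for j in range(layer_sizes[l + 1]):  # destination in layer l+1
                if rng.random() < p:
                    conns.append((l, i, l + 1, j))
    return conns

# With p = 1.0 every permitted connection is formed, recovering the
# dense feed-forward topology as a special case.
net = realize_feedforward([4, 3, 2], p=1.0, seed=0)
```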
4 Feature Learning via Deep Convolutional StochasticNets
As deep convolutional neural networks are one of the most commonly used types of deep neural networks for feature learning, let us investigate whether efficient feature learning can be achieved via deep convolutional StochasticNets. Deep convolutional neural networks provide a general abstraction of the input data by applying sequential convolutional layers to the input data (e.g., an input image). The goal of the convolutional layers in a deep neural network is to extract discriminative features to feed into the classifier, with the fully connected layers playing the role of classification. Therefore, the combination of receptive fields in the convolutional layers can be considered the feature extractor in these models. The receptive fields' parameters must be trained to find the optimal parameters leading to the most discriminative features.
However, learning those parameters is not always possible, due to computational complexity or a lack of sufficient training data, in which case general receptive fields (i.e., convolutional layers) that do not require learning are desirable. The computational complexity of extracting features is another concern that should be addressed. Essentially, extracting features in a deep convolutional neural network is a sequence of convolutional processes, which can be represented as multiplications and summations, and the number of operations depends on the number of receptive field parameters. Motivated by these reasons, sparsifying the receptive fields while maintaining their generality is highly desirable.
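To make the cost argument concrete, the multiply-add count of a single convolutional layer scales directly with the number of receptive-field parameters, so forming only a fraction of the connections scales the operation count by that fraction. The helper below is a hypothetical sketch of ours (assuming a 'valid' convolution with stride 1), not code from the paper.

```python
def conv_mult_adds(in_h, in_w, in_ch, out_ch, k, active_frac=1.0):
    """Approximate multiply-adds for one 'valid' convolutional layer.

    Each output value sums over k*k*in_ch receptive-field weights; a
    sparsely connected receptive field with a fraction `active_frac`
    of its connections formed scales the cost accordingly.
    """
    out_h, out_w = in_h - k + 1, in_w - k + 1
    weights_per_field = k * k * in_ch
    return int(out_h * out_w * out_ch * weights_per_field * active_frac)

dense = conv_mult_adds(32, 32, 3, 32, 5)         # dense receptive fields
sparse = conv_mult_adds(32, 32, 3, 32, 5, 0.5)   # 50% connectivity
```

Halving the connectivity halves the multiply-adds, which is the efficiency lever StochasticNets exploit.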
To this end, we sparsify the receptive fields via the StochasticNet framework to provide efficient deep feature learning. First, in addition to the design considerations for $p(i,j)$ presented in the previous section to enforce the properties of deep feed-forward neural networks, additional considerations must be taken to preserve the properties of deep convolutional neural networks, which are a type of deep feed-forward neural network.
Specifically, the neural connectivity for each randomly realized receptive field in the deep convolutional StochasticNet is based on a probability distribution, with the neural connectivity configuration thus being shared amongst the different small neural collections for a given randomly realized receptive field. An example of a realized deep convolutional StochasticNet is shown in Figure 4. As seen, the neural connectivity of one randomly realized receptive field can be completely different from that of another. The response of each randomly realized receptive field produces an output channel in the next layer.
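This sharing of one sparse connectivity configuration across spatial positions can be sketched as a binary mask applied to the kernel before a standard sliding-window convolution. The function below is an illustrative sketch of ours (names hypothetical; single-channel, 'valid' convolution), not the paper's implementation.

```python
import numpy as np

def sparse_conv2d(image, kernel, mask):
    """'Valid' 2-D convolution with a sparsely connected receptive field.

    `mask` is a binary array of the kernel's shape: the same sparse
    connectivity configuration is shared by every spatial position,
    mirroring how a realized receptive field is reused across the image.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    sparse_k = kernel * mask          # zero out unformed connections
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * sparse_k)
    return out

img = np.arange(16.0).reshape(4, 4)
k = np.ones((3, 3))
full = sparse_conv2d(img, k, np.ones((3, 3)))   # dense receptive field
half = sparse_conv2d(img, k, np.eye(3))         # sparse receptive field
```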
A realized deep convolutional StochasticNet can then be trained to learn efficient feature representations via supervised learning using labeled data. One can then use the trained StochasticNet for extracting a set of abstract features from input data.
4.1 Relationship to Other Methods
While a number of stochastic strategies for improving deep neural network performance have been previously introduced, such as Dropout [30], DropConnect [10], and HashNets [19], it is very important to note that the proposed StochasticNets are fundamentally different from these existing strategies. StochasticNets' main contribution deals with the formation of neural connectivity of individual neurons to construct efficient deep neural networks that are inherently sparse prior to training, while previous stochastic strategies deal either with the grouping of existing neural connections to explicitly enforce sparsity [19], or with the removal/introduction of neural connectivity for regularization during training [30, 10]. More specifically, a StochasticNet is a realization of a random graph formed prior to training; as such, the connections in the network are inherently sparse, and are permanent and do not change during training. This is very different from Dropout [30] and DropConnect [10], where activations and connections are temporarily removed during training and put back during testing for regularization purposes only, leaving the resulting neural connectivity of the network dense. There is no notion of 'dropping' in StochasticNets: only a subset of possible neural connections are formed in the first place, prior to training, and the resulting network connectivity is sparse.
StochasticNets are also very different from HashNets [19], where connection weights are randomly grouped into hash buckets, with each bucket sharing the same weight, to explicitly sparsify the network: there is no notion of grouping/merging in StochasticNets, since the formed networks are naturally sparse due to the formation process. In fact, stochastic strategies such as HashNets, Dropout, and DropConnect can be used in conjunction with StochasticNets.
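The distinction between a permanent, pre-training mask and a per-step temporary mask can be sketched in a few lines (illustrative only; all names are ours): a StochasticNet-style mask is drawn once and never changes, whereas a DropConnect-style mask is redrawn every training step and discarded at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# StochasticNet-style: one sparse connectivity mask, fixed before
# training and reused unchanged at every step and at test time.
stochastic_mask = rng.random(shape) < 0.5

def dropconnect_style_mask():
    # DropConnect-style: a fresh temporary mask per training step;
    # the full dense weight matrix is restored at test time.
    return rng.random(shape) < 0.5

step1, step2 = dropconnect_style_mask(), dropconnect_style_mask()
```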
Table 1. Comparison of network models in terms of network formation, training, and testing.
5 Experimental Results
To investigate the efficacy of efficient feature learning via StochasticNets, we form deep convolutional StochasticNets and train them on the CIFAR-10 [24] image dataset to generate generic features. Based on the trained StochasticNets, features are then extracted for the SVHN [25] and STL-10 [26] image datasets, and image classification performance using these extracted deep features within a neural network classifier framework is evaluated in a number of different ways. It is important to note that the main goal is to investigate the efficacy of feature learning via StochasticNets and the influence of stochastic connectivity parameters on feature representation performance, not to obtain maximum absolute classification performance; the performance of StochasticNets can therefore be further optimized through additional techniques such as data augmentation and network regularization.
The CIFAR-10 image dataset [24] consists of 50,000 training images categorized into 10 different classes (5,000 images per class) of natural scenes. Each image is an RGB image of size 32×32. The MNIST image dataset [29] consists of 60,000 training images and 10,000 test images of handwritten digits. Each image is a binary image of size 28×28, with the handwritten digits normalized with respect to size and centered in each image. The SVHN image dataset [25] consists of 604,388 training images and 26,032 test images of digits in natural scenes. Each image is an RGB image of size 32×32. Finally, the STL-10 image dataset [26] consists of 5,000 labeled training images and 8,000 labeled test images categorized into 10 different classes (500 training images and 800 test images per class) of natural scenes. Each image is an RGB image of size 96×96. The STL-10 images were resized to 32×32 so that the same network configuration could be used for all experimental datasets, for consistency purposes. Note that the 100,000 unlabeled images in the STL-10 image dataset were not used in this paper.
5.0.2 StochasticNet Configuration
The deep convolutional StochasticNets used in this paper are realized based on the LeNet-5 deep convolutional neural network architecture [29], and consist of three convolutional layers with 32, 32, and 64 receptive fields for the first, second, and third convolutional layers, respectively, and one hidden layer of 64 neurons, with all neural connections being randomly realized based on probability distributions. The neural connectivity formation for the deep convolutional StochasticNet realizations is achieved via acceptance-rejection sampling, and can be expressed as
$$c(i \rightarrow j) = \big[\, u \leq \theta \, p(i,j) \,\big], \qquad u \sim \mathcal{U}(0,1),$$
where $c(i \rightarrow j)$ is the neural connectivity from node $v_i$ to node $v_j$, $[\cdot]$ is the Iverson bracket, and $\theta$ encodes the sparsity of neural connectivity in the StochasticNet.
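Under one plausible reading of this acceptance-rejection step (the exact functional form and variable names below are our assumptions, not the paper's), a connection is accepted when a uniform draw falls inside the scaled acceptance region, so the sparsity parameter directly scales the expected fraction of formed connections:

```python
import random

def realize_connection(prob, theta, rng):
    """Acceptance-rejection sampling of one neural connection:
    returns 1 (connection formed) when a uniform draw u falls inside
    the acceptance region [u <= theta * prob] (Iverson bracket), so
    theta directly scales overall sparsity."""
    u = rng.random()
    return int(u <= theta * prob)

rng = random.Random(42)
# Expected acceptance rate is roughly theta * prob = 0.8 here.
mask = [realize_connection(0.8, 1.0, rng) for _ in range(1000)]
```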
Figure 5: (a) Gaussian connectivity model; (b) uniform connectivity model.
While it is possible to take advantage of any arbitrary distribution to construct deep convolutional StochasticNet realizations, for the purposes of this paper two different spatial neural connectivity models were explored for the convolutional layers: i) a uniform connectivity model, in which every possible connection within the receptive field occurs with equal probability,
$$p(i,j) \propto 1, \qquad j \in R_i,$$
and ii) a Gaussian connectivity model,
$$p(i,j) \propto \exp\!\left(-\frac{\|x_j - \mu_i\|^2}{2\sigma^2}\right), \qquad j \in R_i,$$
where the mean $\mu_i$ is at the center of the receptive field and the standard deviation $\sigma$ is set to be a third of the receptive field size. In this study, $R_i$ is defined as a spatial region around node $v_i$, such that a neural connectivity of 100% yields a receptive field equivalent to the dense receptive field used in ConvNets. Finally, for comparison purposes, the conventional ConvNet used as the baseline is configured with the same network architecture using dense receptive fields.
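A quick sketch of the Gaussian model's per-position connection probabilities (a hypothetical helper of ours, normalized so the field's center has probability 1, with σ equal to a third of the field size as stated above):

```python
import math

def gaussian_connection_probs(k):
    """Connection probabilities over a k x k receptive field.

    The mean sits at the field's center and the standard deviation is
    a third of the receptive field size; values are normalized so the
    center position has probability 1.
    """
    c = (k - 1) / 2.0
    sigma = k / 3.0
    probs = [[math.exp(-((y - c) ** 2 + (x - c) ** 2) / (2 * sigma ** 2))
              for x in range(k)] for y in range(k)]
    return probs

# Probabilities decay with distance from the receptive-field center.
p = gaussian_connection_probs(5)
```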
Figure 6: (a) SVHN; (b) STL-10.
5.1 Number of Neural Connections
An experiment was conducted to illustrate the impact of the number of neural connections on the feature representation capabilities of StochasticNets. Figure 6 shows the training and test error versus the number of neural connections in the network for the STL-10 dataset. The neural connection probability was varied to achieve the desired number of neural connections in order to test its effect on feature representation capabilities.
Figure 6 plots the training and test error vs. the neural connectivity percentage relative to the baseline ConvNet, for two different spatial neural connectivity models: i) the uniform connectivity model and ii) the Gaussian connectivity model. It can be observed that classification using features from the StochasticNet achieves better or similar test error than using features from the ConvNet, even when the StochasticNet has fewer neural connections. In particular, classification using StochasticNet features achieves the same test error as ConvNet features when the number of neural connections in the StochasticNet is half that of the ConvNet. It can also be observed that, although increasing the number of neural connections lowers the training error, it does not reduce the test error. As such, the proposed StochasticNets can mitigate the over-fitting associated with deep neural networks while decreasing the number of neural connections, which greatly reduces the number of computations and thus results in more efficient feature learning and extraction. Finally, there is a noticeable difference between the training and test errors obtained with the Gaussian connectivity model and those obtained with the uniform connectivity model, which indicates that the choice of neural connectivity probability distribution can have a noticeable impact on feature representation capabilities.
Figures 7 and 8: (a) SVHN; (b) STL-10.
5.2 Comparisons with ConvNet for Feature Learning
Motivated by the results shown in Figure 6, a comprehensive experiment was conducted to investigate the efficacy of feature learning via StochasticNets on CIFAR-10, using the learned features to classify the SVHN and STL-10 image datasets. Deep convolutional StochasticNet realizations were formed with 75% neural connectivity using both the Gaussian and uniform connectivity models, and compared against a conventional ConvNet. The performance of the StochasticNets and ConvNets was evaluated over 25 trials, and the reported results are based on the best of the 25 trials in terms of training error. Figures 7 and 8 show the training and test error of classification on the SVHN and STL-10 datasets, using deep features learned from CIFAR-10 with StochasticNets and ConvNets, for the uniform and Gaussian connectivity models, respectively. It can be observed that, with the uniform connectivity model, the test error of classification using StochasticNet features, with just 75% of the neural connections of the ConvNet, is approximately the same as with ConvNet features for both the SVHN and STL-10 datasets (with a 0.5% test error reduction for STL-10). With the Gaussian connectivity model, the test error using StochasticNet features, again with just 75% of the neural connections, is approximately the same as with ConvNet features for the SVHN dataset (a 1% relative test error reduction). More interestingly, the test error using StochasticNet features is reduced by 4.5% compared to the ConvNet for the STL-10 dataset.
Furthermore, the gap between the training and test errors of classification using features extracted using the StochasticNets is less than that of the ConvNets, which would indicate reduced overfitting in the StochasticNets.
These results illustrate the efficacy of feature learning via StochasticNets in providing efficient feature learning and extraction while preserving feature representation capabilities, which is particularly important for real-world applications where efficient feature extraction performance is necessary.
Figure 9: (a) Gaussian connectivity model; (b) uniform connectivity model.
5.3 Training Set Size
An experiment was conducted to illustrate the impact of the size of the training set on the feature representation capabilities of StochasticNets. To perform this experiment, deep convolutional StochasticNet realizations were formed with 75% neural connectivity using both the Gaussian and uniform connectivity models, and different percentages of the CIFAR-10 dataset were used for feature learning. The trained StochasticNet realizations were then used to perform classification on the STL-10 dataset to evaluate training and test error performance.
Figure 9 shows the training and testing error vs. the training set size for the two tested connectivity models. It can be observed that the features extracted using StochasticNets provide comparable classification performance even when only 30% of the training data is used in the case of the Gaussian connectivity model. Furthermore, the test error increased by only 3% when just 10% of the training data was used. More interesting is the case of the uniform connectivity model, where the features extracted using StochasticNets provide comparable classification performance, with no increase in test error, even when only 10% of the training data is used. This illustrates the efficacy of feature learning via StochasticNets in situations where the training set is small.
5.4 Relative Feature Extraction Speed vs. Number of Neural Connections
Previous sections showed that StochasticNets can achieve good feature learning performance relative to conventional ConvNets while having significantly fewer neural connections. We now investigate the feature extraction speed of StochasticNets, relative to that of ConvNets, with respect to the number of neural connections formed in the constructed StochasticNets. To this end, the convolutions in the StochasticNets are implemented as a sparse matrix dot product, while the convolutions in the ConvNets are implemented as a dense matrix dot product. For a fair comparison, neither implementation makes use of hardware-specific optimizations such as Streaming SIMD Extensions (SSE), since many industrial embedded architectures used in applications such as embedded video surveillance do not support such optimizations.
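The sparse-vs-dense product at the heart of this comparison can be illustrated with a minimal sketch of ours (pure NumPy; a real implementation would use a compressed format such as CSR, and all names here are illustrative): the sparse path visits only the formed connections.

```python
import numpy as np

def sparse_dot(W, X):
    """Dot product that visits only W's nonzero entries, the way a
    sparse receptive-field representation skips connections that
    were never formed."""
    out = np.zeros((W.shape[0], X.shape[1]))
    rows, cols = np.nonzero(W)
    for r, c in zip(rows, cols):
        out[r] += W[r, c] * X[c]   # accumulate one connection's contribution
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 75))
W[rng.random(W.shape) > 0.25] = 0.0   # keep roughly 25% of connections
X = rng.standard_normal((75, 256))

dense_out = W @ X            # dense matrix dot product
sparse_out = sparse_dot(W, X)  # sparse matrix dot product, same result
```

Timing both paths at varying connectivity levels gives a simple way to explore the relative-speed trend described in this section.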
As with Section 5.3, the neural connection probability is varied in both the convolutional layers and the hidden layer to achieve the desired number of neural connections for testing its effect on the feature extraction speed of the formed StochasticNets. Figure 10 demonstrates the relative feature extraction time vs. the neural connectivity percentage, where relative time is defined as the time required during the feature extraction process relative to that of the ConvNet. It can be observed that the relative feature extraction time decreases as the number of neural connections decrease, which illustrates the potential for StochasticNets to enable more efficient feature extraction.
Interestingly, speed improvements are seen immediately, even at 90% connectivity, which may appear surprising given that sparse matrix representations often suffer from high computational overhead when representing dense matrices. However, in this case, the number of connections in a randomly realized receptive field is very small relative to the typical input image size. As a result, the overhead associated with using sparse representations is largely negligible compared to the speed improvement from the reduced computation gained by eliminating even one connection in the receptive field. Therefore, these results show that StochasticNets can have significant merit for enabling the feature representation power of deep neural networks to be leveraged in a large number of industrial embedded applications.
6 Conclusions
In this paper, we proposed the learning of efficient feature representations via StochasticNets, where sparsely-connected deep neural networks are constructed by way of stochastic connectivity between neurons. Such an approach facilitates more efficient neural utilization, resulting in reduced computational complexity for feature learning and extraction while preserving feature representation capabilities. The effectiveness of feature learning via StochasticNets was investigated by training StochasticNets on the CIFAR-10 dataset and using the learned features for classification of images in the SVHN and STL-10 image datasets. The StochasticNet features were then compared to features extracted using a conventional convolutional neural network (ConvNet). Experimental results demonstrate that classification using features extracted via StochasticNets provided better or comparable accuracy than features extracted via ConvNets, even with significantly fewer neural connections. Furthermore, StochasticNets, with fewer neural connections than the conventional ConvNet, can reduce the over-fitting associated with the conventional ConvNet. As a result, deep feature learning and extraction via StochasticNets can allow for more efficient feature extraction while facilitating better or similar accuracy performance.
This work was supported by the Natural Sciences and Engineering Research Council of Canada, Canada Research Chairs Program, and the Ontario Ministry of Research and Innovation. The authors also thank Nvidia for the GPU hardware used in this study through the Nvidia Hardware Grant Program.
-  A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates et al., “Deepspeech: Scaling up end-to-end speech recognition,” arXiv preprint arXiv:1412.5567, 2014.
-  G. E. Dahl, D. Yu, L. Deng, and A. Acero, “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition,” Audio, Speech, and Language Processing, IEEE Transactions on, vol. 20, no. 1, pp. 30–42, 2012.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Computer Vision–ECCV 2014. Springer, 2014, pp. 346–361.
-  Y. LeCun, F. J. Huang, and L. Bottou, “Learning methods for generic object recognition with invariance to pose and lighting,” in Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, vol. 2. IEEE, 2004, pp. II–97.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  R. Collobert and J. Weston, “A unified architecture for natural language processing: Deep neural networks with multitask learning,” in Proceedings of the 25th international conference on Machine learning. ACM, 2008, pp. 160–167.
-  Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin, “A neural probabilistic language model,” The Journal of Machine Learning Research, vol. 3, pp. 1137–1155, 2003.
-  M. D. Zeiler and R. Fergus, “Stochastic pooling for regularization of deep convolutional neural networks,” arXiv preprint arXiv:1301.3557, 2013.
-  L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus, “Regularization of neural networks using dropconnect,” in Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 1058–1066.
-  X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International conference on artificial intelligence and statistics, 2010, pp. 249–256.
-  X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in International Conference on Artificial Intelligence and Statistics, 2011, pp. 315–323.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” arXiv preprint arXiv:1502.01852, 2015.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” arXiv preprint arXiv:1409.4842, 2014.
-  X. Zhang, J. Zou, K. He, and J. Sun, “Accelerating very deep convolutional networks for classification and detection,” arXiv preprint arXiv:1505.06798, 2015.
-  C. Farabet, B. Martini, P. Akselrod, S. Talay, Y. LeCun, and E. Culurciello, “Hardware accelerated convolutional neural networks for synthetic vision systems,” in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on. IEEE, 2010.
-  J. Jin, V. Gokhale, A. Dundar, B. Krishnamurthy, B. Martini, and E. Culurciello, “An efficient implementation of deep convolutional neural networks on a mobile coprocessor,” in Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on. IEEE, 2014, pp. 133–136.
-  V. Gokhale, J. Jin, A. Dundar, B. Martini, and E. Culurciello, “A 240 g-ops/s mobile coprocessor for deep neural networks,” in Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. IEEE, 2014.
-  W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, “Compressing neural networks with the hashing trick,” arXiv preprint arXiv:1504.04788, 2015.
-  S. L. Hill, Y. Wang, I. Riachi, F. Schürmann, and H. Markram, “Statistical connectivity provides a sufficient foundation for specific functional connectivity in neocortical neural microcircuits,” Proceedings of the National Academy of Sciences, vol. 109, no. 42, pp. E2885–E2894, 2012.
-  M. J. Shafiee, P. Siva, and A. Wong, “Stochasticnet: Forming deep neural networks via stochastic connectivity,” arXiv preprint arXiv:1508.05463, 2015.
-  E. N. Gilbert, “Random graphs,” The Annals of Mathematical Statistics, pp. 1141–1144, 1959.
-  P. Erdős and A. Rényi, “On random graphs I,” Publ. Math. Debrecen, vol. 6, pp. 290–297, 1959.
-  A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, no. 2. Granada, Spain, 2011, p. 5.
-  A. Coates, A. Y. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in International conference on artificial intelligence and statistics, 2011, pp. 215–223.
-  B. Bollobás and F. R. Chung, Probabilistic combinatorics and its applications. American Mathematical Soc., 1991, vol. 44.
-  I. Kovalenko, “The structure of a random directed graph,” Theory of Probability and Mathematical Statistics, vol. 6, pp. 83–92, 1975.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.