Lightweight Neural Networks

by Altaf H. Khan et al.

Most of the weights in a Lightweight Neural Network have a value of zero, while the remaining ones are either +1 or -1. These universal approximators require approximately 1.1 bits/weight of storage, possess a quick forward pass, and achieve classification accuracies similar to conventional continuous-weight networks. Their training regimen focuses on error reduction initially, but later emphasizes discretization of weights. They ignore insignificant inputs, remove unnecessary weights, and drop unneeded hidden neurons. We have successfully tested them on the MNIST, credit card fraud, and credit card defaults data sets using networks having 2 to 16 hidden layers and up to 4.4 million weights.





1 Lightweight Neural Networks

Lightweight Neural Networks (LWN) are a subset of the conventional Continuous-Weight Networks (CWN). We call them lightweight because trained LWNs have weights that require approximately 1.1 bits/weight of storage, and their forward passes do not require floating-point multiplications. The key characteristic of LWNs is the sparsity of their weight matrices. Moreover, the non-zero weights of these matrices are limited to only two values, +1 and -1 (see Figure 1). These networks were first introduced in 1996 [1, 2] as Multiplier-Free Networks and used training heuristics that were proposed in 1994 [3]. Due to the recent interest in similar networks [4, 5, 6, 7], we present new results highlighting the sparsity of these networks, their natural inclination towards forming tight receptive fields, and their universal approximation capability.

Figure 1: Each CWN neuron is connected to all neurons in the preceding layer, whereas the LWN neurons have limited receptive fields. CWN weights have varying values (depicted by the thickness of the connections) and can be positive (solid lines) or negative (broken lines). The LWN weights have just two values, +1 and -1.

2 LWN and the Biological Neural Network

Here we would like to highlight those aspects of the LWN neurons that make them more similar in structure and function to the biological neurons as compared with the CWN neurons. Consider an axon of a source biological neuron connecting to the dendrite of the target neuron at a synapse. One of two types of neurotransmitter chemicals (either for excitatory or inhibitory receptors) is released from the axon’s side of the synapse whenever the source neuron is activated 

[8]. These chemicals then bind with receptors on the dendrite side of the synapse, resulting in an increase (in the case of an excitatory receptor) or a decrease (inhibitory receptor) in the electrical potential on the membrane of the target neuron. The electrical potential of that membrane is the sum of contributions due to the firings of all neurons that are connected through synapses to the target neuron. When the membrane's electrical potential reaches a threshold value, the target neuron fires. The highlight of the above narrative is the absence of multiplication operations and the presence of only two synaptic values, excitatory (similar to the +1 weight of LWN connections) and inhibitory (-1 weight).¹

¹ LWN neurons differ from biological neurons in having bipolar inputs as well as bipolar outputs. In the case of negative-valued inputs, the roles of excitatory and inhibitory receptors are reversed.

The connections-to-neurons ratio decreases with an increase in the number of neurons in biological systems [9, 10]. LWNs exhibit the same characteristic (Figure 2), but CWNs cannot. The size of the receptive field of a biological neuron is the fan-in of that neuron. Studies of processing in the visual cortex of animals show that the receptive-field size varies among different types of neurons [11]. The LWN is similar in structure in that each LWN neuron is a specialist (as compared with the generalist neurons of the CWN): an LWN neuron specializes in a particular subset of inputs. The receptive-field size for a conventional CWN is fixed for every neuron in every layer and is equal to the number of neurons in the preceding layer. For an LWN, it is much smaller and varies with the number of neurons in a layer.

LWN training is inspired by the synaptic pruning process in the biological brain [12]: start with plenty; prune the excess off later. This natural phenomenon prunes, for example, 25-50% of the synapses in humans as they approach adulthood, but does not reduce the number of neurons. For the LWN, the training process prunes the initial count of the synapses (i.e., weights) by about 80-95%, which, in many cases, results in the elimination of some of the neurons as well.

Figure 2: Ratio of the number of non-zero weights to the number of functioning neurons, plotted against the number of functioning neurons, for LWNs on the MNIST data set

3 Universal Approximation

Although their weights are restricted to the set {-1, 0, +1}, LWNs' thresholds have no such constraints. Their activation functions are bounded and odd, and are confined to the hyperbolic tangent in this discussion. With these ingredients, we can create a one-dimensional bump of arbitrarily small height at an arbitrary location on the x-axis using expressions of the form tanh(x - a) - tanh(x - b), where a < b. Summations of such bumps can be used to approximate arbitrary one-dimensional functions to any accuracy. We will now discuss the extension of this one-dimensional construction to denseness in higher dimensions. We start with a theorem due to Khan [13]:

Theorem 1 (Weightless Neural Network Existence Theorem)

Summations of unit-weighted, thresholded activation-function terms of the form given in [13] are dense in the space of continuous functions, where the activation function is bounded and odd.

The Weightless Neural Network is, on the one hand, simpler than the CWN in having unit-valued input- and output-layer weights, but on the other hand more complex in having two types of hidden neurons, the two types differing only by an irrational multiplication factor in their activation functions.

Allowing the input- and output-layer weights to assume the additional values of 0 and -1 does not harm the density result, but may result in networks that train quicker and are more compact. The choice of the irrational multiplication factor is arbitrary, and it can be chosen to be close to one. When simulated on a digital computer, that choice will become exactly one due to the limited precision of the computer [14], and the network will end up having only a single type of hidden neuron. This simplified configuration is a network with a solitary layer of identical hidden neurons. To form multi-hidden-layer networks, layers comprising neurons identical to those in the first hidden layer were employed. Moreover, for our simulations, we used output neurons that were identical to the hidden neurons.
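The one-dimensional bump construction can be sketched numerically. This is a minimal sketch, assuming a bump of the form tanh(x - a) - tanh(x - b) built from two unit-weight tanh units; the thresholds, grid, and tolerances below are illustrative choices, not values from the paper.

```python
import numpy as np

def bump(x, center, half_width):
    """A localized bump from two unit-weight tanh units: its height is
    2*tanh(half_width), so it shrinks as the two thresholds approach
    each other, and its location is set entirely by the thresholds."""
    return np.tanh(x - (center - half_width)) - np.tanh(x - (center + half_width))

x = np.linspace(-10.0, 10.0, 2001)
b = bump(x, center=0.0, half_width=0.5)

# The bump peaks at its center and vanishes away from it.
assert abs(b[1000] - 2 * np.tanh(0.5)) < 1e-12  # x[1000] == 0.0, the center
assert abs(b[0]) < 1e-6 and abs(b[-1]) < 1e-6   # negligible far from the bump
```

Summing shifted copies of such bumps then yields a piecewise approximation of an arbitrary one-dimensional function, which is the intuition behind the denseness result.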

4 Training Procedure

Several approaches have been proposed for training neural networks with discrete weights. Hwang and Sung [4] take a trained CWN, discretize its weights to ternary values, and retrain using backpropagation. They also restrict all signals to a depth of three bits. Mellempudi et al. [7] also start with a trained CWN and use a fine-grained ternarization method that exploits local correlations in the dynamic range of the parameters to minimize the impact of discretization on the accuracy of the network. Li et al. [5] use stochastic gradient descent to train ternary-weight networks. They use ternary weights during the forward and backward propagations, but not during the weight updates. Yin et al. [6] gradually discretize the weights to zero or powers of two by minimizing the Euclidean distance between the conventional weights and their closest discrete values during backpropagation.

We use the training heuristics proposed by Khan [3], augmented by a new step, with the aim of minimizing an error that combines the output error with a weight-discretization penalty g(w), where o and t are the calculated and desired output vectors, w the value of an individual weight, and g the weight-discretization function; g is differentiable, and its zeros are -1, 0, and +1. The main point of the original heuristics was the sequential application of error-reduction and weight-discretization steps to the network in every training epoch. Both were based on steepest descent, but weight discretization was supplemented by an additional mechanism to take care of weight-update paralysis. This paralysis was caused by the opposing weight updates calculated by the error-reduction and weight-discretization steps. That additional mechanism - the black-hole mechanism - forced nearly-discrete weights to discrete values. The rate of weight discretization and the radius of the black hole grew as the error in the output of the network shrank.

The black-hole mechanism worked well for the shallow networks comprising hundreds of weights discussed in [3], but failed to overcome the weight-update paralysis for the deeper LWNs having thousands or millions of weights discussed in this paper. For such networks, we propose an additional mechanism that comes into play only when almost all weights are discrete. At that stage, all weights are rounded to their nearest discrete value from the set {-1, 0, +1}. If that results in a network having acceptable test-data accuracy, training is concluded. Otherwise, the rounding step is rolled back and normal training resumes with the pre-rounding weights. The resulting error plots for a typical LWN are shown in Figure 3.
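The discretization machinery can be sketched as follows. The exact penalty used in the paper is not reproduced here; we assume a differentiable g whose zeros are -1, 0, and +1 (here g(w) = w²(w² − 1)²), and the discretization rate and black-hole radius are illustrative constants rather than the paper's adaptive schedules.

```python
import numpy as np

def g_grad(w):
    # Gradient of the assumed penalty g(w) = w^2 (w^2 - 1)^2:
    # d/dw = 2w(w^2 - 1)^2 + 4w^3(w^2 - 1) = 2w(w^2 - 1)(3w^2 - 1)
    return 2.0 * w * (w**2 - 1.0) * (3.0 * w**2 - 1.0)

def discretize_step(w, rate, black_hole_radius):
    """One weight-discretization step: steepest descent on the penalty,
    then snap any weight already inside the black-hole radius of its
    nearest discrete value onto that value exactly."""
    w = w - rate * g_grad(w)
    targets = np.round(np.clip(w, -1.0, 1.0))       # nearest of {-1, 0, +1}
    snap = np.abs(w - targets) < black_hole_radius  # inside the black hole
    return np.where(snap, targets, w)

rng = np.random.default_rng(0)
w = rng.uniform(-1.2, 1.2, size=1000)
for _ in range(200):
    w = discretize_step(w, rate=0.05, black_hole_radius=0.05)

# After enough steps almost every weight sits exactly on {-1, 0, +1}.
discrete = np.isin(w, (-1.0, 0.0, 1.0))
```

In the full heuristics this step alternates with an ordinary error-reduction step each epoch, which is what makes the black-hole snapping necessary in the first place.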

Figure 3: Plots, against training epochs, of the mean-squared error on the training data, the ratio of non-discrete weights to all weights, and the test-data misclassifications. The bottom plot is the same as the top one, but with a magnified y-axis

5 MNIST Data Set Simulations

MNIST is a well-known data set of images of handwritten digits [15, 16]. We used the version available through the TensorFlow machine learning library [17]. That version includes 55,000 images in the training set and 10,000 in the test set. All images have been normalized, centered, and transformed to a matrix of 32-bit floating-point numbers ranging from zero to one according to the gray-scale value of the associated pixel. The matrices have been flattened to 784-element vectors. Image labels are 10-element vectors with only a single element in each vector equal to one, and the rest having a value of zero. We used this data set as is and did not try to help the network configuration or the training process with any information about the original, 2-D nature of the image vector. This is because our interest is not to get the best possible result but to look at what the LWN learns with the simplest possible configuration. In this simplest configuration, all neurons in all layers are exactly the same and all layers are fully connected to the layer preceding them. The only factors that we varied were the number of hidden layers and the number of neurons in each of those layers.

A summary of our results for two-hidden-layer LWNs is shown in Table 1. For each configuration, the results for the two LWNs with the best test-data accuracies are shown. The test-data accuracies range from 93.8 to 97.0%. If we want to balance accuracy with size, we would choose the 784:256:128:10 LWN. It achieves an accuracy of 96.7% with 0.23 million weights, which compares favorably with 97.0% for a much larger LWN having 1,152 extra neurons and 1.1 million additional weights. This result (Table 2) is similar to the 96.95% reported for a 784:300:100:10 CWN and the 97.05% reported for a 784:500:150:10 CWN [18]. We do not have much to say about the epochs required to achieve reasonable accuracy, as that number is very much dependent on the training parameters. We did not make any effort to optimize the parameters for that purpose.

Figure 3 shows the mean-squared error on the training data, the ratio of misclassifications to the total number of test-data examples, and the ratio of non-discrete weights to the total number of weights, as functions of the training epochs. The figure clearly reflects the very deliberate slowness of the weight-discretization process as compared with the error-minimization process. Our intention was to find the continuous-weight error minimum first and then look for the discrete-weight minimum in its immediate vicinity. The lack of smoothness in the training-error and test-misclassification curves is due to the weight-discretization corrections. The smoothness of the non-discrete-weight curve indicates the slowness of discretization.

Configuration | Epochs | Test Data Accuracy (%) | Total Weights | Non-Zero Weights | Sparsity (%) | All-Zero Rows in the Weight Matrix of the First Hidden / Second Hidden / Output Layer | Maximum Fan-In of Output Neurons
784:30:14:10 2,005 93.8 24,080 3,756 84.4 124 0 0 3
3,670 93.8 3,953 83.6 124 0 0 2
784:32:16:10 2,958 94.6 25,760 4,520 82.5 111 0 4 4
6,625 94.8 5,281 79.5 118 0 3 3
784:64:16:10 1,713 95.4 51,360 6,322 87.7 66 0 5 2
1,129 95.1 6,276 87.8 60 0 1 3
784:64:32:10 2,324 95.5 52,544 6,216 88.2 65 1 10 2
615 95.2 5,762 89.0 53 0 12 2
784:128:16:10 500 95.4 102,560 10,620 89.6 27 0 1 3
407 95.3 10,931 89.3 17 1 4 2
784:128:32:10 739 95.7 104,768 10,415 90.1 20 0 21 2
1,356 95.6 10,883 89.6 30 0 16 1
784:128:64:10 698 95.4 109,184 10,032 90.8 19 0 45 2
472 95.6 10,326 90.5 16 0 45 2
784:256:16:10 403 96.0 204,960 18,625 90.9 1 5 2 2
683 95.9 17,512 91.5 0 6 3 4
784:256:32:10 452 96.1 209,216 17,715 91.5 4 0 15 1
672 96.1 18,317 91.2 3 2 17 2
784:256:64:10 493 96.5 217,728 18,336 91.6 3 0 44 3
840 96.4 18,619 91.4 1 0 42 2
784:256:128:10 797 96.7 234,752 16,996 92.8 2 1 109 2
488 96.3 17,475 92.6 3 0 99 3
784:512:16:10 540 96.4 409,760 29,867 92.7 0 13 3 2
432 96.2 28,143 93.1 0 32 5 2
784:512:32:10 425 96.3 418,112 29,474 93.0 0 16 17 2
547 96.1 29,701 92.9 0 15 14 2
784:512:64:10 579 96.2 434,816 30,012 93.1 0 2 48 2
549 96.0 31,954 92.7 0 6 44 2
784:512:128:10 707 96.5 468,224 31,288 93.3 0 0 80 2
856 96.3 33,067 92.9 0 2 102 2
784:512:256:10 742 96.7 535,040 32,556 93.9 0 0 207 2
783 96.1 34,556 93.5 0 0 213 2
784:1024:16:10 390 96.2 819,360 42,283 94.8 0 244 2 2
603 96.6 32,934 96.0 0 258 4 2
784:1024:32:10 480 96.3 835,904 46,399 94.4 0 96 18 2
619 96.2 43,574 94.8 0 96 18 2
784:1024:64:10 507 96.6 868,992 48,066 94.5 0 37 48 1
765 96.4 50,851 94.1 0 15 48 2
784:1024:128:10 799 96.1 935,168 57,133 93.9 0 2 97 3
727 96.3 54,955 94.1 0 2 96 2
784:1024:256:10 1,129 96.3 1,067,520 60,366 94.3 0 0 213 2
859 96.0 62,125 94.2 0 0 201 3
784:1024:512:10 1,520 96.2 1,332,224 73,255 94.5 8 0 419 2
1,532 97.0 76,459 94.3 10 0 446 2
Table 1: Two-hidden-layer LWN results for the MNIST data. The second row of each pair shares the configuration and total-weight count of the first.
Network | Test Data Accuracy (%) | Configuration / Total Weights When Training Started | Configuration / Non-Zero Weights at the End of Training
CWN [18] 96.95 784:300:100:10 266,200 n.a. n.a.
CWN [18] 97.05 784:500:150:10 468,500 n.a. n.a.
LWN 96.68 784:256:128:10 234,752 780:262:28:10 16,996
LWN 97.02 784:1024:512:10 1,332,224 774:1024:66:10 76,459
Table 2: Best performing LWNs and CWNs on the MNIST data set. The boldface numbers highlight changes from the starting configurations.

5.1 Zero-Valued Weights

The sparsity of the two-hidden-layer LWNs ranged from 79.5 to 96%, with the larger networks tending to have sparser weight matrices. The relationship between the dimensions of the two hidden layers and the number of non-zero weights is shown in Figure 4 on a log-log-log scale. The number of non-zero weights appears to depend mainly on the size of the first hidden layer.

Figure 4: Relationship between the number of non-zero weights and the numbers of neurons in the two hidden layers, on the MNIST data set

5.1.1 Ignored Inputs

The all-zero-rows column in Table 1 indicates that the LWN ignores some of the inputs when the number of neurons in the first hidden layer is inadequate. For the MNIST data set, the minimum layer size seems to be 256. This number is consistent with the minimum of 300 that was reported by LeCun et al. [18]. For the 784:32:16:10 LWNs, the ignored pixels were in the top five and bottom three rows, and the four left-most and three right-most columns, whereas for the larger 784:128:64:10 LWNs the ignored pixels were in the top five rows and the bottom row, and the three left-most and two right-most columns. This indicates that the normalized MNIST images are slightly off-center and that the LWNs are ignoring the nearly-white pixels around the digits due to their low information value.

5.1.2 Receptive Fields

Due to its fully-connected structure, the weights-to-neurons ratio for CWN increases with the number of neurons. The LWN restricts this rise in complexity by limiting each neuron to a narrow receptive field. Figure 2 shows the weights-to-neurons ratio as a function of the number of neurons in the network.

Variations in the average size of the receptive field of neurons in the first hidden layer, as a function of the size of that layer, are shown in Figure 5. Here, we excluded the smaller networks, those with 30-32 neurons in the hidden layer, from the linear-regression model. The LWN reduces the burden of processing on each neuron by shrinking its average receptive field as more and more neurons are added to the first hidden layer.

We emphasize here that the LWN has no way of knowing that the MNIST data set comprises 2-D images. Neither does the LWN architecture include any of the custom-designed, small-receptive-field convolution layers of the Convolutional Neural Network (CNN) à la LeCun et al. [16]. It does, however, tend to drop non-crucial inputs from the receptive fields of neurons by setting the corresponding weights to zero. This behavior holds for almost all neurons in all layers of the network. In contrast with the CNN, the size of the receptive field varies from neuron to neuron in an LWN, and it is not a parameter of the training process but is arrived at naturally as a consequence of it. Receptive fields are clearly present in biological systems for the processing of visual, aural, tactile, and possibly other stimuli; the design of the CNN, for example, is inspired by those systems. There may be other data-processing situations where the receptive fields are not that obvious. The LWN can automatically discover and leverage them even in those situations.

Figure 5: Relationship between the average size of the receptive fields and the number of neurons in the first hidden layer, on the MNIST data set. The regression model excludes the first four points.

5.1.3 Dropped Hidden Neurons

Some of the hidden neurons had all their weights set to zero as a consequence of the training regimen. The number of such dropped neurons, indicated by the number of all-zero rows in the weight matrices, is related to the mismatch between the numbers of neurons in consecutive layers. Table 1 makes this clear: the greater the mismatch, the greater the number of dropped hidden neurons.

Table 1 also shows that, in 70% of the cases, a hidden-layer neuron is connected to at most two output neurons. This indicates an efficient distribution of the image-recognition task among the hidden neurons. That is, the task is distributed in such a way that each neuron has a distinct and focused responsibility. Here the size of the hidden layer does not seem to be much of a factor. In the most extreme case, there are 446 dropped neurons in the second hidden layer. Of the remaining 66 neurons, 61 are connected to only a single output neuron, while the other five are connected to two of the output neurons. The average receptive-field size for the output neurons was 7.1.
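The quantities tabulated above can all be read off a ternary weight matrix directly. The conventions in this sketch are ours: W[i, j] is the weight from source unit j to target neuron i, so an all-zero column marks an ignored input and an all-zero row marks a dropped neuron; the toy matrix is made up for illustration.

```python
import numpy as np

def layer_stats(W):
    """Sparsity, ignored inputs, dropped neurons, and fan-in for one layer."""
    nonzero = np.count_nonzero(W)
    fan_in = np.count_nonzero(W, axis=1)  # per-neuron receptive-field size
    return {
        "sparsity_pct": 100.0 * (1.0 - nonzero / W.size),
        "ignored_inputs": int(np.sum(~W.any(axis=0))),   # all-zero columns
        "dropped_neurons": int(np.sum(~W.any(axis=1))),  # all-zero rows
        "max_fan_in": int(fan_in.max()),
        "mean_fan_in": float(fan_in.mean()),
    }

# Toy 4-neuron, 6-input ternary layer: input 5 is ignored, neuron 3 dropped.
W = np.array([
    [ 1,  0, -1,  0,  0,  0],
    [ 0,  1,  0,  0,  1,  0],
    [-1,  0,  0,  1,  0,  0],
    [ 0,  0,  0,  0,  0,  0],
])
stats = layer_stats(W)
```

For this toy matrix, `layer_stats` reports 75% sparsity, one ignored input, one dropped neuron, and a maximum fan-in of two.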

5.1.4 Weight Storage

Each LWN weight requires at most two bits for storage. However, using variable-length coding (e.g., Huffman coding [19]), a weight matrix that is 90% sparse will require only 1.1 bits/weight. This number approaches one bit as the sparsity increases. Table 3 shows a comparison of storage requirements for various weight depths, but excludes the effect of thresholds on storage requirements. 32- and 8-bit thresholds for the 784:1024:512:10 configuration consume 48 and 12 kB, respectively. We have yet to try LWN training with 8-bit thresholds, but we expect it to work, given the results reported by Vanhoucke et al. [20].

Sparsity | Storage Scheme | Bits/Weight | Required Storage (kB)
n.a. Conventional 32 5,204
n.a. Conventional 8 1,301
0.943 Conventional 2 325
0.943 Huffman coding 1.057 172
Table 3: Weight storage comparison for the MNIST data set for the 784:1024:512:10 configuration, which has 1,332,224 weights
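The 1.057 bits/weight figure above follows directly from the weight distribution. A sketch, assuming the natural Huffman code for a zero-dominated ternary alphabet (0 → "0", +1 → "10", -1 → "11"): the common symbol gets one bit and each rare symbol two, so the split of non-zeros between +1 and -1 does not affect the expected length.

```python
def huffman_bits_per_weight(sparsity):
    """Expected code length per weight: one bit for the zero symbol,
    two bits for either non-zero symbol, weighted by their frequencies."""
    return sparsity * 1 + (1.0 - sparsity) * 2

bits_943 = huffman_bits_per_weight(0.943)  # the sparsity in Table 3
bits_90 = huffman_bits_per_weight(0.90)    # the 90%-sparse case in the text
```

At 94.3% sparsity this gives 1.057 bits/weight, and at 90% sparsity 1.1 bits/weight, matching the figures above; as sparsity grows, the expected length approaches one bit.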

5.1.5 Forward Pass

LWNs possess an efficient forward pass because the multiplications in all neurons are trivial in nature, as the weights are restricted to the set {-1, 0, +1}. Moreover, due to the high sparsity of the weight matrices, even that trivial operation is unnecessary most of the time. The thresholds are 32-bit floating-point numbers, but they are never part of a multiplication operation. Therefore, the LWN does not require floating-point multiplication operations at all.
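A multiplication-free layer can be sketched as follows. With weights limited to {-1, 0, +1}, each neuron's pre-activation is just (sum of inputs with weight +1) minus (sum of inputs with weight -1) plus a threshold. The index-list representation below is an illustrative choice of ours, not the paper's implementation.

```python
import numpy as np

def lwn_layer(x, plus_idx, minus_idx, thresholds):
    """Forward pass of one LWN layer using only additions and
    subtractions. plus_idx/minus_idx hold, per neuron, the indices of
    its +1 and -1 connections; thresholds are the per-neuron biases."""
    z = np.array([x[p].sum() - x[m].sum() + t
                  for p, m, t in zip(plus_idx, minus_idx, thresholds)])
    return np.tanh(z)

x = np.array([0.5, -0.25, 1.0, 0.0])
plus_idx = [np.array([0, 2]), np.array([1])]     # neuron 0: +x0 +x2; neuron 1: +x1
minus_idx = [np.array([1]), np.array([], int)]   # neuron 0: -x1
thresholds = np.array([0.0, 0.25])
y = lwn_layer(x, plus_idx, minus_idx, thresholds)

# Equivalent dense computation with the same ternary weight matrix:
W = np.array([[1, -1, 1, 0], [0, 1, 0, 0]])
assert np.allclose(y, np.tanh(W @ x + thresholds))
```

The sparse index lists also make the high sparsity pay off directly: zero weights simply never appear in any index list, so they cost nothing at inference time.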

5.2 Training Deeper Networks

We trained several LWNs with 4-16 hidden layers of varying sizes to see if our training heuristics suffered from the vanishing and/or exploding gradient problem [21, 22]. The results are shown in Table 4. We did these training runs using the same training parameter values (e.g., learning rate, momentum, discretization rates) as those of the 2-hidden-layer LWNs. Our focus was not on obtaining the best-performing LWNs, but on seeing whether we were able to train these deeper LWNs at all. Surprisingly, these training experiments did not present symptoms of the vanishing or exploding gradient. This is probably due to the restriction on the magnitude of the weights.

Configuration | Hidden Layers | Epochs | Test Data Accuracy (%) | Total Weights | Non-Zero Weights | Sparsity (%)
784:1024:512:128:64:10 4 339 96.3 1,499,776 116,244 92.2
469 97.0 98,018 93.5
784:256:128:64:32:16:10 5 481 96.8 244,384 19,364 92.1
874 96.9 19,506 92.0
784:512:256:128:64:32:10 5 321 96.3 575,808 46,192 92.0
330 96.0 48,010 91.7
784:2048:1024:512:256:128:64:32:16:10 8 630 96.0 4,401,824 255,229 94.2
1,081 95.8 264,572 94.0
784:256:128:{5×64}:32:16:10 9 771 96.5 260,768 21,664 91.7
655 96.6 21,198 91.9
784:256:128:{12×64}:32:16:10 16 822 95.7 289,440 25,732 91.1
862 95.7 26,280 90.9
Table 4: MNIST training results for LWNs having 4-16 hidden layers

6 Credit Card Fraud and Defaults Simulations

After extensive simulations on the MNIST data, we looked at credit card fraud [23] and default [24] data to further validate the viability of the LWN. A key characteristic of these data sets is their class imbalance. This issue can be addressed by undersampling the majority class or oversampling the minority class. We tried oversampling in two different ways: simple repetition of the minority-class examples and the Synthetic Minority Oversampling Technique (SMOTE) [25]. For these simulations, we normalized the continuous features to zero mean and unit variance and clipped them to a fixed range. All binary features were mapped to ±1. We then split the data into training (70%) and testing (30%) sets using stratified sampling. We assigned separate outputs to each of the two classes. The higher of the two outputs was considered the winning class during testing. This methodology avoids the work required to find the most suitable output threshold (in the case of a single output neuron) at which the classification switches from one class to the other.
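Two of the data-handling choices above can be sketched with toy data: oversampling the minority class by simple repetition, and picking the winning class as the larger of two output-neuron values. The helper name and the toy arrays are ours, for illustration only.

```python
import numpy as np

def oversample_by_repetition(X, y, rng):
    """Repeat randomly chosen minority-class rows until the classes balance."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[counts.argmin()]
    deficit = counts.max() - counts.min()
    extra = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = np.arange(10, dtype=float).reshape(10, 1)
y = np.array([0] * 8 + [1] * 2)            # imbalanced: 8 vs 2
Xb, yb = oversample_by_repetition(X, y, rng)
assert (yb == 0).sum() == (yb == 1).sum()  # balanced after repetition

# With one output neuron per class, the winner is simply the larger
# output, so no classification threshold has to be tuned.
outputs = np.array([[0.9, 0.1], [0.2, 0.6]])
winners = outputs.argmax(axis=1)
```

SMOTE differs from plain repetition only in that it interpolates new minority examples between nearest neighbors rather than duplicating rows.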

The credit card fraud data set contained 29 features and consisted of 284,807 transactions, out of which 0.17% were fraudulent. Our best-performing network had an F1 score [26] of 0.83 with oversampled training data. Pozzolo et al. have reported G-mean scores of 0.944, 0.770, and 0.794 on this data set using Logit Boost, Random Forests, and Support Vector Machines, respectively [23]. It is difficult to compare these results with our G-mean score of 0.900 due to, among other things, differences in data pre-processing, training/testing data splits, and classification thresholds.

Sampling Method | Configuration | Total Weights | Non-Zero Weights | Sparsity (%) | Accuracy (%) | Precision | Recall | F1 Score
Undersampled 29:256:128:64:32:2 50,496 1,710 96.6 97.7 0.058 0.804 0.109
Oversampled 6,389 87.3 99.9 0.857 0.811 0.833
SMOTE 7,655 84.8 99.9 0.829 0.818 0.823
Table 5: Test data results for the credit card fraud data.
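The scores reported above can be computed from raw confusion counts; the counts in this sketch are made up for illustration. F1 is the harmonic mean of precision and recall, and G-mean is the geometric mean of the recalls on the two classes, which is why it is informative under class imbalance.

```python
import math

def scores(tp, fp, fn, tn):
    """Precision, recall, F1, and G-mean from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # recall on the positive class
    f1 = 2 * precision * recall / (precision + recall)
    g_mean = math.sqrt(recall * (tn / (tn + fp)))  # sqrt of both class recalls
    return precision, recall, f1, g_mean

p, r, f1, g = scores(tp=80, fp=20, fn=20, tn=880)
```

With these toy counts, precision and recall are both 0.8, so F1 is 0.8 as well; Table 5 shows how a highly imbalanced set can pair high accuracy with a very low F1 score, which is why accuracy alone is not reported.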

The credit card defaults data set contained 23 features and consisted of 30,000 cases, out of which 22% were defaults. We replaced categorical features with separate features for each category. The three sampling techniques, in this case, resulted in very similar F1 scores (Table 6) due to the relatively lower imbalance between the two classes. Yeh and Lien [24] have reported 83% accuracy using a neural network classifier on this data set. It is not possible to compare this result with our result of 68.5% due to the lack of adequate information about their training and testing processes.

Sampling Method | Configuration | Total Weights | Non-Zero Weights | Sparsity (%) | Accuracy (%) | Precision | Recall | F1 Score
Undersampled 32:512:256:128:64:32:16:8:4:2 191,144 22,528 88.2 68.5 0.365 0.571 0.445
Oversampled 37,648 80.3 68.4 0.368 0.596 0.455
SMOTE 46,449 75.7 67.5 0.359 0.595 0.448
Table 6: Test data results for the credit card default data.

7 Conclusions

LWNs do not have weights in the conventional sense, just excitatory and inhibitory connections of unit strength. They can approximate any continuous function to any accuracy. They require modest storage and have a multiplication-free forward pass, rendering them suitable for deployment on inexpensive hardware. Their sparse weight matrices loosen the coupling among the layers, making the LWN more tolerant of the failure of individual neurons. In a CWN, the learned information is distributed over all weights. In an LWN, the picture is less fuzzy, and the localized nature of computation is much more obvious due to the presence of a large number of zero-valued weights. For image processing, a CWN does not scale well with increases in the resolution of the images due to its fully-connected structure. An LWN, on the other hand, scales much better due to the sparsity of its weight matrices. The small magnitude of LWN weights should result in smooth mappings [27], and the small number of non-zero weights should result in low generalization error [28].

The LWN learning process automatically drops insignificant inputs, unnecessary weights, and unneeded hidden neurons. That process is relatively complex and slow, but it results in networks that are almost as accurate as CWNs while having much lower information complexity. We conjecture that those accuracies can be matched by using bigger networks. This can be understood by considering the limited number of angles at which an LWN neuron can draw classification boundaries, as opposed to a CWN neuron, which can draw those boundaries at an arbitrary angle. A superposition of two or more LWN neurons can, however, approximate those arbitrary boundaries, though the added complexity may not justify the minuscule improvement in approximation accuracy.

At this time, LWN neurons have unrestricted thresholds. We are exploring whether the magnitude of these thresholds can be restricted to an arbitrary value [29], e.g., 1, which may lead to more efficient storage. In its current form, the LWN training process applies the same training operation to every weight irrespective of the value of that weight. We are trying to see if the training process can be made more efficient by sometimes ignoring those weights that have not deviated from a zero value for several epochs.


  • [1] A. H. Khan, Feedforward neural networks with constrained weights. PhD thesis, University of Warwick, 1996.
  • [2] A. H. Khan, “Multiplier-free feedforward networks,” in International Joint Conference on Neural Networks (IJCNN), vol. 3, pp. 2698–2703, IEEE, 2002.
  • [3] A. H. Khan and E. L. Hines, “Integer-weight neural nets,” Electronics Letters, vol. 30, no. 15, pp. 1237–1238, 1994.
  • [4] K. Hwang and W. Sung, “Fixed-point feedforward deep neural network design using weights +1, 0, and -1,” in IEEE Workshop on Signal Processing Systems (SiPS), pp. 1–6, IEEE, 2014.
  • [5] F. Li, B. Zhang, and B. Liu, “Ternary weight networks,” in The 1st International Workshop on EMDNN, NIPS, 2016.
  • [6] P. Yin, S. Zhang, J. Xin, and Y. Qi, “Quantization and training of low bit-width convolutional neural networks for object detection,” ArXiv Preprint arXiv:1612.06052v2, 2017.
  • [7] N. Mellempudi, A. Kundu, D. Mudigere, D. Das, B. Kaul, and P. Dubey, “Ternary neural networks with fine-grained quantization,” ArXiv Preprint arXiv:1705.01462, 2017.
  • [8] M. F. Bear, B. W. Connors, and M. A. Paradiso, Neuroscience, vol. 2. Lippincott Williams & Wilkins, 2007.
  • [9] J. L. Ringo, “Neuronal interconnection as a function of brain size,” Brain, Behavior and Evolution, vol. 38, no. 1, pp. 1–6, 1991.
  • [10] D. K. Cullen, M. E. Gilroy, H. R. Irons, and M. C. LaPlaca, “Synapse-to-neuron ratio is inversely related to neuronal density in mature neuronal cultures,” Brain Research, vol. 1359, pp. 44–55, 2010.
  • [11] D. H. Hubel and T. N. Wiesel, “Receptive fields and functional architecture of monkey striate cortex,” The Journal of Physiology, vol. 195, no. 1, pp. 215–243, 1968.
  • [12] G. Chechik, I. Meilijson, and E. Ruppin, “Synaptic pruning in development: A computational account,” Neural Computation, vol. 10, no. 7, pp. 1759–1777, 1998.
  • [13] A. H. Khan, “Weightless neural net is a universal approximator,” 2017. Manuscript submitted for publication.
  • [14] J. Wray and G. G. Green, “Neural networks, approximation theory, and finite precision computation,” Neural Networks, vol. 8, no. 1, pp. 31–37, 1995.
  • [15] Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” AT&T Labs [Online], vol. 2, 2010.
  • [16] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems, pp. 396–404, 1990.
  • [17] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” ArXiv Preprint arXiv:1603.04467, 2016.
  • [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [19] D. A. Huffman, “A method for the construction of minimum-redundancy codes,” Proceedings of the IRE, vol. 40, no. 9, pp. 1098–1101, 1952.
  • [20] V. Vanhoucke, A. Senior, and M. Z. Mao, “Improving the speed of neural networks on CPUs,” in Proceedings of the Deep Learning and Unsupervised Feature Learning NIPS Workshop, vol. 1, p. 4, 2011.
  • [21] S. Hochreiter, “Untersuchungen zu dynamischen neuronalen netzen,” Diploma, Technische Universität München, vol. 91, 1991.
  • [22] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85–117, 2015.
  • [23] A. Dal Pozzolo, O. Caelen, R. A. Johnson, and G. Bontempi, “Calibrating probability with undersampling for unbalanced classification,” in IEEE Symposium Series on Computational Intelligence, pp. 159–166, IEEE, 2015.
  • [24] I.-C. Yeh and C.-h. Lien, “The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients,” Expert Systems with Applications, vol. 36, no. 2, pp. 2473–2480, 2009.
  • [25] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
  • [26] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition Letters, vol. 27, no. 8, pp. 861–874, 2006.
  • [27] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
  • [28] J. E. Moody, “The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems,” in Advances in Neural Information Processing Systems, pp. 847–854, 1992.
  • [29] M. Stinchcombe and H. White, “Approximating and learning unknown mappings using multilayer feedforward networks with bounded weights,” in International Joint Conference on Neural Networks (IJCNN), pp. 7–16, IEEE, 1990.