Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?

06/20/2018 · by Shilin Zhu, et al. · Harvard University, University of California San Diego

Binary neural networks (BNN) have been studied extensively since they run dramatically faster at lower memory and power consumption than floating-point networks, thanks to the efficiency of bit operations. However, contemporary BNNs whose weights and activations are both single bits suffer from severe accuracy degradation. To understand why, we investigate the representation ability, speed and bias/variance of BNNs through extensive experiments. We conclude that the error of BNNs is predominantly caused by the intrinsic instability (training time) and non-robustness (train & test time). Inspired by this investigation, we propose the Binary Ensemble Neural Network (BENN) which leverages ensemble methods to improve the performance of BNNs with limited efficiency cost. While ensemble techniques have been broadly believed to be only marginally helpful for strong classifiers such as deep neural networks, our analyses and experiments show that they are naturally a perfect fit to boost BNNs. We find that our BENN, which is faster and much more robust than state-of-the-art binary networks, can even surpass the accuracy of the full-precision floating number network with the same architecture.


1 Introduction

Deep Neural Networks (DNNs) have had great impact on broad disciplines in academia and industry [57, 38]. Recently, the deployment of DNNs has been shifting from high-end cloud servers to low-end devices such as mobile phones and embedded chips, serving the general public with many real-time applications such as drones, miniature robots, and augmented reality. Unfortunately, these devices typically have limited computing power and memory space, and thus cannot afford DNNs for important tasks like object recognition, which involve significant matrix computation and memory usage.

Binary Neural Network (BNN) is among the most promising techniques to meet the desired computation and memory requirements. BNNs [31] are deep neural networks whose weights and activations have only two possible values (e.g., -1 and +1) and can each be represented by a single bit. Beyond the obvious advantage of saving storage and memory space, the binarized architecture admits only bitwise operations, which can be computed extremely fast by digital logic units [20] such as the arithmetic-logic unit (ALU) with much less power consumption than the floating-point unit (FPU).
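To make the benefit of bit operations concrete, the following minimal sketch (our illustration, not the authors' code) computes the dot product of two ±1 vectors packed into integer bitmasks using only XOR and popcount; this is the primitive that replaces floating-point multiply-accumulate inside a binary convolution:

```python
def pack_bits(vec):
    """Pack a +/-1 vector into an integer bitmask (bit i set means vec[i] == +1)."""
    mask = 0
    for i, v in enumerate(vec):
        if v > 0:
            mask |= 1 << i
    return mask

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors of length n via XNOR + popcount."""
    diff = (a_bits ^ b_bits) & ((1 << n) - 1)  # bits where the signs disagree
    return n - 2 * bin(diff).count("1")        # (#matches) - (#mismatches)

a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == sum(x * y for x, y in zip(a, b))
```

On real hardware the same idea processes 64 sign pairs per XNOR/popcount instruction, which is where the reported speed-ups come from.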

Figure 1: Comparison between traditional floating-number DNN, BNN and our proposed BENN on image recognition task (W: weights, A: activations). The inference speed of BENN can be further boosted on FPGAs [63].

Despite the significant gains in speed and storage, however, current BNNs suffer from notable accuracy degradation when applied to challenging tasks such as ImageNet classification. To close the gap, previous research on BNNs has focused on designing more effective optimization algorithms to find better local minima of the quantized weights. However, the task is highly non-trivial, since gradient-based optimization, which is effective for training DNNs, becomes tricky to apply.

Instead of tweaking network optimizers, we investigate BNNs systematically in terms of representation power, speed, bias, variance, stability, and robustness. We find that BNNs suffer from severe intrinsic instability and non-robustness regardless of the network parameter values. This observation implies that the performance degradation of BNNs is not likely to be resolved by improving optimization techniques alone; instead, it is necessary to cure the BNN function itself, particularly to reduce its prediction variance and improve its robustness to noise.

Inspired by this analysis, we propose the Binary Ensemble Neural Network (BENN). Though the basic idea is as straightforward as aggregating multiple BNNs by boosting or bagging, we show that the statistical properties of the ensembled classifier become much nicer: not only are the bias and variance reduced, but more importantly, BENN's robustness to noise at test time is significantly improved. All the experiments suggest that BNNs and ensemble methods are a natural fit. Using architectures of the same connectivity (a compact Network in Network [42]), we find that boosting only BNNs can even surpass the baseline DNN with real-valued weights in the best case. In addition, our initial exploration applying BENN to ImageNet recognition with AlexNet [38] and ResNet [27] also shows a large gain. These are by far the fastest, most accurate, and most robust results achieved by binarized networks (Fig. 1).

To the best of our knowledge, this is the first work to bridge BNNs with ensemble methods. Unlike traditional BNN improvements that have computational complexity O(k²) by using k bits per weight [65] or k binary bases in total [43], the complexity of BENN with K ensembles is reduced to O(K). Compared with [65, 43], BENN also enjoys better parallelizability of its bitwise operations: with trivial parallelization, the complexity can be reduced to O(1). We believe that BENN can shed light on more research along this idea to achieve extremely fast yet robust computation by networks.

2 Related Work

Quantized and binary neural networks: People have found that full-precision parameters and activations are not necessary to preserve the accuracy of a neural network, which can instead use k-bit fixed-point numbers, as shown by [19, 23, 61, 8, 40, 41, 48, 56, 49]. The first approach is to use low-bitwidth numbers to approximate real ones, known as quantized neural networks (QNNs) [32]; [66, 64] also proposed ternary neural networks. Although recent advances such as [65] achieve competitive performance compared with full-precision models, they cannot fully exploit the speed-up because parallelized bitwise operations are not possible with a bitwidth larger than one. [31] is the recent work that binarizes all the weights and activations, which was the birth of BNN, and demonstrated the power of BNNs in terms of speed, memory use, and power consumption. However, recent works such as [58, 11, 21, 10] also reveal strong accuracy degradation and a training-time mismatch issue when BNNs are applied to complicated tasks such as ImageNet [12] recognition, especially when the activations are binarized. Although works like [43, 50, 13] offer reasonable solutions to approximate a full-precision neural network, they require much more computation and hyperparameter tuning to implement compared with BENN. Since they use either k-bitwidth quantization or k binary bases, their computational complexity cannot drop below O(k²) when a 1-bit operation costs O(1), while BENN achieves O(K) and even O(1) when multiple threads are naturally parallelized. Also, much of the current literature tries to minimize the distance between binary and real-valued parameters, but empirical assumptions such as a Gaussian parameter distribution are usually required to obtain a prior for each BNN, or the signs must simply be kept fixed as suggested by [43]; otherwise the non-convex optimization is hard to deal with. By contrast, BENN is a general framework that achieves the same goal and has strong potential to work even better than full-precision networks, without involving more hyperparameters than a single BNN.

Ensemble techniques: Instead of relying on a single powerful classifier, the ensemble strategy improves the accuracy of a given learning algorithm by combining multiple weak classifiers, as summarized by [6, 9, 47]. The two most common strategies are bagging [5] and boosting [51, 17, 53, 26], which were proposed many years ago and have a strong statistical foundation. They have roots in the theoretical PAC model [59], which was the first to pose the question of whether weak learners can be ensembled into a strong learner. Bagging predictors are proven to reduce variance, while boosting can reduce both bias and variance, and their effectiveness has been supported by many theoretical analyses. Traditionally, ensembles were used with decision trees, decision stumps, and random forests, and achieved great success thanks to their desirable statistical properties. Recently, people have used ensembles to increase the generalization ability of deep CNNs [24], advocated boosting on CNNs with architecture selection [45], and proposed boosting over features [30]. Ensemble techniques have received less attention lately because a neural network is no longer a weak classifier, so ensembling it can unnecessarily increase the model complexity. However, when applied to weak binary neural networks, we find that ensembling generates new insights and hopes, and BENN is a natural outcome of this combination. In this work, we build BENN on top of a variant of bagging, AdaBoost [15, 52], and LogitBoost [17], and it can be extended to many more variants of traditional ensemble algorithms. We hope this work can revive these classic approaches and bring them back into modern neural networks.

3 Why Making BNNs Work Well is Challenging?

Despite the speed and space advantages of BNNs, their performance is still far inferior to their real-valued counterparts. There are at least two possible reasons: first, the functions representable by BNNs may have some inherent flaws; second, current optimization algorithms may still be unable to find a good minimum. While most researchers have been working on developing better optimization methods, we suspect that BNNs have fundamental flaws. The following investigation reveals the limitations of BNN-representable functions experimentally.

Because all weights and activations are binary, an obvious fact is that BNNs can only represent a subset of discrete functions, being strictly weaker than real networks that are universal continuous function approximators [29]. What are not so obvious are two serious limitations of BNNs: the robustness issue w.r.t. input perturbations, and the stability issue w.r.t. network parameters. Classical learning theory tells us that both robustness and stability are closely related to the generalization error of a model [62, 4]. A more detailed theoretical analysis on BNN’s problems is attached in supplementary material.

Robustness Issue: In practice, we observe more severe overfitting effects for BNNs than for real-valued networks. Robustness is defined as the property that if a testing population is "similar" to the training population, then the testing error is close to the training error [62]. To verify this point, we experiment in a random network setting and a trained network setting.

Random Network Setting. We compute the following quantity to compare a 32-bit real-valued DNN, BNN, QNN, and our BENN model (Sec. 4) on the Network-In-Network (NIN) architecture:

$$\mathbb{E}_{x,\Delta x}\big[\,\lVert f(x+\Delta x;\, W) - f(x;\, W)\rVert\,\big] \tag{1}$$

where f denotes the network function and W represents the network weights.

We randomly sample real-valued weights, as suggested in the literature, to get a DNN with weights W, and binarize it to get a BNN with binary weights sign(W). We also independently sample and binarize weights to generate multiple BNNs with the same architecture, which we aggregate to simulate the BENN. A QNN is obtained by quantizing the DNN to k-bit weights (W) and activations (A). We normalize each input image in CIFAR-10 to a fixed range.

Then we inject an input perturbation on each example using Gaussian noise with different variances σ², run a forward pass through each network, and measure the expected norm of the change in the output distribution. This quantity, averaged over 1000 sampling rounds for DNN, BNN, QNN, and BENN, is shown in Fig. 2 (left) for a perturbation variance of 0.01.
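A minimal sketch of this robustness probe (our code; the model objects and the L2 norm on the softmax output are illustrative assumptions consistent with the description above):

```python
import torch

@torch.no_grad()
def output_variation(model, images, sigma=0.1, n_rounds=1000):
    """Monte Carlo estimate of E[ ||f(x + dx) - f(x)|| ] for Gaussian input noise dx,
    where f is the softmax output of `model` and sigma**2 is the perturbation variance."""
    model.eval()
    base = torch.softmax(model(images), dim=1)
    total = 0.0
    for _ in range(n_rounds):
        noise = sigma * torch.randn_like(images)
        pert = torch.softmax(model(images + noise), dim=1)
        total += (pert - base).norm(dim=1).mean().item()
    return total / n_rounds
```

Running the same probe on the real-valued DNN, a binarized copy, and a BENN built from several binarized copies gives the comparison reported below (sigma = 0.1 corresponds to the perturbation variance of 0.01 used in Fig. 2).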

The results show that BNNs always have larger output variation, suggesting that they are more susceptible to input perturbation, and that a BNN does worse than a QNN with more bits. We also observe that adding bits to the activations improves a BNN's robustness significantly, while adding bits to the weights yields only marginal improvement (Fig. 2 (left)). Therefore, activation binarization appears to be the bottleneck.

Trained Network Setting. To further consolidate this finding, we also train a real-valued DNN and a BNN using XNOR-Net [50], rather than sampling the weights directly, and we include our BENN in the comparison. We then apply the same Gaussian input perturbation Δx, run a forward pass, and calculate the change in classification error on CIFAR-10 as:

$$\mathbb{E}_{x,\Delta x}\big[\,\lvert \mathrm{Err}(x+\Delta x) - \mathrm{Err}(x)\rvert\,\big] \tag{2}$$

The results in Fig. 2 (middle) indicate that BNNs are still more sensitive to noise even when well optimized. Although it has been shown that the weights of a BNN retain nice statistical properties [1], the conclusion can change dramatically when both weights and activations are binarized and the input is perturbed.

Stability Issue:

BNNs are known to be hard to optimize due to problems such as gradient mismatch and the non-smoothness of the activation function. While [40] has shown that stochastic rounding converges to within O(Δ) accuracy of the minimizer in expectation, where Δ denotes the quantization resolution and the error surface is assumed convex, the community has not fully understood the non-convex error surface of BNNs and how it interacts with different optimizers such as SGD or ADAM [37].

To compare the stability of different networks (their sensitivity to the network parameters during optimization), we measure the accuracy fluctuation over a large number of training steps. Fig. 2 (right) shows the accuracy oscillation in the last 20 training steps after training the BNN and QNNs for 300 epochs; the results show that weights and activations must both be at least 4-bit to stabilize the network.

One explanation of this instability is the non-smoothness of the function output w.r.t. the binary network parameters. Note that, being the output of the activation function of the previous layer, the input to each layer of a BNN is binarized. In other words, the function is non-smooth not only w.r.t. its input but also w.r.t. its learned parameters. As a comparison, BENN with 5 and 32 ensembles (denoted BENN-05/32 in Fig. 2) empirically achieves remarkable stability.

Figure 2: Left: BNN has large output variation (robustness issue). Middle: BNN has large variation of prediction accuracy (robustness issue). Right: BNN has large test accuracy variation during training (instability issue). BENN can cure these problems. Here, the perturbation variance is 0.01. (*QNN-W1A2 denotes QNN with 1-bit weights and 2-bit activations and so do others.)

4 Binary Ensemble Neural Network

In this section, we describe BENN with bagging and boosting strategies, respectively. In all experiments, we adopt the widely used deterministic binarization x_b = sign(x) (i.e., +1 if x ≥ 0 and −1 otherwise) for network weights and activations, which is preferred for leveraging hardware acceleration. However, back-propagation becomes challenging since this derivative is zero almost everywhere except at the stepping point. In this work, we borrow the common strategy called the "straight-through estimator" (STE) [28] during back-propagation, defined as ∂c/∂x = ∂c/∂x_b · 1_{|x|≤1}.
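A minimal PyTorch sketch of this binarization with the straight-through estimator (ours; real BNN implementations such as [50] additionally keep per-filter scaling factors):

```python
import torch

class SignSTE(torch.autograd.Function):
    """Deterministic sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through unchanged where |x| <= 1, zero it elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

binarize = SignSTE.apply  # usage: w_b = binarize(w_real); a_b = binarize(pre_activation)
```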

4.1 BENN-Bagging

The key idea of bagging is to average weak classifiers trained on i.i.d. samples of the training set. To train each BNN classifier, we sample examples independently with replacement from the training set. We do this K times to get K BNNs. Sampling with replacement ensures that each BNN sees roughly 63% (i.e., about 1 − 1/e) of the entire training set.

At test time, we aggregate the opinions of these K classifiers to decide among the candidate classes. We compare two ways of aggregating the outputs: choosing the label that most BNNs agree on (hard decision), or choosing the best label after aggregating their softmax probabilities (soft decision).
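A minimal sketch of the two aggregation rules (our code; prob_list holds one softmax output array per BNN, each of shape (num_samples, num_classes)):

```python
import numpy as np

def bagging_predict(prob_list, soft=True):
    """Aggregate K BNN outputs by soft (probability averaging) or hard (majority) voting."""
    probs = np.stack(prob_list)                      # shape (K, N, C)
    if soft:
        return probs.mean(axis=0).argmax(axis=1)     # soft decision
    votes = probs.argmax(axis=2)                     # shape (K, N): each BNN's hard label
    num_classes = probs.shape[2]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=num_classes), 0, votes)  # shape (C, N)
    return counts.argmax(axis=0)                     # majority vote (ties -> lowest index)
```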

The main advantage brought by bagging is to reduce the variance of a single classifier. This is known to be extremely effective for deep decision trees which suffer from high variance, but only marginally helpful to boost the performance of neural networks, since networks are generally quite stable. Interestingly, though less helpful to real-valued networks, bagging is effective to improve BNNs since the instability issue is severe for BNNs due to gradient mismatch and strong discretization noise as stated in Sec. 3.

4.2 BENN-Boosting

Boosting is another important tool to ensemble classifiers. Instead of just aggregating the predictions from multiple independently trained BNNs, boosting combines multiple weak classifiers in a sequential manner and can be viewed as a stage-wise gradient descent method optimized in the function space. Boosting is able to reduce both bias and variance of individual classifiers.

There are many variants of boosting algorithms, and we choose AdaBoost [15] for its popularity. Suppose classifier k has hypothesis h_k, weight α_k, and output distribution p_k; we denote the aggregated classifier as H_K(x) = Σ_{k=1}^{K} α_k h_k(x) and its aggregated output distribution as P_K. AdaBoost then minimizes the following exponential loss:

$$\sum_{i} \exp\big(-y_i\, H_K(x_i)\big),$$

where y_i is the label and i denotes the index of the training example.

Reweighting Principle

The key idea of boosting is to make the current classifier pay more attention to the samples misclassified by previous classifiers. Reweighting is the most common way of budgeting this attention based on historical results. There are essentially two ways to accomplish this goal:

  • Reweighting the sampling probabilities: Initially each training example is weighted uniformly, so every sample has an equal chance of being picked. After each round, we reweight the sampling probabilities according to the classification confidence of the classifiers so far (see the sketch after this list).

  • Reweighting the loss/gradient: We may also incorporate the example weights into the gradient, so that a BNN updates its parameters with a larger step size on misclassified examples and vice versa, e.g., by scaling the learning rate for each example by its weight. However, we observe that this approach is less effective for BNNs, and we conjecture that it exaggerates the gradient mismatch problem.
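A minimal sketch of reweighting on sampling probabilities (our code, written in the style of multi-class AdaBoost/SAMME; the paper's exact update may differ in details):

```python
import numpy as np

def resample_indices(weights, rng):
    """Draw a bootstrap sample of the training set according to the example weights."""
    n = len(weights)
    return rng.choice(n, size=n, replace=True, p=weights / weights.sum())

def update_weights(weights, y_true, y_pred, num_classes):
    """Upweight misclassified examples; returns the classifier weight alpha
    and the renormalized per-example weights for the next boosting round."""
    wrong = (y_true != y_pred).astype(float)
    err = np.clip((weights * wrong).sum() / weights.sum(), 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(num_classes - 1)
    new_weights = weights * np.exp(alpha * wrong)
    return alpha, new_weights / new_weights.sum()
```

Each boosting round trains a fresh BNN on resample_indices(weights, rng), evaluates it on the full training set, and calls update_weights before the next round.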

4.3 Test-Time Complexity

A 1-bit BNN with the same connectivity as the original full-precision 32-bit DNN saves about 32x memory. In practice, a BNN can achieve a 58x speed-up on the current generation of 64-bit CPUs [50] and may be further accelerated with specialized hardware such as FPGAs. Some existing works binarize only the weights and leave the activations full-precision, which in practice yields only a 2x speed-up. For BENN with K ensembles, each BNN's inference is independent, so the total memory saving is (32/K)x. With boosting, we can further compress each BNN to save additional computation and memory. Besides, existing approaches have O(k²) complexity with k-bit QNNs [65] or k binary bases [43], because they cannot avoid the bit-collection operation needed to assemble a number, although their fixed-point computation is much more efficient than floating-point computation. If O(1) is the time complexity of a boolean operation, then BENN reduces this quadratic complexity to linear, i.e., O(K) with K ensembles, while maintaining the accuracy and stability reported above. Inference can even run in O(1) for BENN if K parallel threads are available. A complete comparison is shown in Table 1.
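As a concrete instance (the K = 5 numbers are our illustration, not the paper's): a 5-bit QNN or a 5-basis binary decomposition must combine every bit/basis pair, while BENN runs K independent 1-bit networks:

```latex
% Rough count of 1-bit convolution passes per layer for K = 5 (illustrative)
\underbrace{K^2 = 25}_{\text{5-bit QNN / 5 binary bases}}
\quad \text{vs.} \quad
\underbrace{K = 5}_{\text{BENN, sequential}}
\quad \text{vs.} \quad
\underbrace{1}_{\text{BENN, 5 parallel threads}}
```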

4.4 Stability Analysis

Given a full-precision real-valued DNN with parameters W, a BNN with binarized parameters W_b, an input vector x (after batch normalization) with perturbation Δx, and a BENN with K ensembles, we want to compare their stability and robustness w.r.t. the network parameters and the input perturbation. Here we analyze the variance of the output change before and after perturbation, which echoes Eq. 1 in Sec. 3. The output change has zero mean, so its variance reflects the distribution of the output variation: a larger variance means a larger variation of the output w.r.t. the input perturbation.

Assume the quantities below are the pre-activation outputs of a single neuron in a one-layer network. The output variation of the real-valued DNN is ΔY_DNN = W · (x + Δx) − W · x = W · Δx, whose distribution has variance n σ_W² σ², where n denotes the number of input connections of this neuron, σ_W² and σ² denote the variances of the weights and of the perturbation, and · denotes the inner product. Modern non-linear activation functions such as ReLU do not change the ordering of these variances, so we omit them to keep the analysis simple.

For a BNN with both weights and activations binarized, the output variation becomes ΔY_BNN = sign(W) · (sign(x + Δx) − sign(x)), with variance n B, where B denotes the per-connection variance of the binarized activation change (Sec. 9 in the supplementary material). For BENN-Bagging with K ensembles the variance becomes n B / K, since bagging effectively reduces variance. BENN-Boosting can reduce both bias and variance at the same time; however, the bias/variance analysis of boosting is much more difficult and is still debated in the literature [7, 17]. With these Gaussian assumptions and some numerical experiments (detailed analysis and theorems can be found in the supplementary material), we can verify the large stability gain of BENN over BNN relative to the floating-point DNN. For stability, the same principle applies with the weights perturbed (ΔW) instead of the input perturbation (Δx) used in the robustness analysis above.
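A tiny numerical check of the variance argument under the Gaussian assumptions of this subsection, using the real-valued case with independent ensemble members for simplicity (our sketch; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 1024, 32, 0.1                       # input connections, ensembles, perturbation std
dx = sigma * rng.standard_normal((100_000, n))    # input perturbations

single = dx @ rng.standard_normal(n)              # output change of one neuron, Var ~ n * sigma^2
ensemble = np.mean([dx @ rng.standard_normal(n) for _ in range(K)], axis=0)
print(single.var(), ensemble.var())               # the averaged change has roughly 1/K the variance
```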

Network | Weights | Activation | Operations | Memory Saving | Computation Saving
Standard DNN | F | F | +, −, × | 1x | 1x
[10, 33, 39, 66, 64], … | B | F | +, − | 32x | ~2x
[65, 32, 61, 2], … | Q_k | Q_k | +, −, × | ~(32/k)x | —
[43], … | B (k bases) | B (k bases) | XNOR, bitcount | ~(32/k)x | ~(58/k²)x
[50] and ours | B | B | XNOR, bitcount | 32x | 58x
Table 1: Analysis of Theoretical Computational Complexity of a Single Network (F: full-precision, Q_k: k-bit quantization, B: binary)

5 Independent and Warm-Restart Training for BENNs

We train our BENN with two different methods. The first is the traditional way: initialize each new classifier independently and retrain it from scratch. To accelerate the training of a new weak classifier in BENN, we can instead initialize its weights by cloning the weights of the most recently trained classifier. We call this scheme warm-restart training, and we conjecture that knowledge of the data unseen by the new classifier is transferred through the inherited weights and helps increase its discriminability. Interestingly, we observe that for a small network and dataset, such as Network-In-Network [42] on CIFAR-10, warm-restart training yields better accuracy. However, independent training is better when BENN is applied to large networks and datasets such as AlexNet [38] and ResNet [27] on ImageNet, where overfitting emerges. More discussion can be found in Sec. 6 and Sec. 7.

Implementation Details

We train BENN on the image classification task with a CNN block structure containing a batch normalization layer, a binary activation layer, a binary convolution layer, a non-binary activation layer (e.g., sigmoid, ReLU), and a pooling layer, as used in many recent works [50, 65]. To compute the gradient of the step function sign(x), we use the STE approach described above. When updating parameters, we keep and update real-valued weights as [50] suggests; otherwise the tiny updates would be killed by the deterministic binarization and training could not proceed. In this work, we train each BNN with both standard independent and warm-restart training. Unlike previous works, which always keep the first and last layer full-precision, we test 7 different BNN architecture configurations, shown in Table 2, and use them as ingredients for ensembling in BENN.
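A simplified sketch of one such training step (ours; it keeps the real-valued weights as described above but omits XNOR-Net's per-filter scaling factors, and it assumes the model's activation layers already use the STE sign function from Sec. 4):

```python
import torch

def train_step(model, images, labels, optimizer, loss_fn):
    """Forward/backward with binarized weights, then update the stored real-valued weights."""
    real_weights = [p.detach().clone() for p in model.parameters()]
    for p in model.parameters():
        p.data = torch.sign(p.data)        # deterministic weight binarization (sign(0)=0 ignored here)
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()                        # gradients are evaluated at the binary weights
    for p, w in zip(model.parameters(), real_weights):
        p.data = w                         # restore real weights so tiny updates are not killed
    optimizer.step()
    return loss.item()
```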

Weak BNN Configuration/Type (T) Weight Activation Size Params
SB (Semi-BNN) First and last layer:32-bit First and last layer:32-bit 100% 100%
AB (All-BNN) All layers:1-bit All layers:1-bit 100% 100%
WQB (Weight-Quantized-BNN) All layers:Q-bit All layers:1-bit 100% 100%
AQB (Activation-Quantized-BNN) All layers:1-bit All layers:Q-bit 100% 100%
IB (Except-Input-BNN) All layers:1-bit First layer: 32-bit 100% 100%
SB/AB/IB-Tiny (Tiny-Compress-BNN) - - 50% 25%
SB/AB/IB-Nano (Nano-Compress-BNN) - - 10% 1%
Table 2: Weak BNN Configurations Used for Ensembling (W: weights, A: activations, Params: number of parameters in the network). The last two rows are naively compressed networks.

6 Experimental Results

We evaluate BENN on CIFAR-10 and ImageNet datasets with a self-designed compact Network-In-Network (NIN) [42], the standard AlexNet [38] and ResNet-18 [27], respectively. We have summarized in Table 2 the configurations of all BNN variants. More detailed specifications of the networks can be found in the supplementary material. For each type of BNN, we obtain the converged single BNN (e.g., SB) when training is done. We also store BNN after each training step and obtain the best BNN along the way by picking the one with the highest test accuracy (e.g., Best SB). We use BENN-T-R to denote the BENN by aggregating R BNNs of configuration T (e.g., BENN-SB-32). We also denote Bag/Boost-Indep and Bag/Boost-Seq as bagging/boosting with standard independent training and warm-restart sequential training (Sec. 5). All ensembled BNNs share the same network architecture as their real-valued DNN counterpart in this paper, although studying multi-model ensemble is an interesting future work. The code of all our experiments will be made public online.

Network Ensemble Method # Ensembles STD
SB - 1 2.94
Best SB - 1 1.40
BENN-SB Bag-Seq 5 0.31
BENN-SB Boost-Seq 5 0.24
BENN-SB Bag-Seq 32 0.03
BENN-SB Boost-Seq 32 0.02
Table 3: Oscillation During Training (Instability)

6.1 Insights Generated from CIFAR-10

In this section, we show the large performance gain of BENN on CIFAR-10 and summarize some insights. Each BNN is initialized from a pre-trained XNOR-Net model [50] and then retrained for 100 epochs to reach convergence before ensembling. Each full-precision DNN counterpart is trained for 300 epochs to obtain the best accuracy for reference. The learning rate is set to 0.001 and the ADAM optimizer is used. Here, we use a compact Network-In-Network (NIN) for CIFAR-10. We first present several independent comparisons and then summarize the insights we found.

Figure 3: Left: BENN can increase the test accuracy significantly with more ensembles. It can even achieve better accuracy than its full-precision counterpart under Semi-BNN (SB) case. Right: Boosting strongly outperforms bagging in All-BNN (AB) case where each BNN has larger bias.

Single BNN versus BENN: We find that BENN achieves much better accuracy and stability than a single BNN with a negligible sacrifice in speed. Experiments across all BNN configurations show a consistent accuracy gain of BENN over a single BNN on CIFAR-10. If each BNN is weak (e.g., AB), the gain of BENN increases further, as shown in Fig. 3 (right). This verifies that a BNN is indeed a good weak classifier for ensembling. Surprisingly, BENN-SB outperforms the full-precision DNN after 32 ensembles (either bagging or boosting), as shown in Fig. 3 (left). Note that, to keep the same memory usage as a 32-bit DNN, we constrain the ensemble to at most 32 rounds when no network compression is involved. With more ensembles we observe further performance gains, but the accuracy gain eventually flattens out.

Figure 4: After ensemble, the accuracy increases with more activation bits (Q=2 in AQB). Preserving the first and/or last layer full-precision (IB and SB) helps, compared with all-binary case (AB).

We also compare BENN-SB-5 (i.e., 5 ensembles) with WQB (Q=5, 5-bit weights and 1-bit activations), which has the same number of parameter bits. WQB reaches a lower accuracy and is unstable, whereas our ensemble network reaches a higher accuracy and remains stable.

We also measure the accuracy variation of the classifier over the last 20 training steps for all BNN configurations. The results in Table 3 indicate that BENN reduces BNN's oscillation by roughly an order of magnitude with 5 ensembles and by about two orders of magnitude after 32 ensembles. Moreover, picking the best BNN (the one with the highest test accuracy) instead of the BNN at the end of training also reduces the oscillation. The statistical properties of the ensemble framework (Sec. 3 and Sec. 4.4) thus make BENN a graceful way to ensure high stability.

Bagging versus boosting: It is known that bagging only reduces the variance of the predictor, while boosting can reduce both bias and variance. Fig. 3 (right), Fig. 4, and Table 4 show that boosting outperforms bagging, especially after the BNN is compressed: the gap is small for the Tiny configuration (network size reduced to 25% of the parameters) but grows to about 6% for the Nano configuration (1% of the parameters), and the gain increases from 5 to 32 ensembles. This verifies that boosting is the better choice when the model does not overfit much.

Standard independent training versus warm-restart training: Standard ensemble techniques use independent training, while warm-restart training enables the new classifier to learn faster. Fig. 3 (left) shows that warm-restart training performs better, for both bagging and boosting, after the same number of training epochs. This suggests that gradually adapting to more examples is a better choice for CIFAR-10. However, this does not hold for the ImageNet task because of slight over-fitting with warm restarts (Sec. 6.2). We believe this is an interesting phenomenon, but it needs more justification by studying the theory of convergence.

Network Ensemble Method # Ensembles Accuracy
Best SB - 1 84.91%
BENN-SB Bag-Seq 32 89.12%
BENN-SB Boost-Seq 32 89.00%
Best SB-Tiny - 1 77.20%
BENN-SB-Tiny Bag-Seq 32 84.09%
BENN-SB-Tiny Boost-Seq 32 84.32%
Best SB-Nano - 1 40.70%
BENN-SB-Nano Bag-Seq 500 57.12%
BENN-SB-Nano Boost-Seq 500 63.11%
Table 4: Impact of Network Compression

The impact of compressing BNN: A BNN's model complexity largely affects its bias and variance. If each weak BNN has enough complexity, with low bias but high variance, then bagging is more favorable than boosting due to its simplicity. However, if each BNN is small, with large bias, boosting becomes a much better choice. To verify this, we compress each BNN in Table 2 by naively reducing the number of channels and neurons in each layer. The results in Table 4 show that BENN-SB maintains reasonable performance even after naive compression, and boosting gains more over bagging under severe compression (Nano config).

We also find that BENN is less sensitive to network size. Table 4 shows that compression reduces a single BNN's accuracy by about 7.7% (Tiny config) and 44.2% (Nano config). After ensembling, the performance loss caused by compression shrinks markedly (to about 5% for the Tiny config with 32 ensembles). Surprisingly, we observe that compression reduces the accuracy of the full-precision DNN by a much smaller margin. It is therefore necessary to have not-too-weak BNNs to build a BENN that can compete with the full-precision DNN. In the future, better pruning algorithms can be combined with BENN, rather than naive compression, to allow smaller networks to be ensembled.

The effect of bit width: Higher bitwidth results in lower variance and bias at the same time. This can be seen in Fig. 4, where we use 2-bit activations in BENN-AQB (Q=2). BENN-AQB (Q=2) and BENN-IB have comparable accuracy after 32 ensembles, much better than BENN-AB and worse than BENN-SB. We also observe that activation binarization results in a much more unstable model than weight binarization. This indicates that the gain of having more bits is mostly due to better features extracted from the input image, since input binarization is a real pain for neural networks. Surprisingly, BENN-AB can still achieve reasonably high accuracy under such a pain.

The effect of binarizing the first and last layer: Almost all existing BNN works keep the first and last layer full-precision, since binarizing these two layers causes severe accuracy degradation. We find that BENN is less affected, as shown by BENN-AB, BENN-SB, and BENN-IB in Fig. 4: the accuracy loss due to binarizing these two special layers is large for a single BNN but shrinks considerably for BENN with 32 ensembles.

In summary, our main insights about BNN and BENN are: (1) Ensemble methods such as bagging and boosting greatly relieve BNN's problems in terms of representation power, stability, and robustness. (2) Boosting gains an advantage over bagging in most cases, and warm-restart training is often the better choice. (3) The weak BNN's configuration (i.e., size, bitwidth, first and last layer) is essential for building a well-functioning BENN that matches a full-precision DNN in practice.

6.2 Exploration on Applying BENN to ImageNet Recognition

We believe BENN is one of the best neural network structures for inference acceleration. To demonstrate its effectiveness, we compare our algorithm with the state of the art on the ImageNet recognition task (ILSVRC2012) using AlexNet [38] and ResNet-18 [27]. Specifically, we compare our BENN-SB with independent training (Sec. 5) against the full-precision DNN [38, 50], DoReFa-Net (k-bit quantized weights and activations) [65], XNOR-Net (binary weights and activations) [50], BNN (binary weights and activations) [31], and BinaryConnect (binary weights) [10]. We also tried ABC-Net (k binary bases for weights and activations) [43], but unfortunately the network does not converge well. Note that the accuracies of BNN and BinaryConnect on AlexNet are those reported by [50] rather than by the original authors. For DoReFa-Net and ABC-Net, we use the best accuracy reported by the original authors with 1-bit weights and 1-bit activations. For XNOR-Net, we report the number from our own retrained model. Each of our BNNs is initialized from a well pre-trained XNOR-Net model (trained to convergence over 100 epochs) and retrained for 80 epochs before ensembling. As shown in Tables 5 and 6, BENN-SB is the best among all the state-of-the-art binary architectures, even with only 3 ensembles parallelized on 3 threads. Meanwhile, although we do observe continuous gains with 5 and 8 ensembles (e.g., a further gain on AlexNet), we find that BENN with more ensembles on ImageNet can be unstable in terms of accuracy and needs further investigation of the overfitting issue; otherwise the rapid gain is not always guaranteed. However, we believe our initial exploration along this direction shows BENN's potential to catch up with the full-precision DNN and even surpass it with more base BNN classifiers. In fact, how to optimize BENN on large and diverse datasets remains an interesting open problem.

Method W A Top-1
Full-Precision DNN [38, 50] 32 32 56.6%
XNOR-Net [50] 1 1 44.0%
DoReFa-Net [65] 1 1 43.6%
BinaryConnect [10, 50] 1 32 35.4%
BNN [31, 50] 1 1 27.9%
BENN-SB-3, Bagging (ours) 1 1 48.8%
BENN-SB-3, Boosting (ours) 1 1 50.2%
BENN-SB-6, Bagging (ours) 1 1 52.0%
BENN-SB-6, Boosting (ours) 1 1 54.3%
Table 5: Comparison with state-of-the-arts on ImageNet using AlexNet (W-weights, A-activation)
Method W A Top-1
Full-Precision DNN [27, 43] 32 32 69.3%
XNOR-Net [50] 1 1 48.6%
ABC-Net [43] 1 1 42.7%
BNN [31, 50] 1 1 42.2%
BENN-SB-3, Bagging (ours) 1 1 53.4%
BENN-SB-3, Boosting (ours) 1 1 53.6%
BENN-SB-6, Bagging (ours) 1 1 57.9%
BENN-SB-6, Boosting (ours) 1 1 61.0%
Table 6: Comparison with state-of-the-arts on ImageNet using ResNet-18 (W-weights, A-activation)

7 Discussion

More bits per network or more networks per bit? We believe this paper raises this important question. In biological neural networks such as our brain, the signal between two neurons is more like a spike than a high-range real-valued signal. This implies that it may not be necessary to use real-valued numbers, which involve a lot of redundancy and can waste significant computing power. Our work converts the question of "how many bits per network" into "how many networks per bit". BENN provides a hierarchical view: we build weak classifiers from groups of neurons, and build a strong classifier by ensembling the weak classifiers. We have shown that this hierarchical approach is an intuitive and natural way to represent knowledge. Although the optimal ensemble structure is beyond the scope of this paper, we believe structure search or meta-learning techniques can be applied. Moreover, improving the single BNN, such as studying its error surface and resolving the curse of activation/gradient binarization, remains essential for the success of BENN.

BENN is hardware friendly: Using a BENN with K 1-bit ensembles is better than using one K-bit classifier. First, K-bit quantization still cannot get rid of fixed-point multiplication, while BENN needs only bitwise operations, and BNNs have been shown to be further accelerated on FPGAs over modern CPUs [63, 18]. Second, the complexity of a multiplier is proportional to the square of the bitwidth, so BENN simplifies the hardware design. Third, BENN can use spike-like signals on chip instead of keeping signals real-valued all the time, which saves a lot of energy. Finally, unlike recent approaches requiring quadratic time, BENN can be better parallelized on chip thanks to its linear time complexity.

Current limitations: It is well known that ensemble methods can cause overfitting, and we observe similar problems on CIFAR-10 and ImageNet when the number of ensembles keeps increasing. An interesting next step is to analyze the decision boundary of BENN on different datasets and track its evolution in high-dimensional feature space. Also, training takes longer when many ensembles are needed (especially on large datasets like ImageNet), which slows design iterations. Finally, BENN needs to be further optimized for large networks such as AlexNet and ResNet to show its full power, for example by picking the best ensemble rule and base classifier.

8 Conclusion and Future Work

In this paper, we proposed BENN, a novel neural network architecture that marries BNNs with ensemble methods. The experiments show a large performance gain in terms of accuracy, robustness, and stability. Our experiments also reveal insights about trade-offs in bit width, network size, number of ensembles, etc. We believe that by leveraging specialized hardware such as FPGAs, BENN can be a new dawn for deploying large DNNs into mobile and embedded systems. This work also indicates that the properties of a single BNN remain essential, so both directions deserve continued effort. In the future we will explore the power of BENN to reveal more insights about network bit representation and minimal network architectures (e.g., combining BENN with pruning), BENN and hardware co-optimization, and the statistics of BENN's decision boundary.

References

9 Supplementary Material: Detailed Analysis on DNN, BNN, and BENN

Given a full-precision real-valued DNN with parameters W, a BNN with binarized parameters W_b, an input vector x (after batch normalization) with perturbation Δx, and a BENN with K ensembles, we want to compare their robustness w.r.t. the input perturbation. Here we analyze the variance of the output change before and after perturbation, which echoes Eq. 1 in Sec. 3 of the main paper. The output change has zero mean, so its variance reflects the distribution of the output variation: a larger variance means a larger variation of the output w.r.t. the input perturbation.

Assume the quantities below are the pre-activation outputs of a single neuron in a one-layer network. The output variation of the real-valued DNN is ΔY_DNN = W · (x + Δx) − W · x = W · Δx, whose distribution has variance n σ_W² σ², where n denotes the number of input connections of this neuron, σ_W² and σ² denote the variances of the weights and of the perturbation, and · denotes the inner product. This is because the inner product is a summation of n independent terms, whose variances add. Modern non-linear activation functions like ReLU do not change the ordering of these variances, so we omit them in the analysis to keep it simple.

9.1 Activation Binarization

Suppose W is real-valued and only the input is binarized; the activation binarization (to −1 and +1) has threshold 0. Then the output variation is

ΔY_A = W · (sign(x + Δx) − sign(x)),

whose distribution has variance n σ_W² B. This is because the terms of the inner product are independent, each having variance σ_W² B, where B denotes the variance of sign(x + Δx) − sign(x). Note that sign(x + Δx) − sign(x) only has three possible values, namely 0, −2 and +2, whose probabilities follow from the joint distribution of x and x + Δx, since x is standardized by batch normalization. The variance B can in principle be computed by integrating over this joint density; unfortunately the integral is too complicated to solve analytically here, so we obtain B numerically. Therefore, the variance of the activation-binarized network's output change is n σ_W² B, compared with n σ_W² R for the DNN, where R = σ²; values of B and R can be found in Table 7. When B > R (i.e., σ < 1), the robustness of the BNN is worse than the DNN's. As for BENN-Bagging with K ensembles, the output change has variance n σ_W² B / K, so BENN-Bagging has better robustness than the BNN. If K > B / R, then BENN-Bagging can have even better robustness than the DNN.

σ B R
1.5 1.25 2.25
1.0 1.0 1.0
0.5 0.59 0.25
0.1 0.13 0.01
0.01 0.013 0.0001
0.001 0.0013 0.000001
Table 7: Relation between the perturbation standard deviation σ, B, and R = σ²
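Since B is obtained numerically, the following minimal Monte Carlo sketch (ours) reproduces the B column of Table 7 under the stated assumptions x ~ N(0, 1) and Δx ~ N(0, σ²); R = σ² is printed alongside for comparison:

```python
import numpy as np

def estimate_B(sigma, n_samples=1_000_000, seed=0):
    """Estimate B = Var[sign(x + dx) - sign(x)] for x ~ N(0,1), dx ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    dx = sigma * rng.standard_normal(n_samples)
    diff = np.sign(x + dx) - np.sign(x)   # takes values in {-2, 0, +2}
    return diff.var()

for sigma in (1.5, 1.0, 0.5, 0.1):
    print(sigma, round(estimate_B(sigma), 2), sigma ** 2)   # compare with the B and R columns
```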

9.2 Weight Binarization

If we binarize W to sign(W) but keep the activation real-valued, the output variation follows

ΔY_W = sign(W) · (x + Δx) − sign(W) · x = sign(W) · Δx,

with variance n σ². Thus whether weight binarization hurts robustness depends on whether σ_W² < 1 holds; in particular, the robustness does not decrease if σ_W² ≥ 1. BENN-Bagging has variance n σ² / K. So if K > 1 / σ_W², then BENN-Bagging is better than the DNN.

9.3 Binarization of Both Weight and Activation

If both activation and weight are binarized, the output variation

ΔY_B = sign(W) · (sign(x + Δx) − sign(x))

has variance n B, which simply combines Sec. 9.1 and Sec. 9.2. BENN-Bagging has variance n B / K, which is more robust than the DNN when K > B / (σ_W² σ²).

The above analysis results in the following theorem:

Theorem 1

Given an activation-binarized, weight-binarized, or extremely binarized one-layer network as introduced above, with input perturbation Δx ~ N(0, σ²), the output variation obeys:

  1. If only the activation is binarized, the BNN has worse robustness than the DNN when the perturbation satisfies σ < 1 (i.e., B > R). BENN-Bagging is guaranteed to be more robust than the BNN. BENN-Bagging with K ensembles is more robust than the DNN when K > B / R.

  2. If only the weight is binarized, the BNN has worse robustness than the DNN when σ_W < 1. BENN-Bagging is guaranteed to be more robust than the BNN. BENN-Bagging with K ensembles is more robust than the DNN when K > 1 / σ_W².

  3. If both weight and activation are binarized, the BNN has worse robustness than the DNN when σ_W ≤ 1 and the perturbation satisfies σ ≤ 1. BENN-Bagging is guaranteed to be more robust than the BNN. BENN-Bagging with K ensembles is more robust than the DNN when K > B / (σ_W² σ²).

9.4 Multiple Layers Scenario

All the above analysis is for a one-layer model, before and after the activation function. The same conclusions extend to the multi-layer scenario via Theorem 2.

Theorem 2

Given an activation-binarized, weight-binarized, or extremely binarized L-layer network (without batch normalization, for generality) as introduced above, with input perturbation Δx, the accumulated perturbation of the ultimate network output obeys:

  1. For the DNN, the ultimate output variation compounds the single-layer variance n σ_W² σ² across the L layers.

  2. For the activation-binarized BNN, it compounds the single-layer variance n σ_W² B.

  3. For the weight-binarized BNN, it compounds the single-layer variance n σ².

  4. For the extremely binarized BNN, it compounds the single-layer variance n B.

  5. Theorem 1 holds in the multi-layer scenario.

People have not fully understood the effect of variance reduction in boosting algorithms, and some debate remains in the literature [7, 17], given that the classifiers are not independent of each other. However, our experiments show that BENN-Boosting can also reduce variance in our setting, which is consistent with [16, 17]. A theoretical analysis of BENN-Boosting is left for future work.

If we switch W and x and replace the input perturbation with a parameter perturbation in the above analysis, the same conclusions hold for parameter perturbation (the stability issue). To sum up, a BNN can often be worse than a DNN in terms of robustness and stability, and our method BENN cures these problems.

10 Supplementary Material: Training Process of BENN

Input: a full-precision neural network with L layers, n elements per convolution kernel, learning rate η, an initial weight for each training example, and the number of ensemble rounds K. Initialize each BNN from a pre-trained XNOR-Net model [50]. Retrain each BNN for at most M epochs.
Ensemble Pass: for k = 1 to K do
        Sample a new training set according to the weight of each training example;
        for epoch = 1 to M do
               Forward Pass: for l = 1 to L do
                      for each filter in the l-th layer do
                             binarize the real-valued filter weights (with the scaling factor of [50]);
                      end for
                      compute the activations from the binary kernels and the input;
               end for
               Backward Pass: compute the gradients following [50, 28];
               Parameter Update: update the real-valued weights with any update rule (e.g., SGD or ADAM);
        end for
        Ensemble Update: keep the BNN obtained at convergence; use either the bagging or the boosting algorithm to update the weight of each training example;
end for
Return: the K trained base classifiers of BENN;
Algorithm 1: Training Process of BENN

11 Supplementary Material: Network Architectures Used in the Paper

In this section we provide network architectures used in the experiments of our main paper.

11.0.1 Self-Designed Network-In-Network (NIN)

Layer Index Type Parameters
1 Conv Depth: 192, Kernel Size: 5x5, Stride: 1, Padding: 2
2 BatchNorm : 0.0001, Momentum: 0.1
3 ReLU -
4 BatchNorm : 0.0001, Momentum: 0.1
5 Dropout : 0.5
6 Conv Depth: 96, Kernel Size: 1x1, Stride: 1, Padding: 0
7 ReLU -
8 MaxPool Kernel: 3x3, Stride: 2, Padding: 1
9 BatchNorm : 0.0001, Momentum: 0.1
10 Dropout : 0.5
11 Conv Depth: 192, Kernel Size: 5x5, Stride: 1, Padding: 2
12 ReLU -
13 BatchNorm : 0.0001, Momentum: 0.1
14 Dropout : 0.5
15 Conv Depth: 192, Kernel Size: 1x1, Stride: 1, Padding: 0
16 ReLU -
17 AvgPool Kernel: 3x3, Stride: 2, Padding: 1
18 BatchNorm : 0.0001, Momentum: 0.1
19 Dropout : 0.5
20 Conv Depth: 192, Kernel Size: 3x3, Stride: 1, Padding: 1
21 ReLU -
22 BatchNorm : 0.0001, Momentum: 0.1
23 Conv Depth: 192, Kernel Size: 1x1, Stride: 1, Padding: 0
24 ReLU -
25 BatchNorm : 0.0001, Momentum: 0.1
26 Conv Depth: 192, Kernel Size: 1x1, Stride: 1, Padding: 0
27 ReLU -
28 AvgPool Kernel: 8x8, Stride: 1, Padding: 0
29 FC Width: 1000
Table 8: Self-Designed Network-In-Network (NIN)

11.0.2 AlexNet

Layer Index Type Parameters
1 Conv Depth: 96, Kernel Size: 11x11, Stride: 4, Padding: 0
2 ReLU -
3 MaxPool Kernel: 3x3, Stride: 2
4 BatchNorm -
5 Conv Depth: 256, Kernel Size: 5x5, Stride: 1, Padding: 2
6 ReLU -
7 MaxPool Kernel: 3x3, Stride: 2
8 BatchNorm -
9 Conv Depth: 384, Kernel Size: 3x3, Stride: 1, Padding: 1
10 ReLU -
11 Conv Depth: 384, Kernel Size: 3x3, Stride: 1, Padding: 1
12 ReLU -
13 Conv Depth: 256, Kernel Size: 3x3, Stride: 1, Padding: 1
14 ReLU -
15 MaxPool Kernel: 3x3, Stride: 2
16 Dropout : 0.5
17 FC Width: 4096
18 ReLU -
19 Dropout : 0.5
20 FC Width: 4096
21 ReLU -
22 FC Width: 1000
Table 9: AlexNet

11.0.3 ResNet-18

Layer Index Type Parameters
1 Conv Depth: 64, Kernel Size: 7x7, Stride: 2, Padding: 3
2 BatchNorm : 0.00001, Momentum: 0.1
3 ReLU -
4 MaxPool Kernel: 3x3, Stride: 2
5 Conv Depth: 64, Kernel Size: 3x3, Stride: 1, Padding: 1
6 BatchNorm : 0.00001, Momentum: 0.1
7 ReLU -
8 Conv Depth: 64, Kernel Size: 3x3, Stride: 1, Padding: 1
9 BatchNorm : 0.00001, Momentum: 0.1
10 Conv Depth: 64, Kernel Size: 3x3, Stride: 1, Padding: 1
11 BatchNorm : 0.00001, Momentum: 0.1
12 ReLU -
13 Conv Depth: 64, Kernel Size: 3x3, Stride: 1, Padding: 1
14 BatchNorm : 0.00001, Momentum: 0.1
15 Conv Depth: 128, Kernel Size: 3x3, Stride: 2, Padding: 1
16 BatchNorm : 0.00001, Momentum: 0.1
17 ReLU -
18 Conv Depth: 128, Kernel Size: 3x3, Stride: 1, Padding: 1
19 BatchNorm : 0.00001, Momentum: 0.1
20 Conv Depth: 128, Kernel Size: 1x1, Stride: 2
21 BatchNorm : 0.00001, Momentum: 0.1
22 Conv Depth: 128, Kernel Size: 3x3, Stride: 1, Padding: 1
23 BatchNorm : 0.00001, Momentum: 0.1
24 ReLU -
25 Conv Depth: 128, Kernel Size: 3x3, Stride: 1, Padding: 1
26 BatchNorm : 0.00001, Momentum: 0.1
27 Conv Depth: 256, Kernel Size: 3x3, Stride: 2, Padding: 1
28 BatchNorm : 0.00001, Momentum: 0.1
29 ReLU -
30 Conv Depth: 256, Kernel Size: 3x3, Stride: 1, Padding: 1
31 BatchNorm : 0.00001, Momentum: 0.1
32 Conv Depth: 256, Kernel Size: 1x1, Stride: 2
33 BatchNorm : 0.00001, Momentum: 0.1
34 Conv Depth: 256, Kernel Size: 3x3, Stride: 1, Padding: 1
35 BatchNorm : 0.00001, Momentum: 0.1
36 ReLU -
37 Conv Depth: 256, Kernel Size: 3x3, Stride: 1, Padding: 1
38 BatchNorm : 0.00001, Momentum: 0.1
39 Conv Depth: 512, Kernel Size: 3x3, Stride: 2, Padding: 1
40 BatchNorm : 0.00001, Momentum: 0.1
41 ReLU -
42 Conv Depth: 512, Kernel Size: 3x3, Stride: 1, Padding: 1
43 BatchNorm : 0.00001, Momentum: 0.1
44 Conv Depth: 512, Kernel Size: 1x1, Stride: 2
45 BatchNorm : 0.00001, Momentum: 0.1
46 Conv Depth: 512, Kernel Size: 3x3, Stride: 1, Padding: 1
47 BatchNorm : 0.00001, Momentum: 0.1
48 ReLU -
49 Conv Depth: 512, Kernel Size: 3x3, Stride: 1, Padding: 1
50 BatchNorm : 0.00001, Momentum: 0.1
51 AvgPool -
52 FC Width: 1000
Table 10: ResNet-18