The recent wave of deep learning research has brought deep neural networks to the frontier of AI applications. Current implementations of deep neural networks in a wide variety of fields rely heavily on high computing power and energy-hungry hardware such as GPUs, which limits their deployment in embedded environments such as mobile phones or wearable devices. Much effort has been put into reducing the computing cost of deep neural networks, for example (Han et al., 2015; Chen et al., 2015; Jaderberg et al., 2014).
Several recent works have tried to reduce synapse weights and/or intermediate signal representations to binary form. They have shown that binarization of neural networks can dramatically reduce the computing cost (Rastegari et al., 2016; Courbariaux et al., 2016), as well as the amount of memory needed for storing weights and intermediate results. These approaches are especially useful in memory-constrained environments such as FPGA implementations.
In this work, we designed a network architecture based on Binarized Neural Networks (BNN), the neural network structure originally proposed by Courbariaux et al. (2016), and trained networks on the ILSVRC2012 image classification task (Krizhevsky et al., 2012).
We analyzed the behavior of weight adaptation during SGD training and used these observations to guide the selection of proper learning rates, which leads to a significant gain in the rate of convergence.
To increase recognition accuracy while maintaining the very high efficiency of a BNN, we propose a new network architecture that uses more channels than usual in the early layers while keeping normal layer widths through the rest of the network. With this network we obtained 80 percent top-5 accuracy.
A further improvement in network performance is obtained through network distillation (Hinton et al., 2015). With a moderately sized network, we achieved 84.1 percent top-5 accuracy on the single-crop validation set of the ILSVRC2012 classification task, which is much better than previously published results for the XNOR network (Rastegari et al., 2016) and the binarized version of GoogLeNet (Hubara et al., 2016). We expect that networks with better performance can be trained by following our current strategies.
2.1 First layer architecture
In the original BNN designed for the CIFAR-10 dataset (Courbariaux et al., 2016), all layers of the network are binary. For natural images, the first layer usually has only 3 input channels; thus, similar to (Rastegari et al., 2016; Hubara et al., 2016), we use a regular-weight convolutional layer as the first layer of the network.
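As an illustrative sketch (pure NumPy, not the paper's training code; `layer_forward` and the dense product standing in for convolution are our own simplifications), the split between a real-valued first layer and the binarized remainder can be expressed as:

```python
import numpy as np

def binarize(x):
    """Deterministic binarization to {-1, +1}; zero maps to +1."""
    return np.where(np.asarray(x, dtype=np.float64) >= 0, 1.0, -1.0)

def layer_forward(activations, real_weights, first_layer=False):
    # The first layer keeps real-valued weights and inputs, since a 3-channel
    # image carries too little redundancy to binarize without losing
    # information; every later layer binarizes both weights and activations.
    w = real_weights if first_layer else binarize(real_weights)
    a = activations if first_layer else binarize(activations)
    return a @ w  # a dense product stands in for convolution in this sketch
```

With binarized weights and activations, the inner product above reduces to XNOR and popcount operations in an efficient implementation, which is where the computing-cost savings come from.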
A and C: The distribution of synapse states after one epoch of training. Top: synapses from layer 2; bottom: synapses from layer 6. B: A lower learning rate significantly accelerated convergence. Networks similar to AlexNet were used here with a batch size of 256.
2.2 Weight adaptation behavior analysis and learning rate selection
In a typical BNN layer, real-valued synaptic weights are maintained during SGD training; these extra states are removed at the stage of computing the layer output. It is believed that real-valued weights are needed to smooth out the noise in SGD learning (Courbariaux et al., 2016). In the field of computational neuroscience, this is called binary synapses with internal/hidden states (Baldassi et al., 2007; Wu & Mel, 2009). Here we show that this is indeed the case. When a high learning rate of 0.01 is used (we rescale the learning rate of every binary layer as in (Courbariaux et al., 2016)), after one epoch of training we observed that most weight values stay around -1 and +1 (Figure 0(c)) because of weight clipping. Under such conditions, weights quickly jump between the edge values of -1 and +1 with little accumulation; equivalently, fewer hidden synapse states are available to the training process. Correspondingly, a rather slow climb in accuracy is observed (Figure 0(a)). When we use a lower learning rate of 0.001, we observe a weight distribution that is more uniform, or concentrated around 0, indicating that the weights take more steps to travel between -1 and +1 (Figure 0(b)), resembling the behavior of synapse age tags in (Wu & Mel, 2009) or hidden states in (Baldassi et al., 2007). Correspondingly, a much faster climb in accuracy is observed during training (Figure 0(a)).
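The interplay between the learning rate, weight clipping, and the latent real-valued weights can be sketched as follows (a hypothetical pure-NumPy toy with illustrative gradient magnitudes, not the paper's training code):

```python
import numpy as np

def bnn_sgd_step(w_real, grad, lr):
    # Update the latent real-valued weight, clip it to [-1, 1] as in BNN
    # training, and return the binarized weight used in the forward pass.
    w_real = float(np.clip(w_real - lr * grad, -1.0, 1.0))
    return w_real, (1.0 if w_real >= 0 else -1.0)

# High learning rate: the latent weight saturates at a clip boundary in a
# single step, so the binary weight can flip sign on every opposing gradient
# and effectively no hidden state survives (gradient 20.0 is illustrative).
w, b = bnn_sgd_step(0.9, grad=20.0, lr=0.1)   # latent weight clips to -1.0

# Low learning rate: many consistent gradient steps must accumulate before
# the latent weight crosses zero, so it behaves as a hidden state / age tag.
w2, steps = 0.9, 0
while w2 >= 0:
    w2, _ = bnn_sgd_step(w2, grad=20.0, lr=0.001)
    steps += 1
# With these numbers, roughly 45 small steps are needed before the sign flips.
```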
(a) A BVLC-AlexNet network trained on regular targets. (b) A 13-layer network trained with regular targets achieved 80% top-5 accuracy. (c) The 13-layer network trained with soft targets. (d) Fine-tuning the 13-layer network with combined soft and regular targets achieved 84.1% top-5 accuracy.
2.3 Network distillation and avoiding network bottlenecks
By following the aforementioned strategies, we managed to train BVLC-AlexNet (Krizhevsky et al., 2012) to a 68% top-5 accuracy (Figure 1(a)), slightly better than the result in Hubara et al. (2016). Pretraining the network with a soft tanh activation gave a further improvement, reaching 69% accuracy.
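One way to view the soft-tanh pretraining is that a smooth activation stands in for the hard sign non-linearity early on, and is swapped out once training has stabilized. A minimal sketch (our own illustration; `activation` is a hypothetical helper, not from the paper's code):

```python
import numpy as np

def activation(x, soft=False):
    # During pretraining, a smooth tanh replaces the hard sign non-linearity
    # of the binarized network; tanh approaches sign for large |x|, so the
    # swap back to sign after pretraining is a small perturbation.
    x = np.asarray(x, dtype=np.float64)
    return np.tanh(x) if soft else np.where(x >= 0, 1.0, -1.0)
```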
We also trained a 13-layer network (shown in Table 1) and obtained a top-5 accuracy of 80.2%. For this network, we used regular weights in the last fully connected layer and a dropout ratio of 0.2 for the dropout layer. Notice the wider-than-usual first and second layers, which are designed to avoid the information bottleneck associated with binarized networks. The key feature here is that wider layers are used only in the early layers, while the rest of the network stays at normal width. This feature is crucial, as it maintains the very low computing cost brought by BNNs.
Further improvement appears to be hindered by over-fitting. Adding more layers might further improve performance, but such a strategy would cut into the computing budget. Since maintaining high efficiency is crucial for real-world implementations, we adopted network distillation (Hinton et al., 2015). We used a pre-trained 50-layer residual network (He et al., 2015) to generate the soft training targets. In agreement with (Hinton et al., 2015), we found that training purely with soft targets did not improve performance (Figure 1(c)). After the network was pre-trained with soft targets, training with combined soft and regular targets significantly improved performance to 84.1% (Figure 1(d)). The network structure used in this part is the same as in Table 1, except that no dropout was used and a fixed scaling layer was added before the softmax. Limited by the computing resources available to us, we did not test bigger networks for better performance.
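The combined soft/regular-target objective can be sketched as below (a hypothetical NumPy illustration of the standard distillation loss of Hinton et al. (2015); the temperature `T` and mixing weight `alpha` are placeholders, not the settings used in our experiments):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def combined_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    # Soft part: cross-entropy against the teacher's temperature-softened
    # distribution. Regular part: cross-entropy against the one-hot label.
    soft = -np.sum(softmax(teacher_logits, T) * np.log(softmax(student_logits, T)))
    hard = -np.log(softmax(student_logits)[label])
    return alpha * soft + (1.0 - alpha) * hard
```

Setting `alpha=1.0` recovers pure soft-target training (which alone did not help in our experiments), while intermediate values mix in the ground-truth signal.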
3 Discussion and acknowledgement
Our results show that BNN networks can be trained on ILSVRC2012 data to achieve good performance with proper network architectures and training strategies. At this moment, the best network we have trained is a 13-layer network with an 84.1 percent top-5 accuracy on a single-crop test of the validation set. We expect further improvement can be obtained by: 1) testing with multiple scales/crops averaged as in (He et al., 2015); 2) distill-training with networks of better accuracy; and 3) tuning hyper-parameters beyond the handful of distillation networks we have trained with our very limited computing resources. We are cleaning up the code and will release it soon.
- Baldassi et al. (2007) Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, and Riccardo Zecchina. Efficient supervised learning in networks with binary synapses. BMC Neuroscience, 8(2):1, 2007.
- Chen et al. (2015) Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
- Courbariaux et al. (2015) Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3105–3113, 2015.
- Courbariaux et al. (2016) Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
- Han et al. (2015) Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.
- He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
- Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
- Hubara et al. (2016) Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
- Jaderberg et al. (2014) Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
- Rastegari et al. (2016) Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
- Wu & Mel (2009) Xundong E Wu and Bartlett W Mel. Capacity-enhancing synaptic learning rules in a medial temporal lobe online learning model. Neuron, 62(1):31–41, 2009.