In object detection, reducing computational cost is as important as improving accuracy for most practical use cases. This paper proposes a novel network structure which is an order of magnitude lighter than other state-of-the-art networks while maintaining accuracy. Based on the basic principle of more layers with fewer channels, this new deep neural network minimizes its redundancy by adopting recent innovations including C.ReLU and the Inception structure. We also show that this network can be trained efficiently to achieve solid results on well-known object detection benchmarks: 84.9% mAP on VOC2007 and 84.2% mAP on VOC2012, while the required compute is less than 10% of that of ResNet-101.
Convolutional neural networks (CNNs) have made impressive improvements in object detection for several years. Thanks to many innovative works, recent object detection algorithms have reached accuracies acceptable for commercialization in a broad range of markets such as automotive and surveillance. However, in terms of detection speed, even the best algorithms still suffer from heavy computational cost. Although recent reports on network compression and quantization show promising results, it is still important to reduce the computational cost at the network design stage.
The successes in network compression kim2015compression and in decomposition of convolution kernels ioannou2015training ; iandola2016squeezenet imply that present network architectures are highly redundant. Reducing these redundancies is therefore a straightforward approach to reducing computational cost.
This paper presents a lightweight network architecture for object detection, named PVANet (the code and models are available at https://github.com/sanghoon/pva-faster-rcnn), which achieves state-of-the-art detection accuracy in real time. Based on the basic principle of "smaller number of output channels with more layers", we adopt C.ReLU ShangW2016icml in the initial layers and the Inception structure SzegedyC2015cvpr in the latter part of the network. Multi-scale feature concatenation KongT2016cvpr is also applied to exploit the multi-scale nature of object detection tasks.
We also show that our thin but deep network can be trained effectively with batch normalization SzegedyC2015icml ; HeK2016cvpr , and with our own learning rate scheduling based on plateau detection.
In the remaining sections, we describe the structure of PVANet as a feature extraction network (Section 2.1) and a detection network (Section 2.2). Then, we present experimental results on the ImageNet 2012 classification, VOC2007 and VOC2012 benchmarks, with detailed training and testing methodologies (Section 3).
C.ReLU ShangW2016icml is motivated by an interesting observation of intermediate activation patterns in CNNs. In the early stage, output nodes tend to be "paired" such that one node's activation is the opposite of another's. Based on this observation, C.ReLU doubles the number of output channels by simply concatenating the negated outputs before applying ReLU.
The original design of C.ReLU enforces a shared bias between the two negatively correlated outputs, while the observations concern weight matrices only. We add a separate bias layer so that the two correlated filters can have different bias values. When tested with the ALL-CNN-C network springenberg2015striving on CIFAR-10, our modified C.ReLU shows lower training loss (Figure 1) and better test accuracy (Table 2.1) than the original work.
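The modification can be sketched as follows; this is a minimal numpy illustration of the channel-doubling and per-half biases, not the actual (convolutional, learned) implementation.

```python
import numpy as np

def modified_crelu(x, bias_pos, bias_neg):
    """Modified C.ReLU: concatenate the input and its negation along the
    channel axis, add a *separate* bias to each half (the original design
    shares one bias), then apply ReLU.
    x: (channels, height, width); biases: (channels, 1, 1)."""
    doubled = np.concatenate([x + bias_pos, -x + bias_neg], axis=0)
    return np.maximum(doubled, 0.0)
```

In a real network the concatenation sits after a convolution, and the two bias tensors are the learnable parameters the modification introduces.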
For object detection tasks, Inception has neither been widely applied nor been verified for its effectiveness. We have found that Inception can be one of the most cost-effective building blocks for capturing both small and large objects in an input image.
To learn visual patterns for large objects, the output features of a CNN should correspond to sufficiently large receptive fields, which can easily be fulfilled by stacking convolutions with 3x3 or larger kernels. On the other hand, for capturing small objects, the output features do not need large receptive fields, and a series of large kernels may lead to redundant parameters and computations. The 1x1 convolutions in the Inception structure prevent the growth of receptive fields in some paths of the network and can therefore reduce those redundancies.
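As a concrete check of this argument, the receptive field of a stack of stride-1 convolutions grows by k - 1 per k x k layer, so a 1x1 layer contributes nothing to the growth:

```python
def receptive_field(kernel_sizes):
    """Receptive field of a stack of stride-1 convolutions:
    each k x k layer widens the field by k - 1."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

print(receptive_field([3, 3]))  # two 3x3 layers cover a 5x5 field
print(receptive_field([1, 3]))  # the 1x1 layer adds no growth: still 3x3
```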
It is widely accepted that as a network grows deeper, its training becomes more troublesome. We address this issue by adopting residual structures with pre-activation he2016identity and batch normalization SzegedyC2015icml . Unlike the original work, we add residual connections onto the Inception layers as well.
We also implement our own policy to control the learning rate dynamically, based on "plateau detection". We track the moving average of the loss, and if the minimum loss has not improved for a certain number of iterations, we declare the training to be on a plateau. Whenever a plateau is detected, the learning rate is decreased by a constant factor. In our experiments, this learning rate policy gave a notable gain in accuracy.
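The policy can be sketched as follows; the window size, decay factor, and patience values here are illustrative assumptions, not the values used in the paper.

```python
class PlateauScheduler:
    """Plateau-detection learning-rate policy: track a moving average of
    the loss and decay the rate when the minimum average has not improved
    for `patience` consecutive iterations."""

    def __init__(self, lr=0.1, decay=0.5, patience=1000, window=100):
        self.lr, self.decay = lr, decay
        self.patience, self.window = patience, window
        self.history = []
        self.best = float("inf")
        self.stale = 0

    def step(self, loss):
        self.history.append(loss)
        if len(self.history) < self.window:
            return self.lr
        avg = sum(self.history[-self.window:]) / self.window
        if avg < self.best:
            self.best, self.stale = avg, 0
        else:
            self.stale += 1
            if self.stale >= self.patience:   # on-plateau: decay the rate
                self.lr *= self.decay
                self.stale = 0
        return self.lr
```

The training loop simply calls `step(loss)` each iteration and uses the returned rate for the next update.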
Table 2 shows the feature extraction network of PVANet. In the early stage of the network (conv1_1, …, conv3_4), we adopt "bottleneck" building blocks HeK2016cvpr in order to reduce the input dimensions of the 3x3 kernels without jeopardizing the overall representation capacity, and we apply the modified C.ReLU after the 7x7 and 3x3 convolutional layers. The latter part of the network consists of Inception structures without the modified C.ReLU. In our Inception blocks, a 5x5 convolution is substituted by two consecutive 3x3 convolutional layers with an activation layer between them. With this feature extraction network, we can create an efficient network for object detection. Figure 2 shows the designs of the two main building blocks of our network structure.
Figure 3 shows the structure of the PVANet detection network. We basically follow the method of Faster R-CNN RenS2015nips , but we introduce some modifications specialized for object detection. In this section, we describe the design of the detection network.
Multi-scale representations and their combination are proven to be effective in many recent deep learning tasks KongT2016cvpr ; BellS2016cvpr ; HariharanB2015cvpr . Combining fine-grained details with highly abstracted information in the feature extraction layer helps the subsequent region proposal network and classification network detect objects of different scales. However, since the direct concatenation of all abstraction layers may produce redundant information at a much higher compute requirement, the number of abstraction layers to combine and their positions in the network must be chosen carefully.
Our combined feature layer, 'convf', merges 1) the last layer and 2) two intermediate layers whose scales are 2x and 4x of the last layer, respectively. We choose the middle-sized layer as the reference scale (= 2x), and concatenate the 4x-scaled layer and the last layer after down-scaling (pooling) and up-scaling (linear interpolation), respectively. The concatenated features are combined by a 1x1x512 convolutional layer.
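A shape-level sketch of this combination follows; the channel counts are illustrative, nearest-neighbour up-scaling stands in for linear interpolation, and the final 1x1x512 convolution is omitted.

```python
import numpy as np

def build_convf(feat_4x, feat_2x, feat_1x):
    """Concatenate three scales at the reference (2x) resolution:
    down-scale the 4x map by 2x2 max pooling and up-scale the last (1x)
    map by a factor of 2. Feature maps are (channels, height, width)."""
    c4, h4, w4 = feat_4x.shape
    # non-overlapping 2x2 max pooling
    pooled = feat_4x.reshape(c4, h4 // 2, 2, w4 // 2, 2).max(axis=(2, 4))
    # 2x nearest-neighbour up-scaling
    upscaled = feat_1x.repeat(2, axis=1).repeat(2, axis=2)
    return np.concatenate([pooled, feat_2x, upscaled], axis=0)
```

All three inputs end up at the 2x resolution, so the channel dimensions simply add up in the concatenation.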
In our experiments, we have found that the feature inputs to the region proposal network (RPN) do not need to be as deep as the inputs to the fully-connected classifiers. Based on this observation, we feed only the first 128 channels of 'convf' into the RPN, which reduces the computational cost by 1.4 GMAC without damaging accuracy. The RPN in our structure consists of one 3x3x384 convolutional layer followed by two prediction layers for scores and bounding-box regressions. Unlike the original Faster R-CNN RenS2015nips , our RPN uses 42 anchors of 6 scales (32, 48, 80, 144, 256, 512) and 7 aspect ratios (0.333, 0.5, 0.667, 1.0, 1.5, 2.0, 3.0).
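The 42 anchors per position follow from the 6 x 7 grid of scales and aspect ratios. The width/height formula below (area scale², height/width ratio r) is one common convention and an assumption on our part; the paper does not spell it out.

```python
def make_anchors(scales, aspect_ratios):
    """Enumerate (width, height) pairs so that each anchor has area
    scale**2 and height/width ratio r."""
    anchors = []
    for s in scales:
        for r in aspect_ratios:
            w = s / r ** 0.5
            h = s * r ** 0.5
            anchors.append((w, h))
    return anchors

scales = [32, 48, 80, 144, 256, 512]
ratios = [0.333, 0.5, 0.667, 1.0, 1.5, 2.0, 3.0]
anchors = make_anchors(scales, ratios)   # 6 scales x 7 ratios = 42 anchors
```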
The classification network takes all 512 channels of 'convf'. For each ROI, a 6x6x512 tensor is generated by ROI pooling and then passed through a sequence of fully-connected layers of "4096 - 4096 - (21+84)" output nodes. (For 20-class object detection, R-CNN produces 21 predicted scores (20 classes + 1 background) and 21x4 = 84 predicted bounding-box values.) Note that our classification network is intentionally composed of fully-connected (FC) layers rather than fully convolutional layers. FC layers can be compressed easily without a significant accuracy drop RenS2015nips and provide a way to balance the computational cost and accuracy of a network.
PVANet is pre-trained on the ImageNet 2012 classification dataset. During pre-training, all images are resized to 256, 384 and 512, and the network inputs are randomly cropped 192x192 patches due to GPU memory limitations. The learning rate is initially set to 0.1 and then decreased by a constant factor whenever a plateau is detected. Pre-training terminates when the learning rate drops below a preset threshold, which usually requires about 2M iterations.
To evaluate the performance of our pre-trained network, we re-train the last three fully-connected layers (fc6, fc7, fc8) with 224x224 input patches. Table 3 shows the accuracy of our network as well as others'. Thanks to its efficient network structure and training schemes, PVANet shows a surprisingly competitive accuracy considering its computational cost; its accuracy is even better than that of GoogLeNet SzegedyC2015cvpr .
Model | top-1 err. (%) | top-5 err. (%) | Cost (GMAC)
For the PASCAL VOC2007 detection task, the network is trained with the union set of MS COCO trainval, VOC2007 trainval and VOC2012 trainval, and then fine-tuned with VOC2007 trainval and VOC2012 trainval. Training images are resized randomly so that the shorter edge of an input is between 416 and 864. All parameters are set as in the original work RenS2015nips except for the number of proposal boxes before non-maximum suppression (NMS), the NMS threshold and the input size. All evaluations were done on an Intel i7-6700K CPU with a single core and an NVIDIA Titan X GPU.
Table 4 shows the object recall and accuracy of our models in different configurations. Thanks to the Inception structure and multi-scale features, our RPN generates initial proposals very accurately: it captures almost 99% of the target objects with only 200 proposals. Since the results imply that more than 200 proposals do not give a notable benefit to object recall or detection accuracy, we fix the number of proposals to 200 in the other experiments. The overall detection accuracy of plain PVANet on VOC2007 reaches 84.4% mean AP. When bounding-box voting GidarisS2015iccv is applied, the performance increases by 0.5% mean AP. Unlike the original work, we do not apply iterative localization, and we penalize object scores if there are fewer than 5 overlapping detections, in order to suppress false alarms. We have found that the voting scheme works well even without iterative localization.
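The voting step can be sketched as below: each kept box is replaced by the score-weighted average of the detections overlapping it. The 0.5 IoU threshold and the score-penalty factor are hypothetical values for illustration; only the 5-detection support count comes from the text.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def vote_box(kept_box, kept_score, boxes, scores,
             overlap=0.5, min_support=5, penalty=0.5):
    """Score-weighted average of the detections overlapping a kept box;
    when fewer than `min_support` detections overlap it, the score is
    penalized (overlap threshold and penalty factor are assumptions)."""
    support = [(b, s) for b, s in zip(boxes, scores)
               if iou(kept_box, b) >= overlap]
    bs = np.array([b for b, _ in support], dtype=float)
    ws = np.array([s for _, s in support], dtype=float)
    voted = (bs * ws[:, None]).sum(axis=0) / ws.sum()
    score = kept_score if len(support) >= min_support else kept_score * penalty
    return voted, score
```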
The classification sub-network in PVANet consists of fully-connected layers, which can be compressed easily without a significant drop in accuracy GirshickR2015iccv . When the fully-connected layers of "4096 - 4096" are compressed into "512 - 4096 - 512 - 4096" and then fine-tuned, the compressed network runs at 31.3 FPS with only a 0.5% accuracy drop.
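A minimal sketch of the truncated-SVD factorization behind this kind of compression (the shapes and rank below are illustrative):

```python
import numpy as np

def compress_fc(W, rank):
    """Factor an n x m FC weight matrix into two thinner layers,
    W ~= A @ B with A: n x rank and B: rank x m, cutting the MACs per
    input vector from n*m to rank*(n + m)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]      # absorb singular values into A
    B = Vt[:rank]
    return A, B
```

For example, factoring a 4096x4096 layer at rank 512 keeps roughly a quarter of the multiply-accumulates; fine-tuning afterwards recovers most of the accuracy, as reported above.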
Model | Proposals | Recall (%) | mAP (%) | Time (ms) | FPS
For the PASCAL VOC2012 detection task, we use the same settings as for VOC2007, except that we fine-tune the network with VOC2007 trainval and test, and VOC2012 trainval.
Table 5 summarizes the comparisons between PVANet+ and some state-of-the-art networks HeK2016cvpr ; RenS2015nips ; Jif2016nips ; liu15ssd from the PASCAL VOC2012 leaderboard (http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=4). Our PVANet+ achieved 4th place on the leaderboard as of the time of submission, with 84.2% mAP, which is significant considering its computational cost. Our network outperforms "Faster R-CNN + ResNet-101" by 0.4% mAP.
It is worth mentioning that the other top performers are based on "ResNet-101" or "VGG-16", which are much heavier than PVANet. Moreover, most of them, except for "SSD512", utilize several time-consuming techniques such as global contexts, multi-scale testing or ensembles. Therefore, we expect that the other top performers are slower than our design by an order of magnitude (or more). Among the networks performing over 80% mAP, PVANet+ is the only one that runs in real time. Taking its accuracy and computational cost into account, PVANet+ is the most efficient network on the leaderboard.
It is also worth comparing ours with “R-FCN” and “SSD512”. They introduced novel detection structures to reduce computational cost without modifying the base networks, while we mainly focus on designing an efficient feature-extraction network. Therefore, their methodologies can be easily integrated with PVANet and further reduce its computational cost.
Model                     | Computation cost (GMAC): feature / proposal / classification / total | Time (ms) | Time vs. PVANet+ | mAP (%)
Faster R-CNN + ResNet-101 | 80.5 / N/A / 125.9 / >206.4                                           | 2240      | 48.6x            | 83.8
Faster R-CNN + VGG-16     | 183.2 / 2.7 / 18.5 / 204.4                                            | 110       | 2.4x             | 75.9
R-FCN + ResNet-101        | 122.9 / 0 / 0 / 122.9                                                 | 133       | 2.9x             | 82.0
Competitors' MACs are estimated from their Caffe prototxt files, which are publicly available. Here, we assume that all competitors take a 1000x600 image and that the number of proposals is 200, except for "SSD512", which takes a 512x512 image. Competitors' runtime performances are from RenS2015nips ; Jif2016nips and the public VOC leaderboard, while we projected the original values under the assumption that an NVIDIA Titan X is 1.5x faster than an NVIDIA K40.
In this paper, we have shown that current networks are highly redundant and that we can design a thin and light network capable of complex vision tasks. Careful adoption and combination of recent technical innovations in deep learning made it possible to design a network that maximizes computational efficiency. Even though the proposed network is designed for object detection, we believe that our design principles are widely applicable to other tasks such as face recognition and semantic analysis.
Our network design is completely independent of network compression and quantization. All kinds of recent compression and quantization techniques are applicable to our network as well, to further increase the actual performance in real applications. As an example, we showed that a simple technique like truncated SVD achieves a notable improvement in runtime performance on our network.
Understanding and improving convolutional neural networks via concatenated rectified linear units. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.