1 Introduction
Convolutional networks have become a dominant approach for visual object recognition [12, 39, 25, 41]. However, as Convolutional Neural Networks (CNNs) become increasingly deep, the vanishing gradient problem grows more severe.
When training a deep neural network, gradients can become very small during backpropagation, making it hard to optimise the parameters in the early layers of the network. As a result, during training the weights of the layers at the end of the network are updated quite rapidly while the early layers are not, leading to poor results. The ReLU activation function and regularisation methods such as dropout were proposed to address this problem.
In this paper we draw inspiration from the above networks [12, 13] and propose a novel network architecture that retains the positive aspects of these approaches while overcoming some of their limitations. Figure 1 illustrates the layout of a single module of our proposed architecture and its unique connectivity.

We show that the ChoiceNet design allows good gradient and information flow through the network while using fewer parameters than other state-of-the-art schemes. We evaluate ChoiceNet on three benchmark datasets (CIFAR10 [21], CIFAR100 [21] and SVHN [34]) for image classification, and also compare its performance with state-of-the-art methods on the CamVid dataset [17]. Our model performs well against existing networks [12, 13] on all three datasets, showing promising results when compared to the current state of the art.


2 ChoiceNet
Consider a single image that is passing through a CNN. The network has $L$ layers, each with a non-linear transformation $H_\ell(\cdot)$, where $\ell$ is the index of the layer. $H_\ell(\cdot)$ can be a composite of operations such as Batch Normalization [15], Pooling [26, 33] or convolution. The output of layer $\ell$ is denoted as $x_\ell$.

ResNet: ResNet [12] uses identity mappings as bypassing paths to improve over a typical CNN [22]. A typical convolutional feed-forward network connects the $\ell^{th}$ layer's output to the $(\ell+1)^{th}$ layer's input, giving rise to the layer transition $x_\ell = H_\ell(x_{\ell-1})$. ResNet [12] adds an identity-mapped connection, also referred to as a skip connection, that bypasses the non-linear transformation:

$$x_\ell = H_\ell(x_{\ell-1}) + x_{\ell-1} \qquad (1)$$
This mechanism allows gradients to flow directly through the identity function, which results in faster training and better error propagation. However, in [13] it was argued that, despite the benefits of skip connections, combining features by summation may disrupt the information flow of the network and thereby degrade its performance.
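To make the mechanism concrete, below is a minimal PyTorch sketch of a residual block implementing Eq. (1); the channel count and the exact composition of $H_\ell$ are illustrative rather than the specific configuration of [12].

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: x_l = H(x_{l-1}) + x_{l-1} (Eq. 1)."""
    def __init__(self, channels):
        super().__init__()
        # H: conv -> BN -> ReLU -> conv -> BN (one common instantiation)
        self.h = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity path lets gradients flow directly to earlier layers.
        return self.relu(self.h(x) + x)

out = ResidualBlock(64)(torch.randn(1, 64, 32, 32))  # shape is preserved
```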
In [44], a wider version of ResNet was proposed, where the authors showed that an increased number of filters in each layer can improve overall performance given sufficient depth. FractalNet [24] also shows comparable improvements on similar benchmark datasets [21].
DenseNet: As an alternative to ResNet, DenseNet proposed a different connectivity scheme, allowing connections from each layer to all of its subsequent layers. Thus the $\ell^{th}$ layer receives the feature maps of all preceding layers, $x_0, \ldots, x_{\ell-1}$, as input:

$$x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}]) \qquad (2)$$

where $[x_0, x_1, \ldots, x_{\ell-1}]$ denotes the concatenation of the feature maps produced by layers $0, \ldots, \ell-1$.
The network maximizes information flow by connecting the convolutional layers channel-wise instead of by skip connections. In this model, the $\ell^{th}$ layer has $\ell$ inputs, consisting of the feature maps of all previous layers, so an $L$-layer network has $\frac{L(L+1)}{2}$ connections. DenseNet requires fewer parameters as there is no need to relearn redundant feature maps, which allows it to compete with ResNet while using fewer parameters.
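For comparison, here is a minimal sketch of a dense layer implementing Eq. (2); the BN-ReLU-conv ordering follows the DenseNet convention, while the growth rate and channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One dense layer: x_l = H([x_0, ..., x_{l-1}]) (Eq. 2)."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        # BN -> ReLU -> conv, following the DenseNet ordering
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, features):
        # 'features' is the list of all preceding feature maps.
        return self.h(torch.cat(features, dim=1))

# Each layer adds 'growth_rate' channels to the running collection:
features = [torch.randn(1, 16, 32, 32)]
for l in range(3):
    layer = DenseLayer(16 + l * 12, growth_rate=12)
    features.append(layer(features))
```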
ChoiceNet: We propose an alternative connectivity that retains the advantages of the above architectures while reducing some of their limitations. Figure 1 illustrates the connectivity layout between the layers of a single module. Each block of ChoiceNet contains three modules, and the full network comprises three blocks with pooling operations in between (see Figure 3).
Figure 2 shows a breakdown of each module. Letters A to G denote distinct pieces of information generated by one forward pass through the module. B is generated by three consecutive convolutional operations, whereas A is the result of the same three convolutional operations additionally connected by a skip connection. Following this pattern, we generate the information represented by letters C, D, F and G. Letter E denotes the special case where no further convolutional operation is applied after the initial $1\times1$ convolution, so it carries all the original information. This information is then concatenated with the others (i.e. A, B, etc.) at the final output.
The final output therefore contains information, with and without skip connections, from filters of size 3, 5 and 7, as well as the original input without any modification. Note that the $1\times1$ convolutional operation at the start acts as a bottleneck to limit computational cost, and all convolutional operations are padded appropriately for the concatenation at the final stage.
Considering $x_0$ as input, our proposed connectivity is given by:

$$y_k = H_k(x_0) + x_0, \qquad z_k = H_k(x_0), \qquad k \in \{3, 5, 7\} \qquad (3)$$

$$x_{out} = [y_3, z_3, y_5, z_5, y_7, z_7, x_0] \qquad (4)$$

where $[\cdot]$ denotes the concatenation of feature maps and $H_k(\cdot)$ is the composite function applied along the path with filter size $k$. The feature maps are first summed and then concatenated, which resembles the characteristics of ResNet and DenseNet respectively.
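The following sketch shows one way this connectivity could be realised in PyTorch under our notation; for brevity each path uses a single composite function $H_k$ rather than the three consecutive convolutions of the full module, and the channel counts are illustrative.

```python
import torch
import torch.nn as nn

def composite(in_ch, out_ch, k):
    """Composite function: k x k conv -> BN -> ReLU, padded to keep sizes equal."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ChoiceModule(nn.Module):
    """Sketch of one ChoiceNet module: for each filter size k in {3, 5, 7},
    emit both H_k(x) + x (skip) and H_k(x) (no skip), then concatenate them
    together with the unmodified input x (letters A-G in Figure 2)."""
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False)  # 1x1 bottleneck
        self.paths = nn.ModuleList(composite(bottleneck, bottleneck, k) for k in (3, 5, 7))

    def forward(self, x):
        x = self.reduce(x)
        outs = []
        for h in self.paths:
            y = h(x)
            outs += [y + x, y]   # with skip (A, C, F) and without (B, D, G)
        outs.append(x)           # the unmodified input (E)
        return torch.cat(outs, dim=1)  # 7 * bottleneck output channels
```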
Composite function: Each composite function consists of a convolution operation, followed by batch normalisation, and ends with a rectified linear unit (ReLU).
Pooling: Pooling is an essential part of convolutional networks since Equations 1 and 2 are not viable when the feature maps are not of equal size. We divide the network into multiple blocks, where each block contains feature maps of the same size. Instead of using either max pooling or average pooling, we use both mechanisms and concatenate their outputs before feeding them to the next layer (see Figure 4).

Bottleneck layers: The use of $1\times1$ convolutional operations (known as bottleneck layers) can reduce computational complexity without hurting the overall performance of a network [29]. We introduce a $1\times1$ convolutional operation at the start of each composite function (see Figures 1 and 3).
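A minimal sketch of the combined pooling transition follows; the $2\times2$ window and stride are assumptions, as the paper does not state them.

```python
import torch
import torch.nn as nn

class DualPool(nn.Module):
    """Transition between blocks: max and average pooling applied in
    parallel, with their outputs concatenated channel-wise (Figure 4)."""
    def __init__(self):
        super().__init__()
        self.maxp = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avgp = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        # Halves the spatial resolution and doubles the channel count.
        return torch.cat([self.maxp(x), self.avgp(x)], dim=1)
```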

Implementation Details: ChoiceNet has three blocks, each containing an equal number of modules. In each Choice operation (see Figure 1), there are three $3\times3$, three $5\times5$ and three $7\times7$ convolutional operations. Each run of three consecutive convolutional operations is bridged by a skip connection (red line in Figure 1). The feature maps are then concatenated so that the outputs both with and without skip connections are included (green and black lines in Figure 1 before "C"). Finally, the original input feature map is also merged (blue line in Figure 1) to produce the final output.
The intuition behind merging the skip (letter A, Figure 2) and non-skip (letter B, Figure 2) outputs is to enable the network to choose between the two options for each filter size. We also merge the original input into this output (letter E, Figure 2) so that the network can choose a suitable depth for optimal performance. To give the network further options, we use both max and average pooling: each pooling layer contains both a max-pool and an avg-pool operation, and their outputs are merged before proceeding to the next layer.
3 Experiments
We evaluate our proposed ChoiceNet architecture on three benchmark datasets (CIFAR10 [21], CIFAR100 [21] and SVHN [34]) and compare it with other state-of-the-art architectures. We also evaluate it on the CamVid semantic segmentation dataset [17].
3.1 Datasets
3.1.1 CIFAR
The CIFAR dataset [21] is a collection of two datasets, CIFAR10 and CIFAR100. Each consists of 50,000 training images and 10,000 test images of $32\times32$ pixels. CIFAR10 contains 10 classes and CIFAR100 contains 100. In our experiments, we hold out 5,000 images from the training set for validation and use the remaining images for training. We choose the model with the highest accuracy on the validation set to evaluate on the test set. We adopt standard data augmentation during training, including horizontal flipping, random cropping, shifting and normalisation using the channel means and standard deviations. These augmentations were widely used in previous work [12, 14, 24, 27, 29, 36, 40, 41]. We also tested our model on the datasets without augmentation. In Table 1, we denote the original datasets as C10 and C100, and the augmented datasets as C10+ and C100+.
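For illustration, a standard torchvision pipeline implementing these augmentations might look as follows; the normalisation statistics shown are commonly used CIFAR10 values, not figures taken from the paper.

```python
import torchvision.transforms as T

# Commonly used CIFAR10 channel statistics (illustrative values).
MEAN, STD = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # random cropping; padding provides the shift
    T.RandomHorizontalFlip(),      # horizontal flipping
    T.ToTensor(),
    T.Normalize(MEAN, STD),        # normalise by channel mean and std
])

test_transform = T.Compose([T.ToTensor(), T.Normalize(MEAN, STD)])
```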
3.1.2 SVHN
The SVHN dataset contains images of Street View House Numbers of $32\times32$ pixels. There are 73,257 images in the training set and 26,032 in the test set, plus an additional 531,131 images intended for training. As in previous work [12, 14, 24, 27, 36], we use all the training data with no augmentation and hold out 10% of the training images as a validation set. We select the model with the highest accuracy on the validation set and report its test error in Table 1.
3.1.3 CamVid
The CamVid dataset [9] consists of 12 classes and has mostly been used for semantic segmentation in previous work [32, 2, 7]. It contains a training set of 367 images, a validation set of 100 images and a test set of 233 images. The challenge is to perform pixel-wise classification of the input image and correctly identify the objects in the scene. Intersection over union (IoU) is the metric commonly used for this task [6, 17, 2].
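For reference, a straightforward NumPy implementation of this metric (mean IoU over the classes present) is sketched below; it follows the standard definition rather than any specific evaluation script.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union between two integer class maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:              # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```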
3.2 Training
3.2.1 Classification
All networks were trained using stochastic gradient descent (SGD) [4]. We avoid other optimisers such as Adam [18] and RMSProp [11] to keep the comparisons as fair and simple as possible. On all three datasets we used a training batch size of 128. We used an initial learning rate for the first 100 epochs, reduced it by a factor of 10 for the next 100 epochs, and by a further factor of 10 for the final 300 epochs. We used weight decay and Nesterov [38] momentum without dampening, and a dropout layer after each ChoiceNet block with a dropout rate of 0.2.
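A sketch of this optimiser setup in PyTorch is given below; the learning-rate, momentum and weight-decay values are placeholders for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for a ChoiceNet instance
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # placeholder initial rate
    momentum=0.9,       # placeholder momentum value
    nesterov=True,      # Nesterov momentum ...
    dampening=0,        # ... without dampening
    weight_decay=1e-4,  # placeholder weight-decay value
)
# Reduce the rate by a factor of 10 after epochs 100 and 200 (500 in total).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 200], gamma=0.1)
```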
3.2.2 Segmentation
For this task we use the training procedure of U-Net [37] (Fig. 5), replacing the conv-blocks of U-Net with Res-Blocks, Dense-Blocks or ChoiceNet modules (Fig. 3), where each block retains the defining properties of its parent network. We use the Adam optimiser with an initial learning rate that was reduced by a factor of 10 every 100 epochs until the network converged, together with weight decay and Nesterov [38] momentum without dampening. For a fair comparison we kept the number of channels of the Res-Blocks and Dense-Blocks unchanged from the original articles [13, 14].
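A corresponding sketch of the segmentation optimiser and schedule follows; again, the numeric values are placeholders rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 12, kernel_size=1)  # stand-in for the U-Net-style model
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,            # placeholder initial rate
    weight_decay=1e-4,  # placeholder weight-decay value
)
# Reduce the rate by a factor of 10 every 100 epochs until convergence.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)
```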

Each experiment was performed 5 times; during training we selected the model with the best validation score and reported its performance on the test set.
3.2.3 Setup
We used PyTorch [35] to implement our models, and trained the DenseNet and ResNet baselines using their PyTorch implementations. We used a machine with 16GB of RAM, an Intel i7 8700 six-core CPU and an Nvidia RTX 2080 Ti GPU with 11GB of VRAM.
Table 1: Error rates (%) on the CIFAR and SVHN datasets ("+" denotes standard data augmentation).

Method | Depth | Params | C10 | C10+ | C100 | C100+ | SVHN |
---|---|---|---|---|---|---|---|
Network in Network | - | - | 10.41 | 8.81 | 35.68 | - | 2.35 |
All-CNN | - | - | 9.08 | 7.25 | - | 33.71 | - |
Deeply Supervised Net | - | - | 9.69 | 7.97 | - | 34.57 | 1.92 |
Highway Network | - | - | - | 7.72 | - | 32.39 | - |
FractalNet | 21 | 38.6M | 10.18 | 5.22 | 35.34 | 23.3 | 2.01 |
with Dropout/Drop-path | 21 | 38.6M | 7.33 | 4.6 | 28.2 | 23.73 | 1.87 |
ResNet | 110 | 1.7M | - | 6.61* | - | - | - |
ResNet (reported by [14] ) | 110 | 1.7M | 13.63 | 6.41 | 44.74 | 27.22 | 2.01 |
ResNet with Stochastic Depth | 110 | 1.7M | 11.66 | 5.23 | 37.8 | 24.58 | 1.75 |
 | 1202 | 10.2M | - | 4.91 | - | - | - |
Wide ResNet | 16 | 11.0M | 6.29 | 4.81 | - | 22.07 | - |
 | 28 | 36.5M | - | 4.17 | - | 20.5 | - |
with Dropout | 16 | 2.7M | - | 4.2* | - | - | 1.63* |
ResNet (pre-activation) | 164 | 1.7M | 10.5* | 5.83* | 35.78* | 24.34* | - |
 | 1001 | 10.2M | 10.4* | 4.59* | 32.89* | 22.75* | - |
DenseNet (k = 12) | 40 | 1.0M | 7.3* | 5.43* | 29.03* | 27.12* | 1.81* |
DenseNet (k = 12) | 100 | 7.0M | 5.81* | 4.5* | 24.97* | 20.84* | 1.76* |
DenseNet (k = 24) | 100 | 27.2M | 5.98* | 4.1* | 24.01* | 20.5* | 1.71* |
DenseNet-BC (k = 12) | 100 | 0.8M | 6.03* | 4.7* | 24.60 | 22.98* | 1.82* |
DenseNet-BC (k = 24) | 250 | 15.3M | 5.16* | 4.9* | 21.55* | 18.42* | 1.7* |
DenseNet-BC (k = 40) | 190 | 25.6M | - | 4.2* | - | 18.88* | - |
ChoiceNet-30 | 30 | 13M | 5.9 | 4.2 | 22.80 | 20.5 | 1.8 |
ChoiceNet-37 | 37 | 19.2M | 4.0 | 3.9 | 21.10 | 18.2 | 1.6 |
ChoiceNet-40 | 40 | 23.4M | 4.9 | 3.7 | 20.05 | 17.5 | 1.5 |
3.3 Result Analysis
3.3.1 CIFAR and SVHN
Accuracy: Table 1 shows that ChoiceNet-40 achieves the highest accuracy on all three datasets, with error rates on C10+ and C100+ of 3.7% and 17.5% respectively, lower than the error rates achieved by the other state-of-the-art models. Our error rates on the original C10 and C100 datasets (without augmentation) are roughly 2% lower than Wide ResNet and 5% lower than pre-activation ResNet. ChoiceNet-30 performs comparably to DenseNet-BC with $k=24$ and depth 250, whereas ChoiceNet-40 outperforms all the other networks.
Parameter efficiency: Table 1 shows that ChoiceNet needs fewer parameters to give similar or better performance compared with other state-of-the-art architectures. For instance, ChoiceNet-30 has only 13 million parameters yet performs comparably to DenseNet-BC ($k=24$, depth 250), which has 15.3 million. Our best results were achieved by ChoiceNet-40 with 23.4 million parameters, compared with DenseNet-BC ($k=40$) with 25.6M, DenseNet ($k=24$) with 27.2M and Wide ResNet (depth 28) with 36.5M parameters.
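The parameter counts in the Params column follow the usual convention of counting trainable parameters, which can be reproduced with a small helper such as:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Number of trainable parameters, as reported in the Params column."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```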
Over-fitting: Deep learning architectures can often be prone to over-fitting; however, since ChoiceNet requires a smaller number of parameters, it is less likely to over-fit the training datasets. Its performance on the non-augmented datasets appears to support this claim.
Exploding gradients: While training ChoiceNet we observed that it occasionally suffers from an exploding gradient problem. ResNet and DenseNet were both trained using stochastic gradient descent (SGD) with an initial learning rate that was reduced by a factor of 10 every 100 epochs. However, we had to start training our network with a smaller learning rate, because setting it any higher caused gradients to explode, and we had to reduce the learning rate by a factor of 10 after every 50 epochs instead of 100 to prevent the problem from recurring. The problem of exploding gradients is easier to handle than that of vanishing gradients: using a smaller learning rate at the start, together with L2 regularisation and dropout layers, addressed the problem.
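One simple way to detect this failure mode during training is to log the global gradient norm after each backward pass; a sketch (not the authors' exact tooling) is shown below. Logging this value each step makes an impending explosion visible before the loss becomes NaN.

```python
import torch

def global_grad_norm(model):
    """L2 norm of all parameter gradients after loss.backward()."""
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().pow(2).sum().item()
    return total ** 0.5
```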
Table 2: Mean IoU (m_IoU) scores on the CamVid dataset (* denotes blocks trained within our U-Net scheme).

Method | m_IoU |
---|---|
Wu et al. [47] | 80.6 |
Wang et al. [46] | 80.1 |
Ke et al. [16] | 79.1 |
Kong et al. [19] | 78.2 |
Wang et al. [45] | 77.6 |
Lin et al. [28] | 73.6 |
*ChoiceNet-block (13M) | 73.5 |
*Res-Blocks (27M) | 70.6 |
Chen et al. [6] | 70.4 |
Mehta et al. [31] | 70.2 |
Fourure et al. [10] | 69.8 |
*Dense-blocks (25M) | 69.2 |
Lo et al. [30] | 67.3 |
Yu et al. [48] | 67.1 |
Krešo et al. [20] | 66.3 |
Chen et al. [5] | 63.1 |
Berman et al. [3] | 63.1 |
Arnab et al. [1] | 62.5 |
Huang et al. [31] | 60.3 |
Figure 7: Qualitative results on CamVid (columns: Ground Truth, DenseNet, ResNet, ChoiceNet).
3.3.2 CamVid
We tested ChoiceNet on the CamVid dataset and compared it with specialist segmentation networks and other state-of-the-art networks [47, 46, 16, 19, 45, 28, 6, 31, 10]. Mean IoU (m_IoU) scores are shown in Table 2.
Although our network did not outperform the specialist state-of-the-art segmentation networks, it did outperform DenseNet and ResNet both in m_IoU score and in parameter efficiency: with 13 million parameters, our ChoiceNet performed better than networks almost twice its size.
In Figure 7 we display some of the predictions from ResNet, DenseNet and our model against ground-truth data from the CamVid dataset. The qualitative results show that our model is able to segment smaller classes with good precision.
4 Discussion
Model compactness: As a result of the use of different filter sizes with feature concatenation and skip connections at every stage, feature maps learned by any layer in a block can be accessed by all subsequent layers. This extensive feature reuse throughout the network leads to a compact model.
In Figure 6 we show a depth versus test error graph, which demonstrates the compactness of ChoiceNet relative to other state-of-the-art architectures. When training the different networks we kept the environment the same, varying only the depth, and later smoothed the curves for better visualisation. ChoiceNet's curve always stays at the very bottom, which means a better error rate with fewer parameters/layers.
Feature reuse: ChoiceNet uses different filter sizes with skip connections and channel concatenation in each module (see Figure 1). To gain a deeper, visual understanding of its operation, we took the weights of the first block (in ChoiceNet-37) and normalised them to the range $[0, 1]$. After normalising the weights we mapped them into two groups, showing weights under 0.4 as white and over 0.4 as coloured (see Table 3), on the assumption that weights below 0.4 have an insignificant effect on overall performance. The figure shows that after the very first convolution on the raw input, the convolution operations with filter size 7 have more effect than those of size 3 and 5. In the second module all the convolution weights were under 0.4, which suggests that the model used either the feature maps of the earlier output via concatenation (red line between filters 5 and 7 of the middle module) or the skip connection (red line above filter 3 with the highlighted "+" sign). On the one hand this indicates that the skip connections and channel concatenation, or both, are working as they were supposed to, but it also means that the network still has many redundant parameters. In the third module, filters 3 and 5 had weights over 0.4, indicating that they probably contributed to the network. We suspect that the selection of filter size 7 in the first module, and of 3 and 5 in the third, echoes the observation from AlexNet [23] that larger filters work better at the beginning of a network and smaller filters work better in the later stages.
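A sketch of this thresholding procedure, under our reading of the normalisation step, is shown below; aggregating each filter by its mean normalised magnitude is an assumption.

```python
import torch
import torch.nn as nn

def significant_filters(conv: nn.Conv2d, threshold: float = 0.4):
    """Normalise a conv layer's weights to [0, 1] and flag the output
    filters whose mean normalised magnitude exceeds the threshold
    (the 'coloured' filters in Table 3)."""
    w = conv.weight.detach().abs()
    w = (w - w.min()) / (w.max() - w.min() + 1e-8)  # normalise to [0, 1]
    per_filter = w.mean(dim=(1, 2, 3))              # one score per filter
    return per_filter > threshold
```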
In Table 2 we show the mean intersection over union (m_IoU) on the CamVid dataset for some current state-of-the-art models. We used the U-Net training scheme and replaced the basic convolutional operations with ResBlocks, DenseBlocks and ChoiceNet modules (see Figure 1). While our network has fewer parameters than the ResBlock and DenseBlock variants, it achieved a higher score. Note that even though our model achieved a good m_IoU score, it is not as good as some of the network architectures designed specifically for segmentation tasks [47, 46, 16, 19, 45]. Nevertheless, it performed well compared with both the ResBlock and DenseBlock variants as well as some other general-purpose convolutional neural networks [31].
Our intuition is that the extra connections and paths in our method enable the network to learn from a large variety of feature maps, and also to back-propagate errors more efficiently (see also [12, 13]). We found that, due to all the connections, the network can be prone to exploding gradients and therefore needs a small initial learning rate. We also found by grid search that the network shows peak performance at depths between 30 and 40 layers, with further increases in depth appearing to have little effect. We suspect that ChoiceNet plateaus at a depth of 30 to 40, although this could be a local minimum, as we could not train models deeper than 60 layers due to resource limitations.
Figure 6: Depth versus test error for ChoiceNet and competing architectures.
5 Conclusion
In this paper we introduced ChoiceNet, a powerful yet lightweight and efficient network that encodes spatial information from images well by combining several elements: skip connections, multiple filter sizes, dense connectivity, and both max and average pooling. ChoiceNet is a general-purpose network with good generalisation ability and can be used across a wide range of tasks, including classification and image segmentation. It shows promising performance when compared with state-of-the-art techniques across different tasks such as semantic segmentation and object classification, while being more efficient.
References
- [1] A. Arnab, S. Jayasumana, S. Zheng, and P. H. S. Torr. Higher order conditional random fields in deep neural networks. In European Conference on Computer Vision (ECCV), 2016.
- [2] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, 2017.
- [3] M. Berman, A. Rannen Triki, and M. B. Blaschko. The lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4413–4421, 2018.
- [4] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer, 2010.
- [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
- [6] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2018.
- [7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213–3223, 2016.
- [8] G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for lvcsr using rectified linear units and dropout. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8609–8613. IEEE, 2013.
- [9] J. Fauqueur, G. Brostow, and R. Cipolla. Assisted video object labeling by joint tracking of regions and keypoints. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–7. IEEE, 2007.
- [10] D. Fourure, R. Emonet, E. Fromont, D. Muselet, A. Tremeau, and C. Wolf. Residual conv-deconv grid network for semantic segmentation. arXiv preprint arXiv:1707.07958, 2017.
- [11] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
- [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- [13] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, volume 1, page 3, 2017.
- [14] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, pages 646–661. Springer, 2016.
- [15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
- [16] T.-W. Ke, J.-J. Hwang, Z. Liu, and S. X. Yu. Adaptive affinity fields for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 587–602, 2018.
- [17] A. Kendall, V. Badrinarayanan, and R. Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
- [18] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [19] S. Kong and C. C. Fowlkes. Recurrent scene parsing with perspective understanding in the loop. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 956–965, 2018.
- [20] I. Krešo, D. Čaušević, J. Krapac, and S. Šegvić. Convolutional scale invariance for semantic segmentation. In German Conference on Pattern Recognition. Springer, 2016.
- [21] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
- [22] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
- [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
- [24] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
- [25] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
- [26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- [27] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Artificial Intelligence and Statistics, pages 562–570, 2015.
- [28] G. Lin, A. Milan, C. Shen, and I. Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1925–1934, 2017.
- [29] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
- [30] S.-Y. Lo, H.-M. Hang, S.-W. Chan, and J.-J. Lin. Efficient dense modules of asymmetric convolution for real-time semantic segmentation. arXiv preprint arXiv:1809.06323, 2018.
- [31] S. Mehta, M. Rastegari, L. Shapiro, and H. Hajishirzi. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network. arXiv preprint arXiv:1811.11431, 2018.
- [32] E. Mulalić, N. Grujić, V. Ilić, M. Marković, et al. Object-level grouping and identification for tracking objects in a video, Feb. 20 2018. US Patent 9,898,677.
- [33] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
- [34] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
- [35] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
- [36] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
- [37] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
- [38] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
- [39] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- [40] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
- [41] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In Advances in neural information processing systems, pages 2377–2385, 2015.
- [42] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, volume 4, page 12, 2017.
- [43] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
- [44] S. Targ, D. Almeida, and K. Lyman. Resnet in resnet: generalizing residual architectures. arXiv preprint arXiv:1603.08029, 2016.
- [45] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. arXiv preprint arXiv:1702.08502, 2017.
- [46] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1451–1460. IEEE, 2018.
- [47] Z. Wu, C. Shen, and A. Van Den Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. Pattern Recognition, 90:119–133, 2019.
- [48] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.