1 Introduction
Convolutional neural networks (ConvNets) have been widely used in image classification, detection, segmentation, and many other applications. A recent trend in ConvNet design is to improve both accuracy and efficiency. Following this trend, depthwise convolutions have become increasingly popular in modern ConvNets, such as MobileNets [Howard et al.(2017), Sandler et al.(2018)], ShuffleNets [Zhang et al.(2018), Ma et al.(2018)], NASNet [Zoph et al.(2018)], AmoebaNet [Real et al.(2019)], MnasNet [Tan et al.(2019)], and EfficientNet [Tan and Le(2019)]. Unlike regular convolution, a depthwise convolutional kernel is applied to each channel separately, reducing the computational cost by a factor of $C$, where $C$ is the number of channels. When designing ConvNets with depthwise convolutional kernels, an important but often overlooked factor is kernel size. Although the conventional practice is to simply use 3x3 kernels [Howard et al.(2017), Sandler et al.(2018), Zhang et al.(2018), Ma et al.(2018), Chollet(2017), Zoph et al.(2018)], recent research has shown that larger kernel sizes, such as 5x5 [Tan et al.(2019)] and 7x7 [Cai et al.(2019)], can potentially improve model accuracy and efficiency.
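As a quick sanity check of that factor (our arithmetic, not from the paper): for an input with $C$ channels and $C$ output channels, a regular $k \times k$ convolution costs $k^2 \cdot C \cdot C$ multiply-adds per output position, while a depthwise convolution costs only $k^2 \cdot C$, hence the factor-of-$C$ saving:

$$\frac{\text{regular}}{\text{depthwise}} = \frac{k^2 C^2}{k^2 C} = C$$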
In this paper, we revisit the fundamental question: do larger kernels always achieve higher accuracy? Since first observed in AlexNet [Krizhevsky et al.(2012)], it has been well known that each convolutional kernel is responsible for capturing a local image pattern, which could be edges in early stages and objects in later stages. Large kernels tend to capture high-resolution patterns with more details, at the cost of more parameters and computation, but do they always improve accuracy? To answer this question, we systematically study the impact of kernel size on MobileNets [Howard et al.(2017), Sandler et al.(2018)]. Figure 1 shows the results. As expected, larger kernel sizes significantly increase the model size with more parameters; however, model accuracy first goes up from 3x3 to 7x7, then drops quickly once the kernel size exceeds 9x9, suggesting very large kernels can hurt both accuracy and efficiency. In fact, this observation aligns with the very first intuition of ConvNets: in the extreme case where the kernel size equals the input resolution, a ConvNet simply becomes a fully-connected network, which is known to be inferior [7]. This study suggests the limitation of a single kernel size: we need both large kernels to capture high-resolution patterns and small kernels to capture low-resolution patterns for better model accuracy and efficiency.
(Figure 1: model accuracy vs. depthwise kernel size. (a) MobileNetV1; (b) MobileNetV2.)
Based on this observation, we propose mixed depthwise convolution (MixConv), which mixes different kernel sizes in a single convolution op, so that it can easily capture different patterns at various resolutions. Figure 2 shows the structure of MixConv, which partitions channels into multiple groups and applies a different kernel size to each group of channels. We show that MixConv is a simple drop-in replacement for vanilla depthwise convolution, yet it significantly improves MobileNet accuracy and efficiency on both ImageNet classification and COCO object detection.
To further demonstrate the effectiveness of MixConv, we leverage neural architecture search [Tan et al.(2019)] to develop a new family of models named MixNets. Experimental results show our MixNet models significantly outperform all previous mobile ConvNets, such as ShuffleNets [Ma et al.(2018), Zhang et al.(2018)], MnasNet [Tan et al.(2019)], FBNet [Wu et al.(2019)], and ProxylessNAS [Cai et al.(2019)]. In particular, our medium-size MixNet-M achieves the same 77.0% ImageNet top-1 accuracy as ResNet-152 [He et al.(2016)], while using 12x fewer parameters and 31x fewer FLOPS.
2 Related Work
Efficient ConvNets:
In recent years, significant effort has been spent on improving ConvNet efficiency, from more efficient convolutional operations [Iandola et al.(2016), Howard et al.(2017), Chollet(2017)], to bottleneck layers [Sandler et al.(2018), Xie et al.(2017)], to more efficient architectures [Tan et al.(2019), Wu et al.(2019), Cai et al.(2019)]. In particular, depthwise convolution has become increasingly popular in mobile-size ConvNets, such as MobileNets [Howard et al.(2017), Sandler et al.(2018)], ShuffleNets [Zhang et al.(2018), Ma et al.(2018)], MnasNet [Tan et al.(2019)], and beyond [Chollet(2017), Zoph et al.(2018), Real et al.(2019)]. Recently, EfficientNet [Tan and Le(2019)] even achieves both state-of-the-art ImageNet accuracy and tenfold better efficiency by extensively using depthwise and pointwise convolutions. Unlike regular convolution, depthwise convolution applies a separate convolutional kernel to each channel, reducing both parameter count and computational cost. Our proposed MixConv generalizes the concept of depthwise convolution, and can be considered a drop-in replacement for vanilla depthwise convolution.
MultiScale Networks and Features:
Our idea shares many similarities with prior multi-branch ConvNets, such as Inceptions [Szegedy et al.(2016), Szegedy et al.(2017b)], Inception-ResNet [Szegedy et al.(2017a)], ResNeXt [Xie et al.(2017)], and NASNet [Zoph et al.(2018)]. By using multiple branches in each layer, these ConvNets are able to utilize different operations (such as convolution and pooling) in a single layer. Similarly, there is much prior work on combining multi-scale feature maps from different layers, such as DenseNet [Huang et al.(2017b), Huang et al.(2017a)] and feature pyramid networks [Lin et al.(2017)]. However, unlike these prior works, which mostly focus on changing the macro-architecture of neural networks in order to utilize different convolutional ops, our work aims to design a drop-in replacement for a single depthwise convolution, with the goal of easily utilizing different kernel sizes without changing the network structure.
Neural Architecture Search:
Recently, neural architecture search [Zoph and Le(2017), Zoph et al.(2018), Liu et al.(2018), Liu et al.(2019), Tan et al.(2019)] has achieved better performance than handcrafted models by automating the design process and learning better design choices. Since our MixConv is a flexible operation with many possible design choices, we employ existing architecture search methods similar to [Tan et al.(2019), Wu et al.(2019), Cai et al.(2019)] to develop a new family of MixNets by adding our MixConv into the search space.
3 MixConv
The main idea of MixConv is to mix up multiple kernels with different sizes in a single depthwise convolution op, such that it can easily capture different types of patterns from input images. In this section, we will discuss the feature map and design choices for MixConv.
3.1 MixConv Feature Map
We start from vanilla depthwise convolution. Let $X^{(h,w,c)}$ denote the input tensor with shape $(h, w, c)$, where $h$ is the spatial height, $w$ is the spatial width, and $c$ is the channel size. Let $W^{(k,k,c,m)}$ denote a depthwise convolutional kernel, where $k \times k$ is the kernel size, $c$ is the input channel size, and $m$ is the channel multiplier. For simplicity, here we assume the kernel width and height are the same $k$, but it is straightforward to generalize to cases where kernel width and height differ. The output tensor $Y^{(h,w,c \cdot m)}$ has the same spatial shape $(h, w)$ and multiplied output channel size $m \cdot c$, with each output feature map value calculated as:

$$Y_{x,y,z} = \sum_{-\frac{k}{2} \leq i \leq \frac{k}{2}, \; -\frac{k}{2} \leq j \leq \frac{k}{2}} X_{x+i,\,y+j,\,z/m} \cdot W_{i,j,z}, \qquad \forall z = 1, \dots, m \cdot c \tag{1}$$
Unlike vanilla depthwise convolution, MixConv partitions the channels into groups and applies a different kernel size to each group, as shown in Figure 2. More concretely, the input tensor is partitioned into $g$ groups of virtual tensors $\langle \hat{X}^{(h,w,c_1)}, \dots, \hat{X}^{(h,w,c_g)} \rangle$, where all virtual tensors have the same spatial height $h$ and width $w$, and their total channel size is equal to that of the original input tensor: $c_1 + c_2 + \dots + c_g = c$. Similarly, we also partition the convolutional kernel into $g$ groups of virtual kernels $\langle \hat{W}^{(k_1,k_1,c_1,m)}, \dots, \hat{W}^{(k_g,k_g,c_g,m)} \rangle$. For the $t$-th group of virtual input tensor and kernel, the corresponding virtual output is calculated as:

$$\hat{Y}^{t}_{x,y,z} = \sum_{-\frac{k_t}{2} \leq i \leq \frac{k_t}{2}, \; -\frac{k_t}{2} \leq j \leq \frac{k_t}{2}} \hat{X}^{t}_{x+i,\,y+j,\,z/m} \cdot \hat{W}^{t}_{i,j,z}, \qquad \forall z = 1, \dots, m \cdot c_t \tag{2}$$

The final output tensor is a concatenation of all virtual output tensors $\langle \hat{Y}^{1}, \dots, \hat{Y}^{g} \rangle$:

$$Y_{x,y,z_o} = \operatorname{Concat}\!\left( \hat{Y}^{1}_{x,y,z_1}, \dots, \hat{Y}^{g}_{x,y,z_g} \right) \tag{3}$$

where $z_o = z_1 + \dots + z_g = m \cdot c$ is the final output channel size.
Figure 3 shows a simple demo of the TensorFlow Python implementation of MixConv. On certain platforms, MixConv could be implemented as a single op and optimized with group convolution. Nevertheless, as shown in the figure, MixConv can be considered a simple drop-in replacement for vanilla depthwise convolution.
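Since the code from Figure 3 is not reproduced in this text, below is a minimal sketch of what such a TensorFlow implementation might look like; the function name mix_conv and the remainder handling in the equal channel split are our assumptions, not the paper's exact code:

```python
import tensorflow as tf

def mix_conv(x, kernel_sizes, stride=1):
    """Sketch of MixConv: split channels into groups, apply a different
    depthwise kernel size to each group, then concatenate the results."""
    groups = len(kernel_sizes)
    channels = int(x.shape[-1])
    # Equal partition: each group gets roughly the same number of channels.
    splits = [channels // groups] * groups
    splits[0] += channels - sum(splits)  # fold the remainder into the first group
    xs = tf.split(x, splits, axis=-1)
    outputs = []
    for x_i, k in zip(xs, kernel_sizes):
        # One depthwise convolution per group, with its own kernel size.
        conv = tf.keras.layers.DepthwiseConv2D(
            kernel_size=k, strides=stride, padding="same")
        outputs.append(conv(x_i))
    return tf.concat(outputs, axis=-1)
```

For example, mix_conv(x, [3, 5, 7]) would correspond to the MixConv357 variant evaluated in Table 1, while mix_conv(x, [3]) degenerates to a vanilla 3x3 depthwise convolution.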
3.2 MixConv Design Choices
MixConv is a flexible convolutional op with several design choices:
Group Size $g$:
It determines how many different types of kernels to use for a single input tensor. In the extreme case of $g = 1$, MixConv becomes equivalent to a vanilla depthwise convolution. In our experiments, we find $g = 4$ is generally a safe choice for MobileNets, but with the help of neural architecture search, we find that a variety of group sizes from 1 to 5 can further benefit model efficiency and accuracy.
Kernel Size Per Group:
In theory, each group can have an arbitrary kernel size. However, if two groups have the same kernel size, it is equivalent to merging them into a single group, so we restrict each group to have a different kernel size. Furthermore, since small kernels generally have fewer parameters and FLOPS, we restrict kernel sizes to always start from 3x3 and increase monotonically by 2 per group; in other words, the $i$-th group always has kernel size $(2i+1) \times (2i+1)$. For example, a 4-group MixConv always uses kernel sizes {3x3, 5x5, 7x7, 9x9}. With this restriction, the kernel size of each group is predefined for any group size $g$, thus simplifying our design process.
Channel Size Per Group:
In this paper, we mainly consider two channel partition methods: (1) equal partition, where each group has the same number of filters; (2) exponential partition, where the $i$-th group has about a $2^{-i}$ portion of the total channels. For example, given a 4-group MixConv with total filter size 32, the equal partition divides the channels into (8, 8, 8, 8), while the exponential partition divides them into (16, 8, 4, 4).
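A small sketch of the two partition schemes (the helper names are ours, and the remainder handling is our assumption, chosen to reproduce the (16, 8, 4, 4) example above):

```python
def equal_partition(total_channels, groups):
    # Each group gets the same share; fold any remainder into the first group.
    sizes = [total_channels // groups] * groups
    sizes[0] += total_channels - sum(sizes)
    return sizes

def exponential_partition(total_channels, groups):
    # The i-th group gets about a 2^-i portion; the last group takes the rest.
    sizes = [total_channels >> (i + 1) for i in range(groups - 1)]
    sizes.append(total_channels - sum(sizes))
    return sizes

print(equal_partition(32, 4))        # [8, 8, 8, 8]
print(exponential_partition(32, 4))  # [16, 8, 4, 4]
```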
Dilated Convolution:
Since large kernels need more parameters and computations, an alternative is to use dilated convolution [Yu and Koltun(2016)], which can increase the receptive field without extra parameters or computations. However, as shown in our ablation study in Section 3.4, dilated convolutions usually have inferior accuracy to large kernels.
(Figure 4: MixConv performance on ImageNet classification. (a) MobileNetV1; (b) MobileNetV2.)
3.3 MixConv Performance on MobileNets
Since MixConv is a simple drop-in replacement for vanilla depthwise convolution, we evaluate its performance on classification and detection tasks with existing MobileNets [Howard et al.(2017), Sandler et al.(2018)].
ImageNet Classification Performance:
Figure 4 shows the performance of MixConv on ImageNet classification [Russakovsky et al.(2015)]. Based on MobileNet V1 and V2, we replace all original 3x3 depthwise convolutional kernels with larger kernels or MixConv kernels. Notably, MixConv always starts with a 3x3 kernel and then monotonically increases the kernel size by 2 per group, so the rightmost MixConv point in the figure has six groups of filters with kernel sizes {3x3, 5x5, 7x7, 9x9, 11x11, 13x13}. From this figure, we make two observations: (1) MixConv generally uses far fewer parameters and FLOPS, yet its accuracy is similar to or better than vanilla depthwise convolution, suggesting that mixing different kernels can improve both efficiency and accuracy; (2) in contrast to vanilla depthwise convolution, which suffers from accuracy degradation with larger kernels (Figure 1), MixConv is much less sensitive to very large kernels, suggesting that mixing different kernels yields more stable accuracy at large kernel sizes.
COCO Detection Performance:
We also evaluate MixConv on COCO object detection based on MobileNets. Table 1 shows the performance comparison: MixConv consistently achieves better efficiency and accuracy than vanilla depthwise convolution. In particular, compared to vanilla depthwise7x7, our MixConv357 (with 3 groups of kernels {3x3, 5x5, 7x7}) achieves 0.6% higher mAP on MobileNetV1 and 1.1% higher mAP on MobileNetV2, while using fewer parameters and FLOPS.
Table 1: COCO object detection performance with MobileNetV1 [Howard et al.(2017)] and MobileNetV2 [Sandler et al.(2018)] backbones.

| Network | MobileNetV1: #Params | #FLOPS | mAP | MobileNetV2: #Params | #FLOPS | mAP |
|---|---|---|---|---|---|---|
| baseline3x3 | 5.12M | 1.31B | 21.7 | 4.35M | 0.79B | 21.5 |
| depthwise5x5 | 5.20M | 1.38B | 22.3 | 4.47M | 0.87B | 22.1 |
| MixConv35 (ours) | 5.16M | 1.35B | 22.2 | 4.41M | 0.83B | 22.1 |
| depthwise7x7 | 5.32M | 1.47B | 21.8 | 4.64M | 0.98B | 21.2 |
| MixConv357 (ours) | 5.22M | 1.39B | 22.4 | 4.49M | 0.88B | 22.3 |
3.4 Ablation Study
To better understand MixConv, we provide a few ablation studies:
MixConv for Single Layer:
In addition to applying MixConv to the whole network, Figure 5 shows the per-layer performance on MobileNetV2. We replace one of the 15 layers with either (1) vanilla DepthwiseConv9x9 with kernel size 9x9, or (2) MixConv3579 with 4 groups of kernels {3x3, 5x5, 7x7, 9x9}. As shown in the figure, a large kernel size has different impact on different layers: for most layers the accuracy does not change much, but for certain layers with stride 2, a larger kernel can significantly improve the accuracy. Notably, although MixConv3579 uses only half the parameters and FLOPS of vanilla DepthwiseConv9x9, it achieves similar or slightly better performance for most layers.

(Figure 6: channel partition and dilated convolution comparison. (a) MobileNetV1; (b) MobileNetV2.)
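A quick back-of-the-envelope check of the "half" claim (our arithmetic, assuming an equal channel partition): per channel, MixConv3579 averages $(3^2 + 5^2 + 7^2 + 9^2)/4 = (9 + 25 + 49 + 81)/4 = 41$ weights, versus $9^2 = 81$ weights for DepthwiseConv9x9, i.e. roughly half.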
Channel Partition Methods:
Figure 6 compares the two channel partition methods: equal partition (MixConv) and exponential partition (MixConv+exp). As expected, for the same kernel sizes, exponential partition requires fewer parameters and FLOPS by assigning more channels to smaller kernels. Our empirical study shows exponential channel partition performs only slightly better than equal partition on MobileNetV1, and there is no clear winner when considering both MobileNet V1 and V2. A possible limitation of exponential partition is that large kernels may not have enough channels to capture high-resolution patterns.
Dilated Convolution:
Figure 6 also compares the performance of dilated convolution (denoted as MixConv+dilated). For a kernel of size $k \times k$, it uses a 3x3 kernel with dilation rate $(k-1)/2$: for example, a 9x9 kernel is replaced by a 3x3 kernel with dilation rate 4. Notably, since the TensorFlow dilated convolution is not compatible with stride 2, we only use dilated convolutions for a layer if its stride is 1. As shown in the figure, dilated convolution has reasonable performance for small kernels, but the accuracy drops quickly for large kernels. Our hypothesis is that when the dilation rate is large, a dilated convolution skips a lot of local information, which hurts accuracy.
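A minimal sketch of this substitution (our reading of the ablation setup, not the paper's exact code):

```python
import tensorflow as tf

def dilated_depthwise(k, stride=1):
    """Emulate a k x k depthwise kernel with a 3x3 kernel dilated by (k-1)//2,
    matching the k x k receptive field with only 3x3 = 9 weights per channel."""
    assert stride == 1, "dilated depthwise conv is only used for stride-1 layers"
    rate = (k - 1) // 2  # e.g. k = 9 -> dilation rate 4
    return tf.keras.layers.DepthwiseConv2D(
        kernel_size=3, strides=1, dilation_rate=rate, padding="same")
```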
4 MixNet
To further demonstrate the effectiveness of MixConv, we leverage recent progress in neural architecture search to develop a new family of MixConv-based models, named MixNets.
4.1 Architecture Search
Our neural architecture search settings are similar to recent MnasNet [Tan et al.(2019)], FBNet [Wu et al.(2019)], and ProxylessNAS [Cai et al.(2019)], which use MobileNetV2 [Sandler et al.(2018)] as the baseline network structure and search for the best kernel size, expansion ratio, channel size, and other design choices. However, unlike these prior works, which use vanilla depthwise convolution as the basic convolutional op, we adopt our proposed MixConv as search options. Specifically, we have five MixConv candidates with group size $g = 1, \dots, 5$:

- 3x3: MixConv with one group of filters ($g = 1$) with kernel size 3x3.
- …
- 3x3, 5x5, 7x7, 9x9, 11x11: MixConv with five groups of filters ($g = 5$) with kernel sizes {3x3, 5x5, 7x7, 9x9, 11x11}. Each group has roughly the same number of channels.

Table 2: MixNet performance on ImageNet.

| Model | Type | #Parameters | #FLOPS | Top-1 (%) | Top-5 (%) |
|---|---|---|---|---|---|
| MobileNetV1 [Howard et al.(2017)] | manual | 4.2M | 575M | 70.6 | 89.5 |
| MobileNetV2 [Sandler et al.(2018)] | manual | 3.4M | 300M | 72.0 | 91.0 |
| MobileNetV2 (1.4x) | manual | 6.9M | 585M | 74.7 | 92.5 |
| ShuffleNetV2 [Ma et al.(2018)] | manual | - | 299M | 72.6 | - |
| ShuffleNetV2 (2x) | manual | - | 597M | 75.4 | - |
| ResNet-152 [He et al.(2016)] | manual | 60M | 11B | 77.0 | 93.3 |
| NASNet-A [Zoph et al.(2018)] | auto | 5.3M | 564M | 74.0 | 91.3 |
| DARTS [Liu et al.(2019)] | auto | 4.9M | 595M | 73.1 | 91.0 |
| MnasNet-A1 [Tan et al.(2019)] | auto | 3.9M | 312M | 75.2 | 92.5 |
| MnasNet-A2 | auto | 4.8M | 340M | 75.6 | 92.7 |
| FBNet-A [Wu et al.(2019)] | auto | 4.3M | 249M | 73.0 | - |
| FBNet-C | auto | 5.5M | 375M | 74.9 | - |
| ProxylessNAS [Cai et al.(2019)] | auto | 4.1M | 320M | 74.6 | 92.2 |
| ProxylessNAS (1.4x) | auto | 6.9M | 581M | 76.7 | 93.3 |
| MobileNetV3-Large [Howard et al.(2019)] | combined | 5.4M | 217M | 75.2 | - |
| MobileNetV3-Large (1.25x) | combined | 7.5M | 356M | 76.6 | - |
| MixNet-S | auto | 4.1M | 256M | 75.8 | 92.8 |
| MixNet-M | auto | 5.0M | 360M | 77.0 | 93.3 |
| MixNet-L | auto | 7.3M | 565M | 78.9 | 94.2 |
To simplify the search process, we do not include exponential channel partition or dilated convolutions in our search space, but it is straightforward to integrate them in future work.
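For concreteness, the five candidates above can be enumerated compactly; a sketch (the list encoding is our assumption, not the paper's search-space format):

```python
# Candidate g uses kernel sizes 3, 5, ..., 2g+1 with equal channel partition.
MIXCONV_CANDIDATES = [[2 * i + 3 for i in range(g)] for g in range(1, 6)]
# -> [[3], [3, 5], [3, 5, 7], [3, 5, 7, 9], [3, 5, 7, 9, 11]]
```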
Similar to recent neural architecture search approaches [Tan et al.(2019), Cai et al.(2019), Wu et al.(2019)], we directly search on the ImageNet train set, and then pick a few top-performing models from the search to verify their accuracy on the ImageNet validation set and on transfer learning datasets.
4.2 MixNet Performance on ImageNet
Table 2 shows the ImageNet performance of MixNets. Here we obtain MixNet-S and MixNet-M from neural architecture search, and scale up MixNet-M with depth multiplier 1.3 to obtain MixNet-L. All models are trained with the same settings as MnasNet [Tan et al.(2019)].
In general, our MixNets outperform all the latest mobile ConvNets. Compared to the hand-crafted models, our MixNets improve top-1 accuracy by 4.2% over MobileNetV2 [Sandler et al.(2018)] and by 3.5% over ShuffleNetV2 [Ma et al.(2018)] under the same FLOPS constraint. Compared to the latest automated models, our MixNets achieve better accuracy than MnasNet (+1.3%), FBNets (+2.0%), and ProxylessNAS (+2.2%) under similar FLOPS constraints. MixNets also achieve similar performance (using fewer parameters) to the latest MobileNetV3 [Howard et al.(2019)], which was developed concurrently with our work and includes several manual optimizations in addition to architecture search. In particular, our MixNet-L achieves a new state-of-the-art 78.9% top-1 accuracy under the typical mobile FLOPS constraint (<600M). Compared to the widely used ResNets [He et al.(2016)], our MixNet-M achieves the same 77.0% top-1 accuracy as ResNet-152, while using 12x fewer parameters and 31x fewer FLOPS.
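(Checking that last claim against Table 2 with our own arithmetic: 60M / 5.0M = 12x fewer parameters, and 11B / 360M ≈ 31x fewer FLOPS.)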
Figure 7 visualizes the ImageNet performance comparison. We observe that recent progress in neural architecture search [Tan et al.(2019), Wu et al.(2019), Cai et al.(2019)] has significantly improved model performance over previous hand-crafted mobile ConvNets [Sandler et al.(2018), Ma et al.(2018)]. By introducing a new type of efficient MixConv, we can further improve model accuracy and efficiency using the same neural architecture search techniques.
4.3 MixNet Architectures
To understand why our MixNets achieve better accuracy and efficiency, Figure 8 illustrates the network architectures of MixNet-S and MixNet-M from Table 2. In general, both use a variety of MixConv ops with different kernel sizes throughout the network: small kernels are more common in early stages to save computational cost, while large kernels are more common in later stages for better accuracy. We also observe that the bigger MixNet-M tends to use more large kernels and more layers to pursue higher accuracy, at the cost of more parameters and FLOPS. Unlike vanilla depthwise convolutions, which suffer from serious accuracy degradation at large kernel sizes (Figure 1), our MixNets are capable of utilizing very large kernels, such as 9x9 and 11x11, to capture high-resolution patterns from input images without hurting model accuracy or efficiency.
4.4 Transfer Learning Performance
We also evaluate our MixNets on four widely used transfer learning datasets: CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton(2009)], Oxford-IIIT Pets [Parkhi et al.(2012)], and Food-101 [Bossard et al.(2014)]. Table 3 summarizes their train set sizes, test set sizes, and numbers of classes.
Figure 9 compares our MixNet-S/M with a list of previous models on transfer learning accuracy and FLOPS. For each model, we first train it from scratch on ImageNet and then fine-tune all the weights on the target dataset using similar settings as [Kornblith et al.(2018)]. The accuracy and FLOPS data for MobileNets [Howard et al.(2017), Sandler et al.(2018)], Inception [Szegedy et al.(2015)], ResNet [He et al.(2016)], and DenseNet [Huang et al.(2017b)] are from [Kornblith et al.(2018)]. In general, our MixNets significantly outperform previous models on all these datasets, especially on the most widely used CIFAR-10 and CIFAR-100, suggesting our MixNets also generalize well to transfer learning. In particular, our MixNet-M achieves 97.92% accuracy with 3.49M parameters and 352M FLOPS, which is 11.4x more efficient with 1% higher accuracy than ResNet-50 [He et al.(2016)].
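As a rough illustration of that transfer recipe (a sketch under our assumptions; the optimizer, learning rate, and head construction below are placeholders, not the paper's settings):

```python
import tensorflow as tf

def finetune(base_model: tf.keras.Model, num_classes: int, train_ds, epochs=10):
    # Replace the classifier head with one sized for the target dataset;
    # we assume the penultimate layer holds the pooled features.
    features = base_model.layers[-2].output
    logits = tf.keras.layers.Dense(num_classes, activation="softmax")(features)
    model = tf.keras.Model(base_model.input, logits)
    # Fine-tune ALL weights (not just the new head), as described above.
    model.compile(optimizer=tf.keras.optimizers.SGD(1e-2, momentum=0.9),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=epochs)
    return model
```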
Table 3: Transfer learning dataset statistics.

| Dataset | Train Size | Test Size | Classes |
|---|---|---|---|
| CIFAR-10 [Krizhevsky and Hinton(2009)] | 50,000 | 10,000 | 10 |
| CIFAR-100 [Krizhevsky and Hinton(2009)] | 50,000 | 10,000 | 100 |
| Oxford-IIIT Pets [Parkhi et al.(2012)] | 3,680 | 3,369 | 37 |
| Food-101 [Bossard et al.(2014)] | 75,750 | 25,250 | 101 |
5 Conclusions
In this paper, we revisit the impact of kernel size for depthwise convolution, and identify that traditional depthwise convolution suffers from the limitation of a single kernel size. To address this issue, we propose MixConv, which mixes multiple kernels in a single op to take advantage of different kernel sizes. We show that MixConv is a simple drop-in replacement for vanilla depthwise convolution, and that it improves the accuracy and efficiency of MobileNets on both image classification and object detection tasks. Based on MixConv, we further develop a new family of MixNets using neural architecture search techniques. Experimental results show that our MixNets achieve significantly better accuracy and efficiency than all the latest mobile ConvNets on both ImageNet classification and four widely used transfer learning datasets.
References

[Bossard et al.(2014)] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101: Mining discriminative components with random forests. ECCV, pages 446–461, 2014.

[Cai et al.(2019)] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. ICLR, 2019.

[Chollet(2017)] François Chollet. Xception: Deep learning with depthwise separable convolutions. CVPR, 2017.

[He et al.(2016)] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, pages 770–778, 2016.

[Howard et al.(2019)] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for MobileNetV3. arXiv preprint arXiv:1905.02244, 2019.

[Howard et al.(2017)] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

[Huang et al.(2017a)] Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q. Weinberger. Multi-scale dense networks for resource efficient image classification. ICLR, 2017a.

[Huang et al.(2017b)] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. CVPR, 2017b.

[Iandola et al.(2016)] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016.

[Kornblith et al.(2018)] Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do better ImageNet models transfer better? CVPR, 2018.

[Krizhevsky and Hinton(2009)] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.

[Krizhevsky et al.(2012)] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, pages 1097–1105, 2012.

[Lin et al.(2017)] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. CVPR, 2017.

[Liu et al.(2018)] Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. ECCV, 2018.

[Liu et al.(2019)] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. ICLR, 2019.

[Ma et al.(2018)] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. ECCV, 2018.

[Parkhi et al.(2012)] Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. CVPR, pages 3498–3505, 2012.

[Real et al.(2019)] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. AAAI, 2019.

[Russakovsky et al.(2015)] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

[Sandler et al.(2018)] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. CVPR, 2018.

[Szegedy et al.(2015)] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CVPR, pages 1–9, 2015.

[Szegedy et al.(2016)] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. CVPR, pages 2818–2826, 2016.

[Szegedy et al.(2017a)] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. AAAI, 2017a.

[Szegedy et al.(2017b)] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. AAAI, 2017b.

[Tan and Le(2019)] Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. ICML, 2019.

[Tan et al.(2019)] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V. Le. MnasNet: Platform-aware neural architecture search for mobile. CVPR, 2019.

[Wu et al.(2019)] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. FBNet: Hardware-aware efficient ConvNet design via differentiable neural architecture search. CVPR, 2019.

[Xie et al.(2017)] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. CVPR, pages 5987–5995, 2017.

[Yu and Koltun(2016)] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. ICLR, 2016.

[Zhang et al.(2018)] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. CVPR, 2018.

[Zoph and Le(2017)] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. ICLR, 2017.

[Zoph et al.(2018)] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CVPR, 2018.