1 Introduction
A common belief behind the design principles of most popular lightweight models (either manually designed or automatically searched) [sandler2018mobilenetv2, ma2018shufflenet, tan2019mnasnet, tan2019efficientnet] is to adopt the inverted residual block [sandler2018mobilenetv2]. Compared to the classic residual bottleneck block [he2016deep, he2016identity], this block shifts the identity mapping from high-dimensional representations to low-dimensional ones (i.e., the bottlenecks). However, connecting identity mappings between thin bottlenecks inevitably leads to information loss, since the residual representations are compressed, as shown in Figure 2(b). Moreover, it also weakens the propagation of gradients across layers, due to gradient confusion arising from the narrowed feature dimensions, and hence affects training convergence and model performance [sankararaman2019impact]. Therefore, despite the wide use of the inverted residual block, how to design residual blocks for mobile devices remains well worth studying.
In this paper, in view of the above concerns, we rethink the rationality of shifting from the classic bottleneck structure (Figure 2(a)) to the popular inverted residual block (Figure 2(b)) in developing mobile networks. In particular, we consider the following three fundamental questions. (i) What are the effects if we position the identity mapping (i.e., shortcuts) at the high-dimensional representations, as done in the classic bottleneck structure? (ii) While the linear activation can reduce information loss, should it be applied only to the bottlenecks? (iii) The previous questions remind us of the classic bottleneck structure, which suffers from high computational complexity. This cost can be reduced by replacing the dense spatial convolutions with depthwise ones, but, regarding the bottlenecks, should the depthwise convolution still be placed in the low-dimensional bottleneck, as is conventional?
Motivated by the above questions, we present and evaluate a new bottleneck design, termed the sandglass block. Unlike the inverted residual block that builds shortcuts between linear bottlenecks, our sandglass block puts shortcut connections between linear high-dimensional representations, as shown in Figure 2(c). Such a structure preserves more of the information delivered between blocks compared to the inverted residual block, and propagates more gradients backward to better optimize network training because of the high-dimensional residuals [sankararaman2019impact]. Furthermore, to learn more expressive spatial representations, instead of putting the spatial convolutions in the bottleneck with compressed channels, we propose to apply them in the expanded high-dimensional feature space, which we find is an effective way of improving model performance. In addition, we maintain the channel reduction and expansion process with pointwise convolutions to reduce computational cost. This makes our block quite different from the inverted residual block and more similar to the classic residual bottleneck.
We stack the sandglass blocks in a modularized way to build the proposed MobileNeXt. Our network achieves more than 1.7% top-1 classification accuracy improvement over MobileNetV2 on ImageNet with slightly less computation and a comparable number of parameters, as shown in Figure 1. When applying the sandglass block to the EfficientNet topology to replace the inverted residual block, the resulting model surpasses the previous state of the art by 0.5% with a comparable amount of computation but a 20% parameter reduction. In particular, in object detection, when taking SSDLite [liu2016ssd, sandler2018mobilenetv2] as the object detector, using our MobileNeXt as the backbone gains 0.9% mAP on the Pascal VOC 2007 test set over MobileNetV2. More interestingly, we also experimentally find that the proposed sandglass block can be used to enrich the search space of neural architecture search algorithms [liu2018darts]. By adding the sandglass block into the search space as a 'super' operator, without changing the search algorithm, the resulting model improves classification accuracy by 0.13% with 25% fewer parameters compared to models searched from the vanilla space.
In summary, we make the following contributions in this paper:

Our results advocate a rethinking of the bottleneck structure for mobile network design. It seems that inverted residuals are not as advantageous over the bottleneck structure as commonly believed.

Our study reveals that building shortcut connections along higher-dimensional feature space can improve model performance. Moreover, depthwise convolutions should be conducted in the high-dimensional space to learn more expressive features, and learning linear residuals is also crucial for the bottleneck structure.

Based on our study, we propose a novel sandglass block, which substantially extends the classic bottleneck structure. We experimentally demonstrate that this structure is more suitable for mobile applications in terms of both accuracy and efficiency and can be used as ‘super’ operators in architecture search algorithms for better architecture generation.
2 Related Work
Modern deep neural networks are mostly built by stacking building blocks, which are designed based on either the classic residual block with a bottleneck structure [he2016deep] or the inverted residual block [sandler2018mobilenetv2]. In this section, we categorize all related networks based on the above two types of building blocks and briefly describe them below.

Classic residual bottleneck blocks The bottleneck structure was first introduced in ResNet [he2016deep]. A typical bottleneck structure consists of three convolutional layers: a 1×1 convolution for channel reduction, a 3×3 convolution for spatial feature extraction, and another 1×1 convolution for channel expansion. A residual network is often constructed by stacking a sequence of such residual blocks. The bottleneck structure was further developed in later works by widening the channels in each convolutional layer [zagoruyko2016wide], applying group convolutions to the middle bottleneck convolution to aggregate richer feature representations [xie2017aggregated], or introducing attention-based modules to explicitly model interdependencies between channels [hu2018squeeze, li2019selective]. Other works [chen2017dual, touvron2019fixing] combine residual blocks with dense connections to boost performance. However, in spite of its success in heavyweight network design, this structure is rarely used in lightweight networks due to its model complexity. Our work demonstrates that, by reasonably adjusting the residual block, the classic bottleneck structure is also suitable for lightweight networks and can yield state-of-the-art results.

Inverted residual blocks The inverted residual block, first introduced in MobileNetV2 [sandler2018mobilenetv2], reverses the idea of the classic bottleneck structure and connects shortcuts between linear bottlenecks. It largely improves performance and optimizes model complexity compared to the classic MobileNet [howard2017mobilenets], which is composed of a sequence of depthwise separable convolutions. Because of its high efficiency, the inverted residual block has been widely adopted in later mobile network architectures. ShuffleNetV2 [ma2018shufflenet] inserts a channel split module before the inverted residual block and adds a channel shuffle module after it. In HBONet [li2019hbonet], downsampling operations are introduced into inverted residual blocks to model richer spatial information. MobileNetV3 [howard2019searching]
proposes to search for optimal activation functions and the expansion rate of the inverted residual block at each stage. More recently, MixNet [tan2019mixconv] proposes to search for optimal kernel sizes of the depthwise separable convolutions in the inverted residual block. EfficientNet [tan2019efficientnet] is also based on the inverted residual block, but it additionally uses a compound scaling method to control the network size in terms of input resolution, network depth, and network width. Different from all the above approaches, our work advances the standard bottleneck structure and demonstrates the superiority of our building block over the inverted residual block in mobile settings.

Model compression and neural architecture search Model compression algorithms are effective at removing redundant parameters from neural networks; examples include network pruning [liu2017learning, han2015deep, radu2019performance, caron2020pruning], quantization [hubara2017quantized, choukroun2019low], factorization [zhou2019tensor, jaderberg2014speeding], and knowledge distillation [hinton2015distilling]. Although these methods yield efficient networks, the performance of a compressed network is still closely tied to the original network's architecture. Thus, designing more efficient network architectures is essential for producing efficient models. Neural architecture search achieves this by automatically searching for efficient network architectures [tan2019mnasnet, cai2018proxylessnas, guo2019single]. However, the search space requires human expertise, and the performance of the searched networks depends largely on the designed search space, as pointed out in [ying2019bench, dong2020bench]. In this paper, we show that our proposed building block is complementary to existing search space design principles and can further improve the performance of searched networks when added to existing search spaces.
3 Method
In this section, we first review some preliminaries about the bottleneck structure widely used in previous residual networks and then describe our proposed sandglass block and network architecture.
3.1 Preliminaries
Residual block with bottleneck structure The classic residual block with bottleneck structure [he2016deep], as shown in Figure 2(a), consists of two 1×1 convolution layers for channel reduction and expansion, respectively, and one 3×3 convolution layer between them for spatial information encoding. In spite of its success in heavyweight network design [he2016deep], this conventional bottleneck structure is not suitable for building lightweight neural networks because of the large number of parameters and high computational cost of its standard convolutional layers.
Depthwise separable convolutions To reduce computational cost and make networks more efficient, depthwise separable convolutions [chollet2017xception, howard2017mobilenets] were developed to replace standard convolutions. As demonstrated in [chollet2017xception], a standard convolution with a $k \times k \times c_{in} \times c_{out}$ weight tensor, where $k$ is the kernel size and $c_{in}$ and $c_{out}$ are the numbers of input and output channels, respectively, can be factorized into two convolutions. The first is a $k \times k$ depthwise (a.k.a. channel-wise) convolution that learns the spatial correlations among locations within each channel separately. The second is a $1 \times 1$ pointwise convolution that learns to linearly combine the channels to produce new features. As the combination of a pointwise convolution and a depthwise convolution has significantly fewer parameters and computations, using depthwise separable convolutions in basic building blocks can remarkably reduce the parameter count and computational cost. Our proposed architecture also adopts such separable convolutions.

Inverted residual block The inverted residual block is specifically tailored for mobile devices, especially those with limited computational budgets. More specifically, unlike the classic bottleneck structure shown in Figure 2(a), to save computation it takes as input a low-dimensional compressed tensor and expands it to a higher-dimensional one with a pointwise convolution. It then applies a depthwise convolution for spatial context encoding, followed by another pointwise convolution that generates a low-dimensional feature tensor as input to the next block. The inverted residual block presents two distinct architectural designs for gaining efficiency without suffering too much performance drop: the shortcut connection is placed between the low-dimensional bottlenecks where applicable (as shown in Figure 2(b)), and a linear bottleneck is adopted.
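The parameter savings from the depthwise separable factorization described above can be verified with a quick count. The following sketch is our own illustration; the function names and example channel widths are not from the paper.

```python
# Parameter-count comparison between a standard convolution and its
# depthwise separable factorization (ignoring biases).

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """k x k standard convolution: one k x k x c_in filter per output channel."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """k x k depthwise convolution (k*k weights per input channel)
    followed by a 1 x 1 pointwise convolution (c_in weights per output channel)."""
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 144, 144  # illustrative sizes
    dense = standard_conv_params(k, c_in, c_out)       # 186624
    separable = separable_conv_params(k, c_in, c_out)  # 1296 + 20736 = 22032
    print(dense, separable, dense / separable)         # roughly 8.5x fewer parameters
```

For a 3×3 kernel the separable form costs roughly $1/9$ of the dense convolution once $c_{out}$ is large, which is why mobile blocks adopt it throughout.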
Despite good performance [sandler2018mobilenetv2], in inverted residual blocks the feature maps encoded by the intermediate expansion layer must first be projected to low-dimensional ones, which may not preserve enough useful information due to channel compression. Moreover, recent studies have unveiled that wider architectures are more favorable for alleviating gradient confusion [sankararaman2019impact] and hence can improve network performance. Putting shortcut connections between bottlenecks may prevent the gradients from top layers from being successfully propagated to bottom layers during training, because of the low dimensionality of the representations between adjacent inverted residual blocks.
3.2 Sandglass Block
In view of the aforementioned limitations of the inverted residual block, we rethink its design rules and present a sandglass block that tackles the above issues by reversing the idea behind inverted residuals.
Our design principle is mainly based on the following insights: (i) To preserve more information from the bottom layers when transiting to the top layers and to facilitate gradient propagation across layers, the shortcuts should be positioned to connect high-dimensional representations. (ii) Depthwise convolutions with small kernel size (e.g., 3×3) are lightweight, so we can appropriately apply a couple of depthwise convolutions to the higher-dimensional features such that richer spatial information can be encoded to generate more expressive representations. We elaborate on these design considerations in the following.
Rethinking the positions of expansion and reduction layers Originally, the inverted residual block performs expansion first and then reduction. Based on the aforementioned design principle, to make sure the shortcuts connect high-dimensional representations, we propose to first reverse the order of the two pointwise convolutions. Let $F$ denote the input tensor and $G$ the output tensor of a building block (for simplicity, we assume that the input and output of the building block share the same number of channels and the same resolution). We do not consider the depthwise convolution and activation layers at this moment. The formulation of our building block can be written as follows:
$G = \phi_e \phi_r (F) + F$ (1)
where $\phi_e$ and $\phi_r$ denote the two pointwise convolutions for channel expansion and reduction, respectively. In this way, we can keep the bottleneck in the middle of the residual path to save parameters and computational cost. More importantly, this allows us to use the shortcut connection to link representations with a large number of channels instead of the bottleneck ones.
High-dimensional shortcuts Instead of placing the shortcut between bottlenecks, we put it between higher-dimensional representations, as shown in Figure 3(b). The 'wider' shortcut delivers more information from the input to the output than in the inverted residual block and allows more gradients to propagate across multiple layers.
Learning expressive spatial features Pointwise convolutions can encode inter-channel information but fail to capture spatial information. In our building block, we follow previous mobile networks and adopt depthwise spatial convolutions to encode spatial information. The inverted residual block adds its depthwise convolution between pointwise convolutions so that it operates on expanded features. In our case, however, the position between the two pointwise convolutions is the bottleneck. Directly adding a depthwise convolution in the bottleneck, as shown in Figure 3(a), gives it fewer filters, so less spatial information can be encoded. We experimentally find that this structure largely degrades performance, falling behind MobileNetV2 by more than 1%.
Regarding the positions of the depthwise convolutions, instead of placing a single depthwise convolution between the two pointwise convolutions, we propose to add depthwise convolutions at the two ends of the residual path, as shown in Figure 3(b). Mathematically, our building block can be formulated as follows:
$\hat{G} = \phi_{1,p}\, \phi_{1,d} (F)$ (2)
$G = \phi_{2,d}\, \phi_{2,p} (\hat{G}) + F$ (3)
where $\phi_{i,p}$ and $\phi_{i,d}$ denote the $i$-th pointwise convolution and depthwise convolution, respectively. In this way, since both depthwise convolutions are conducted in high-dimensional spaces, richer feature representations can be extracted than in the inverted residual block. We will give more explanations on the advantages of this design.
| Input dimension | Operator type | Output dimension |
| --- | --- | --- |
|  | 3×3 Dwise conv, ReLU6 |  |
|  | 1×1 conv, linear |  |
|  | 1×1 conv, ReLU6 |  |
|  | 3×3 Dwise conv, linear, stride = s |  |
Activation layers It has been demonstrated in [sandler2018mobilenetv2] that using linear bottlenecks helps prevent feature values from being zeroed out and hence reduces information loss. Following this suggestion, we do not add any activation layer after the reduction layer (the first pointwise convolutional layer). It should also be noted that, though the output of our building block is high-dimensional, we empirically find that adding an activation layer after the last convolution negatively influences classification performance. Therefore, activation layers are added only after the first depthwise convolutional layer and the last pointwise convolutional layer. We give more explanations on this in our experiments.
Block structure Taking the above considerations into account, we design a novel residual bottleneck block. The structural details are given in Table 1, and the diagram can be found in Figure 3(b). Note that when the input and output have different numbers of channels, we do not add the shortcut connection. For depthwise convolutions, we always use a 3×3 kernel, as done in other works [he2016deep, sandler2018mobilenetv2]. We also utilize batch normalization and ReLU6 activation where necessary during training.
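For concreteness, the block described above can be sketched as a PyTorch module. This is our own minimal reading of Table 1 and Figure 3(b), not the authors' released code: we assume the input and output share the same channel count (so the shortcut always applies at stride 1), and the `reduction` argument, which sets the bottleneck width, is our naming.

```python
import torch
import torch.nn as nn

class SandglassBlock(nn.Module):
    """Sketch of the sandglass block: depthwise 3x3 + ReLU6, linear 1x1
    reduction, 1x1 expansion + ReLU6, and a linear depthwise 3x3 that
    carries the stride, with a high-dimensional shortcut."""

    def __init__(self, channels: int, reduction: int = 6, stride: int = 1):
        super().__init__()
        mid = channels // reduction  # bottleneck width between the pointwise convs
        self.stride = stride
        self.block = nn.Sequential(
            # depthwise 3x3 on the high-dimensional input, with ReLU6
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU6(inplace=True),
            # 1x1 reduction, linear (no activation)
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid),
            # 1x1 expansion with ReLU6
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU6(inplace=True),
            # depthwise 3x3 at the output, linear, carries the stride
            nn.Conv2d(channels, channels, 3, stride=stride, padding=1,
                      groups=channels, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        out = self.block(x)
        # high-dimensional shortcut, only when spatial shapes match
        if self.stride == 1:
            out = out + x
        return out
```

Note how both depthwise convolutions see the full `channels`-wide tensor, while only the two pointwise convolutions touch the `mid`-wide bottleneck.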
Relation to the inverted and classic residual blocks Although both architectures exploit bottlenecks, their design intuitions and internal structures are quite different. Our goal is to demonstrate that the idea of building shortcut connections between high-dimensional representations, as in the classic bottleneck structure [he2016deep], is suitable for lightweight networks as well. To the best of our knowledge, this is the first work that attempts to investigate the advantages of the classic bottleneck structure over the inverted residual block for efficient network design. On the other hand, we also attempt to demonstrate that adding depthwise convolutions at the ends of the residual path in our structure encourages the network to learn more expressive spatial information and hence yields better performance. In the experiment section, we show more numerical results and provide a detailed analysis.
3.3 MobileNeXt Architecture
Based on our sandglass block, we develop a modularized architecture, MobileNeXt. At the beginning of the network is a 3×3 convolutional layer with 32 output channels, after which our sandglass blocks are stacked. Detailed information about the network architecture can be found in Table 2. Following [sandler2018mobilenetv2], the expansion ratio used in our network is set to 6 by default. The output of the last building block is followed by a global average pooling layer that transforms the 2D feature maps into a 1D feature vector. A fully-connected layer is finally added to predict the score for each category.
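The classification head described above (global average pooling followed by a fully-connected layer) can be sketched as follows. The 1280-channel width is an assumption borrowed from MobileNetV2-style heads and is not stated in the text; the ImageNet class count of 1000 is standard.

```python
import torch
import torch.nn as nn

# Minimal sketch of the MobileNeXt classification head: global average
# pooling collapses the 2D feature maps into a 1D vector, and a
# fully-connected layer produces per-class scores.
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # (N, C, H, W) -> (N, C, 1, 1)
    nn.Flatten(),              # -> (N, C)
    nn.Linear(1280, 1000),     # -> (N, num_classes); widths are assumed
)
```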
Identity tensor multiplier The shortcut connections in residual blocks have been shown to be essential for improving the capability of propagating gradients across layers [he2016deep, sandler2018mobilenetv2]. According to our experiments, we find that there is no need to keep the whole identity tensor when combining it with the residual path. To make our network more friendly to mobile devices, we introduce a new hyperparameter, the identity tensor multiplier, denoted by $\alpha \in [0, 1]$, which controls what portion of the channels in the identity tensor is preserved. For convenience, let $\mathcal{G}$ be the transformation function of the residual path in our block. Originally, the formulation of our block can be written as $G = \mathcal{G}(F) + F$. After applying the multiplier, our building block can be rewritten as
$G_{1:\alpha m} = \mathcal{G}(F)_{1:\alpha m} + F_{1:\alpha m}, \qquad G_{\alpha m:m} = \mathcal{G}(F)_{\alpha m:m}$ (4)
where the subscripts index the channel dimension and $m$ denotes the number of channels.
The advantages of using $\alpha$ are mainly twofold. First, after reducing the multiplier, the number of element-wise additions in each building block is reduced. As pointed out in [ma2018shufflenet], element-wise addition is time-consuming, so users can choose a lower identity tensor multiplier to obtain better latency with nearly no performance drop. Second, the number of memory accesses can be reduced. One of the main factors affecting model latency is the memory access cost (MAC). As the shortcut identity tensor comes from the output of the previous building block, its recurrent nature hints at an opportunity to cache it on chip in order to avoid excessive off-chip memory accesses. Reducing the channel dimension of the identity tensor therefore encourages processors to store it in the cache or other fast memory near the processors, improving latency. We give more details on how this multiplier affects performance and model latency in the experiment section.
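The partial identity addition of Eq. (4) can be sketched in a few lines of NumPy. The function name is ours; tensors are laid out channel-first, matching the channel-indexed subscripts in the equation.

```python
import numpy as np

def partial_identity_add(residual: np.ndarray, identity: np.ndarray,
                         alpha: float) -> np.ndarray:
    """Combine the residual-path output with only the first alpha fraction
    of identity channels, per Eq. (4). Tensors are (channels, H, W)."""
    m = identity.shape[0]
    keep = int(alpha * m)          # channels that receive the shortcut
    out = residual.copy()
    out[:keep] += identity[:keep]  # element-wise add on the kept slice only
    return out
```

With `alpha = 1.0` this reduces to the usual full shortcut; smaller values cut both the number of additions and the amount of identity data that must be kept resident.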
| No. | t | Output dimension | s | b | Input dimension | Operator |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | – |  | 2 | 1 |  | conv2d 3×3 |
| 2 | 2 |  | 2 | 1 |  | sandglass block |
| 3 | 6 |  | 1 | 1 |  | sandglass block |
| 4 | 6 |  | 2 | 3 |  | sandglass block |
| 5 | 6 |  | 2 | 3 |  | sandglass block |
| 6 | 6 |  | 1 | 4 |  | sandglass block |
| 7 | 6 |  | 2 | 4 |  | sandglass block |
| 8 | 6 |  | 1 | 2 |  | sandglass block |
| 9 | 6 |  | 1 | 1 |  | sandglass block |
| 10 | – |  | – | 1 |  | avgpool 7×7 |
| 11 | – |  | – | 1 |  | conv2d 1×1 |
4 Experiments
4.1 Experiment Setup
We adopt the PyTorch toolbox [paszke2019pytorch] to implement all our experiments. We use the standard SGD optimizer with a momentum of 0.9 to train our models, together with weight decay. We use a cosine learning rate schedule with an initial learning rate of 0.05. The batch size is set to 256, and four GPUs are used for training. Unless otherwise stated, we train all models for 200 epochs and report results on ImageNet [krizhevsky2012imagenet] for classification and the Pascal VOC dataset [pascalvoc2012] for object detection. We use distributed training with three epochs of warmup.

4.2 Comparisons with MobileNetV2
In this subsection, we extensively study the advantages of our MobileNeXt over MobileNetV2 under various settings. Besides comparing the classification performance of the full models (i.e., with a width multiplier of 1), we also compare performance under other width multipliers and under quantization. This helps unveil the performance advantage of our model across the full spectrum of model architecture configurations.
Comparison under different width multipliers We use the width multiplier as a scaling factor to trade off model complexity and accuracy, as done in [howard2017mobilenets, sandler2018mobilenetv2, howard2019searching]. Here, we adopt five multipliers, 1.4, 1.0, 0.75, 0.5, and 0.35, to show the superiority of our network over MobileNetV2. As can be seen in Table 3 (we also conducted latency measurements with TFLite on a Pixel 4XL; the measured latencies for MobileNeXt and MobileNetV2 are 66 ms and 68 ms, respectively), our networks with different multipliers all outperform MobileNetV2 with comparable numbers of learnable parameters and computational cost. The performance gain of our model over MobileNetV2 is especially large when the multiplier is small, demonstrating that our model is more efficient at small sizes.
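Applying a width multiplier means scaling every layer's channel count and rounding to a hardware-friendly multiple. The rounding rule below follows the common MobileNet-family convention (`_make_divisible`-style); the paper does not spell out its exact rule, so treat this as an assumed sketch.

```python
def scale_channels(channels: int, width_mult: float, divisor: int = 8) -> int:
    """Scale a layer's channel count by the width multiplier, rounding to
    the nearest multiple of `divisor` (MobileNet-family convention)."""
    scaled = max(divisor, int(channels * width_mult + divisor / 2) // divisor * divisor)
    # avoid rounding down by more than 10% of the target width
    if scaled < 0.9 * channels * width_mult:
        scaled += divisor
    return scaled
```

For example, the 32-channel stem becomes 48 channels under a 1.4 multiplier and 16 channels under a 0.35 multiplier with this rule.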
| No. | Model | Param. (M) | MAdds (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| 1 | MobileNetV2-1.40 | 6.9 | 690 | 74.9 |
| 2 | MobileNetV2-1.00 | 3.5 | 300 | 72.3 |
| 3 | MobileNetV2-0.75 | 2.6 | 150 | 69.9 |
| 4 | MobileNetV2-0.50 | 2.0 | 97 | 65.4 |
| 5 | MobileNetV2-0.35 | 1.7 | 59 | 60.3 |
| 6 | MobileNeXt-1.40 | 6.1 | 590 | 76.1 |
| 7 | MobileNeXt-1.00 | 3.4 | 300 | 74.0 |
| 8 | MobileNeXt-0.75 | 2.5 | 210 | 72.0 |
| 9 | MobileNeXt-0.50 | 2.1 | 110 | 67.7 |
| 10 | MobileNeXt-0.35 | 1.8 | 80 | 64.7 |
| Model | Precision (W/A) | Method | Top-1 Acc. (%) |
| --- | --- | --- | --- |
| MobileNetV2 | INT8/INT8 | Post-training quant. | 65.07 |
| MobileNeXt | INT8/INT8 | Post-training quant. |  |
| MobileNetV2 | FP32/FP32 | – | 72.25 |
| MobileNeXt | FP32/FP32 | – |  |
Comparison under post-training quantization Quantization algorithms are often used in real-world applications as an effective compression tool with only subtle performance loss. However, the performance of a quantized model is significantly affected by the original base model. We experimentally show that MobileNeXt achieves better performance than MobileNetV2 when combined with a quantization algorithm. Here, we use the widely-used post-training linear quantization method introduced in [migacz2017nvidia]. We apply 8-bit quantization to both weights and activations, as 8-bit is the most common scheme on hardware platforms. The results are shown in Table 4. Without quantization, our network improves over MobileNetV2 by more than 1.7% in top-1 accuracy. When the parameters and activations are quantized to 8 bits, our network outperforms MobileNetV2 by 3.55% under the same quantization settings. The reasons for this large improvement are twofold. First, compared to MobileNetV2, we move the shortcut in each building block from low-dimensional representations to high-dimensional ones, so after quantization more informative feature representations are preserved. Second, using more depthwise spatial convolutions helps preserve more spatial information, which we believe benefits classification performance.
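To make the quantization setup concrete, here is a simplified symmetric per-tensor int8 quantizer. It is a stand-in for the post-training scheme referenced above, not the exact method of [migacz2017nvidia], which uses calibrated clipping thresholds; function names and the per-tensor scale are our simplifications.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor linear quantization to int8.
    Returns the quantized tensor and the float scale."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to floats for error inspection."""
    return q.astype(np.float32) * scale
```

For values within the clipping range, the reconstruction error of this scheme is bounded by half a quantization step (scale / 2), which is why the informativeness of the features being quantized, rather than the quantizer itself, dominates the accuracy gap.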
| Method | #Dwise convs | Param. (M) | MAdds (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| MobileNetV2 | 2 (middle) | 3.6 | 340 | 73.02 |
| MobileNeXt | 2 (top, bottom) | 3.5 | 300 | 74.02 |
Comparison with MobileNetV2 on structure As shown in Figure 3(b), our sandglass block contains two depthwise convolutions for encoding rich spatial context information. To demonstrate that the benefit of our model comes from the novel architecture rather than from an extra depthwise convolution or a larger receptive field, in this experiment we compare against an improved version of MobileNetV2 with one more depthwise convolution inserted in the middle of each inverted residual block. The results are shown in Table 5. After adding one more depthwise convolution, the performance of MobileNetV2 increases to 73%, which is still far worse than ours (74%) despite more learnable parameters and higher complexity. This indicates that our network structurally does have an edge over MobileNetV2.
4.3 Comparison with State-of-the-Art Mobile Networks
To further verify the superiority of our proposed sandglass block over the inverted residual block, we add squeeze-and-excitation modules to our MobileNeXt, as done in [howard2019searching, tan2019efficientnet]. We do not apply any search algorithm to the architecture design or the data augmentation policy. We directly take the EfficientNet-b0 architecture [tan2019efficientnet] and replace the inverted residual block with the sandglass block, using the basic augmentation policy. As shown in Table 6, with a comparable amount of computation and a 20% parameter reduction, replacing the inverted residual block with the sandglass block yields a 0.4% top-1 classification accuracy improvement on the ImageNet-1k dataset.
4.4 Ablation Studies
In Sec. 4.2, we have shown the importance of connecting high-dimensional representations with shortcuts. In this subsection, we study how other model design choices contribute to model performance and efficiency, including the effect of using wider transformations, the importance of learning linear residuals, and the role of the identity tensor multiplier.
Importance of using wider transformations As described in Sec. 3, we apply the spatial transformations and shortcut connections to high-dimensional representations. To demonstrate the importance of such operations, we follow the inverted residual block and instead use shortcuts to connect the bottleneck representations. This leads to an accuracy decrease of 1%, indicating that applying shortcuts at the wider dimension is more beneficial.
Importance of linear residuals According to MobileNetV2 [sandler2018mobilenetv2], its classification performance degrades when the linear bottleneck is replaced with a nonlinear one, because of information loss. From our experiments, we reach a more general conclusion. We find that, even though the shortcuts connect high-dimensional representations in our model, adding a nonlinear activation (ReLU6) to the last convolutional layer decreases performance by nearly 1% compared to using a linear activation (no ReLU6). This indicates that learning linear residuals (i.e., adding no nonlinear activation layer on top of the residual path) is essential for lightweight networks whose shortcuts connect either expansion layers or reduction layers.
| Model | Param. (M) | MAdds (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- |
| MobileNetV1-1.0 [howard2017mobilenets] | 4.2 | 575 | 70.6 |
| ShuffleNetV2-1.5 [ma2018shufflenet] | 3.5 | 299 | 72.6 |
| MobileNetV2-1.0 [sandler2018mobilenetv2] | 3.5 | 300 | 72.3 |
| MnasNet-A1 [tan2019mnasnet] | 3.9 | 312 | 75.2 |
| MobileNetV3-L-0.75 [howard2019searching] | 4.0 | 155 | 73.3 |
| ProxylessNAS [cai2018proxylessnas] | 4.1 | 320 | 74.6 |
| FBNet-B [wu2019fbnet] | 4.5 | 295 | 74.1 |
| IGCV3-D [sun2018igcv3] | 7.2 | 610 | 74.6 |
| GhostNet-1.3 [han2019ghostnet] | 7.3 | 226 | 75.7 |
| EfficientNet-b0 [tan2019efficientnet] | 5.3 | 390 | 76.3 |
| MobileNeXt-1.0 | 3.4 | 300 | 74.02 |
| MobileNeXt-1.0 | 3.94 | 330 | 76.05 |
| MobileNeXt-1.1 |  | 420 |  |
Effect of the identity tensor multiplier Here, we investigate how the identity tensor multiplier (Sec. 3.3) trades off model accuracy and latency. We use PyTorch to generate the models and run them on a Google Pixel 4XL. For each model, we measure the average inference time over 10 images as the final inference latency. As shown in Table 7, reducing the multiplier has only a subtle impact on classification accuracy. When half of the identity representations are removed, there is no performance drop but the latency improves. When the multiplier is reduced further, the performance decreases by 0.34%, from 74.02% to 73.68%, with a further improvement in latency. This indicates that this hyperparameter does matter for balancing model performance and latency.
| No. | Model | Tensor multiplier | Param. (M) | Top-1 Acc. (%) | Latency (ms) |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileNeXt | 1.0 | 3.4 | 74.02 | 211 |
| 2 | MobileNeXt |  | 3.4 | 74.09 | 196 |
| 3 | MobileNeXt |  | 3.4 | 73.91 | 195 |
| 4 | MobileNeXt |  | 3.4 | 73.68 | 188 |
4.5 Application for Object Detection
To explore the transfer capability of the proposed network relative to MobileNetV2, in this subsection we apply our classification models to the object detection task as pretrained backbones. We use both the proposed network and MobileNetV2 as feature extractors and report results on the Pascal VOC 2007 test set [everingham2015pascal] following [liu2016ssd], using SSDLite [sandler2018mobilenetv2, liu2016ssd]. Similar to [sandler2018mobilenetv2], the first and second layers of SSDLite are connected to the last pointwise convolution layers with output strides of 16 and 32, respectively. The rest of the SSDLite layers are attached on top of the last convolutional layer with output stride 32. During training, we use a batch size of 24, and all models are trained for 240,000 iterations. For more detailed settings, readers can refer to [sandler2018mobilenetv2, liu2016ssd].
In Table 8, we show the results with different backbone networks. With nearly the same number of parameters and computation, SSDLite with our backbone improves over the MobileNetV2 version by nearly 1% mAP. This demonstrates that the proposed network has better transfer capability than MobileNetV2.
No.  Method  Backbone  Param. (M)  MAdds (B)  mAP (%) 

1  SSD300  VGG [simonyan2014very]  36.1  35.2  77.2 
2  SSDLite320  MobileNetV2 [sandler2018mobilenetv2]  4.3  0.8  71.7 
3  SSDLite320  MobileNeXt  4.3  0.8  72.6 
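The attachment scheme above can be pictured as tracing the cumulative output stride through the backbone and recording the feature maps where the detection heads connect. This is an illustrative sketch of that wiring, not the actual SSDLite code; `collect_detection_features` and the `(layer_fn, stride)` representation are our own assumptions.

```python
def collect_detection_features(layers, x):
    """Run x through backbone stages and record the feature maps where
    SSDLite heads attach: the last map produced at output stride 16 and
    the final map at stride 32.

    `layers` is a list of (layer_fn, stride) pairs, where `stride` is
    the spatial downsampling factor of that stage. Sketch only.
    """
    taps = {}
    total_stride = 1
    for layer_fn, stride in layers:
        x = layer_fn(x)
        total_stride *= stride
        if total_stride in (16, 32):
            taps[total_stride] = x  # later stages at the same stride overwrite
    return taps
```

For a backbone whose stages each halve the resolution, the dictionary ends up holding the two feature maps the first two SSDLite layers consume.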
4.6 Improving Architecture Search with Super Operators
Previous subsections have verified that the proposed sandglass block is more effective than the inverted residual block on both the classification task and the object detection task. From a holistic perspective, a residual block can also be regarded as a 'super' operator with more transformation capacity than a regular convolutional operator. To further investigate the superiority of the sandglass block over the inverted residual block, we separately add each of them to the search space of the differentiable architecture search algorithm DARTS [liu2018darts] and report the performance of the searched networks on the CIFAR10 dataset. As shown in Table 9, adding our sandglass block as a new operator to the DARTS search space, without changing the cell structure, yields a model with higher accuracy than the one found in the original DARTS search space, with about 25% fewer parameters. In contrast, adding the inverted residual block to the search space degrades the original performance. This demonstrates that our sandglass block generates more expressive representations than the inverted residual block and can also serve as a 'super' operator in architecture search algorithms. For details of the searched cell structures, please refer to our supplementary material.
No.  Search Space  Test Error (%)  Param. (M)  Search Method  #Operators 

1  DARTS original  3.11  3.25  gradient based  7 
2  DARTS + IR Block  3.26  3.29  gradient based  8 
3  DARTS + sandglass block  2.98  2.45  gradient based  8 
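Treating a residual block as a 'super' operator amounts to registering it as one more candidate in the search space while leaving the cell structure untouched. The sketch below illustrates this with a DARTS-style operation registry; `OPS`, the factory signatures, and the tuple payloads are illustrative stand-ins, not DARTS' actual code.

```python
# Illustrative DARTS-style candidate-operation registry. Real DARTS maps
# operator names to nn.Module factories; tuples stand in for modules here.
OPS = {
    'sep_conv_3x3': lambda C, stride: ('sep_conv_3x3', C, stride),
    'dil_conv_3x3': lambda C, stride: ('dil_conv_3x3', C, stride),
    'skip_connect': lambda C, stride: ('skip_connect', C, stride),
    # ... the remaining original DARTS primitives (7 in total)
}

def add_super_operator(ops, name, factory):
    """Return a copy of the search space extended with one candidate op,
    leaving the original space and the cell structure unchanged."""
    extended = dict(ops)
    extended[name] = factory
    return extended

search_space = add_super_operator(
    OPS, 'sandglass_block',
    lambda C, stride: ('sandglass_block', C, stride))
```

The search algorithm itself needs no modification: it simply sees one more candidate per edge, which is how the #Operators column in Table 9 grows from 7 to 8.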
5 Conclusions
In this paper, we analyze in depth the design rules and shortcomings of the inverted residual block. Based on this analysis, we propose to reverse the idea of adding shortcut connections between lowdimensional representations and present a novel building block, called the sandglass block, that instead connects highdimensional representations. We also break with the convention of using a single spatial convolution per residual block and show the importance of adding one more such convolution in the highdimensional feature space. Experiments on classification, object detection, and neural architecture search demonstrate the effectiveness of the proposed sandglass block and its potential for use in more contexts.
Acknowledgement
Jiashi Feng was partially supported by MOE Tier 2 MOE2017T22151, NUS_ECRA_FY17_P08, AISG100E2019035.
References
Appendix 0.A Variants of the Proposed Sandglass Block
In this section, we introduce and compare variants of our proposed sandglass block, shown in Figure 4(a)-(c). The corresponding results are listed in Table 10.

The first variant (Figure 4(a)) is built by directly modifying the classic bottleneck structure [he2016deep], replacing the standard convolution with a depthwise convolution. From the results, we observe a performance drop of about 5% compared to our sandglass block (Figure 4(d)). We argue this is mostly because the depthwise convolution is conducted in the bottleneck, a lowdimensional feature space, and hence cannot capture enough spatial information.

The second variant (Figure 4(b)) is derived from the first, but additionally adds another depthwise convolution in the bottleneck. As can be seen, the top1 accuracy improves by more than 1% over the structure in Figure 4(a), indicating that encoding more spatial information indeed helps. The same trend can be observed by comparing Figure 4(b) with Figure 4(d) (70.11 vs. 74.02).

The third variant (Figure 4(c)) is based on the original inverted residual block [sandler2018mobilenetv2]: we move the depthwise convolutions from the highdimensional feature space to the bottleneck positions with fewer feature channels. Compared with Figure 4(b), the variant in Figure 4(c) has a comparable number of learnable parameters and more computation, yet worse performance (69.26 vs. 70.11). This again shows that building shortcuts between highdimensional representations is more beneficial to network performance.
As shown in Table 10, our proposed sandglass block achieves much better results than all three variants. The improvements can be explained by the two rules presented in the main paper: (1) add shortcut connections between highdimensional representations, and (2) perform the depthwise convolutions in the highdimensional feature space. Our experiments also indicate that the bottleneck structure is still suitable for mobile networks and can work better than the inverted residual block.
Block structure  #Dwise convs  Param. (M)  MAdds (M)  Top1 Acc. (%) 

MobileNetV2  1  3.5  300  72.3 
Figure 4(a)  1  3.4  240  68.90 
Figure 4(b)  2  3.4  250  70.11 
Figure 4(c)  2  3.5  300  69.26 
Figure 4(d)  2  3.5  300  74.02 
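The channel flow that distinguishes Figure 4(d) from the three variants can be summarized schematically. The sketch below traces the per-layer channel dimensions through the sandglass block; the reduction ratio `t` and the function itself are illustrative assumptions for exposition, not the paper's code.

```python
def sandglass_block_shapes(c_in, t=6):
    """Channel flow through the sandglass block (Figure 4(d)): the two
    depthwise convolutions operate on the wide (highdimensional)
    features, the two pointwise convolutions reduce and expand the
    channels, and the identity shortcut connects the two wide ends.

    `t` is an assumed channel-reduction ratio; sketch only.
    """
    c_mid = c_in // t
    return [
        ('dwise', c_in, c_mid is not None and c_in),  # spatial conv, highdim
        ('pwise', c_in, c_mid),   # reduce into the bottleneck
        ('pwise', c_mid, c_in),   # expand back to highdim
        ('dwise', c_in, c_in),    # second spatial conv, highdim
    ]  # shortcut: input (c_in channels) added to output (c_in channels)
```

Variant (a) and (c) instead place the depthwise convolutions on the `c_mid`-channel features, which is exactly the choice Table 10 shows to be harmful.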
No.  Search Space  Test Error (%)  Param. (M)  Search Method  #Operators 

1  DARTS original  3.11  3.25  gradient based  7 
2  DARTS + IR Block  3.26  3.29  gradient based  8 
3  DARTS + sandglass block  2.98  2.45  gradient based  8 
Appendix 0.B Searched Architectures
Following [liu2018darts, zoph2018learning, cai2018proxylessnas], we also search for a computation cell and use it as the basic building block of the final architecture. The search space and algorithm are described in detail below.
Search space In our experiments, we use the search space from [liu2018darts] as our baseline (denoted as original), which includes the following operators:

Convolutional operations (ConvOp): regular convolution, dilated convolution, depthwise convolution;

Convolution kernel sizes: , , followed by (for the inverted residual block and our sandglass block, we only use a kernel size of for depthwise convolutions);

Nonparametric operations: average pooling, max pooling, skip connection, None.
The results are reported in Line 1 of Table 11. To compare with the inverted residual block, we conduct architecture search within the following two new search spaces.

Original + IR block: the original search space plus the inverted residual block as the depthwise separable convolution operation candidate.

Original + sandglass block: the original search space plus the sandglass block.
The corresponding results are reported in Lines 2-3 of Table 11, respectively. The zero (None) operation is also included to indicate the absence of a connection, as in [liu2018darts].
Searching algorithm As mentioned in the main paper, we adopt the DARTS searching algorithm [liu2018darts] to search for the cell structure and set the number of nodes to 7 in each directed acyclic graph (DAG) of the cell. During the searching process, we strictly follow the training policy and use the same hyperparameters as in [liu2018darts] for a fair comparison.
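At the core of the DARTS algorithm we adopt is the continuous relaxation of the operator choice: each edge of the DAG outputs a softmax-weighted sum over all candidate operations, and the weights (architecture parameters) are learned by gradient descent. A minimal sketch of that mixing step, with scalars standing in for feature tensors:

```python
import math

def mixed_op(op_outputs, alphas):
    """DARTS continuous relaxation for one edge: softmax the
    architecture parameters `alphas` and return the weighted sum of the
    candidate operations' outputs. Scalars stand in for tensors here;
    in DARTS both are torch tensors and the alphas are learnable.
    """
    exps = [math.exp(a) for a in alphas]
    z = sum(exps)
    return sum((e / z) * o for e, o in zip(exps, op_outputs))
```

After search, each edge keeps only the operation with the largest learned weight, which is how the discrete cells in Figure 5 are obtained.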
Architecture and results The searched cell structures (both the normal cell and the reduction cell) are shown in Figure 5. As can be seen in Table 11, adding the proposed sandglass block to the original search space largely reduces the number of learnable parameters of the searched architecture while improving classification performance on the CIFAR10 dataset. This again shows that, when combined with the search algorithm, our sandglass block can replace the original blocks to improve performance.
Conclusion and discussion From the above results, we observe that introducing appropriate super operators (e.g., our sandglass block) into the search space yields better performance than using only the original basic operators. We hope this experiment can benefit the development of architecture search algorithms in the future.