Image super-resolution (SR) is an important computer vision task that aims to design effective models to reconstruct high-resolution (HR) images from low-resolution (LR) images [9, 11, 34, 35, 21, 12]. Most SR models consist of two components, namely several upsampling layers that increase the spatial resolution and a set of computational blocks (e.g., residual blocks) that increase the model capacity. These two kinds of blocks/layers often follow a two-level architecture, where the network-level architecture determines the positions of the upsampling layers (e.g., in SRCNN and LapSRN) and the cell-level architecture controls the computation of each block/layer (e.g., RCAB). In practice, designing deep models is often very labor-intensive and relies heavily on human expertise [13, 15, 3, 14]. More critically, these hand-crafted architectures are often suboptimal in practice.
To address this issue, many efforts have been made to automate the model design process via Neural Architecture Search (NAS). Specifically, NAS methods seek to find an optimal cell architecture [25, 29, 16] or a whole network architecture. These automatically discovered architectures often outperform manually designed ones on both image classification and language modeling tasks [40, 29]. However, existing methods may suffer from two limitations when applied to search for an optimal SR architecture.
First, it is hard to directly search for the optimal two-level SR architecture. For SR models, both the cell-level blocks and the network-level positions of upsampling layers play very important roles in model performance. However, existing NAS methods focus on only one of these architecture levels. Thus, how to simultaneously find the optimal cell-level block and the network-level positions of upsampling layers remains an open question.
Second, most methods focus only on improving image reconstruction performance while ignoring computational complexity. In general, a larger or more complex model often yields better performance. However, given limited computational resources, such models are often too large to be applied in real-world applications [39, 26]. Regarding this issue, it is very important to automatically design promising architectures with low computation cost.
To address the above issues, we propose a novel Hierarchical Neural Architecture Search (HNAS) method to automatically design SR architectures. Unlike existing methods, HNAS simultaneously searches for the optimal cell-level blocks and the network-level positions of upsampling layers. Moreover, by considering the computation cost to build the joint reward, our method is able to produce promising architectures with low computation cost.
Our contributions are summarized as follows:
- We propose a novel Hierarchical Neural Architecture Search (HNAS) method to automatically design cell-level blocks and determine the network-level positions of upsampling layers.
- We propose a joint reward that considers both the SR performance and the computation cost of SR architectures. By training HNAS with such a reward, we can obtain a series of architectures with different performance and computation cost.
- Extensive experiments on several benchmark datasets demonstrate the superiority of the proposed method.
II Proposed Method
In this paper, we propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising two-level SR architectures, i.e., with good performance and low computation cost. To this end, we first define our hierarchical search space that consists of a cell-level search space and a network-level search space. Then, we propose a hierarchical controller as an agent to search for good architectures. To search for promising SR architectures with low computation cost, we develop a joint reward by considering both the performance and computation cost. We show the overall architecture and the controller model of HNAS in Figure 1.
II-A Hierarchical SR Search Space
In general, SR models often consist of two components, namely several upsampling layers that increase spatial resolution and a series of computational blocks that increase the model capacity. These two components form a two-level architecture, where the cell-level identifies the computation of each block and the network-level determines the positions of the upsampling layers. Based on the hierarchical architecture, we propose a hierarchical SR search space that contains a cell-level search space and a network-level search space.
Cell-level search space. In the cell-level search space, as shown in Fig. 2, we represent a cell as a directed acyclic graph (DAG) [29, 25], where the nodes denote feature maps in deep networks and the edges denote computational operations, e.g., convolution. In this paper, we define two kinds of cells: (1) a normal cell that controls the model capacity and keeps the spatial resolution of the feature map unchanged, and (2) an upsampling cell that increases the spatial resolution. To design these cells, we collect two sets of operations that have been widely used in SR models. We show the candidate operations for both cells in TABLE I.
For the normal cell, we consider seven candidate operations, including identity mapping, dilated convolution, separable convolution, the up- and down-projection block (UDPB), and the residual channel attention block (RCAB). For the upsampling cell, we consider 5 widely used operations to increase the spatial resolution. Specifically, there are 3 interpolation-based upsampling operations, namely area interpolation, bilinear interpolation, and nearest-neighbor interpolation. Moreover, we also consider 2 trainable convolutional layers, namely the deconvolution layer (also known as transposed convolution) and the sub-pixel convolution.
Based on the candidate operations, the goal of HNAS is to select the optimal operation for each edge of DAG and learn the optimal connectivity among nodes (See more details in Section II-B).
TABLE I: Candidate operations for the two kinds of cells.

| Normal Cell/Block | Upsampling Cell/Block |
|---|---|
| identity mapping | area interpolation |
| dilated convolution | bilinear interpolation |
| separable convolution | nearest-neighbor interpolation |
| up- and down-projection block (UDPB) | deconvolution (transposed convolution) |
| residual channel attention block (RCAB) | sub-pixel convolution |
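The two candidate operation sets can be written out in code. The identifiers below are ours, not the paper's; following the text, we assume the dilated and separable convolutions each come in two kernel sizes to reach the stated seven normal-cell operations.

```python
# Candidate operations for the two cell types (names illustrative,
# mirroring TABLE I; kernel-size variants are our assumption).
NORMAL_OPS = [
    "identity",
    "dilated_conv_3x3",
    "dilated_conv_5x5",
    "separable_conv_3x3",
    "separable_conv_5x5",
    "udpb",   # up- and down-projection block
    "rcab",   # residual channel attention block
]
UPSAMPLING_OPS = [
    "area_interpolation",
    "bilinear_interpolation",
    "nearest_interpolation",
    "deconvolution",    # transposed convolution
    "sub_pixel_conv",   # pixel shuffle
]
```

The cell-level search space is then the set of all DAGs whose edges are labeled with these operations.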
Network-level search space. Note that the position of the upsampling block/layer plays an important role in both the performance and the computation cost of SR models. Specifically, if we put the upsampling block in a very shallow layer, the spatial resolution of the feature maps increases too early, which significantly increases the computational cost of the whole model. By contrast, if we put the upsampling block in a very deep layer, there are few or no layers left to process the upsampled features, so the computation devoted to obtaining high-resolution images may be insufficient, leading to suboptimal SR performance. Regarding this issue, we seek to find the optimal position of the upsampling block for different SR models.
Let N1 and N2 denote the numbers of layers before and after the upsampling block, respectively (see Figure 1(a)). Thus, there are N = N1 + N2 + 1 blocks in total. Given specific normal and upsampling cells, our goal is to find the optimal position of the upsampling block among the N layers. We will show how to determine the position in Section II-B.
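A minimal sketch of the network-level search space (the helper name and block count are ours): with a fixed total number of blocks, the controller simply picks where the single upsampling block goes.

```python
def network_level_space(num_blocks):
    """Enumerate candidate positions for the single upsampling block.

    Position p means p-1 normal blocks run before the upsampling block
    (on the cheap LR feature map) and num_blocks - p run after it (on
    the expensive HR feature map).
    """
    return list(range(1, num_blocks + 1))

# A later position keeps FLOPs low, since most computation then
# happens at low resolution; a too-late position leaves few layers
# to refine the upsampled features.
positions = network_level_space(8)  # e.g., an 8-block backbone
```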
II-B Hierarchical Controller for HNAS
Following existing NAS methods, we use a long short-term memory (LSTM) network as the controller to produce candidate architectures, each represented by a sequence of tokens. Regarding the two-level hierarchy of SR models, we propose a hierarchical controller to produce promising architectures. Specifically, we consider two kinds of controllers, namely a cell-level controller that searches for the optimal architectures of both the normal block and the upsampling block, and a network-level controller that determines the positions of upsampling layers.
Cell-level controller. We utilize a cell-level controller to find the optimal computational DAG with B nodes (see the example in Fig. 2). In a DAG, the two input nodes denote the outputs of the second nearest and the nearest cell in front of the current block, respectively. The remaining nodes are intermediate nodes, each of which takes two previous nodes in the cell as inputs. For each intermediate node, the controller makes two kinds of decisions: 1) which previous nodes should be taken as inputs and 2) which operation should be applied to each edge. All of these decisions can be represented as a sequence of tokens and thus can be predicted by the LSTM controller. After repeating this process for all intermediate nodes, the intermediate nodes are concatenated together to obtain the final output of the cell, i.e., the output node.
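The decision structure above can be made concrete with a toy sampler. Plain uniform sampling stands in for the LSTM controller (which we omit), and the operation names and node count are illustrative assumptions; the point is the shape of the token sequence, not the policy.

```python
import random

OPS = ["identity", "dilated_conv", "separable_conv", "udpb", "rcab"]

def sample_cell(num_intermediate=4, seed=None):
    """Sample one cell as a flat token sequence.

    Nodes 0 and 1 are the outputs of the two preceding cells; each
    intermediate node i (i >= 2) picks two earlier nodes as inputs
    and an operation for each incoming edge. In HNAS these choices
    are predicted step by step by an LSTM controller; here we sample
    uniformly just to illustrate the decision structure.
    """
    rng = random.Random(seed)
    tokens = []
    for node in range(2, 2 + num_intermediate):
        for _ in range(2):                      # two incoming edges
            tokens.append(rng.randrange(node))  # which previous node
            tokens.append(rng.choice(OPS))      # which operation
    return tokens

cell = sample_cell(seed=0)
# 4 intermediate nodes * 2 edges * 2 decisions = 16 tokens
```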
Network-level controller. Once we have the normal block and the upsampling block, we further determine where to put the upsampling block in the SR model. Given a model with N layers, we predict the position, i.e., an integer ranging from 1 to N, at which to put the upsampling block. Since this position relies on the design of both the normal and upsampling blocks, we build a network-level controller that takes the embeddings (i.e., hidden states) of the two kinds of blocks as inputs to determine the position. Specifically, let h_normal and h_up denote the last hidden states of the controllers for the normal block and the upsampling block, respectively. We concatenate these embeddings to form the initial state of the network-level controller (see Fig. 1(b)). Since the network-level controller considers the architecture design of both the normal and upsampling blocks, it is able to determine a suitable position for the upsampling block.
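The hand-off between the two controller levels reduces to a concatenation. A sketch with plain lists standing in for LSTM hidden-state vectors (the function name is ours):

```python
def init_network_controller_state(h_normal, h_upsample):
    """Form the initial hidden state of the network-level controller
    by concatenating the last hidden states of the two cell-level
    controllers (vectors shown as plain Python lists for brevity)."""
    return h_normal + h_upsample

# The network-level controller thus conditions on both cell designs.
state = init_network_controller_state([0.1, 0.2], [0.3, 0.4])
```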
II-C Training and Inference Methods
To train HNAS, we first propose a joint reward to guide the architecture search process. Then, we describe the detailed training and inference methods of HNAS.
Joint reward. Note that designing promising architectures with low computation cost is of critical importance for real-world SR applications. To this end, we build a joint reward that considers both performance and computation cost to guide the architecture search process. Given any architecture α, let PSNR(α) be the PSNR performance of α and Cost(α) be the computation cost of α in terms of FLOPs (i.e., the number of multiply-add operations). The joint reward can be computed by

R(α) = λ · PSNR(α) − (1 − λ) · Cost(α),   (1)

where λ denotes the weight of model performance in the joint reward. In general, the higher the PSNR or the lower the cost of the architectures the controller produces, the better the model will be. From Eqn. (1), a larger λ results in architectures with better performance but higher cost (see results in Table II). In practice, we can adjust λ according to the requirements of real-world applications.
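The trade-off in Eqn. (1) can be sketched directly. The normalization of the FLOPs term below is our assumption; in practice the two terms must be scaled to comparable ranges for the weight to be meaningful.

```python
def joint_reward(psnr, cost, lam):
    """Joint reward R = lam * PSNR - (1 - lam) * Cost.

    psnr: validation PSNR (dB) of the sampled architecture.
    cost: computation cost, assumed pre-normalized to a scale
          comparable with PSNR (the paper does not state the
          normalization we use here).
    lam:  weight of model performance, 0 <= lam <= 1.
    """
    return lam * psnr - (1.0 - lam) * cost

# A larger lam favors PSNR over cost: under the same candidates,
# a high-lam reward ranks the expensive, accurate model higher.
```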
Training method for HNAS. With the joint reward, following [40, 29], we apply policy gradient to train the controller. We show the training method in Algorithm 1. To accelerate the training process, we adopt the parameter sharing technique, i.e., we construct a large computational graph in which each subgraph represents a neural network architecture, thereby forcing all candidate architectures to share their parameters.
Let θ and w denote the parameters of the controller model and the shared model parameters, respectively. The goal of HNAS is to learn an optimal policy π(·; θ) and produce candidate architectures by sampling α ∼ π(·; θ). To encourage exploration, we introduce an entropy regularization term into the objective to prevent the controllers from converging prematurely.
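A toy version of this update makes the mechanics concrete. We use a flat categorical policy over a handful of actions as a stand-in for the LSTM controller, and plain REINFORCE with an entropy bonus; the learning rate, entropy weight, and reward here are illustrative assumptions, not the paper's settings.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, reward_fn, lr=0.1, entropy_weight=0.01, rng=random):
    """One REINFORCE update of a categorical policy, with an entropy
    bonus to encourage exploration (toy stand-in for the controller)."""
    probs = softmax(logits)
    a = rng.choices(range(len(logits)), weights=probs)[0]
    r = reward_fn(a)
    entropy = -sum(p * math.log(p) for p in probs)
    new_logits = []
    for k, (l, p) in enumerate(zip(logits, probs)):
        # d log pi(a) / d logit_k = 1[k == a] - p_k
        policy_grad = r * ((1.0 if k == a else 0.0) - p)
        # d H / d logit_k = -p_k * (log p_k + H)
        entropy_grad = -p * (math.log(p) + entropy)
        new_logits.append(l + lr * (policy_grad + entropy_weight * entropy_grad))
    return new_logits

# Toy search: three candidate actions; action 2 earns reward 1.
logits = [0.0, 0.0, 0.0]
rng = random.Random(0)
for _ in range(200):
    logits = reinforce_step(logits, lambda a: 1.0 if a == 2 else 0.0, rng=rng)
# The policy concentrates its probability mass on the rewarded action.
```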
TABLE II: Quantitative comparison with state-of-the-art SR methods in terms of FLOPs and PSNR / SSIM on five benchmark datasets.

| Method | FLOPs | Set5 | Set14 | BSD100 | Urban100 | Manga109 |
|---|---|---|---|---|---|---|
| Bicubic | - | 33.65 / 0.930 | 30.24 / 0.869 | 29.56 / 0.844 | 26.88 / 0.841 | 30.84 / 0.935 |
| SRCNN | 52.7 | 36.66 / 0.954 | 32.42 / 0.906 | 31.36 / 0.887 | 29.50 / 0.894 | 35.72 / 0.968 |
| VDSR | 612.6 | 37.53 / 0.958 | 33.03 / 0.912 | 31.90 / 0.896 | 30.76 / 0.914 | 37.16 / 0.974 |
| DRCN | 17,974.3 | 37.63 / 0.958 | 33.04 / 0.911 | 31.85 / 0.894 | 30.75 / 0.913 | 37.57 / 0.973 |
| DRRN | 6,796.9 | 37.74 / 0.959 | 33.23 / 0.913 | 32.05 / 0.897 | 31.23 / 0.918 | 37.92 / 0.976 |
| SelNet | 225.7 | 37.89 / 0.959 | 33.61 / 0.916 | 32.08 / 0.898 | - | - |
| CARN | 222.8 | 37.76 / 0.959 | 33.52 / 0.916 | 32.09 / 0.897 | 31.92 / 0.925 | 38.36 / 0.976 |
| MoreMNAS-A | 238.6 | 37.63 / 0.958 | 33.23 / 0.913 | 31.95 / 0.896 | 31.24 / 0.918 | - |
| FALSR | 74.7 | 37.61 / 0.958 | 33.29 / 0.914 | 31.97 / 0.896 | 31.28 / 0.919 | 37.46 / 0.974 |
| Residual Block | 47.5 | 36.72 / 0.955 | 32.20 / 0.905 | 31.30 / 0.888 | 29.53 / 0.897 | 33.36 / 0.962 |
| RCAB | 84.9 | 37.66 / 0.959 | 33.17 / 0.913 | 31.93 / 0.896 | 31.19 / 0.918 | 37.80 / 0.974 |
| Random | 111.7 | 37.83 / 0.959 | 33.31 / 0.915 | 31.98 / 0.897 | 31.42 / 0.920 | 38.31 / 0.976 |
| HNAS-A | 30.6 | 37.84 / 0.959 | 33.39 / 0.916 | 32.06 / 0.898 | 31.50 / 0.922 | 38.15 / 0.976 |
| HNAS-B | 48.2 | 37.92 / 0.960 | 33.46 / 0.917 | 32.08 / 0.898 | 31.66 / 0.924 | 38.46 / 0.977 |
| HNAS-C | 83.6 | 38.11 / 0.964 | 33.60 / 0.920 | 32.17 / 0.902 | 31.93 / 0.928 | 38.71 / 0.985 |
Inferring Architectures. Based on the learned policy π(·; θ), we conduct sampling to obtain promising architectures. Specifically, we first sample several candidate architectures and then select the one with the highest validation performance. Finally, we build SR models using the searched architectures (including both the cell-level blocks and the network-level position of the upsampling block) and train them from scratch.
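This sample-and-select step can be sketched schematically. The function names are ours, and the validator below is a stub standing in for training and measuring PSNR on the validation set.

```python
def infer_best_architecture(sample_fn, validate_fn, num_candidates=10):
    """Sample candidate architectures from the learned policy and keep
    the one with the highest validation performance (e.g., PSNR)."""
    candidates = [sample_fn() for _ in range(num_candidates)]
    return max(candidates, key=validate_fn)

# Toy usage: architectures are upsampling positions; the pretend
# validator prefers position 5 (purely illustrative).
sampled = iter([3, 7, 5, 2])
best = infer_best_architecture(
    sample_fn=lambda: next(sampled),
    validate_fn=lambda p: -abs(p - 5),
    num_candidates=4,
)
```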
III Experiments

In the experiment, we use the DIV2K dataset to train all the models. To show the effectiveness of our method, we conduct comparisons on five benchmark datasets, including Set5, Set14, BSD100, Urban100, and Manga109. We compare different models in terms of PSNR, SSIM, and FLOPs. Please see more training details in the supplementary. We have made the code of HNAS available at https://github.com/guoyongcs/HNAS-SR.
III-A Quantitative Results
In this experiment, we compare our method with several hand-crafted SR models, including Bicubic interpolation, SRCNN, VDSR, DRCN, DRRN, SelNet, and CARN. Moreover, we also compare with two NAS-based methods, namely MoreMNAS-A and FALSR. By training HNAS with different weights in the joint reward, we can obtain a series of architectures with different performance and computation cost. To illustrate this, we consider three settings of the weight and obtain the architectures HNAS-A/B/C accordingly. Please see the detailed architectures in the supplementary.
Table II shows the quantitative comparisons for SR. Note that all FLOPs are measured on the same input LR image. Compared with the hand-crafted models, our models tend to yield higher PSNR and SSIM with fewer FLOPs. Specifically, HNAS-A has the lowest FLOPs but still outperforms a large number of baseline methods. Moreover, as we gradually increase the performance weight in the joint reward, HNAS-B and HNAS-C incur higher computation cost and yield better performance. These results demonstrate that HNAS can produce architectures with promising performance and low computation cost.
III-B Visual Results
To further show the effectiveness of the proposed method, we also conduct visual comparisons between HNAS and three SR methods, namely CARN, MoreMNAS, and FALSR. We show the results in Fig. 3.
From Fig. 3, the considered baseline methods often produce blurry images with salient artifacts. By contrast, the models searched by HNAS produce sharper images than those of the other methods. These results demonstrate the effectiveness of the proposed method.
IV Conclusion

In this paper, we have proposed a novel Hierarchical Neural Architecture Search (HNAS) method to automatically search for the optimal architectures of image super-resolution (SR) models. Since most SR models follow a two-level architecture design, we define a hierarchical SR search space and develop a hierarchical controller to produce candidate architectures. Moreover, we build a joint reward that considers both SR performance and computation cost to guide the search process of HNAS. With such a joint reward, HNAS is able to design promising architectures with low computation cost. Extensive results on five benchmark datasets demonstrate the effectiveness of the proposed method.
References

- (2018) Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 252–268.
- (2012) Low-complexity single-image super-resolution based on nonnegative neighbor embedding.
- Adversarial learning with local coordinate coding. In International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 80, pp. 707–715.
- A deep convolutional neural network with selection units for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 154–160.
- (2019) Fast, accurate and lightweight super-resolution with neural architecture search. arXiv preprint arXiv:1901.07261.
- (2019) Multi-objective reinforced evolution in mobile neural architecture search. arXiv preprint arXiv:1901.01074.
- (2015) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(2), pp. 295–307.
- (2016) A learned representation for artistic style. arXiv preprint arXiv:1610.07629.
- (2000) Learning low-level vision. International Journal of Computer Vision 40(1), pp. 25–47.
- (2016) Manga109 dataset and creation of metadata. In Proceedings of the 1st International Workshop on coMics ANalysis, Processing and Understanding, pp. 2.
- (2020) Closed-loop matters: dual regression networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- (2018) Dual reconstruction nets for image super-resolution with gradient sensitive loss. arXiv preprint arXiv:1809.07099.
- (2019) Auto-embedding generative adversarial networks for high resolution image synthesis. IEEE Transactions on Multimedia 21(11), pp. 2726–2737.
- (2016) The shallow end: empowering shallower deep-convolutional networks through auxiliary outputs. arXiv preprint arXiv:1611.01773.
- Double forward propagation for memorized batch normalization. In Thirty-Second AAAI Conference on Artificial Intelligence.
- (2019) NAT: neural architecture transformer for accurate and compact architectures. In Advances in Neural Information Processing Systems, pp. 735–747.
- (2018) Deep back-projection networks for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1664–1673.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (1997) Long short-term memory. Neural Computation 9(8), pp. 1735–1780.
- (2015) Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5197–5206.
- (2019) Pyramid-structured depth map super-resolution based on deep dense-residual network. IEEE Signal Processing Letters 26(12), pp. 1723–1727.
- (2016) Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654.
- (2016) Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1637–1645.
- (2017) Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 624–632.
- (2018) DARTS: differentiable architecture search. In International Conference on Learning Representations.
- (2020) Discrimination-aware network pruning for deep model compression. arXiv preprint arXiv:2001.01050.
- SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
- (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 416–423.
- (2018) Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning.
- (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883.
- (2017) Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3147–3155.
- (2017) NTIRE 2017 challenge on single image super-resolution: methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 114–125.
- Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4), pp. 229–256.
- (2019) Lightweight feature fusion network for single image super-resolution. IEEE Signal Processing Letters 26(4), pp. 538–542.
- (2019) Multilevel and multiscale network for single-image super-resolution. IEEE Signal Processing Letters 26(12), pp. 1877–1881.
- (2011) Adaptive deconvolutional networks for mid and high level feature learning. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 1, pp. 6.
- (2010) On single image scale-up using sparse-representations. In International Conference on Curves and Surfaces, pp. 711–730.
- (2018) Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 286–301.
- (2018) Discrimination-aware channel pruning for deep neural networks. In Advances in Neural Information Processing Systems, pp. 875–886.
- (2016) Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.
- (2018) Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8697–8710.