Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration

by Haokui Zhang et al.

Recently, much attention has been paid to neural architecture search (NAS) approaches, which often outperform manually designed architectures on high-level vision tasks. Inspired by this, we attempt to leverage NAS techniques to automatically design efficient network architectures for low-level image restoration tasks. In this paper, we propose a memory-efficient hierarchical NAS, termed HiNAS, and apply it to two such tasks: image denoising and image super-resolution. HiNAS adopts gradient-based search strategies and builds a flexible hierarchical search space consisting of an inner search space and an outer search space, which are in charge of designing cell architectures and deciding cell widths, respectively. For the inner search space, we propose a layer-wise architecture sharing strategy (LWAS), resulting in more flexible architectures and better performance. For the outer search space, we propose a cell sharing strategy that saves memory and considerably accelerates the search. The proposed HiNAS is both memory and computation efficient. With a single GTX 1080Ti GPU, it takes only about 1 hour to search for the denoising network on BSD500 and 3.5 hours to search for the super-resolution structure on DIV2K. Experimental results show that the architectures found by HiNAS have fewer parameters and enjoy a faster inference speed, while achieving highly competitive performance compared with state-of-the-art methods.
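Gradient-based NAS of the kind the abstract describes typically relies on a continuous relaxation of the search space: every edge in a cell computes a softmax-weighted mixture of candidate operations, so the architecture weights can be trained by gradient descent and later discretized. The following is a minimal sketch of that core mechanism only; the candidate operations, function names, and parameters are illustrative stand-ins, not HiNAS's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative candidate operations on one edge of a searched cell
# (a real restoration cell would search over convs, dilated convs, etc.).
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "scale_half": lambda x: 0.5 * x,
    "negate": lambda x: -x,
}

def mixed_op(x, alpha):
    """Continuous relaxation: the edge's output is the softmax(alpha)-weighted
    sum of all candidate op outputs, so alpha is differentiable and can be
    optimized jointly with the network weights."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, CANDIDATE_OPS.values()))

def discretize(alpha):
    """After the search, keep only the highest-weighted op on each edge."""
    names = list(CANDIDATE_OPS)
    return names[int(np.argmax(alpha))]

x = np.ones(4)
alpha = np.array([2.0, 0.1, -1.0])  # architecture parameters for one edge
y = mixed_op(x, alpha)              # soft mixture used during the search
print(discretize(alpha))            # op kept in the final architecture
```

The memory cost of this relaxation is what strategies like the paper's cell sharing aim to reduce: during the search, every candidate operation's output must be kept in memory for backpropagation.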



