Progressive Image Deraining Networks: A Better and Simpler Baseline

01/26/2019
by Dongwei Ren, et al.

Along with the deraining performance improvement of deep networks, their structures and learning have become increasingly complicated and diverse, making it difficult to analyze the contribution of individual network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, a progressive ResNet (PRN) is proposed that takes advantage of recursive computation by repeatedly unfolding a shallow ResNet. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet, and finally output the prediction of the residual image. As for loss functions, a single MSE or negative SSIM loss is sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at https://github.com/csdwren/PReNet.
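The progressive unfolding described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' PyTorch implementation: `stage_fn` is a hypothetical stand-in for the shared shallow ResNet, and the recurrent state used by PReNet is omitted for brevity.

```python
import numpy as np

def progressive_derain(y, stage_fn, num_stages=6):
    """Progressive deraining sketch: the same stage function is unfolded
    num_stages times with shared weights. Each stage receives the previous
    estimate concatenated with the original rainy image y, predicts the
    residual (rain layer), and subtracts it from y.

    y        : rainy image, shape (C, H, W)
    stage_fn : callable mapping a (2C, H, W) input to a (C, H, W) residual
               (hypothetical stand-in for the shared shallow ResNet)
    """
    x = y.copy()  # stage-0 estimate: the rainy image itself
    for _ in range(num_stages):
        inp = np.concatenate([x, y], axis=0)  # channel-wise concatenation
        residual = stage_fn(inp)              # shared across all stages
        x = y - residual                      # derained estimate
    return x
```

With an identity-like `stage_fn` that predicts a zero residual, the output equals the rainy input, which makes the recursion easy to sanity-check before plugging in a learned network.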
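For the negative SSIM loss mentioned in the abstract, here is a simplified sketch using global (whole-image) statistics; the paper uses the standard locally windowed SSIM, and the constants `c1`, `c2` below are the usual defaults for intensities in [0, 1].

```python
import numpy as np

def neg_ssim_loss(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    """Negative SSIM loss sketch with global image statistics (a
    simplification of the windowed SSIM used in practice). SSIM lies in
    (-1, 1], so the loss is minimized at -1 for a perfect prediction."""
    mp, mt = pred.mean(), target.mean()
    vp, vt = pred.var(), target.var()
    cov = ((pred - mp) * (target - mt)).mean()
    ssim = ((2 * mp * mt + c1) * (2 * cov + c2)) / \
           ((mp ** 2 + mt ** 2 + c1) * (vp + vt + c2))
    return -ssim
```

Training against negative SSIM directly optimizes a perceptual similarity measure, which the abstract notes is, on its own, sufficient to train PRN and PReNet.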


