SL-CycleGAN: Blind Motion Deblurring in Cycles using Sparse Learning
In this paper, we introduce an end-to-end generative adversarial network (GAN) based on sparse learning for single-image blind motion deblurring, which we call SL-CycleGAN. For the first time in blind motion deblurring, we propose a sparse ResNet-block as a combination of sparse convolution layers and a trainable k-winner spatial pooler based on HTM (Hierarchical Temporal Memory), replacing non-linearities such as ReLU in the ResNet-blocks of the SL-CycleGAN generators. Furthermore, unlike many state-of-the-art GAN-based motion deblurring methods that treat motion deblurring as a linear end-to-end process, we take inspiration from the domain-to-domain translation ability of CycleGAN and show that image deblurring can be cycle-consistent while achieving the best qualitative results. Finally, we perform extensive qualitative and quantitative experiments on popular image benchmarks and achieve a record-breaking PSNR of 38.087 dB on the GoPro dataset, 5.377 dB better than the most recent deblurring method.
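To illustrate the idea of replacing ReLU with a k-winner activation inside a residual block, the sketch below shows a minimal PyTorch version, assuming a simplified, non-boosted k-winners-take-all rule rather than the authors' trainable HTM spatial pooler; the names `KWinners`, `SparseResBlock`, and the `sparsity` parameter are illustrative, not taken from the paper's code.

```python
# Minimal sketch of a sparse ResNet block whose non-linearity is a
# k-winners-take-all activation instead of ReLU. This is a simplified
# stand-in for the trainable HTM spatial pooler described in the abstract.
import torch
import torch.nn as nn


class KWinners(nn.Module):
    """Keep only the top-k activations per sample; zero out the rest."""

    def __init__(self, sparsity: float = 0.1):
        super().__init__()
        self.sparsity = sparsity  # fraction of units allowed to stay active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.flatten(start_dim=1)                    # (N, C*H*W)
        k = max(1, int(self.sparsity * flat.shape[1]))   # number of winners per sample
        thresh = flat.topk(k, dim=1).values[:, -1:]      # value of the k-th largest unit
        mask = (flat >= thresh).float().view_as(x)       # winners keep their value
        return x * mask


class SparseResBlock(nn.Module):
    """Residual block using sparse k-winner activation in place of ReLU."""

    def __init__(self, channels: int = 64, sparsity: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            KWinners(sparsity),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual (skip) connection


if __name__ == "__main__":
    block = SparseResBlock(channels=64, sparsity=0.1)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In a CycleGAN-style generator, several such blocks would sit between the downsampling and upsampling stages; the sparsity level shown here is an assumed hyperparameter, not a value reported in the paper.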