Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators

by Reinhard Heckel et al.

Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. However, recent experimental findings challenge this view and instead suggest that a major contributing factor is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without any training data, simply by fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the single corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly, after a few iterations of gradient descent one obtains the uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of linear inverse problems such as compressive sensing. In this paper, we take a step towards demystifying this experimental phenomenon by attributing the effect to particular architectural choices of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises and regularizes. This result relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
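The abstract's central claim, that gradient descent on a generator built from fixed interpolating filters fits the structured (smooth) part of a signal much faster than the noise, can be illustrated with a minimal linear sketch in the spirit of the paper's two-layer analysis. The toy below is an illustrative assumption, not the paper's exact model: a 1D signal is reparameterized through a fixed triangular interpolating filter (a circulant operator `J`), and gradient descent is run on `min_c ||J c - y||^2`. Because `J` is circulant, the output after `t` steps has a closed form per Fourier mode, so early and late stopping can be compared directly without an iteration loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
grid = np.arange(n) / n
# Smooth, "structured" signal plus additive Gaussian noise.
clean = np.sin(2 * np.pi * grid) + 0.5 * np.sin(6 * np.pi * grid)
noisy = clean + 0.3 * rng.standard_normal(n)

# Fixed triangular interpolating filter on a circular grid, so that
# convolving with it is a circulant (FFT-diagonalizable) operator J.
w = 16
tri = 1.0 - np.abs(np.arange(-w, w + 1)) / (w + 1.0)
filt = np.zeros(n)
filt[np.arange(-w, w + 1) % n] = tri / tri.sum()

# Gradient descent on  min_c ||J c - y||^2  decouples across Fourier modes:
# mode k of the output x_t = J c_t after t steps equals
#     (1 - (1 - eta * lam_k**2)**t) * y_hat_k,
# so smooth modes (large lam_k) are fitted fast and noise modes slowly.
lam = np.real(np.fft.fft(filt))   # eigenvalues of J (real: symmetric filter)
eta = 1.0 / lam.max() ** 2        # stable step size

def output_after(steps):
    """Closed-form gradient-descent output after `steps` iterations."""
    gain = 1.0 - (1.0 - eta * lam ** 2) ** steps
    return np.real(np.fft.ifft(gain * np.fft.fft(noisy)))

def rmse(x):
    return float(np.sqrt(np.mean((x - clean) ** 2)))

early = output_after(100)      # early-stopped: mostly the smooth signal
late = output_after(10 ** 7)   # near convergence: noise largely fitted too
```

With these illustrative settings, the early-stopped reconstruction lands much closer to the clean signal than either the noisy input or the late-stopped fit, mirroring the paper's early-stopping result: the low-frequency (structured) modes are fitted within a few iterations, while fitting the high-frequency noise takes orders of magnitude longer.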


Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation


Regularizing linear inverse problems with convolutional neural networks


Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks


CNN Denoisers As Non-Local Filters: The Neural Tangent Denoiser


The Spectral Bias of the Deep Image Prior


Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization


The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior

