Dissecting Arbitrary-scale Super-resolution Capability from Pre-trained Diffusion Generative Models

06/01/2023
by Ruibin Li, et al.

Diffusion-based Generative Models (DGMs) have achieved unparalleled performance in synthesizing high-quality visual content, opening up opportunities to improve image super-resolution (SR). Recent solutions often train architecture-specific DGMs from scratch or require iterative fine-tuning and distillation of pre-trained DGMs, both of which demand considerable time and hardware investment. More seriously, because these DGMs are built around a discrete, pre-defined upsampling scale, they cannot meet the emerging requirements of arbitrary-scale super-resolution (ASSR), where a single unified model adapts to arbitrary upsampling scales rather than a series of distinct models being prepared for each case. These limitations raise an intriguing question: can we unlock the ASSR capability of existing pre-trained DGMs without distillation or fine-tuning? In this paper, we take a step toward resolving this matter by proposing Diff-SR, the first ASSR approach based solely on pre-trained DGMs, with no additional training. It is motivated by an exciting finding: a simple methodology that first injects a specific amount of noise into the low-resolution image and then invokes the DGM's backward diffusion process outperforms current leading solutions. The key insight is determining a suitable amount of noise to inject: too little yields poor low-level fidelity, while too much degrades the high-level signature. Through a fine-grained theoretical analysis, we propose the Perceptual Recoverable Field (PRF), a metric that achieves the optimal trade-off between these two factors. Extensive experiments verify the effectiveness, flexibility, and adaptability of Diff-SR, demonstrating superior performance to state-of-the-art solutions in diverse ASSR settings.
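The noise-injection step described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it applies the standard DDPM forward-diffusion formula x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε to a stand-in for an upsampled low-resolution image, at a chosen timestep t. The schedule parameters, the function names, and the choice of timesteps are assumptions for illustration; in Diff-SR the injection level would be selected via the PRF metric, and the noised image would then be handed to a pre-trained DGM's backward diffusion process (omitted here, since it requires a trained model).

```python
import numpy as np

def make_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    # Linear beta schedule as in DDPM; alpha_bar_t is the cumulative
    # product of (1 - beta_s), i.e. the squared signal coefficient at step t.
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def inject_noise(x0, t, alpha_bar, rng=None):
    """Forward-diffuse an (upsampled) LR image x0 to timestep t:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

alpha_bar = make_alpha_bar()
x0 = np.zeros((8, 8, 3))  # stand-in for a bicubically upsampled LR image

# Small t: mostly signal, little noise -- preserves low-level fidelity.
x_small = inject_noise(x0, 50, alpha_bar)
# Large t: almost pure Gaussian noise -- the high-level signature is lost.
x_large = inject_noise(x0, 900, alpha_bar)
```

In practice, `x_small` would still carry most of the LR content (ᾱ_50 ≈ 0.97 for this schedule), while `x_large` is nearly indistinguishable from Gaussian noise (ᾱ_900 < 0.01), which is the trade-off the PRF metric is designed to balance.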

