Restart Sampling for Improving Generative Processes

06/26/2023
by Yilun Xu et al.

Generative processes that involve solving differential equations, such as diffusion models, frequently necessitate balancing speed and quality. ODE-based samplers are fast but plateau in performance, while SDE-based samplers deliver higher sample quality at the cost of increased sampling time. We attribute this difference to sampling errors: ODE-based samplers involve smaller discretization errors, while the stochasticity in SDEs contracts accumulated errors. Based on these findings, we propose a novel sampling algorithm called Restart to better balance discretization error and error contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, the Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. Restart not only outperforms the previous best SDE results, but also accelerates sampling speed by 10-fold on CIFAR-10 and 2-fold on ImageNet 64 × 64. In addition, it attains significantly better sample quality than ODE samplers within comparable sampling times. Moreover, Restart strikes a better balance between text-image alignment/visual quality and diversity than previous samplers in the large-scale text-to-image Stable Diffusion model pre-trained on LAION 512 × 512. Code is available at https://github.com/Newbeeer/diffusion_restart_sampling.
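The alternation described above can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' exact algorithm: the function names, the single restart interval, and the variance-exploding (VE) forward-noise formula `sqrt(t_max² - t_min²)` are assumptions made for the sketch. The caller supplies `ode_step`, one step of their backward-ODE solver.

```python
import numpy as np

def restart_sampler(ode_step, x, ts, restart=(0.3, 1.0), n_restart=2,
                    n_inner=8, rng=None):
    """Toy sketch of Restart sampling (illustrative, not the paper's API).

    ode_step(x, t_cur, t_next) -- one backward-ODE solver step.
    ts     -- decreasing noise levels from t_max down to ~0.
    restart = (t_min_r, t_max_r) -- interval on which to restart.

    Runs the plain backward ODE; each time integration reaches the
    bottom of the restart interval, it re-injects noise to jump back
    up to t_max_r (forward step) and re-integrates the backward ODE
    over that interval, which contracts accumulated error.
    """
    rng = rng or np.random.default_rng(0)
    t_min_r, t_max_r = restart
    inner = np.linspace(t_max_r, t_min_r, n_inner)  # restart sub-schedule
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        x = ode_step(x, t_cur, t_next)              # backward ODE step
        if abs(t_next - t_min_r) < 1e-12:           # bottom of restart interval
            for _ in range(n_restart):
                # Forward jump: add fresh noise to climb back to t_max_r
                # (VE-style std; an assumption of this sketch).
                x = x + np.sqrt(t_max_r**2 - t_min_r**2) * \
                    rng.standard_normal(np.shape(x))
                # Strictly follow the backward ODE down again.
                for a, b in zip(inner[:-1], inner[1:]):
                    x = ode_step(x, a, b)
    return x
```

For a quick sanity check one can plug in the closed-form score of a standard Gaussian data distribution under VE noising, `score(x, t) = -x / (1 + t²)`, and verify that samples end up roughly unit-variance.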


