Improving Diffusion Model Efficiency Through Patching

07/09/2022
by   Troy Luhman, et al.

Diffusion models are a powerful class of generative models that iteratively denoise samples to produce data. While much prior work has focused on reducing the number of iterations in this sampling procedure, comparatively little has addressed the cost of each iteration. We find that adding a simple ViT-style patching transformation can considerably reduce a diffusion model's sampling time and memory usage. We justify our approach both through an analysis of the diffusion model objective and through empirical experiments on LSUN Church, ImageNet 256, and FFHQ 1024. We provide implementations in TensorFlow and PyTorch.
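The ViT-style patching transformation mentioned in the abstract can be illustrated with a space-to-depth rearrangement: a patch size p folds each non-overlapping p×p spatial block into the channel dimension, shrinking the spatial resolution the network must process by p² . The sketch below (NumPy, with hypothetical `patch`/`unpatch` names; the paper's actual implementation may differ) shows the transformation and its inverse:

```python
import numpy as np

def patch(x, p):
    """ViT-style patching: fold p x p spatial blocks into channels.
    (C, H, W) -> (C * p * p, H // p, W // p)."""
    c, h, w = x.shape
    assert h % p == 0 and w % p == 0, "H and W must be divisible by p"
    x = x.reshape(c, h // p, p, w // p, p)       # split H and W into blocks
    x = x.transpose(0, 2, 4, 1, 3)               # move block offsets next to C
    return x.reshape(c * p * p, h // p, w // p)

def unpatch(x, p):
    """Inverse of patch: unfold channels back into p x p spatial blocks."""
    cpp, hp, wp = x.shape
    c = cpp // (p * p)
    x = x.reshape(c, p, p, hp, wp)
    x = x.transpose(0, 3, 1, 4, 2)               # interleave blocks back into H, W
    return x.reshape(c, hp * p, wp * p)

# A 3-channel 8x8 image with patch size 2 becomes a 12-channel 4x4 tensor,
# so every convolution afterwards operates on 4x fewer spatial positions.
img = np.arange(3 * 8 * 8, dtype=np.float64).reshape(3, 8, 8)
patched = patch(img, 2)
print(patched.shape)                   # (12, 4, 4)
print(np.allclose(unpatch(patched, 2), img))   # True: lossless round trip
```

Because the transformation is invertible and lossless, the denoising network can operate entirely in the patched space and the output is unpatched back to full resolution at the end.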


Related research

- 05/16/2023, Expressiveness Remarks for Denoising Diffusion Models and Samplers: Denoising diffusion models are a class of generative models which have r...
- 03/05/2019, Theoretical guarantees for sampling and inference in generative models with latent diffusions: We introduce and study a class of probabilistic generative models, where...
- 11/13/2020, Diffusion models for Handwriting Generation: In this paper, we propose a diffusion probabilistic model for handwritin...
- 03/14/2023, Interpretable ODE-style Generative Diffusion Model via Force Field Construction: For a considerable time, researchers have focused on developing a method...
- 06/01/2022, Elucidating the Design Space of Diffusion-Based Generative Models: We argue that the theory and practice of diffusion-based generative mode...
- 09/29/2022, Analyzing Diffusion as Serial Reproduction: Diffusion models are a class of generative models that learn to synthesi...
- 10/10/2022, f-DM: A Multi-stage Diffusion Model via Progressive Signal Transformation: Diffusion models (DMs) have recently emerged as SoTA tools for generativ...
