Person Image Synthesis via Denoising Diffusion Model

11/22/2022
by Ankan Kumar Bhunia, et al.
The pose-guided person image generation task requires synthesizing photorealistic images of humans in arbitrary poses. Existing approaches rely on generative adversarial networks, which do not necessarily preserve realistic textures, or on dense correspondences, which struggle with complex deformations and severe occlusions. In this work, we show how denoising diffusion models can be applied to high-fidelity person image synthesis with strong sample diversity and enhanced mode coverage of the learnt data distribution. Our proposed Person Image Diffusion Model (PIDM) decomposes the complex transfer problem into a series of simpler forward-backward denoising steps. This helps in learning plausible source-to-target transformation trajectories that yield faithful textures and undistorted appearance details. We introduce a 'texture diffusion module' based on cross-attention to accurately model the correspondences between the appearance and pose information available in the source and target images. Further, we propose 'disentangled classifier-free guidance' to ensure close resemblance between the conditional inputs and the synthesized output in terms of both pose and appearance. Our extensive results on two large-scale benchmarks and a user study demonstrate the photorealism of the proposed approach under challenging scenarios. We also show how the generated images can aid downstream tasks. Our code and models will be publicly released.
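To make the two mechanisms named in the abstract concrete, the sketch below illustrates (a) cross-attention in the spirit of the texture diffusion module, where target-side queries attend to source-texture keys/values, and (b) one plausible form of disentangled classifier-free guidance that applies separate guidance scales to the pose condition and the appearance condition. This is a minimal NumPy sketch, not the authors' implementation; the function names, the guidance weights `w_pose`/`w_style`, and the exact combination of noise predictions are assumptions for illustration.

```python
import numpy as np

def cross_attention(query, key, value):
    """Scaled dot-product cross-attention (sketch): queries derived from the
    noisy target/pose branch attend to keys/values derived from source-texture
    features, which is the core idea behind a texture-transfer module."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)          # (n_q, n_k) similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over source tokens
    return weights @ value                        # texture-weighted mixture

def disentangled_cfg(eps_uncond, eps_pose, eps_full, w_pose=2.0, w_style=2.0):
    """Disentangled classifier-free guidance (assumed form): the denoiser is
    queried three times and the two condition-specific differences are scaled
    independently.
      eps_uncond -- noise prediction with both conditions dropped
      eps_pose   -- prediction with only the target-pose condition
      eps_full   -- prediction with pose + source-appearance conditions"""
    return (eps_uncond
            + w_pose * (eps_pose - eps_uncond)    # steer toward the target pose
            + w_style * (eps_full - eps_pose))    # steer toward source appearance
```

With both guidance weights set to 1 the combination collapses to the fully conditioned prediction `eps_full`; weights above 1 trade sample diversity for stronger adherence to the pose and appearance conditions, mirroring how a single guidance scale behaves in standard classifier-free guidance.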

