SGDiff: A Style Guided Diffusion Model for Fashion Synthesis

08/15/2023
by   Zhengwentai Sun, et al.

This paper reports on the development of a novel style-guided diffusion model (SGDiff) that overcomes certain weaknesses inherent in existing models for image synthesis. The proposed SGDiff combines an image modality with a pretrained text-to-image diffusion model to facilitate creative fashion image synthesis. It addresses the limitations of text-to-image diffusion models by incorporating supplementary style guidance, substantially reducing training costs, and overcoming the difficulty of controlling synthesized styles with text-only inputs. This paper also introduces a new dataset, SG-Fashion, designed specifically for fashion image synthesis applications, offering high-resolution images and an extensive range of garment categories. Through a comprehensive ablation study, we examine the application of classifier-free guidance to a variety of conditions and validate the effectiveness of the proposed model for generating fashion images of the desired categories, product attributes, and styles. The contributions of this paper include a novel classifier-free guidance method for multi-modal feature fusion, a comprehensive dataset for fashion image synthesis applications, a thorough investigation of conditioned text-to-image synthesis, and valuable insights for future research in the text-to-image synthesis domain. The code and dataset are available at: https://github.com/taited/SGDiff.
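
The abstract highlights a classifier-free guidance method that fuses a text condition with a style-image condition. As a rough, hedged illustration only (the paper's exact formulation and interface are not reproduced here; `guided_eps`, `denoiser`, and the guidance weights below are hypothetical stand-ins), one common way to extend classifier-free guidance to two conditions is to chain the guidance terms, running the denoiser unconditionally, with text only, and with text plus style:

```python
import torch

def guided_eps(denoiser, x_t, t, text_emb, style_emb, null_text, null_style,
               w_text=7.5, w_style=3.0):
    """Compose a guided noise prediction from three denoiser passes."""
    eps_uncond = denoiser(x_t, t, null_text, null_style)   # no conditions
    eps_text = denoiser(x_t, t, text_emb, null_style)      # text only
    eps_full = denoiser(x_t, t, text_emb, style_emb)       # text + style image
    # Each weight controls how strongly its condition steers the prediction
    # away from the weaker configuration.
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_style * (eps_full - eps_text))

# Toy usage with a stand-in denoiser so the sketch runs end to end.
if __name__ == "__main__":
    def dummy_denoiser(x, t, txt, sty):
        return x * 0.1 + txt.mean() + sty.mean()

    x = torch.randn(1, 4, 64, 64)
    txt, null_txt = torch.randn(77, 768), torch.zeros(77, 768)
    sty, null_sty = torch.randn(1, 768), torch.zeros(1, 768)
    eps = guided_eps(dummy_denoiser, x, torch.tensor([10]), txt, sty,
                     null_txt, null_sty)
    print(eps.shape)  # torch.Size([1, 4, 64, 64])
```

The larger weight on the text term and the smaller weight on the style term are illustrative defaults, not values from the paper; in practice each scale would be tuned per condition.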

Related research

09/27/2022  Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion
Digital art synthesis is receiving increasing attention in the multimedi...

09/08/2023  Style Generation: Image Synthesis based on Coarsely Matched Texts
Previous text-to-image synthesis algorithms typically use explicit textu...

05/11/2023  Null-text Guidance in Diffusion Models is Secretly a Cartoon-style Creator
Classifier-free guidance is an effective sampling technique in diffusion...

07/26/2022  Text-Guided Synthesis of Artistic Images with Retrieval-Augmented Diffusion Models
Novel architectures have recently improved generative image synthesis le...

04/12/2023  DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
We present DreamPose, a diffusion-based method for generating animated f...

03/30/2023  Methods and advancement of content-based fashion image retrieval: A Review
Content-based fashion image retrieval (CBFIR) has been widely used in ou...

09/11/2023  PAI-Diffusion: Constructing and Serving a Family of Open Chinese Diffusion Models for Text-to-image Synthesis on the Cloud
Text-to-image synthesis for the Chinese language poses unique challenges...
