Personalizing Text-to-Image Generation via Aesthetic Gradients

09/25/2022
by   Víctor Gallego, et al.

This work proposes aesthetic gradients, a method to personalize a CLIP-conditioned diffusion model by guiding the generative process towards custom aesthetics defined by the user from a set of images. The approach is validated with qualitative and quantitative experiments, using the recent stable diffusion model and several aesthetically-filtered datasets. Code is released at https://github.com/vicgalle/stable-diffusion-aesthetic-gradients
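The abstract describes personalizing generation by steering toward an aesthetic defined from a set of user images. A minimal sketch of that idea, not the authors' released implementation: assume each user image has a CLIP embedding, average and normalize them into an "aesthetic embedding," then take a few gradient-ascent steps that move the text conditioning toward higher cosine similarity with it. Function names, the learning rate, and the step count here are illustrative assumptions.

```python
import numpy as np

def aesthetic_embedding(image_embs: np.ndarray) -> np.ndarray:
    """L2-normalized mean of the user's CLIP image embeddings (assumed formulation)."""
    e = image_embs.mean(axis=0)
    return e / np.linalg.norm(e)

def personalize(text_emb: np.ndarray, e: np.ndarray,
                lr: float = 0.1, steps: int = 5) -> np.ndarray:
    """Nudge a text embedding toward the aesthetic embedding.

    Performs a few gradient-ascent steps on cosine similarity
    sim(t, e) = (t . e) / (|t| |e|); the closed-form gradient w.r.t. t is
    e / (|t||e|) - sim * t / |t|^2.
    """
    t = text_emb.astype(float).copy()
    for _ in range(steps):
        tn, en = np.linalg.norm(t), np.linalg.norm(e)
        cos = t @ e / (tn * en)
        grad = e / (tn * en) - cos * t / tn**2
        t = t + lr * grad  # ascent: increase similarity to the aesthetic
    return t
```

In the paper's setting the update would be applied to the CLIP text-encoder weights during sampling rather than directly to a frozen embedding vector; the sketch above only illustrates the direction of the guidance signal.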

