ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

04/12/2023
by Jiazheng Xu, et al.

We present ImageReward – the first general-purpose text-to-image human preference reward model – to address various prevalent issues in generative models and align them with human values and preferences. Its training is based on our systematic annotation pipeline that covers both the rating and ranking components, collecting a dataset of 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring methods (e.g., CLIP by 38.6%), making it a promising automatic metric for evaluating and improving text-to-image synthesis. The reward model is publicly available via the package at <https://github.com/THUDM/ImageReward>.
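For readers who want to try the released model, the repository above distributes it as a pip package (image-reward). The sketch below follows the usage pattern described in that repository's README; the model name "ImageReward-v1.0" and the load/score/inference_rank calls are taken from there and should be treated as assumptions if the package has since changed, and the prompt and file paths are purely illustrative.

```python
# Minimal sketch: rank several candidate generations for one prompt with ImageReward.
# Assumes `pip install image-reward`; API names follow the THUDM/ImageReward README.
import ImageReward as RM

prompt = "a painting of an ocean with clouds and birds, day time, low depth field effect"
images = ["candidate_0.png", "candidate_1.png", "candidate_2.png"]  # hypothetical file paths

model = RM.load("ImageReward-v1.0")  # downloads and loads the pretrained reward model

# Rank the candidates by predicted human preference (higher reward means preferred).
ranking, rewards = model.inference_rank(prompt, images)
print("ranking:", ranking)
print("rewards:", rewards)

# A single image can also be scored directly against the prompt.
print("score:", model.score(prompt, images[0]))
```

Used this way, the predicted reward can serve as a reranking criterion (best-of-n selection over samples) or as an automatic evaluation signal alongside human studies.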

Related research

- Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis (06/15/2023)
- Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation (04/04/2023)
- Everyone Deserves A Reward: Learning Customized Human Preferences (09/06/2023)
- Better Aligning Text-to-Image Models with Human Preference (03/25/2023)
- Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis (10/10/2022)
- If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection (05/22/2023)
- Keep it Simple: Unsupervised Simplification of Multi-Paragraph Text (07/07/2021)
