TRIP: Refining Image-to-Image Translation via Rival Preferences

by Yinghua Yao et al.

Relative attribute (RA), the preference between two images regarding the strength of a specific attribute, enables fine-grained image-to-image translation thanks to its rich semantic information. Existing work based on RAs, however, fails to reconcile the goal of fine-grained translation with the goal of high-quality generation. We propose a new model, TRIP, to coordinate these two goals for high-quality fine-grained translation. In particular, we simultaneously train two modules: a generator that translates an input image into the desired image with smooth, subtle changes with respect to the attributes of interest; and a ranker that ranks rival preferences consisting of the input image and the desired image. Rival preferences refer to an adversarial ranking process: (1) the ranker perceives no difference between the desired image and the input image in terms of the desired attributes; (2) the generator fools the ranker into believing that the desired image changes the attributes over the input image as desired. RAs over pairs of real images are introduced to guide the ranker to rank image pairs with respect to the attributes of interest only. With an effective ranker, the generator "wins" the adversarial game by producing high-quality images that present the desired changes in the attributes compared to the input image. Experiments on two face image datasets and one shoe image dataset demonstrate that TRIP achieves state-of-the-art results in generating high-fidelity images that exhibit smooth changes over the attributes of interest.
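The rival-preference game described above can be sketched as a pair of opposing objectives on the ranker's scalar preference score. The following is a minimal illustration, not the paper's actual implementation: the function names, the use of a mean-squared-error objective, and the label convention (+1 / -1 for annotated RA pairs, 0 for a "tie") are assumptions made for clarity.

```python
def mse(pred, target):
    """Squared error between a scalar prediction and its target."""
    return (pred - target) ** 2

# Hypothetical scalar outputs of a ranker r(x, x') are used below.
# Label scheme, following the abstract's description:
#   real RA pair        -> ranker matches the annotated preference (+1 / -1)
#   (input, generated)  -> ranker is pushed toward 0 ("no difference")
#   generator           -> trained so the ranker reports the desired change

def ranker_loss(score_real_pair, ra_label, score_fake_pair):
    # Supervision from real relative-attribute pairs, plus the
    # adversarial "tie" objective on (input, generated) pairs.
    return mse(score_real_pair, ra_label) + mse(score_fake_pair, 0.0)

def generator_loss(score_fake_pair, desired_change):
    # The generator tries to fool the ranker into reporting the
    # desired attribute change between input and generated image.
    return mse(score_fake_pair, desired_change)
```

Because RA supervision constrains the ranker to score pairs along the attribute of interest only, the generator's only way to "win" is to actually produce the requested attribute change rather than exploit unrelated image differences.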



