Harnessing the Conditioning Sensorium for Improved Image Translation

10/13/2021
by Cooper Nederhood et al.

Multi-modal domain translation typically refers to synthesizing a novel image that inherits certain localized attributes from a 'content' image (e.g. layout, semantics, or geometry) and inherits everything else (e.g. texture, lighting, sometimes even semantics) from a 'style' image. The dominant approach to this task attempts to learn disentangled 'content' and 'style' representations from scratch. However, this is not only challenging but ill-posed, as what users wish to preserve during translation varies depending on their goals. Motivated by this inherent ambiguity, we define 'content' based on conditioning information extracted by off-the-shelf pre-trained models. We then train our style extractor and image decoder with an easy-to-optimize set of reconstruction objectives. The wide variety of high-quality pre-trained models available, together with the simple training procedure, makes our approach straightforward to apply across numerous domains and definitions of 'content'. Additionally, it offers intuitive control over which aspects of 'content' are preserved across domains. We evaluate our method on traditional, well-aligned datasets such as CelebA-HQ, and propose two novel datasets for evaluation on more complex scenes: ClassicTV and FFHQ-Wild. Our approach, Sensorium, enables higher-quality domain translation for more complex scenes.
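The pipeline the abstract describes can be sketched in a few lines. Below is a minimal, illustrative toy version: `extract_content` stands in for an off-the-shelf pre-trained conditioning model (the paper uses real networks such as segmenters or depth estimators), `extract_style` and `decode` stand in for the learned style extractor and image decoder, and `reconstruction_loss` shows the self-reconstruction training signal. All function names and internals here are hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def extract_content(img):
    # Stand-in for a frozen, pre-trained conditioning extractor
    # (e.g. a segmentation or depth network). Here: a coarse edge/
    # layout map computed from image gradients.
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    return np.abs(gy) + np.abs(gx)

def extract_style(img):
    # Stand-in for the learned style extractor: global per-channel
    # statistics summarizing texture/lighting.
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1))

def decode(content, style):
    # Stand-in for the learned decoder: normalize the content map,
    # then modulate it with the style statistics (AdaIN-like).
    mean, std = style
    c = (content - content.mean()) / (content.std() + 1e-8)
    return c[..., None] * std + mean

def reconstruction_loss(img):
    # Training objective sketch: an image decoded from its own
    # content and style should reconstruct itself.
    recon = decode(extract_content(img), extract_style(img))
    return float(np.mean((recon - img) ** 2))

def translate(content_img, style_img):
    # Inference-time translation: content from one image, style
    # from another.
    return decode(extract_content(content_img), extract_style(style_img))
```

Because the content definition lives entirely in `extract_content`, swapping in a different pre-trained model (semantics vs. geometry, say) changes what is preserved across domains without retraining machinery from scratch; that is the control the abstract highlights.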


Related research

- ISF-GAN: An Implicit Style Function for High-Resolution Image-to-Image Translation (09/26/2021)
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation (06/16/2021)
- Cross-Domain Cascaded Deep Feature Translation (06/04/2019)
- StEP: Style-based Encoder Pre-training for Multi-modal Image Synthesis (04/14/2021)
- Unsupervised Image to Image Translation for Multiple Retinal Pathology Synthesis in Optical Coherence Tomography Scans (12/11/2021)
- Simple Disentanglement of Style and Content in Visual Representations (02/20/2023)
