Separating Style and Content for Generalized Style Transfer

11/17/2017
by Yexun Zhang, et al.

Neural style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is therefore not generalizable to new styles. We instead attempt to separate the representations of style and content, and propose a generalized style transfer network consisting of a style encoder, a content encoder, a mixer, and a decoder. The style encoder and content encoder extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate these two factors, and the result is fed into the decoder to generate images with the target style and content. To separate the style and content features, we leverage the conditional dependence of styles and contents given an image. During training, the encoder networks learn to extract styles and contents from two sets of reference images of limited size, one with a shared style and the other with a shared content. This learning framework allows simultaneous style transfer among multiple styles and can be viewed as a special 'multi-task' learning scenario. The encoders are expected to capture underlying features of different styles and contents that generalize to new styles and contents. For validation, we apply the proposed algorithm to the Chinese typeface transfer problem. Extensive experimental results on character generation demonstrate the effectiveness and robustness of our method.
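The abstract outlines an encoder-mixer-decoder layout: two encoders produce a style factor and a content factor, a bilinear model mixes them, and a decoder renders the target image. The sketch below is a minimal, hypothetical PyTorch rendering of that layout; the layer sizes, channel counts, the use of torch.nn.Bilinear for the mixer, and the averaging over each reference set are assumptions made for illustration, not the authors' published architecture.

```python
# Minimal sketch of the encoder-mixer-decoder layout described in the abstract.
# All sizes and layer choices below are illustrative assumptions.
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Shared structure used for both the style encoder and the content encoder."""
    def __init__(self, in_channels=1, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(256, feat_dim)

    def forward(self, x):
        # x: a batch of reference images -> one feature vector per image
        return self.fc(self.net(x).flatten(1))


class BilinearMixer(nn.Module):
    """Combines a style factor s and a content factor c with a bilinear map."""
    def __init__(self, feat_dim=256, out_dim=256):
        super().__init__()
        # nn.Bilinear computes s^T W_k c + b_k for each output unit k.
        self.bilinear = nn.Bilinear(feat_dim, feat_dim, out_dim)

    def forward(self, s, c):
        return self.bilinear(s, c)


class Decoder(nn.Module):
    """Upsamples the mixed factor into an image with the target style and content."""
    def __init__(self, feat_dim=256, out_channels=1):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))


class StyleTransferNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.style_encoder = ConvEncoder()
        self.content_encoder = ConvEncoder()
        self.mixer = BilinearMixer()
        self.decoder = Decoder()

    def forward(self, style_refs, content_refs):
        # style_refs share one style, content_refs share one content;
        # averaging over each reference set yields a single factor per set.
        s = self.style_encoder(style_refs).mean(dim=0, keepdim=True)
        c = self.content_encoder(content_refs).mean(dim=0, keepdim=True)
        return self.decoder(self.mixer(s, c))


# Usage example: render one 64x64 character from 4 style and 4 content references.
# net = StyleTransferNet()
# out = net(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```

The bilinear mixer is the key design choice: each output unit computes an interaction term between every style dimension and every content dimension, whereas simply concatenating the two factors and applying a linear layer would only model additive effects.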

Related research

06/13/2018  A Unified Framework for Generalizable Style Transfer: Style and Content Separation
03/30/2021  Diagonal Attention and Style-based GAN for Content-Style Disentanglement in Image Generation and Translation
07/07/2022  Harnessing Out-Of-Distribution Examples via Augmenting Content and Style
07/25/2019  Style Conditioned Recommendations
04/11/2018  SHAPED: Shared-Private Encoder-Decoder for Text Style Adaptation
12/01/2017  Multi-Content GAN for Few-Shot Font Style Transfer
10/02/2019  A Deep Factorization of Style and Structure in Fonts
