Separating Style and Content for Generalized Style Transfer

by Yexun Zhang, et al.

Neural style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, so the learned model does not generalize to new styles. We instead attempt to separate the representations for style and content, and propose a generalized style transfer network consisting of a style encoder, a content encoder, a mixer and a decoder. The style encoder and content encoder extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate these two factors and feeds the result into the decoder, which generates images with the target style and content. To separate the style and content features, we leverage the conditional dependence of style and content given an image. During training, the encoder network learns to extract styles and contents from two sets of reference images of limited size, one sharing a style and the other sharing a content. This learning framework allows simultaneous style transfer among multiple styles and can be viewed as a special 'multi-task' learning scenario. The encoders are expected to capture the underlying features of different styles and contents, which are generalizable to new styles and contents. For validation, we apply the proposed algorithm to the Chinese typeface transfer problem. Extensive experimental results on character generation demonstrate the effectiveness and robustness of our method.
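The core of the mixer is a bilinear model: each unit of the mixed representation is a bilinear form of the style factor and the content factor. A minimal numpy sketch of that combination rule follows; the dimensions, the function name `bilinear_mix`, and the random weights are illustrative assumptions, not the paper's actual architecture or parameters.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
style_dim, content_dim, out_dim = 4, 5, 3
rng = np.random.default_rng(0)

# Bilinear mixing tensor W: output unit k is m[k] = s^T W[k] c,
# so every output depends multiplicatively on both factors.
W = rng.standard_normal((out_dim, style_dim, content_dim))

def bilinear_mix(s, c, W):
    """Combine a style factor s and a content factor c via a bilinear model."""
    return np.einsum('d,kdc,c->k', s, W, c)

s = rng.standard_normal(style_dim)    # style factor (from the style encoder)
c = rng.standard_normal(content_dim)  # content factor (from the content encoder)
m = bilinear_mix(s, c, W)
print(m.shape)  # (3,)
```

Because the map is linear in each argument separately, scaling either factor scales the mixed representation by the same amount; in the full network this mixed vector would be passed to the decoder to render the target character.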




Related papers:

A Unified Framework for Generalizable Style Transfer: Style and Content Separation

Diagonal Attention and Style-based GAN for Content-Style Disentanglement in Image Generation and Translation

Harnessing Out-Of-Distribution Examples via Augmenting Content and Style

Style Conditioned Recommendations

SHAPED: Shared-Private Encoder-Decoder for Text Style Adaptation

Multi-Content GAN for Few-Shot Font Style Transfer

A Deep Factorization of Style and Structure in Fonts
