Disentangling Writer and Character Styles for Handwriting Generation

03/26/2023
by Gang Dai, et al.

Training machines to synthesize diverse handwritings is an intriguing task. Recently, RNN-based methods have been proposed to generate stylized online Chinese characters. However, these methods mainly focus on capturing a person's overall writing style, neglecting subtle style inconsistencies between characters written by the same person. For example, while a person's handwriting typically exhibits general uniformity (e.g., glyph slant and aspect ratios), there are still small style variations in the finer details (e.g., stroke length and curvature) of characters. In light of this, we propose to disentangle the style representations at both the writer and character levels from individual handwritings to synthesize realistic stylized online handwritten characters. Specifically, we present the style-disentangled Transformer (SDT), which employs two complementary contrastive objectives to extract the style commonalities of reference samples and to capture the detailed style patterns of each sample, respectively. Extensive experiments on various language scripts demonstrate the effectiveness of SDT. Notably, our empirical findings reveal that the two learned style representations provide information at different frequency magnitudes, underscoring the importance of separate style extraction. Our source code is publicly available at: https://github.com/dailenson/SDT.
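
The two complementary contrastive objectives described in the abstract can be pictured as a pair of InfoNCE-style losses: one pulls together style features extracted from different characters written by the same writer (writer level), the other contrasts two views of the same character sample (character level). The sketch below is a hypothetical PyTorch illustration under those assumptions; the function names, tensor shapes, and temperature are placeholders and do not reproduce the authors' implementation, which is available in the linked repository.

```python
# Hypothetical sketch of writer-level and character-level contrastive losses.
# All names and shapes are illustrative assumptions, not the SDT codebase.
import torch
import torch.nn.functional as F


def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE: the positive for row i of `anchors` is row i of
    `positives`; every other row in the batch serves as a negative."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.t() / temperature           # (B, B) similarities
    targets = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, targets)


def writer_level_loss(style_a, style_b, temperature=0.1):
    """style_a / style_b: (B, D) features from two *different* characters
    written by the same writer, so positives share a writer identity and the
    loss encourages capturing style commonalities across a writer's samples."""
    return info_nce(style_a, style_b, temperature)


def character_level_loss(view_1, view_2, temperature=0.1):
    """view_1 / view_2: (B, D) features from two views of the *same* character
    sample, so positives share a sample and the loss encourages capturing the
    fine-grained style details of each individual character."""
    return info_nce(view_1, view_2, temperature)


if __name__ == "__main__":
    B, D = 32, 256                                 # batch size, feature dimension
    writer_a, writer_b = torch.randn(B, D), torch.randn(B, D)
    char_v1, char_v2 = torch.randn(B, D), torch.randn(B, D)
    # Random tensors stand in for style-encoder outputs in this toy example.
    total = writer_level_loss(writer_a, writer_b) + character_level_loss(char_v1, char_v2)
    print(f"combined style-disentanglement loss: {total.item():.4f}")
```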

Related research

12/06/2017 - Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
Automatically writing stylized Chinese characters is an attractive yet c...

11/09/2018 - Typeface Completion with Generative Adversarial Networks
The mood of a text and the intention of the writer can be reflected in t...

11/08/2019 - Content-Consistent Generation of Realistic Eyes with Style
Accurately labeled real-world training data can be scarce, and hence rec...

09/23/2020 - Few-shot Font Generation with Localized Style Representations and Factorization
Automatic few-shot font generation is in high demand because manual desi...

03/27/2023 - Handwritten Text Generation from Visual Archetypes
Generating synthetic images of handwritten text in a writer-specific sty...

04/02/2021 - Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts
A few-shot font generation (FFG) method has to satisfy two objectives: t...

01/19/2022 - CAST: Character labeling in Animation using Self-supervision by Tracking
Cartoons and animation domain videos have very different characteristics...
