Diff-Font: Diffusion Model for Robust One-Shot Font Generation

12/12/2022
by Haibin He et al.

Font generation is a difficult and time-consuming task, especially for languages such as Chinese that use ideograms with complicated structures and a large number of characters. To address this problem, few-shot and even one-shot font generation have attracted considerable attention. However, most existing font generation methods still suffer from (i) the large cross-font gap; (ii) subtle cross-font variations; and (iii) incorrect generation of structurally complicated characters. In this paper, we propose Diff-Font, a novel one-shot font generation method based on a diffusion model that can be trained stably on large datasets. The proposed model aims to generate an entire font library given only a single character sample as the reference. Specifically, we construct a large stroke-wise dataset and propose a stroke-wise diffusion model to preserve the structure and completeness of each generated character. To the best of our knowledge, Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to the cross-font gap and font variation, but also achieves promising performance on the generation of difficult characters. Compared with previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
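
The abstract describes a stroke-wise diffusion model that synthesizes every glyph of a target font from character content, stroke attributes, and a single style reference. As a rough illustration of that conditional-DDPM setup, the PyTorch sketch below fuses the three conditions into one vector and trains a toy denoiser with the standard noise-prediction loss. All module names, dimensions, and defaults (ConditionEncoder, TinyDenoiser, ddpm_training_step, the 32-dimensional stroke vector, and so on) are hypothetical stand-ins for illustration, not the paper's actual architecture.

```python
# Minimal sketch of the conditional-diffusion idea in the abstract: denoise glyph
# images conditioned on (character content, stroke attributes, style reference).
# All names and sizes below are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionEncoder(nn.Module):
    """Fuses a character id, a stroke-attribute vector, and one reference glyph
    into a single conditioning vector (hypothetical design)."""

    def __init__(self, num_chars=6625, stroke_dim=32, cond_dim=256):
        super().__init__()
        self.content_emb = nn.Embedding(num_chars, cond_dim)
        self.stroke_proj = nn.Linear(stroke_dim, cond_dim)
        self.style_enc = nn.Sequential(               # tiny stand-in style encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, cond_dim),
        )

    def forward(self, char_id, strokes, ref_glyph):
        return (self.content_emb(char_id)
                + self.stroke_proj(strokes)
                + self.style_enc(ref_glyph))


class TinyDenoiser(nn.Module):
    """Stand-in denoising network: predicts the noise added to a glyph image."""

    def __init__(self, cond_dim=256):
        super().__init__()
        self.inp = nn.Conv2d(1, 64, 3, padding=1)
        self.cond_proj = nn.Linear(cond_dim + 1, 64)  # condition + timestep
        self.mid = nn.Conv2d(64, 64, 3, padding=1)
        self.out = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x_t, t, cond):
        h = F.relu(self.inp(x_t))
        shift = self.cond_proj(torch.cat([cond, t[:, None]], dim=1))
        h = h + shift[:, :, None, None]               # feature-wise conditioning
        return self.out(F.relu(self.mid(h)))


def ddpm_training_step(encoder, denoiser, x0, char_id, strokes, ref_glyph,
                       alphas_cumprod):
    """One standard DDPM noise-prediction loss step on glyph images x0."""
    b, num_steps = x0.size(0), alphas_cumprod.size(0)
    t = torch.randint(0, num_steps, (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward noising
    cond = encoder(char_id, strokes, ref_glyph)
    pred = denoiser(x_t, t.float() / num_steps, cond)
    return F.mse_loss(pred, noise)


if __name__ == "__main__":
    # Toy run with 80x80 glyphs and a linear beta schedule.
    enc, den = ConditionEncoder(), TinyDenoiser()
    alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, 1000), dim=0)
    x0 = torch.rand(4, 1, 80, 80)            # target glyphs in the new font
    char_id = torch.randint(0, 6625, (4,))   # which characters to render
    strokes = torch.rand(4, 32)              # stroke attribute vectors
    ref = torch.rand(4, 1, 80, 80)           # one style reference glyph
    loss = ddpm_training_step(enc, den, x0, char_id, strokes, ref, alphas_cumprod)
    loss.backward()
```

Generating the full font library would then amount to running standard reverse diffusion sampling for each character id while reusing the same single style reference, in line with the one-shot setting the abstract describes.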


Related research

05/30/2023  Calliffusion: Chinese Calligraphy Generation and Style Transfer with Diffusion Modeling

06/22/2021  Zero-Shot Chinese Character Recognition with Stroke-Level Decomposition

07/03/2023  DifFSS: Diffusion Model for Few-Shot Semantic Segmentation

11/09/2018  Typeface Completion with Generative Adversarial Networks

05/25/2023  Zero-shot Generation of Training Data with Denoising Diffusion Probabilistic Model for Handwritten Chinese Character Recognition

05/16/2023  A Method for Training-free Person Image Picture Generation

12/06/2022  GAS-Net: Generative Artistic Style Neural Networks for Fonts
