Chinese Typeface Transformation with Hierarchical Adversarial Network

11/17/2017 ∙ by Jie Chang, et al. ∙ Shanghai Jiao Tong University

In this paper, we explore automated typeface generation through image style transfer, which has shown great promise in natural image generation. Existing style transfer methods for natural images generally assume that the source and target images share similar high-frequency features. However, this assumption no longer holds in typeface transformation. Inspired by the recent advancement of Generative Adversarial Networks (GANs), we propose a Hierarchical Adversarial Network (HAN) for typeface transformation. The proposed HAN consists of two sub-networks: a transfer network and a hierarchical adversarial discriminator. The transfer network maps characters from one typeface to another. A unique characteristic of typefaces is that the same radicals may have quite different appearances in different characters, even under the same typeface. Hence, a staged-decoder is employed by the transfer network to leverage multiple feature layers, aiming to capture both the global and local features. The hierarchical adversarial discriminator implicitly measures the data discrepancy between the generated domain and the target domain. To leverage the complementary discriminating capability of different feature layers, a hierarchical structure is proposed for the discriminator. We have experimentally demonstrated that HAN is an effective framework for typeface transfer and character restoration.


1 Introduction

Chinese typeface design is a very time-consuming task, requiring considerable effort on the manual design of benchmark characters. Automated typeface synthesis, i.e., synthesizing characters of a certain typeface given a few manually designed samples, has been explored, usually based on manually extracted features. For example, each Chinese character is treated as a combination of its radicals and strokes, and shape representations of specified typefaces, such as the contour, orientation and component size, are explicitly learned [23, 24, 28, 27, 22]. However, these manual features rely heavily on a preceding structural segmentation of characters, which is itself a non-trivial task and heavily affected by prior knowledge.

In this paper, we model typeface transformation as an image-to-image transformation problem and attempt to learn the transformation directly, end-to-end. Typically, image-to-image transformation involves a transfer network that maps source images to target images, and a set of losses for learning the transfer network. The pixel loss is defined as the pixel-wise difference between the output and the corresponding ground-truth [11, 7]. The perceptual loss [8], perceptual similarity [3] and style&content loss [1] are proposed to evaluate the differences between hidden-level features, and all are based on the ideology of feature matching [18]. More recently, several variants of generative adversarial networks (e.g., CGAN [14], CycleGAN [29]), which introduce a discriminant network in addition to the transfer network for adversarial learning, have been successfully applied to image-to-image transformation, including in-painting [15], de-noising [25] and super-resolution [9]. While the above methods have shown great promise for various applications, they are not directly applicable to typeface transformation due to the following domain-specific characteristics.

  • Different from style transfer between natural images, where the source image shares high-frequency features with the target image, the transformation between two different typefaces usually distorts strokes or radicals (e.g., Fig. 1), meaning that a change of style leads to a change of high-level representations. Hence, we can neither use a pre-trained network (e.g., VGG [19]) to extract high-level representations as an invariant content representation during training, nor explicitly define the style representation.

  • In typeface transformation, different characters may share the same radicals. This is a nice peculiarity that typeface transformation methods can leverage, i.e., learning the direct mapping of radicals between the source and target styles. However, within one typeface the same radical may appear quite differently in different characters. Fig. 1(b) presents two examples where certain radicals have different appearances in the target style. Considering only this global property while ignoring detailed local information would lead to severe over-fitting.

Figure 1: (a) The target style twists strokes in the source character, so the two do not share invariant high-frequency features even though they are semantically the same character. (b) The components in the blue dotted boxes share the same radicals, but their counterparts in the target style (red dotted boxes) are quite different.

To overcome the above problems, we design a hierarchical adversarial network (HAN) for Chinese typeface transformation, consisting of a transfer network and a hierarchical discriminator (Fig. 2), both of which are fully convolutional neural networks. First, different from existing transfer networks, a staged-decoder is proposed which generates artificial images at multiple decoding layers, which is expected to help the decoder learn better representations in its hidden layers. Specifically, the staged-decoder attempts to maximally preserve the global topological structure at different decoding layers while simultaneously considering the local features decoded in hidden layers, thus enabling the transfer network to generate close-to-authentic characters instead of disordered strokes. Second, inspired by the multi-classifier design in GoogLeNet [20], which shows that the final feature layer may not provide rich and robust enough information for measuring the discrepancy between prediction and ground-truth, we propose a hierarchical discriminator for adversarial learning. Specifically, the discriminator introduces additional adversarial losses, each of which employs feature representations from different hidden layers. The multiple adversarial losses constitute a hierarchical form, enabling the discriminator to dynamically measure the discrepancy in distribution between the generated domain and the target domain, so that the transfer network is trained to generate outputs whose statistical characteristics are more similar to the targets at different levels of feature representation. The main contributions of our work are summarized as follows.

  • We introduce a staged-decoder in the transfer network which generates multiple sets of characters based on different layers of decoded information, capturing both the global and local information for transfer.

  • We propose a hierarchical discriminator which involves a cascade of adversarial losses at different layers of the network, each providing complementary adversarial capability. We have experimentally shown that the hierarchical discriminator leads to faster model convergence and generates more realistic samples.

  • The proposed hierarchical adversarial network (HAN) is shown to be successful for both typeface transfer and character restoration through extensive experimental studies. The impact of the proposed hierarchical adversarial loss is further investigated from different perspectives, including gradient propagation and the ideology of adversarial training.

Figure 2: The proposed Hierarchical Adversarial Network (HAN). HAN consists of an Encoder, a Staged-Decoder and a Hierarchical Adversarial Discriminator. The Encoder follows the Conv-BatchNorm [6]-ELU [2] architecture. The Staged-Decoder follows Conv-BatchNorm-ReLU, and two extra transformed characters are decoded from two intermediate features. The hierarchical adversarial discriminator distinguishes the transformed characters from the ground-truth based on multi-level features.

2 Related Work

Many natural image-to-image transformation tasks are domain transfer problems that map images from a source domain to a target domain. This transformation can be formulated at the pixel level (i.e., pixel-wise loss [26, 12]) or, more recently, at the feature level (i.e., perceptual loss [8], Gram-Matrix [4], VGG loss [9], style loss [1]). Feature-level methods can even be extended to unsupervised settings under the assumption that the input image and the desired output image share identical or close high-level representations. However, this assumption does not hold in handwriting transfer, since the high-level representations of source characters and target ones are sometimes totally different. Recently, generative adversarial networks [5], especially the variants CGAN [14] and DCGAN [16], have been successfully applied to a wide spectrum of image-to-image transformation tasks. Beyond the transfer network, CGAN-based methods introduce a discriminator, which involves an adversarial loss constraining the distribution of the generated domain to be close to that of the target domain. The adversarial loss is employed by all the above GAN-based studies, such as image super-resolution [9], de-noising [25] and in-painting [15]. Several studies leverage the generator or discriminator to extract hidden-level representations and then perform feature matching in both domains [3, 21].

In recent years, many image classification, detection or segmentation methods have leveraged the information in hidden layers of a CNN for training. GoogLeNet [20] introduced auxiliary classifiers connected to intermediate layers, based on the conclusion that features produced by the layers in the middle of the network should also be very discriminative. Many other CNN models utilize features produced in different intermediate layers to construct extra loss functions [21, 13]. These auxiliary losses are thought to combat the gradient-vanishing problem while providing regularization. We are the first to apply this idea to the discriminator in a GAN, measuring the similarity of two distributions based not only on high-level features but also on relatively low-level features.

3 Methods

In this section, we present the proposed Hierarchical Adversarial Network (HAN) for the typeface transformation task. HAN consists of a transfer network and a hierarchical discriminator; the former further consists of an Encoder and a Staged-Decoder. First, we introduce the transfer network, which is responsible for mapping typeface-A characters to typeface-B characters. Then we introduce the hierarchical adversarial discriminator, which helps the transfer network generate more realistic characters, especially for the subtle structures in Chinese characters. Finally, we present the details of the objective function.

3.1 FCN-Based Transfer Network

Encoder. The transfer network has an architecture similar to that of [17], with some modifications. Because relative-location information is critical for Chinese character synthesis, we replace pooling with strided convolution for down-sampling: pooling reduces dimensionality and retains only the most robust activations within each receptive field, thereby losing spatial information to some degree. Additionally, increasing the size of a neural network, especially its depth, is a straightforward way to improve model performance, so more uniform-sized conv-layers are added to our encoder for extracting more local features (see Fig. 2).
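The following is a minimal sketch of such an encoder, assuming PyTorch; the 8-layer depth follows Section 4.2, while the channel widths and the 64x64 input resolution are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride):
    # Strided convolution replaces pooling, so down-sampling is learned
    # and relative-location information is not simply discarded.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ELU(),
    )

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Alternate down-sampling (stride-2) and uniform-sized (stride-1)
        # conv layers; the stride-1 layers add depth for local features.
        self.stages = nn.ModuleList([
            conv_block(1, 64, 2),     # 64x64 -> 32x32 (grayscale input)
            conv_block(64, 64, 1),
            conv_block(64, 128, 2),   # -> 16x16
            conv_block(128, 128, 1),
            conv_block(128, 256, 2),  # -> 8x8
            conv_block(256, 256, 1),
            conv_block(256, 512, 2),  # -> 4x4
            conv_block(512, 512, 1),
        ])

    def forward(self, x):
        feats = []  # keep per-stage features for the decoder's skip connections
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return x, feats
```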

Staged-Decoder. As in the encoder, we insert additional uniform-sized conv-layers before each up-sampling conv-layer in the decoder. A deeper decoder helps model hierarchical representations of characters, including the global topological structure and the local topology of complicated Chinese characters. Considering the domain insight of our task discussed in Section 1, we further propose a staged-decoder that leverages the hierarchical representations of the decoder. Specifically, different intermediate features of the decoder are also used to generate characters. Together with the characters generated by the last layer, all of them are sent to the discriminator (see Fig. 2). We only measure the pixel-wise difference between the final generated characters and the corresponding ground-truth. The adversarial losses produced by the intermediate outputs help refine the transfer network; meanwhile, the losses produced by the intermediate layers of the decoder provide regularization for the parameters of the transfer network, which relieves the over-fitting problem to some degree. In addition, for typeface transformation, the input character and the desired output are expected to share the underlying topological structure but differ in appearance or style. Skip connections [17] are utilized to supplement partially invariant skeleton information of characters, with encoded features concatenated onto decoded features. Both the encoder and the staged-decoder are fully convolutional networks [12].
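Below is a minimal sketch of the staged-decoder under the same assumptions as the encoder sketch above (PyTorch, illustrative widths); the two auxiliary heads that decode characters from intermediate features, plus the final head, produce the three generated character sets sent to the discriminator.

```python
import torch
import torch.nn as nn

def up_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

class StagedDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.up1 = up_block(512, 256)        # 4x4 -> 8x8
        self.up2 = up_block(256 + 256, 128)  # skip-concat an encoder feature
        self.up3 = up_block(128 + 128, 64)
        self.up4 = up_block(64 + 64, 32)
        # Auxiliary heads decode characters from intermediate features; their
        # outputs feed only the discriminator, not the pixel loss.
        self.head2 = nn.Sequential(nn.Upsample(scale_factor=4),
                                   nn.Conv2d(128, 1, 3, padding=1))
        self.head3 = nn.Sequential(nn.Upsample(scale_factor=2),
                                   nn.Conv2d(64, 1, 3, padding=1))
        self.head4 = nn.Conv2d(32, 1, 3, padding=1)  # final output head

    def forward(self, z, enc_feats):
        d1 = self.up1(z)                                 # 8x8
        d2 = self.up2(torch.cat([d1, enc_feats[5]], 1))  # 16x16
        d3 = self.up3(torch.cat([d2, enc_feats[3]], 1))  # 32x32
        d4 = self.up4(torch.cat([d3, enc_feats[1]], 1))  # 64x64
        # Three generated character sets: two intermediate, one final.
        return self.head2(d2), self.head3(d3), self.head4(d4)
```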

3.2 Hierarchical Adversarial Discriminator

As mentioned in Section 2, the adversarial loss introduced by a discriminator is widely used in existing GAN-based image transformation tasks, but all of them estimate the distribution consistency of the two domains merely from the final extracted features of the discriminator. It is actually uncertain whether the features learned in the last layer provide rich and robust enough representations for the discriminator. In addition, the perceptual loss, which penalizes the discrepancy between representations of images in different hidden spaces, has recently been used in image-related works. We combine the idea of the perceptual loss with GANs, proposing a hierarchical adversarial discriminator that leverages the perceptual representations extracted from different intermediate layers of the discriminator and distinguishes the real/fake distributions of the generated domain and the target domain (see Fig. 2). Each adversarial loss is defined as:

$$\mathcal{L}_{adv}^{i}(D_i) = -\mathbb{E}_{y \sim p_{target}}\big[\log D_i(\phi_i(y))\big] - \mathbb{E}_{x \sim p_{source}}\big[\log\big(1 - D_i(\phi_i(G(x)))\big)\big] \qquad (1)$$
$$\mathcal{L}_{adv}^{i}(G) = -\mathbb{E}_{x \sim p_{source}}\big[\log D_i(\phi_i(G(x)))\big] \qquad (2)$$

where $\phi_i(y)$ and $\phi_i(G(x))$ are the perceptual representations learned in the discriminator from the target domain and the generated domain respectively, and $D_i$ is the branch discriminator cascaded after the $i$-th intermediate layer, $i \in \{1,\dots,k\}$, where $k$ depends on the number of convolutional layers in our discriminator $D$. This variation brings complementary adversarial training to our model, urging the discriminator to find more detailed local discrepancies beyond the global distribution. Suppose the deepest adversarial loss $\mathcal{L}_{adv}^{k}$ and its corresponding $D_k$ reach a Nash equilibrium, meaning the perceptual representations $\phi_k(y)$ and $\phi_k(G(x))$ are considered to share a similar distribution; the other adversarial losses $\mathcal{L}_{adv}^{1},\dots,\mathcal{L}_{adv}^{k-1}$ may still not have reached equilibrium, since the losses produced by shallower layers pay more attention to regional information during training. The still-high losses push the model to keep optimizing until all perceptual representation pairs $(\phi_i(y), \phi_i(G(x)))$ are indistinguishable to the discriminator. Experiments show this strategy makes the discriminator dynamically and automatically discover the un-optimized space from various perspectives.
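A minimal sketch of this hierarchical discriminator and of the per-branch losses in Eqs. 1-2, assuming PyTorch; the 4-branch count follows Section 4.2, while the channel widths and the pool-then-linear branch heads are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 64, 128, 256, 512]
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 4, stride=2, padding=1),
                nn.BatchNorm2d(chans[i + 1]),
                nn.LeakyReLU(0.2),
            )
            for i in range(4)
        ])
        # One branch classifier D_i per intermediate layer.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(chans[i + 1], 1))
            for i in range(4)
        ])

    def forward(self, x):
        scores = []
        for layer, branch in zip(self.layers, self.branches):
            x = layer(x)              # phi_i(x): perceptual representation
            scores.append(branch(x))  # D_i(phi_i(x)): real/fake logit
        return scores                 # one logit per hierarchy level

def hierarchical_adv_loss(d, real, fake, weights):
    # Accumulates Eq. (1) for D and Eq. (2) for G over all branches.
    # In the D update, `fake` should be detached (see the training sketch).
    bce = nn.BCEWithLogitsLoss()
    d_loss, g_loss = 0.0, 0.0
    for w, s_real, s_fake in zip(weights, d(real), d(fake)):
        d_loss = d_loss + w * (bce(s_real, torch.ones_like(s_real))
                               + bce(s_fake, torch.zeros_like(s_fake)))
        g_loss = g_loss + w * bce(s_fake, torch.ones_like(s_fake))
    return d_loss, g_loss
```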

Theoretically, our hierarchical adversarial discriminator implicitly fits the distributions of the two domains, instead of fitting hidden features of paired images to be identical as existing methods do. Our HAN model thus reduces the possibility of over-fitting and does not require the pre-trained feature-extraction networks adopted by previous methods. Another merit of our hierarchical adversarial strategy is that the auxiliary discriminators improve the flow of information and gradients throughout the network: earlier convolutional layers are optimized mainly by their neighboring adversarial loss, in addition to the posterior adversarial losses, so the parameters of every discriminator layer are better optimized, and the generator can in turn be optimized better than before.

3.3 Losses

Pixel-level Loss. The transfer network can be viewed as the generator in a GAN. It aims to synthesize characters similar to the specified ground-truth ones. The L1- or L2-norm is often used to measure the pixel distance between paired images. In our typeface transformation task, each pixel of a character is normalized to near 0 or 1, so the cross-entropy function is selected as the per-pixel loss, since character generation can then be viewed as a logistic regression. The pixel-wise loss is hence defined as follows:

$$\mathcal{L}_{pixel}(G) = -\mathbb{E}_{(x,y)}\big[\rho\,(1-y)\log\big(1-\sigma(G(x))\big) + (1-\rho)\,y\log\sigma(G(x))\big] \qquad (3)$$

where $G(\cdot)$ denotes the transformation of the transfer network, $(x, y)$ is a pair-wise sample with $x$ a source character and $y$ the corresponding target character, and $\sigma(\cdot)$ is the sigmoid activation. In particular, a weight $\rho$ is introduced into the pixel-wise loss to balance the ratio of positive (value 0) to negative (value 1) pixels for every typeface style. We add this trade-off parameter based on the observation that some typefaces are thin (i.e., more negative pixels) while others are relatively thick (i.e., more positive pixels). $\rho$ is not a parameter determined by cross-validation; it is explicitly defined by:

$$\rho = \frac{1}{N r^{2}} \sum_{n=1}^{N} \sum_{i,j} \big(1 - y_{ij}^{(n)}\big) \qquad (4)$$

where $r \times r$ is the resolution of one character image, $N$ denotes the number of target characters in the training set and $y_{ij}^{(n)}$ denotes a pixel value of the $n$-th target character.
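The snippet below is a minimal sketch of this weighted cross-entropy pixel loss and of the $\rho$ estimate, assuming PyTorch tensors in [0, 1] with ink near 0; the exact weighting scheme is one plausible reading of Eqs. 3-4, not a confirmed reproduction.

```python
import torch

def ink_ratio(targets):
    # Eq. (4): average fraction of positive (ink, value 0) pixels over the
    # whole target-typeface training set; `targets` is an (N, 1, r, r) tensor.
    return (1.0 - targets).mean()

def pixel_loss(logits, target, rho, eps=1e-8):
    # Eq. (3): weighted binary cross-entropy. sigma(logits) is the generated
    # character; `rho` rebalances ink vs. background terms for thin/thick
    # typefaces.
    p = torch.sigmoid(logits)
    ink_term = rho * (1.0 - target) * torch.log(1.0 - p + eps)
    bg_term = (1.0 - rho) * target * torch.log(p + eps)
    return -(ink_term + bg_term).mean()
```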

Hierarchical Adversarial Loss. For our proposed HAN, each adversarial loss is defined by Eqs. 1 and 2:

$$\mathcal{L}_{adv}^{i} = \mathcal{L}_{adv}^{i}(D_i) + \mathcal{L}_{adv}^{i}(G) \qquad (5)$$

Note that here we integrate the original discriminator loss and generator loss into Eq. 5 for a unified formulation; the total adversarial loss is then

$$\mathcal{L}_{adv} = \sum_{i=1}^{k} \lambda_i \mathcal{L}_{adv}^{i} \qquad (6)$$

where the $\lambda_i$ are weights controlling the effect of each branch discriminator. The total loss function is formulated as follows:

$$\mathcal{L} = \alpha \mathcal{L}_{pixel} + \beta \mathcal{L}_{adv} \qquad (7)$$

where $\alpha$ and $\beta$ are trade-off parameters.

We optimize the transfer network and the hierarchical adversarial discriminator alternately.
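A minimal sketch of this alternating optimization, assuming PyTorch and the modules sketched in Sections 3.1-3.2 (Encoder, StagedDecoder, HierarchicalDiscriminator, pixel_loss, hierarchical_adv_loss); the optimizer choice, learning rate and the placeholder values of alpha, beta, lambda and rho are assumptions, since the paper does not report them here.

```python
import itertools
import torch

enc, dec, disc = Encoder(), StagedDecoder(), HierarchicalDiscriminator()
opt_g = torch.optim.Adam(itertools.chain(enc.parameters(), dec.parameters()),
                         lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
alpha, beta = 1.0, 1.0             # placeholder trade-offs (Eq. 7)
lambdas = [0.125, 0.25, 0.5, 1.0]  # placeholder branch weights (Eq. 6)
rho = 0.4                          # in practice from ink_ratio() (Eq. 4)

def train_step(src, tgt):
    z, feats = enc(src)
    outs = dec(z, feats)           # (intermediate, intermediate, final)
    fake = torch.sigmoid(outs[-1])
    # For brevity only the final output feeds the discriminator here; the
    # paper also sends the intermediate characters.

    # Discriminator update: Eq. (1) summed over branches as in Eq. (6).
    d_loss, _ = hierarchical_adv_loss(disc, tgt, fake.detach(), lambdas)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: pixel loss on the final output (Eq. 3) plus the
    # hierarchical adversarial loss (Eqs. 2 and 6), combined as in Eq. (7).
    _, g_adv = hierarchical_adv_loss(disc, tgt, fake, lambdas)
    g_loss = alpha * pixel_loss(outs[-1], tgt, rho) + beta * g_adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```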

4 Experiments

4.1 Data Set

There is no public data set available for Chinese characters in different typefaces. We built a data set by downloading a large number of .ttf fonts for different typefaces from the website http://www.founder.com/. After pre-processing, each typeface ends up with 6000+ grey-scale images in .png format. We choose a standard printed typeface named FangSong (FS) as the source, and the remaining typefaces with handwriting styles are used as targets. Most of our experiments use 50% of the characters (∼3000) as the training set and the rest as the test set.
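As an illustration, the pre-processing step of rasterizing a .ttf font into per-character grayscale .png images could look like the following sketch, assuming Pillow; the font path, character and 64-pixel canvas are placeholders rather than the authors' actual pipeline.

```python
from PIL import Image, ImageDraw, ImageFont

def render_glyph(font_path, ch, size=64):
    # Render one character on a white canvas with black ink, centered
    # via its bounding box.
    font = ImageFont.truetype(font_path, int(size * 0.8))
    img = Image.new("L", (size, size), color=255)  # white background
    draw = ImageDraw.Draw(img)
    left, top, right, bottom = draw.textbbox((0, 0), ch, font=font)
    x = (size - (right - left)) // 2 - left
    y = (size - (bottom - top)) // 2 - top
    draw.text((x, y), ch, fill=0, font=font)       # black ink
    return img

# Usage: render one character of the source typeface to disk.
# render_glyph("FangSong.ttf", "\u6c38").save("yong.png")
```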

4.2 Network Setup

The hyper-parameters of the proposed network are annotated in Fig. 2. The encoder includes 8 conv-layers, while the staged-decoder is deeper, including 4 transposed-conv layers and 8 conv-layers. Each layer in the encoder follows the Conv-BatchNorm (BN) [6]-ELU [2] structure and each layer in the decoder the Conv-BN-ReLU structure. 4 skip connections are used on mirrored layers of the encoder and staged-decoder.

For the trade-off parameters in Section 3.3, $\rho$ is determined by Eq. 4. The number of adversarial losses in HAN is 4, and the branch weights $\lambda_i$ decay from the deepest branch to the shallowest. $\alpha$ and $\beta$ are both set to the same value to weight the pixel loss and the adversarial loss.

4.3 Performance Comparison

To validate the proposed HAN model, we compare its transfer performance with a Chinese calligraphy synthesis method (AEGG [13]) and two state-of-the-art image-to-image transformation methods (Pix2Pix [7], Cycle-GAN [29]). Our proposed HAN can be trained in two modes. The first is strong-paired mode, which minimizes the pixel-wise discrepancy between paired characters as well as the hierarchical adversarial loss between the generated and target domains. The second is soft-paired mode, which removes $\mathcal{L}_{pixel}$ and minimizes only $\mathcal{L}_{adv}$, loosening the constraint of pairing source characters with their corresponding target ones.

Strong-Paired Learning. The baselines AEGG and Pix2Pix both need the generated images paired with corresponding ground-truths for training, so we compare our HAN with them in strong-paired mode. The transfer network of Pix2Pix shares the identical framework with that in our HAN (see Fig. 2), and the model used for AEGG follows the instructions of their paper with some tiny adjustments for dimension adaptation. 50% (∼3000) of the characters randomly selected from the FS typeface, together with the corresponding 50% of target-style characters selected from another handwriting-style typeface, are used as the training set. The remaining 50% of the FS typeface is used for testing. We perform 5 experiments transferring the FS typeface to other Chinese handwriting styles (see Fig. 3). All methods capture the general style of the handwriting, but AEGG and Pix2Pix fail to synthesize recognizable characters, since most strokes in their generated characters are disordered or even chaotic. Our HAN significantly outperforms AEGG and Pix2Pix, especially when imitating cursive handwriting characters. The experimental results show that HAN is superior in generating the detailed components of characters. We also observed that both baselines perform well on the training set but far worse on the test set, which suggests that the proposed hierarchical adversarial loss makes our model less prone to over-fitting to some degree.

Figure 3: Performance of transferring the FS typeface to 5 other personal handwriting-style typefaces (columns: Source, AEGG, Pix2Pix, Ours, Target).

Soft-Paired Learning. Cycle-GAN is actually an unpaired method which does not require ground-truth for training. We nevertheless experimented with the fully unpaired setting for both Cycle-GAN and the proposed HAN, and both results are very poor. We therefore compare our HAN with Cycle-GAN in soft-paired mode, which saves the trouble of tedious pairing but leaves the ground-truths in the training set. As illustrated in Fig. 4, under the soft-paired condition our HAN performs better than Cycle-GAN. Though Cycle-GAN correctly captures the style of the target characters, it cannot reconstruct the correct location of every stroke and suffers from mode collapse. Of course, the results of HAN trained in soft-paired mode are not as good as those in strong-paired mode, since removing $\mathcal{L}_{pixel}$ reduces the strong supervision information.

Figure 4: Comparison of HAN (strong-pair), Cycle-GAN (soft-pair) and HAN (soft-pair) against the target characters. With the pairing constraint loosened, HAN performs better than Cycle-GAN.

Quantitative Evaluation. Beyond directly illustrating the qualitative comparison results, two quantitative measurements, Root Mean Square Error (RMSE) and Average Pixel Disagreement Ratio [10] (APDR), are utilized as evaluation criteria. As shown in Table 1, our HAN achieves the lowest RMSE and APDR values under both the strong-paired and soft-paired modes compared with existing methods.

Model                RMSE    APDR    RMSE    APDR    RMSE    APDR    RMSE    APDR
AEGG [13]            22.671  0.143   28.010  0.211   24.083  0.171   22.110  0.131
Pix2Pix [7]          29.731  0.231   27.117  0.225   26.580  0.187   24.135  0.180
Cycle-GAN [29]       29.602  0.253   29.145  0.234   28.845  0.241   25.632  0.191
HAN (Soft-pair)      20.984  0.125   25.442  0.207   24.741  0.181   20.714  0.134
HAN (Strong-pair)    19.498  0.118   23.303  0.181   22.266  0.162   19.528  0.110
Table 1: Quantitative measurements. Each RMSE/APDR column pair corresponds to one target handwriting typeface.
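A minimal sketch of the two metrics, assuming NumPy and grayscale character images as arrays; the binarization threshold in APDR is an assumption about how the pixel disagreement ratio [10] is computed.

```python
import numpy as np

def rmse(generated, target):
    # Root Mean Square Error over all pixels of the character image(s).
    diff = generated.astype(float) - target.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def apdr(generated, target, threshold=0.5):
    # Average Pixel Disagreement Ratio: fraction of pixels whose binarized
    # values differ between the generated and target characters.
    g = generated.astype(float) > threshold
    t = target.astype(float) > threshold
    return float(np.mean(g != t))
```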

4.4 Analysis of Hierarchical Adversarial Loss

We analyze each adversarial loss defined in Section 3.2. As shown in Fig. 5, the generator loss produced by the last conv-layer of the hierarchical discriminator fluctuates greatly, while the losses produced by the penultimate and shallower conv-layers are relatively gentle, because the weight of the deepest branch is set larger than those of the shallower branches, so the network mainly optimizes the deepest adversarial loss. For the discriminator losses, however, the values derived from the different branches stay numerically close. We further observed that the trends of increase or decrease among the various discriminator losses are not always consistent. We experimentally conclude that the adversarial losses produced by intermediate layers assist training: when one branch is severely cheated by real/fake characters, other branches can still differentiate with high confidence, meaning that true/false discriminations based on different representations compensate for each other during training (see Fig. 5 for more details).

Figure 5: Each generator loss and discriminator loss during steps 700 to 900.

We further explore the influence brought by our hierarchical adversarial loss. Removing the hierarchical architecture from our HAN model, we run a contrast experiment, the Single Adversarial Network (SAN). The network details follow Fig. 2; we keep the trade-off parameters of HAN's loss function, while for SAN we remove the influence of the 3 extra adversarial losses. Since the value of the hierarchical adversarial loss (we accumulate four adversarial losses) is bigger than that of a single adversarial loss, the back-propagated gradients of HAN are theoretically bigger than those of SAN. To demonstrate that HAN does not work merely for this reason, we multiply the adversarial loss in SAN by a constant so that the two adversarial losses in HAN and SAN have comparable magnitudes. Characters generated during different training periods are illustrated in Fig. 6, which shows the qualitative effect of the proposed hierarchical adversarial discriminator: HAN generates clearer characters than SAN at the same phase of training, suggesting that HAN converges considerably faster than SAN. We also run 3 parallel typeface-transfer experiments and calculate the RMSE on the training set along the training iterations. The loss curves on the left of Fig. 6 demonstrate that the hierarchical adversarial architecture accelerates convergence and leads to lower RMSE values.

Figure 6: Contrast experiments for HAN and SAN. Characters generated by HAN are far better than those generated by SAN at the same training epoch. The "HAN converge" row shows characters generated when our HAN model converges. The RMSE evaluation along the training iterations shows that HAN reaches lower values than SAN.

4.5 Character Restoration with HAN

Beyond transferring a standard printed typeface to handwriting-style typefaces, we also apply our HAN model to character restoration. We randomly mask a 30% region of every handwriting character in one typeface's training set. Under strong-paired mode, our HAN learns to correctly reconstruct the original characters. As illustrated in Fig. 7, our HAN is able to correctly reconstruct the missing part of a character on the test set.

Figure 7: Performance of repairing personal handwriting characters with HAN on test set.
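For concreteness, the random-masking setup could look like the following minimal sketch, assuming PyTorch tensors in [0, 1] with a white (1) background; masking one contiguous square patch is an assumption, since the paper does not specify the mask shape.

```python
import torch

def mask_random_region(img, frac=0.3):
    # Occlude one random square patch covering ~`frac` of the image area
    # by painting it with the background value.
    h, w = img.shape[-2:]
    side = int((frac * h * w) ** 0.5)
    top = torch.randint(0, h - side + 1, (1,)).item()
    left = torch.randint(0, w - side + 1, (1,)).item()
    out = img.clone()
    out[..., top:top + side, left:left + side] = 1.0
    return out
```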

4.6 Impact of Training Set Size

Last, we investigate how many handwriting characters are needed in training to ensure a satisfactory transfer performance. We run three typeface-transfer tasks (type-1, type-2 and type-3) with different proportions of training samples and evaluate on each test set. As the synthesized characters in Fig. 8 show, the performance improves as the number of training samples increases. We also use the RMSE to quantify the performance under different training-set sizes. All 3 curves suggest that once the proportion of the training set reaches 35% (2000 samples), further increases bring little improvement.

Figure 8: The RMSE evaluation under different proportions of the training set. The red and black numbers denote how many training samples we used. We present 3 tasks transferring handwriting characters from the FS typeface to handwriting type-1, type-2 and type-3.

5 Conclusion and Future Work

In this paper, we propose a hierarchical adversarial network (HAN) for typeface transformation. HAN consists of a transfer network and a hierarchical adversarial discriminator. The transfer network consists of an encoder and a staged-decoder which can generate characters based on different decoded information. The proposed hierarchical discriminator dynamically estimates the consistency of the two domains from different levels of perceptual representation, which helps our HAN converge faster and better. Experimental results show that our HAN can synthesize most handwriting-style typefaces better than existing natural image-to-image transformation methods. Additionally, HAN can be applied to handwriting character restoration.

References