Chinese typeface design is a very time-consuming task, requiring considerable effort in the manual design of benchmark characters. Automated typeface synthesis, i.e., synthesizing characters of a certain typeface given a few manually designed samples, has been explored, usually based on manually extracted features. For example, each Chinese character is treated as a combination of its radicals and strokes, and shape representations of specified typefaces such as the contour, orientation and component size are explicitly learned [23, 24, 28, 27, 22]. However, these manual features rely heavily on a preceding structural segmentation of characters, which is itself a non-trivial task and strongly affected by prior knowledge.
In this paper, we model typeface transformation as an image-to-image transformation problem and attempt to learn the transformation directly, end-to-end. Typically, image-to-image transformation involves a transfer network that maps source images to target images, and a set of losses has been proposed for learning this network. The pixel loss is defined as the pixel-wise difference between the output and the corresponding ground-truth [11, 7]. The perceptual loss, perceptual similarity and style&content loss evaluate differences between hidden-level features, all based on the idea of feature matching. More recently, several variants of generative adversarial networks (e.g., CGAN, CycleGAN), which introduce a discriminator network in addition to the transfer network for adversarial learning, have been successfully applied to image-to-image transformation tasks including in-painting, de-noising and super-resolution. While the above methods have shown great promise for various applications, they are not directly applicable to typeface transformation due to the following domain-specific characteristics.
Different from style transfer between natural images, where the source image shares high-frequency features with the target image, the transformation between two different typefaces usually distorts strokes or radicals (e.g., Fig. 1), meaning that a change of style leads to a change of high-level representations. Hence, we can neither use a pre-trained network (e.g., VGG) to extract high-level representations as an invariant content representation during training, nor explicitly define the style representation.
In the typeface transformation task, different characters may share the same radicals. This is a nice peculiarity that typeface transformation methods can leverage, i.e., learning the direct mapping of radicals between source and target styles. However, within one typeface the same radical may appear quite differently in different characters. Fig. 1(b) presents two examples where certain radicals have different appearances in the same style. Considering only this global property while ignoring detailed local information would lead to severe over-fitting.
To overcome the above problems, we design a hierarchical adversarial network (HAN) for Chinese typeface transformation, consisting of a transfer network and a hierarchical discriminator (Fig. 2), both of which are fully convolutional neural networks. First, different from existing transfer networks, we propose a staged-decoder that generates artificial images at multiple decoding layers, which is expected to help the decoder learn better representations in its hidden layers. Specifically, the staged-decoder attempts to maximally preserve the global topological structure across different decoding layers while simultaneously considering the local features decoded in hidden layers, enabling the transfer network to generate close-to-authentic characters instead of disordered strokes. Second, inspired by the multi-classifier design in GoogLeNet, which shows that the final feature layer alone may not provide rich and robust information for measuring the discrepancy between prediction and ground-truth, we propose a hierarchical discriminator for adversarial learning. Specifically, the discriminator introduces additional adversarial losses, each of which employs feature representations from a different hidden layer. These multiple adversarial losses form a hierarchy, enabling the discriminator to dynamically measure the discrepancy between the generated and target distributions, so that the transfer network is trained to produce outputs whose statistics match the targets at different levels of feature representation. The main contributions of our work are summarized as follows.
We introduce a staged-decoder in the transfer network which generates multiple sets of characters based on different layers of decoded information, capturing both global and local information for transfer.
We propose a hierarchical discriminator which involves a cascade of adversarial losses at different layers of the network, each providing complementary adversarial capability. We show experimentally that the hierarchical discriminator leads to faster model convergence and generates more realistic samples.
The proposed hierarchical adversarial network (HAN) is shown to be successful for both typeface transfer and character restoration through extensive experimental studies. The impact of the proposed hierarchical adversarial loss is further investigated from different perspectives, including gradient propagation and the principles of adversarial training.
2 Related Work
Many natural image-to-image transformation tasks are domain transfer problems that map images from a source domain to a target domain. The transformation can be formulated at the pixel level (i.e., pixel-wise loss [26, 12]) or, more recently, at the feature level (i.e., perceptual loss, Gram matrix, VGG loss, style loss). Feature-level methods can even be extended to the unsupervised setting under the assumption that the input image and the desired output share identical or close high-level representations. However, this assumption does not hold for handwriting transfer, since the high-level representations of source and target characters are sometimes totally different. Recently, generative adversarial networks, especially the variants CGAN and DCGAN, have been successfully applied to a wide spectrum of image-to-image transformation tasks. Beyond the transfer network, CGAN-based methods introduce a discriminator, which contributes an adversarial loss that constrains the distribution of the generated domain to be close to that of the target domain. This adversarial loss is employed by all the above GAN-based studies, such as image super-resolution, de-noising and in-painting. Several studies leverage the generator or discriminator to extract hidden-level representations and then perform feature matching across both domains [3, 21].
In recent years, many image classification, detection and segmentation methods have leveraged the information in the hidden layers of a CNN for training. GoogLeNet introduced auxiliary classifiers connected to intermediate layers, based on the observation that features produced in the middle of the network should also be highly discriminative. Many other CNN models use features from intermediate layers to construct extra loss functions ([21, 13]). These auxiliary losses are thought to combat the vanishing-gradient problem while providing regularization. We are the first to apply this idea to the discriminator of a GAN, measuring the similarity of two distributions based not only on high-level features but also on relatively low-level ones.
In this section, we present the proposed Hierarchical Adversarial Network (HAN) for the typeface transformation task. HAN consists of a transfer network and a hierarchical discriminator; the former further consists of an encoder and a staged-decoder. First, we introduce the transfer network, which is responsible for mapping typeface-A characters to typeface-B characters. Then we introduce the hierarchical adversarial discriminator, which helps the transfer network generate more realistic characters, especially the subtle structures of Chinese characters. Finally, we detail the objective function.
3.1 FCN-Based Transfer Network
Encoder. The transfer network adopts an architecture similar to prior work, with some modifications. Because relative-location information is critical for Chinese character synthesis, we replace pooling with strided convolution for down-sampling: pooling reduces dimensionality by retaining only the most robust activations in each receptive field, which loses spatial information to some degree. Additionally, increasing the size of a neural network, especially its depth, is a straightforward way to improve performance, so additional uniform-sized conv-layers are added to our encoder to extract more local features (see Fig. 2).
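To make the down-sampling arithmetic concrete, the spatial size after a strided convolution follows the standard formula below (a sketch; the kernel size, stride and padding values are illustrative assumptions, not taken from the paper):

```python
def conv_out_size(n, kernel, stride, padding):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

# A stride-2 convolution halves the resolution just like 2x2 pooling,
# but keeps learnable weights instead of discarding activations.
size = 64                      # assumed input resolution
for _ in range(3):
    size = conv_out_size(size, kernel=4, stride=2, padding=1)
    print(size)                # 32, 16, 8
```

The uniform-sized conv-layers mentioned above would use kernel/stride/padding combinations for which this formula returns the input size unchanged.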
Staged-Decoder. As in the encoder, we insert additional uniform-sized convolution layers before each up-sampling conv-layer in the decoder. A deeper decoder helps model hierarchical representations of characters, including the global topological structure and the local topology of complicated Chinese characters. Considering the domain insight discussed in Section 1, we further propose a staged-decoder that leverages the hierarchical representation of the decoder. Specifically, different intermediate features of the decoder are also used to generate characters; together with the characters generated by the last layer, all of them are sent to the discriminator (see Fig. 2). We only measure the pixel-wise difference between the characters generated by the last layer and the corresponding ground-truth. The adversarial losses on the intermediate outputs help refine the transfer network; moreover, they provide regularization for the parameters of the transfer network, which relieves over-fitting to some degree. In addition, for typeface transformation, the input character and the desired output are expected to share an underlying topological structure but differ in appearance or style. Skip connections are used to supply this partially invariant skeleton information, concatenating encoded features onto decoded features. Both the encoder and the staged-decoder are fully convolutional networks.
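The interplay of skip connections and decoder width can be sketched as a simple channel-count calculation (purely illustrative; the specific channel widths are assumptions, not the paper's configuration):

```python
def decoder_channels(encoder_channels, bottleneck):
    """Track decoder input channels when each up-sampled feature map is
    concatenated with the mirrored encoder feature map (skip connection)."""
    chans = []
    current = bottleneck
    for skip in reversed(encoder_channels):
        up = current // 2          # assume up-sampling halves the channels
        current = up + skip        # skip concatenation adds encoder channels
        chans.append(current)
    return chans

# Assumed encoder widths; in a staged-decoder, each of these stages could
# additionally emit a character image that is fed to the discriminator.
print(decoder_channels([64, 128, 256, 512], bottleneck=512))
```

The concatenation, rather than addition, is what lets the decoder reuse the invariant skeleton information verbatim while still learning a style-specific transformation on top of it.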
3.2 Hierarchical Adversarial Discriminator
As mentioned in Section 2, the adversarial loss introduced by a discriminator is widely used in existing GAN-based image transformation tasks, yet all of them estimate the consistency of the two domains' distributions based solely on the final features extracted by the discriminator. It is actually uncertain whether the features learned in the last layer provide rich and robust representations for the discriminator. In addition, the perceptual loss, which penalizes discrepancies between representations of images in different hidden spaces, has recently been used in image-related work. We combine the idea of the perceptual loss with GANs, proposing a hierarchical adversarial discriminator which leverages the perceptual representations extracted from different intermediate layers of the discriminator and distinguishes the real/fake distributions of the generated and target domains (see Fig. 2). Each adversarial loss is defined as:
L_adv_i = E_y[log D_i(Φ_i^t)] + E_x[log(1 − D_i(Φ_i^g))],

where Φ_i^t and Φ_i^g are the perceptual representations learned by the discriminator from the target domain and the generated domain respectively, and D_i is the branch discriminator cascaded after the i-th intermediate layer, with i ranging over the convolutional layers of the discriminator. This variation brings complementary adversarial training to our model, urging the discriminator to find detailed local discrepancies beyond the global distribution. Suppose L_adv_k and its corresponding branch D_k reach a Nash equilibrium, i.e., the perceptual representations Φ_k^t and Φ_k^g are considered to share a similar distribution; the other adversarial losses L_adv_1, …, L_adv_{k−1} may still not have reached equilibrium, since the losses produced by shallower layers pay more attention to regional information during training. These still-high losses push the model to keep optimizing until all pairs of perceptual representations are indistinguishable to the discriminator. Experiments show that this strategy lets the discriminator dynamically and automatically discover the un-optimized space from various perspectives.
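The hierarchical loss can be illustrated numerically with made-up branch scores (a toy sketch: each D_i is reduced to a probability the branch assigns to "real", and the branch weights are hypothetical):

```python
import math

def branch_adv_loss(d_real, d_fake):
    """Adversarial value of one branch: log D_i(real) + log(1 - D_i(fake))."""
    return math.log(d_real) + math.log(1.0 - d_fake)

# Hypothetical probabilities from four branch discriminators, shallow -> deep.
# The deepest branch (0.55 vs 0.5) is nearly fooled, yet the shallower
# branches still separate real from fake and keep supplying training signal.
scores = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.4), (0.55, 0.5)]
lambdas = [0.25, 0.25, 0.25, 1.0]   # assumed per-branch weights

total = sum(l * branch_adv_loss(r, f) for l, (r, f) in zip(lambdas, scores))
```

This mirrors the compensation effect described above: equilibrium at one depth does not silence the gradient coming from the other depths.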
Theoretically, our hierarchical adversarial discriminator implicitly fits the distributions of the two domains instead of forcing the hidden features of paired images to be identical, as existing methods do. Thus our HAN model reduces the risk of over-fitting and does not require the pre-trained feature-extraction networks adopted by previous methods. Another merit of the hierarchical adversarial strategy is that the auxiliary discriminators improve the flow of information and gradients throughout the network: the earlier convolutional layers are optimized mainly by their neighboring adversarial loss in addition to the posterior adversarial losses, so the parameters in every discriminator layer are better optimized and the generator can in turn be optimized better.
Pixel-level Loss. The transfer network can be viewed as the generator in a GAN. It aims to synthesize characters similar to the specified ground-truth ones. The L1- or L2-norm is often used to measure the pixel distance between paired images. In our typeface transformation task, each pixel of a character is normalized to be near 0 or 1, so the cross-entropy function is selected as the per-pixel loss, since character generation can be viewed as a logistic regression. The pixel-wise loss is hence defined as follows:
L_pix = −E_(x,y)[ ω · y log σ(T(x)) + (1 − y) log(1 − σ(T(x))) ],

where T denotes the transformation of the transfer network, (x, y) is a pair of samples with x from the source typeface and y the corresponding target character, and σ is the sigmoid activation. A weighting parameter ω is introduced into the pixel-wise loss to balance the ratio of positive (value 0) to negative (value 1) pixels for each typeface style. We add this trade-off parameter based on the observation that some typefaces are thin (i.e., more negative pixels) while others are relatively thick (i.e., more positive pixels). ω is not a parameter determined by cross-validation; it is computed explicitly from the pixel statistics of the target characters in the training set, where each character image has resolution m × m, N denotes the number of target characters in the training set, and y_j denotes the j-th pixel value of a target character.
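A minimal sketch of the weighted cross-entropy pixel loss follows; the exact formula for ω and which term it weights are our assumptions for illustration, not the paper's definition:

```python
import math

def omega(targets):
    """Assumed balancing weight: ratio of background (value-1) pixels to
    stroke (value-0) pixels over all target training characters."""
    total = sum(len(img) for img in targets)
    positives = sum(1 for img in targets for p in img if p == 0)  # strokes
    negatives = total - positives
    return negatives / max(positives, 1)

def weighted_pixel_loss(pred, target, w):
    """Weighted binary cross-entropy; w up-weights the stroke (0-valued) pixels."""
    eps = 1e-7
    loss = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)
        # t == 0 -> stroke pixel, weighted by w; t == 1 -> background pixel
        loss += -(w * (1 - t) * math.log(1 - p) + t * math.log(p))
    return loss / len(pred)

# Toy 2x2 "image" flattened: mostly background (1), one stroke pixel (0).
target = [1, 1, 1, 0]
pred = [0.9, 0.95, 0.8, 0.1]
w = omega([target])
loss = weighted_pixel_loss(pred, target, w)
```

For a thin typeface (few stroke pixels), w grows, preventing the network from collapsing to the trivial all-background prediction.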
Note that the original adversarial loss is integrated into Eq. 5 for a unified formulation; the total adversarial loss is

L_adv = Σ_i λ_i L_adv_i,

where the λ_i are weighting parameters controlling the effect of each branch discriminator. The total loss function is then formulated as

L = θ_1 L_pix + θ_2 L_adv,

where θ_1 and θ_2 are trade-off parameters.
We optimize the transfer network and the hierarchical adversarial discriminator alternately.
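The alternating scheme can be illustrated on a toy min-max game (a sketch, not the paper's training loop): the "discriminator" variable ascends and the "generator" variable descends on f(g, d) = g·d in turns.

```python
def alternate_steps(g, d, lr, steps):
    """Alternately update d (gradient ascent) and g (gradient descent)
    on the bilinear game f(g, d) = g * d."""
    history = []
    for _ in range(steps):
        d = d + lr * g      # discriminator turn: df/dd = g
        g = g - lr * d      # generator turn: df/dg = d (uses the updated d)
        history.append((g, d))
    return history

traj = alternate_steps(g=1.0, d=0.0, lr=0.1, steps=50)
```

In the real model each "turn" is a stochastic gradient step on the discriminator's and transfer network's parameters respectively, with the opposite network held fixed.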
4.1 Data Set
There is no public data set of Chinese characters in different typefaces, so we build one by downloading a large number of .ttf scripts for different typefaces from the website http://www.founder.com/. After pre-processing, each typeface ends up with 6000+ grey-scale images in .png format. We choose a standard printed typeface named FangSong (FS) as the source, and the remaining typefaces with handwriting styles are used as targets. Most of our experiments use 50% of the characters (~3000) as the training set and the remainder as the test set.
4.2 Network Setup
The hyper-parameters of our proposed network are annotated in Fig. 2. The encoder includes 8 conv-layers, while the staged-decoder is deeper, including 4 transposed-conv layers and 8 conv-layers. Each conv and transposed-conv layer is followed by a Conv-BatchNorm(BN)-ELU/ReLU structure. Four skip connections are used on mirrored layers of the encoder and staged-decoder.
4.3 Performance Comparison
To validate the proposed HAN model, we compare its transfer performance with a Chinese calligraphy synthesis method (AEGG) and two state-of-the-art image-to-image transformation methods (Pix2Pix, Cycle-GAN). Our proposed HAN can be trained in two modes. The first is strong-paired mode, which minimizes the pixel-wise discrepancy between paired characters as well as the hierarchical adversarial loss between the generated and target domains. The second is soft-paired mode, which removes the pixel loss and minimizes only the adversarial loss, loosening the constraint of pairing source characters with their corresponding targets.
Strong-Paired Learning. The baselines AEGG and Pix2Pix both need to pair generated images with corresponding ground-truths for training, so we compare our HAN with them in strong-paired mode. The transfer network of Pix2Pix shares the same framework as that of our HAN (see Fig. 2), and the model used for AEGG follows the instructions of its paper with some tiny adjustments for dimension adaptation. 50% (~3000) of the characters randomly selected from the FS typeface, together with the corresponding 50% of target-style characters from another handwriting-style typeface, are used as the training set. The remaining 50% of the FS typeface is used for testing. We perform 5 experiments transferring the FS typeface to other Chinese handwriting styles (see Fig. 3). All methods capture the general handwriting style, but AEGG and Pix2Pix fail to synthesize recognizable characters, as most strokes in their generated characters are disordered, even chaotic. Our HAN significantly outperforms AEGG and Pix2Pix, especially in imitating cursive handwriting, and the results show that HAN is superior at generating the detailed components of characters. We also observed that both baselines perform well on the training set but far worse on the test set, which suggests the proposed hierarchical adversarial loss makes our model less prone to over-fitting.
Soft-Paired Learning. Cycle-GAN is an unpaired method that does not require ground-truth for training. We first experimented with a fully unpaired setup for both Cycle-GAN and the proposed HAN, but both results were very poor. We therefore compare our HAN with Cycle-GAN in soft-paired mode, which avoids tedious pairing but keeps the ground-truths in the training set. As illustrated in Fig. 4, under the soft-paired condition our HAN performs better than Cycle-GAN. Though Cycle-GAN correctly captures the style of the target characters, it cannot reconstruct the correct location of every stroke, and it suffers from mode collapse. Naturally, the results of HAN trained in soft-paired mode are not as good as those in strong-paired mode, since the strong supervision provided by the pixel loss is removed.
Quantitative Evaluation. Beyond the qualitative comparisons, two quantitative measurements, Root Mean Square Error (RMSE) and Average Pixel Disagreement Ratio (APDR), are used as evaluation criteria. As shown in Table 1, our HAN achieves the lowest RMSE and APDR values in both strong-paired and soft-paired modes compared with existing methods.
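For reference, the two metrics can be sketched as follows (the APDR definition here, the fraction of binarized pixels that disagree, is our assumed reading of the metric's name):

```python
import math

def rmse(a, b):
    """Root mean square error between two equally sized pixel lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def apdr(a, b, threshold=0.5):
    """Average pixel disagreement ratio: fraction of pixels whose
    binarized values differ (assumed definition)."""
    disagree = sum((x > threshold) != (y > threshold) for x, y in zip(a, b))
    return disagree / len(a)

generated = [0.1, 0.9, 0.8, 0.2]   # toy flattened output image
target = [0.0, 1.0, 0.0, 0.0]      # toy flattened ground-truth
print(rmse(generated, target), apdr(generated, target))
```

RMSE is sensitive to grey-level differences, while APDR only counts binary stroke/background disagreements, so the two metrics complement each other.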
4.4 Analysis of Hierarchical Adversarial Loss
We analyze each adversarial loss defined in Section 3.2. As shown in Fig. 5, the generator loss produced by the last conv-layer of the hierarchical discriminator fluctuates greatly, followed by the one produced by the penultimate layer; the losses produced by shallower conv-layers are relatively gentle, because the weight of the deepest branch is set larger than the others so that the network mainly optimizes its loss. For the discriminator, the losses derived from the different branches are numerically close. We further observed that the increases and decreases of the various discriminator losses are not always consistent. We conclude experimentally that the adversarial losses produced by intermediate layers assist training: when one branch is severely fooled by real/fake characters, the other branches can still differentiate them with high confidence, which means that true/false discrimination based on different representations is mutually compensating during training (see Fig. 5 for details).
We further explore the influence of the hierarchical adversarial loss. Removing the hierarchical architecture from our HAN model yields a contrast model, the Single Adversarial Network (SAN). The network details follow Fig. 2; for HAN all branch weights in the loss function are active, while for SAN the weights of the extra 3 adversarial losses are set to zero. Since the value of the hierarchical adversarial loss (the accumulation of four adversarial losses) is larger than that of a single adversarial loss, the back-propagated gradients of HAN are theoretically larger than those of SAN. To show that HAN does not work merely for this reason, we multiply the adversarial loss of SAN by a constant so that the two adversarial losses are in close proximity. Characters generated during different training periods are illustrated in Fig. 6, showing the qualitative effect of the proposed hierarchical adversarial discriminator: HAN generates clearer characters than SAN at the same phase of training, suggesting that HAN converges considerably faster. We also ran 3 parallel typeface-transfer experiments and calculated the RMSE on the training set along the training iterations. The loss curves on the left of Fig. 6 demonstrate that the hierarchical adversarial architecture accelerates convergence and leads to lower RMSE values.
4.5 Character Restoration with HAN
Beyond transferring a standard printed typeface to handwriting-style typefaces, we also apply our HAN model to character restoration. We randomly mask a 30% region of every handwriting character in one typeface's training set. Under strong-paired mode, our HAN learns to reconstruct the original characters; as illustrated in Fig. 7, it correctly reconstructs the missing parts of characters on the test set.
4.6 Impact of Training Set Size
Last, we investigate how many handwriting characters are needed in training to ensure satisfactory transfer performance. We run three typeface-transfer tasks (type-1, type-2 and type-3) with different proportions of training samples and evaluate on each test set. As the synthesized characters in Fig. 8 show, performance improves as the number of training samples increases. We also use RMSE to quantify the performance for different training sizes. All 3 curves suggest that once the training proportion reaches 35% (2000 samples), performance no longer improves greatly.
5 Conclusion and Future Work
In this paper, we propose a hierarchical adversarial network (HAN) for typeface transformation. HAN consists of a transfer network and a hierarchical adversarial discriminator. The transfer network comprises an encoder and a staged-decoder which can generate characters based on different levels of decoded information. The proposed hierarchical discriminator dynamically estimates the consistency of two domains from perceptual representations at different levels, which helps our HAN converge faster and better. Experimental results show that our HAN synthesizes most handwriting-style typefaces more faithfully than existing natural image-to-image transformation methods. Additionally, our HAN can be applied to handwriting character restoration.
-  D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. arXiv preprint arXiv:1703.09210, 2017.
-  D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). Computer Science, 2015.
-  A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems, 2016.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In International Conference on Neural Information Processing Systems, pages 2672–2680, 2014.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Computer Science, 2015.
-  P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
-  J. Johnson, A. Alahi, and F. F. Li. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711, 2016.
-  C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, and Z. Wang. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, pages 469–477, 2016.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis & Machine Intelligence, 39(4):640, 2014.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
-  P. Lyu, X. Bai, C. Yao, Z. Zhu, T. Huang, and W. Liu. Auto-encoder guided gan for chinese calligraphy synthesis. arXiv preprint arXiv:1706.08789, 2017.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. Computer Science, pages 2672–2680, 2014.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536–2544, 2016.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Computer Science, 2015.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 9351, pages 234–241, 2015.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. Computer Science, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition, pages 1–9, 2015.
-  C. Wang, C. Xu, C. Wang, and D. Tao. Perceptual adversarial networks for image-to-image transformation. arXiv preprint arXiv:1706.09138, 2017.
-  J. Xiao, J. Xiao, and J. Xiao. Automatic generation of large-scale handwriting fonts via style learning. In SIGGRAPH ASIA 2016 Technical Briefs, page 12, 2016.
-  S. Xu, H. Jiang, T. Jin, F. C. M. Lau, and Y. Pan. Automatic generation of chinese calligraphic writings with style imitation. IEEE Intelligent Systems, 24(2):44–53, 2009.
-  S. Xu, T. Jin, H. Jiang, and F. C. M. Lau. Automatic generation of personal chinese handwriting by capturing the characteristics of personal handwriting. In Conference on Innovative Applications of Artificial Intelligence, Pasadena, California, USA, 2010.
-  H. Zhang, V. Sindagi, and V. M. Patel. Image de-raining using a conditional generative adversarial network. arXiv preprint arXiv:1701.05957, 2017.
-  R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649–666, 2016.
-  X.-Y. Zhang, F. Yin, Y.-M. Zhang, C.-L. Liu, and Y. Bengio. Drawing and recognizing chinese characters with recurrent neural network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  B. Zhou, W. Wang, and Z. Chen. Easy generation of personal chinese handwritten fonts. In IEEE International Conference on Multimedia and Expo, pages 1–6, 2011.
-  J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.