With the advent of Generative Adversarial Networks (GANs), image generation has made huge strides in both image quality and diversity. However, the original GAN model cannot generate images tailored to meet design specifications. To this end, many conditional GAN models have been proposed to fit different task scenarios [2, 3, 4, 5, 6, 7, 8]. Among these works, Text-to-Image (TTI) synthesis is a challenging yet less-studied topic. TTI refers to generating a photo-realistic image that matches a given text description. As the inverse of image captioning, TTI aims to establish an interpretable mapping between the image space and the text semantic space. TTI has huge potential and can be used in many applications including photo editing and computer-aided design. However, natural language is high-dimensional information that is often less specific and much more abstract than images, which makes this research problem quite challenging.
Just like TTI synthesis, the sub-topic of Text-to-Face (TTF) synthesis has practical value in areas such as crime investigation and biometric research. For example, the police often need professional artists to sketch pictures of suspects based on the descriptions of eyewitnesses. This task is time-consuming, requires great skill, and often results in inferior images, and many police departments may not have access to such professionals. With a well-trained text-to-face model, however, we could quickly produce a wide diversity of high-quality photo-realistic pictures based simply on the descriptions of eyewitnesses. Moreover, TTF can be used to address the emerging data-scarcity issues arising from growing ethical concerns regarding informed consent for the use of faces in biometrics research.
A major challenge of the TTF task is that the linkage between face images and their text descriptions is much looser than for, say, the bird and flower images commonly used in TTI research. A few sentences of description are hardly adequate to cover all the variations of human facial features. Also, for the same face image, different people may use quite different descriptions. This increases the challenge of finding mappings between these descriptions and the facial features. Therefore, in addition to the two criteria of image quality and image-text consistency, a TTF model should be able to produce a group of highly diverse images conditioned on the same text description. In a real-world application, a witness could choose the picture among these outputs that they think is closest to the appearance of the suspect. This feature is also important for biometric researchers to obtain sufficient data from rare ethnicities and demographics when synthesising ethical face datasets that do not require informed consent.
Therefore, to meet these demands, we propose a novel TTF framework satisfying: 1) high image quality; 2) strong consistency between synthesised images and their descriptions; and 3) the ability to generate a group of diverse faces from the same text description.
To achieve these goals, we propose a multi-label text classifier built on a pre-trained BERT model for natural language processing. This classifier outputs sparse text embeddings of length 40. We fine-tune a pre-trained MobileNets model on CelebA's training data, where images have paired labels, and use it to predict the labels of input images. Next, we structure a feature space with 40 orthogonal axes based on the noise vectors and the predicted labels. After this operation, an input noise vector can be moved along a specified axis to render output images with the desired features. Last but certainly not least, we use the state-of-the-art image generator, StyleGAN2, which maps the noise vectors into a feature-disentangled latent space, to generate high-resolution images. As Fig. 1 shows, the synthesised images match the features of the description with good diversity and image quality.
Our work has the following main contributions.
We propose a novel TTF-HD framework that comprises a text multi-label classifier, an image label encoder, and a feature-disentangled image generator to generate high-quality faces with a wide range of diversity.
In addition, we add a novel design to the framework: a 40-label orthogonal coordinate system to guide the trajectory of the input noise vector.
Last but not least, we use the state-of-the-art StyleGAN2 as our generator to map the manipulated noise vectors into the disentangled feature space and generate 1024×1024 high-resolution images.
The remainder of this paper is organised as follows. In Section 2, we review the important works in TTI, TTF, and generator models. In Section 3, we describe our proposed framework in detail. In Section 4, experimental results are presented both qualitatively and quantitatively; an ablation study is also conducted to show the importance of the vector-manipulation operations. In Section 5, we conclude by summarising our contributions and the limitations of the approach for future research.
II Related Works
II-A Text-to-Image Synthesis
In the area of TTI, Reed et al. first proposed to take advantage of GANs, with a model that includes a text encoder and an image generator and concatenates the text embedding to the noise vector as input. Unfortunately, the model failed to establish good mappings between keywords and the corresponding image features. Moreover, because the final results were generated directly from the concatenated vectors, the image quality was poor and the images were easily discerned as fake. To address these two issues, StackGAN proposed to generate images hierarchically using two pairs of generators and discriminators. Later, Xu et al. proposed AttnGAN. By introducing the attention mechanism, this model successfully matched keywords with the corresponding image features. Their interpolation experiments indicated that the model could correctly render image features according to the selected keywords. The model works remarkably well in translating bird and flower descriptions. However, such descriptions are mostly just one sentence; when descriptions span more sentences, the efficacy of the text encoder deteriorates because the attention map becomes harder to train.
II-B Text-to-Face Synthesis
Compared to the number of works in TTI, published works in TTF are far fewer. The main reason is that a face description has a much weaker connection to facial features than, say, a bird or flower description has to its image. Typically, the descriptions of birds and flowers are mostly about the colour of feathers and petals, whereas descriptions of faces can be much more complicated, covering gender, age, ethnicity, pose, and other facial attributes. Moreover, most TTI models are trained on Oxford-102, CUB, and COCO, which are not face image datasets. On the other hand, the only suitable face dataset is Face2Text, which has just five thousand paired samples, which is not sufficient for training a satisfactory model.
Despite all the challenges mentioned above, there are still several inspiring works on text-to-face synthesis. In a project named T2F, Akanimax proposed to encode the text descriptions into a summary vector using an LSTM network, with ProGAN adopted as the generator. Unfortunately, the final output images exhibited poor image quality. Later, the author improved the work, named T2F 2.0, by replacing ProGAN with MSG-GAN. As a result, image quality and image-text consistency improved considerably, but the output showed low diversity in facial appearance. To address the data-scarcity issue, O.R. Nasir et al. proposed to utilise the labels of CelebA to automatically produce structured pseudo text descriptions. In this way, the samples in the dataset are paired with sentences containing the positive feature names separated by conjunctions and punctuation. The results are 64×64 pixel images showing a certain degree of diversity in appearance. The best output image quality so far comes from a model that also adopted the structure of AttnGAN, and it therefore shares the text-encoding issues mentioned previously.
II-C Feature-Disentangled Latent Space
Conventionally, a generator produces random images from noise vectors sampled from a normal distribution. However, we wish to control the rendering of the images in response to the feature labels. To do this, Chen et al. proposed to disentangle the desired features by maximising the mutual information I(c; G(z, c)) between the latent code c of the desired features and the generated output. In their experiments, they introduced a variational distribution Q(c|x) to approximate the intractable posterior P(c|x). The latent code was shown to capture interpretable information: changing the value in a certain dimension changes the corresponding attribute. However, the latent code in that work has only 3 or 4 dimensions, whereas we require 40 features, which is much more complicated. Later, Karras et al. established a novel style-based generator architecture, named StyleGAN, which does not take the noise vector directly as input like previous works. Instead, the input vector is mapped into an intermediate latent space through a non-linear network of eight fully connected layers before being fed into the generator network. A benefit of this setting is that the latent space does not have to support sampling according to any fixed distribution. In other words, we have more freedom to combine the desired features.
III Proposed Method
Our proposed model, named TTF-HD, comprises a multi-label text classifier, an image encoder, and a generator, as shown in Fig. 2. Details are discussed in the following subsections.
III-A Multi-Label Text Classification
To conduct the TTF task, it is vitally important to have sufficient facial-attribute labels to describe a face. We use the CelebA dataset, which includes 40 facial-attribute labels for each face. To map free-form natural-language descriptions to the 40 facial attributes, we fine-tune a multi-label text classifier that outputs text embeddings of length 40. With these considerations, we adopt the state-of-the-art natural language processing model BERT (Bidirectional Encoder Representations from Transformers). Because this is a 40-class classification task, we choose the large BERT network for its stronger ability to fit high-dimensional training data. Some features have different names for their opposites: for example, when training the model, the feature "age" could be represented by either "young" or "old", where "young" maps to a value close to 0 and "old" to a value close to 1. If a feature is not specified, it is set to 0. This process is shown in Fig. 3. Finally, the classifier outputs a text vector of length 40 for each description.
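The label encoding above can be illustrated with a minimal sketch. The attribute names and the keyword-to-attribute map below are hypothetical stand-ins for the trained BERT classifier, using only 4 of the 40 attributes; the point is how opposite phrasings share one axis and unspecified attributes stay at zero.

```python
# Minimal sketch of the 40-dim label encoding; 4 illustrative attributes.
ATTRS = ["age", "gender", "smiling", "eyeglasses"]

# Opposite phrasings map onto the same axis with values near 0 or 1,
# e.g. "young" -> age ~ 0 and "old" -> age ~ 1, as described above.
KEYWORD_TO_LABEL = {
    "young": ("age", 0.0),
    "old": ("age", 1.0),
    "man": ("gender", 1.0),
    "woman": ("gender", 0.0),
    "smiling": ("smiling", 1.0),
    "glasses": ("eyeglasses", 1.0),
}

def encode_description(text):
    """Return a label vector; unspecified attributes stay at 0."""
    vec = {attr: 0.0 for attr in ATTRS}
    for word in text.lower().replace(",", " ").split():
        if word in KEYWORD_TO_LABEL:
            attr, value = KEYWORD_TO_LABEL[word]
            vec[attr] = value
    return [vec[a] for a in ATTRS]

print(encode_description("a smiling old woman"))  # -> [1.0, 0.0, 1.0, 0.0]
```

In the real system, BERT produces these 40 values as probabilities rather than through keyword matching, so paraphrases of the same attribute also land on the correct axis.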
Note that the text classifier has one advantage over the traditional text encoders of previous works: there is no restriction on the length of the text descriptions. Previous text encoders mostly limit the input to one or two sentences, but face descriptions are longer than bird and flower descriptions, which makes traditional text encoders less appropriate.
III-B Image Multi-Label Embeddings
In the proposed framework, an image encoder is required to predict the feature labels of the generated images. To do this, we fine-tune a MobileNet model with samples from CelebA. We choose MobileNet because it is a light-weight network with a good trade-off between accuracy and speed. With this model, we obtain image embeddings of the same length as the text vectors for the images generated from the noise vectors.
III-C Feature Axes
After training the image encoder, we can find the relationship between the noise vectors and the predicted feature labels by logistic regression. The noise vectors have length 512 (z ∈ R^512) and the feature vectors have length 40 (l ∈ R^40). Therefore, we can obtain:

l = A^T z,    (1)

where A is a matrix to be solved for, with dimension 512×40.
This matrix needs to be orthogonalised because we must disentangle all the attributes so that a noise vector can move along one feature axis without affecting the others. In the Gram-Schmidt process, the projection operator is:

proj_u(v) = (<v, u> / <u, u>) u,    (2)

where v is the axis to be orthogonalised and u is the reference axis. Then, we can obtain:

u_k = a_k - sum_{j=1}^{k-1} proj_{u_j}(a_k).    (3)

After (3), each resulting axis is normalised to unit length so that the matrix of axes becomes orthonormal.
After these steps, we get the feature axes which are used to guide the update direction of the input noise vectors to obtain the desired features in the output images.
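The axis construction above can be sketched in numpy. This is a sketch under stated assumptions, not the paper's exact implementation: ordinary least squares stands in for the logistic regression, and the data is random, so only the shapes (512-d noise vectors, 40 labels) and the orthogonality of the result are meaningful.

```python
import numpy as np

# Fit the 512x40 axis matrix A from (noise vector, predicted label) pairs.
rng = np.random.default_rng(0)
Z = rng.standard_normal((1000, 512))   # noise vectors z
L = rng.random((1000, 40))             # predicted labels in [0, 1]

# Solve L ~= Z A in the least-squares sense; one column of A per attribute.
A, *_ = np.linalg.lstsq(Z, L, rcond=None)

def gram_schmidt(A):
    """Orthonormalise the columns of A (modified Gram-Schmidt)."""
    U = np.zeros_like(A)
    for k in range(A.shape[1]):
        v = A[:, k].copy()
        for j in range(k):
            v -= (v @ U[:, j]) * U[:, j]   # subtract proj_{u_j}(a_k)
        U[:, k] = v / np.linalg.norm(v)    # normalise, per (3)
    return U

U = gram_schmidt(A)
print(np.allclose(U.T @ U, np.eye(40)))    # -> True: orthonormal axes
```

With orthonormal columns, moving z along one column of U leaves its projections onto the other feature axes unchanged, which is exactly the disentanglement property the framework needs.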
III-D Noise Vector Manipulation
Manipulating the noise vectors is vital to our work because this determines whether the output images will have the features described in the text corpus. In the model diagram, Fig. 2, this is the process of changing the random noise vector from z to z' by (4):

z' = z + U w,    (4)

where U holds the orthonormal feature axes and w is a column vector that determines the direction and magnitude of the movement along the feature axes.
To ensure that the model will produce an image of desired features no matter where the noise vectors are in the latent space, we introduce four operations.
Differentiation. As shown in Fig. 2, the text classifier embedding is denoted l_t and the embedding predicted from the image generated by the initial random vector is denoted l_i. Intuitively, we could use l_t directly to guide the movement of noise vectors along the feature axes. However, the value range of l_t is [0, 1], so the model could not render features in opposite directions, say, young versus old, because there are no labels with negative values. To solve this, we guide the feature editing with differentiated embeddings obtained by (5):

dl = l_t - l_i.    (5)

In this way, the noise vectors can be moved in both the positive and negative directions along the feature axes, because the value range of the differentiated embeddings is [-1, 1]. For features with similar probability values in the text embeddings and the image embeddings, the values cancel out and those features will not be rendered repeatedly in the output images. This operation is shown in Fig. 2: for each feature, depending on its probability level in l_t and l_i, the movement direction can be positive, negative, or cancelled out.
Note that, to minimise interference from features unspecified in the text descriptions, we do not apply the differentiation operation to such features and keep their values at zero in the differentiated embeddings.
Reweighting. In the differentiated embeddings, labels whose values approach -1 or 1 correspond to the specified features, which the text descriptions may specify in a positive or negative way. The remaining labels may take intermediate values between -1 and 1 that interfere with the desired feature rendering. Therefore, we give higher weights to the values of the specified features. To do this, we map the value range of the differentiated embeddings from [-1, 1] to [-pi/3, pi/3] and then take the tangent of every element of the mapped embeddings. As a result, values approaching the ends of the range receive higher weights. In our scenario, the reweighted value range is [-sqrt(3), sqrt(3)].
Normalisation. As the noise vectors are sampled from a normal distribution, they are most likely to be sampled near the origin, where the probability density is high. However, the more steps we move a vector along different feature axes, the larger its distance from the origin may become, which leads to more artifacts in the generated images. We therefore renormalise the vectors after every movement along the axes, measuring this distance as the l2 distance. For the moved noise vector z', we obtain z'' with (6):

z'' = sqrt(d) * z' / ||z'||_2,  d = 512.    (6)
Feature lock. To make the face-morphing process more stable, we apply a feature lock every time we move the vectors along a certain axis. In other words, the model only uses the axes along which the vectors have already been moved as basis axes when disentangling the next feature axis. For the axes of attributes unspecified in the textual descriptions, the movement direction and step size are not fixed, which ensures diversity of the generated images. In this way, the noise vectors are locked only in terms of the features mentioned in the descriptions.
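One manipulation step can be sketched in numpy. This is a hedged sketch, assuming the [-pi/3, pi/3] tangent reweighting, a step size of 1.2, and renormalisation to the expected norm sqrt(512); the embeddings, the set of specified features, and the axis matrix here are random illustrations, not real classifier outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
# A random orthonormal 512x40 axis matrix standing in for the fitted axes.
U = np.linalg.qr(rng.standard_normal((512, 40)))[0]

l_text = rng.random(40)   # text-classifier embedding, values in [0, 1]
l_img = rng.random(40)    # image-encoder embedding of the current face
specified = np.zeros(40)
specified[[2, 5, 8]] = 1  # features the description mentions (illustrative)

# 1) Differentiation: only specified features get a non-zero direction.
delta = (l_text - l_img) * specified            # values in [-1, 1]

# 2) Reweighting: tan maps [-pi/3, pi/3] onto [-sqrt(3), sqrt(3)],
#    stretching values near the ends of the range.
w = np.tan(delta * np.pi / 3)

# 3) Move the noise vector along the feature axes with step size 1.2.
z = rng.standard_normal(512)
z_moved = z + U @ (1.2 * w)

# 4) Normalisation: pull the vector back to the expected norm sqrt(512).
z_new = np.sqrt(512) * z_moved / np.linalg.norm(z_moved)

print(round(float(np.linalg.norm(z_new) ** 2)))  # -> 512
```

The feature-lock step is omitted here; in the full framework, already-moved axes are held as the basis while the next feature axis is disentangled.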
III-E High-Resolution Generator
The generator we use is a pre-trained StyleGAN2 model. Building on the mapping of normally distributed noise vectors into an intermediate latent space, StyleGAN2 reduces small artifacts by revisiting the network structure. With this generator, the model not only synthesises high-resolution images but also renders the desired features from the manipulated input vectors.
IV Experimental Evaluation
Dataset. The dataset we use is CelebA, which contains over 200k face images. Each sample has a paired binary label vector of length 40 and a paired text description of roughly 10 sentences. Some descriptions contain redundant sentences, but every description covers all the features that the paired label vector indicates. We use this dataset to fine-tune the pre-trained multi-label text classifier and the pre-trained image encoder.
Experimental setting. In our evaluation experiments, we randomly choose 100 text descriptions, and for each of them the model randomly generates 10 images, so the test set has 1000 images in total. As the experiments show, significant image morphing occurs when the noise vector moves a distance of about two units along a feature-disentangled axis. Thus, we set the step size to 1.2, which multiplies the reweighted output of the differentiated vector. This guarantees a final movement along the axis of around 2 (1.2 × sqrt(3) ≈ 2.08).
IV-A Qualitative Evaluation
Image quality. Fig. 1 also shows the paired descriptions for each group. We can see that most of the generated images are correctly rendered with the described features.
Image diversity. To show that the proposed method has strong feature-generalisation ability, we conduct image synthesis conditioned on single-sentence descriptions. In other words, apart from the key features the sentence refers to, the model should generalise the other features in the output. As Fig. 4 shows, for each single-sentence description, the proposed model produces images with high diversity.
IV-B Quantitative Evaluation
In this section, we use three metrics to evaluate the three criteria above, respectively: Inception Score (IS), used in many previous works, for image quality; Learned Perceptual Image Patch Similarity (LPIPS) for the diversity of the generated images; and Cosine Similarity (CS), widely used in natural language processing to measure the similarity of two chunks of a corpus, for text-to-face consistency. Because source code is unavailable for most works in the TTF area, such as T2F 2.0, we compare our experimental results with the TTF implementation of AttnGAN.
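The cosine-similarity metric is straightforward to state precisely; the sketch below compares a text embedding against the image encoder's embedding of a generated face. The embedding values are illustrative, not from the trained models.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative 4-dim embeddings: text vs. predicted image labels.
text_emb = [1.0, 0.0, 1.0, 0.0]
img_emb = [0.9, 0.1, 0.8, 0.0]
print(round(cosine_similarity(text_emb, img_emb), 3))  # -> 0.995
```

A value near 1 means the generated face's predicted attributes point in the same direction as the attributes requested by the text.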
Table I reports these metrics; the asterisk marks the maximum for each group.
IV-C Ablation Study
In Section 3, we propose four operations to manipulate the noise vector to get the desired features. In this subsection, we conduct the ablation study and discuss the effects of the different operations applied.
To conduct the ablation study, we use five experimental settings. We choose one face description and produce 100 random images under each setting. Then we use the above three metrics to evaluate the effect of the different operations.
Fig. 5 shows the morphing process of the generated images. With the proposed four manipulation operations, Group A finally obtains an output with all the desired features, whereas the final morphed images of the other groups all suffer from artifacts in the rendering of the face and the background. This is because, after too many feature-axis movement steps, the noise vector has moved into a low-density region of the latent-space distribution, which also leads to mode collapse.
Table II shows the quantitative evaluation metrics for the different groups of TTF-HD; the asterisk marks the maximum for each group. We can see that Group A has the best diversity score as well as the second-best IS and CS scores. This suggests that applying all operations gives a good trade-off between image quality, text-to-face similarity, and diversity.
V Conclusion
In this paper, we set three main goals for the text-to-face synthesis task: 1) high image resolution; 2) good text-to-image consistency; and 3) high image diversity. To this end, we propose a model, named TTF-HD, comprising a multi-label text classifier, an image encoder, a high-resolution image generator, and feature-disentangled axes. The qualitative and quantitative experimental results show that the generated images have good image quality, text-to-image similarity, and image diversity.
However, the model is still not entirely robust. In any batch, some images are far more consistent with the text descriptions than others. This is possibly caused by insufficient accuracy of the text classifier and the image encoder due to a lack of training data. In addition, the features in the latent space are still not perfectly disentangled, so that moving the noise vector along one feature axis may also change other features that are highly correlated with it. These issues need to be addressed in future research.
[1] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 2014 (pp. 2672-2680).
[2] Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017 (pp. 1125-1134).
[3] Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision 2017 (pp. 2223-2232).
[4] Wang T, Zhang T, Liu L, Wiliem A, Lovell B. CannyGAN: Edge-preserving image translation with disentangled features. In 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019 (pp. 514-518). doi: 10.1109/ICIP.2019.8803828.
[5] Zhang T, Wiliem A, Yang S, Lovell B. TV-GAN: Generative adversarial network based thermal to visible face recognition. In 2018 International Conference on Biometrics (ICB) 2018 (pp. 174-181). IEEE.
[6] Reed S, Akata Z, Yan X, Logeswaran L, Schiele B, Lee H. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. 2016.
[7] Zhang H, Xu T, Li H, Zhang S, Wang X, Huang X, Metaxas DN. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision 2017 (pp. 5907-5915).
[8] Xu T, Zhang P, Huang Q, Zhang H, Gan Z, Huang X, He X. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018 (pp. 1316-1324).
[9] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 2018.
[10] Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. 2017.
[11] Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision 2015 (pp. 3730-3738).
[12] Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T. Analyzing and improving the image quality of StyleGAN. arXiv preprint arXiv:1912.04958. 2019.
[13] Nilsback ME, Zisserman A. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing 2008 (pp. 722-729). IEEE.
[14] Welinder P, Branson S, Mita T, Wah C, Schroff F, Belongie S, Perona P. Caltech-UCSD Birds 200.
[15] Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL. Microsoft COCO: Common objects in context. In European Conference on Computer Vision 2014 (pp. 740-755). Springer, Cham.
[16] Gatt A, Tanti M, Muscat A, Paggio P, Farrugia RA, Borg C, Camilleri KP, Rosner M, Van der Plas L. Face2Text: Collecting an annotated image description corpus for the generation of rich face descriptions. arXiv preprint arXiv:1803.03827. 2018.
[17] Karnewar A. T2F: Text-to-face generation using deep learning. Blog post: https://medium.com/@animeshsk3/t2f-text-to-face-generation-using-deep-learning-b3b6ba5a5a93
[18] Karras T, Aila T, Laine S, Lehtinen J. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. 2017.
[19] Karnewar A, Iyengar RS. MSG-GAN: Multi-scale gradients GAN for more stable and synchronized multi-scale image synthesis. arXiv preprint arXiv:1903.06048. 2019.
[20] Nasir OR, Jha SK, Grover MS, Yu Y, Kumar A, Shah RR. Text2FaceGAN: Face generation from fine grained textual descriptions. In 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM) 2019 (pp. 58-67). IEEE.
[21] Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019 (pp. 4401-4410).
[22] Zhang R, Isola P, Efros AA, Shechtman E, Wang O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018 (pp. 586-595).
[23] Chen X, Qing L, He X, Luo X, Xu Y. FTGAN: A fully-trained generative adversarial network for text to face generation. arXiv preprint arXiv:1904.05729. 2019.
[24] Chen X, Duan Y, Houthooft R, Schulman J, Sutskever I, Abbeel P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems 2016 (pp. 2172-2180).
[25] Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 2016 (pp. 2234-2242).