Inducing Optimal Attribute Representations for Conditional GANs

03/13/2020 ∙ by Binod Bhattarai, et al. ∙ Imperial College London

Conditional GANs are widely used to translate an image from one category to another. Meaningful conditions to GANs provide greater flexibility and control over the nature of the target-domain synthetic data. Existing conditional GANs commonly encode target-domain label information as hard-coded categorical vectors of 0s and 1s. The major drawbacks of such representations are the inability to encode high-order semantic information of the target categories and their relative dependencies. We propose a novel end-to-end learning framework with Graph Convolutional Networks to learn the attribute representations with which to condition the generator. The GAN losses, i.e. the discriminator and attribute classification losses, are fed back to the graph, resulting in synthetic images that are more natural and have clearer attributes. Moreover, prior arts give priority to conditioning the generator side rather than the discriminator side of GANs; we apply conditions to the discriminator side as well, via multi-task learning. We enhance four state-of-the-art cGAN architectures: Stargan, Stargan-JNT, AttGAN and STGAN. Our extensive qualitative and quantitative evaluations on the challenging face attribute manipulation data sets CelebA, LFWA and RaFD show that the cGANs enhanced by our method outperform their counterparts and other conditioning methods by a large margin, in terms of both target attribute recognition rates and quality measures such as PSNR and SSIM.






1 Introduction

Someone buying bread is likely to also buy butter; a blue sky comes with a sunny day. Similarly, some facial attributes co-occur more frequently than others. Fig. 1 shows the co-occurrence probabilities of facial attributes. We see that some pairs of attributes, such as wearing lipstick and male, rarely co-occur (0.01), while male and bald are highly correlated (1.0).

Face attribute manipulation using GANs [9, 16, 26, 42, 48, 7, 46, 3, 23] is one of the most challenging and popular research problems. Since the advent of the conditional GAN [31], several variants of conditional GANs (cGANs) have been proposed. As conditions, existing methods rely on target-domain one-hot vectors [9, 16, 25, 33], synthetic model parameters of target attributes [11], facial action units [39], or key point landmarks [31], to mention a few. Recently, [26] proposed to use the difference of the target and source attribute one-hot vectors [9]. This trick alone boosts the Target Attributes Recognition Rate (TARR) on synthetic data by a large margin compared to [9]. Another recent study on GANs [35] identified that the conditioning of a GAN is correlated with its performance. The major limitations of existing cGANs for arbitrary multiple face attribute manipulation [9, 16, 26, 37, 25] are: the hard-coded 1/0 form, treating every attribute as equally different, and ignoring the co-existence of attributes. In reality, as we can see in Fig. 1, some attributes are more correlated than others. Moreover, existing methods give little attention to conditioning the discriminator side beyond minimising the cross-entropy loss of the target attributes.

Recently, [28] identified the problem of artefacts in synthetic examples caused by unnatural transitions from source to target. This problem arises because existing GANs ignore the co-existence of certain subsets of attributes. To overcome this, they propose a hard-coded method to condition both the target attribute (aging) and its associated attributes (gender, race) on the generator and also on the discriminator, in order to faithfully retain them after translation. However, this approach is limited to a single attribute, and it is infeasible to hard-code such rules in a case like ours, where multiple arbitrary attributes are manipulated simultaneously. Another recent study on GANs [5] identified the forgetting problem of the discriminator due to the non-stationary nature of the data coming from the generator; applying a simple structural identification loss (rotation angle prediction) helped improve the performance and stability of the GAN.

Figure 1: Co-occurrence matrix of facial attributes (zoom in for a better view).

To address the above-mentioned challenges of cGANs, we investigate a few continuous representations, including semantic label embeddings (word2vec) [30] and attribute model parameters (attrbs-weights) (see Sec. 4). The advantages of conditioning with such representations instead of 0s and 1s are mainly two-fold: i) they carry high-order semantic information, and ii) they establish relative relationships between the attributes. These representations are, however, still sub-optimal and less natural, as they are computed offline and do not capture the tendency of different face attributes to co-exist. Thus, we propose a novel conditioning method for cGANs that induces higher-order semantic representations of target attributes, automatically embedding inter-attribute co-occurrence and sharing information based on the degree of interaction. To this end, we propose to exploit the attribute co-occurrence probabilities and apply a Graph Convolutional Network (GCN) [24] to condition the GAN. A GCN is capable of generating dense vectors distilling higher-dimensional data and of convolving over un-ordered data [15]. The conditioning parameters, i.e. those of the GCN, are optimised via the discriminator and attribute classification losses of the GAN in an end-to-end fashion. In order to maintain the semantic structural relationship of the attributes at the discriminator side as well, we adapt online multitask learning objectives [4] constrained by the same co-occurrence matrix. The experiments show that the proposed method substantially improves state-of-the-art cGAN methods and other conditioning methods in terms of target attribute classification rates and PSNR/SSIM. The images synthesised by our method exhibit associated multi-attributes and/or clearer target attributes. Details of the method are explained in Sec. 3, following the literature review in Sec. 2; experimental results and conclusions are presented in Sec. 4 and Sec. 5.

2 Related Works

Conditional GANs. After the seminal work of Mirza et al. [31] on conditional GANs (cGANs), several variants have been proposed to solve different vision tasks, such as conditioning on target category labels in addition to the input [35, 6, 9, 16, 26, 50], on semantic text or layout representations [40, 49, 17, 51], on images [19, 18, 27], or on facial landmarks [47]. These works highlight the importance of semantic representations of the target domain as a condition. In this work, we focus on conditioning on target category labels, especially via continuous and semantic representations. [20] proposes multiple strategies for random continuous representations to encode the target label but is limited to single attributes. [10] extended similar approaches to arbitrary attributes, but their representations lack semantics. [21] proposes to use a decision tree to generate hierarchical codes to control the target attributes. These are some of the works related to ours in terms of inducing continuous representations. Recent works on cGANs for face attribute manipulation [9, 37, 25, 16] encode conditions in the form of 0s and 1s, or their difference [26]. These representations are hard-coded. STGAN [26] also proposes conditional control of the information flow from source to target in the intermediate layers of the GAN; this is one of the closest works in terms of adaptively conditioning on target information. Other cGANs, such as StyleGAN [23], propose to provide the condition to the intermediate layers of the generator. Progressive GAN [22] proposed to gradually grow the parameters of the generator and discriminator to generate high-quality images. Our method is orthogonal to this line of work and can be combined with it for higher quality. Recently, an attribute-aware age progression GAN [28] proposed to condition both associated attributes and target attributes on both the generator and discriminator sides. This work is the closest in terms of conditioning on both sides and retaining the auxiliary attributes in addition to the target attribute. However, this approach is limited to single-attribute manipulation, i.e. aging, whereas our method supports multiple attributes. Also, their method is hard-coded, whereas ours is automatic and directly inferred from the co-occurrence matrix.
Graph Convolutional Networks (GCNs) [24] are becoming popular in several tasks, including link prediction [12], clustering [38] and node classification [45]. Recent works on image classification [8] and face attribute classification [34] propose to use a GCN to induce more discriminative representations of attributes by sharing information between co-occurring attributes. Unlike these works, we propose to apply a GCN to induce such higher-order representations of target categories for generative neural networks and to optimise it via end-to-end adversarial learning. To the best of our knowledge, this is the first work to use such embeddings as conditions in cGANs. For more details on work based on graph CNNs, we refer the reader to the respective surveys.

Regularizing/Conditioning the Discriminator. Conditioning the discriminator side has been shown to be useful for generating more realistic and diverse images [33, 32, 5, 36]. [35] maximises the distribution of the target label in addition to the source distribution to improve the quality of synthetic images. [5] introduced a rotation loss on the discriminator side to mitigate the forgetting problem of the discriminator. Projecting the target conditional vector onto the penultimate representation of the input at the discriminator side [33] substantially improved the quality of synthetic images. Another work, spectral normalisation of the weight parameters of every layer of the discriminator [32], stabilises the training of GANs. Recent works on face attribute manipulation [9, 16, 26] minimise the target label cross-entropy loss on the discriminator. In this work, we introduce conditioning of the discriminator with a multi-task learning framework while minimising the target attribute cross-entropy loss.

3 Proposed Method

3.1 Overview of the Pipeline

Fig. 2 shows the schematic diagram of the proposed method, where both the generator and the discriminator are conditioned. As mentioned in Sec. 1, existing cGAN arts such as Stargan [9], AttGAN [16] or STGAN [26] condition the generator, which can be done either at its encoder or its decoder part, to synthesise an image with the intended attributes. The problem with their conditions is that they ignore the intrinsic properties of the attributes and their relationships: they use a single digit (0 or 1) per attribute and treat every attribute as equally similar to every other. In Fig. 2, the graph on the generator side represents the attributes and their relationships. Each node in the graph carries a higher-order semantic representation of an attribute, and the edges between nodes represent their relationships. We propose to induce attribute representations which encode attribute properties, including their relations, based on how they co-occur in real-world scenarios. To induce such representations, we apply a GCN [24] with convolutional layers on the generator side. The graph is optimised via end-to-end learning of the entire system of networks. The discriminator and the attribute classifier guide the graph learning such that the learnt conditional representations help the generator synthesise more realistic images and preserve target attributes. Such semantically rich, higher-order conditional representations of the target attributes play an important role in a natural transition to the target attribute. This helps to synthesise images with fewer artefacts, improved quality and better contrast, as also partially observed in StackGAN.


We also condition the parameters of the attributes on the discriminator side using a multi-task learning framework, similar to [4], based on the co-occurrence matrix. Using the learnt representation, i.e. the graph, to condition the discriminator might also be possible via EM-like alternating optimisation; however, due to its complexity and instability, it is not considered in this work. Conditioning both the target and its associated attributes on the generator and on the discriminator enables a GAN to retain the target as well as the associated attributes faithfully [28]. Unlike [28], which is hard-coded and limited to a single attribute, our method is automatic and supports arbitrary multiple attributes. See Section 3.3 for more details. Before diving into the details of the proposed method, we first introduce the attribute co-occurrence matrix, which is exploited in both the generator and the discriminator of the proposed method.

Figure 2: Schematic diagram showing the pipeline of the proposed method. Each node of the graph represents an attribute, and the edges between them encode their co-occurrence as defined by the co-occurrence matrix. The GCN induces the higher-order representations of the attributes, which are further scaled by (t-s) to generate the final conditioning matrix. We concatenate this with the latent representation of the input image and feed it to the decoder of the generator. At the discriminator, we apply Multi-Task Learning (MTL) to share weights between the tasks, constrained by the co-occurrence matrix. During end-to-end learning, we back-propagate the error to the induced representations to fine-tune them. We maintain the color codes among the attributes (best viewed in color).

Co-occurrence matrix: To capture the relationship between attributes based on how frequently they occur together, we construct a co-occurrence matrix M of size K x K, where K is the total number of attributes. The value at position (i, j) of the matrix approximates the probability of attribute a_i occurring given attribute a_j. Fig. 1 shows the co-occurrence matrix. We approximate this probability from the training data set as in Eqn. 1:

M_ij = P(a_i | a_j) = N_ij / N_j,    (1)

where N_ij is the number of training images in which attributes a_i and a_j both occur, and N_j is the number of training images in which a_j occurs.
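As a concrete illustration, the co-occurrence statistics can be estimated from binary attribute annotations in a few lines of NumPy. The function and variable names below are our own illustration, not the paper's code:

```python
import numpy as np

def co_occurrence_matrix(labels):
    """Estimate M[i, j] ~= P(attribute i | attribute j) from 0/1 annotations.

    labels: (num_images, K) array of binary attribute annotations.
    Returns a (K, K) matrix; note it is asymmetric in general.
    """
    labels = np.asarray(labels, dtype=np.float64)
    joint = labels.T @ labels           # joint[i, j] = #images with both i and j
    counts = labels.sum(axis=0)         # counts[j]   = #images with attribute j
    return joint / np.maximum(counts, 1.0)[None, :]   # divide column j by N_j

# Toy example with K = 3 attributes over 3 images.
toy = np.array([[1, 1, 0],
                [1, 0, 0],
                [1, 1, 1]])
M = co_occurrence_matrix(toy)
```

Note the conditioning direction: each column j is normalised by the count of attribute a_j, which is why the matrix is asymmetric (P(lipstick | male) differs from P(male | lipstick)).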


3.2 Graph Convolution and Generator

As stated before, we propose to learn the representations via a GCN [24], which we simultaneously use to condition the generator. The representations take the form of a graph, G = (V, E). In our case there are K different facial attributes, so the graph has K nodes in total. Each node, also called a vertex of the graph, holds the initial representation of its attribute, and the edges of the graph encode the relationships between attributes; in our case, the co-occurrence probability. Note that the co-occurrence matrix M is asymmetric in nature. The graph is constructed from M and the initial continuous representations of the attributes. In Fig. 2, we show a single undirected edge between two nodes for clarity. The thickness of an edge is proportional to the probability of co-occurrence.

The goal of the GCN [24, 52] is to learn a function on the graph G which takes the initial node representations H^(0) and an adjacency matrix A, which we derive from the co-occurrence matrix M, as inputs, and updates the node features to H^(l+1) after passing them through each convolutional layer. Every GCN layer can be formulated as

H^(l+1) = sigma( D^(-1/2) (A + I) D^(-1/2) H^(l) W^(l) ),    (2)

where W^(l) is a transformation matrix learned during training, D is the diagonal degree matrix of A + I, and sigma denotes a non-linear operation, which is LeakyReLU [29] for our purpose. The representations of the attributes induced by the final convolutional layer of the GCN, denoted H^(L), are the condition on the generator side, used as a cue to synthesise new images. Graph convolutions combine and average information from the first-order neighbourhood [24, 14] to generate higher-order representations of the attributes. From Eqn. 2, we can see that the node representations at layer l+1 are induced by adding and averaging the representations of a node itself and its neighbours from layer l. The sharing of information between neighbouring nodes is controlled by the co-occurrence matrix.
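A single propagation step of this form can be sketched in NumPy as follows; the names and dimensions are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def gcn_layer(H, A, W, alpha=0.2):
    """One graph convolution: LeakyReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    K = A.shape[0]
    A_tilde = A + np.eye(K)                    # add self-connections
    d = A_tilde.sum(axis=1)                    # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalisation
    Z = A_hat @ H @ W                          # aggregate neighbours, transform
    return np.where(Z > 0, Z, alpha * Z)       # LeakyReLU

rng = np.random.default_rng(0)
K, d_in, d_out = 5, 8, 4                       # toy graph: 5 attributes
A = rng.random((K, K))                         # e.g. a co-occurrence-derived adjacency
H0 = rng.standard_normal((K, d_in))            # initial node representations
W0 = rng.standard_normal((d_in, d_out))
H1 = gcn_layer(H0, A, W0)                      # induced node representations
```

Stacking two such layers, as in our implementation, lets information propagate between second-order neighbours as well.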
Generator. The higher-order representations of the target attributes induced by the graph convolution operations are fed into the generator along with the input image. A recent study [26] has shown that the difference of the target and source one-hot attribute vectors helps to generate synthetic images with a higher rate of target attribute preservation in comparison to the standard target one-hot vectors [9, 16]. We propose to feed the generator with the graph-induced representations of the attributes scaled by the difference of the target (t) and source (s) one-hot attribute vectors, i.e. each attribute's representation (row of H^(L)) is multiplied by the corresponding entry of (t - s), giving a matrix containing the final representations of the target attributes which we feed to the generator, as shown in Fig. 2. Given an input image and this matrix of continuous target attribute representations, we learn the parameters of the generator to produce a new image in an adversarial manner. The generator usually consists of both an encoder and a decoder, or only a decoder with a few convolutional layers. The conditions are either concatenated with the image at the encoder side or concatenated with the image's latent representation at the input of the decoder. As mentioned, our approach is agnostic to the architecture of the GAN; hence, the induced representations from our approach can be fed into the encoder [9] or decoder [16, 26] of the generator. In Fig. 2, we present a diagram where the target attribute conditioning representations are fed into the decoder part of the generator, similar to Attgan [16] and STGAN [26]: we flatten the conditioning matrix, concatenate it with the latent representation of the input image generated by the encoder, and feed it to the decoder. In contrast, in the Stargan [9] case, each column of the matrix is duplicated spatially to match the dimensions of the input RGB image and concatenated with the RGB channels.
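The scaling by the target/source difference and the concatenation with the image latent code can be sketched as below; shapes and names are illustrative only:

```python
import numpy as np

K, d = 5, 4                                  # attributes x embedding size
H = np.ones((K, d))                          # stand-in for the GCN output H^(L)
t = np.array([1, 0, 1, 0, 0])                # target attribute one-hot vector
s = np.array([1, 1, 0, 0, 0])                # source attribute one-hot vector

# Scale each attribute's representation by (t - s): unchanged attributes
# (difference 0) are zeroed out, added attributes keep +1, removed ones get -1.
A_cond = H * (t - s)[:, None]                # (K, d) conditioning matrix

z = np.zeros(16)                             # stand-in for the encoder's latent code
decoder_input = np.concatenate([z, A_cond.ravel()])   # flatten and concatenate
```

This makes explicit why the difference mode helps: attributes that should not change contribute a zero condition, so the decoder only receives signal for attributes that must be added or removed.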

Loss Functions and End-to-end Learning. The overall loss for the generator is

L_G = L_adv + lambda_cls L_cls + lambda_rec L_rec,

where L_adv, L_cls and L_rec are the adversarial loss, the classification loss and the reconstruction loss, respectively, and lambda_cls, lambda_rec represent the hyper-parameters. We minimise the adversarial loss to make the generated image indistinguishable from real data; the generator and discriminator compete with each other in an adversarial manner. Here, theta_sh are the parameters of the convolutional layers shared by the discriminator and the attribute classifier, theta_cls are those of the penultimate layers of the classifier, and theta_adv are the parameters of the discriminator head. In our case, we use the WGAN-GP [1, 13] objective:

L_adv = E_x[ D(x) ] - E_x[ D(G(x, A)) ] - lambda_gp E_xhat[ ( || grad_xhat D(xhat) ||_2 - 1 )^2 ],

where xhat is sampled uniformly along straight lines between pairs of real and generated images.

The classification loss here is the standard binary cross-entropy loss on the target category labels, summed over attributes: L_cls = E[ -sum_k ( t_k log C_k(G(x, A)) + (1 - t_k) log(1 - C_k(G(x, A))) ) ], where the classifier C is formed by theta_sh and theta_cls. The reconstruction loss is computed by setting the target attributes equal to those of the source. This makes (t - s) a zero vector and, ultimately, the conditioning matrix turns into the zero matrix.

The combined loss above trains the generator, the discriminator, and the graph CNN in an end-to-end fashion. The optimal attribute representations are learnt to help generate realistic images (via the discriminator loss) and preserve target attributes (via the classification loss). In the process, multi-attribute relations are also embedded into the representations. The network considers an output more natural, i.e. realistic, when the output image exhibits the associated attributes as well as the target attribute.

Figure 3: Heat map showing cosine similarity between the initial (left) and final (right) representations of the nodes of the GCN.

3.3 Online Multitask Learning for Discriminator

While minimising the target attribute classification loss on the discriminator side, we propose to share weights between the model parameters of co-occurring attributes. We adapt online multitask learning for training multiple linear classifiers [4] to achieve this. The rate of weight sharing between the attribute model parameters is constrained by the attribute interaction matrix, which we derive from the co-occurrence matrix M.

As before, theta_sh are the parameters of the convolutional layers and theta_cls those of the penultimate layers of the classifier. We minimise the objective given in Eqn. 3 for target attribute classification with respect to the discriminator:

L_cls^D = L_bce(theta_sh, theta_cls) + lambda_mtl w^T ( L_hat (x) I_d ) w,    (3)

where w stacks the d-dimensional weight vectors of the per-attribute classifiers, (x) is the Kronecker product, I_d is the identity matrix, and L_hat is the graph Laplacian of the interaction matrix. The first term in Eqn. 3 is the standard binary cross-entropy loss. The second term is a regularizer which enforces similar model parameters for frequently co-occurring attributes by sharing their weights: the quadratic form equals a pairwise penalty, (1/2) sum_{i,j} A_hat_ij || w_i - w_j ||^2.


Note that the multi-task loss is computed on real data. Such updates induce similar model parameters for attributes which frequently co-occur, as defined by the co-occurrence matrix.
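The effect of such a co-occurrence-constrained regularizer can be checked numerically: a quadratic form with a Kronecker-lifted graph Laplacian equals the weighted pairwise distance between attribute weight vectors. The names below are illustrative, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 4, 3                                    # attributes x classifier dimension
A = rng.random((K, K))
A = (A + A.T) / 2                              # symmetric interaction matrix
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian of A

W = rng.standard_normal((K, d))                # per-attribute weight vectors w_i
w_bar = W.ravel()                              # stacked weight vector

quad = w_bar @ np.kron(L, np.eye(d)) @ w_bar   # w^T (L kron I_d) w
pairwise = 0.5 * sum(A[i, j] * np.sum((W[i] - W[j]) ** 2)
                     for i in range(K) for j in range(K))
```

Minimising `quad` therefore pulls the weight vectors of strongly interacting (frequently co-occurring) attributes towards each other, which is exactly the sharing behaviour described above.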

The multitask attribute classification loss above is used instead of the conventional single-task loss without parameter sharing. Sharing parameters between tasks has advantages over conventional methods: it enforces the discriminator to remember the semantic structural relationships between the attributes defined by the co-occurrence matrix. Such a constraint on the discriminator also helps to minimise the risk of forgetting [5] and to retain associated attributes [28]. We can also draw an analogy between our method and label smoothing. The difference from one-sided label smoothing [41], which softens the labels randomly, is that our approach is constrained by the meaningful co-occurrence matrix and regularises the attribute parameters by sharing weights. We train the whole system, i.e. the generator, the discriminator and the graph, end-to-end as in the previous section, replacing the binary classification loss with the multitask loss. Conditioning the discriminator side also helps improve the generator.

4 Experiments

Data Sets and Evaluation Metrics.

To evaluate the proposed method, we carried out our major experiments on CelebA, which has around 200K images annotated with 40 different attributes. In our experiments, we took 13 attributes, similar to [26], for face attribute editing. Similarly, LFWA is another benchmark; this data set contains around 13K images, each annotated with the same 40 attributes as CelebA. We train the model on a subset of the images and report the performance on the remaining examples. Finally, we use the RaFD data set, annotated with expressions, for attribute transfer from CelebA. This data set consists of images annotated with eight different facial expressions.

For quantitative evaluations, we employ Target Attributes Recognition Rate (TARR), PSNR and SSIM, which are commonly used quantitative metrics for conditional GANs [9, 26, 16, 43]. For cGANs, it is not sufficient for synthetic images merely to be realistic; being recognisable as the target class is also highly important [43]. Thus we compute TARR similarly to existing works [9, 26, 16]. TARR measures the recognition rate of attributes on synthetic data using a model trained on real data. We took a publicly available pre-trained attribute prediction network [26] (which plays the role that an Inception network plays for image classification evaluation) with a high mean accuracy on the different attributes of CelebA. Similarly, PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity) are two other popular evaluation metrics for GANs.
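For completeness, PSNR between a real and a synthetic image is computed as below (SSIM is more involved; implementations exist in scikit-image). This is a generic sketch, not the evaluation script used in the paper:

```python
import numpy as np

def psnr(real, fake, max_val=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((real.astype(np.float64) - fake.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                    # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.full((8, 8), 100.0)
noisy = img + 1.0                              # per-pixel error of 1 -> MSE = 1
```

A higher PSNR indicates that the synthetic image is closer to the real one in a pixel-wise sense; SSIM instead compares local luminance, contrast and structure.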

Compared Methods. To validate our idea, we compare the performance of our GCN induced representations (gcn-reprs) with wide ranges of both categorical and continuous types of target attributes encoding methods.
One-hot vector: As mentioned from the beginning, this is the most common and widely used conditioning technique for cGANs [9, 16, 25, 26]. Here, the presence or absence of a target attribute is encoded by 1 or 0, respectively.
Latent representations (latent-reprs): [10] proposed to represent the presence/absence of a target expression by a positive/negative fixed-dimensional normal random vector for expression manipulation.
Word2Vec: Word embeddings [30] encoding target-domain information have been successfully applied to synthesise images from text [40]. We represent target attributes by the embeddings of their attribute labels.
Co-occurrence: We use co-occurrence vectors as representations of the target attribute labels to obtain an approximate performance comparison with [28]. The rules of [28] are hard-coded, which is not feasible in our arbitrary-attribute manipulation case.
Attrbs-weights: We use attribute model parameters obtained from [26] to represent the target attributes.
As [26] demonstrated the effectiveness of conditioning on the difference of the target and source attribute one-hot vectors (t - s) compared to the target one-hot vector (t) alone, we mostly perform our experiments in the difference set-up. For convenience, we call conditioning on the difference of target and source the Difference (Diff) mode, and conditioning on the target only the Standard (Std) mode. We employed several types of attribute encoding in both modes and report their performance on multiple state-of-the-art GAN architectures (GAN Archs): Stargan [9], Attgan [16], STGAN [26] and Stargan-JNT [9]. In addition to this conditioning on the generator side, we also propose to apply Multi-Task Learning (MTL) on the discriminator side.
Implementation Details. We initialise the nodes of the graph with the model parameters of the attributes (weight vectors) obtained from pre-trained attribute classifiers [26]. The GCN has 2 convolutional layers. For all the data sets in our experiments, we pre-process and crop images to a fixed size.

Condition Mode | Condition Type  | Avg. TARR
Std            | one-hot vec     | 78.6
Diff           | one-hot vec     | 80.2
Diff           | co-occurrence   | 78.6
Diff           | word2vec        | 81.3
Diff           | attrbs-weights  | 81.9
Diff           | gcn-reprs       | 84.0
Table 1: Comparison of Avg. TARR for various types of attribute representations. For all experiments, the GAN architecture is Stargan [9].


Condition Type | Bald | Bangs | Black Hair | Blonde Hair | Brown Hair | Bushy Eyebrows | Eyeglasses | Mouth Slt. Open | Moustache | No Beard | Pale Skin | Male | Young | Average
one-hot vec | 24.4 | 92.3 | 59.4 | 68.9 | 55.7 | 50.1 | 95.7 | 96.1 | 18.8 | 66.6 | 84 | 77.1 | 83.9 | 67.2
one-hot vec + MTL | 22.7 | 95.4 | 63 | 62.3 | 51.9 | 58 | 99.2 | 98.7 | 24 | 52.2 | 90.5 | 83.7 | 86.8 | 68.3
Table 2: TARR on Stargan [9] synthetic data with and without MTL at the discriminator.

Quantitative Evaluations.
Ablation Studies:
We train Stargan [9] in both Std and Diff modes with various types of attribute encoding. We computed TARR on 5 different target attributes (hair color: black, blond, brown; gender: male/female; age: young/old) on CelebA, similar to the original paper. Tab. 1 summarises the performance comparison. Among the compared condition types (one-hot vec, co-occurrence, word2vec, attrbs-weights), attrbs-weights obtains the best performance; thus, we chose attrbs-weights to initialise the nodes of the GCN. Please note, the nodes can be initialised with any other type of representation. Referring to the same table, we observe the GCN-induced representations (gcn-reprs) outperforming all other compared methods.
Discussion: The semantic representations of attributes, word2vec and attrbs-weights, outperformed one-hot vec. Co-occurrence lagged behind the semantic representations, as it carries no visual information and is not optimised for arbitrary attribute manipulation; note that [28] designed a similar representation for a single target attribute. As we know, word2vec embeddings are learned from a large corpus and bear syntactic and semantic relationships between words [30]; they have also proven useful for attribute classification [2]. Attrbs-weights are equipped with higher-order visual information, including the spatial location of the attributes (see Fig. 4 of [44]). Finally, gcn-reprs benefit from both the semantic representations and the co-occurrence relationships. Thus, it is essential that conditions carry semantically rich, higher-order target attribute characteristics.
MTL at the discriminator: We evaluate our idea of applying MTL on the discriminator side. It is evident that, in MTL, interaction between the tasks is important; thus, we extend the number of target attributes to 13 for this experiment. Tab. 2 shows the performance comparison between the baseline with and without MTL. We observe an overall increase of 1.1 points (67.2 to 68.3) over the baseline. At the category level, MTL on the discriminator outperforms the baseline in 9 out of the 13 attributes.
Comparison with Existing Arts: To further validate our idea, we compare our method with several state-of-the-art GAN architectures using three different quantitative measurements, viz. TARR, PSNR and SSIM, on three different benchmarks.

GAN Arch. | Mode | Condition Type | Bald | Bangs | Black Hair | Blonde Hair | Brown Hair | Bushy Eyebrows | Eyeglasses | Mouth Slt. Open | Moustache | No Beard | Pale Skin | Male | Young | Average
IcGAN [37] | Std | one-hot vec | 19.4 | 74.2 | 40.6 | 34.6 | 19.7 | 14.7 | 82.4 | 78.8 | 5.5 | 22.6 | 41.8 | 89 | 37.6 | 43.2
FaderNet [25] | Std | one-hot vec | 1.5 | 5 | 27 | 20.9 | 15.6 | 24.2 | 87.4 | 44 | 10 | 27.2 | 11.1 | 48.3 | 20.3 | 29.8
Stargan [9] | Std | one-hot vec | 24.4 | 92.3 | 59.4 | 68.9 | 55.7 | 50.1 | 95.7 | 96.1 | 18.8 | 66.6 | 84 | 77.1 | 83.9 | 67.2
Stargan [9] | Diff | one-hot vec | 41.9 | 93.6 | 74.7 | 75.2 | 67.4 | 65.9 | 99 | 95.3 | 26.8 | 64.3 | 86.2 | 89 | 89.3 | 74.5
Stargan [9] | Std | latent-reprs | 18.4 | 93.8 | 68.5 | 60.9 | 62.5 | 69.4 | 97.0 | 97.7 | 14.0 | 34.4 | 91.3 | 78.5 | 76.7 | 66.4
Stargan [9] | Diff | latent-reprs | 32.5 | 93.2 | 68.9 | 79.5 | 71.5 | 55.3 | 97.2 | 98.4 | 30.0 | 58.5 | 85.1 | 84.0 | 75.1 | 71.4
Stargan [9] | Diff | gcn-reprs | 28.2 | 99.4 | 76.5 | 77.1 | 70.9 | 74.2 | 99.5 | 99.4 | 37.3 | 89.6 | 92 | 93.4 | 94.9 | 79.4
Stargan [9] + MTL | Diff | gcn-reprs | 34.4 | 98.4 | 73.3 | 78.6 | 70.8 | 85.5 | 99.5 | 99.1 | 44.2 | 90 | 92.3 | 95.6 | 91.7 | 81.0
Stargan [9] + MTL + End2End | Diff | gcn-reprs | 56.6 | 98.2 | 76 | 81.1 | 80 | 73.4 | 99.4 | 99.5 | 50.9 | 89.1 | 94.2 | 92 | 95 | 83.4
STGAN [26] | Std | one-hot vec | 40.7 | 92.5 | 69.5 | 69.7 | 59.4 | 65.2 | 99.2 | 95 | 26.8 | 69.7 | 90.8 | 70 | 52.8 | 69.3
STGAN [26] | Diff | one-hot vec | 59.9 | 97.7 | 93 | 79 | 89.9 | 88.3 | 99.7 | 96.7 | 38.9 | 93.4 | 97.0 | 98.5 | 86.7 | 86.1
STGAN [26] + MTL + End2End | Diff | gcn-reprs | 82.0 | 95.5 | 92.6 | 85.4 | 82.0 | 86.2 | 99.9 | 99.4 | 55.3 | 98.4 | 96.0 | 98.1 | 86.9 | 89.1
AttGAN [16] | Std | one-hot vec | 22.5 | 93 | 46.3 | 40.4 | 51 | 49.2 | 98.6 | 97 | 30.3 | 81.3 | 84.4 | 83.3 | 67.9 | 65
AttGAN [16] | Diff | one-hot vec | 69.1 | 97.5 | 78.8 | 84.4 | 76.5 | 73.4 | 99.6 | 95.8 | 34.2 | 85.8 | 96.8 | 95.8 | 92.9 | 83
AttGAN [16] + MTL + End2End | Diff | gcn-reprs | 72.8 | 97.9 | 93.9 | 94.4 | 92.3 | 86.8 | 99.8 | 98.6 | 48.4 | 97.1 | 97.1 | 98.5 | 96.4 | 90.3
Table 3: Comparison of Target Attributes Recognition Rate (TARR) on CelebA for different existing cGAN architectures with different target attribute label conditioning.

Target Attribute Recognition Rate (TARR): We computed TARR on both CelebA and LFWA and compared it with several existing arts. Tab. 3 compares the TARR on CelebA. In the table, the top block shows the performance of two earlier works, IcGAN [37] and FaderNet [25]; these methods relied on one-hot vec conditioning of the target attributes in Std mode, and their TARR is modest. In the second block, we compare the performance of our gcn-reprs with the default conditioning type of Stargan [9] and also the recently proposed latent-reprs [10], in both conditioning modes. Simply switching the default type from Std mode to Diff mode improves the TARR from 67.2 to 74.5; this is the best performance of Stargan reported by [26]. The encoding principle of latent-reprs [10] is similar to that of one-hot vec, as positive and negative random vectors are used instead of 1 and 0; thus, this type of conditioning did not improve performance.

Figure 4: TARR comparison on LFWA.

We trained our GCN to induce gcn-reprs and replaced the default conditioning on the generator of Stargan [9] with gcn-reprs; the performance improved to 79.4. Another experiment with the same set-up on the generator and MTL on the discriminator improves the performance to 81.0. Training the GCN and GAN in a multi-stage fashion makes the representations sub-optimal; thus, we train the GCN and GAN and apply MTL simultaneously, attaining an average accuracy of 83.4, the highest performance reported on Stargan [9]. We also applied our method to the two other best-performing GAN architectures for face attribute manipulation [26, 16]. The last two blocks of results show the performance comparison for these architectures. As with Stargan [9], we simply replaced their method of conditioning target attributes, applied ours to both the generator and the discriminator, and trained the model from scratch in an end-to-end fashion. After applying our method to STGAN [26], the state-of-the-art method to date, we improve the mean average performance from 86.1 to 89.1. Similarly, on Attgan [16] we outperform the best reported performance by 7.3 points and attain a new state-of-the-art performance of 90.3 on CelebA, above the previous state of the art [26]. These results show that our method is agnostic to the cGAN architecture. We also evaluated our method on LFWA: we applied our method to Stargan and compared it to the default conditioning type, one-hot vec in Diff mode. Fig. 4 shows the TARR; our approach outperforms the baseline. If we carefully check the performance of individual attributes on both benchmarks (Tab. 3, Fig. 4), our method substantially outperforms existing arts on attributes such as bald, moustache, young and male. These attributes follow the course of nature, and a natural transition is essential to better retain the target label.

PSNR/SSIM: We compute PSNR and SSIM scores between real and synthetic images and compare against the counter-parts. In Tab. 4, the Before and After columns show the scores of each GAN before and after applying our method, respectively. Our approach consistently improves the performance of its counter-parts.

GAN Arch. PSNR (Before) PSNR (After) SSIM (Before) SSIM (After) Data Set
Stargan-JNT [9] 23.82 28.0 0.867 0.944 RaFD+CelebA
StarGAN [9] 22.80 27.20 0.819 0.897 CelebA
StarGAN [9] 24.65 27.96 0.856 0.912 LFWA
AttGAN [16] 24.07 26.68 0.841 0.858 CelebA
Table 4: Comparison of PSNR and SSIM with existing arts
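As a rough illustration of how the PSNR scores in Tab. 4 are computed, here is a minimal NumPy sketch; SSIM is more involved, and in practice a library routine such as scikit-image's `structural_similarity` would typically be used instead of hand-rolling it:

```python
import numpy as np

def psnr(real, fake, max_val=255.0):
    """Peak signal-to-noise ratio between a real and a synthetic image.
    Higher is better; identical images give +inf."""
    real = np.asarray(real, dtype=np.float64)
    fake = np.asarray(fake, dtype=np.float64)
    mse = np.mean((real - fake) ** 2)
    if mse == 0:
        return float("inf")
    # Standard definition: 10 * log10(MAX^2 / MSE)
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Scores such as those in Tab. 4 would be averages of this quantity over real/synthetic image pairs from the test set.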

Qualitative Evaluations:

Figure 5: Qualitative comparison of Attgan [16] (default conditioning with a one-hot vector) vs Attgan trained with gcn-reprs (ours), both trained in Diff mode on CelebA (best viewed in color).

To further validate our idea, we performed extensive qualitative analysis. We compare our method against existing arts in two scenarios, i.e., in-dataset and cross-dataset attribute transfer.
In-Data Set Attribute Transfer: In this scenario, we train a model on the train set and evaluate on the test set of the same data set. Fig. 5 compares the qualitative outcomes of Attgan [16] conditioned with one-hot vec in Diff mode against gcn-reprs in the same mode. The left block of the figure shows single-target-attribute manipulation, whereas the right block shows multi-attribute manipulation. From the results, we can clearly see that our method generates images with fewer artefacts and better contrast (see the background). In addition, our method manipulates multiple attributes simultaneously, whenever it is meaningful to do so, to give a natural transition from source to target. For example, for a male-to-female transition, our method adds lipstick, high cheekbones, and arched eyebrows, whereas the baseline fails to do so. Similarly, wrinkles on the face with a few remaining grey hairs give a natural transition to bald, instead of simply removing all hair from the head, since a person is most likely to go bald in old age. Turning grey hair black makes the subject look comparatively younger, as black hair is associated with the young attribute. Owing to such strengths of our method, enabled by the GCN on the generator and MTL on the discriminator, we observe substantial improvements over the baselines, especially in the recognition of attributes such as Young, Male, Bald, and Moustache, where a natural transition is essential as these are naturally occurring attributes.

Cross-Data Set Attribute Transfer: Stargan-JNT [9] proposes to train a GAN simultaneously on multiple data sets with disjoint attribute sets to improve the quality of cross-data set attribute transfer. We applied our conditioning method to train the network on CelebA and RaFD simultaneously.

Figure 6: Qualitative comparison of Stargan-JNT with its default conditioning vs Stargan-JNT conditioned with gcn-reprs (ours) on RaFD (best viewed in color).

We compare against their default conditioning method, one-hot vec. Fig. 6 shows a few test examples from RaFD and their synthetic images when the target attributes are from CelebA. Note that attribute annotations such as Black Hair, Blonde Hair, and Pale Skin are absent from the RaFD train set. From the same figure, we can clearly see that the synthetic images generated by our method have fewer artefacts, better contrast, and better preservation of the target attributes.

5 Conclusions

We propose a novel Graph CNN enabled method to induce target attribute representations for the generator of conditional GANs. In addition, we propose an MTL-based structural regularisation mechanism at the discriminator of the GAN. For both, we exploit the empirical co-occurrence distribution of the attributes. Finally, we propose a framework to learn them in an end-to-end fashion. We applied our method to multiple existing cGANs and evaluated it on multiple benchmarks for face attribute manipulation. From both our extensive quantitative and qualitative evaluations, we observed a substantial improvement over existing arts, attaining new state-of-the-art performance. As future work, we plan to design a framework that dynamically evolves the co-occurrence distribution of the attributes to synthesize more realistic attribute-manipulated images.

6 Acknowledgements

The authors would like to thank the EPSRC Programme Grant ‘FACER2VM’ (EP/N007743/1) for generous support. We would also like to thank Prateek Manocha, an undergraduate student from IIT Guwahati, for setting up a few baseline experiments during his summer internship at Imperial College London.


  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein GAN. arXiv preprint arXiv:1701.07875.
  • [2] B. Bhattarai, R. Bodur, and T. Kim (2020) AugLabel: exploiting word representations to augment labels for face attribute classification. In ICASSP.
  • [3] J. Cao, H. Huang, Y. Li, J. Liu, R. He, and Z. Sun (2019) Biphasic learning of GANs for high-resolution image-to-image translation. In CVPR.
  • [4] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile (2010) Linear algorithms for online multitask classification. JMLR.
  • [5] T. Chen, X. Zhai, M. Ritter, M. Lucic, and N. Houlsby (2019) Self-supervised GANs via auxiliary rotation loss. In CVPR.
  • [6] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel (2016) InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In NIPS.
  • [7] Y. Chen, H. Lin, M. Shu, R. Li, X. Tao, X. Shen, Y. Ye, and J. Jia (2018) Facelet-bank for fast portrait manipulation. In CVPR.
  • [8] Z. Chen, X. Wei, P. Wang, and Y. Guo (2019) Multi-label image recognition with graph convolutional networks. In CVPR.
  • [9] Y. Choi, M. Choi, M. Kim, J-W. Ha, S. Kim, and J. Choo (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR.
  • [10] H. Ding, K. Sricharan, and R. Chellappa (2018) ExprGAN: facial expression editing with controllable expression intensity. In AAAI.
  • [11] B. Gecer, B. Bhattarai, J. Kittler, and T. Kim (2018) Semi-supervised adversarial learning to generate photorealistic face images of new identities from 3d morphable model. In ECCV.
  • [12] A. Grover and J. Leskovec (2016) node2vec: scalable feature learning for networks. In SIGKDD.
  • [13] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of Wasserstein GANs. In NIPS.
  • [14] W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In NIPS.
  • [15] W. L. Hamilton, R. Ying, and J. Leskovec (2017) Representation learning on graphs: methods and applications. arXiv preprint arXiv:1709.05584.
  • [16] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen (2019) AttGAN: facial attribute editing by only changing what you want. IEEE TIP.
  • [17] S. Hong, D. Yang, J. Choi, and H. Lee (2018) Inferring semantic layout for hierarchical text-to-image synthesis. In CVPR.
  • [18] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In ECCV.
  • [19] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR.
  • [20] T. Kaneko, K. Hiramatsu, and K. Kashino (2017) Generative attribute controller with conditional filtered generative adversarial networks. In CVPR.
  • [21] T. Kaneko, K. Hiramatsu, and K. Kashino (2018) Generative adversarial image synthesis with decision tree latent controller. In CVPR.
  • [22] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of GANs for improved quality, stability, and variation. In ICLR.
  • [23] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In CVPR.
  • [24] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. In ICLR.
  • [25] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer, et al. (2017) Fader networks: manipulating images by sliding attributes. In NIPS.
  • [26] M. Liu, Y. Ding, M. Xia, X. Liu, E. Ding, W. Zuo, and S. Wen (2019) STGAN: a unified selective transfer network for arbitrary image attribute editing. In CVPR.
  • [27] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In NeurIPS.
  • [28] Y. Liu, Q. Li, and Z. Sun (2019) Attribute-aware face aging with wavelet-based generative adversarial networks. In CVPR.
  • [29] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In ICML.
  • [30] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS.
  • [31] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • [32] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In ICLR.
  • [33] T. Miyato and M. Koyama (2018) cGANs with projection discriminator. In ICLR.
  • [34] F. Nian, X. Chen, S. Yang, and G. Lv (2019) Facial attribute recognition with feature decoupling and graph convolutional networks. IEEE Access.
  • [35] A. Odena, J. Buckman, C. Olsson, T. B. Brown, C. Olah, C. Raffel, and I. Goodfellow (2018) Is generator conditioning causally related to GAN performance? In ICML.
  • [36] A. Odena, C. Olah, and J. Shlens (2017) Conditional image synthesis with auxiliary classifier GANs. In ICML.
  • [37] G. Perarnau, J. Van De Weijer, B. Raducanu, and J. M. Álvarez (2016) Invertible conditional GANs for image editing. In NIPSW.
  • [38] B. Perozzi, R. Al-Rfou, and S. Skiena (2014) DeepWalk: online learning of social representations. In SIGKDD.
  • [39] A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer (2018) GANimation: anatomically-aware facial animation from a single image. In ECCV.
  • [40] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee (2016) Generative adversarial text to image synthesis.
  • [41] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training GANs. In NIPS.
  • [42] W. Shen and R. Liu (2017) Learning residual images for face attribute manipulation. In CVPR.
  • [43] K. Shmelkov, C. Schmid, and K. Alahari (2018) How good is my GAN? In ECCV.
  • [44] F. Taherkhani, N. M. Nasrabadi, and J. Dawson (2018) A deep face identification network enhanced by facial attributes prediction. In CVPRW.
  • [45] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei (2015) LINE: large-scale information network embedding. In WWW.
  • [46] T. Xiao, J. Hong, and J. Ma (2018) ELEGANT: exchanging latent encodings with GAN for transferring multiple face attributes. In ECCV.
  • [47] E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky (2019) Few-shot adversarial learning of realistic neural talking head models. arXiv preprint arXiv:1905.08233.
  • [48] G. Zhang, M. Kan, S. Shan, and X. Chen (2018) Generative adversarial network with spatial attention for face attribute editing. In ECCV.
  • [49] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas (2017) StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In CVPR.
  • [50] Z. Zhang, Y. Song, and H. Qi (2017) Age progression/regression by conditional adversarial autoencoder. In CVPR.
  • [51] B. Zhao, L. Meng, W. Yin, and L. Sigal (2019) Image generation from layout. In CVPR.
  • [52] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun (2018) Graph neural networks: a review of methods and applications. arXiv preprint arXiv:1812.08434.

7 Additional Results

Figure 7: Qualitative comparison on simultaneous multi-attribute manipulation. The first, second, and third columns show the input image, the synthetic image from the compared method (Attgan with its default conditioning in Diff mode), and Attgan with our approach, respectively. The order of the target attributes is: female (first three rows), followed by bald, black hair, and blonde hair.


Figure 8: Qualitative comparison on single-attribute translation. The first, second, and third columns show the input, the image synthesized by Attgan with its default conditioning in Diff mode, and Attgan trained with our approach, respectively. The order of target attributes (top to bottom) is: black hair, black hair, brown hair, moustache, brown hair, blonde hair, brown hair.

We present additional results and insights from our work.

Simultaneous Multiple Attribute Manipulation: One of the key characteristics of our method, in comparison to existing arts, is the simultaneous manipulation of frequently co-occurring auxiliary attributes to give a natural translation of the target attribute. Fig. 7 shows a few more qualitative comparisons between our method and its counter-part (Attgan in Diff mode). As mentioned in the main paper, cGANs trained in Diff mode outperform those in Std mode. In Fig. 7, the first, second, and third columns show the input image, the synthetic image from the compared method, and the synthetic image from our method, respectively. Here, the compared method is Attgan with one-hot vec target attributes in Diff mode. The first three rows show images where the target attribute is female. In all three cases, the lipstick applied by our method is clearly visible, whereas the counter-part fails to add it. Similarly, cheekbones are higher in all synthetic images from our method; from the co-occurrence table, the probability of a female having high cheekbones is nearly . In the second row, the beard is thinned considerably by our method, while the counter-part ignores it entirely. In the third row, the hair colour is more brown than in the original image. In the fourth row, the face is wrinkled while the hair is removed when the target attribute is set to bald. In the fifth row, our method makes the synthetic image look younger when applying black hair to an old man with grey hair. In the final row, heavy make-up turning the skin pale can be observed when we set the target attribute to blonde hair. These pairs of attributes are highly co-occurring; we capture such information in the co-occurrence matrix, and it proves quite helpful for making realistic translations to the target.
Such meaningful translations ultimately help us obtain the high target attribute recognition rates (please see the experimental results in the main paper).
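The empirical attribute co-occurrence statistics referred to throughout can be computed with a short sketch like the following; the function name and the conditional-probability normalisation are illustrative assumptions, as the paper does not spell out the exact form:

```python
import numpy as np

def attribute_cooccurrence(labels, normalise=True):
    """Empirical co-occurrence of binary attributes.

    labels: (N, K) 0/1 matrix of attribute annotations (e.g. CelebA).
    Returns a (K, K) matrix C where C[i, j] approximates P(attr j | attr i)
    when normalise=True, or the raw joint counts otherwise.
    """
    L = np.asarray(labels, dtype=np.float64)
    counts = L.T @ L                    # counts[i, j] = #images with both i and j
    if not normalise:
        return counts
    occ = np.diag(counts).copy()        # #images carrying attribute i
    occ[occ == 0] = 1.0                 # guard against attributes never observed
    return counts / occ[:, None]        # row i is conditional on attribute i
```

A matrix of this kind is what conditions the GCN node relations and the MTL grouping described in the main paper.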

Single Attribute Manipulation: In addition to simultaneous multiple-attribute manipulation for natural attribute translation, our method retains the target attributes more distinctly, with fewer artefacts and better contrast. Fig. 8 compares the synthetic images from our method and the counter-part. From Fig. 8, we can clearly see that our method preserves the target attributes with higher intensities (clearer target hair colours: black, brown, and blonde, and a thicker moustache) and better contrast (compare the backgrounds as well). Our method retains the correlated attributes from the source and does not arbitrarily manipulate other unrelated attributes. This is demonstrated by the last three examples, where our method retains the lipstick and heavy make-up while setting the target attributes to brown hair, blonde hair, and brown hair, respectively, whereas the counter-part fades the lipstick and make-up.

Additional Qualitative Results: Fig. 10 shows a few more synthetic examples from our method. From the figure, we see different sets of multiple attributes being manipulated when the target attribute is bald (wrinkles on the face, grey hair), female (lipstick, long hair, arched eyebrows), old (wrinkles and grey hair), and bangs (a comparatively younger look). These sets of attributes co-occur frequently; thus, our method optimises the generator to alter attributes simultaneously for realistic translation to the target domain.

Figure 9: Heat-map showing the cosine similarity between the initial node representations (left) and the final node representations produced by the GCN (right).

Comparison with Existing Arts: Fig. 11 shows a visual comparison of synthetic images with existing arts. From the figure, we observe that our method performs various auxiliary attribute manipulations depending on the target attribute, e.g. bald (wrinkles on the face), blonde (heavy make-up), female (high cheekbones, lipstick, arched eyebrows), and also exhibits clearer target attributes. Among the existing arts, Stargan in Diff mode and FaderNet change auxiliary attributes when the target attribute is female (lipstick), while Stgan in Diff mode and Attgan in Diff mode manage to do so only mildly. Such multi-attribute manipulation has helped existing methods obtain better TARR in such categories. Please refer to Tab. 3 in the main paper for the empirical performance. From the table, FaderNet has accuracy on male vs. female although its mean TARR is . Similarly, Stargan in Diff mode obtains performance on male vs. female above its mean TARR. This suggests that simultaneously manipulating auxiliary attributes along with the main attribute is beneficial for obtaining a high TARR.

Attribute Indices and Relations between Induced Representations: Tab. 5 lists the indices of the attributes. Fig. 9 shows the cosine similarity between the representations of the attributes before and after the GCN. Referring to these two to compare the relationships between attributes, we can clearly see some meaningful category-level positive and negative correlations. For the attribute (Young), the attributes with positive correlations are (attractive), (heavy makeup, high cheekbones, male, mouth slightly open), (no beard), (smiling), (wavy hair), (wearing lipstick), whereas (bald), (big lips), (double chin), (eyeglasses) are negatively correlated. Such relative relations are not distinct before the GCN (see Fig. 9, top). Similarly, (bald) is highly uncorrelated with young (), which entails its positive correlation with the negation of young, i.e., old. Such inductions are constrained by the co-occurrence matrix we computed from the training examples of CelebA (see Fig. 1 in the main paper), and most of these correlations are clearly visible in our synthetic images.
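The similarity heat-map of Fig. 9 amounts to pairwise cosine similarities over a node representation matrix (one row per attribute). A minimal sketch, with illustrative names:

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between attribute representations.

    X: (K, d) matrix, one row per attribute node (initial or GCN-induced).
    Returns a (K, K) symmetric matrix with ones on the diagonal
    (for non-zero rows).
    """
    X = np.asarray(X, dtype=np.float64)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0     # avoid division by zero for all-zero rows
    Xn = X / norms              # row-normalise to unit length
    return Xn @ Xn.T            # inner products of unit vectors = cosines
```

Plotting this matrix for the initial embeddings and again for the GCN outputs yields the two panels compared in Fig. 9.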

Index Attribute Index Attribute
0 5 O’Clock Shadow 20 Male
1 Arched Eyebrows 21 Mouth Slightly Open
2 Attractive 22 Mustache
3 Bags Under Eyes 23 Narrow Eyes
4 Bald 24 No Beard
5 Bangs 25 Oval Face
6 Big Lips 26 Pale Skin
7 Big Nose 27 Pointy Nose
8 Black Hair 28 Receding Hairline
9 Blond Hair 29 Rosy Cheeks
10 Blurry 30 Sideburns
11 Brown Hair 31 Smiling
12 Bushy Eyebrows 32 Straight Hair
13 Chubby 33 Wavy Hair
14 Double Chin 34 Wearing Earrings
15 Eyeglasses 35 Wearing Hat
16 Goatee 36 Wearing Lipstick
17 Gray Hair 37 Wearing Necklace
18 Heavy Makeup 38 Wearing Necktie
19 High Cheekbones 39 Young
Table 5: Indices of Attributes


Figure 10: Additional qualitative results on multi-attribute manipulation after applying our method to the Attgan-diff architecture. Blue-boxed images are the inputs; the rest are synthetic. The order of target attributes is: bald, bangs, female, old.
Figure 11: Qualitative comparison of synthetic images with existing arts for face attribute editing.