Neural Design Network: Graphic Layout Generation with Constraints

12/19/2019 ∙ by Hsin-Ying Lee, et al.

Graphic design is essential for visual communication with layouts being fundamental to composing attractive designs. Layout generation differs from pixel-level image synthesis and is unique in terms of the requirement of mutual relations among the desired components. We propose a method for design layout generation that can satisfy user-specified constraints. The proposed neural design network (NDN) consists of three modules. The first module predicts a graph with complete relations from a graph with user-specified relations. The second module generates a layout from the predicted graph. Finally, the third module fine-tunes the predicted layout. Quantitative and qualitative experiments demonstrate that the generated layouts are visually similar to real design layouts. We also construct real designs based on predicted layouts for a better understanding of the visual quality. Finally, we demonstrate a practical application on layout recommendation.


1 Introduction

Graphic design surrounds us on a daily basis, from image advertisements, movie posters, and book covers to more functional presentation slides, websites, and mobile applications. Graphic design is a process of using text, images, and symbols to visually convey messages. Even for experienced graphic designers, the design process is iterative and time-consuming with many false starts and dead ends. This is further exacerbated by the proliferation of platforms and users with significantly different visual requirements and desires.

In graphic design, layout – the placement and sizing of components (e.g., title, image, logo, banner, etc.) – plays a significant role in dictating the flow of the viewer’s attention and, therefore, the order by which the information is received. Creating an effective layout requires understanding and balancing the complex and interdependent relationships amongst all of the visible components. Variations in the layout change the hierarchy and narrative of the message.

In this work, we focus on the layout generation problem that places components based on the component attributes, relationships among components, and user-specified constraints. In a typical use case, users specify a collection of assets and constraints, and the model generates a design layout that satisfies all input constraints while remaining visually appealing.

Generative models have shown success in rendering realistic natural images [6, 16, 27]. However, learning-based graphic layout generation remains less explored. Existing studies tackle layout generation based on templates [2, 11] or heuristic rules [25], and more recently using learning-based generation methods [15, 23, 32]. However, these approaches are limited in handling relationships among components. High-level concepts such as mutual relationships of components in a layout are less likely to be captured well by conventional generative models operating in pixel space. Moreover, using generative models to account for user preferences and constraints is non-trivial. Therefore, effective feature representations and learning approaches for graphic layout generation remain challenging.

In this work, we introduce the neural design network (NDN), a new approach for synthesizing a graphic design layout given a set of components with user-specified attributes and constraints. We employ directed graphs as our feature representation for components and constraints, since the attributes of components (nodes) and relations among components (edges) can be naturally encoded in a graph. NDN takes as input a graph constructed from the desired components and user-specified constraints, and outputs a layout where the bounding boxes of all components are predicted.

NDN consists of three modules. First, the relation prediction module takes as input a graph with partial edges, representing components and user-specified constraints, and infers a graph with complete relationships among components. Second, in the layout generation module, the model predicts bounding boxes for components in the complete graph in an iterative manner. Finally, in the refinement module, the model further fine-tunes the bounding boxes to improve the alignment and visual quality.

We evaluate the proposed method qualitatively and quantitatively on three datasets under various metrics to analyze the visual quality. The three experimental datasets are RICO [3, 24], Magazine [32], and an image banner advertisement dataset collected in this work. These datasets reasonably cover several typical applications of layout design with common components such as images, texts, buttons, and toolbars, and relations such as above, larger, and around. We construct real designs based on the generated layouts to assess the quality. We also demonstrate the efficacy of the proposed model by introducing a practical layout recommendation application.

We make the following contributions in this work:

  • We propose a new approach that can generate high-quality design layouts for a set of desired components and user-specified constraints.

  • We validate that our method performs favorably against existing models in terms of realism, alignment, and visual quality on three datasets.

  • We demonstrate real use cases that construct designs from generated layouts and a layout recommendation application. Furthermore, we collect a real-world advertisement layout dataset to broaden the variety of existing layout benchmarks.

2 Related Work

Natural scene layout generation.

Layout is often used as an intermediate representation in image generation tasks conditioned on text [8, 10, 31] or scene graphs [14]. Instead of directly learning the mapping from the source domain (e.g., text and scene graphs) to the image domain, these methods model the operation as a two-stage framework: they first predict layouts conditioned on the input sources, and then generate images based on the predicted layouts. Recently, Jyothi et al. propose LayoutVAE [15], a generative framework that can synthesize scene layouts given a set of labels. However, a graphic design layout has several fundamental differences from a natural scene layout. The demands for relationships and alignment among components are strict in graphic design: an offset of a few pixels can change the visual experience or even ruin the whole design. A graphic design layout does not only need to look realistic but also needs to satisfy aesthetic considerations.

Figure 1: Framework illustration. The neural design network consists of three modules: relation prediction, bounding box prediction, and refinement. We illustrate the process with a three-component example. In the relation prediction module, the model takes as input a graph with partial relations along with a latent vector (encoded from the graph with complete relations during training, sampled from a prior during testing), and outputs a graph with complete relations. Only the graph with location relations is shown in the figure for brevity. In the layout generation module, the model takes a graph with complete relations as input and predicts the bounding boxes of components in an iterative manner. In the refinement module, the model further fine-tunes the layout.

Graphic design layout generation.

Early work on design layout or document layout mostly relies on templates [2, 11], exemplars [20], or heuristic design rules [25, 30]. These methods depend on predefined templates and heuristic rules, for which professional knowledge is required; therefore, they are limited in capturing complex design distributions. Other work leverages saliency maps [1] and attention mechanisms [26] to capture the visual importance of graphic designs and to trace the user’s attention. Recently, generative models have been applied to graphic design layout generation [23, 32]. The LayoutGAN model [23] can generate layouts consisting of graphic elements like rectangles and triangles. However, LayoutGAN generates layouts from input noise and fails to handle layout generation given a set of components with specified attributes, which is the common setting in graphic design. The content-aware layout generation network of [32] can render layouts conditioned on attributes of components. While the goals are similar, the conventional GAN-based framework cannot explicitly model relationships among components or user-specified constraints.

Graph neural networks in vision.

Graph Neural Networks (GNNs) [5, 7, 29] aim to model dependence among nodes in a graph via message passing. GNNs are useful for data that can be formulated in a graph data structure. Recently, GNNs and related models have been applied to classification [19], scene graphs [14, 33], motion modeling [12], and molecular property prediction [4, 13], to name a few. In this work, we model a design layout as a graph and apply GNNs to capture the dependency among components.

3 Graphic Layout Generation

Our goal is to generate design layouts given a set of design components with user-specified constraints. For example, in image ad creation, designers can input constraints such as “logo at bottom-middle of canvas”, “call-to-action button of size (100px, 500px)”, and “call-to-action button is below logo”. The goal is to synthesize a set of design layouts that satisfy both the user-specified constraints and common rules in image ad layouts. Unlike layout templates, these layouts are dynamically created and can serve as inspiration for designers.

We introduce the neural design network using graph neural networks and a conditional variational autoencoder (VAE) [18, 28], with the goal of capturing better representations of design layouts. Figure 1 illustrates the process of generating a three-component design with the proposed neural design network. In the rest of this section, we first give the problem overview in Section 3.1. We then detail the three modules of NDN: the relation prediction module (Section 3.2), the layout generation module (Section 3.3), and the refinement module (Section 3.4).

3.1 Problem Overview

The inputs to our network are a set of design components and user-specified constraints. We model the inputs as a graph, where each design component is a node and their relationships are edges. In this paper, we study two common relationships between design components: location and size.

Define a design as a graph $G = (V, E_{size} \cup E_{loc})$, where $V = \{c_0, c_1, \dots, c_N\}$ is a set of components, each coming from a set of categories $\mathcal{C}$. We use $c_0$ to denote the canvas, which is fixed in both location and size, and $c_1, \dots, c_N$ to denote the other design components that need to be placed on the canvas, such as a logo or a button. $E_{size}$ and $E_{loc}$ are sets of directed edges with $e^{size}_{ij} = (c_i, r^{size}_{ij}, c_j)$ and $e^{loc}_{ij} = (c_i, r^{loc}_{ij}, c_j)$, where $r^{size}_{ij} \in \mathcal{R}_{size}$ and $r^{loc}_{ij} \in \mathcal{R}_{loc}$. Here, $r^{size}_{ij}$ specifies the relative size of the component, such as smaller or bigger, and $r^{loc}_{ij}$ can be left, right, above, below, upper-left, lower-left, etc. In addition, for edges anchored on the canvas $c_0$, we extend $\mathcal{R}_{loc}$ to capture locations relative to the canvas, e.g., upper-left of the canvas.

Furthermore, in reality, designers often do not specify all the constraints, which results in an input graph with missing edges. Figure 1 shows an example of a three-component design with only one specified constraint, “($c_i$, above, $c_j$)”, and several unknown relations. To this end, we augment $\mathcal{R}_{size}$ and $\mathcal{R}_{loc}$ to include an additional unknown category, and represent graphs that contain unknown size or location relations as $G^p_{size}$ and $G^p_{loc}$, respectively, to indicate that they are partial graphs. In Section 3.2, we describe how to predict the unknown relations in the partial graphs.

Finally, we denote the output layout of the neural design network as a set of bounding boxes $\{\mathrm{bb}_i\}_{i=1}^{N}$, where $\mathrm{bb}_i = (x_i, y_i, w_i, h_i)$ represents the location and shape of component $c_i$.
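To make the formulation concrete, the following is a minimal sketch, assuming an illustrative category and relation vocabulary, of how a design and its constraints could be encoded as nodes, relation edges, and one-hot features; it is not the authors' implementation.

```python
# A minimal, illustrative encoding of a design layout graph (not the authors' code).
# Nodes are component categories; edges are (source, relation, target) triples.
# Unspecified constraints are stored with the special "unknown" relation.

CATEGORIES = ["canvas", "image", "logo", "button", "text"]           # example vocabulary
LOC_RELATIONS = ["left", "right", "above", "below",
                 "upper-left", "lower-left", "unknown"]              # augmented with "unknown"
SIZE_RELATIONS = ["smaller", "bigger", "equal", "unknown"]

# Example: three components on a canvas with a single user-specified location constraint.
nodes = ["canvas", "logo", "button", "text"]                         # node 0 is always the canvas

# Location edges: (i, relation, j). Only one relation is specified; the rest are "unknown".
loc_edges = [
    (1, "above", 2),      # user constraint: logo above button
    (1, "unknown", 3),
    (2, "unknown", 3),
    (0, "unknown", 1),    # relations anchored on the canvas
    (0, "unknown", 2),
    (0, "unknown", 3),
]

def one_hot(index, size):
    """Return a one-hot list used as the input feature of a node or an edge."""
    return [1.0 if i == index else 0.0 for i in range(size)]

node_feats = [one_hot(CATEGORIES.index(c), len(CATEGORIES)) for c in nodes]
edge_feats = [one_hot(LOC_RELATIONS.index(r), len(LOC_RELATIONS)) for _, r, _ in loc_edges]
```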

In all modules, we apply graph convolution networks on these graphs. The GNN takes as input the features of nodes and edges, and outputs updated features. The input features can be one-hot vectors representing the categories or any embedded representations. More implementation details can be found in the supplementary material.
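As a rough illustration of such message passing, the sketch below implements one graph convolution step over (node, edge, node) triples in the spirit of [14]; the layer sizes, update rule, and aggregation are assumptions rather than the exact architecture, which is described in the supplementary material.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One message-passing step over (node, edge, node) triples, loosely following
    the graph convolution used in sg2im [14]; dimensions are illustrative."""

    def __init__(self, dim):
        super().__init__()
        self.triple_mlp = nn.Sequential(nn.Linear(3 * dim, 3 * dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, node_feats, edge_feats, edges):
        # node_feats: (N, dim), edge_feats: (E, dim), edges: (E, 2) long tensor of (i, j) pairs
        src, dst = edges[:, 0], edges[:, 1]
        triples = torch.cat([node_feats[src], edge_feats, node_feats[dst]], dim=1)
        h = self.triple_mlp(triples)
        h_src, h_edge, h_dst = h.chunk(3, dim=1)

        # Aggregate messages back onto nodes by averaging incoming/outgoing updates.
        agg = torch.zeros_like(node_feats)
        count = torch.zeros(node_feats.size(0), 1, device=node_feats.device)
        ones = torch.ones(len(src), 1, device=node_feats.device)
        agg.index_add_(0, src, h_src); count.index_add_(0, src, ones)
        agg.index_add_(0, dst, h_dst); count.index_add_(0, dst, ones)
        new_nodes = self.node_mlp(agg / count.clamp(min=1))
        return new_nodes, h_edge
```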

3.2 Relation Prediction

In this module, we aim to infer the unknown relations in the user-specified constraints. Figure 1 shows an example where a three-component graph is given and we need to predict the missing relations among the components. For brevity, we denote the graphs with complete relations as $G^c$ and the graphs with partial relations as $G^p$, which can be either $G^p_{size}$ or $G^p_{loc}$. Note that since the augmented relation sets include the unknown category, both $G^p_{size}$ and $G^p_{loc}$ are complete graphs in practice. We also use $G^c$ to refer to either $G^c_{size}$ or $G^c_{loc}$.

We model the prediction process as a paired graph-to-graph translation task from $G^p$ to $G^c$. The translation is multimodal, i.e., a graph with partial relations can be translated to many possible graphs with complete relations. Therefore, we adopt a framework similar to multimodal image-to-image translation [21, 34] and treat $G^p$ as the source domain and $G^c$ as the target domain. Similar to [34], the translation is a conditional generation process that maps the source graph, along with a latent code, to the target graph. The latent code is encoded from the corresponding target graph to achieve reconstruction during training, and is sampled from a prior during testing. The conditional translation encoding process is modeled as:

z = g_{lat}(G^c), \quad \hat{r}_{ij} = h\big(g_{enc}(G^p), z\big), \ \ \forall e_{ij} \in E^c    (1)

where $g_{enc}$ and $g_{lat}$ are graph convolution networks, and $h$ is a relation predictor. In addition, $E^c$ is the set of edges in the target graph. Note that $E^c$ contains an edge for every pair of components, since the target graph is a complete graph.

The model is trained with a reconstruction loss on the relation categories, $\mathcal{L}^{rel}_{recon} = \sum_{e_{ij} \in E^c} \mathrm{CE}(\hat{r}_{ij}, r_{ij})$, where $\mathrm{CE}$ indicates the cross-entropy function, and a KL loss on the encoded latent vectors to facilitate sampling at inference time: $\mathcal{L}^{rel}_{KL} = \mathrm{KL}\big(q(z \mid G^c) \,\|\, p(z)\big)$, where $p(z) = \mathcal{N}(0, I)$. The objective of the relation prediction module is:

\mathcal{L}_{rel} = \mathcal{L}^{rel}_{recon} + \lambda_{KL}\,\mathcal{L}^{rel}_{KL}    (2)

The reconstruction loss captures the knowledge that the predicted relations should agree with the existing relations in $G^p$, and that any missing edge should be filled with the most likely relation discovered in the training data.
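A hedged sketch of one training step of this module is shown below; the module interfaces (g_enc, g_lat, the relation predictor) and the Gaussian reparameterization follow the notation above, but the exact implementation details are assumptions.

```python
import torch
import torch.nn.functional as F

def relation_prediction_step(g_enc, g_lat, predictor, partial_graph, complete_graph,
                             target_relations, kl_weight=0.01):
    """One illustrative training step for the relation prediction module.
    g_enc / g_lat are graph networks, predictor maps (edge features, z) -> relation logits.
    target_relations: (E,) long tensor of ground-truth relation indices."""
    # Encode the complete graph into a latent code (reparameterized Gaussian).
    mu, logvar = g_lat(complete_graph)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    # Predict a relation for every edge of the (augmented, complete) partial graph.
    edge_feats = g_enc(partial_graph)                  # (E, dim)
    logits = predictor(edge_feats, z)                  # (E, num_relations)

    recon = F.cross_entropy(logits, target_relations)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
```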

Datasets      | Ads                          | Magazine                     | RICO
              | FID           Alignment      | FID           Alignment      | FID           Alignment
sg2im-none    | 116.63        0.0063         | 95.81         0.0097         | 269.60        0.0014
LayoutVAE     | 138.11        0.0121±0.0008  | 81.56±36.78   0.0314±0.0011  | 192.11±29.97  0.0119±0.0039
NDN-none      | 129.68±32.12  0.0091±0.0007  | 69.43±32.92   0.0251±0.0009  | 143.51±22.36  0.0091±0.0032
sg2im         | 230.44        0.0069         | 102.35        0.0178         | 190.68        0.007
NDN-all       | 168.44±21.83  0.0061±0.0005  | 82.77±16.24   0.0151±0.0009  | 64.78±11.60   0.0032±0.0002
              | Accuracy      Alignment      | Accuracy      Alignment      | Accuracy      Alignment
LayoutVAE-loo | 0.071±0.002   0.0048±0.0001  | 0.059±0.002   0.0141±0.0002  | 0.045±0.0021  0.0039±0.0002
NDN-loo       | 0.043±0.001   0.0036±0.001   | 0.024±0.0002  0.0130±0.0001  | 0.018±0.002   0.0014±0.0001
real data     | -             0.0034         | -             0.0126         | -             0.0012
Table 1: Quantitative comparisons. We compare the proposed method to other works on three datasets under three settings: the no-constraint setting, where no prior constraint is provided (first group); the all-constraint setting, where all relations are provided (second group); and the leave-one-out setting, which aims to predict the bounding box of a component given the ground-truth bounding boxes of the other components (third group). The FID metric measures realism and diversity, the alignment metric measures the alignment among components, and the accuracy metric measures the prediction accuracy in the leave-one-out setting.

3.3 Layout Generation

Given a graph with complete relations, this module aims to generate the design layout by predicting the bounding boxes for all nodes in the graph.

Let $G^c$ be the graph with complete relations constructed from $G^c_{size}$ and $G^c_{loc}$, the outputs of the relation prediction module. We model the generation process using a graph-based iterative conditional VAE model. We first obtain the features of each component by

\{f_i\}_{i=0}^{N} = g_{layout}(G^c)    (3)

where $g_{layout}$ is a graph convolution network. These features capture the relative relations among all components. We then predict bounding boxes in an iterative manner, starting from an empty canvas (i.e., all bounding boxes are unknown). As shown in Figure 1, the prediction of each bounding box is conditioned on the initial features as well as the current canvas, i.e., the predicted bounding boxes from previous iterations. At iteration $k$, the condition can be modeled as:

s_k = g_{cond}\big(\{(f_i, \mathrm{bb}^{k-1}_i)\}_{i=0}^{N}\big)    (4)

where $g_{cond}$ is another graph convolution network, $(f_i, \mathrm{bb}^{k-1}_i)$ is the tuple of the component feature and the current canvas state at iteration $k$, and $s_k$ is a vector. We then apply a conditional VAE on the current bounding box $\mathrm{bb}_k$ conditioned on $s_k$:

z_k = e_{box}(\mathrm{bb}_k \mid s_k), \quad \hat{\mathrm{bb}}_k = d_{box}(z_k \mid s_k)    (5)

where $e_{box}$ and $d_{box}$ represent encoders and decoders consisting of fully connected layers. We train the model with the conventional VAE loss: a reconstruction loss $\mathcal{L}^{bb}_{recon}$ on the predicted bounding boxes and a KL loss $\mathcal{L}^{bb}_{KL}$ on the latent vectors. The objective of the layout generation module is:

\mathcal{L}_{layout} = \mathcal{L}^{bb}_{recon} + \lambda_{KL}\,\mathcal{L}^{bb}_{KL}    (6)

The model is trained with teacher forcing, where the ground-truth bounding box at step $k$ is used as the input for step $k+1$. At test time, the model uses the actual output boxes from previous steps. In addition, the latent vector $z_k$ is sampled from a conditional prior distribution $p(z \mid s_k) = e_{prior}(s_k)$, where $e_{prior}$ is a prior encoder.
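The iterative decoding and teacher forcing described above can be summarized by the following sketch; the callables g_cond, box_decoder, and prior_encoder mirror the notation in (4)-(5), and the zero-box placeholder for unplaced components is an assumption.

```python
import torch

def generate_layout(features, g_cond, box_decoder, prior_encoder, num_components,
                    gt_boxes=None):
    """Iteratively predict bounding boxes (x, y, w, h) for each component.
    During training (gt_boxes given) the model is teacher-forced; at test time
    the previously decoded boxes are fed back in. This is an illustrative sketch."""
    placeholder = torch.zeros(4)                      # unplaced components on the canvas
    canvas = [placeholder.clone() for _ in range(num_components)]
    predicted = []

    for k in range(num_components):
        # Condition on the static graph features and the current (partial) canvas.
        state = g_cond(features, torch.stack(canvas))          # s_k in Eq. (4)
        mu, logvar = prior_encoder(state)                      # conditional prior p(z | s_k)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        box_k = box_decoder(z, state)                          # decoded (x, y, w, h)
        predicted.append(box_k)

        # Teacher forcing: place the ground-truth box if available, else the prediction.
        canvas[k] = gt_boxes[k] if gt_boxes is not None else box_k.detach()

    return torch.stack(predicted)
```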

Bounding boxes with predefined shapes.

In many design use cases, it is often required to constrain some design components to a fixed size. For example, the logo size may need to be fixed in an ad design. To achieve this goal, we augment the original layout generation module with an additional VAE encoder $e_{size}$ to ensure the encoded latent vectors can be decoded to bounding boxes with the desired widths and heights. Similar to (5), given a ground-truth bounding box $\mathrm{bb}_k = (x_k, y_k, w_k, h_k)$, we obtain the reconstructed bounding box $\hat{\mathrm{bb}}_k = (\hat{x}_k, \hat{y}_k, \hat{w}_k, \hat{h}_k)$ with $e_{size}$ and $d_{box}$. Then, instead of applying the reconstruction loss to the whole bounding box tuple, we only enforce the reconstruction of the width and height with

\mathcal{L}^{wh}_{recon} = \mathcal{L}_{recon}\big((\hat{w}_k, \hat{h}_k), (w_k, h_k)\big)    (7)

The objective of the augmented layout generation module is given by:

\mathcal{L}_{layout\text{-}size} = \mathcal{L}_{layout} + \mathcal{L}^{wh}_{recon} + \lambda_{KL}\,\mathcal{L}^{wh}_{KL}    (8)
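Below is a hedged sketch of the width/height reconstruction in (7), assuming the boxes are ordered (x, y, w, h) and an L1 penalty; both choices are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def size_reconstruction_loss(size_encoder, box_decoder, gt_box, state):
    """Illustrative sketch of Eq. (7): encode a box with the auxiliary size encoder and
    only penalize the reconstructed width and height, leaving the location free."""
    mu, logvar = size_encoder(gt_box, state)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    recon_box = box_decoder(z, state)                 # assumed ordering: (x, y, w, h)
    # Only the last two entries (w, h) are constrained; L1 is an assumption.
    return F.l1_loss(recon_box[..., 2:], gt_box[..., 2:])
```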

3.4 Layout Refinement

We predict bounding boxes in an iterative manner, which requires fixing the predicted bounding boxes from previous iterations. As a result, the overall layout might not be optimal, as shown in the layout generation module in Figure 1. To tackle this issue, we fine-tune the bounding boxes for better alignment and visual quality in the final layout refinement module. Given a graph with ground-truth bounding boxes $\{\mathrm{bb}_i\}$, we simulate misalignment by randomly applying offsets $\Delta_i \sim U(-\delta, \delta)$ to $\mathrm{bb}_i$, where $U$ is a uniform distribution. We obtain misaligned bounding boxes $\mathrm{bb}'_i = \mathrm{bb}_i + \Delta_i$. We apply a graph convolution network $g_{refine}$ for fine-tuning:

\{\hat{\mathrm{bb}}_i\}_{i=0}^{N} = g_{refine}\big(G^c, \{\mathrm{bb}'_i\}_{i=0}^{N}\big)    (9)

The model is trained with a reconstruction loss between the refined bounding boxes $\hat{\mathrm{bb}}_i$ and the ground-truth bounding boxes $\mathrm{bb}_i$.
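A minimal sketch of this training procedure, assuming an L1 reconstruction loss and an illustrative offset range, is given below; g_refine stands for the refinement graph network.

```python
import torch
import torch.nn.functional as F

def refinement_step(g_refine, graph, gt_boxes, max_offset=0.05):
    """Illustrative training step for the refinement module: jitter ground-truth boxes
    with uniform offsets, then train a graph network to undo the misalignment.
    The offset range and L1 loss are assumptions."""
    offsets = (torch.rand_like(gt_boxes) * 2 - 1) * max_offset   # U(-max_offset, max_offset)
    noisy_boxes = gt_boxes + offsets
    refined = g_refine(graph, noisy_boxes)                       # Eq. (9)
    return F.l1_loss(refined, gt_boxes)
```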

Unary size (%) | Binary size (%) | Unary location (%) | Binary location (%) | Refinement | FID          | Alignment
0              | 0               | 0                  | 0                   |            | 143.51±22.36 | 0.0091±0.0032
20             | 20              | 0                  | 0                   |            | 141.64±20.01 | 0.0087±0.003
0              | 0               | 20                 | 20                  |            | 129.92±23.76 | 0.0081±0.003
20             | 20              | 20                 | 20                  |            | 126.18±23.11 | 0.0074±0.002
20             | 20              | 20                 | 20                  | ✓          | 125.41±21.68 | 0.0070±0.002
100            | 100             | 100                | 100                 |            | 70.55±12.68  | 0.0036±0.002
100            | 100             | 100                | 100                 | ✓          | 64.78±11.60  | 0.0032±0.0002
Table 2: Ablation on partial constraints. We measure the FID and alignment of the proposed method when taking different percentages of prior constraints as inputs on the RICO dataset. We also show that the refinement module can further improve the visual quality as well as the alignment.

4 Experiments and Analysis

Datasets

We perform the evaluation on three datasets:

  • Magazine [32]. The dataset contains images of magazine pages annotated with six categories (texts, images, headlines, over-image texts, over-image headlines, and backgrounds).

  • RICO [3, 24]. The original dataset contains images of Android app interfaces annotated with component categories. We choose the 13 most frequent categories (toolbars, images, texts, icons, buttons, inputs, list items, advertisements, pager indicators, web views, background images, drawers, and modals) and keep only the images whose number of components is below a fixed threshold.

  • Image banner ads. We collect image banner ads of a fixed size via image search using keywords such as “car ads”. We annotate bounding boxes for six categories: images, regions of interest, logos, brand names, texts, and buttons. Examples can be found in the supplementary material.

Implementation Details.

In this work, the relation predictor and the bounding-box encoders and decoders consist of fully-connected layers, while the remaining modules consist of graph convolution layers. For training, we use the Adam optimizer [17]. We use a predefined order of component sets in all experiments. More implementation details, including latent dimensions, batch size, learning rate, and loss weights, can be found in the supplementary material.

Evaluated methods.

We perform the evaluation on the following algorithms:

  • sg2im [14]. The model is proposed to generate a natural scene layout from a given scene graph. The sg2im method takes as input graphs with complete relations in the setting where all constraints are provided. When we compare with this method in the setting where no constraint is given, we simplify the input scene graph by removing all relations. We refer to the simplified model as sg2im-none.

  • LayoutVAE [15]. This model takes a label set as input, and predicts the number of components for each label as well as the locations of each component. We compare with the second stage of the LayoutVAE model (i.e., the bounding box prediction stage) by giving the number of components for each label. In addition, we refer to LayoutVAE-loo as the model that predicts the bounding box of a single component when all other components are provided and fixed (the leave-one-out setting).

  • Neural Design Network. We refer to our model as NDN-none when the input contains no prior constraint, NDN-all in the same setting as sg2im where all constraints are provided, and NDN-loo in the same setting as LayoutVAE-loo.

Figure 2: Qualitative comparison. We evaluate the proposed method against the LayoutVAE and sg2im methods in both the no-constraint and all-constraint settings. The proposed method can better model the relations among components and generate layouts of better visual quality.

4.1 Quantitative Evaluation

Realism and accuracy.

We evaluate visual quality with the Fréchet Inception Distance (FID) [9] by measuring how close the distribution of generated layouts is to that of real ones. Since there is no existing, generally applicable feature extractor for arbitrary graphs, similar to [22], we train a binary layout classifier to discriminate between good and bad layouts. The bad layouts are generated by randomly moving component locations of good layouts. The classifier consists of four graph convolution layers and three fully connected layers. We extract the features of the second-to-last fully connected layer to measure FID.
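Given these classifier features, FID reduces to the standard Fréchet distance between Gaussians fitted to real and generated feature sets; the sketch below shows that computation (the feature extractor itself is the layout classifier described above).

```python
import numpy as np
from scipy import linalg

def fid(real_feats, fake_feats):
    """Fréchet distance between Gaussians fitted to real / generated layout features,
    each given as an (n_samples, feature_dim) array."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))
```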

We measure FID in two settings. First, a model predicts bounding boxes without any constraints; that is, only the number and the category of components are provided. We compare with LayoutVAE and sg2im-none in this setting. Second, a model predicts bounding boxes with all constraints provided. We compare with sg2im in this setting since LayoutVAE cannot take constraints as inputs. The first two groups of rows in Table 1 present the results of these two settings. Since LayoutVAE and the proposed method are both stochastic models, we generate 100 samples for each testing design in each trial. The results are averaged over multiple trials. In both the no-constraint and all-constraint settings, the proposed method performs favorably against the other schemes.

We also measure the prediction accuracy in the leave-one-out setting, i.e., predicting the bounding box of a component when the bounding boxes of all other components are provided. The third group of rows in Table 1 shows the comparison to the LayoutVAE-loo method in this setting. The proposed method attains better accuracy with statistical significance (95% confidence), indicating that the graph-based framework encodes the relations among components better.

Alignment.

Alignment is an important principle in design creation. In most good designs, components need to be either in center alignment or edge alignment (e.g., left- or right-aligned). Therefore, in addition to realism, we explicitly measure the alignment among components using:

\frac{1}{N_D} \sum_{d} \sum_{i} \min_{j \neq i} \Big\{ \min\big( l(c^d_i, c^d_j),\, m(c^d_i, c^d_j),\, r(c^d_i, c^d_j) \big) \Big\}    (10)

where $N_D$ is the number of generated layouts and $c^d_i$ is the $i$-th component of the $d$-th layout. In addition, $l$, $m$, and $r$ are alignment functions that measure the distances between the left, center, and right positions of components, respectively.

Table 1 presents the results in the no-constraint, all-constraint, and leave-one-out settings. The results are also averaged over multiple trials. The proposed method performs favorably against the other methods. The sg2im-none method gets a better alignment score since it tends to predict bounding boxes at several fixed locations when no prior constraint is provided, which leads to a worse FID. For similar reasons, the sg2im method gains a slightly higher alignment score on RICO.
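A hedged implementation of this metric is sketched below; it assumes boxes are given as (x, y, w, h) with x measured from the left edge, and normalizes only by the number of layouts, which may differ from the exact normalization used in the paper.

```python
import numpy as np

def alignment_score(layouts):
    """Illustrative implementation of the alignment metric in Eq. (10).
    layouts: list of arrays of shape (n_components, 4) holding (x, y, w, h)."""
    total = 0.0
    for boxes in layouts:
        lefts = boxes[:, 0]
        centers = boxes[:, 0] + boxes[:, 2] / 2.0
        rights = boxes[:, 0] + boxes[:, 2]
        for i in range(len(boxes)):
            best = np.inf
            for j in range(len(boxes)):
                if i == j:
                    continue
                # Distance to the nearest component under left / center / right alignment.
                d = min(abs(lefts[i] - lefts[j]),
                        abs(centers[i] - centers[j]),
                        abs(rights[i] - rights[j]))
                best = min(best, d)
            total += 0.0 if np.isinf(best) else best
    return total / max(len(layouts), 1)
```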

Partial constraints.

Previous experiments are conducted under the settings of either no constraints or all constraints provided. We now demonstrate the efficacy of the proposed method in handling partial constraints. Table 2 shows the results of layout prediction with different percentages of prior constraints provided. We evaluate the partial-constraint setting on the RICO dataset, which is the most difficult dataset in terms of diversity and complexity. Ideally, the FID and alignment score should be similar regardless of the percentage of constraints given. However, on the challenging RICO dataset, the prior information of size and location still greatly improves the visual quality, as shown in Table 2. The location constraints contribute more improvement since they explicitly provide guidance from the ground-truth layouts. As for the alignment score, layouts in all settings perform similarly. Furthermore, the refinement module can slightly improve the alignment score as well as FID.

Figure 3: Layout generation with partial user-specified constraints. We generate layouts according to different user-specified constraints on location and size. Furthermore, we construct designs with real assets based on the generated layouts to better visualize the quality of our model.

4.2 Qualitative Evaluation

We compare the proposed method with related work in Figure 2. In the all-constraint setting, both the sg2im method and the proposed model can generate reasonable layouts similar to the ground-truth layouts. However, the proposed model better handles alignment and overlapping issues. In the no-constraint setting, the sg2im-none method tends to place components of the same category at the same location, like the “text” components in the second row and the “text” and “text button” components in the third row. The LayoutVAE method, on the other hand, cannot handle relations among components well without using graphs. The proposed method can generate layouts with good visual quality, even with no constraint provided.

Partial constraints.

In Figure 3, we present the results of layout generation given several randomly selected constraints on size and location. Our model generates design layouts that are both realistic and follow the user-specified constraints. To better visualize the quality of the generated layouts, we present designs with real assets generated from the predicted layouts. Furthermore, we can constrain the size of specific components to desired shapes (e.g., we fix the image and logo sizes in the first row of Figure 3) using the augmented layout generation module.

Figure 4: Layout Recommendation. We show examples of layout recommendations where locations of targets are recommended given the current layout.

Layout recommendation.

The proposed model can also help designers decide the best locations for a specific design component (e.g., logo, button, or headline) when a partial design layout is provided. This can be done by building graphs with partial location and size relations based on the current canvas and setting the relations to the target components as unknown. We then complete this graph using the relation prediction module. Finally, conditioned on the predicted graph as well as the current canvas, we perform iterative bounding box prediction with the layout generation module. Figure 4 shows examples of layout recommendations.
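The recommendation workflow can be sketched as follows; the function and its arguments are hypothetical stand-ins for the relation prediction and layout generation modules rather than the authors' API.

```python
def recommend_locations(partial_graph, target_index, relation_module, layout_module,
                        fixed_boxes, num_candidates=3):
    """Illustrative sketch of layout recommendation (not the authors' API).
    partial_graph: graph whose relations to `target_index` are marked "unknown".
    relation_module / layout_module: callables standing in for the NDN modules.
    fixed_boxes: dict {component_index: (x, y, w, h)} of already-placed components."""
    candidates = []
    for _ in range(num_candidates):
        # Sample one plausible completion of the unknown relations.
        complete_graph = relation_module(partial_graph)
        # Re-predict boxes, keeping already-placed components fixed on the canvas.
        boxes = layout_module(complete_graph, fixed=fixed_boxes)
        candidates.append(boxes[target_index])
    return candidates
```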

5 Conclusion

In this work, we propose a neural design network to handle design layout generation given user-specified constraints. The proposed method can generate layouts that are visually appealing and follow the constraints with a three-module framework, including a relation prediction module, a layout generation module, and a refinement module. Extensive quantitative and qualitative experiments demonstrate the efficacy of the proposed model. We also present examples of constructing real designs based on generated layouts, and an application of layout recommendation.

References

  • [1] Z. Bylinskii, N. W. Kim, P. O’Donovan, S. Alsheikh, S. Madan, H. Pfister, F. Durand, B. Russell, and A. Hertzmann (2017) Learning visual importance for graphic designs and data visualizations. In UIST, Cited by: §2.
  • [2] N. Damera-Venkata, J. Bento, and E. O’Brien-Strain (2011) Probabilistic document model for automated document composition. In DocEng, Cited by: §1, §2.
  • [3] B. Deka, Z. Huang, C. Franzen, J. Hibschman, D. Afergan, Y. Li, J. Nichols, and R. Kumar (2017) Rico: a mobile app dataset for building data-driven design applications. In UIST, Cited by: §1, 2nd item.
  • [4] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams (2015) Convolutional networks on graphs for learning molecular fingerprints. In NIPS, Cited by: §2.
  • [5] C. Goller and A. Kuchler (1996) Learning task-dependent distributed representations by backpropagation through structure. In ICNN, Cited by: §2.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, Cited by: §1.
  • [7] M. Gori, G. Monfardini, and F. Scarselli (2005) A new model for learning in graph domains. In IJCNN, Cited by: §2.
  • [8] T. Gupta, D. Schwenk, A. Farhadi, D. Hoiem, and A. Kembhavi (2018) Imagine this! scripts to compositions to videos. In ECCV, Cited by: §2.
  • [9] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, Cited by: §4.1.
  • [10] S. Hong, D. Yang, J. Choi, and H. Lee (2018) Inferring semantic layout for hierarchical text-to-image synthesis. In CVPR, Cited by: §2.
  • [11] N. Hurst, W. Li, and K. Marriott (2009) Review of automatic document formatting. In DocEng, Cited by: §1, §2.
  • [12] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In CVPR, Cited by: §2.
  • [13] W. Jin, K. Yang, R. Barzilay, and T. Jaakkola (2019) Learning multimodal graph-to-graph translation for molecular optimization. In ICLR, Cited by: §2.
  • [14] J. Johnson, A. Gupta, and L. Fei-Fei (2018) Image generation from scene graphs. In CVPR, Cited by: §2, §2, 1st item.
  • [15] A. A. Jyothi, T. Durand, J. He, L. Sigal, and G. Mori (2019) LayoutVAE: stochastic scene layout generation from a label set. In ICCV, Cited by: §1, §2, 2nd item.
  • [16] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In CVPR, Cited by: §1.
  • [17] D. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §4.
  • [18] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In ICLR, Cited by: §3.
  • [19] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In ICLR, Cited by: §2.
  • [20] R. Kumar, J. O. Talton, S. Ahmad, and S. R. Klemmer (2011) Bricolage: example-based retargeting for web design. In SIGCHI, Cited by: §2.
  • [21] H. Lee, H. Tseng, J. Huang, M. K. Singh, and M. Yang (2018) Diverse image-to-image translation via disentangled representations. In ECCV, Cited by: §3.2.
  • [22] H. Lee, X. Yang, M. Liu, T. Wang, Y. Lu, M. Yang, and J. Kautz (2019) Dancing to music. In NeurIPS, Cited by: §4.1.
  • [23] J. Li, J. Yang, A. Hertzmann, J. Zhang, and T. Xu (2019) LayoutGAN: generating graphic layouts with wireframe discriminators. In ICLR, Cited by: §1, §2.
  • [24] T. F. Liu, M. Craft, J. Situ, E. Yumer, R. Mech, and R. Kumar (2018) Learning design semantics for mobile apps. In UIST, Cited by: §1, 2nd item.
  • [25] P. O’Donovan, A. Agarwala, and A. Hertzmann (2014) Learning layouts for single-page graphic designs. TVCG. Cited by: §1, §2.
  • [26] X. Pang, Y. Cao, R. W. Lau, and A. B. Chan (2016) Directing user attention via visual flow on web designs. ACM TOG (Proc. SIGGRAPH). Cited by: §2.
  • [27] A. Razavi, A. v. d. Oord, and O. Vinyals (2019) Generating diverse high-fidelity images with vq-vae-2. arXiv preprint arXiv:1906.00446. Cited by: §1.
  • [28] D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In ICML, Cited by: §3.
  • [29] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. TNN. Cited by: §2.
  • [30] S. Tabata, H. Yoshihara, H. Maeda, and K. Yokoyama (2019) Automatic layout generation for graphical design magazines. In SIGGRAPH, Cited by: §2.
  • [31] F. Tan, S. Feng, and V. Ordonez (2019) Text2scene: generating abstract scenes from textual descriptions. In CVPR, Cited by: §2.
  • [32] X. Zheng, Y. Cao, and R. W. H. Lau (2019) Content-aware generative modeling of graphic design layouts. SIGGRAPH. Cited by: §1, §2, 1st item.
  • [33] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh (2018) Graph r-cnn for scene graph generation. In ECCV, Cited by: §2.
  • [34] J. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman (2017) Toward multimodal image-to-image translation. In NIPS, Cited by: §3.2.