Retrieve-Then-Adapt: Example-based Automatic Generation for Proportion-related Infographics

07/31/2020 ∙ by Chunyao Qian, et al. ∙ Peking University

An infographic is a data visualization that combines graphic and textual descriptions in an aesthetic and effective manner. Creating infographics is a difficult and time-consuming process that often requires many attempts and adjustments, even for experienced designers, not to mention novice users with limited design expertise. Recently, a few approaches have been proposed to automate the creation process by applying predefined blueprints to user information. However, predefined blueprints are hard to create, and hence limited in volume and diversity. In contrast, good infographics created by professionals have been accumulating rapidly on the Internet. These online examples represent a wide variety of design styles and serve as exemplars or inspiration for people who wish to create their own infographics. Based on these observations, we propose to generate infographics by automatically imitating examples. We present a two-stage approach, namely retrieve-then-adapt. In the retrieval stage, we index online examples by their visual elements. Given user information, we transform it into a concrete query by sampling from a learned distribution over visual elements, and then find appropriate examples in our example library based on the similarity between the example indexes and the query. For a retrieved example, we generate an initial draft by replacing its content with the user information. In many cases, however, the user information cannot be perfectly fit to the retrieved example. We therefore introduce an adaptation stage. Specifically, we propose an MCMC-like approach that leverages recursive neural networks to adjust the initial draft and improve its visual appearance iteratively, until a satisfactory result is obtained. We implement our approach on proportion-related infographics and demonstrate its effectiveness through sample results and expert reviews.
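The two stages can be sketched in miniature as follows. This is an illustrative assumption of the pipeline's skeleton, not the paper's implementation: the feature vectors, the hand-written energy function, and the proposal operator here stand in for the learned query distribution and the recursive-neural-network scoring the paper describes, and all function names are hypothetical.

```python
# Minimal sketch of retrieve-then-adapt: similarity-based retrieval over an
# indexed example library, followed by MCMC-like iterative refinement.
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, library, k=3):
    """Retrieval stage: rank indexed examples by similarity to the query.

    Each library entry is a dict with an 'index' feature vector describing
    its visual elements; the query is sampled the same way.
    """
    ranked = sorted(library, key=lambda ex: cosine(ex["index"], query),
                    reverse=True)
    return ranked[:k]

def adapt(draft, energy, propose, steps=200, temperature=1.0):
    """Adaptation stage: MCMC-like refinement of an initial draft.

    'energy' scores a draft (lower is visually better); 'propose' makes a
    small local adjustment. Improvements are always accepted; worse drafts
    are accepted with Metropolis probability to escape local optima.
    """
    current, e_cur = draft, energy(draft)
    for _ in range(steps):
        candidate = propose(current)
        e_new = energy(candidate)
        if e_new < e_cur or random.random() < math.exp((e_cur - e_new) / temperature):
            current, e_cur = candidate, e_new
    return current
```

The accept-worse step is what makes the loop MCMC-like rather than pure hill climbing: early in the search it tolerates temporary degradations so the draft is not trapped by the first local layout optimum.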




