Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics

by Spandan Madan, et al.
Harvard University

Widely used in news, business, and educational media, infographics are handcrafted to effectively communicate messages about complex and often abstract topics, including 'ways to conserve the environment' and 'understanding the financial crisis'. Composed of stylistically and semantically diverse visual and textual elements, infographics pose new challenges for computer vision. While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'. To bridge this representation gap, we propose a synthetic data generation strategy: we augment background patches in infographics from our Visually29K dataset with Internet-scraped icons, which we use as training data for an icon proposal mechanism. On a test set of 1K annotated infographics, icons are located with 38% recall (the best model trained on natural images achieves 14% recall). Combining our icon proposals with icon classification and text extraction, we present a multi-modal summarization application. Our application takes an infographic as input and automatically produces text tags and visual hashtags that are textually and visually representative of the infographic's topics, respectively.
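The synthetic data strategy described in the abstract can be sketched as a simple compositing step: paste an Internet-scraped icon at a random position and scale onto a background patch, and record its bounding box as a training label for the proposal mechanism. The snippet below is a minimal illustrative sketch, not the authors' implementation; the function name and parameters are assumptions.

```python
# Illustrative sketch of synthetic sample generation: composite an icon
# onto a background patch and return the bounding box as a label.
# All names here are hypothetical, not from the paper's code.
import random
from PIL import Image

def make_synthetic_sample(background, icon, scale_range=(0.2, 0.5)):
    """Paste `icon` at a random location/scale on a copy of `background`.

    Returns the composited RGB image and the icon's bounding box
    (x, y, w, h), usable as a detection/proposal training label.
    """
    bg = background.copy().convert("RGBA")
    # Random icon side length, relative to the smaller background side.
    side = int(min(bg.size) * random.uniform(*scale_range))
    ic = icon.convert("RGBA").resize((side, side))
    # Random top-left corner that keeps the icon fully inside the patch.
    x = random.randint(0, bg.width - side)
    y = random.randint(0, bg.height - side)
    bg.paste(ic, (x, y), ic)  # third argument: use alpha channel as mask
    return bg.convert("RGB"), (x, y, side, side)

if __name__ == "__main__":
    background = Image.new("RGB", (400, 300), "white")
    icon = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
    sample, box = make_synthetic_sample(background, icon)
    print(sample.size, box)
```

Repeating this over many background patches and scraped icons yields arbitrarily many (image, box) pairs without manual annotation, which is the appeal of the synthetic approach.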




