Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics

07/27/2018
by Spandan Madan, et al.

Widely used in news, business, and educational media, infographics are handcrafted to effectively communicate messages about complex and often abstract topics, including 'ways to conserve the environment' and 'understanding the financial crisis'. Composed of stylistically and semantically diverse visual and textual elements, infographics pose new challenges for computer vision. While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'. To bridge this representation gap, we propose a synthetic data generation strategy: we augment background patches in infographics from our Visually29K dataset with Internet-scraped icons, which we use as training data for an icon proposal mechanism. On a test set of 1K annotated infographics, icons are located with 38% recall (the best model trained with natural images achieves 14% recall). Combining our icon proposals with icon classification and text extraction, we present a multi-modal summarization application. Our application takes an infographic as input and automatically produces text tags and visual hashtags that are, respectively, textually and visually representative of the infographic's topics.
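The paper does not include code, but the synthetic data generation idea can be illustrated with a minimal sketch: paste Internet-scraped icons onto infographic background patches and record the resulting bounding boxes as labels for training an icon proposal model. The file paths, icon sizes, and sampling choices below are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch of the synthetic augmentation strategy described above.
# Assumes a folder of infographic background patches and a folder of scraped icons.
import random
from pathlib import Path
from PIL import Image

def synthesize_sample(background_path, icon_paths, max_icons=3, canvas_size=(512, 512)):
    """Return an augmented image and a list of (x1, y1, x2, y2) icon boxes."""
    canvas = Image.open(background_path).convert("RGB").resize(canvas_size)
    boxes = []
    for icon_path in random.sample(icon_paths, k=min(max_icons, len(icon_paths))):
        icon = Image.open(icon_path).convert("RGBA")
        # Randomly rescale the icon relative to the canvas width (assumed range).
        scale = random.uniform(0.1, 0.3)
        w = int(canvas_size[0] * scale)
        h = max(1, int(icon.height * w / icon.width))
        icon = icon.resize((w, h))
        # Random placement, kept fully inside the canvas.
        x = random.randint(0, canvas_size[0] - w)
        y = random.randint(0, canvas_size[1] - h)
        canvas.paste(icon, (x, y), mask=icon)  # alpha channel used as paste mask
        boxes.append((x, y, x + w, y + h))
    return canvas, boxes

if __name__ == "__main__":
    backgrounds = sorted(Path("backgrounds").glob("*.jpg"))  # background patches (hypothetical path)
    icons = sorted(Path("icons").glob("*.png"))              # Internet-scraped icons (hypothetical path)
    image, boxes = synthesize_sample(backgrounds[0], icons)
    image.save("synthetic_sample.jpg")
    print(boxes)  # bounding boxes usable as training labels for icon proposals
```

The generated image-and-box pairs could then feed any standard object proposal or detection trainer; the specific proposal mechanism used in the paper is not reproduced here.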


