MMED: A Multi-domain and Multi-modality Event Dataset

04/04/2019
by   Zhenguo Yan, et al.

In this work, we construct and release a multi-domain and multi-modality event dataset (MMED), containing 25,165 textual news articles collected from hundreds of news media sites (e.g., Yahoo News, Google News, CNN) and 76,516 image posts shared on the Flickr social media platform, all annotated according to 412 real-world events. The dataset was collected to explore two problems: organizing heterogeneous data contributed by professionals and amateurs in different data domains, and transferring event knowledge obtained from one data domain to another heterogeneous data domain, thereby summarizing data from different contributors. We hope that the release of the MMED dataset will stimulate innovative research on related challenging problems, such as event discovery, cross-modal (event) retrieval, and visual question answering.
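To make the dataset's organization concrete, here is a minimal, hypothetical sketch of how such multi-domain samples might be represented and grouped by event for cross-domain tasks. The `Sample` record, its fields, and the `group_by_event` helper are illustrative assumptions, not the released MMED format:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical record layout for a multi-modal event dataset like MMED:
# each sample is either a news article (text domain) or a Flickr post
# (image domain), annotated with a shared real-world event label.
@dataclass
class Sample:
    sample_id: str
    domain: str           # "news" or "flickr" (assumed labels)
    event_id: int         # one of the annotated real-world events
    text: str = ""        # article body or post caption
    image_path: str = ""  # empty for text-only news articles

def group_by_event(samples):
    """Index samples by event so cross-domain pairs can be formed."""
    events = defaultdict(lambda: {"news": [], "flickr": []})
    for s in samples:
        events[s.event_id][s.domain].append(s)
    return events

samples = [
    Sample("a1", "news", 7, text="Earthquake strikes the region ..."),
    Sample("f1", "flickr", 7, text="aftermath photo", image_path="f1.jpg"),
    Sample("a2", "news", 12, text="Election results announced ..."),
]
events = group_by_event(samples)
print(sorted(events))          # [7, 12]
print(len(events[7]["news"]))  # 1
```

Grouping by event label in this way is the natural entry point for the tasks the abstract mentions, e.g., sampling matched article/image pairs for cross-modal retrieval.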


Related research

01/14/2019
Learning Shared Semantic Space with Correlation Alignment for Cross-modal Event Retrieval
In this paper, we propose to learn shared semantic space with correlatio...

09/20/2021
Modality and Negation in Event Extraction
Language provides speakers with a rich system of modality for expressing...

07/14/2019
TWEETQA: A Social Media Focused Question Answering Dataset
With social media becoming increasingly popular on which lots of news a...

06/14/2022
Multimodal Event Graphs: Towards Event Centric Understanding of Multimodal World
Understanding how events described or shown in multimedia content relate...

01/10/2020
Linking Social Media Posts to News with Siamese Transformers
Many computational social science projects examine online discourse surr...

04/30/2021
GeoWINE: Geolocation based Wiki, Image, News and Event Retrieval
In the context of social media, geolocation inference on news or events ...
