CISum: Learning Cross-modality Interaction to Enhance Multimodal Semantic Coverage for Multimodal Summarization

02/20/2023
by Litian Zhang, et al.

Multimodal summarization (MS) aims to generate a summary from multimodal input. Previous works mainly optimize textual semantic-coverage metrics such as ROUGE and treat the visual content as supplemental data, so the resulting summary fails to cover the semantics of the different modalities. This paper proposes a multi-task cross-modality learning framework (CISum) that improves multimodal semantic coverage by learning the cross-modality interaction within a multimodal article. To capture the visual semantics, images are translated into visual descriptions based on their correlation with the text content. The visual descriptions are then fused with the text content to generate a textual summary that covers the semantics of the multimodal content, and the most relevant image is selected as the visual summary. Furthermore, we design an automatic multimodal semantic-coverage metric to evaluate performance. Experimental results show that CISum outperforms baselines on multimodal semantic-coverage metrics while maintaining excellent ROUGE and BLEU performance.
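The final step of the pipeline described above, selecting the most relevant image as the visual summary, can be sketched as a similarity ranking. The abstract does not specify the relevance function; the cosine-similarity scoring and the `select_visual_summary` helper below are illustrative assumptions, standing in for whatever learned relevance model CISum actually uses.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def select_visual_summary(image_embeddings, summary_embedding):
    """Hypothetical sketch: pick the image whose embedding is most
    similar to the embedding of the generated textual summary, and
    return its index as the visual summary."""
    scores = [cosine(e, summary_embedding) for e in image_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)
```

In practice the embeddings would come from the framework's own encoders rather than being precomputed vectors, but the ranking-by-relevance structure is the same.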

