
Models and Datasets for Cross-Lingual Summarisation

by   Laura Perez-Beltrachini, et al.

We present a cross-lingual summarisation corpus with long documents in a source language paired with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and article bodies from language-aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset, we report experiments with multilingual pre-trained models in supervised, zero- and few-shot, and out-of-domain scenarios.



