Evaluating Interactive Summarization: an Expansion-Based Framework

by Ori Shapira et al.

Allowing users to interact with multi-document summarizers is a promising direction for improving and customizing summary results. Different ideas for interactive summarization have been proposed in previous work, but these solutions are highly divergent and incomparable. In this paper, we develop an end-to-end evaluation framework for expansion-based interactive summarization, which considers the accumulating information along an interactive session. Our framework includes a procedure for collecting real user sessions, along with evaluation measures that rely on standard summarization metrics but are adapted to reflect interaction. All of our solutions are intended to be released publicly as a benchmark, allowing comparison of future developments in interactive summarization. We demonstrate the use of our framework by evaluating and comparing baseline implementations that we developed for this purpose, which will serve as part of our benchmark. Our extensive experimentation and analysis of these systems motivate our design choices and support the viability of our framework.

