Unsupervised Summarization Re-ranking

12/19/2022
by Mathieu Ravaut, et al.

With the rise of task-specific pre-training objectives, abstractive summarization models like PEGASUS offer appealing zero-shot performance on downstream summarization tasks. However, the performance of such unsupervised models still lags significantly behind their supervised counterparts. As in the supervised setup, we notice very high variance in quality among summary candidates from these models, yet only one candidate is kept as the summary output. In this paper, we propose to re-rank summary candidates in an unsupervised manner, aiming to close the performance gap between unsupervised and supervised models. Our approach improves the pre-trained unsupervised PEGASUS by 4.37% on summarization benchmarks, and achieves relative gains of 7.51% averaged over 30 transfer setups.
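The abstract does not spell out the scoring function used for re-ranking, so the following is only a minimal sketch of the general idea: given several candidate summaries for one document, score each with some unsupervised signal and keep the highest-scoring one. The `overlap_score` function here (source-word coverage) is a hypothetical stand-in for whatever features the actual method uses.

```python
# Hedged sketch of unsupervised candidate re-ranking.
# NOTE: overlap_score is an illustrative proxy, not the paper's method.

def overlap_score(candidate: str, source: str) -> float:
    """Fraction of the candidate's unigrams that also appear in the source.

    A crude unsupervised salience signal: candidates grounded in the
    source document score higher than hallucinated or off-topic ones.
    """
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    source_vocab = set(source.lower().split())
    return sum(tok in source_vocab for tok in cand_tokens) / len(cand_tokens)

def rerank(candidates: list[str], source: str) -> str:
    """Return the candidate summary with the highest unsupervised score.

    In the paper's setting, the candidates would come from sampling or
    diverse beam search over an unsupervised model such as PEGASUS;
    here they are just given as strings.
    """
    return max(candidates, key=lambda c: overlap_score(c, source))
```

For example, given candidates `["stocks fell sharply", "the team won the cup"]` for a finance article, the coverage-based score would prefer the first. The key point from the abstract is that because candidate quality varies widely, even a simple unsupervised selection step can recover better summaries than always taking the model's single default output.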
