Generating Query Focused Summaries without Fine-tuning the Transformer-based Pre-trained Models

03/10/2023
by Deen Abdullah, et al.

Fine-tuning Natural Language Processing (NLP) models for each new data set requires substantial computational time, with an associated increase in carbon footprint and cost. Fine-tuning does help pre-trained models adapt to the latest data sets, but what if we skipped the fine-tuning step and attempted to generate summaries using only the pre-trained models, reducing computational time and cost? In this paper, we omitted the fine-tuning step and investigated whether a Maximal Marginal Relevance (MMR)-based approach can help pre-trained models obtain query-focused summaries directly from a new data set that was not used to pre-train them. First, we applied topic modelling to the Wikipedia Current Events Portal (WCEP) and Debatepedia datasets to generate queries for the summarization tasks. Then, using MMR, we ranked the sentences of the documents according to the queries. Next, we passed the ranked sentences to seven transformer-based pre-trained models to perform the summarization tasks. Finally, we applied the MMR approach again to select the query-relevant sentences from the summaries generated by the individual pre-trained models and constructed the final summary. As the experimental results indicate, our MMR-based approach successfully ranked and selected the most relevant sentences as summaries and outperformed the individual pre-trained models.
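The pipeline's central operation is MMR ranking: each candidate sentence is scored by its similarity to the query, penalized by its maximum similarity to sentences already selected, so the result stays query-relevant yet non-redundant. Below is a minimal sketch of that ranking step, assuming TF-IDF sentence vectors, cosine similarity, and lambda = 0.7; the abstract does not specify the paper's actual sentence representation or parameter values, and the toy query and sentences are hypothetical, so these choices are illustrative only.

# Minimal MMR sentence-ranking sketch. Assumptions (not from the paper):
# TF-IDF vectors, cosine similarity, lambda = 0.7, hypothetical toy inputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mmr_rank(query: str, sentences: list[str],
             lam: float = 0.7, top_k: int = 5) -> list[str]:
    """Rank sentences by Maximal Marginal Relevance w.r.t. a query.

    MMR(s) = lam * sim(s, query) - (1 - lam) * max_{s' in selected} sim(s, s')
    """
    # Fit one vocabulary over the query plus all candidate sentences
    # so their vectors live in the same space.
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([query] + sentences)
    query_vec, sent_vecs = matrix[0], matrix[1:]

    sim_to_query = cosine_similarity(sent_vecs, query_vec).ravel()
    sim_between = cosine_similarity(sent_vecs)

    selected: list[int] = []
    candidates = list(range(len(sentences)))
    while candidates and len(selected) < top_k:
        def mmr_score(i: int) -> float:
            # Redundancy = highest similarity to anything already picked.
            redundancy = max((sim_between[i][j] for j in selected), default=0.0)
            return lam * sim_to_query[i] - (1 - lam) * redundancy

        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]


if __name__ == "__main__":
    # Hypothetical example: rank document sentences against a query.
    query = "wildfire evacuation"
    sents = [
        "Thousands were evacuated as the wildfire spread overnight.",
        "The local football team won its third straight match.",
        "Officials opened shelters for residents fleeing the fire.",
    ]
    print(mmr_rank(query, sents, top_k=2))

The same routine can plausibly be reused at the final stage of the pipeline, treating the sentences of the seven models' generated summaries as the candidate pool and selecting the query-relevant ones for the final summary.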


