Paraphrasing with Large Language Models

by Sam Witteveen, et al.

Recently, large language models such as GPT-2 have proven extremely adept at text generation, and with fine-tuning they achieve high-quality results on many downstream NLP tasks such as text classification, sentiment analysis, and question answering. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is shown to generate paraphrases not only at the sentence level but also for longer spans of text, such as paragraphs, without needing to break the text into smaller chunks.
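The fine-tuning setup implied here — conditioning a GPT-2-style model on a source text and having it generate a paraphrase — can be sketched as a data-formatting problem: pair each original with its paraphrase, join them with a separator, and at inference time prompt with the original plus the separator. The sketch below illustrates that scheme; the `SEP` marker is a hypothetical choice for illustration, not necessarily the exact token the authors used, and the model call itself is omitted.

```python
# Sketch of preparing original/paraphrase pairs for fine-tuning a
# GPT-2-style language model. SEP is an illustrative assumption;
# EOS is GPT-2's standard end-of-text token.
SEP = " >>> "            # hypothetical marker between source and paraphrase
EOS = " <|endoftext|>"   # GPT-2 end-of-text token

def format_training_example(original: str, paraphrase: str) -> str:
    """One fine-tuning example: source, separator, target, end marker."""
    return f"{original}{SEP}{paraphrase}{EOS}"

def make_prompt(original: str) -> str:
    """At inference time, condition the model on the source plus separator."""
    return f"{original}{SEP}"

def extract_paraphrase(generated: str, prompt: str) -> str:
    """Strip the prompt and anything after the end marker from model output."""
    completion = generated[len(prompt):]
    return completion.split(EOS.strip())[0].strip()

if __name__ == "__main__":
    print(format_training_example("The cat sat on the mat.",
                                  "A cat was sitting on the mat."))
    prompt = make_prompt("The cat sat on the mat.")
    # Simulated model output, for demonstration only:
    fake_output = prompt + "A cat was sitting on the mat. <|endoftext|>"
    print(extract_paraphrase(fake_output, prompt))
```

Because the paraphrase is generated as an ordinary continuation of the source text, the same formatting works unchanged whether the "original" is a single sentence or a whole paragraph, which is consistent with the chunk-free behavior described above.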

