Response Generation with Context-Aware Prompt Learning

by Xiaodong Gu, et al.

Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large-scale text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language models while fully utilizing prior knowledge in large pre-trained models remains a challenge. In this paper, we present a novel approach for pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which appropriately elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encodings are dynamically generated based on the dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluations strongly support the superiority of DialogPrompt in terms of response generation quality.









Introduction

Pre-trained language models (PLMs) such as BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) have achieved remarkable success in various natural language processing tasks (Sun et al., 2019). As such, there is a growing trend of using pre-trained language models for conversation modeling (Budzianowski and Vulić, 2019; Zhang et al., 2019; Feng et al., 2021). For example, Zhang et al. (2019) proposed DialoGPT, a dialogue generation model that trains an extended GPT-2 (Radford et al., 2019) on a large dialogue corpus. Feng et al. (2021) further explored the usage of DialoGPT for dialogue summarization. These pre-trained dialogue models are often pre-trained on large text corpora and fine-tuned on smaller dialogue datasets (Zhang et al., 2019).

One limitation of PLM-based dialogue modeling, and indeed of other PLM tasks, is the trade-off between pre-training and fine-tuning (Ben-David et al., 2021). That is, the task-specific data used for fine-tuning is usually scarce and costly to obtain. As such, the reusability of prior knowledge learned in the pre-training phase can be limited during fine-tuning, so some dialogue models are simply trained from scratch on the limited task-specific data.

Consequently, recent works have resorted to prompt learning, a lightweight alternative to fine-tuning. Prompt learning keeps the PLM parameters frozen and optimizes only a small portion of task-specific prompts or related modules (Liu et al., 2021a; Shin et al., 2020; Liu et al., 2021b; Li and Liang, 2021). For example, Liu et al. (2021b) propose p-tuning, which prepends trainable prompt tokens to the input of a PLM. The trainable prompt embeddings are optimized while the PLM parameters are kept frozen. Prompt learning allows few-shot or nearly zero-shot learning for pre-trained models on new tasks with little or no labeled data, and it has been demonstrated to be substantially more effective than fine-tuning in many tasks (Liu et al., 2021a; Qin and Eisner, 2021).

However, applying prompt learning directly to conversation modeling is challenging. General prompt-learning models assign universal prompt tokens to all inputs in the same task (Liu et al., 2021b). For example, prompts used for sentiment analysis share the same embeddings, which are inferred from the training data (Liu et al., 2021b). In contrast, conversations are context-sensitive. Dialogue responses are affected by contextual information, such as the topic of discussion, pre-dialogue context, and participant personalities. "Blanket" prompts can restrict the expressiveness of prompt learning due to this lack of context-awareness, leading to sub-optimal performance in response generation.

In this work, we present DialogPrompt, a novel prompt-based paradigm for response generation on top of large pre-trained language models. DialogPrompt prepends a sequence of prompt tokens to each dialogue context to elicit responses from large pre-trained language models. In order to construct context-aware prompts, we propose a dynamic prompt encoder on top of the Transformer (Vaswani et al., 2017). The prompt tokens are first encoded conditioned on the dialogue context. The resulting prompt encoding is then taken as the initial hidden state of the large PLM to generate responses. Compared to fine-tuning, DialogPrompt is encouraged to search for proper prompts that steer the large PLM toward directly producing higher-quality responses.

We evaluate DialogPrompt on popular multi-turn conversation datasets such as DailyDialog and MultiWOZ. Results show that DialogPrompt outperforms fine-tuning counterparts and other prompt tuning methods in terms of automated evaluation measures and the average length of generated responses. Human evaluation supports the superiority of our approach in generating informative and knowledgeable responses.

Our contributions are summarized as follows:

  • To the best of our knowledge, we are the first to propose prompt-based learning for general dialogue generation. Our approach can better reuse knowledge from existing large-scale PLMs and produce more knowledgeable responses.

  • We design a novel dynamic prompt encoder for encouraging context-aware prompt learning.

  • We extensively evaluated our approach on popular multi-turn conversation datasets and demonstrated the superiority of our approach in terms of quantitative automatic evaluations and qualitative human evaluations.

Related Work

This work is closely related to (1) pre-trained models for conversations, and (2) prompt learning for pre-trained language models.

Pre-trained Models for Dialogue Generation. Recently, an emerging trend in dialogue generation explores the adaptation of large pre-trained language models to dialogue corpora (Golovanov et al., 2019; Zhang et al., 2019). For example, Golovanov et al. (2019) studied how pre-trained architectures can be adapted for natural language generation, comparing a number of architectural and training schemes. The state-of-the-art DialoGPT (Zhang et al., 2019) pre-trains a GPT-2 model on large-scale conversation datasets and achieves a giant leap in performance over traditional conversation models.

Another line of work exploits pre-trained models for task-oriented dialogues. For example, Budzianowski and Vulić (2019) proposed a task-oriented dialogue model that operates solely on text input. Their model is built on top of the TransferTransfo framework (Golovanov et al., 2019), which effectively bypasses explicit policy and language generation modules. TOD-BERT, proposed by Wu et al. (2020), bridges the gap between general text and task-oriented dialogue by unifying nine human-human, multi-turn task-oriented dialogue datasets for language modeling. The model also incorporates user and system tokens into the masked language modeling and proposes a contrastive objective function to simulate the response selection task.

Compared to these related works, which directly fine-tune the dialogue model on top of a pre-trained model, DialogPrompt is a novel paradigm for pre-trained dialogue models that elicits knowledge from PLMs directly through minimal optimization of prompt tokens.

Prompt Learning for Pre-trained Language Models. There is a growing trend of automatically finding prompts to adapt pre-trained language models to downstream tasks Shin et al. (2020); Li and Liang (2021); Liu et al. (2021b). For example, Shin et al. (2020) proposed AutoPrompt which automatically optimizes prompts using a gradient signal. Unlike our method, AutoPrompt searches for hard prompts, thus it may be less versatile than the continuous methods. Instead, Liu et al. (2021b) proposed a continuous prompt tuning model named p-tuning. p-tuning optimizes fill-in-the-blank prompts in a continuous space, tested on GPT-2 and BERT models. A similar idea was proposed by Li and Liang (Li and Liang, 2021) who considered the tuning of prompts using a textual prefix. Specifically, they prepended a few task-specific “soft tokens” (prefix) to the source text and tuned the hidden states of only these tokens (at all Transformer layers). Similarly, Lester et al. (2021) prepended a sequence of prompt tokens to the source text, but only the word embeddings of these tokens are optimized. Qin and Eisner (2021) proposed prompt-based learning on relation extraction tasks using data-dependent mixtures of prompt templates and parameters.

Our method differs from existing prompt-based tuning methods in that we propose a novel context-aware prompt tuning mechanism that optimizes prompt encodings conditioned on dialogue contexts. Our work also differs from a very recent work by Zheng and Huang (2021), which explores prompt-based learning for grounded dialogue generation.


Approach

In this section, we present the implementation of DialogPrompt. The overall framework is shown in Figure 1. First, we present the standard autoregressive pre-trained dialogue model as the backbone. Then, we introduce a naive application of prompt learning to dialogue modeling, followed by our novel context-aware prompt tuning model for dialogues.

Figure 1: Overview of fine-tuning and (context-aware) prompt-tuning for response generation.

Response Generation via Autoregressive Transformer Models

Let $D = [w_1, \ldots, w_T]$ denote a dialogue of $T$ words, comprised of a context $C = [w_1, \ldots, w_m]$ followed by a response $R = [w_{m+1}, \ldots, w_T]$. The goal of response generation is to produce the response given the dialogue context, namely, estimating the conditional probability $p(R \mid C)$.

As shown in Figure 1 (top), an autoregressive Transformer model such as GPT-2 (Radford et al., 2019) solves this problem by sequentially estimating the probability of each word in the target response conditioned on the historical words in the dialogue:

$$p_\theta(R \mid C) = \prod_{t=m+1}^{T} p_\theta(w_t \mid w_1, \ldots, w_{t-1}) \quad (1)$$

where $\theta$ denotes the trainable parameters of the autoregressive Transformer model.

In detail, let $e(D) = [e(w_1), \ldots, e(w_T)]$ be the embeddings of the dialogue tokens. These input embeddings are fed into the pre-trained Transformer to obtain the contextual representations $H = [h_1, \ldots, h_T]$, where each $h_t$ is a function of $e(w_t)$ and the past representations of its left context:

$$h_t = \mathrm{Transformer}(e(w_t), h_{<t}) \quad (2)$$

Then, each $h_t$ is used to compute the distribution for the next token: $p(w_{t+1} \mid w_{\le t}) = \mathrm{softmax}(W h_t)$, where $W$ is a pre-trained matrix that maps $h_t$ to logits over the vocabulary.
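To make the next-token computation concrete, here is a minimal, self-contained sketch of the softmax step described above; the tiny matrix and vector are invented for illustration and stand in for GPT-2's hidden states and output embedding matrix.

```python
import math

def next_token_distribution(hidden, W):
    """Map a hidden state h_t to a probability distribution over the
    vocabulary via softmax(W h_t). W is a |V| x d matrix given as
    nested lists -- a toy stand-in for the pre-trained output matrix."""
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W]
    m = max(logits)                         # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

With a 2-word vocabulary and a 2-dimensional hidden state, `next_token_distribution([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]])` favors the first word, since its logit is larger.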

In the conventional fine-tuning framework, we initialize $\theta$ with the pre-trained parameters and perform gradient descent on the following objective:

$$\mathcal{L}_{\mathrm{FT}}(\theta) = -\sum_{t=m+1}^{T} \log p_\theta(w_t \mid w_1, \ldots, w_{t-1}) \quad (3)$$

where $\theta$ denotes the trainable parameters of the language model.
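The fine-tuning objective above reduces to summing negative log-probabilities over response positions only, with context tokens masked out of the loss. The toy sketch below (all probabilities invented) illustrates this:

```python
import math

def response_nll(token_log_probs, context_len):
    """Negative log-likelihood of a dialogue, summed only over
    response tokens (positions after the context)."""
    return -sum(token_log_probs[context_len:])

# Toy dialogue: 3 context tokens followed by a 2-token response.
log_probs = [math.log(p) for p in [0.9, 0.8, 0.7, 0.5, 0.25]]
loss = response_nll(log_probs, context_len=3)  # = -(ln 0.5 + ln 0.25)
```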

Prompt Learning for Conversations

Based on the intuition from prompting (Liu et al., 2021b; Li and Liang, 2021), we believe that a proper prompt for the context can adapt the large pre-trained language model to the conversation domain without re-training all of its parameters (Ben-David et al., 2021).

One intuitive baseline is to simply adopt previous work in prompt learning (e.g., prefix-tuning on GPT-2 (Li and Liang, 2021)) for conversations. More specifically, we can prepend a prompt utterance of $k$ tokens $P = [p_1, \ldots, p_k]$ to each dialogue context to obtain $\tilde{D} = [P; C; R]$, as shown in Figure 1 (middle). A fully connected prompt encoder can be designed to transform the prompt utterance into a sequence of hidden states, namely,

$$h_{p_i} = \mathrm{MLP}_\phi(e(p_i)), \quad i = 1, \ldots, k \quad (4)$$

where $\mathrm{MLP}$ denotes the fully connected neural network and $\phi$ denotes its trainable parameters.

We follow the same recurrence relation as in Equation 2, except that the hidden states of the prompt utterance are taken as past hidden states of the Transformer:

$$h_t = \mathrm{Transformer}(e(w_t), [h_{p_1}, \ldots, h_{p_k}; h_{<t}]) \quad (5)$$

Now the Transformer hidden states depend on the prompt encodings, because the prompt utterance is always located to the left of the context and hence affects the hidden states of the context in the autoregressive Transformer.

The training objective is to optimize only the prompt parameters while keeping the pre-trained Transformer parameters frozen, namely,

$$\mathcal{L}(\phi) = -\sum_{t=m+1}^{T} \log p_{\theta, \phi}(w_t \mid P, w_1, \ldots, w_{t-1}) \quad (6)$$

where $\phi$ denotes the only trainable parameters (those of the fully connected prompt encoder) and $\theta$ represents the frozen parameters of the pre-trained language model.
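The prompt-tuning setup above can be caricatured in a few lines: prompt vectors are prepended to the context embeddings, and a gradient step updates only those prompt vectors while the pre-trained parameters are never touched. This is an illustrative toy, not the authors' implementation; all names and numbers below are ours.

```python
def build_input(prompt_emb, context_emb):
    """Prepend k trainable prompt vectors to the token embeddings of
    the dialogue context, giving the [P; C] input described above."""
    return prompt_emb + context_emb

def prompt_tuning_step(prompt_emb, prompt_grads, lr=0.1):
    """One gradient step that touches ONLY the prompt embeddings; the
    pre-trained LM parameters are left frozen and never appear here."""
    return [[p - lr * g for p, g in zip(vec, gvec)]
            for vec, gvec in zip(prompt_emb, prompt_grads)]
```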

Dynamic Prompt Learning for Context-Aware Prompt Adaptation

In the previous model, the encoding of the prompt utterance is independent of the dialogue context $C$. That means prompts for all conversations share the same encoding. However, the latent space of dialogue contexts is more complicated and difficult to represent with such a unified encoding. Intuitively, the context can influence the encoding of the prompt by guiding what to extract from the PLM. We want to find a prompt encoding that steers the LM toward the current context.

Extending this intuition beyond generating a unified prompt encoding, we propose a dynamic prompt encoder, as shown in Figure 1 (bottom). Given the dialogue context $C = [w_1, \ldots, w_m]$ with a prompt utterance $P = [p_1, \ldots, p_k]$, the prompt encodings $[h_{p_1}, \ldots, h_{p_k}]$ are dynamically generated conditioned on the context using another autoregressive Transformer:

$$h_{p_i} = \mathrm{Transformer}_\phi(e(p_i), [h_1, \ldots, h_m; h_{p_{<i}}]) \quad (7)$$

where $[h_1, \ldots, h_m]$ are computed using Equation 2 and are taken as past hidden states for the new Transformer to generate the prompt encodings.

Now we update the hidden states of the pre-trained Transformer based on the new prompt encodings:

$$h_t = \mathrm{Transformer}(e(w_t), [h_{p_1}, \ldots, h_{p_k}; h_{<t}]) \quad (8)$$

The final hidden states $[h_{m+1}, \ldots, h_T]$ are taken as input to the pre-trained language model to generate the response. Our training objective now becomes to minimize the following loss function:

$$\mathcal{L}(\phi) = -\sum_{t=m+1}^{T} \log p_{\theta, \phi}(w_t \mid P, w_1, \ldots, w_{t-1}) \quad (9)$$

where $\phi$ represents the parameters of the prompt encoder and $\theta$ denotes the frozen parameters of the pre-trained language model.
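As a rough illustration of the difference from the static prompt encoder, the sketch below conditions each prompt vector on a summary of the context. The mean-pooled context vector is our own drastic simplification of the context-conditioned Transformer described above, used only to make the data flow concrete.

```python
def dynamic_prompt_encodings(prompt_emb, context_states):
    """Context-aware prompt encodings: every prompt vector depends on
    the dialogue context. Here the context-conditioned Transformer is
    replaced by a mean-pooled context summary added to each prompt
    embedding -- a deliberately crude stand-in for illustration."""
    dim = len(context_states[0])
    ctx = [sum(h[d] for h in context_states) / len(context_states)
           for d in range(dim)]
    return [[p[d] + ctx[d] for d in range(dim)] for p in prompt_emb]
```

Unlike the static encoder, two different contexts now yield two different prompt encodings for the same prompt tokens.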

Experimental Setup


Datasets

We evaluate all models on two popular response generation datasets, namely, DailyDialog and MultiWOZ. Table 1 shows the statistics of these datasets.

Dataset DailyDialog MultiWOZ
dialogues 13,118 8,438
train samples 76,052 106,794
valid samples 7,069 12,902
test samples 6,740 12,914
Table 1: Overview of the datasets.

The DailyDialog dataset is a manually labeled multi-turn dialogue dataset that contains daily conversations in English. As it was originally designed for English learners, DailyDialog has a more chit-chat style than other datasets. MultiWOZ (Budzianowski et al., 2018) is a fully-labeled collection of human-human written conversations spanning multiple domains and topics such as attraction, hotel, hospital, police, restaurant, train, and taxi. Compared to DailyDialog, MultiWOZ is more challenging due to its diverse domains and language styles.

Implementation Details

We used GPT-2 (Radford et al., 2019) as the backbone PLM for all models. GPT-2 has been widely employed for generating dialogues (Zhang et al., 2019). We did not use the more advanced GPT-3 due to the restrictions of our computational resources. Besides, GPT-3 relies heavily on very large models and training corpora, so the effect of prompt learning could be overwhelmed. Our implementation was based on the HuggingFace Transformers library (Wolf et al., 2019). For the sake of computational efficiency, we limited each context to at most 4 utterances, each containing fewer than 20 words. The batch size for all models was set to 32. In the generation phase, we used top-1 sampling for response decoding. The hyperparameters we tuned include the prompt size and the learning rate. We searched for the best hyperparameters using NAVER Smart Machine Learning (NSML) (Sung et al., 2017; Kim et al., 2018; Park et al., 2019). The prompt size was empirically set to 5 and 20 for DailyDialog and MultiWOZ, respectively. All models were optimized using the AdamW optimizer (Loshchilov and Hutter, 2018) with initial learning rates of 1e-3 and 5e-5 for DailyDialog and MultiWOZ, respectively. We used a linear learning rate scheduler with 5,000 warm-up steps. We trained all models on a Linux server with Ubuntu 16.04 and an Nvidia Tesla V100 GPU. Training was early-stopped when there was no progress on the validation loss, and the corresponding checkpoint was used to evaluate performance on the test set. We ran each experiment five times and report the average scores.
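The context truncation described above (at most 4 utterances, each under 20 words) might look as follows; whether clipping keeps the first or the last words of an utterance is our assumption, as the paper does not specify it:

```python
def truncate_context(utterances, max_utts=4, max_words=20):
    """Keep only the most recent `max_utts` utterances, each clipped
    to its first `max_words` whitespace-separated words. The cut-off
    values follow the paper; the clipping direction is an assumption."""
    kept = utterances[-max_utts:]
    return [" ".join(u.split()[:max_words]) for u in kept]
```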

Baseline Models

We compare our approach with popular fine-tuning and prompt learning methods, namely:

  • Fine-Tuning: the default training method for adapting pre-trained models to conversations. As we use GPT-2 as our backbone model, we implement this baseline by directly fine-tuning GPT-2 (Radford et al., 2019) on the conversation datasets.

  • P-Tuning (Liu et al., 2021b): a well-known prompt learning method that searches for continuous prompts by adding prompt tokens. In our implementation, we modify the GPT-2 input by prepending $n$ prompt tokens before each utterance in the context, where $n$ denotes the number of prompt tokens per utterance; we empirically set $n$ to 3 in our experiments.

  • Prefix-Tuning (Li and Liang, 2021): prepends a prefix of prompt tokens before the source sequence and optimizes the hidden states of all Transformer layers for the prompt tokens. Prefix-tuning is similar to our DialogPrompt except that it uses a unified prompt encoding for all input contexts. We therefore implemented the common modules using the same configuration as in DialogPrompt.

  • Soft-Prompt-Tuning (Lester et al., 2021): similar to prefix-tuning in architecture. The difference between the two methods is that soft-prompt-tuning only optimizes the embeddings of the prompt tokens, while prefix-tuning optimizes the parameters of all hidden layers.
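For the P-Tuning baseline, the per-utterance prompting can be sketched as below. The bracketed `[Pi]` placeholders are our own notation for the continuous prompt tokens; in the actual model they are trainable embeddings, not literal strings:

```python
def ptune_template(utterances, n=3):
    """Hypothetical rendering of the p-tuning input: n continuous
    prompt placeholders (written "[Pi]" here) are prepended to every
    utterance in the context."""
    prompts = " ".join(f"[P{i}]" for i in range(1, n + 1))
    return " ".join(f"{prompts} {u}" for u in utterances)
```

For example, `ptune_template(["hello", "hi"])` yields one group of three placeholders before each of the two utterances.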

Model BLEU NIST METEOR ROUGE-L Avg. Len.
Fine-Tuning 12.65 17.55 7.04 8.01 15.60
P-Tuning 10.36 12.54 5.09 5.91 15.28
SoftPrompt-Tuning 10.25 13.22 5.20 6.20 14.46
Prefix-Tuning 12.29 16.64 6.55 7.55 15.49
DialogPrompt (ours) 13.94 19.07 7.61 8.51 16.90
Table 2: Comparison between DialogPrompt and baseline models on the DailyDialog dataset.
Model BLEU NIST METEOR ROUGE-L Avg. Len.
Fine-Tuning 20.31 47.82 16.54 17.34 18.70
P-Tuning 16.66 24.60 7.57 8.93 18.03
SoftPrompt-Tuning 16.64 24.40 7.18 8.78 18.08
Prefix-Tuning 20.24 48.68 16.25 17.24 18.21
DialogPrompt (ours) 20.96 52.94 17.62 18.22 18.91
Table 3: Comparison between DialogPrompt and baseline models on the MultiWOZ dataset.

Evaluation Metrics

We evaluate all models using five commonly used metrics in NLG, namely, BLEU (Papineni et al., 2002), NIST (Doddington, 2002), METEOR (Lavie and Agarwal, 2007), ROUGE-L (Lin, 2004), and the average length of generated responses.

BLEU evaluates how many n-grams in the generated response match those in the human reference. We report the average of BLEU-1 to BLEU-4 scores in our experiments, computed using the NLTK toolkit. NIST (Doddington, 2002) is similar to BLEU but assigns different weights to n-gram matches according to their information gain; we use the NLTK implementation as well. METEOR (Lavie and Agarwal, 2007) is based on unigram matching (surface forms, stemmed forms, and meanings) between the generated response and the human reference. ROUGE-L (Lin, 2004) measures the longest common subsequence (LCS) between the generated response and the human reference. Finally, the average length of generated responses is also a critical metric of response quality (Gu et al., 2019; Zhang et al., 2019). Research has shown that dialogue models often produce safe responses that are short and uninformative (e.g., "I do not know") (Gu et al., 2019). We simply average the length of generated responses over all test examples.
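The average-length metric is straightforward to reproduce; a minimal version (assuming whitespace tokenization, which the paper does not specify) is:

```python
def avg_response_length(responses):
    """Average number of whitespace-separated tokens per generated
    response -- the simple length metric reported alongside BLEU,
    NIST, METEOR, and ROUGE-L."""
    return sum(len(r.split()) for r in responses) / len(responses)
```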

Evaluation Results

Automatic Evaluation

Tables 2 and 3 show the performance of each method on the two datasets, respectively. Broadly, DialogPrompt achieves the best performance across all automatic metrics. Compared to the fine-tuning baseline, DialogPrompt is superior across all metrics by a large margin; for example, the BLEU score increases by 10% on the DailyDialog dataset. The improvement is consistent on both datasets, affirming the superiority of the prompt-based pre-trained dialogue model. This indicates that eliciting knowledge from pre-trained GPT-2 is more effective than the fine-tuning counterparts.

The prefix-tuning baseline model achieves similar performance to that of fine-tuning, which indicates that simply applying previous prompt learning methods to dialogues does not bring better performance to PLM-based dialogue models.

Among the three prompt learning models (i.e., p-tuning, soft-prompt-tuning, and prefix-tuning), prefix-tuning achieves the best performance on both datasets. This is probably because prefix-tuning optimizes the activations of all layers, which is more compatible with autoregressive Transformers such as GPT-2. Besides, prefix-tuning optimizes more parameters than simply optimizing prompt embeddings (Li and Liang, 2021; Liu et al., 2021b).

The improvement of DialogPrompt on the MultiWOZ dataset is less significant in terms of BLEU and average response length than that on the DailyDialog dataset. This is probably because the MultiWOZ dataset was originally prepared for task-oriented dialogues and contains specific domain knowledge; such task-specific knowledge might not be contained in the pre-trained GPT-2. Another possible reason is that MultiWOZ has a larger amount of data, which benefits fine-tuning methods, which are usually data-hungry.

Overall, these results show that DialogPrompt can utilize pre-trained language models more effectively than general prompt learning methods.

Ablation Study

Figure 2: Effects of prompt size on the DailyDialog dataset. For ease of rendering, we omit the results in terms of NIST.
Model BLEU NIST METEOR ROUGE-L Avg. Len.
Fine-Tuning (standard) 12.65 17.55 7.04 8.01 15.60
DialogPrompt (standard) 13.94 19.07 7.61 8.51 16.90
Fine-Tuning (medium) 14.72 22.27 8.75 10.16 15.93
DialogPrompt (medium) 14.98 22.79 9.44 10.75 16.17
Fine-Tuning (large) 15.77 23.84 10.59 11.41 15.97
DialogPrompt (large) 16.60 26.61 11.39 12.35 16.76
Table 4: Comparison between DialogPrompt and baseline models with different model sizes in the DailyDialog dataset.

One of the key hyperparameters of our approach is the prompt size, namely, the number of prompt tokens for each context. We conducted an ablation study to assess how sensitive performance is to the prompt size. We trained DialogPrompt on the DailyDialog dataset with various numbers of prompt tokens in the prompt utterance. As shown in Figure 2, the number of prompt tokens has little effect on performance: the model achieves satisfactory performance with only 5 prompt tokens, and increasing the prompt size does not bring significant further improvement. Balancing performance and complexity, the optimal number of prompt tokens on the DailyDialog dataset is around 5.

We also conducted an ablation study on the size of the pre-trained model. We trained DialogPrompt with different GPT-2 sizes, namely, standard (L=12, H=12, D=768), medium (L=24, H=16, D=1024), and large (L=36, H=20, D=1280). Results show that DialogPrompt outperforms the fine-tuning counterpart at all three model sizes, which means our method is effective across different scales of backbone pre-trained models. However, as the model size increases, the improvement of our method becomes less significant, indicating that DialogPrompt is more effective with smaller pre-trained models. We conjecture that larger pre-trained models contain massive numbers of parameters and were pre-trained with enormous data, which may overwhelm the efficacy of prompt tuning. Nevertheless, we found that model size correlates positively with performance: larger GPT-2 backbones tend to achieve better scores.

Comparison Coherence Informativeness Fluency
Win Tie Loss Win Tie Loss Win Tie Loss
Ours vs. Fine-Tuning 58.13% 18.79% 23.07% 56.43% 23.16% 20.41% 57.42% 20.45% 22.13%
Ours vs. P-Tuning 60.60% 19.59% 19.82% 58.82% 24.15% 17.03% 59.38% 21.15% 19.48%
Ours vs. Prefix-Tuning 58.13% 18.79% 23.07% 55.89% 22.94% 21.16% 57.36% 20.43% 22.21%
Table 5: Human evaluation on the test set of DailyDialog. All results are statistically significant.

Human Evaluation

To further verify the effectiveness of DialogPrompt, we conducted a human evaluation on the Amazon Mechanical Turk platform. We chose DailyDialog as the evaluation corpus since it is in the style of daily chit-chat, making it easier for annotators to rate their preference. We randomly sampled 200 dialogues from the test set of DailyDialog. For each sample, we presented the dialogue context, followed by a pair of responses from our model and a baseline model (in random order), to three different workers. Each worker was asked to evaluate the responses according to three criteria: coherence, informativeness, and fluency. Coherence measures how relevant the generated response is to the context. Informativeness measures how much non-trivial information the generated response includes. Fluency measures how human-readable the generated response is. Finally, the workers blindly rated their preference using a 3-point Likert scale: “win” (ours is better), “loss” (the other is better), and “tie” (equally good or bad).
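Aggregating the workers' judgments into the win/tie/loss percentages of Table 5 amounts to simple counting; a minimal sketch:

```python
from collections import Counter

def preference_rates(ratings):
    """Aggregate 3-point Likert judgments ("win"/"tie"/"loss") from
    all workers into percentage preference rates."""
    counts = Counter(ratings)
    total = len(ratings)
    return {k: 100.0 * counts[k] / total for k in ("win", "tie", "loss")}
```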

The overall results are summarized in Table 5. A strong preference for DialogPrompt over the baseline models can be observed in terms of all metrics. Interestingly, fine-tuning and prefix-tuning receive similar preferences, which is consistent with the results of the automatic evaluation.

Context: A: Good evening. What’ll you have?
B: I’d like a beer. What kind of beer do you have?
fine-tuning: I want a light to medium beer.
p-tuning: I will be glad to help you out once your stay here gets a little hot.
Thank you.
prefix-tuning: Pardon?
DialogPrompt: We have the Imperial Porter. It’s our standard beer.
Human: Draft beer and cans beer. Which do you prefer?
Context: A: how long will it take us to drive to London?
B: I think it’s a distance of 180 kilometers from here to London,
    so it should be a two-hour drive on the motorway.
A: that’s unless there is a traffic jam. It could take three hours.
fine-tuning: I think it will be difficult, but I think we’ll enjoy ourselves.
p-tuning: I’ t think I want to stay here.
prefix-tuning: it can be dangerous. You could be killed.
DialogPrompt: I think it would be super time-consuming. It would also be bad
for business. There could be a car accident, especially if
something goes wrong.
Human: You’re right. We will be able to travel at high speeds at the
beginning and end.
Table 6: Sample conversations from multiple models with human reference.

Case Study

Table 6 presents two sample responses generated by different models on the DailyDialog dataset. As the samples indicate, DialogPrompt generates more coherent responses than the other models, which is consistent with the results of the automatic and human evaluations. In the first sample, instead of giving a safe response, DialogPrompt produces a more contentful response; for example, the response contains specific information such as “Imperial Porter” and “standard”. This is consistent with the informativeness results in the human study. The second sample shows a clearer strength of DialogPrompt, which produces the longest response among all the models. Besides, the response generated by DialogPrompt contains knowledgeable, human-like keywords such as ‘time-consuming’, ‘business’, and ‘car accident’. This is presumably because DialogPrompt reuses much knowledge from pre-trained models, which have already seen large amounts of domain-specific data. DialogPrompt also produces more fluent responses than the baseline models. For example, the p-tuning baseline generates a response containing grammar errors such as ‘I’ t think’ and ‘gets a little hot’, while our approach generates error-free, human-like responses.

Our observations suggest that DialogPrompt is better in modeling multi-turn conversations than fine-tuning counterparts and naive adaptations of existing prompt learning models.


Discussion

Why does DialogPrompt work better than fine-tuning?

One possible reason for the improvement is the intensified reuse of knowledge from PLMs achieved by freezing the autoregressive decoder. The fine-tuning baseline attributes all variations of the data to the autoregressive decoder. A sufficiently high-capacity autoregressive decoder can model the conditional density directly, ignoring the relation $p(R \mid C)$ between contexts and responses (McCarthy et al., 2020). By freezing the decoder, our model restricts the optimization to the local prompt Transformer. Hence, it focuses more on how to reuse knowledge from the pre-trained model rather than on decoding the target response autoregressively.

Threats to Validity

Our model is built on top of the GPT-2 model. Although GPT-2 is one of the most typical pre-trained models and has been shown to be effective in response generation (Zhang et al., 2019), it remains to be verified whether the proposed prompt model is applicable to other pre-trained models. We leave prompt learning on other pre-trained models for future work.


Conclusion

In this paper, we propose DialogPrompt, a novel prompt-based response generation model. DialogPrompt prepends a prompt utterance to the dialogue context and only optimizes the prompt encoder. In order to adapt to different contexts, we propose a dynamic prompt encoder that updates the prompt activations based on the hidden states of the context before response generation. Results on two popular conversation datasets, DailyDialog and MultiWOZ, show that DialogPrompt significantly outperforms fine-tuning counterparts and other prompt-based models on both automatic and human evaluations. In the future, we will investigate prompt-based dialogue modeling with more pre-trained language models.


Acknowledgments

The author would like to thank Jung-Woo Ha at NAVER AI Lab for his support and valuable comments on this project.


  • E. Ben-David, N. Oved, and R. Reichart (2021) PADA: a prompt-based autoregressive approach for adaptation to unseen domains. arXiv preprint arXiv:2102.12206. Cited by: Introduction, Prompt Learning for Conversations.
  • P. Budzianowski and I. Vulić (2019) Hello, it’s gpt-2-how can i help you? towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pp. 15–22. Cited by: Introduction, Related Work.
  • P. Budzianowski, T. Wen, B. Tseng, I. Casanueva, S. Ultes, O. Ramadan, and M. Gašić (2018) Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278. Cited by: Dataset.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: Introduction.
  • G. Doddington (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pp. 138–145. Cited by: Evaluation Metrics, Evaluation Metrics.
  • X. Feng, X. Feng, L. Qin, B. Qin, and T. Liu (2021) Language model as an annotator: exploring dialogpt for dialogue summarization. arXiv preprint arXiv:2105.12544. Cited by: Introduction.
  • S. Golovanov, R. Kurbanov, S. Nikolenko, K. Truskovskyi, A. Tselousov, and T. Wolf (2019) Large-scale transfer learning for natural language generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6053–6058. Cited by: Related Work.
  • X. Gu, K. Cho, J. Ha, and S. Kim (2019) DialogWAE: multimodal response generation with conditional wasserstein auto-encoder. In International Conference on Learning Representations, Cited by: Evaluation Metrics.
  • H. Kim, M. Kim, D. Seo, J. Kim, H. Park, S. Park, H. Jo, K. Kim, Y. Yang, Y. Kim, et al. (2018) NSML: meet the MLaaS platform with a real-world case study. arXiv preprint arXiv:1810.09957. Cited by: Implementation Details.
  • A. Lavie and A. Agarwal (2007) METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07, USA, pp. 228–231. Cited by: Evaluation Metrics.
  • B. Lester, R. Al-Rfou, and N. Constant (2021) The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691. Cited by: Related Work, Baseline Models.
  • X. L. Li and P. Liang (2021) Prefix-tuning: optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190. Cited by: Introduction, Related Work, Prompt Learning for Conversations, Prompt Learning for Conversations, Baseline Models, Automatic Evaluation.
  • C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain, pp. 74–81. External Links: Link Cited by: Evaluation Metrics, Evaluation Metrics.
  • P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig (2021a) Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Cited by: Introduction.
  • X. Liu, Y. Zheng, Z. Du, M. Ding, Y. Qian, Z. Yang, and J. Tang (2021b) GPT understands, too. arXiv preprint arXiv:2103.10385. Cited by: Introduction, Introduction, Related Work, Prompt Learning for Conversations, Baseline Models, Automatic Evaluation.
  • I. Loshchilov and F. Hutter (2018) Decoupled weight decay regularization. In International Conference on Learning Representations, Cited by: Implementation Details.
  • A. D. McCarthy, X. Li, J. Gu, and N. Dong (2020) Addressing posterior collapse with mutual information for improved variational neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8512–8525. Cited by: Why does DialogPrompt work better than fine-tuning?.
  • K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: Evaluation Metrics.
  • H. Park, J. Kim, M. Kim, J. Kim, J. Choo, J. Ha, and N. Sung (2019) VisualHyperTuner: visual analytics for user-driven hyperparameter tuning of deep neural networks. In Demo at SysML Conference, Cited by: Implementation Details.
  • G. Qin and J. Eisner (2021) Learning how to ask: querying lms with mixtures of soft prompts. arXiv preprint arXiv:2104.06599. Cited by: Introduction, Related Work.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog 1, pp. 9. Cited by: Introduction, Response Generation via Autoregressive Transformer Models, Implementation Details, Baseline Models.
  • T. Shin, Y. Razeghi, R. L. Logan IV, E. Wallace, and S. Singh (2020) AutoPrompt: eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980. Cited by: Introduction, Related Work.
  • Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, and H. Wang (2019) ERNIE 2.0: a continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412. Cited by: Introduction.
  • N. Sung, M. Kim, H. Jo, Y. Yang, J. Kim, L. Lausen, Y. Kim, G. Lee, D. Kwak, J. Ha, et al. (2017) NSML: a machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902. Cited by: Implementation Details.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: Introduction.
  • T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew (2019) HuggingFace’s transformers: state-of-the-art natural language processing. ArXiv abs/1910.03771. Cited by: Implementation Details.
  • C. Wu, S. C. Hoi, R. Socher, and C. Xiong (2020) TOD-BERT: pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 917–929. Cited by: Related Work.
  • Y. Zhang, S. Sun, M. Galley, Y. Chen, C. Brockett, X. Gao, J. Gao, J. Liu, and B. Dolan (2019) DialoGPT: large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Cited by: Introduction, Related Work, Implementation Details, Evaluation Metrics, Threats to Validity.
  • C. Zheng and M. Huang (2021) Exploring prompt-based few-shot learning for grounded dialog generation. arXiv preprint arXiv:2109.06513. Cited by: Related Work.