Building a Personalized Dialogue System with Prompt-Tuning

06/11/2022
by Tomohito Kasahara, et al.

Dialogue systems whose responses are inconsistent quickly lose their appeal. In this study, we build a dialogue system that responds in line with a given character setting (persona) to improve consistency. Given the rapidly increasing scale of language models, we propose an approach that applies prompt-tuning, which has a low training cost, to pre-trained large-scale language models. Automatic and human evaluations in both English and Japanese show that this approach yields more natural, personalized responses while using fewer computational resources than fine-tuning.
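
The abstract does not include an implementation, so the following is a minimal sketch of the prompt-tuning idea it describes: a frozen pre-trained language model with a small set of trainable continuous prompt embeddings prepended to the input. It assumes a Hugging Face GPT-2 backbone in PyTorch; the model name, prompt length, and initialization are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class SoftPromptModel(nn.Module):
    """Frozen causal LM plus a trainable soft prompt (illustrative sketch)."""

    def __init__(self, model_name="gpt2", n_prompt_tokens=20):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        # Freeze every backbone parameter; only the soft prompt is trained.
        for p in self.lm.parameters():
            p.requires_grad = False
        emb_dim = self.lm.get_input_embeddings().embedding_dim
        # Trainable continuous "persona" prompt prepended to each input.
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, emb_dim) * 0.02)

    def forward(self, input_ids, labels=None):
        tok_emb = self.lm.get_input_embeddings()(input_ids)           # (B, T, D)
        batch = input_ids.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)  # (B, P, D)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)           # (B, P+T, D)
        if labels is not None:
            # Mask the prompt positions out of the LM loss with -100.
            pad = torch.full((batch, prompt.size(1)), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, labels=labels)

model = SoftPromptModel()
tokenizer = AutoTokenizer.from_pretrained("gpt2")
enc = tokenizer("Hello! How are you today?", return_tensors="pt")
out = model(enc.input_ids, labels=enc.input_ids)
out.loss.backward()  # gradients flow only into the soft prompt

Because the backbone stays frozen, only the n_prompt_tokens x emb_dim prompt matrix receives gradients, which is what keeps the training cost far below full fine-tuning.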

Related research

01/21/2022
A Comparative Study on Language Models for Task-Oriented Dialogue Systems
The recent development of language models has shown promising results by...

11/04/2021
Response Generation with Context-Aware Prompt Learning
Pre-trained language models (PLM) have marked a huge leap in neural dial...

09/11/2021
Empirical Analysis of Training Strategies of Transformer-based Japanese Chit-chat Systems
In recent years, several high-performance conversational systems have be...

06/27/2020
Video-Grounded Dialogues with Pretrained Generation Language Models
Pre-trained language models have shown remarkable success in improving v...

04/30/2022
Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models
Recent open-domain dialogue models have brought numerous breakthroughs. ...

04/17/2021
Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding
Dialogue systems powered by large pre-trained language models (LM) exhib...