Prompted LLMs as Chatbot Modules for Long Open-domain Conversation

05/08/2023
by Gibbeum Lee, et al.

In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for creating high-quality conversational agents without the need for fine-tuning. Our method uses pre-trained large language models (LLMs) as individual modules, achieving long-term consistency and flexibility through techniques such as few-shot prompting, chain-of-thought (CoT) reasoning, and external memory. Our human evaluation results show that MPC is on par with fine-tuned chatbot models in open-domain conversations, making it an effective solution for creating consistent and engaging chatbots.
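To make the modular idea concrete, below is a minimal Python sketch of one way such a prompting-only pipeline could be wired together. The `complete` callable stands in for any frozen LLM completion backend, and the module split (a summarizing memory processor plus a few-shot, CoT-cued utterance generator), the prompt wording, and the few-shot examples are illustrative assumptions based only on the abstract, not the paper's exact architecture.

```python
# Sketch of a modular prompted chatbot: frozen LLM "modules" chained via
# prompting only, with no fine-tuning. All module names and prompts here
# are hypothetical illustrations, not the paper's actual design.
from typing import Callable, List

FEW_SHOT_EXAMPLES = (
    # Hypothetical few-shot examples conditioning the generator's persona.
    "User: Hi, who are you?\n"
    "Bot: I'm Sarah, a friendly assistant who loves hiking.\n\n"
)

class ModularPromptedChatbot:
    """Chains two prompted modules: a memory processor that folds old turns
    into a running summary (external memory), and a few-shot generator."""

    def __init__(self, complete: Callable[[str], str], window: int = 6):
        self.complete = complete      # any text-completion backend
        self.memory: str = ""         # external long-term memory (a summary)
        self.history: List[str] = []  # recent raw turns
        self.window = window          # turns kept verbatim before summarizing

    def _memory_module(self) -> None:
        # Memory processor: summarize older turns so the prompt stays short
        # while long-term facts persist across the conversation.
        old, self.history = self.history[:-2], self.history[-2:]
        prompt = (
            "Summarize the facts below in one or two sentences.\n"
            f"Current summary: {self.memory or '(empty)'}\n"
            "New dialogue:\n" + "\n".join(old) + "\nUpdated summary:"
        )
        self.memory = self.complete(prompt).strip()

    def _generator_module(self, user_msg: str) -> str:
        # Utterance generator: few-shot prompt with a chain-of-thought cue
        # ("think step by step") before producing the final reply.
        prompt = (
            FEW_SHOT_EXAMPLES
            + f"Known facts about the conversation: {self.memory or '(none)'}\n"
            + "\n".join(self.history)
            + f"\nUser: {user_msg}\n"
            + "Think step by step about what the bot knows, then reply.\n"
            + "Bot:"
        )
        return self.complete(prompt).strip()

    def chat(self, user_msg: str) -> str:
        if len(self.history) > self.window:
            self._memory_module()
        reply = self._generator_module(user_msg)
        self.history += [f"User: {user_msg}", f"Bot: {reply}"]
        return reply
```

Plugging in any completion endpoint, e.g. `bot = ModularPromptedChatbot(my_llm)` followed by `bot.chat("Where do you like to hike?")`, exercises both modules. The design point the abstract makes is that long-term consistency comes from the summary carried in the prompt and the fixed few-shot persona, not from updated model weights.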

Related research

05/26/2023 · Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation
Few-shot fine-tuning and in-context learning are two alternative strateg...

08/16/2023 · LLM4TS: Two-Stage Fine-Tuning for Time-Series Forecasting with Pre-Trained LLMs
In this work, we leverage pre-trained Large Language Models (LLMs) to en...

06/06/2023 · Training and Fine-tuning Large Language Models with Turkish Datasets
Large language models have advanced enormously, gained vast attraction a...

09/24/2021 · Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations
Fine-tuning pretrained models for automatically summarizing doctor-patie...

08/16/2023 · MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation
We propose MemoChat, a pipeline for refining instructions that enables l...

05/23/2023 · Effortless Integration of Memory Management into Open-Domain Conversation Systems
Open-domain conversation systems integrate multiple conversation skills ...

11/28/2022 · Arguments to Key Points Mapping with Prompt-based Learning
Handling and digesting a huge amount of information in an efficient mann...
