Low-Resource Adaptation of Open-Domain Generative Chatbots

08/13/2021
by Greyson Gerhard-Young, et al.

Recent work on building open-domain chatbots has demonstrated that increasing model size improves performance. On the other hand, latency and connectivity considerations dictate moving digital assistants onto the device. Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything creates the need to reduce the chatbot model size so that it fits on the user's device. We demonstrate that low-parameter models can retain their general-knowledge conversational abilities while improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks reference throughout multi-turn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (perplexity) and human (SSA - Sensibleness and Specificity Average) evaluation metrics, and establish comparable performance while reducing model parameters by 90%.
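As a rough illustration of the two evaluation metrics named above: perplexity is the exponentiated average negative log-probability a model assigns to the tokens of a held-out response, and SSA (following the Meena evaluation protocol) averages the percentage of responses human raters judge sensible with the percentage judged specific. The sketch below is not the authors' evaluation code; the function names and inputs are illustrative assumptions.

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    exp(-mean(log p)). Lower is better."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def ssa(sensibleness_pct, specificity_pct):
    """SSA is the simple average of the two human-rated percentages."""
    return (sensibleness_pct + specificity_pct) / 2.0

# Example: a model assigning probability 0.25 to each of 4 tokens
lps = [math.log(0.25)] * 4
print(perplexity(lps))   # → 4.0
print(ssa(80.0, 70.0))   # → 75.0
```

A uniform distribution over k equally likely tokens yields perplexity exactly k, which is why perplexity is often read as an effective branching factor of the model's predictions.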

