
Hybrid Supervised Reinforced Model for Dialogue Systems

11/04/2020
by Carlos Miranda, et al.

This paper presents a recurrent hybrid model and training procedure for task-oriented dialogue systems based on Deep Recurrent Q-Networks (DRQN). The model handles both tasks required for dialogue management: state tracking and decision making. It models the human-machine interaction as a latent representation that embeds the interaction context and guides the discussion. The model achieves higher performance, faster learning, and greater robustness than a non-recurrent baseline. Moreover, the results allow the policy's evolution and the latent representations to be interpreted and validated from an information perspective.
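To make the DRQN idea concrete, the following is a minimal PyTorch sketch of a recurrent Q-network for dialogue management: a GRU summarizes the turn-by-turn observations into a latent dialogue state (the state-tracking role), and a linear head maps that latent state to Q-values over dialogue actions (the decision-making role). All layer choices, sizes, and names here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Recurrent Q-network sketch: a GRU encodes the dialogue history
    into a latent state; a linear head outputs Q-values per action.
    Dimensions and layers are illustrative, not taken from the paper."""

    def __init__(self, obs_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, turns, obs_dim) -- one feature vector per turn.
        latents, h_n = self.gru(obs_seq, h0)  # latent dialogue state per turn
        q_values = self.q_head(latents)       # (batch, turns, n_actions)
        return q_values, h_n

# Greedy action selection for the current turn of a 3-turn dialogue:
model = DRQN(obs_dim=16, hidden_dim=32, n_actions=5)
obs = torch.zeros(1, 3, 16)
q, _ = model(obs)
action = q[0, -1].argmax().item()  # pick the highest-Q action at the last turn
```

Because the GRU's hidden state carries context across turns, the same network can be unrolled over an entire dialogue during training (as in DRQN) or stepped one turn at a time at inference by passing `h_n` back in as `h0`.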

