Hybrid Supervised Reinforced Model for Dialogue Systems

11/04/2020
by Carlos Miranda, et al.

This paper presents a recurrent hybrid model and training procedure for task-oriented dialogue systems, based on Deep Recurrent Q-Networks (DRQN). The model handles both tasks required for dialogue management: state tracking and decision making. It models the human–machine interaction as a latent representation that embeds the interaction context and guides the discussion. The model achieves better performance, faster learning, and greater robustness than a non-recurrent baseline. Moreover, the results allow the policy's evolution and the latent representations to be interpreted and validated from an information perspective.
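To make the two roles concrete, here is a minimal sketch of a DRQN-style dialogue manager: a recurrent cell compresses the dialogue history into a latent state (state tracking), and a linear head maps that state to Q-values over system actions (decision making). All names, sizes, and the randomly initialized weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBS = 6      # size of the encoded user-turn observation (assumption)
N_HIDDEN = 8   # size of the latent dialogue state (assumption)
N_ACTIONS = 4  # number of system actions (assumption)

# Randomly initialized parameters stand in for trained weights.
W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_OBS))
W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_q = rng.normal(scale=0.1, size=(N_ACTIONS, N_HIDDEN))

def step(h, obs):
    """One dialogue turn: update the latent state, then score actions."""
    h_new = np.tanh(W_in @ obs + W_rec @ h)  # state tracking: fold the new turn into the context
    q = W_q @ h_new                          # decision making: Q-value per system action
    return h_new, q

# Roll the recurrent manager over a short fake dialogue.
h = np.zeros(N_HIDDEN)
for obs in [rng.normal(size=N_OBS) for _ in range(3)]:
    h, q = step(h, obs)
    action = int(np.argmax(q))  # greedy policy over the Q-values
```

Because the latent state `h` persists across turns, the greedy action at each turn depends on the whole interaction so far, not just the last observation; this is the property the paper exploits relative to a non-recurrent baseline.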
