BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems

11/15/2017
by Zachary Lipton, et al.

We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as ϵ-greedy, Boltzmann, bootstrapping, and intrinsic-reward-based ones. Additionally, we show that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.
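The two ideas in the abstract, Thompson sampling from a Bayes-by-Backprop Q-network and "spiking" the replay buffer with successful episodes, can be sketched roughly as follows. This is an illustrative toy in NumPy under our own assumptions (layer sizes, names, and the single-sample forward pass are ours), not the paper's implementation:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

class BayesLinear:
    """One Bayes-by-Backprop layer: each weight has a Gaussian posterior
    N(mu, softplus(rho)^2). Sampling fresh weights on each forward pass
    yields the Monte Carlo draw used for Thompson sampling."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, (n_in, n_out))
        self.rho = np.full((n_in, n_out), -3.0)  # softplus(-3) ~ 0.05

    def sample_forward(self, x):
        sigma = np.log1p(np.exp(self.rho))  # softplus keeps sigma > 0
        w = self.mu + sigma * rng.standard_normal(self.mu.shape)
        return x @ w

def thompson_action(layers, state):
    """Draw one posterior sample of the Q-network and act greedily on
    the sampled Q-values -- Thompson-sampling exploration."""
    h = state
    for layer in layers[:-1]:
        h = np.maximum(layer.sample_forward(h), 0.0)  # ReLU hidden layers
    q = layers[-1].sample_forward(h)
    return int(np.argmax(q))

# Toy usage: a 2-layer Bayesian Q-network over 8 state features, 4 actions.
net = [BayesLinear(8, 16), BayesLinear(16, 4)]
state = rng.standard_normal(8)
action = thompson_action(net, state)

# "Spiking" the replay buffer: pre-load a few transitions from successful
# episodes (dummy (s, a, r, s') tuples here) before Q-learning begins.
replay = deque(maxlen=10_000)
demo_transitions = [(rng.standard_normal(8), 0, 1.0, rng.standard_normal(8))]
replay.extend(demo_transitions)
```

Because each action is chosen greedily with respect to a *sampled* Q-function rather than the posterior mean, states where the network is uncertain naturally get explored more, without an explicit ϵ schedule.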

Related research

- 11/30/2017: Uncertainty Estimates for Efficient Neural Network-based Dialogue Policy Optimisation
  In statistical dialogue management, the dialogue manager learns a policy...

- 10/24/2022: MEET: A Monte Carlo Exploration-Exploitation Trade-off for Buffer Sampling
  Data selection is essential for any data-based optimization technique, s...

- 07/14/2021: Mixing Human Demonstrations with Self-Exploration in Experience Replay for Deep Reinforcement Learning
  We investigate the effect of using human demonstration data in the repla...

- 05/05/2023: Rescue Conversations from Dead-ends: Efficient Exploration for Task-oriented Dialogue Policy Optimization
  Training a dialogue policy using deep reinforcement learning requires a ...

- 02/11/2018: Sample Efficient Deep Reinforcement Learning for Dialogue Systems with Large Action Spaces
  In spoken dialogue systems, we aim to deploy artificial intelligence to ...

- 04/21/2023: On the Importance of Exploration for Real Life Learned Algorithms
  The quality of data driven learning algorithms scales significantly with...

- 06/02/2019: Budgeted Policy Learning for Task-Oriented Dialogue Systems
  This paper presents a new approach that extends Deep Dyna-Q (DDQ) by inc...
