Reward-Balancing for Statistical Spoken Dialogue Systems using Multi-objective Reinforcement Learning

by Stefan Ultes et al.
University of Cambridge

Reinforcement learning is widely used for dialogue policy optimization where the reward function often consists of more than one component, e.g., the dialogue success and the dialogue length. In this work, we propose a structured method for finding a good balance between these components by searching for the optimal reward component weighting. To render this search feasible, we use multi-objective reinforcement learning to significantly reduce the number of training dialogues required. We apply our proposed method to find optimized component weights for six domains and compare them to a default baseline.





1 Introduction

In a Spoken Dialogue System (SDS), one of the main problems is to find appropriate system behaviour for any given situation. This problem is often modelled using reinforcement learning (RL), where the task is to find an optimal policy π which maps the current belief state b (an estimate of the user goal) to the next system action a. To do this, RL algorithms seek to optimize an objective function, the reward, using sample dialogues. In contrast to other RL tasks (like AlphaGo, Silver et al. (2016)), the reward used in goal-oriented dialogue systems usually consists of more than one objective, e.g., task success and dialogue length (Levin et al. (1998); Lemon et al. (2006); Young et al. (2013)).

However, balancing these rewards is rarely considered, and the goal of this paper is to propose a structured method for finding the optimal weights of a multi-objective reward function. Finding a good balance between multiple objectives is usually domain-specific and not straightforward. For example, in the case of task success and dialogue length, if the reward for success is too high, the learning algorithm is insensitive to potentially irritating actions such as repeat, provided that the dialogue is ultimately successful. Conversely, if the reward for success is too small, the resulting policy may irritate users by offering inappropriate solutions before fully eliciting the user's requirements.

In this paper, we propose to find a suitable reward balance by searching through the space of reward component weights. Doing this with conventional RL techniques is infeasible, as a policy must be trained for each candidate balance and this requires an enormous number of training dialogues. To alleviate this, we propose to use multi-objective RL (MORL), which is specifically designed for this task (among others, Roijers et al. (2013)). Then, only one policy needs to be trained, which may be evaluated with several candidate balances. To the best of our knowledge, this is the first time MORL has been applied to dialogue policy optimization.

In contrast to previous work which explicitly selects component weights to maximize user satisfaction (Walker (2000)), the proposed method enables optimisation of an implicit goal by allowing the interplay of the reward components to be explored at low computational cost.

Several different algorithms have previously been used for MORL (Castelletti et al. (2013); Van Moffaert et al. (2015); Pirotta et al. (2015); Mossalam et al. (2016)). In this work, we propose a novel MORL algorithm based on Gaussian processes. This is described in Section 2 along with a brief introduction to MORL. In Section 3, the proposed method for finding a good reward balance with MORL is presented. Section 4 describes the application and evaluation of the balancing method on six different domains. Finally, conclusions are drawn in Section 5.

2 Multi-objective Reinforcement Learning with Gaussian Processes

In this section, we present our proposed extension of the GPSARSA algorithm to MORL, after giving a brief introduction to single- and multi-objective RL and to the GPSARSA algorithm itself.

Reinforcement Learning

Reinforcement learning (RL) is used in a sequential decision-making process where a decision model (the policy π) is trained based on sample data and a potentially delayed objective signal (the reward) (Sutton and Barto (1998)). Implementing the Markov assumption, the policy selects the next action a_t based on the current system belief state b_t to optimise the accumulated future reward R_t at time t:

    R_t = Σ_{k=0}^{K} γ^k · r_{t+k}    (1)

Here, K denotes the number of future steps, γ a discount factor, and r_{t+k} the reward at time t+k.
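As a concrete illustration, the accumulated return can be computed backwards over a finite sequence of per-turn rewards; a minimal Python sketch:

```python
def discounted_return(rewards, gamma=0.99):
    """Accumulated discounted future reward (Eq. 1), computed
    backwards over a finite list of per-step rewards."""
    ret = 0.0
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret
```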

The Q-function models the expected accumulated future reward when taking action a in belief state b and then following policy π:

    Q^π(b, a) = E_π[ R_t | b_t = b, a_t = a ]    (2)
For most real-world problems, finding the exact optimal Q-values is not feasible. Instead, Engel et al. (2005) proposed the GPSARSA algorithm, which uses Gaussian processes (GPs) to approximate the Q-function. Gašić and Young (2014) have shown that this works well when applied to the problem of spoken dialogue policy optimisation. GPSARSA is a Bayesian on-line learning algorithm which models the Q-function as a zero-mean GP, fully defined by its mean and a kernel function k:

    Q(b, a) ~ GP( 0, k((b, a), (b', a')) )    (3)

where the kernel k models the correlation between data points. Based on sample data, the GP is trained to approximate Q such that the variance derived from the kernel represents the uncertainty of the approximation.

In dialogue management, the following kernel has been successfully used:

    k((b, a), (b', a')) = ⟨b, b'⟩ · δ_a(a')    (4)

It consists of a linear kernel ⟨b, b'⟩ for the continuous belief representation b and the δ-kernel δ_a(a') for the discrete system action a.
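This belief–action kernel can be sketched directly; the helper below is illustrative only, with the belief represented as a plain vector and the action as a discrete label:

```python
import numpy as np

def dialogue_kernel(b1, a1, b2, a2):
    """Kernel over (belief, action) pairs: a linear kernel on the
    continuous belief vectors multiplied by a delta kernel on the
    discrete actions."""
    linear = float(np.dot(b1, b2))    # linear kernel <b, b'>
    delta = 1.0 if a1 == a2 else 0.0  # delta kernel on actions
    return linear * delta
```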

Multi-objective Reinforcement Learning

In multi-objective reinforcement learning (MORL), the objective function does not consist of only one but of many dimensions. Thus, the reward r becomes a vector r = (r_1, r_2, …, r_n)^T, where n is the number of objectives.

To define the contribution of each objective, a scalarization function f is introduced which uses a weight vector w for the different objectives to map the vector representation to a scalar value. The solution to a MORL problem is a set of optimal policies containing an optimal policy for any given weight configuration.

In MORL, the Q-function may either be modelled as a vector of Q-functions or directly as the expectation of the scalarized return:

    Q(b, a, w) = E[ f(R_t, w) | b_t = b, a_t = a ]    (5)
In practice, the scalarization function is often modelled as a linear function (the weighted sum):

    f(r, w) = Σ_i w_i · r_i = w^T r    (6)
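The weighted sum is straightforward to implement; a minimal sketch:

```python
import numpy as np

def scalarize(reward_vec, weights):
    """Linear scalarization: maps a reward vector to a scalar
    value via the weighted sum w^T r."""
    return float(np.dot(weights, reward_vec))
```

Because this mapping is linear, scalarizing a sum of (discounted) reward vectors equals the sum of the scalarized rewards, which the GP-based approximation in the next subsection relies on.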
Multi-objective GPSARSA

The proposed multi-objective (MO) GPSARSA is based on Equation 5. By approximating the scalarized Q-function directly using a GP, the GPSARSA algorithm may be applied for MORL. The GP (and thus the Q-function) is extended by one parameter, the weight vector w: Q(b, a, w).

Approximating the Q-function with a GP relies on the fact that the accumulated future reward (Eq. 1) may be decomposed as

    R_t = r_t + γ · R_{t+1}    (7)

Accordingly, for using a GP to directly estimate the scalarized reward in MO-GPSARSA, the equation

    f(R_t, w) = f(r_t, w) + γ · f(R_{t+1}, w)    (8)

must hold. This is true in the case of a linear scalarization function (Eq. 6).

To alter the kernel accordingly, a linear kernel for the weight vector w is added to the belief state kernel (a similar type of kernel extension has been proposed previously in a different context, e.g., Casanueva et al. (2015)), resulting in

    k((b, a, w), (b', a', w')) = ( ⟨b, b'⟩ + ⟨w, w'⟩ ) · δ_a(a')    (9)

Since a linear scalarization function is applied, the correlations with other data points are also assumed to be linear.
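A sketch of the extended kernel follows, with the caveat that the exact combination is an assumption here: the weight kernel is added to the belief kernel before multiplying by the action delta kernel.

```python
import numpy as np

def mo_dialogue_kernel(b1, a1, w1, b2, a2, w2):
    """Extended kernel over (belief, action, weight) triples: a linear
    kernel on the weight vectors added to the linear belief kernel.
    Combining the sum with the action delta kernel is an assumption."""
    linear = float(np.dot(b1, b2)) + float(np.dot(w1, w2))
    delta = 1.0 if a1 == a2 else 0.0
    return linear * delta
```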

To train a policy using multi-objective GPSARSA, a new weight configuration is sampled randomly for each training dialogue. An example of the training process applied to dialogue policy optimization with the two objectives task success and dialogue length is depicted in Algorithm 1.
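The training loop can be sketched as follows; `run_dialogue` and `gp_update` are hypothetical callbacks standing in for the dialogue simulator and the GPSARSA posterior update, respectively:

```python
import random

def train_mo_gpsarsa(num_dialogues, run_dialogue, gp_update, seed=0):
    """Sketch of the MO-GPSARSA training loop: a fresh weight
    configuration is sampled for every training dialogue and the GP
    is updated with the correspondingly scalarized rewards."""
    rng = random.Random(seed)
    for _ in range(num_dialogues):
        w_s = rng.random()       # success weight sampled in [0, 1)
        w = (w_s, 1.0 - w_s)     # (success weight, length weight)
        # transitions: (belief, action, reward-vector) tuples
        for b, a, r_vec in run_dialogue(w):
            scalar_r = w[0] * r_vec[0] + w[1] * r_vec[1]
            gp_update(b, a, w, scalar_r)
```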


3 Reward Balancing using MORL

The main contribution of this paper is to provide a structured method for finding a good balance between multiple rewards for learning dialogue policies. For the two-objective problem of having a task success reward r_s and a dialogue length reward r_l = −T, the scalarized reward is

    r = w_s · r_s · 1_success − w_l · T    (10)

where T is the number of turns and 1_success = 1 iff the dialogue is successful, zero otherwise.
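For the two objectives used here, the scalarized reward can be written out directly; the default r_success = 20 below matches the baseline used later in the experiments:

```python
def scalarized_reward(success, num_turns, w_success, w_length,
                      r_success=20.0):
    """Two-objective scalarized dialogue reward: the success reward is
    paid only for successful dialogues, minus a weighted per-turn
    length penalty."""
    reward = r_success if success else 0.0
    return w_success * reward - w_length * num_turns
```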

To find a good reward balance, we adopt the following procedure:

  1. Set initial reward values r_s and r_l along with an initial set of weight configurations.

  2. Apply MORL to train a policy for a given number of training dialogues and evaluate it with the different weight configurations.

  3. Select an appropriate balance based on the success-weight and length-weight curves to optimise the individual implicit goal.

The method may be refined by applying it recursively with different grid sizes. After selecting a suitable weight configuration, a single-objective policy may be trained.
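Steps 2 and 3 above can be sketched as a simple grid evaluation; `evaluate` is a hypothetical helper returning the task success rate of the trained MO policy for a given success weight, and the plateau tolerance is an assumption:

```python
def select_success_weight(evaluate, grid, tol=0.02):
    """Evaluate one trained MO policy for every candidate success
    weight and pick a configuration on the success plateau but one
    grid step away from its edge (when possible)."""
    rates = [evaluate(w) for w in grid]
    best = max(rates)
    for i, tsr in enumerate(rates):
        if tsr >= best - tol:  # first weight reaching the plateau
            return grid[min(i + 1, len(grid) - 1)]
```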

4 Experiments and Results

Figure 1: The MORL success-weight and length-weight curves (labelled m; task success rate (TSR) on the left and number of turns T on the right vertical axes; success weights on the horizontal axes) after 3,000 training dialogues. Each data point is the average over five policies with different seeds, where each policy/weight configuration is evaluated with 300 dialogues. As a comparison, the same curves using single-objective RL (labelled s; separate policies trained for each balance) were created after selecting the weights.
Figure 2: The task success rates (TSR, left axes) and dialogue length in number of turns (T, right axes) for all six domains, comparing the baseline (success reward 20) with the optimised balance. The horizontal axes show the number of training dialogues. Each data point is the average over five policies with different seeds, where each policy is evaluated with 300 dialogues.

The reward balancing method described in the previous section is applied to six domains: finding TVs, laptops, restaurants or hotels (the latter two in Cambridge and San Francisco). The following table depicts the domain statistics with the number of search constraints, the number of informational items the user can request, and the number of database entities:

Domain          # constr.  # requests  # entities
CamRestaurants  3          9           110
CamHotels       5          11          33
SFRestaurants   6          11          271
SFHotels        6          10          182
TV              6          14          94
Laptops         11         21          126

For consistency with previous work (Gašić and Young (2014); Young et al. (2013); Su et al. (2016)), a success reward of r_s = 20 and a turn penalty of 1 are used as the baseline, which fixes the corresponding baseline weight configuration.

For the evaluation, simulated dialogues were created using the statistical spoken dialogue toolkit PyDial Ultes et al. (2017). It contains an agenda-based user simulator Schatzmann and Young (2009) with an error model to simulate the semantic error rate (SER) encountered in real systems due to the noisy speech channel.

A policy was trained for each domain using multi-objective GPSARSA with 3,000 dialogues and an SER of 15%. Each policy was then evaluated with 300 dialogues for each candidate weight configuration. The results in Figure 1 are the averages of five trained policies with different random seeds. All curves follow a similar pattern: at some point, the success curve reaches a plateau where the performance does not increase any further with higher success weights.

A weight configuration was then selected by hand for each domain according to the success rate (taking into account the overall performance and the proximity to the edge of the plateau; to compensate for possible inaccuracies of MO-GPSARSA, the configuration right at the edge was not chosen) as well as the average dialogue length.

The selected weights were scaled to keep the turn penalty constant at 1. Using these reward settings, each domain was trained with 4,000 dialogues in 10 batches. After each batch, the policies were evaluated with 300 dialogues. The final results shown in Table 1 (a selection of learning curves is shown in Figure 2) are compared to the baseline (i.e., the standard unoptimised reward component weight balance with a success reward of 20). Evidently, optimising the balance has a significant impact on the performance of the trained policies.

Domain          opt. r_s   TSR base.  TSR opt.   # Turns base.  # Turns opt.
CamRestaurants  14         88.8%      86.2%      6.4            6.3
CamHotels       30         75.1%      79.8%      8.1            8.2
SFRestaurants   47         62.4%      65.7%      8.5            9.1
SFHotels        30         66.7%      69.4%      8.0            8.0
TV              30         75.7%      80.5%      7.4            7.4
Laptops         47         44.6%      54.6%      7.5            8.7
Table 1: Task success rates (TSRs) and number of turns after 4,000 training dialogues, using a success reward of 20 (baseline) compared to the optimised success reward (opt. r_s). All TSR differences are statistically significant (t-test).

To analyse the performance of multi-objective GPSARSA, after the weights had been selected, policies were also trained and evaluated for each reward balance with single-objective (SO) GPSARSA (see Figure 1). Each SO policy was trained with 1,000 dialogues and evaluated with 300 dialogues, all averaged over five runs. The success-weight curves for SORL closely resemble the MORL curves for all domains except CamRestaurants, where the mismatch leads to an incorrect selection of weights. This may be attributed to the kernel used for multi-objective GPSARSA.

It is worth noting that for the presented full MORL analysis, 3,000 training dialogues were necessary for each domain to find a good balance. This is significantly less than the 9,000 dialogues needed for the SORL analysis, and the difference would grow further with a finer-grained search grid.

5 Conclusion

In this work, we have addressed the problem of finding a good balance between multiple rewards for learning dialogue policies. We have shown the relevance of the problem and demonstrated the usefulness of multi-objective reinforcement learning to facilitate the search for a suitable balance. Using the proposed procedure, only one policy needs to be trained which can then be evaluated for an arbitrary number of reward balances thus drastically reducing the total amount of training dialogues needed.

We have proposed and employed an extension of the GPSARSA algorithm for multiple objectives and applied it to six domains. The experiments show the successful application of our method: the optimised balance improved task success without unduly impacting dialogue length in all domains except CamRestaurants, where the weight selection criteria clearly failed. In practice, this could have easily been prevented by enforcing a minimum weight on the success component. Furthermore, the domain-dependence of the reward balance has been confirmed.

For future work, the accuracy of the proposed multi-objective GPSARSA will be further improved with the ultimate goal of using the proposed method to directly learn a multi-objective policy through interaction with real users. To achieve this, alternative weight kernels will be explored. The resulting multi-objective policy may then directly be applied (without the need of re-training a single-objective policy) and the weights may even be adjusted according to a specific situation or user preferences.

Future work will also include an automatic method to find the optimal balance as well as investigating the relationship between the optimal success reward value and the domain characteristics (similar to Papangelis et al. (2017)).


Tsung-Hsien Wen, Paweł Budzianowski and Stefan Ultes are supported by Toshiba Research Europe Ltd, Cambridge Research Laboratory. This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems.


All experiments were run in simulation. The corresponding source code is included in the PyDial toolkit which can be found on


  • Casanueva et al. (2015) Inigo Casanueva, Thomas Hain, Heidi Christensen, Ricard Marxer, and Phil Green. 2015. Knowledge transfer between speakers for personalised dialogue management. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page 12.
  • Castelletti et al. (2013) A Castelletti, F Pianesi, and M Restelli. 2013. A multiobjective reinforcement learning approach to water resources systems operation: Pareto frontier approximation in a single run. Water Resources Research 49(6):3476–3486.
  • Engel et al. (2005) Yaakov Engel, Shie Mannor, and Ron Meir. 2005. Reinforcement learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning. ACM, pages 201–208.
  • Gašić and Young (2014) Milica Gašić and Steve J. Young. 2014. Gaussian processes for POMDP-based dialogue manager optimization. IEEE/ACM Transactions on Audio, Speech, and Language Processing 22(1):28–40.
  • Lemon et al. (2006) Oliver Lemon, Kallirroi Georgila, James Henderson, and Matthew Stuttle. 2006. An isu dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the talk in-car system. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters & Demonstrations. Association for Computational Linguistics, pages 119–122.
  • Levin et al. (1998) Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1998. Using Markov decision processes for learning dialogue strategies. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, volume 1, pages 201–204.
  • Mossalam et al. (2016) Hossam Mossalam, Yannis M. Assael, Diederik M Roijers, and Shimon Whiteson. 2016. Multi-objective deep reinforcement learning. CoRR abs/1610.02707.
  • Papangelis et al. (2017) Alexandros Papangelis, Stefan Ultes, and Yannis Stylianou. 2017. Domain complexity and policy learning in task-oriented dialogue systems. In Proceedings of the 8th International Workshop On Spoken Dialogue Systems (IWSDS).
  • Pirotta et al. (2015) Matteo Pirotta, Simone Parisi, and Marcello Restelli. 2015. Multi-objective reinforcement learning with continuous Pareto frontier approximation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. pages 2928–2934.
  • Roijers et al. (2013) Diederik M Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. 2013. A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research (JAIR) 48:67–113.
  • Schatzmann and Young (2009) Jost Schatzmann and Steve J. Young. 2009. The hidden agenda user simulation model. IEEE Transactions on Audio, Speech, and Language Processing 17(4):733–747.
  • Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.
  • Su et al. (2016) Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 2431–2441.
  • Sutton and Barto (1998) Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, USA, 1st edition.
  • Ultes et al. (2017) Stefan Ultes, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Dongho Kim, Iñigo Casanueva, Paweł Budzianowski, Nikola Mrkšić, Tsung-Hsien Wen, Milica Gašić, and Steve J. Young. 2017. Pydial: A multi-domain statistical dialogue system toolkit. In ACL Demo. Association of Computational Linguistics.
  • Van Moffaert et al. (2015) Kristof Van Moffaert, Tim Brys, and Ann Nowé. 2015. Risk-sensitivity through multi-objective reinforcement learning. In 2015 IEEE Congress on Evolutionary Computation (CEC). IEEE, pages 1746–1753.
  • Walker (2000) Marilyn Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research 12:387–416.
  • Young et al. (2013) Steve J. Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179.