Strategic Dialogue Management via Deep Reinforcement Learning

by Heriberto Cuayáhuitl, et al.
Heriot-Watt University

Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only a 27% win rate versus the same bots. These results support the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.





1 Introduction

Artificially intelligent agents can require strategic conversational skills to negotiate during their interactions with other natural or artificial agents, e.g. “A: I will give/tell you X if you give/tell me Y?, B: Okay”. While typical conversations of artificial agents assume cooperative behaviour from partner conversants, strategic conversation does not assume full cooperation during the interaction between agents [2]. Throughout this paper, we will use a strategic card-trading board game to illustrate our approach. Board games with trading aspects aim not only at entertaining people, but also at training them with trading skills. Popular board games of this kind include Last Will, Settlers of Catan, and Power Grid, among others [20]. While these games can be played between humans, they can also be played between computers and humans. The trading behaviours of AI agents in computer games are usually based on carefully tuned rules [33], search algorithms such as Monte-Carlo tree search [31, 9], and reinforcement learning with tabular representations [12, 11] or linear function approximation [26, 25]. However, the application of reinforcement learning is not trivial due to the complexity of the problem, e.g. large state-action spaces exhibited in strategic conversations. On the one hand, unique situations in the interaction can be described by a large number of variables (e.g. game board and resources available) so that enumerating them would result in very large state spaces. On the other hand, the action space can also be large due to the wide range of unique negotiations (e.g. givable and receivable resources). While one can aim for optimising the interaction via compression of the search space, it is usually not clear what features to incorporate in the state representation. 
This is a strong motivation for applying deep reinforcement learning for dialogue management, as first proposed by (anon citation), so that the agent can simultaneously learn its feature representation and policy. In this paper, we present an application of deep reinforcement learning to learning trading dialogue for the game of Settlers of Catan.

Our scenario for strategic conversation is the game of Settlers of Catan, where players take the role of settlers on the fictitious island of Catan—see Figure 1(left). The board game consists of 19 randomly connected hexes: 3 hills, 3 mountains, 4 forests, 4 pastures, 4 fields and 1 desert. On this island, hills produce clay, mountains produce ore, pastures produce sheep, fields produce wheat, forests produce wood, and the desert produces nothing. In our setting, four players attempt to settle on the island by building settlements and cities connected by roads. To build, players need specific resource cards, for example: a road requires clay and wood; a settlement requires clay, sheep, wheat and wood; a city requires three ore cards and two wheat cards; and a development card requires ore, sheep and wheat. Each player gets points, for example, by building a settlement (1 point) or a city (2 points), or by obtaining victory point cards (1 point each). A game consists of a sequence of turns, and each game turn starts with the roll of a die that can make the players obtain resources (depending on the number rolled and resources on the board). The player in turn can trade resources with the bank or through dialogue with other players, and can make use of available resources to build roads, settlements or cities. This game is highly strategic because players often face decisions about when to trade, what resources to request, and what resources to give away—which are influenced by what they need to build. A player can extend build-ups on locations connected to existing pieces, i.e. road, settlement or city, and all settlements and cities must be separated by at least 2 roads. The first player to reach 10 victory points wins the game, and all others lose.

In this paper, we extend previous work on strategic conversation that has applied supervised or reinforcement learning in that we simultaneously learn the feature representation and dialogue policy by using Deep Reinforcement Learning (DRL). We compare our learnt policies against random, rule-based and supervised baselines, and show that the DRL-based agents perform significantly better than the baselines.

2 Background

A Reinforcement Learning (RL) agent learns its behaviour from interaction with an environment and the physical or virtual agents within it, where situations are mapped to actions by maximising a long-term reward signal [29, 30]. An RL agent is typically characterised by: (i) a finite or infinite set of states S; (ii) a finite or infinite set of actions A; (iii) a stochastic state transition function T(s, a, s') that specifies the next state s' given the current state s and action a; (iv) a reward function R(s, a, s') that specifies the reward given to the agent for choosing action a when the environment makes a transition from state s to state s'; and (v) a policy π : S → A that defines a mapping from states to actions. The goal of an RL agent is to select actions by maximising its cumulative discounted reward, defined as Q*(s, a) = max_π E[r_t + γ r_{t+1} + γ² r_{t+2} + … | s_t = s, a_t = a, π], where the function Q* represents the maximum sum of rewards r_t discounted by factor γ at each time step. While the RL agent takes random exploratory actions with probability ε during training, it takes the best actions a = arg max_{a∈A} Q*(s, a) at test time.
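The exploration/exploitation rule just described (a random action with probability ε during training, the greedy action otherwise) can be sketched as follows; this is an illustrative Python sketch, not the ConvNetJS implementation used in the paper:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action index (exploration);
    otherwise pick the action with the highest Q-value (exploitation).
    At test time epsilon is set to 0, so the greedy action is always chosen."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```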

To induce the Q function above we use Deep Reinforcement Learning as in [22], which approximates Q* using a multilayer convolutional neural network. The Q function of a DRL agent is parameterised as Q(s, a; θ_i), where θ_i are the parameters (weights) of the neural net at iteration i. More specifically, training a DRL agent requires a dataset of experiences D = {e_1, …, e_N} (also referred to as ‘experience replay memory’), where every experience is described as a tuple e_t = (s_t, a_t, r_t, s_{t+1}). Inducing the Q function consists in applying Q-learning updates over minibatches of experience MB drawn uniformly at random from the full dataset D.

A Q-learning update at iteration i is thus defined by the loss function L_i(θ_i) = E_{MB}[(r + γ max_{a'} Q(s', a'; θ̄_i) − Q(s, a; θ_i))²], where θ_i are the parameters of the neural net at iteration i, and θ̄_i are the target parameters of the neural net at iteration i. The latter are only updated every C steps. This process is implemented in the learning algorithm Deep Q-Learning with Experience Replay described in [22].
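The Q-learning targets and squared loss over a minibatch from the replay memory can be sketched as follows. The names `q_net` and `q_target_net` stand for the online and frozen target networks, and the minibatch tuples are assumed to carry a terminal flag; all names are illustrative, not the paper's code:

```python
import numpy as np

def dqn_targets(minibatch, q_target_net, gamma):
    """Compute targets r + gamma * max_a' Q(s', a'; theta_bar) for a
    minibatch of (s, a, r, s_next, terminal) tuples, using the frozen
    target network; terminal transitions keep only the reward r."""
    targets = []
    for s, a, r, s_next, terminal in minibatch:
        if terminal:
            targets.append(r)
        else:
            targets.append(r + gamma * np.max(q_target_net(s_next)))
    return np.array(targets)

def td_loss(minibatch, q_net, q_target_net, gamma):
    """Mean squared error between the targets and the online estimates Q(s, a)."""
    targets = dqn_targets(minibatch, q_target_net, gamma)
    preds = np.array([q_net(s)[a] for s, a, *_ in minibatch])
    return float(np.mean((targets - preds) ** 2))
```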

3 Policy Learning for Strategic Interaction

Our approach for strategic interaction optimises two tasks jointly: learning to offer and learning to reply to offers. In addition, our approach learns from constrained search spaces rather than unconstrained ones, resulting in quicker learning and also in learning from only legal (allowed) decisions.

3.1 Learning to offer and to reply

A strategic agent has to offer a trade to its opponent agents (or players). In the case of the game of Settlers of Catan, an example trading offer is I will give anyone sheep for clay. Several things can be observed from this simple example. First, note that this offer may include multiple givable and receivable resources. Second, note that the offer is addressed to all opponents (as opposed to one opponent in particular, which could also be possible). Third, note that not all offers are allowed at a particular point in the game – they depend on the particular state of the game and resources available to the player for trading. The goal of the agent is to learn to make legal offers that will yield the largest pay-off in the long run.

A strategic agent also has to reply to trading offers made by an opponent. In the case of the game of Settlers of Catan, the responses can be narrowed down to (a) accepting the offer, (b) rejecting it, or (c) replying with a counteroffer (e.g. I want two sheep for one clay). Note that this set of responses is available at any point in the game once there is an offer made by any agent (or player). Similarly to the task above, the goal of the agent is to learn to choose a response that will yield the largest pay-off in the long run.

While one can aim for optimising only one of the tasks above, a joint optimisation of these two tasks equips an automatic trading agent with more completeness. To do that, given an environment state space S, trading negotiations A^off, and responses A^rep, the goal of a strategic learning agent consists of inducing an optimal policy π* so that action selection can be defined as π*(s) = arg max_{a ∈ A^off ∪ A^rep} Q*(s, a), where the Q* function is estimated as described in the previous section, A^off is the set of trading negotiations in turn, and A^rep is the set of responses.

3.2 Deep Learning from constrained action sets

While the behaviour of a strategic agent can be trained as described above, using deep learning with large action sets can be prohibitively expensive in terms of computation time. Our solution to this limitation consists in learning from constrained action sets rather than whole and static ones. We distinguish two action sets: an action set A^rep, which contains responses to trading negotiations and remains static, and an action set which contains those trading negotiations that are valid at any given point in the game (i.e. which the player is able to make due to the resources that they hold). We refer to the latter as Ã^off, a dynamic set of trading negotiations available according to the game state and available resources (e.g. the agent would not offer a particular resource if it does not have it). Thus, we reformulate the goal of a strategic learning agent as inducing an optimal policy π* so that action selection can be defined as π*(s) = arg max_{a ∈ Ã^off ∪ A^rep} Q*(s, a), where the Q* function is still estimated as described in Section 2, Ã^off is the constrained set of trading negotiations in turn (i.e. legal offers), and A^rep is the set of responses. Note that the size of Ã^off will vary depending on the game state.
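Action selection over the constrained set can be sketched as a masked argmax, assuming actions are indexed integers and the caller supplies the currently legal offers; this is a hypothetical helper, not the paper's implementation:

```python
import numpy as np

def constrained_action(q_values, legal_offers, replies):
    """Select the argmax over the union of the currently legal offer
    actions and the static reply actions; every other action is masked
    out with -inf so it can never be chosen."""
    allowed = set(legal_offers) | set(replies)
    masked = np.full(len(q_values), -np.inf)
    for a in allowed:
        masked[a] = q_values[a]
    return int(np.argmax(masked))
```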

4 Experiments and Results

In this section we apply the approach above to conversational agents that learn to offer and to reply in the game of Settlers of Catan.

4.1 Experimental Setting

Figure 1: Integrated system of the Deep Reinforcement Learning (DRL) agent for strategic interaction. (left) GUI of the board game “Settlers of Catan” [33]. (right) Multilayer neural network of the DRL agent; see text for details.

4.1.1 Integrated learning environment

Figure 1 shows our integrated learning environment. On the left-hand side, the JSettlers benchmark framework [33] receives an action (trading offer or response) and outputs the next game state and numerical reward. On the right-hand side, a Deep Reinforcement Learning (DRL) agent receives the state and reward, updates its policy during learning, and outputs an action following its learnt policy. Our integrated system is based on a multi-threaded implementation, where each player makes use of a synchronised thread. In addition, this system runs under a client-server architecture, where the learning agent acts as the ‘server’ and the game acts as the ‘client’. They communicate by exchanging messages, where the server tells the client the action to execute, and the client tells the server the game state and reward observed. Our DRL agents are based on the ConvNetJS tool [15], which implements the algorithm ‘Deep Q-Learning with experience replay’ proposed by [22]. We extended this tool to support multi-threaded and client-server processing with constrained search spaces. The code of this substantial extension with an illustrative dialogue system is available at the following link:
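One turn of the client-server exchange described above can be sketched as follows, assuming a JSON payload; the message format here is illustrative, not JSettlers' actual protocol:

```python
import json

def handle_client_message(message, policy):
    """Decode a client (game) message carrying the observed state and
    reward, query the learning agent's policy for the next action, and
    encode the action to send back to the game client."""
    payload = json.loads(message)
    action = policy(payload["state"], payload["reward"])
    return json.dumps({"action": action})
```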

Num. Feature Domain Description
1 hasClay {0…10} Number of clay units available
1 hasOre {0…10} Number of ore units available
1 hasSheep {0…10} Number of sheep units available
1 hasWheat {0…10} Number of wheat units available
1 hasWood {0…10} Number of wood units available
19 hexes {0…5} Type of resource: 0=desert,1=clay,2=ore, 3=sheep,4=wheat,5=wood
54 nodes {0…4} Where builds are located: 0=no settlement or city, 1 and 2=opponent builds, 3 and 4=agent builds
80 edges {0…2} Where roads are located: 0=no road in given edge, 1=opponent road, 2=agent road
1 robber {0…5} On type of resource: 0=desert,1=clay,2=ore, 3=sheep,4=wheat,5=wood
1 turns {0…100} Number of turns of the game so far
Table 1: Feature set (size=160) of the DRL agent for trading in the game of Settlers of Catan

4.1.2 Characterisation of the learning agent

The state space of our learning agent includes 160 non-binary features that describe the game board and the available resources. Table 1 describes the state variables that represent the input nodes, which we normalise to the range [0..1]. These features represent a high-dimensional state space—only approachable via reinforcement learning with function approximation.

The action space of our learning agents includes 70 actions for offering trading negotiations and 3 actions (accept, reject and counteroffer) for replying to offers from opponents. The trading negotiation actions, where C=clay, O=ore, S=sheep, W=wheat, D=wood and ‘4’ reads as ‘for’, are: C4D, C4O, C4S, C4W, CC4D, CC4O, CC4S, CC4W, CD4O, CD4S, CD4W, CO4D, CO4S, CO4W, CS4D, CS4O, CS4W, CW4D, CW4O, CW4S, D4C, D4O, D4S, D4W, DD4C, DD4O, DD4S, DD4W, O4C, O4D, O4S, O4W, OD4C, OD4S, OD4W, OO4C, OO4D, OO4S, OO4W, OS4C, OS4D, OS4W, OW4C, OW4D, OW4S, S4C, S4D, S4O, S4W, SD4C, SD4O, SD4W, SS4C, SS4D, SS4O, SS4W, SW4C, SW4D, SW4O, W4C, W4D, W4O, W4S, WD4C, WD4O, WD4S, WW4C, WW4D, WW4O, WW4S. Example trade: C4D=clay for wood. Notice that our offer actions only make use of up to two givable resources, and only one receivable resource is considered.
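A hypothetical decoder for these offer codes (assuming C=clay, O=ore, S=sheep, W=wheat, D=wood, with ‘4’ separating givables from the receivable) can be sketched as:

```python
RESOURCES = {'C': 'clay', 'O': 'ore', 'S': 'sheep', 'W': 'wheat', 'D': 'wood'}

def decode_offer(code):
    """Decode an offer code such as 'SD4C' into (givables, receivable):
    the letters before '4' are given away, the single letter after it
    is requested in return."""
    give, receive = code.split('4')
    return [RESOURCES[r] for r in give], RESOURCES[receive]
```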

The state transition function of our agents is based on the game itself using the JSettlers framework [33]. In addition, our strategic interactions were carried out at the semantic level rather than at the word level, for example: S4C is a higher-level representation of “I will give you sheep for clay”. Furthermore, our trained agents were active only during the selection of trading offers and replies to offers; the functionality of the rest of the game was based on the JSettlers framework.

The reward function of our agent is based on the game points provided by the JSettlers framework, but we make a distinction between reply actions and offer actions. This is due to the fact that we consider reply actions as high-level actions, and offer actions as lower-level ones. Our reward function is a weighted combination of Δpoints and the accumulated points, where Δpoints is the points at time t minus the points at time t−1, and the accumulated points refer to the total number of points of the trained agent during the game so far. We used one setting of the weights for reply actions and a different setting for offer actions.
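The shape of this reward can be sketched as follows; the actual weight values used in the paper are not reproduced here, so `w_delta` and `w_acc` are placeholder names:

```python
def reward(points_now, points_prev, accumulated, w_delta, w_acc):
    """Weighted combination of the per-step gain in victory points and the
    points accumulated so far; the paper uses different weight settings for
    reply actions and for offer actions (the weights here are placeholders)."""
    delta_points = points_now - points_prev
    return w_delta * delta_points + w_acc * accumulated
```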

The model architecture consists of a fully-connected multilayer neural network with 160 nodes in the input layer (see Table 1), 50 nodes in the first hidden layer, 50 nodes in the second hidden layer, and 73 nodes (action set) in the output layer. The hidden layers use ReLU (Rectified Linear Unit) activation functions to normalise their weights, see [23] for details. Finally, the learning parameters are as follows: experience replay size=30K, discount factor=0.7, minimum epsilon=0.05, learning rate=0.001, and batch size=64. A comprehensive analysis comparing multiple state representations, action sets, reward functions and learning parameters is left for future work.
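The forward pass of such a 160-50-50-73 network can be sketched in a few lines; this is an illustrative NumPy sketch with random initial weights, not the ConvNetJS implementation used in the experiments:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit activation used in the hidden layers.
    return np.maximum(0.0, x)

def init_net(sizes=(160, 50, 50, 73), seed=0):
    """Random weights and zero biases for the fully-connected network."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.01, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def q_forward(params, state):
    """Forward pass: ReLU hidden layers, linear output of 73 Q-values."""
    h = state
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b
```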

4.2 Experimental Results

We use the following baselines to compare our trained strategic agents, where we only switch the trading offers and reply behaviours—the remaining behaviour of the game remains constant and is provided by the JSettlers framework:


  • Ran: This agent chooses trading negotiation offers randomly, and replies to offers from opponents also in a random fashion. Although this is a weak baseline, we use it to analyse the impact of policies trained (and tested) against random behaviour.

  • Heu: This agent chooses trading negotiation offers and replies to offers from opponents as dictated by the heuristic bots included in the JSettlers framework. The heuristic baseline trader used the parameters TRY_N_BEST_BUILD_PLANS:0 and FAVOUR_DEV_CARDS:-5; see [34, 13] for details.

  • Sup: This agent chooses trading negotiation offers using a random forest classifier [3, 14], and replies to offers from opponents using the heuristic behaviour above. It was trained from 32 games played between 56 different human players—labelled by multiple annotators. We compute the probability distribution of a human-like trade as P(c|e) = (1/Z) Σ_t P_t(c|e), where c refers to the class prediction (in our case, the givable resource), e refers to the observed features (the number of resources available, the number of builds, i.e. roads, settlements and cities, and the resource received), P_t is the posterior distribution of the t-th tree, and Z is a normalisation constant [4]. This classifier used 100 decision trees. Assuming that G is the set of givables at a particular point in time in the game, extracting the most human-like trading offer (givable g*) given the collected evidence e (context of the game) is defined as g* = arg max_{c ∈ G} P(c|e). The classification accuracy of this statistical classifier was 65.7%—according to a 10-fold cross-validation evaluation [5, 6].
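The forest-averaging and argmax steps above can be sketched as follows, assuming each tree exposes its posterior as a dict over givables (a hypothetical representation, not the classifier's actual API):

```python
def human_like_givable(tree_posteriors, givables):
    """Average the per-tree posteriors P_t(c|e) over all trees (uniform
    normalisation) and return the most probable legal givable resource."""
    n = len(tree_posteriors)
    avg = {}
    for posterior in tree_posteriors:
        for c, p in posterior.items():
            avg[c] = avg.get(c, 0.0) + p / n
    return max(givables, key=lambda c: avg.get(c, 0.0))
```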

Figure 2: Learning curves of Deep Reinforcement Learners (DRLs) against random, heuristic and supervised opponents. It can be observed that DRL agents can learn from different types of opponents—even from randomly behaving ones.
Comparison            Winning  Victory  Offers  Successful  Total   Pieces  Cards   Turns
between Agents        Rate(%)  Points   Made    Offers      Trades  Built   Bought  p/Game
1 Ran vs. 3 Heu         00.01     2.58  133.72      122.69  140.58    1.76    0.73   56.35
1 Ran vs. 3 Sup         00.01     2.74  143.19      131.28  150.97    2.31    0.73   57.98
1 Heu vs. 3 Ran         98.46    10.15   41.63       18.19  167.12   13.81    0.24   45.59
1 Heu vs. 3 Heu         25.24     6.46  149.74      140.17  282.33    8.48    0.29   62.25
1 Sup vs. 3 Ran         97.30    10.13   45.61       19.89  175.38   13.84    0.24   47.97
1 Sup vs. 3 Heu         27.36     6.48  144.53      134.72  269.64    8.44    0.30   62.26
1 DRL_ran vs. 3 Ran     98.31    10.16   38.52       17.08  203.19   13.98    0.22   45.34
1 DRL_ran vs. 3 Heu     49.49     8.06  144.82      137.13  353.98   11.04    0.27   62.72
1 DRL_ran vs. 3 Sup     39.64     7.51  154.00      146.18  364.64   10.36    0.29   62.62
1 DRL_heu vs. 3 Ran     98.23    10.17   38.54       16.98  194.68   13.85    0.23   44.35
1 DRL_heu vs. 3 Heu     53.36     8.22  146.84      139.12  343.29   11.37    0.27   61.46
1 DRL_heu vs. 3 Sup     41.97     7.65  157.28      149.26  355.88   10.59    0.30   62.04
1 DRL_sup vs. 3 Ran     98.52    10.15   38.26       16.80  193.31   13.81    0.23   43.88
1 DRL_sup vs. 3 Heu     50.29     8.14  150.62      142.66  348.14   11.31    0.28   62.59
1 DRL_sup vs. 3 Sup     41.58     7.64  156.37      147.90  356.27   10.70    0.30   62.73
Table 2: Evaluation results comparing Deep Reinforcement Learners (DRL) vs. 3 baseline traders (random, heuristic, supervised). Columns 2-9 show average results (of the left-most player) over 10K test games. Notation: DRL_ran=DRL agent trained vs. random behaviour, DRL_heu=DRL agent trained vs. heuristic opponents, and DRL_sup=DRL agent trained vs. supervised opponents.

We trained three DRL agents against random, heuristic and supervised opponents (see Figure 2), using 500K training experiences (around 2000 games per learning curve). We evaluate the learnt policies according to a cross-evaluation using the following metrics, in terms of averages per game (over 10 thousand test games per comparison): win-rate, victory points, (successful) offers, total trades, pieces built, cards bought, and number of turns. Our observations of the cross-evaluation, reported in Table 2, are as follows:

  1. The DRL agents acquire very competitive strategic behaviour in comparison to the other types of agents—they simply win substantially more than their opponents. While random behaviour is easy to beat with over 98% win-rate, the DRL agents achieve over 50% win-rate against heuristic opponents and over 40% against supervised opponents. These results substantially outperform the heuristic and supervised agents, which achieve less than 30% win-rate (significant according to a two-tailed Wilcoxon signed-rank test).

  2. The DRL agents outperform the baselines not just in win-rates but also in other metrics such as average victory points, pieces built and total trades. The latter is most prominent: while the heuristic and supervised agents achieve between 270 and 280 trades per game, the DRL agents compared against heuristic and supervised agents achieve between 340 and 360 trades. This means that the DRL agents tend to trade more than their opponents, i.e. they accept more offered trading negotiations. These differences suggest that knowing when to accept, reject or counteroffer a trading negotiation is crucial for winning.

  3. Training a DRL agent in the environment where it will be tested is better than training and testing across environments. For example, DRL_heu versus heuristic behaviour is better (53.4% win-rate) than DRL_sup versus heuristic behaviour (50.3% win-rate). However, our results show that DRL agents trained against randomly behaving opponents are almost as good as those trained against stronger opponents. This suggests that DRL agents for strategic interaction can also be trained without highly skilled opponents, presumably by tracking their rewards over time.

  4. The DRL agents find the supervised agent harder to beat. This is because the supervised agent is the strongest baseline: it achieves the best winning rate of the baseline agents. It can be noted that the DRL agents versus supervised behaviour make more offers and trade more than the DRL agents versus heuristic behaviour. We can infer from this result that knowing when to offer and when to trade seems crucial for better winning rates.

  5. The fact that the agent with random behaviour hardly wins any games suggests that sequential decision-making in this strategic game is far from trivial.

In summary, strategic dialogue agents trained with deep reinforcement learning have the potential to acquire highly competitive behaviour, not just from training against strong opponents but even from opponents with random behaviour. This result may help to reduce the resources (heuristics or labelled data) required for training future strategic agents.

5 Related Work

Reinforcement learning applied to strategic interaction includes the following. [32] proposes reinforcement learning with multilayer neural networks for training an agent to play the game of Backgammon, and finds that agents trained with such an approach are able to match and even beat human performance. [26] proposes hierarchical reinforcement learning for automatic decision making on object-placing and trading actions in the game of Settlers of Catan. He incorporates built-in knowledge to learn the behaviours of the game more quickly, and finds that the combination of learned and built-in knowledge is able to beat human players. [11] used reinforcement learning in non-cooperative dialogue, focusing on a small 2-player trading problem with 3 resource types, but without using any real human dialogue data. This work showed that explicit manipulation moves (e.g. “I really need sheep”) can be used to win when playing against adversaries who are gullible (i.e. they believe such statements), but also against adversaries who can detect manipulation and can punish the player for being manipulative [10]. More recently, [16] designed an MDP model for selecting trade offers, trained and evaluated within the full JSettlers environment (4 players, 5 resource types). In comparison to the DRL model, it had a much more restricted state-action space, leading to significant but more modest improvements over supervised learning and hand-coded baselines.

Other related work has been carried out in the context of automated non-cooperative dialogue systems, where an agent may act to satisfy its own goals rather than those of other participants [12]. The game-theoretic underpinnings of non-cooperative behaviour have also been investigated [1]. Such automated agents are of interest when trying to persuade, argue, or debate, or in the area of believable characters in video games and educational simulations [12, 28]. Another arena in which strategic conversational behaviour has been investigated is negotiation [35], where hiding information (and even outright lying) can be advantageous.

Recent work on deep learning applied to games includes the following. [19] train a deep convolutional network for the game of Go, but it is trained in a supervised fashion rather than trained to maximise a long-term reward as in this work. A closely related work to ours is a DRL agent for text-based games [24]. Their states are based on words, their policies are induced using game-based rewards, and their actions are based on directions such as ‘go east/west/south/north’. Another closely related work to ours is DRL agents trained to play ATARI games [21]. Their states are based on pixels from down-sampled images, their policies make use of game-based rewards, and their actions are based on joystick movements. In contrast to these previous works, which are based on navigation commands, our agents use trading dialogue moves (e.g. ‘I will give you ore and sheep for clay’, or ‘I accept/decline your offer’), which are essential behaviours for strategic interaction.

This paper extends the recent work above on training strategic agents using reinforcement learning, which have either used small state-action spaces or focused on navigation commands rather than negotiation dialogue. The learning agents described in this paper use a high dimensional state representation (160 non-binary features) and a fairly large action space (73 actions) for learning strategic non-cooperative dialogue behaviour. To our knowledge, our results report the highest winning rates reported to date in the game of Settlers of Catan, see [13, 16, 9]. The comprehensive evaluation reported in the previous section is evidence to argue that deep reinforcement learning is a promising framework for training strategic interactive agents.

6 Concluding Remarks

The contribution of this paper is the first application of Deep Reinforcement Learning (DRL) to optimising the behaviour of strategic conversational agents. Our learning agents are able to: (i) discover what trading negotiations to offer; (ii) discover when to accept, reject, or counteroffer; (iii) discover strategic behaviours based on constrained action sets—i.e. action selection from legal actions rather than from all of them; and (iv) learn highly competitive behaviour against different types of opponents. All of this is supported by a comprehensive evaluation of three DRL agents trained against three baselines (random, heuristic and supervised), which are analysed from a cross-evaluation perspective. Our experimental results report that all DRL agents substantially outperform all the baseline agents. Our results are evidence to argue that DRL is a promising framework for training the behaviour of complex strategic interactive agents.

Future work can for example carry out similar evaluations as above in other strategic environments, and can also extend the abilities of the agents with other strategic features [18] and forms of learning [7, 27]. In addition, a comparison of different model architectures, training parameters and reward functions can be explored in future work. Last but not least, given that our learning agents trade at the semantic level, they can be extended with language understanding/generation abilities to communicate verbally [17, 8].


Funding from the European Research Council (ERC) project “STAC: Strategic Conversation”, no. 269427, is gratefully acknowledged. Funding from the EPSRC project EP/M01553X/1 “BABBLE” is also gratefully acknowledged.


  • [1] N. Asher and A. Lascarides. Commitments, beliefs and intentions in dialogue. In Proc. of SemDial, 2008.
  • [2] N. Asher and A. Lascarides. Strategic conversation. Semantics and Pragmatics, 6(2):1–62, 2013.
  • [3] L. Breiman. Random forests. Machine Learning, 45(1), 2001.
  • [4] A. Criminisi, J. Shotton, and E. Konukoglu. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends in Computer Graphics and Vision, 7(2-3), 2012.
  • [5] H. Cuayáhuitl, S. Keizer, and O. Lemon. Learning to trade in strategic board games. In IJCAI Workshop on Computer Games (IJCAI-CGW), 2015.
  • [6] H. Cuayáhuitl, S. Keizer, and O. Lemon. Learning trading negotiations using manually and automatically labelled data. In International Conference on Tools with Artificial Intelligence (ICTAI), 2015.
  • [7] H. Cuayáhuitl, M. van Otterlo, N. Dethlefs, and L. Frommberger. Machine learning for interactive systems and robots: A brief introduction. In Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication, MLIS ’13, New York, NY, USA, 2013. ACM.
  • [8] N. Dethlefs and H. Cuayáhuitl. Hierarchical reinforcement learning for situated natural language generation. Natural Language Engineering, 21(5), 2015.
  • [9] M. S. Dobre and A. Lascarides. Online learning and mining human play in complex games. In IEEE Conference on Computational Intelligence and Games, CIG, 2015.
  • [10] I. Efstathiou and O. Lemon. Learning to manage risk in non-cooperative dialogues. In SemDial, 2014.
  • [11] I. Efstathiou and O. Lemon. Learning non-cooperative dialogue behaviours. In SIGDIAL, 2014.
  • [12] K. Georgila and D. Traum. Reinforcement learning of argumentation dialogue policies in negotiation. In Proc. of INTERSPEECH, 2011.
  • [13] M. Guhe and A. Lascarides. Game strategies for The Settlers of Catan. In 2014 IEEE Conference on Computational Intelligence and Games, CIG, 2014.
  • [14] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning: data mining, inference and prediction. Springer, 2 edition, 2009.
  • [15] A. Karpathy. ConvNetJS: Javascript library for deep learning, 2015.
  • [16] S. Keizer, H. Cuayáhuitl, and O. Lemon. Learning Trade Negotiation Policies in Strategic Conversation. In Workshop on the Semantics and Pragmatics of Dialogue: goDIAL, 2015.
  • [17] O. Lemon. Adaptive Natural Language Generation in Dialogue using Reinforcement Learning. In Proc. SEMDIAL, 2008.
  • [18] R. Lin and S. Kraus. Can automated agents proficiently negotiate with humans? Commun. ACM, 53(1), Jan. 2010.
  • [19] C. J. Maddison, A. Huang, I. Sutskever, and D. Silver. Move evaluation in go using deep convolutional neural networks. CoRR, abs/1412.6564, 2014.
  • [20] M. McFarlin. 10 great board games for traders. Futures Magazine, 2013.
  • [21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013.
  • [22] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015.
  • [23] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 807–814, 2010.
  • [24] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, September 2015.
  • [25] A. Papangelis and K. Georgila. Reinforcement Learning of Multi-Issue Negotiation Dialogue Policies. In Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGdial), 2015.
  • [26] M. Pfeiffer. Reinforcement learning of strategies for settlers of catan. In International Conference on Computer Games: Artificial Intelligence, Design and Education, 2004.
  • [27] O. Pietquin and M. Lopez. Machine learning for interactive systems: Challenges and future trends. In Proceedings of the Workshop Affect, Compagnon Artificiel (WACAI), 2014.
  • [28] J. Shim and R. Arkin. A Taxonomy of Robot Deception and its Benefits in HRI. In Proc. IEEE Systems, Man, and Cybernetics Conference, 2013.
  • [29] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
  • [30] C. Szepesvári. Algorithms for Reinforcement Learning. Morgan and Claypool Publishers, 2010.
  • [31] I. Szita, G. Chaslot, and P. Spronck. Monte-carlo tree search in settlers of catan. In Proceedings of the 12th International Conference on Advances in Computer Games, ACG’09, 2010.
  • [32] G. Tesauro. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3), 1995.
  • [33] R. Thomas and K. J. Hammond. Java Settlers: a research environment for studying multi-agent negotiation. In Intelligent User Interfaces (IUI), pages 240–240, 2002.
  • [34] R. S. Thomas. Real-time decision making for adversarial environments using a plan-based heuristic. PhD thesis, Northwestern University, 2003.
  • [35] D. Traum. Extended abstract: Computational models of non-cooperative dialogue. In Proc. of SIGdial Workshop on Discourse and Dialogue, 2008.