A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation

01/31/2020 ∙ by Pallavi Bagga, et al.

We present a novel negotiation model that allows an agent to learn how to negotiate during concurrent bilateral negotiations in unknown and dynamic e-markets. The agent uses an actor-critic architecture with model-free reinforcement learning to learn a strategy expressed as a deep neural network. We pre-train the strategy by supervision from synthetic market data, thereby decreasing the exploration time required for learning during negotiation. As a result, we can build automated agents for concurrent negotiations that can adapt to different e-market settings without the need to be pre-programmed. Our experimental evaluation shows that our deep reinforcement learning-based agents outperform two existing well-known negotiation strategies in one-to-many concurrent bilateral negotiations for a range of e-market settings.


1 Introduction

We are concerned with the problem of learning a strategy for a buyer agent to engage in concurrent bilateral negotiations with unknown seller agents in open and dynamic e-markets such as E-bay (https://www.ebay.com/). Previous work in concurrent bilateral negotiation has mainly focused on heuristic strategies [Nguyen and Jennings2004, Mansour and Kowalczyk2014, An et al.2006], some of which adapt to changes in the environment [Williams et al.2012]. Different bilateral negotiations are managed in such strategies through a coordinator agent [Rahwan et al.2002] or by coordinating multiple dialogues internally [Alrayes and Stathis2013], but these strategies do not support agent learning, which is our main focus. Other approaches use agent learning based on Genetic Algorithms (GA) [Oliver1996, Zou et al.2014], but they require a huge number of trials before obtaining a good strategy, which makes them infeasible for online negotiation settings. Reinforcement Learning (RL)-based negotiation approaches typically employ Q-learning [Papangelis and Georgila2015, Bakker et al.2019, Rodriguez-Fernandez et al.2019], which does not support continuous actions. This is an important limitation in our setting because we want the agent to learn how much to concede, e.g., on the price of an item for sale, which naturally leads to a continuous action space. Consequently, the design of autonomous agents capable of learning a strategy from concurrent negotiations with other agents is still an important open problem.

We propose, to the best of our knowledge, the first Deep Reinforcement Learning (DRL) approach for one-to-many concurrent bilateral negotiations in open, dynamic and unknown e-market settings. In particular, we define a novel DRL-inspired agent model called ANEGMA, which allows the buyer to develop an adaptive strategy to use effectively against its opponents (which use fixed-but-unknown strategies) during concurrent negotiations in an environment with incomplete information. We choose deep neural networks as they provide a rich class of strategy functions to capture the complex decision-making behind negotiation.

Since RL approaches need a long time to find an optimal policy from scratch, we pre-train our deep negotiation strategies using supervised learning (SL) from a set of training examples. To overcome the lack of real-world negotiation data for the initial training, we generate synthetic datasets using the simulation environment of [Alrayes et al.2016] and two well-known strategies for concurrent bilateral negotiation described in [Alrayes et al.2018] and [Williams et al.2012], respectively.

With this work, we empirically demonstrate three important benefits of our deep learning framework for automated negotiations: 1) existing negotiation strategies can be accurately approximated using neural networks; 2) evolving a pre-trained strategy using DRL with additional negotiation experience yields strategies that even outperform the teachers, i.e., the strategies used for supervision; 3) buyer strategies trained assuming a particular seller strategy quickly adapt via DRL to different (and unknown) sellers’ behaviours.

In summary, our contribution is threefold: we propose a novel agent model for one-to-many concurrent bilateral negotiations based on DRL and SL; we extend the existing simulation environment [Alrayes et al.2016] to generate data and perform experiments that support agent learning for negotiation; and we run extensive experiments showing that our approach outperforms existing strategies and produces adaptable agents that can transfer to a range of e-market settings.

2 Related Work

The existing body of work on automated negotiation differs from ours in one or more of the following ways: the application domain, the focus (or goal) of the research, and which machine learning approach is used, and how, to improve the autonomous decision-making performance of an agent.

The work in [Lau et al.2006] uses GAs to perform a heuristic search over a set of potential solutions in order to find mutually acceptable offers. In [Choudhary and Bharadwaj2018], the authors propose a GA-based learning technique for multi-agent negotiation, but with regard to making recommendations to a group of persons based on their preferences. Since we are dealing with an environment with limited information, another relevant line of work is based on RL. In [Bakker et al.2019], the authors study a modular RL-based BOA (Bidding strategy, Opponent model and Acceptance condition) framework, which extends the work in [Baarslag et al.2016]. This framework implements an agent that uses tabular Q-learning to learn the bidding strategy by discretizing the continuous state/action space, which is not an optimal solution for large state/action spaces, as it suffers from the curse of dimensionality and loses relevant information about the structure of the state/action domain. Q-learning is also used in [Rodriguez-Fernandez et al.2019] to provide a decision support system for the energy market. In addition, the work in [Sunder et al.2018] uses a variable reward function for the REINFORCE algorithm to model the pro-social or selfish behaviour of agents. Furthermore, the works of [Hindriks and Tykhonov2008, Zeng and Sycara1998] use Bayesian Learning to learn the opponent preferences instead of the negotiation strategy.

Previous work also considers combinations of different learning approaches to determine an optimal negotiation strategy for an agent. In [Zou et al.2014], the authors propose a fusion of evolutionary algorithms (EAs) and RL that outperforms classic EAs; there, replicator dynamics is used with a GA to adjust the probabilities of strategies. Their experiments show that the weights assigned to historical and current payoffs during learning (which change with the environment dynamics) greatly impact both the negotiation performance and the learning itself. Another relevant work is [Lewis et al.2017], which combines SL (a Recurrent Neural Network (RNN)) and RL (REINFORCE) to train on human dialogues. We also combine SL and RL, but with the main focus on the autonomy of negotiations rather than Natural Language Processing (NLP). We also differ with respect to the combination of ML approaches used, i.e., an Artificial Neural Network (ANN) for SL and the actor-critic model DDPG [Lillicrap et al.2017] for RL, as explained in the subsequent sections.

In addition, and independently of the approach, numerous works in the domain of bilateral negotiation rely on the Alternating Offers protocol [Rubinstein1982] as the negotiation mechanism, which, despite its simplicity, does not capture many realistic bargaining scenarios.

3 Proposed Work

In this section, we formulate the negotiation environment and introduce our agent negotiation model called ANEGMA (Adaptive NEGotiation model for e-MArkets).

3.1 Negotiation Environment

We consider e-marketplaces like E-bay where the competition is visible, i.e., a buyer can observe the number of competitors that are dealing with the same resource from the same seller. We assume that the environment consists of a single e-market populated by agents, with a non-empty set of buyers and a non-empty set of sellers; these sets need not be mutually exclusive. For a given buyer and resource, we consider the set of sellers in the market which, at a given time point, negotiate with that buyer for the resource (over a range of issues). The buyer uses one negotiation thread per seller, in order to negotiate with all of these sellers concurrently. We assume that no agent can be both buyer and seller for the same resource at the same time. The competitors of a buyer are those agents negotiating with the same sellers for the same resource as the buyer.

As we are interested in practical settings, we adopt the negotiation protocol of [Alrayes et al.2018], since it supports concurrent bilateral negotiations. This protocol assumes an open e-market environment, i.e., one where agents can enter or leave the negotiation at their own will. A buyer always starts the negotiation by making an offer, whose time stamp marks the start of the negotiation. Each negotiation thread is for a single resource and is indexed by the name of the seller and the resource; it can last for up to the maximum time the buyer can negotiate for, so the buyer's deadline is the start time plus this maximum duration, which for simplicity we assume to be the same for all the resources being negotiated. Information about the deadline, the Initial Price and the Reservation Price is private to each buyer. Each seller also has its own Initial Price, Reservation Price and maximum negotiation duration, which are not visible to other agents. The protocol is turn-based and, at each negotiation state (from S1 to S5, see [Alrayes et al.2018]), allows agents to take actions from a given pool.
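
To make the protocol elements concrete, below is a minimal Python sketch of the action pool and of a per-seller negotiation thread as described above; the class and field names (e.g. NegotiationThread, max_duration) are illustrative assumptions rather than identifiers from the protocol or the RECON simulator.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional, Tuple


class Action(Enum):
    """Action pool available to the agents at each protocol state."""
    OFFER = auto()            # opening offer made by the buyer
    COUNTER_OFFER = auto()
    ACCEPT = auto()
    CONFIRM = auto()
    REQ_TO_RESERVE = auto()
    EXIT = auto()


class ProtocolState(Enum):
    """Negotiation states S1..S5 of the protocol in [Alrayes et al.2018]."""
    S1 = auto()
    S2 = auto()
    S3 = auto()
    S4 = auto()
    S5 = auto()


@dataclass
class NegotiationThread:
    """One bilateral thread between the buyer and a single seller."""
    seller_id: str
    resource: str
    start_time: float
    max_duration: float                      # buyer's maximum negotiation time
    state: ProtocolState = ProtocolState.S1
    history: List[Tuple[float, str, Action, Optional[float]]] = field(default_factory=list)

    @property
    def deadline(self) -> float:
        """Deadline = negotiation start time + maximum negotiation duration."""
        return self.start_time + self.max_duration

    def record(self, time: float, agent: str, action: Action, price: Optional[float]) -> None:
        self.history.append((time, agent, action, price))
```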

3.2 ANEGMA Components

Our proposed agent negotiation model supports learning during concurrent bilateral negotiations with unknown opponents in dynamic and complex e-marketplaces. In this model, we use a centralized approach in which the coordination is done internally to the agent via multi-threading synchronization. This approach minimizes the agent communication overhead and thus improves run-time performance. The different components of the proposed model are shown in Figure 1 and explained below.

Figure 1: The Architecture of ANEGMA

3.2.1 Physical Capabilities:

These are the sensors and actuators of the agent that enable it to access an e-marketplace. More specifically, they allow a buyer to perceive the current (external) state of the environment and to represent that state locally in the form of the internal attributes shown in Table 1. Some of these attributes are perceived by the agent using its sensors, some are stored locally in its knowledge base, and some are obtained while interacting with seller agents during a negotiation. At each time point, this internal representation of the environment is the state the agent uses to decide which action to execute with its actuators. Executing the action then changes the state of the environment to a new state.

3.2.2 Learning Capabilities:

The foundation of our model is a component providing learning capabilities similar to those in the Actor-Critic architecture as in [Lillicrap et al.2017]. It consists of three sub-components: Negotiation Experience, Decide and Evaluate.

Negotiation Experience stores historical information about previous negotiation experiences, which involve the interactions of the agent with other agents in the market. Experience elements are tuples of the form (s, a, r, s′), where s is the state of the e-market environment, a is the action performed by the buyer in s, r is the scalar reward or feedback received from the environment, and s′ is the new e-market state after executing a.
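
As a concrete illustration, here is a minimal sketch of such an experience store in Python; the fixed capacity and the uniform random sampling are standard replay-buffer choices and are assumptions here, not details taken from the paper.

```python
import random
from collections import deque, namedtuple

# One experience element: (state, action, reward, next_state).
Experience = namedtuple("Experience", ["state", "action", "reward", "next_state"])


class NegotiationExperience:
    """Fixed-capacity store of past negotiation experiences."""

    def __init__(self, capacity: int = 100_000):
        self._buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state) -> None:
        self._buffer.append(Experience(state, action, reward, next_state))

    def sample(self, batch_size: int):
        """Fetch a random batch of past experiences (used by the Evaluate component)."""
        return random.sample(self._buffer, min(batch_size, len(self._buffer)))

    def __len__(self) -> int:
        return len(self._buffer)
```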

Decide refers to a negotiation strategy which chooses an optimal action from a set of possible actions at a particular state. In particular, it consists of two functions. The first takes the state as input and returns a discrete action among counter-offer, accept, confirm, reqToReserve and exit, see (1). When the buyer decides to perform a counter-offer action, the second function is used to compute, given the input state, the value of the counter-offer, see (2). From a machine learning perspective, deriving the first function corresponds to a classification problem, and deriving the second to a regression problem.

(1)  state  ↦  discrete action ∈ {counter-offer, accept, confirm, reqToReserve, exit}
(2)  state  ↦  value of the counter-offer (a price)
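
As a concrete illustration of the two Decide functions, here is a minimal PyTorch sketch in which the state is assumed to be already encoded as a numeric feature vector; the network sizes and layer choices are illustrative assumptions, while the five discrete actions are those listed above.

```python
import torch
import torch.nn as nn

DISCRETE_ACTIONS = ["counter-offer", "accept", "confirm", "reqToReserve", "exit"]


class DecideDiscrete(nn.Module):
    """(1): state -> one of the five discrete actions (classification)."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(DISCRETE_ACTIONS)),
        )

    def forward(self, state: torch.Tensor) -> str:
        # Expects a single (unbatched) state vector of shape (state_dim,).
        logits = self.net(state)
        return DISCRETE_ACTIONS[int(torch.argmax(logits, dim=-1))]


class DecideContinuous(nn.Module):
    """(2): state -> value of the counter-offer (regression)."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)
```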

Evaluate refers to a critic which helps learn and evolve the negotiation strategy for unknown and dynamic environments. More specifically, it operates on a random batch of past negotiation experiences fetched from the experience store. The learning process is retrospective, since it depends on the scalar rewards obtained from the e-market environment after performing an action at a given state; these rewards are computed using (3) and (4), which evaluate the discrete and the continuous action made by the Decide component, respectively. Our design of the reward functions accelerates agent learning by allowing the buyer to receive a reward after every action it performs in the environment, instead of only at the end of the negotiation.

(3)
(4)

In (3) and (4), the utility value of an offer (generated using (2)) at a given time is calculated from the buyer's Initial Price, its Reservation Price, the agreement offer and a temporal discount factor [Williams et al.2012], as defined in (5). The discount factor encourages the buyer to negotiate without delay. The reward function in (4) helps the buyer learn that it should not offer more than what active sellers have already offered it, based on the buyer's list of preferred offers at that time.

(5)

In our experiments, the discount factor is set to a fixed value; the higher this value, the higher the penalty due to delay.
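
Since equations (3)-(5) are not reproduced here, the snippet below is only an illustrative sketch of a time-discounted buyer utility of the general shape described above: a price utility normalised between the buyer's initial and reservation prices, reduced by a temporal discount so that later agreements are penalised. The linear normalisation, the linear discount and the default value of the discount factor are all assumptions, not the paper's definitions.

```python
def buyer_utility(price: float, t: float, t_max: float,
                  ip_b: float, rp_b: float, delta: float = 0.6) -> float:
    """Illustrative time-discounted utility for a buyer agreeing on `price` at time `t`.

    ip_b:  buyer's initial (lowest) price -> utility 1.0 if the deal closes here
    rp_b:  buyer's reservation (highest) price -> utility 0.0 if the deal closes here
    delta: temporal discount factor; the higher it is, the larger the delay penalty
    """
    base = (rp_b - price) / (rp_b - ip_b)            # linear price utility in [0, 1]
    discount = 1.0 - delta * min(t, t_max) / t_max   # decays as the deadline approaches
    return base * discount
```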

Description of each state attribute:
Number of seller agents the buyer is concurrently negotiating with for the resource at the current time.
Number of buyer agents competing with the buyer for the resource at the current time.
Current state of the negotiation protocol (S1 to S5 [Alrayes et al.2018]).
Best offer made so far by either party in the current negotiation thread.
Time left for the buyer to reach its deadline after the last action of the seller.
Minimum price which the buyer can offer at the start of the negotiation.
Maximum price which the buyer can offer to a seller.
Table 1: Agent's State Attributes
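
Because the attribute symbols of Table 1 are not reproduced here, the field names in the sketch below are hypothetical; it simply packages the seven listed attributes into a single state record and flattens them into a numeric vector for the learning components.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BuyerState:
    """Hypothetical encoding of the agent's state attributes of Table 1."""
    n_sellers: int            # sellers the buyer is concurrently negotiating with
    n_competitors: int        # buyers competing for the same resource
    protocol_state: int       # current negotiation state, 1..5 (S1..S5)
    best_offer: float         # best offer made so far in this thread
    time_left: float          # time left until the buyer's deadline
    initial_price: float      # minimum price the buyer can offer initially
    reservation_price: float  # maximum price the buyer can offer

    def to_vector(self) -> List[float]:
        return [self.n_sellers, self.n_competitors, self.protocol_state,
                self.best_offer, self.time_left,
                self.initial_price, self.reservation_price]
```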

4 Materials and Methods

In this section, we describe the data set collected for training the SL model (used for pre-training the ANEGMA agent), various performance measures (used for evaluating the negotiation process) and ML models (used for the learning process).

4.1 Data set collection

In order to collect the data set for training the ANEGMA agent with an SL model, we have used a simulation environment [Alrayes et al.2016] that supports concurrent negotiations between buyers and sellers. The buyers use the two different strategies presented in [Alrayes et al.2018] and [Williams et al.2012], whereas the sellers use the strategies described in [Faratin et al.1998]. We could also have collected negotiation examples for training using other buyer strategies for concurrent negotiation that can operate in the same environment as ours, or real-world market data; however, to the best of our knowledge, none of these had readily available implementations. We have selected the input features for the dataset manually, and this set of features corresponds to the agent's state attributes in Table 1. To avoid choosing overlapping features, we have then applied the Pearson correlation coefficient [Lee Rodgers and Nicewander1988] and verified that the selected features are only weakly correlated (all pairwise correlation coefficients are small in magnitude).
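
A minimal sketch of this feature-screening step, assuming the synthetic negotiation examples are loaded into a pandas DataFrame; the file name and column names are hypothetical placeholders for the attributes of Table 1.

```python
import pandas as pd

# Hypothetical columns matching the state attributes of Table 1.
df = pd.read_csv("synthetic_negotiations.csv")  # path is illustrative
features = ["n_sellers", "n_competitors", "protocol_state",
            "best_offer", "time_left", "initial_price", "reservation_price"]

# Pairwise Pearson correlation between candidate input features.
corr = df[features].corr(method="pearson")

# Flag any strongly correlated (overlapping) feature pairs.
threshold = 0.8  # illustrative cut-off
for i, a in enumerate(features):
    for b in features[i + 1:]:
        if abs(corr.loc[a, b]) > threshold:
            print(f"Highly correlated pair: {a}, {b} ({corr.loc[a, b]:.2f})")
```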

4.2 Performance Evaluation Measures

To evaluate the performance of ANEGMA and compare it with other negotiation approaches, it is necessary to identify appropriate performance metrics. For our experiments, we have used the following widely adopted metrics [Williams et al.2012, Faratin et al.1998, Nguyen and Jennings2004, Alrayes et al.2018]: the average utility rate, the average negotiation time and the percentage of successful negotiations, which are described in Table 2.

Our main motive behind the average utility rate is to measure the agent's profit over successful negotiations only, hence we exclude unsuccessful negotiations from this metric. We capture (un)successful negotiations in a separate metric, the percentage of successful negotiations.

Metric | Definition | Ideal Value
Average utility rate | Sum of all the utilities of the buyer, averaged over the successful negotiations. | High (1.0)
Average negotiation time | Total time taken by the buyer (in milliseconds) to reach the agreement, averaged over all successful negotiations. | Low (1000 ms)
Percentage of successful negotiations | Proportion of total negotiations in which the buyer reaches an agreement successfully with one of the concurrent sellers. | High (100%)
Table 2: Performance Evaluation Metrics
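
The three metrics of Table 2 can be computed directly from the simulation outcomes; the sketch below assumes each finished negotiation is summarised as a (success, utility, time_ms) record, which is an illustrative representation rather than the simulator's actual output format.

```python
from typing import List, NamedTuple, Tuple


class NegotiationOutcome(NamedTuple):
    success: bool
    utility: float   # buyer utility of the agreement (meaningful only if success)
    time_ms: float   # negotiation duration in milliseconds


def evaluate(outcomes: List[NegotiationOutcome]) -> Tuple[float, float, float]:
    """Return (avg utility rate, avg negotiation time, % successful negotiations)."""
    successful = [o for o in outcomes if o.success]
    avg_utility_rate = (sum(o.utility for o in successful) / len(successful)
                        if successful else 0.0)
    avg_negotiation_time = (sum(o.time_ms for o in successful) / len(successful)
                            if successful else float("nan"))
    pct_successful = 100.0 * len(successful) / len(outcomes) if outcomes else 0.0
    return avg_utility_rate, avg_negotiation_time, pct_successful
```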

4.3 Methodology

During our experiments, the buyer negotiates with fixed-but-unknown seller strategies in an e-market. The competitor buyers also use a single fixed-but-unknown strategy, which can be learnt by the buyer after some simulation runs. Hence, we consider our negotiation environment to be fully observable. For our dynamic (agents leave and enter the market at any time) and episodic (the negotiation terminates at some point) environment, we use a model-free, off-policy RL approach which generates a deterministic policy based on the policy gradient method, in order to support continuous control. More specifically, we use the Deep Deterministic Policy Gradient (DDPG) algorithm, an actor-critic RL approach which generates a deterministic action-selection policy for the buyer (see [Lillicrap et al.2017] for details, which we omit due to lack of space). We consider a model-free RL approach because our buyer is more concerned with determining what action to take in a given state than with predicting the next state of the environment, since the strategies of sellers and competitor buyers are unknown. On the other hand, we consider an off-policy approach for efficient and independent exploration of continuous action spaces. Furthermore, instead of initializing the RL policy randomly, we use a policy pre-trained as an Artificial Neural Network (ANN) [Goodfellow et al.2016], which is directly compatible with DRL, in order to speed up and reduce the cost of the RL process. To reduce over-fitting and generalization error, we also apply dropout regularization during the training of the neural network.
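
The following condensed PyTorch sketch illustrates the two-stage pipeline described above: the actor (negotiation strategy) is first pre-trained with supervised learning on synthetic (state, counter-offer) examples, and then refined with DDPG-style actor-critic updates on batches sampled from the negotiation experience. Network sizes, learning rates, the soft-update coefficient and the omission of exploration noise and termination handling are simplifications, not the paper's settings.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def mlp(in_dim, out_dim, hidden=64, dropout=0.2):
    # Dropout is used for regularisation during supervised pre-training.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(dropout),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))


state_dim, action_dim = 7, 1                     # e.g. Table 1 state, price offer
actor = mlp(state_dim, action_dim)               # deterministic policy mu(s)
critic = mlp(state_dim + action_dim, 1)          # Q(s, a)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)


# --- Stage 1: supervised pre-training from synthetic market data ---
def pretrain(dataset, epochs=10):
    """dataset yields (states, target_offers) tensor batches from the SL examples."""
    for _ in range(epochs):
        for states, target_offers in dataset:
            loss = F.mse_loss(actor(states), target_offers)
            actor_opt.zero_grad(); loss.backward(); actor_opt.step()


# --- Stage 2: DDPG-style refinement from negotiation experience ---
target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)


def ddpg_update(batch, gamma=0.99, tau=0.005):
    """batch: tensors (s, a, r, s2) sampled from the negotiation experience store."""
    s, a, r, s2 = batch
    with torch.no_grad():
        q_target = r + gamma * target_critic(torch.cat([s2, target_actor(s2)], dim=-1))
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=-1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft update of the target networks.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```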

5 Experimental Setup and Results

We use ANEGMA to build autonomous buyers that negotiate against unknown opponents in different e-market settings. Our experiments test the following hypotheses.

Hypothesis A: The Market Density, the Market Ratio (or Demand/Supply Ratio), the Zone of Agreement and the Buyer's Deadline have a considerable effect on the success of negotiations. Here,

  • the Market Density is the total number of agents in the e-market at any given time dealing with the same resource as our buyer;

  • the Demand/Supply Ratio is the ratio of the total number of buyers to the number of sellers in the e-market;

  • the Zone of Agreement refers to the intersection between the price ranges of buyers and sellers within which they can reach an agreement.

In practice, buyers have no control over these parameters except the deadline, which can be decided by the user or constrained by a higher-level goal the buyer is trying to achieve.

Hypothesis B: The ANEGMA buyer outperforms the SL, CONAN and Williams' negotiation strategies in terms of average utility rate, average negotiation time and percentage of successful negotiations in a range of e-market settings.

Hypothesis C: An ANEGMA buyer, if trained against a specific seller strategy, still performs well against other fixed-but-unknown seller strategies. This shows that the ANEGMA agent's behaviour is adaptive, in that the agent transfers knowledge from previous experience to unknown e-market settings.

5.1 Design of the Experiments

To carry out our experiments, we have extended the simulation environment RECON [Alrayes et al.2016] with a new online learning component for ANEGMA.

5.1.1 Seller Strategies

For the purpose of training our SL model and conducting large-scale quantitative evaluations, we have used two groups of fixed seller strategies developed by Faratin et al. [Faratin et al.1998]: Time-Dependent (Linear, Conceder and Boulware) and Behaviour-Dependent (Relative tit-for-tat, Random Absolute tit-for-tat and Averaged tit-for-tat). Each seller's deadline is assumed to be the same as the buyer's, but it remains private to the seller. Other parameters, such as the seller's initial and reservation prices, are determined by the Zone of Agreement parameter, as shown in Table 3.
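
For reference, the time-dependent tactics of Faratin et al. can be sketched with the standard polynomial concession function below, where β distinguishes Boulware (β < 1), Linear (β = 1) and Conceder (β > 1) behaviour; the exact parameter values used by our simulated sellers are not reproduced here, so treat this as a generic sketch.

```python
def time_dependent_offer(t: float, t_max: float, ip: float, rp: float,
                         beta: float, k: float = 0.0, is_seller: bool = True) -> float:
    """Faratin-style time-dependent tactic (polynomial concession).

    beta < 1: Boulware (concede late), beta = 1: Linear, beta > 1: Conceder.
    k is the fraction of the price range already conceded at t = 0.
    """
    alpha = k + (1.0 - k) * (min(t, t_max) / t_max) ** (1.0 / beta)
    if is_seller:
        # The seller starts at its (high) initial price and concedes towards rp.
        return ip - alpha * (ip - rp)
    # The buyer starts at its (low) initial price and concedes towards rp.
    return ip + alpha * (rp - ip)
```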

5.1.2 Simulation Parameters

We assume that the buyer negotiates with multiple sellers concurrently to buy a second-hand laptop, based only on a single issue, namely price. We stress that the single-issue assumption is realistic in several real-world e-markets. The simulated market allows the agents to enter and leave the market at their own will. The maximum number of agents allowed in the market, the demand/supply ratio, the buyer's deadline and the Zone of Agreement are simulation-dependent.

As in [Alrayes et al.2018], three qualitative values are considered for each parameter during the simulations, e.g., High (H), Average (A) and Low (L) for the market density, or Long (Lg), Average (A) and Short (Sh) for the buyer's deadline. The parameters are reported in Table 3, and the user can select one such qualitative value for each parameter. Each qualitative value corresponds to a set of three quantitative values, of which only one is chosen at random for each simulation. The only exception is the Zone of Agreement parameter, which maps to a range of uniformly distributed quantitative values for the seller's initial price and reservation price. Therefore, the total number of simulation settings is 81, as we consider three possible settings for each of the four parameters (see Table 3).

Parameter | Values
Market density | H, A, L
Demand/Supply ratio | H, A, L (buyer-to-seller ratios)
Buyer's deadline | Lg, A, Sh (ranges in seconds)
Zone of Agreement | H (%), A (%), L (%)
Table 3: Simulation Parameter Values
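
The mapping from qualitative to quantitative parameter values can be sketched as follows; all the concrete numbers below are placeholders (the real ones belong in Table 3), but the mechanism follows the description above: each qualitative label picks one of three fixed values at random per simulation, except for the Zone of Agreement, which fixes an interval from which the seller's prices are drawn uniformly.

```python
import random

# Placeholder quantitative values: the real ones are those of Table 3.
MARKET_DENSITY = {"H": [30, 40, 50], "A": [15, 20, 25], "L": [5, 8, 10]}
DEMAND_SUPPLY_RATIO = {"H": [4.0, 3.0, 2.0], "A": [1.5, 1.0, 0.8], "L": [0.5, 0.3, 0.2]}
BUYER_DEADLINE_S = {"Lg": [150, 180, 210], "A": [90, 110, 130], "Sh": [30, 45, 60]}
SELLER_PRICE_INTERVAL = {"H": (200.0, 300.0), "A": (300.0, 400.0), "L": (400.0, 500.0)}


def sample_setting(density_q: str, ratio_q: str, deadline_q: str, zoa_q: str) -> dict:
    """Draw one concrete simulation setting from four qualitative labels."""
    lo, hi = SELLER_PRICE_INTERVAL[zoa_q]
    return {
        "market_density": random.choice(MARKET_DENSITY[density_q]),
        "demand_supply_ratio": random.choice(DEMAND_SUPPLY_RATIO[ratio_q]),
        "buyer_deadline_s": random.choice(BUYER_DEADLINE_S[deadline_q]),
        # The Zone of Agreement label determines the interval from which the
        # seller's initial/reservation prices are sampled uniformly.
        "seller_reference_price": random.uniform(lo, hi),
    }
```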

5.2 Empirical Evaluation

We evaluate hypotheses A, B and C as described at the beginning of this section.

5.2.1 Hypothesis A (Market density, Demand/Supply ratio, Zone of Agreement and Buyer's deadline have a significant impact on negotiations)

We experimented with different e-market settings by considering, for each setting, both time-dependent and behaviour-dependent seller strategies over repeated simulations using the CONAN buyer strategy. As shown in Figure 2, these experiments suggest that the market density and the Zone of Agreement have a considerable effect on the proportion of successful negotiations. From our observations, when the market density is low, the agents reach more negotiation agreements. Also, there is not much difference in the agreement rate between the 60% and 100% Zones of Agreement when the market density is low. The very low number of successful negotiations for the smallest Zone of Agreement is not unexpected, since only a minority of agents is willing to concede enough in such a small zone. On the other hand, the demand/supply ratio and the buyer's deadline have, according to our experiments, a comparably minor impact on negotiation success (only some effect of the demand/supply ratio is observed under behaviour-dependent strategies and low market density, as shown in Figure 3). These results support our hypothesis.

Figure 2: Effect of Market Density and Zone of Agreement on the Proportion of Successful Negotiations, using time-dependent strategies (left) and behaviour-dependent strategies (right).
Figure 3: Effect of Market Density and Market Ratio on the Proportion of Successful Negotiations, using time-dependent strategies (left) and behaviour-dependent strategies (right).

5.2.2 Hypothesis B (ANEGMA outperforms SL and CONAN)

We performed simulations for our ANEGMA agent with a low market density, Zones of Agreement of 60% and 100%, a high demand/supply ratio and a long buyer deadline, because these settings yielded the best performance in terms of successful negotiations in our experiments for Hypothesis A. We have used these settings against the Conceder Time Dependent and Relative Tit for Tat Behaviour Dependent seller strategies. Firstly, we collected training data for our SL approach (ANN) using two distinct strategies for supervision, viz. CONAN [Alrayes et al.2018] and Williams [Williams et al.2012]. Both were run for the same number of simulations and with the same settings. Table 4 compares the performance of CONAN's and Williams' models: CONAN outperforms Williams' strategy in these settings.

Figure 4: Training accuracies of the ANN when trained using datasets collected by negotiating with the CONAN and Williams' buyer strategies (for different Zones of Agreement) against time-dependent strategies (left) and behaviour-dependent strategies (right).
Metric | CONAN (ZoA 60%) | CONAN (ZoA 100%) | Williams' (ZoA 60%) | Williams' (ZoA 100%)
Conceder Time Dependent Seller Strategy
Average utility rate | 0.27 ± 0.03 | 0.25 ± 0.07 | 0.18 ± 0.08 | 0.17 ± 0.04
Average negotiation time (ms) | 172942.78 ± 15177.77 | 174611.43 ± 15139.52 | 177091.09 ± 15304.90 | 174468.31 ± 15365.11
Successful negotiations (%) | 80.76 | 79.08 | 78.21 | 78.05
Relative Tit for Tat Behaviour Dependent Seller Strategy
Average utility rate | 0.25 ± 0.03 | 0.24 ± 0.04 | 0.22 ± 0.05 | 0.21 ± 0.06
Average negotiation time (ms) | 175198.93 ± 14193.23 | 179529.47 ± 14651.15 | 176334.65 ± 14683.03 | 176468.31 ± 15365.11
Successful negotiations (%) | 80.69 | 79.90 | 73.00 | 73.21
Table 4: Performance comparison of CONAN and Williams' models. Best results are in bold.
Metric | ANN-C | ANN-W | ANEGMA(SL+RL)-C | ANEGMA(SL+RL)-W | ANEGMA(RL)
Trained and Tested on Conceder Time Dependent Seller Strategy
Average utility rate | 0.27 ± 0.04 | 0.21 ± 0.08 | 0.29 ± 0.04 | 0.21 ± 0.04 | -0.38 ± 0.14
Average negotiation time (ms) | 173529.47 ± 14651.15 | 171096.09 ± 14584.90 | 67750.62 ± 37628.57 | 132477.71 ± 26601.48 | 768.55 ± 373.65
Successful negotiations (%) | 80.80 | 80.34 | 87.12 | 81.72 | 64.54
Trained and Tested on Relative Tit for Tat Behaviour Dependent Seller Strategy
Average utility rate | 0.26 ± 0.03 | 0.23 ± 0.05 | 0.29 ± 0.03 | 0.23 ± 0.14 | -0.19 ± 0.42
Average negotiation time (ms) | 176018.69 ± 14380.28 | 169334.65 ± 12389.03 | 36331.34 ± 70247.33 | 41225.17 ± 72938.79 | 755.74 ± 292.29
Successful negotiations (%) | 81.86 | 74.80 | 86.03 | 74.57 | 61.51
Table 5: Performance comparison of ANN vs ANEGMA(SL+RL) vs ANEGMA(RL) when the Zone of Agreement is 60%. Best results are in bold. ANN-C and ANN-W correspond to the ANN trained using the data sets collected from the CONAN and Williams' approaches respectively, whereas ANEGMA(SL+RL)-C and ANEGMA(SL+RL)-W correspond to ANEGMA(DDPG) initialized with ANN-C and ANN-W respectively.
Metric | ANN-C | ANN-W | ANEGMA(SL+RL)-C | ANEGMA(SL+RL)-W | ANEGMA(RL)
Trained and Tested on Conceder Time Dependent Seller Strategy
Average utility rate | 0.23 ± 0.04 | 0.17 ± 0.08 | 0.27 ± 0.51 | 0.21 ± 0.71 | -0.88 ± 0.16
Average negotiation time (ms) | 172234.73 ± 14516.15 | 170969.09 ± 14464.09 | 171266.64 ± 11573.38 | 185425.74 ± 19909.06 | 1021.95 ± 771.47
Successful negotiations (%) | 79.80 | 78.49 | 79.73 | 74.61 | 59.41
Trained and Tested on Relative Tit for Tat Behaviour Dependent Seller Strategy
Average utility rate | 0.26 ± 0.30 | 0.18 ± 0.55 | 0.29 ± 0.35 | 0.23 ± 0.84 | -0.24 ± 0.55
Average negotiation time (ms) | 160178.98 ± 14809.18 | 163943.05 ± 12895.03 | 33695.16 ± 64292.37 | 23528.25 ± 61440.37 | 817.67 ± 523.67
Successful negotiations (%) | 75.61 | 74.02 | 80.81 | 72.53 | 58.09
Table 6: Performance comparison of ANN vs ANEGMA(SL+RL) vs ANEGMA(RL) when the Zone of Agreement is 100%. Best results are in bold. ANN-C and ANN-W correspond to the ANN trained using the data sets collected from the CONAN and Williams' approaches respectively, whereas ANEGMA(SL+RL)-C and ANEGMA(SL+RL)-W correspond to ANEGMA(DDPG) initialized with ANN-C and ANN-W respectively.
Metric | ANN-C | ANN-W | ANEGMA(SL+RL)-C | ANEGMA(SL+RL)-W | ANEGMA(RL)
Trained on Relative Tit for Tat Behaviour Dependent and Tested on Conceder Time Dependent Seller Strategy
Average utility rate | 0.16 ± 0.05 | 0.17 ± 0.04 | 0.26 ± 0.06 | 0.23 ± 0.07 | -0.36 ± 0.12
Average negotiation time (ms) | 174139.30 ± 14655.42 | 174035.91 ± 14627.59 | 38402.78 ± 64367.45 | 108051.11 ± 57755.84 | 738.55 ± 279.65
Successful negotiations (%) | 70.51 | 69.54 | 86.72 | 81.32 | 54.54
Trained on Conceder Time Dependent and Tested on Relative Tit for Tat Behaviour Dependent Seller Strategy
Average utility rate | 0.25 ± 0.05 | 0.21 ± 0.04 | 0.28 ± 0.01 | 0.21 ± 0.08 | -0.28 ± 0.51
Average negotiation time (ms) | 176048.05 ± 14423.36 | 175170.19 ± 14623.53 | 19295.84 ± 53767.54 | 114510.0 ± 64667.79 | 806.83 ± 375.51
Successful negotiations (%) | 79.67 | 76.50 | 84.72 | 71.37 | 51.89
Table 7: Performance comparison for the adaptive behaviour of ANN vs ANEGMA(SL+RL) vs ANEGMA(RL). Best results are in bold. ANN-C and ANN-W correspond to the ANN trained using the data sets collected from the CONAN and Williams' approaches respectively, whereas ANEGMA(SL+RL)-C and ANEGMA(SL+RL)-W correspond to ANEGMA(DDPG) initialized with ANN-C and ANN-W respectively.

Then, the resulting trained ANN models – called ANN-C and ANN-W respectively – were used as the initial strategies in our DRL approach (based on DDPG), where strategies are evolved using negotiation experience from additional simulations. In the remainder, we will abbreviate this model by ANEGMA(SL+RL).

Finally, we use test data from simulations to compare the performance of such derived ANEGMA(SL+RL) buyers against CONAN, Williams’ model, ANN-C, ANN-W, and the so-called ANEGMA(RL) model, which uses DDPG but initialized with a random strategy.

According to our results shown in Tables 5 and 6, the performance of ANN-C is comparable to that of CONAN for both the 60% and 100% Zones of Agreement (see Table 4), and we observe the same for ANN-W and Williams' strategy. We therefore conclude that our approach can successfully produce neural network strategies that imitate the behaviour and the performance of the CONAN and Williams' models (the corresponding training accuracies are shown in Figure 4).

Even more importantly, the results demonstrate that ANEGMA(SL+RL)-C (i.e., DDPG initialized with ANN-C) and ANEGMA(SL+RL)-W (i.e., DDPG initialized with ANN-W) improve on their respective initial ANN strategies obtained by SL, and outperform the DRL agent ANEGMA(RL) initialized at random, for both the 60% and 100% Zones of Agreement (see Tables 5 and 6). This demonstrates that both the evolution of the strategies via DRL and the initial supervision are beneficial. Furthermore, ANEGMA(SL+RL)-C and ANEGMA(SL+RL)-W also outperform the existing "teacher strategies" (CONAN and Williams) used for the initial supervision, and hence improve on them (see Table 4).

5.2.3 Hypothesis C (ANEGMA is adaptable)

In this final test, we evaluate how well our ANEGMA agents can adapt to environments different from those used at training time. Specifically, we deploy strategies trained against Conceder Time Dependent opponents into an environment with Relative Tit for Tat Behaviour Dependent opponents, and vice versa. The ANEGMA agents use experience from 500 simulations to adapt to the new environment. Results are presented in Table 7 for the 60% Zone of Agreement and show the clear superiority of the ANEGMA agents over the ANN-C and ANN-W strategies, which, without online retraining, cannot maintain their performance in the new environment. This confirms our hypothesis that ANEGMA agents can learn to adapt at run-time to different unknown seller strategies.

5.2.4 Further discussion

Regarding the negative average utility values of ANEGMA(RL) (see Tables 5 and 6), recall that we define the utility value as per Equation (5), but without the discount factor term. Therefore, if an agent concedes a lot to make a deal, it collects a negative utility. This is precisely what happens with the initial random (and inefficient) strategy used in the ANEGMA(RL) configuration. The combination of SL and DRL prevents this problem, as it uses a pre-trained initial strategy which is much less likely to incur negative utility values.

For the same reason, we observe a consistently shorter average negotiation time for ANEGMA(RL), caused by the buyer conceding more in order to reach an agreement without negotiating with the seller for long. Hence, a shorter average negotiation time alone does not generally imply better negotiation performance.

An additional advantage of our approach is that it alleviates a common limitation of RL, namely that an RL agent needs a non-trivial amount of experience before reaching a satisfactory performance.

6 Conclusions and Future Work

We have proposed ANEGMA, a novel agent negotiation model that supports agent learning and adaptation during concurrent bilateral negotiations for a class of e-markets such as E-bay. Our approach derives an initial neural network strategy via supervision from well-known existing negotiation models, and then evolves the strategy via DRL. We have empirically evaluated the performance of ANEGMA against fixed-but-unknown seller strategies in different e-market settings, showing that ANEGMA outperforms the well-known existing "teacher strategies", the strategies trained with SL only and those trained with DRL only. Crucially, our model also exhibits adaptive behaviour, as it can transfer to environments with unknown sellers' behaviours different from those seen in training.

As future work, we plan to consider more complex market settings including multi-issue negotiations and dynamic opponent strategies.

References

  • [Alrayes and Stathis2013] Bedour Alrayes and Kostas Stathis. An agent architecture for concurrent bilateral negotiations. In Decision Support Systems III-Impact of Decision Support Systems for Global Environments, pages 79–89. Springer, 2013.
  • [Alrayes et al.2016] Bedour Alrayes, Özgür Kafalı, and Kostas Stathis. Recon: a robust multi-agent environment for simulating concurrent negotiations. In Recent advances in agent-based complex automated negotiation, pages 157–174. Springer, 2016.
  • [Alrayes et al.2018] Bedour Alrayes, Özgür Kafalı, and Kostas Stathis. Concurrent bilateral negotiation for open e-markets: the conan strategy. Knowledge and Information Systems, 56(2):463–501, 2018.
  • [An et al.2006] Bo An, Kwang Mong Sim, Liang Gui Tang, Shuang Qing Li, and Dai Jie Cheng. Continuous-time negotiation mechanism for software agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 36(6):1261–1272, 2006.
  • [Baarslag et al.2016] Tim Baarslag, Mark JC Hendrikx, Koen V Hindriks, and Catholijn M Jonker. Learning about the opponent in automated bilateral negotiation: a comprehensive survey of opponent modeling techniques. Autonomous Agents and Multi-Agent Systems, 30(5):849–898, 2016.
  • [Bakker et al.2019] Jasper Bakker, Aron Hammond, Daan Bloembergen, and Tim Baarslag. Rlboa: A modular reinforcement learning framework for autonomous negotiating agents. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 260–268. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
  • [Choudhary and Bharadwaj2018] Nirmal Choudhary and KK Bharadwaj. Evolutionary learning approach to multi-agent negotiation for group recommender systems. Multimedia Tools and Applications, pages 1–23, 2018.
  • [Faratin et al.1998] Peyman Faratin, Carles Sierra, and Nick R Jennings. Negotiation decision functions for autonomous agents. Robotics and Autonomous Systems, 24(3-4):159–182, 1998.
  • [Goodfellow et al.2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
  • [Hindriks and Tykhonov2008] Koen Hindriks and Dmytro Tykhonov. Opponent modelling in automated multi-issue negotiation using bayesian learning. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 1, pages 331–338. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
  • [Lau et al.2006] Raymond YK Lau, Maolin Tang, On Wong, Stephen W Milliner, and Yi-Ping Phoebe Chen. An evolutionary learning approach for adaptive negotiation agents. International Journal of Intelligent Systems, 21(1):41–72, 2006.
  • [Lee Rodgers and Nicewander1988] Joseph Lee Rodgers and W Alan Nicewander. Thirteen ways to look at the correlation coefficient. The American Statistician, 42(1):59–66, 1988.
  • [Lewis et al.2017] Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125, 2017.
  • [Lillicrap et al.2017] Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, and Daniel Pieter Wierstra. Continuous control with deep reinforcement learning, January 26 2017. US Patent App. 15/217,758.
  • [Mansour and Kowalczyk2014] Khalid Mansour and Ryszard Kowalczyk. Coordinating the bidding strategy in multiissue multiobject negotiation with single and multiple providers. IEEE transactions on cybernetics, 45(10):2261–2272, 2014.
  • [Nguyen and Jennings2004] Thuc Duong Nguyen and Nicholas R Jennings. Coordinating multiple concurrent negotiations. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 3, pages 1064–1071. IEEE Computer Society, 2004.
  • [Oliver1996] Jim R Oliver. A machine-learning approach to automated negotiation and prospects for electronic commerce. Journal of management information systems, 13(3):83–112, 1996.
  • [Papangelis and Georgila2015] Alexandros Papangelis and Kallirroi Georgila. Reinforcement learning of multi-issue negotiation dialogue policies. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 154–158, 2015.
  • [Rahwan et al.2002] Iyad Rahwan, Ryszard Kowalczyk, and Ha Hai Pham. Intelligent agents for automated one-to-many e-commerce negotiation. In Australian Computer Science Communications, volume 24, pages 197–204. Australian Computer Society, Inc., 2002.
  • [Rodriguez-Fernandez et al.2019] J Rodriguez-Fernandez, T Pinto, F Silva, I Praça, Z Vale, and JM Corchado. Context aware q-learning-based model for decision support in the negotiation of energy contracts. International Journal of Electrical Power & Energy Systems, 104:489–501, 2019.
  • [Rubinstein1982] Ariel Rubinstein. Perfect equilibrium in a bargaining model. Econometrica: Journal of the Econometric Society, pages 97–109, 1982.
  • [Sunder et al.2018] Vishal Sunder, Lovekesh Vig, Arnab Chatterjee, and Gautam Shroff. Prosocial or selfish? agents with different behaviors for contract negotiation using reinforcement learning. arXiv preprint arXiv:1809.07066, 2018.
  • [Williams et al.2012] Colin R Williams, Valentin Robu, Enrico H Gerding, and Nicholas R Jennings. Negotiating concurrently with unknown opponents in complex, real-time domains. 2012.
  • [Zeng and Sycara1998] Dajun Zeng and Katia Sycara. Bayesian learning in negotiation. International Journal of Human-Computer Studies, 48(1):125–141, 1998.
  • [Zou et al.2014] Yi Zou, Wenjie Zhan, and Yuan Shao. Evolution with reinforcement learning in negotiation. PLOS one, 9(7):e102840, 2014.