Negotiations, whether between individuals or entities, are ubiquitous in everyday human interactions, ranging from sales to legal proceedings. Being a good negotiator is a complex skill, requiring the ability to understand the partner's motives, to reason, and to communicate effectively, which makes it a challenging task for an automated system. While research in building automatically negotiating agents has primarily focused on agent-agent negotiations Williams et al. (2012); Lin et al. (2014), there is recent interest in agent-human negotiations Gratch et al. (2015) as well. Such agents may act as mediators or can be helpful for pedagogical purposes Johnson et al. (2019).
Table 1: A sample negotiation from the test set, along with the model's predictions.

| Single speed bianchi practically new (Bike) |
| Listing: , Buyer Target: , Agreed (Ground-truth): |

| Negotiation Seen by the Model | Predictions |
| Buyer: Hi. I am interested in your bicycle. How long have you had it for? | |
| Seller: I have had it for a little over a month. | |
| Buyer: Is there anything wrong with it? | |
| Seller: Nothing wrong at all, pretty much new. | |
| Buyer: Okay. I see that you are listing it at . However, I can buy a new one for that. Honestly, without any sort of warranty available and the fact that it is used, I can do . | |
| Seller: It actually still has over months of the warranty that comes with the bike when you buy it. I will not go as low as . I can do , though. | |
| Buyer: Usually a warranty doesn't transfer if you sell it. I can do . | |
| Seller: If you have any problems with it within the next months, save my number. I can do if you will pick up tonight? | |
| Buyer: Sure, I can do that. | |
| Seller: (OFFER ) | |
Efforts in agent-human negotiations involving free-form natural language as the means of communication are rather sparse. He et al. (2018) recently studied natural language negotiations in a buyer-seller bargaining setup, which is comparatively less restricted than previously studied game environments Asher et al. (2016); Lewis et al. (2017). The lack of a well-defined structure in such negotiations allows humans or agents to express themselves more freely, which better emulates a realistic scenario. Interestingly, this also provides an exciting research opportunity: how can an agent leverage behavioral cues in natural language to direct its negotiation strategies? Understanding the impact of natural language on negotiation outcomes through a data-driven neural framework is the primary objective of this work.
We focus on buyer-seller negotiations He et al. (2018) where two individuals negotiate the price of a given product. Leveraging the recent advancements Vaswani et al. (2017); Devlin et al. (2019) in pre-trained language encoders, we attempt to predict negotiation outcomes early on in the conversation, in a completely data-driven manner (Figure 1). Early prediction of outcomes is essential for effective planning of an automatically negotiating agent. Although there have been attempts to gain insights into negotiations Adair et al. (2001); Koit (2018), to the best of our knowledge, we are the first to study early natural language cues through a data-driven neural system (Section 3). Our evaluations show that natural language allows the models to make better predictions by looking at only a fraction of the negotiation. Rather than just realizing the strategy in natural language, our empirical results suggest that language can be crucial in the planning as well. We provide a sample negotiation from the test set He et al. (2018) along with our model predictions in Table 1.
2 Problem Setup
We study human-human negotiations in the buyer-seller bargaining scenario, which has been a key research area in the literature Williams et al. (2012). In this section, we first describe our problem setup and key terminologies by discussing the dataset used. Later, we formalize our problem definition.
Dataset: For our explorations, we use the Craigslist Bargaining (CB) dataset introduced by He et al. (2018). Instead of focusing on previously studied game environments Asher et al. (2016); Lewis et al. (2017), the dataset considers a more realistic setup: negotiating the prices of products listed on Craigslist (sfbay.craigslist.org). The dataset consists of dialogues between a buyer and a seller who converse in natural language to negotiate the price of a given product (sample in Table 1). Product ad postings were scraped from Craigslist, belonging to six categories: phones, bikes, housing, furniture, cars and electronics. Each posting contains details such as a Product Title, a Category Type and a Listing Price. Moreover, a secret target price is pre-decided for the buyer. The final price after the agreement is called the Agreed Price, which we aim to predict.
Defining the problem: Say we are provided with a product scenario S, a tuple (Category, Title, Listing Price, Target Price). (This setup assumes a buyer's perspective, where the target price information is known.) Define the interactions between a buyer and a seller as a sequence of events (e_1, ..., e_n), where e_i occurs before e_j iff i < j. Each event e_i is also a tuple (Initiator, Type, Data): the Initiator is either the Buyer or the Seller, the Type is one of message, offer, accept, reject or quit, and the Data consists of the corresponding natural language dialogue, an offer price, or is empty. Most events in the CB dataset are of type 'message', each carrying a textual message as Data. An offer is usually made and accepted at the end of each negotiation. Since the offers directly contain the agreed price (which we want to predict), we only consider 'message' events in our models. Given the scenario S and the first t events (e_1, ..., e_t), our problem is to learn the function f : (S, e_1, ..., e_t) -> p, where p refers to the final agreed price between the two negotiating parties.
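To make the setup concrete, the scenario and event tuples above can be sketched as simple data structures. This is a minimal illustration; the class, field and function names are our own and not taken from the released CB dataset code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Scenario:
    category: str        # e.g. "bike"
    title: str           # product ad title
    listing_price: float
    target_price: float  # buyer's secret target

@dataclass
class Event:
    initiator: str             # "buyer" or "seller"
    type: str                  # message | offer | accept | reject | quit
    data: Optional[str] = None # dialogue text, offer price, or empty

def visible_events(events: List[Event], t: int) -> List[Event]:
    """Keep only the first t 'message' events; offer events are excluded
    because they directly reveal the agreed price we want to predict."""
    msgs = [e for e in events if e.type == "message"]
    return msgs[:t]
```

A model for f then maps a `Scenario` plus the output of `visible_events` to a single predicted agreed price.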
Pre-trained language models built on the Transformer architecture Vaswani et al. (2017), such as BERT Devlin et al. (2019), have recently achieved huge success on a wide range of NLP tasks. However, since our framework deals with various auxiliary pieces (category, price, etc.), we cannot directly leverage these language models, which have only been trained on natural language inputs. Instead of relying on additional representations alongside the BERT outputs, we propose a simple yet effective way to incorporate the auxiliary information into the same embedding space. Our model hierarchically builds a representation for the given negotiation to finally predict the agreed price. We present our complete architecture in Figure 1.
Table 2: Sentence templates used to encode the auxiliary information and the events.

| Available Information | Sentence Template |
| Category | Category is <Category>. |
| Target | Target Price is <Target>. |
| Title | Title is <Title>. |
| Events (Initiator, Type, Data) | |
| (Buyer, message, <message>) | Buyer: <message> |
| (Seller, message, <message>) | Seller: <message> |
Encoding the input: In order to effectively capture the natural language dialogue and the associated auxiliary information, we make use of pre-defined sentence templates. Table 2 shows how we represent the category, target price and product title as natural language sentences. These sentences are concatenated to form the scenario. In a similar manner, we define templates to capture the negotiator identity (buyer/seller) and any message that is conveyed. As shown in Figure 1, the scenario and the events are separated with [SEP] tokens. Following Liu and Lapata (2019), who use BERT for extractive text summarization, we add a [CLS] token at the beginning of each segment. We also alternate between the two segment embeddings (E_A and E_B) to differentiate between the scenario and the events.
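A rough sketch of this template-based input construction follows. The exact concatenation and whitespace handling are assumptions on our part, and BERT's tokenizer would further split these strings into subwords; the function names are ours.

```python
def encode_scenario(category, target_price, title):
    # Table 2 templates: auxiliary information rendered as plain sentences
    return [f"Category is {category}.",
            f"Target Price is {target_price}.",
            f"Title is {title}."]

def encode_input(scenario_sents, events):
    """Prefix each segment with [CLS] and join segments with [SEP],
    mirroring the multi-segment input format of Liu and Lapata (2019).
    `events` is a list of (initiator, message) pairs."""
    segments = [" ".join(scenario_sents)]
    segments += [f"{who.capitalize()}: {msg}" for who, msg in events]
    return " [SEP] ".join("[CLS] " + seg for seg in segments)
    # segment embeddings would alternate E_A / E_B across these segments
```

For example, a scenario plus two messages yields three segments, hence three [CLS] tokens separated by two [SEP] tokens.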
Architecture and Learning: The BERT representation for each [CLS] token is a contextualized encoding of the word sequence that follows it. In order to further capture the sequential nature of negotiation events, we pass these [CLS] representations through Gated Recurrent Units (GRUs); recurrent networks have been shown to be useful alongside Transformer architectures Chen et al. (2018). Finally, a feed-forward network is applied to predict the agreed price for the negotiation. The model is trained end-to-end and fine-tuned using the Mean Squared Error (MSE) loss between the predicted price and the ground truth.
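The recurrence over the per-segment [CLS] encodings can be illustrated with a single-unit (scalar) GRU cell followed by a linear head. This is a toy sketch of the standard GRU update equations, not the actual multi-dimensional model; all names and weight layouts are our own.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_cell(h, x, p):
    """One GRU update with scalar state h and input x; p holds the six
    gate weights (wz, uz, wr, ur, wn, un)."""
    z = sigmoid(p["wz"] * x + p["uz"] * h)        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h)        # reset gate
    n = math.tanh(p["wn"] * x + r * p["un"] * h)  # candidate state
    return (1.0 - z) * n + z * h

def predict_price(cls_encodings, p, w_out, b_out):
    """Run the GRU over a sequence of (toy, scalar) [CLS] encodings,
    then apply a linear head to regress the normalized agreed price."""
    h = 0.0
    for x in cls_encodings:
        h = gru_cell(h, x, p)
    return w_out * h + b_out
```

Training would minimize the MSE between `predict_price` outputs and the normalized ground-truth agreed prices.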
4 Experimental Details
We perform experiments on the CB dataset to primarily answer two questions: 1) Is it feasible to predict negotiation outcomes without observing the complete conversation between the buyer and the seller? 2) To what extent does incorporating natural language help the prediction? To answer these questions, we compare our model empirically with a number of baseline methods. This section presents the methods we compare against, the training setup and the evaluation metrics.
Methods: The first baseline is the Listing Price (LP), where the model ignores the negotiation and simply returns the listing price of the product. Similarly, we use the Target Price (TP), where the model just returns the buyer's target price. We also consider the mean of the listing and target prices, (TP+LP)/2, as another baseline. Although trivial, these baselines help benchmark our results and also perform well in some cases.
Next, we build a baseline that completely ignores natural language. In this case, the model only sees the sequence of prices shared across the messages in the negotiation. We keep the input format the same as for our model, and all parameters are randomly initialized to remove any learning carried over from natural language pre-training. We refer to this model as Prices-only.
We compare two variants of BERT-based models. First, for the BERT method, we keep only the first [CLS] token in the input and fine-tune the model with a single feed-forward network on top of the [CLS] representation. Second, our complete approach, which we call BERT+GRU, combines a recurrent network with BERT fine-tuning, as depicted in Figure 1.
Training Details: Given the multiple segments in our model input and the small data size, we use BERT-base Devlin et al. (2019). To tackle the variance in product prices across different categories, all prices in the inputs and outputs were normalized by the listing price, and the predictions were unnormalized before the final evaluations. Further, we only considered negotiations where an agreement was reached, since these were the instances for which the ground truth was available. We use a two-layer GRU with dropout. The models were trained with the AdamW optimizer Loshchilov and Hutter (2018), using a linear warmup schedule over the first fraction of the training steps. All hyper-parameters, including the learning rate, batch size and number of training iterations, were optimized on the provided development set.
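As a sketch, the normalization above amounts to dividing every price by the listing price and scaling predictions back afterwards, so that, say, a car and a phone live on comparable scales. The function names are ours.

```python
def normalize_prices(prices, listing_price):
    """Express all prices in a negotiation relative to the listing price,
    reducing cross-category variance in the regression targets."""
    return [p / listing_price for p in prices]

def unnormalize(pred, listing_price):
    """Map a model prediction back to the original price scale."""
    return pred * listing_price
```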
Evaluation Metrics: We study variants of the same model trained with different proportions of the negotiation seen. We compare the models on two evaluation metrics: MAE, the Mean Absolute Error between the predicted and ground-truth agreed prices, and Accuracy, the percentage of cases where the predicted price lies within a fixed percentage of the ground truth.
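The two metrics can be computed as follows; a minimal sketch, assuming Accuracy uses a tolerance expressed as a fraction of the ground-truth price.

```python
def mae(preds, golds):
    """Mean Absolute Error between predicted and gold agreed prices."""
    return sum(abs(p - g) for p, g in zip(preds, golds)) / len(preds)

def accuracy_within(preds, golds, tol):
    """Fraction of predictions within tol (e.g. 0.1 for 10%) of the
    ground-truth price."""
    hits = sum(abs(p - g) <= tol * g for p, g in zip(preds, golds))
    return hits / len(preds)
```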
5 Results and Discussion
We present our results in Figure 2, and show Accuracy for different product categories in the Appendix. First, Target Price (TP) and (TP+LP)/2 prove to be strong baselines, with the latter achieving competitive Accuracy; this is also attested by relatively strong numbers on the other metrics. Prices-only, which does not incorporate any knowledge from natural language, fails to beat the average baseline even when given a large fraction of the negotiation history. This can be attributed to the observation that in many negotiations, before discussing the price, buyers tend to gather more information about the product by exchanging messages: what is the condition of the product, how old is it, is there any urgency for the buyer or seller, and so on. Incorporating natural language in both the scenario and the event messages paves the way to leverage such cues and make better predictions early on in the conversation, as depicted in the plots. Both BERT and BERT+GRU consistently perform well on the complete test set. There is no clear winner, although using a recurrent network proves more helpful in the early stages of the negotiation. Note that the BERT method still employs multiple [SEP] tokens along with alternating segment embeddings (Section 3); without them, the fine-tuning pipeline proves inadequate. Overall, BERT+GRU achieves non-trivial Accuracy from just the product scenario, and its Accuracy rises steadily as more of the messages are seen and more information about the final price is revealed. Paired Bootstrap Resampling Koehn (2004) shows that, for a given proportion of the negotiation seen, BERT+GRU is better than its Prices-only counterpart with statistical significance.
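Paired Bootstrap Resampling compares two systems on resampled versions of the same test instances. A minimal sketch follows; the resample count and the per-instance error representation are our assumptions.

```python
import random

def paired_bootstrap(err_a, err_b, n_boot=1000, seed=0):
    """Return the fraction of bootstrap resamples in which system A has
    a lower mean error than system B on the same test instances.
    err_a / err_b are paired per-instance errors (e.g. absolute errors)."""
    rng = random.Random(seed)
    n, wins = len(err_a), 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        mean_a = sum(err_a[i] for i in idx) / n
        mean_b = sum(err_b[i] for i in idx) / n
        wins += mean_a < mean_b
    return wins / n_boot
```

A win fraction near 1.0 indicates that system A's advantage is statistically significant at the corresponding level.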
The prices discussed during the negotiation still play a crucial role in making the predictions. In fact, in only a fraction of the negotiations is the first price quoted early in the sequence of events; this is visible in the higher performance once more events are seen beyond that point. This fraction is lower than average for the Housing, Bike and Car categories, resulting in relatively better performance of the Prices-only model on these categories than on the others. The models also show evidence of capturing buyer interest: by constructing artificial negotiations, we observe that the model predictions increase when the buyer shows more interest in the product, indicating more willingness to pay. With the capability to incorporate cues from natural language, such a framework could be used in the future to provide negotiation feedback and guide the planning of a negotiating agent. This can be a viable middle ground between following average human behavior through supervised learning and exploring in the wild by optimizing rewards with reinforcement learning Lewis et al. (2017); He et al. (2018).
We presented a framework for early prediction of the agreed product price in buyer-seller negotiations. We constructed sentence templates to encode the product scenario, the exchanged messages and the associated auxiliary information into the same embedding space. By combining a recurrent network with the pre-trained BERT encoder, our model leverages natural language cues in the exchanged messages to predict negotiation outcomes early in the conversation. With this capability, such a framework can be used in a feedback mechanism to guide the planning of a negotiating agent.
References

- Adair et al. (2001). Negotiation behavior when cultures collide: the United States and Japan. Journal of Applied Psychology, 86(3), 371.
- Asher et al. (2016). Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus.
- Chen et al. (2018). The best of both worlds: combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 76–86.
- Devlin et al. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186.
- Gratch et al. (2015). Negotiation as a challenge problem for virtual humans. In International Conference on Intelligent Virtual Agents, pp. 201–215.
- He et al. (2018). Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2333–2343.
- Johnson et al. (2019). Intelligent tutoring system for negotiation skills training. In International Conference on Artificial Intelligence in Education, pp. 122–127.
- Koehn (2004). Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 388–395.
- Koit (2018). How people negotiate? From the analysis of a dialogue corpus to a dialogue system. In 2018 Innovations in Intelligent Systems and Applications (INISTA), pp. 1–6.
- Lewis et al. (2017). Deal or no deal? End-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125.
- Lin et al. (2014). Genius: an integrated environment for supporting the design of generic automated negotiators. Computational Intelligence, 30(1), pp. 48–70.
- Liu and Lapata (2019). Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3721–3731.
- Loshchilov and Hutter (2018). Decoupled weight decay regularization.
- Vaswani et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- Williams et al. (2012). IAMhaggler: a negotiation agent for complex environments. In New Trends in Agent-based Complex Automated Negotiations, pp. 151–158.
Appendix A Category-wise performance
We show the category-wise performance in Figure 3.