Social versus Moral preferences in the Ultimatum Game: A theoretical model and an experiment

04/03/2018 · by Valerio Capraro et al.

In the Ultimatum Game (UG), one player, the "proposer", decides how to allocate a certain amount of money between herself and a "responder". If the offer is greater than or equal to the responder's minimum acceptable offer (MAO), the money is split as proposed; otherwise, neither the proposer nor the responder gets anything. The UG has intrigued generations of behavioral scientists because people in experiments blatantly violate the equilibrium predictions that self-interested proposers offer the minimum available non-zero amount and self-interested responders accept it. Why are these predictions violated? Previous research has mainly focused on the role of social preferences. Little is known about the role of general moral preferences for doing the right thing, preferences that have been shown to play a major role in other social interactions (e.g., the Dictator Game and the Prisoner's Dilemma). Here I develop a theoretical model and an experiment designed to pit social preferences against moral preferences. I find that, although people recognize that offering half and rejecting low offers are the morally right things to do, moral preferences have no causal impact on UG behavior. The experimental data are indeed well fit by a model according to which: (i) high UG offers are motivated by inequity aversion and, to a lesser extent, self-interest; (ii) high MAOs are motivated by inequity aversion.
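The payoff rule described in the abstract can be sketched as a small function; the function name and signature below are illustrative, not from the paper:

```python
def ultimatum_payoffs(total, offer, mao):
    """Payoffs (proposer, responder) under the UG rule: the split
    takes effect only if the offer meets the responder's minimum
    acceptable offer (MAO); otherwise both players get nothing."""
    if offer >= mao:
        return (total - offer, offer)
    return (0, 0)

# With a $10 pie: a fair offer of 5 against a MAO of 3 is accepted,
# while a low offer of 1 against the same MAO is rejected.
print(ultimatum_payoffs(10, 5, 3))  # (5, 5)
print(ultimatum_payoffs(10, 1, 3))  # (0, 0)
```

Under the standard equilibrium prediction mentioned above, a self-interested responder's MAO would be the smallest non-zero amount, so nearly any offer would be accepted; the paper's point is that observed offers and MAOs are far higher than that.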
