I Introduction
The very survival and success of a society with shared resources depends on the rules and protocols agents use to interact with each other.
In designing the rules of these societies, there is always a tradeoff involving centralization, efficiency, robustness, and resiliency. A centralized system for resource allocation needs more infrastructure and is less robust and resilient, yet it is the most efficient. A distributed system is more resilient and privacy-preserving.
In intelligent transportation systems, we can distinguish the “macro” level of the fleet and the “micro” level of the vehicles. At the macro level, much research has shown that a substantial improvement in the efficiency of a transportation network can be obtained [1, 2] by optimizing resource use through cooperative approaches; that is, one takes the perspective of a single agent that can centrally control a fleet of vehicles. At the micro level there are similar resource allocation problems. With the advent of self-driving cars to be used in autonomous mobility-on-demand networks, the “micro” coordination problems become interesting, as we study how the codes, customs, and conventions of human drivers can be generalized to a scenario with both artificial and human agents.
The prototypical problem is intersection management. Deciding which car may pass first is a resource allocation problem, in which the resource is the use of the space inside the intersection in a given time interval. Similar resource allocation problems also arise in maneuvers outside of intersections, as drivers compete for the use of space, although the outcome is not a simple discrete decision as in intersection management. These interactions happen between independent agents with competitive goals, and typically are not repeated, as it is rare to encounter the same vehicle again. Therefore, there is little incentive to give in at any one interaction; at face value, this appears to be a non-repeated game.
Typical human drivers do not act like purely self-interested agents. Humans have ways to communicate urgency and politely negotiate maneuvers while they drive. Ultimately this is due to the altruism and prosociality bias that evolved in our species [3]; this bias makes an individual intrinsically happy to accommodate somebody who seems to be in a hurry. Our species thrived because individuals are not completely self-interested. When we lived in tribes, deviant antisocial behavior was easily spotted and repressed; now that our social groups are counted in the billions, a set of rules (laws) and corresponding incentives (punishments) help align individual and societal interests in the handling of common resources [4]. When driving, some of our behaviors derive from these incentives (we do not speed because we are afraid of tickets), but many polite behaviors are due to our visceral intrinsic motivation rather than extrinsic rewards and punishments.
How can we ensure that a population of artificial agents, such as self-driving cars, can attain the same efficiency as a prosocial species like humans? In this paper, we consider the problem of resource allocation in a setting that we call a Karma Game. The idea is that considerable gains can be realized if an agent is inclined to give in at one interaction, provided it is compensated with “karma”. Thus, we introduce karma as a way to account for an agent’s past actions. (This concept is closer to how “karma” is used in video game mechanics than to how it is understood in Indian religions.)
We define a karma protocol with which agents can negotiate the use of resources. The protocol describes the exchange of bidding messages and how karma is updated based on the outcome of the interaction. The protocol does not need a third party, and the primitives needed to implement karma accounting and the interaction are those provided by many blockchains, such as Ethereum [5].
Having fixed the protocol, we study how a population of selfinterested agents will use it, by computing the Nash equilibria for the resulting Karma game. We then compare the Nash equilibria of the distributed system with the baseline of the optimal centralized policies. We observe that the efficiency of the system is remarkably similar. The social welfare is thus closely aligned with the selfinterest of the agents, assuming the agents have reasonable discount factors. An agent that does not care about the future and lives for the present will also create an inefficient society.
II Related work
Intersection control
Traditional intersection control strategies have been substantially based on control devices such as traffic lights, for which an offline optimization based on historical data can be used to provide a control signal [6, 7]. The main drawback of this control strategy is that it cannot adapt to changes in request patterns and in the environment. Improving upon classical control strategies, communication-based schemes [8] are based on a competitive scenario, in which different vehicles aim at minimizing their own selfish cost. The urgency is assumed to be a piece of private information of each vehicle, and is therefore not accessible to other vehicles. This kind of scenario is typically tackled via auctions, which can be designed to induce selfish agents to disclose their true urgency [9, 10, 11, 12, 13, 14, 15, 16]. For example, in [9], the earliest time slot in an intersection is auctioned off by an intersection manager among all vehicles at the front of each lane. In [10], any agent in a lane, assumed to have an infinite budget, can participate in a second-price auction to enhance the winning chance of the agent at the front of that lane. In [11], a mechanism based on a first-price auction is proposed for the management of intersections. In [12], two scenarios (a single intersection and a network of intersections) are considered, and a policy based on a combinatorial auction for assigning reservations of time-space slots is presented; however, finding the winner of a combinatorial auction is NP-hard [17]. Finally, to schedule intersection usage, [16] proposes a variant of the Vickrey-Clarke-Groves mechanism in which an intersection unit charges each agent at the front of any lane with a time token based on its impact on others.
We note that our approach departs from the auction-based schemes in the papers mentioned above: to preserve fairness between wealthier drivers and those with fewer funds, it does not require any monetary transactions, and therefore does not require attaching an objective monetary value to the cost incurred by the vehicles. We will discuss later how this sheds light on the true nature of this coordination problem. Every vehicle is assigned an initial karma level. Because our mechanism is budget-balanced, the total amount of karma remains constant over the whole transportation network. Also, unlike the assumption in [10], every agent has a limited amount of karma at any time, which is never negative and never exceeds a maximum value.
Almost all works in the literature that propose an auction-based approach for intersection control consider static, one-time decision problems. However, since the urgencies and the agents’ private information change over time, a sequence of decisions needs to be made, resulting in a dynamic resource allocation and a dynamic bidding process [18]. Thus, the utility function of each agent, along with the social welfare, is defined based on the discounted utility over time. We assume that in every interaction, vehicles are allowed to communicate a scalar message. The karma value of each agent is a public state. Both the outcome of the interaction (who goes first) and the update of the public state are determined by a set of rules that are known and verifiable by all agents (as they depend only on public information: the states and the messages).
Karma-like concepts
A “karma” system was introduced in [19] in the context of file sharing, to prevent “freeloading” in peer-to-peer networks. In this framework, karma represents the standing of each agent in the system, which increases when contributing a resource and decreases when consuming one, and thereby incentivizes agents to contribute resources [20]. In this and similar systems, the “value” of karma is fixed; in our approach, the agents are free to assign a value to karma according to their goals and current state.
Population games
This competitive scenario can be modeled as a repeated game (interactions) between randomly selected agents in a large population. For the analysis of the resulting game, we adopt the approach typically used in the study of population games [21], which rests on the following abstractions: 1) populations are continuous rather than discrete; the payoffs to a given strategy therefore depend on society’s aggregate behavior in a continuous fashion; 2) the aggregate behavior in a population game is described by a “social state”, which specifies the empirical distribution of strategy choices (or types) in the population; for simplicity, this social state is generally finite-dimensional. The specific application that we are considering has, however, some peculiarities compared to standard population games: for example, each agent’s type is also determined by an exogenous time-varying signal (their urgency). Moreover, there is no natural revision protocol or adaptation, and therefore no evolution of the agents. We therefore prefer to present the resulting game in a self-contained and specialized form, without explicitly tapping into that literature for definitions or results. Notice that the game we are formulating is more general than the specific traffic interaction problem, although clearly inspired by that setup.
III Resource allocation in a “drive-by” scenario
In this section we introduce a deliberately simple model for vehicle-to-vehicle interaction at intersections. We strove to reduce the model to its core features, in order to isolate the essential phenomena in this problem. We understand the problem of vehicle-to-vehicle interaction at intersections as an example of a “drive-by” scenario, in which:
- There is a large number of agents in the system.
- Agents interact on a random schedule.
- Each agent interacts many times with other agents over its lifetime.
- The value of a resource to an agent varies in time according to an exogenous factor.
For vehicle-to-vehicle (V2V) interactions at intersections:
- There is a large number of cars on the road.
- Cars meet randomly at intersections.
- Each car encounters many intersections over its lifetime.
- The value of time saved to a car varies in time according to its urgency on that day.
III-A Formalization
More formally, consider a population of n vehicles. Each vehicle i has an associated urgency process u_i(t). The urgency u_i(t) at time t indicates the marginal value that agent i assigns to a unitary delay in its trip. It is an exogenous process that is not affected by the behavior of the vehicles.
The vehicles interact at intersections. Each interaction at time t involves only a pair of vehicles (i, j).
Every time two vehicles interact, one of the two vehicles is necessarily delayed by a unitary delay, while the other vehicle does not incur any delay. We therefore have two possible outcomes: either i or j goes first. Agent i (and, in a completely symmetric way, agent j) incurs a cost that is a function of the outcome and of its own urgency, and is defined as
c_i = u_i(t) if agent i is delayed, and c_i = 0 otherwise. (1)
III-B Assumptions
We propose the following assumptions about the model.
Assumption 1 (Randomness of encounters).
The sequence of interacting pairs is random and identically distributed at all times over the set of all pairs of vehicles, and each vehicle has the same probability of belonging to the interacting pair at any given time.
Assumption 2.
The urgency processes are identical for all vehicles. The urgency at each time is independent and identically distributed, and takes values in a finite set.
III-C Performance measures
The focus of this paper is on policies that decide the outcome of each interaction optimally, where the notion of optimality is defined hereafter.
We define two measures of social cost for the entire population, which are associated with two different interpretations. The first measure simply quantifies the expected aggregate cost for the entire system at each interaction.
The second measure quantifies the expected rate at which the variance (across agents) of the accumulated cost grows, where the vector of accumulated costs of the agents is defined elementwise as the sum over time of the costs incurred by each agent.
In these expressions, the expectation is taken with respect to both the stochastic urgency processes and the interaction selection process (which are independent processes).
III-D Centralized policies
In this section, we derive the optimal centralized policies for the simplified intersection management problem that we presented, under the notions of social optimality that we described. These optimal centralized policies will constitute a baseline for the analysis of the policies that emerge in a distributed competitive setting.
In a centralized setting, we are allowed to adopt causal policies that decide the outcome of each interaction as a function of the past urgencies of all agents.
Under Assumptions 1 and 2, the optimal policies for the two social costs can be computed explicitly.
Proposition 1.
The two social costs are minimized, respectively, by the policies
(2) let the agent with the higher current urgency go first,
and
(3) let the agent with the higher accumulated cost go first.
If the maximization does not return a singleton, then either choice is optimal. Here and hereafter, we assume that ties are resolved via fair coin flipping.
We also define a third centralized policy, which prioritizes the minimization of the aggregate cost (therefore obtaining the same efficiency as (2)) and, in case of ties between the two agents’ urgencies, breaks the tie so as to minimize the unfairness as in (3):
(4) let the agent with the higher urgency go first, breaking urgency ties in favor of the agent with the higher accumulated cost.
IV Resource allocation using Karma Games
In this section, we formulate a mechanism for resource allocation based on the notion of karma. We only design the mechanism, not the agents’ policy, which will emerge from the agents’ own optimization.
IV-A Informal definition of karma interaction mechanism
We assume that there is an integer counter (karma) associated with each agent, bounded by a maximum value k_max. The agents exchange one message at each interaction. Each agent can produce a message which contains a value not exceeding its current karma: 0 ≤ m ≤ k.
We give this message the semantics of how much karma the agent sees fit to bid on the current interaction. The agent that provides the highest message is allowed to use the resource (go first at the intersection) and must pay the other agent up to the karma value that it has bid. The karma transferred is reduced if the transfer would make the other agent’s karma overflow k_max: if agent i wins bidding m_i against agent j, the karma transferred is min(m_i, k_max - k_j).
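The bid resolution and overflow-clamped transfer described above can be sketched as follows. This is a minimal illustration of ours, not the paper’s code; the function name and the `k_max` parameter are our own choices, while the tie-break by fair coin flip follows the text.

```python
import random

def resolve_interaction(k_i, m_i, k_j, m_j, k_max, rng=random):
    """Resolve one karma interaction between agents i and j.

    Bids must not exceed current karma; the higher bid wins (goes first)
    and pays the loser, clamped so the loser's karma never exceeds k_max.
    Returns (winner, new_k_i, new_k_j).
    """
    assert 0 <= m_i <= k_i and 0 <= m_j <= k_j
    if m_i > m_j:
        winner = "i"
    elif m_j > m_i:
        winner = "j"
    else:
        winner = rng.choice(["i", "j"])  # ties resolved by fair coin flip
    if winner == "i":
        transfer = min(m_i, k_max - k_j)  # clamp to avoid overflowing j
        return winner, k_i - transfer, k_j + transfer
    else:
        transfer = min(m_j, k_max - k_i)  # clamp to avoid overflowing i
        return winner, k_i + transfer, k_j - transfer
```

Note that, by construction, the sum of the two karma levels is unchanged and both stay in [0, k_max], matching the conservation and boundedness properties claimed in Section IV-C.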
Remark 1.
In this paper we do not delve into the technical implementation of such a scheme, but we would like to remark that it is possible to implement it, in a completely distributed way, without an arbiter to preside at each interaction, by using some of the cryptographic primitives associated with blockchain technology. The counters are implemented using public addresses. Non-refutable messages are implemented using cryptographic commitments. The resolution and the outcome can be easily implemented using the primitives of, for example, Ethereum’s Solidity language.
IV-B Formal definition of Karma Game
We formalize the discussion so far by defining Karma Games in a way that is slightly more general.
Definition 1 (Karma Game in Tabular Format).
A Karma Game is a tuple (K, M, O, c, α, Γ, T) where:
- K is a set of possible public states (karma) of an agent;
- M is a set of possible messages of an agent;
- O is a set of possible outcomes of an interaction;
- c is the instantaneous cost for each agent, which depends on the outcome of the interaction and on the exogenous state of the agent;
- α is a discount factor;
- Γ is the interaction outcome function, giving a probability distribution on O;
- T is the state transition function.
The interpretation is as follows. Suppose an agent of karma k_i meets an agent of karma k_j, and they exchange messages m_i and m_j. The function Γ gives a distribution on the possible outcomes:
Γ : (k_i, m_i, k_j, m_j) → distribution over O.
As for the consequences, T is the map that specifies the probability distribution of the next values of k_i and k_j:
T : (k_i, m_i, k_j, m_j, o) → distribution over K × K.
The cost for each agent is given by the following series, where time t is to be interpreted as ranging over the instants in which the agent participated in an interaction:
Σ_t α^t c(o(t), u(t)) (5)
IV-C Vehicle interaction as a Karma Game
We now put the model described so far in the form of Definition 1. K is the set of integers up to k_max:
K = {0, 1, …, k_max}.
There are two possible outcomes of each interaction, as explained in Section III: one agent goes first and the other is delayed. For the outcome distribution Γ, the agent sending the highest message goes first, with ties resolved by a fair coin flip.
For the state transition function T, with probability 1 the winning agent, say i with message m_i, transfers to the losing agent j an amount of karma equal to min(m_i, k_max - k_j).
These rules guarantee that
- the total amount of karma is conserved;
- karma is bounded above by k_max and below by 0.
The cost function is the one already defined in (1).
V Acting rationally in a Karma Game
We now turn our attention to the rational behavior of an agent in a Karma Game. An agent’s behavior is completely defined by its policy.
Definition 2 (Agent policy).
In a Karma Game, the agent’s policy π is a probability distribution over the possible messages, which varies as a function of the agent’s current urgency u and current karma k: π(m | u, k).
As an agent, we need to decide what message to send for each combination of urgency and karma. In game-theory jargon, we speak of a set of different agent “types”; here, a type is a pair (u, k). The traditional notion of “agent type” does not fully capture our setting, because following an interaction the type of an agent changes as it gains or loses karma. Moreover, the urgency is an exogenous variable that nobody can predict. Still, we use “agent type” in the following.
Under our assumptions, it is easy to compute the optimal policy for an agent whose urgency is zero. In that case, the optimal action is to send the message m = 0. That is because the agent is indifferent to losing or winning the interaction as far as the cost is concerned; and, regarding the karma, the agent prefers to lose the interaction hoping to gain some karma.
If an agent has a nonzero urgency, how much karma should it bid today? This does not have an easy answer, except in special cases. For example, if the discount factor α is zero (the agent does not care about the future), then the optimal policy is to bid the maximum message m = k. In all other cases, we need to characterize and compute Nash equilibria for this game. Figure 2 shows a representation of such an optimal policy obtained as a Nash equilibrium.
V-A Characterization of Nash equilibria for a Karma Game
To characterize the equilibrium of the game, we must consider, in addition to the policy π, a series of related quantities: the stationary distribution of karma in the population, the induced transition function, and the expected utility of each message.
The transition function immediately descends from the composition of T with the policy π and the outcome distribution Γ, averaging over the karma distribution of the other agents and over all agents’ urgencies.
To express the expected cost of an interaction, it is convenient to define the function q, which gives the expected utility of choosing message m for an agent of type (u, k).
(6)
Figure 2 shows a representation of a typical q. Based on this definition, the expected cost of an interaction is
(7)
We can now define the notion of Nash equilibrium for a Karma Game.
Definition 3 (Nash equilibrium for a Karma Game).
A policy π is a Nash equilibrium for the Karma Game if there exist a stationary karma distribution and a karma utility function that satisfy three properties:
P1: Stationarity: the karma distribution is the equilibrium distribution for the transition map induced by π.
P2: Bellman: There exists a function, representing the expected total cost for an agent as a function of the present value of the karma, that satisfies the Bellman-like equation
(8) 
for the expected interaction cost defined in (7) and the discount factor α.
P3: Rationality: The policy must yield the best expected outcome among all policies, where the total discounted cost was defined in (5) and can be expressed in terms of the expected interaction cost and the karma utility.
The next section will be devoted to the numerical computation of a Nash equilibrium for the Karma Game of interest and to the interpretation of the resulting policies and outcome.
VI Computing Nash equilibria of Karma Games
In general, Nash equilibria can be computed by iterative algorithms. Starting with an initial policy, one computes the other unknowns (stationary distribution, karma utility); then one recomputes the optimal policy. If the recomputed policy differs from the initial one, the difference is a profitable perturbation of the policy. Based on this perturbation, one can make a small update to the policy, and repeat the process until convergence. If this process converges, then by definition we have found a Nash equilibrium as defined above. However, there is in general no guarantee that the iterative process converges.
VI-A Fixed-point computation
We show here how to rearrange the equations to put them in the form of a fixed point.
Suppose we have a current guess of the policy, the stationary distribution, and the karma utility.
Step 1: Compute the new policy from the previous policy, the stationary distribution, and the expected utility. The policy is computed using (9), based on the values of q obtained from (6).
Step 2: Compute the transitions from the policy and the stationary distribution. Given the policy and the stationary distribution, we can compute the transitions of the system: for each type, we know the distribution of the types it will encounter and we know their policy. Thus, we can compute the outcomes, and the consequences of the outcomes in terms of the next value of the karma.
Step 3: Compute the stationary distribution from the transitions. This is a standard step: given a transition matrix, compute the equilibrium distribution. It can be done by iteration or by solving an eigenvector problem.
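Step 3 admits a compact implementation. The sketch below is our own illustration using numpy, with a toy two-state chain standing in for the actual karma transition matrix; it computes the equilibrium distribution by power iteration, one of the two options mentioned above.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Equilibrium distribution of a row-stochastic transition matrix P,
    computed by power iteration (Step 3 of the fixed-point scheme)."""
    n = P.shape[0]
    d = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        d_next = d @ P               # one step of the Markov chain
        if np.abs(d_next - d).max() < tol:
            break
        d = d_next
    return d_next

# Toy 2-state chain: state 0 stays with prob. 0.9, state 1 with prob. 0.8.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
d = stationary_distribution(P)       # → approximately [2/3, 1/3]
```

Solving the left-eigenvector problem d P = d directly (e.g. via `numpy.linalg.eig` on P transposed) gives the same answer; power iteration is often preferable when the transition matrix is large and sparse.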
VI-B Momentum and simulated annealing
We found two simple devices that make the convergence robust, in the sense that the policy converges to the same solution no matter the initial conditions of the policy, stationary distribution, and karma utility.
VI-B.1 Momentum
In Section VI-A, we have defined a way to update the policy that we can abstract as a function mapping the current guess (policy, stationary distribution, utility) to a new policy. Define the “momentum” as a scalar β between 0 and 1. Then we update the policy as a convex combination of the old policy and the newly computed one, weighting the old policy by β.
For the set of simulations described below the optimization parameters were constant, but we did find that, in general, for different values of the model parameters the optimization parameters had to be tuned.
VI-B.2 Simulated annealing
Let τ be a temperature parameter. Rather than looking for a pure strategy, we set the policy to a softmax of the expected utilities with temperature τ:
(9)
For large values of τ, agents choose an almost uniformly random action. As τ decreases, the agents more often choose actions with good rewards. As τ → 0, the policy tends to the deterministic policy that selects the message maximizing the expected utility q.
In the simulations, we gradually decrease the temperature of the system in a series of “eras” (Figure 3).
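Both devices are easy to state in code. The sketch below uses our own notation (a utility vector `q` indexed by message, temperature `temperature`, momentum `beta`) to illustrate the softmax policy of (9) and the momentum blend; it is an illustration, not the authors’ implementation.

```python
import numpy as np

def softmax_policy(q, temperature):
    """Turn expected utilities q[m] into a stochastic policy over messages.
    Large temperature -> near-uniform; temperature -> 0 recovers argmax."""
    z = (q - q.max()) / temperature   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def momentum_update(pi_old, pi_new, beta):
    """Blend the recomputed policy with the previous one (momentum beta).
    beta = 0 accepts pi_new outright; beta close to 1 moves slowly."""
    return beta * pi_old + (1.0 - beta) * pi_new

q = np.array([1.0, 2.0, 0.5])
hot = softmax_policy(q, temperature=100.0)   # almost uniform
cold = softmax_policy(q, temperature=0.01)   # almost deterministic argmax
```

Since both inputs to `momentum_update` are probability vectors, the blend is again a probability vector, so the iterate always remains a valid policy during annealing.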
VI-C Equilibria parametrization in α
The parameter α, introduced as a cost-discounting factor in (5), determines how much importance an agent assigns to future costs. In the limit α → 0, the agent is only concerned with minimizing instantaneous costs. When α approaches 1, future costs are deemed almost as important as present costs. To determine the influence this factor has on agent policies, we ran experiments with values of α spanning the interval from 0 to 1. As an overview of the effect, Figure 4 depicts the gradual changes in policy as α is increased. Similarly, Figure 5 gives an overview of the effect of time discounting on the best message to send given a karma level.
One caveat is that the Nash equilibria are not well defined when α = 1, as some of the series in the formalization do not converge. Still, we also include the results of the algorithm for that value. Similarly, we believe that for α close to 1 there are numerical instabilities, and in fact we find much larger oscillations. Rather than tuning the optimization parameters for each α, we keep the same parameters, and we still picture the results for the largest values of α, without fully believing they are Nash equilibria for the game.
VII Policy comparison
In this section, we are interested in gaining an empirical understanding of different solutions to the proposed distributed interaction problem.
Evaluation protocol
All simulations of interactions follow the same general procedure. As described in Section III, agents randomly meet in pairs and bid karma if they are urgent in order to pass first at an intersection. All experiments were conducted with 200 agents and a total of 1000 time periods. On each day, there are an average of 0.1 interactions per agent. Agents are urgent (with a fixed positive magnitude) or not urgent (magnitude 0) with equal probability. Each agent has an initial karma level chosen uniformly at random between the minimum and maximum karma levels; through interactions, an agent’s karma always remains between these bounds.
In the following, we compare various policies as well as the underlying parameters influencing the agents’ policies. We consider two performance metrics, which are finite-sample proxies for the two social costs defined in Section III-C:
- “Inefficiency”: the average cost per interaction attained by the agents at the end of the simulation period. Note that this is not the discounted cost that each agent is trying to minimize; rather, it is the social welfare, which roughly corresponds to the case α = 1.
- “Unfairness”: the standard deviation of the agents’ accumulated costs at the end of the simulation period.
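A minimal simulation in the spirit of this protocol is sketched below. It is our own illustration, not the authors’ code: the agent count and horizon follow the text, but the urgency magnitude, the karma bound `k_max = 12`, and the simplification of one random encounter per period are assumptions of ours. The `bid1_if_urgent` heuristic from the Results section serves as the example policy.

```python
import random
import statistics

def simulate(policy, n_agents=200, periods=1000, k_max=12,
             p_urgent=0.5, urgency=1.0, seed=0):
    """Finite-sample proxies for inefficiency and unfairness under a
    bidding policy(karma, urgent) -> bid. Agents meet in random pairs."""
    rng = random.Random(seed)
    karma = [rng.randint(0, k_max) for _ in range(n_agents)]
    cost = [0.0] * n_agents
    total_cost = 0.0
    for _ in range(periods):
        i, j = rng.sample(range(n_agents), 2)   # one random encounter
        u_i = urgency if rng.random() < p_urgent else 0.0
        u_j = urgency if rng.random() < p_urgent else 0.0
        m_i = min(policy(karma[i], u_i > 0), karma[i])
        m_j = min(policy(karma[j], u_j > 0), karma[j])
        if m_i > m_j or (m_i == m_j and rng.random() < 0.5):
            winner, loser, bid, u_lost = i, j, m_i, u_j
        else:
            winner, loser, bid, u_lost = j, i, m_j, u_i
        t = min(bid, k_max - karma[loser])      # overflow-clamped transfer
        karma[winner] -= t
        karma[loser] += t
        cost[loser] += u_lost                   # the delayed agent pays its urgency
        total_cost += u_lost
    inefficiency = total_cost / periods         # average cost per interaction
    unfairness = statistics.pstdev(cost)        # spread of accumulated costs
    return inefficiency, unfairness

def bid1_if_urgent(karma, urgent):
    return 1 if urgent else 0

ineff, unfair = simulate(bid1_if_urgent)
```

The same harness evaluates any of the other policies by swapping the `policy` callable, which is what makes the inefficiency/unfairness comparison across policies straightforward.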
Policies
In addition to the Nash equilibria found when sweeping α between 0 and 1, we consider other policies that serve as useful reference points: the three centralized policies of Section III-D (centralized-urgency, centralized-cost, centralized-urgency-then-cost), a baseline-random policy that picks the winner at random, and the heuristic bidding strategies bid1-always, bid1-if-urgent, and bid-all-if-urgent.
Results
The overall results are shown in Figure 6.
baseline-random (top right) obtains the worst results, as one might expect.
centralized-urgency-then-cost (bottom left) obtains the best results for both fairness and efficiency, as expected.
centralized-cost does well in terms of unfairness, as it tries to reduce the spread of the costs, but it is very inefficient.
centralized-urgency obtains minimum inefficiency (as predicted), but it does not do anything to reduce the spread of the costs, leading to a relatively high unfairness.
The baselines provide a reference frame to interpret the results for the karmabased policies.
We find many interesting nuggets. For example, bid1-always is very inefficient, as inefficient as baseline-random, but it is less unfair. This is because the karma accounting keeps track of the previous times an agent lost, thereby slightly reducing the unfairness even though the policy is trivial.
Next, consider the performance of bid1-if-urgent. This corresponds to a mechanism in which the agents use the karma message to reveal their urgency. It is not an equilibrium of the game (this can be easily verified by noting that it is not a fixed point of the procedure discussed above); still, we found it surprising that its efficiency is worse than that of some of the Nash equilibria we find.
Next, we consider the performance of the Nash equilibrium as a function of α. As α varies, the sequence of equilibria draws a hook in the inefficiency/unfairness plane. The continuity of this curve is also good evidence that the procedure converged well (as noted before, for α near 1 convergence is not assured).
We find the surprising result that, for sufficiently large values of α, the Nash equilibria achieve better efficiency than the bid1-if-urgent strategy. The reason is that the agents should modulate their bids according to what their karma levels allow: bidding only 1 is not the best strategy, neither for the agents nor for society. For smaller values of α, the agents do worse.
The “there is no tomorrow” strategy (bid everything if urgent) is particularly bad for society, though not as bad as random: karma still allows some reparations to be made.
We observe that, for a range of values of α, the karma strategies beat the centralized-urgency strategy in unfairness. There is a minimum unfairness observed at an intermediate value of α; we are not sure how this relates to the parameters of the problem.
In these experiments, for the largest values of α considered, the performance is closest to the centralized-urgency strategy in both inefficiency and unfairness, in fact surprisingly close.
In conclusion, we obtain the surprising result that, for agents that are reasonably futureconscious, Nash equilibrium strategies beat heuristic solutions in both efficiency and fairness, and their performance is extremely close to the centralized solutions.
VIII Conclusions
We have demonstrated how the efficient use of a shared infrastructure can emerge from simple coordination protocols among competitive agents, without the need for any monetary transactions or complex decision infrastructure, in sharp contrast to most of the literature. The enabler is the notion of karma: a public state that links the decisions of the same agent at different times (as long as each agent reasonably values its own future cost). A solid understanding of the mechanisms that are necessary and sufficient for fair sharing of an infrastructure has the potential to guide the design of scalable solutions in many applications, and in particular for autonomous mobility.
References
 [1] S. Samaranayake, K. Spieser, H. Guntha, and E. Frazzoli, “Ride-pooling with trip-chaining in a shared-vehicle mobility-on-demand system,” in 20th IEEE Intell. Transp. Syst. Conf., 2017, pp. 1–7.
 [2] C. Ruch, S. Hörl, and E. Frazzoli, “AMoDeus, a simulation-based testbed for autonomous mobility-on-demand systems,” in 21st IEEE Intell. Transp. Syst. Conf., 2018, pp. 3639–3644.
 [3] R. Kurzban, M. N. Burton-Chellew, and S. A. West, “The evolution of altruism in humans,” Annual Review of Psychology, vol. 66, no. 1, pp. 575–599, 2015. PMID: 25061670.
 [4] E. Ostrom, Governing the Commons. Cambridge University Press, 2015.
 [5] G. Wood et al., “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum project yellow paper, 2014.
 [6] D. I. Robertson, “TRANSYT: a traffic network study tool,” Tech. Rep. LR 253, 1969.
 [7] A. G. Sims and K. W. Dobinson, “The Sydney Coordinated Adaptive Traffic (SCAT) system philosophy and benefits,” IEEE Trans. Vehicular Tech., vol. 29, no. 2, pp. 130–137, 1980.
 [8] L. Chen and C. Englund, “Cooperative intersection management: A survey,” IEEE Trans. Intell. Transp. Syst., vol. 17, no. 2, 2016.
 [9] H. Schepperle, K. Böhm, and S. Forster, “Towards valuation-aware agent-based traffic control,” in Int. Joint Conf. Auton. Agents, 2007.
 [10] D. Carlino, S. D. Boyles, and P. Stone, “Auction-based autonomous intersection management,” in 16th IEEE Intell. Transp. Syst. Conf., 2013, pp. 529–534.
 [11] M. W. Levin and S. D. Boyles, “Intersection auctions and reservation-based control in dynamic traffic assignment,” Transp. Res. Rec.: J. Transp. Res. Board, no. 2497, pp. 35–44, 2015.
 [12] M. Vasirani and S. Ossowski, “A market-inspired approach for intersection management in urban road traffic networks,” J. Artif. Intell. Res., vol. 43, pp. 621–659, 2012.
 [13] M. Mashayekhi and G. List, “A multiagent auction-based approach for modeling of signalized intersections,” in Workshop on Synergies Between Multiagent Syst., Mach. Learn. and Complex Syst., 2015.
 [14] J. Raphael, E. I. Sklar, and S. Maskell, “An intersection-centric auction-based traffic signal control framework,” in Agent-Based Modeling of Sustainable Behaviors, 2017, pp. 121–142.
 [15] I. K. Isukapati and S. F. Smith, “Accommodating high value-of-time drivers in market-driven traffic signal control,” in IEEE Intell. Vehicles Symp., 2017, pp. 1280–1286.
 [16] M. O. Sayin, C.-W. Lin, S. Shiraishi, J. Shen, and T. Başar, “Information-driven autonomous intersection control via incentive compatible mechanisms,” IEEE Trans. Intell. Transp. Syst., no. 99, pp. 1–13, 2018.
 [17] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, Algorithmic Game Theory. Cambridge University Press, 2007.
 [18] D. Bergemann and M. Said, “Dynamic auctions,” Wiley Encycl. Operations Research and Management Science, 2010.
 [19] V. Vishnumurthy, S. Chandrakumar, and E. G. Sirer, “Karma: A secure economic framework for peer-to-peer resource sharing,” in Workshop on Economics of Peer-to-Peer Systems, vol. 35, no. 6, 2003.
 [20] F. D. Garcia and J.-H. Hoepman, “Off-line karma: A decentralized currency for static peer-to-peer and grid networks,” in 5th Int. Networking Conf. (INC’05), 2004, pp. 325–332.
 [21] W. H. Sandholm, Population Games and Evolutionary Dynamics, ser. Economic Learning and Social Evolution. MIT Press, 2010.