
Adaptive Mechanism Design: Learning to Promote Cooperation

by Tobias Baumann, et al.

In the future, artificial learning agents are likely to become increasingly widespread in our society. They will interact with both other learning agents and humans in a variety of complex settings, including social dilemmas. We consider the problem of how an external agent can promote cooperation between artificial learners by distributing additional rewards and punishments based on observing the learners' actions. We propose a rule for automatically learning how to create the right incentives by considering the players' anticipated parameter updates. Using this learning rule leads to cooperation with high social welfare in matrix games in which the agents would otherwise learn to defect with high probability. We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off after a given number of episodes, while other games require ongoing intervention to maintain mutual cooperation. However, even in the latter case, the amount of necessary additional incentives decreases over time.
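The setup described above can be illustrated with a minimal sketch: two independent policy-gradient (REINFORCE) learners play a one-shot Prisoner's Dilemma repeatedly, and an external planner pays a bonus to any agent that cooperates. This is not the paper's method — the paper's planner *learns* its incentive rule from the players' anticipated parameter updates, whereas here the bonus is a fixed, hand-coded schedule chosen purely to show the effect of extra incentives; all payoff values and hyperparameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Prisoner's Dilemma payoffs: (a1, a2) -> (r1, r2); 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3), (0, 1): (0, 4),
    (1, 0): (4, 0), (1, 1): (1, 1),
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run(bonus, episodes=5000, lr=0.1):
    """Two REINFORCE learners; a planner pays `bonus` extra reward to each
    cooperating agent (a fixed schedule standing in for a learned rule)."""
    theta = [0.0, 0.0]  # agent i cooperates with probability sigmoid(theta[i])
    for _ in range(episodes):
        p = [sigmoid(t) for t in theta]
        acts = tuple(0 if random.random() < pi else 1 for pi in p)
        rewards = list(PAYOFFS[acts])
        for i in range(2):
            if acts[i] == 0:
                rewards[i] += bonus  # planner's additional incentive
            # d/d theta of log pi(a): (1 - p) if cooperate was taken, -p if defect.
            grad = (1 - p[i]) if acts[i] == 0 else -p[i]
            theta[i] += lr * grad * rewards[i]
    return [sigmoid(t) for t in theta]

p_baseline = run(bonus=0.0)  # defection dominates; cooperation probs collapse
p_planned = run(bonus=2.0)   # bonus makes cooperating dominant; probs rise
```

With `bonus=0` defection strictly dominates, so both cooperation probabilities drift toward 0; with `bonus=2` a cooperating agent earns 5 vs. 4 (opponent cooperates) and 2 vs. 1 (opponent defects), so cooperation becomes dominant and both probabilities drift toward 1 — mirroring the abstract's contrast between unassisted learners and learners under the planner's incentives.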



