
Strongly-Typed Agents are Guaranteed to Interact Safely

by David Balduzzi, et al.

As artificial agents proliferate, it is becoming increasingly important to ensure that their interactions with one another are well-behaved. In this paper, we formalize a common-sense notion of when algorithms are well-behaved: an algorithm is safe if it does no harm. Motivated by recent progress in deep learning, we focus on the specific case where agents update their actions according to gradient descent. The first result is that gradient descent converges to a Nash equilibrium in safe games. The paper provides sufficient conditions that guarantee safe interactions. The main contribution is to define strongly-typed agents and show they are guaranteed to interact safely. A series of examples shows that strong-typing generalizes certain key features of convexity and is closely related to blind source separation. The analysis introduces a new perspective on classical multilinear games based on tensor decomposition.
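The dynamics the abstract refers to, each agent simultaneously descending the gradient of its own loss, can be sketched in a few lines. The game below is an illustrative toy example, not a construction from the paper: a zero-sum quadratic game whose unique Nash equilibrium is the origin, on which simultaneous gradient descent happens to converge.

```python
# Illustrative sketch (not the paper's construction): two agents update
# scalar actions by simultaneous gradient descent on the zero-sum game
#   f(x, y) = x^2 - y^2 + x*y,
# where player 1 minimizes f over x and player 2 minimizes -f over y.
# The unique Nash equilibrium is (x, y) = (0, 0).

def grad_x(x, y):
    return 2 * x + y   # d f / d x, player 1's gradient

def grad_y(x, y):
    return 2 * y - x   # d (-f) / d y, player 2's gradient

x, y = 3.0, -2.0       # arbitrary starting actions
lr = 0.1               # step size
for _ in range(500):
    gx, gy = grad_x(x, y), grad_y(x, y)  # evaluate both gradients first,
    x -= lr * gx                         # then apply the updates
    y -= lr * gy                         # simultaneously

print(x, y)  # both actions approach the Nash equilibrium at (0, 0)
```

Note that simultaneous gradient descent does not converge in every game (on the purely bilinear game f(x, y) = x*y the same iteration spirals outward); identifying conditions under which such interactions are well-behaved is exactly what the paper's notion of safety addresses.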

