1 Introduction
Segregation is a well-known sociological phenomenon which is intensely monitored and investigated by sociologists and economists. It essentially means that a community of people which is mixed along, e.g., ethnic, racial, linguistic or religious dimensions tends to segregate over time such that almost homogeneous subcommunities emerge. The most famous example of this phenomenon is residential segregation along racial lines in many urban areas in the US. (See the racial dot map [8] for an impressive visualization.)
To explain the emergence of residential segregation, Schelling [18] proposed in a seminal paper a very simple and elegant agent-based model. In Schelling's model, agents of two different types are placed on a line or a grid which models some residential area. Each agent is aware of its neighboring agents and is content with her current residential position if at least a certain fraction of the agents in her neighborhood is of the same type. If this condition is not met, then the agent becomes discontent with her current position and either exchanges positions with a randomly chosen discontent agent of the other type or jumps to a randomly chosen empty spot. (A playful interactive demonstration can be found in [12].) Schelling showed with simple experiments using coins, graph paper and random numbers that even with tolerant agents, i.e., with a tolerance threshold of at most one half, the society of agents will eventually segregate into almost homogeneous communities. This surprising observation caught the attention of many economists, physicists, demographers and computer scientists, who studied related random models and verified experimentally that tolerant local neighborhood preferences can nonetheless induce global segregation in social and residential networks, see e.g. [1, 4, 10, 13].
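Schelling's threshold dynamics are easy to reproduce in simulation. The following sketch implements the swap variant on a ring; all concrete parameter values (ring size, tolerance threshold, window size) are illustrative choices of ours, not values from the literature:

```python
import random

def simulate_ring(n=100, tau=0.5, window=2, steps=10000, seed=0):
    """Toy Schelling swap dynamics on a ring: repeatedly pick a random
    discontent agent and swap her with a random discontent agent of the
    other type, until no such pair remains or the step budget runs out."""
    rng = random.Random(seed)
    agents = ['A'] * (n // 2) + ['B'] * (n - n // 2)
    rng.shuffle(agents)

    def happy(i):
        # agents within distance `window` on the ring, excluding i itself
        nbrs = [agents[(i + d) % n] for d in range(-window, window + 1) if d != 0]
        same = sum(1 for t in nbrs if t == agents[i])
        return same / len(nbrs) >= tau

    for _ in range(steps):
        unhappy = [i for i in range(n) if not happy(i)]
        if not unhappy:
            break
        i = rng.choice(unhappy)
        partners = [j for j in unhappy if agents[j] != agents[i]]
        if not partners:
            break
        j = rng.choice(partners)
        agents[i], agents[j] = agents[j], agents[i]
    return agents

def avg_run_length(agents):
    """Average length of maximal monochromatic runs, read linearly
    (ring wrap-around ignored); a crude proxy for segregation."""
    runs, cur = [], 1
    for a, b in zip(agents, agents[1:]):
        if a == b:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return sum(runs) / len(runs)
```

Comparing `avg_run_length` before and after the loop typically shows markedly longer monochromatic runs after convergence, mirroring Schelling's coin-and-graph-paper experiments.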
To the best of our knowledge, all agent-based models of segregation are essentially random processes where discontent agents choose their new location at random. In this paper we depart from this assumption by introducing and analyzing a game-theoretic version of Schelling's model where agents strategically choose their location. Empirically, our model yields outcomes which are very similar to Schelling's original model; see Fig. 1 for an example.
Moreover, our model generalizes Schelling’s model since we allow agents to have preferences over the available locations. Hence, we introduce and explore the influence of such individual location preferences.
1.1 Related Work
There is a huge body of work on Schelling's model and variations thereof, see e.g. [17, 19, 9, 21, 4, 5, 16, 22, 20]. Most related work is purely empirical and provides simulation results. We focus here on the surprisingly small body of related work that rigorously proves properties of (variants of) Schelling's model.
Young [22] was the first to rigorously analyze a variant of the one-dimensional segregation model by using techniques from evolutionary game theory. He considered the specific dynamics where a pair of agents is chosen at random and they swap places with a suitably chosen probability. He then analyzed the induced Markov chain and proved that under certain conditions total segregation is, with high probability, a stochastically stable state. Later, Zhang [23, 24] proved similar results in two-dimensional models. The first rigorous analysis of the original Schelling model was achieved by Brandt et al. [7] for the case where agents are located on a ring and can only swap positions. They prove that the process converges with high probability to a state where the average size of monochromatic neighborhoods is polynomial in the window size used for determining the neighborhood. Interestingly, Barmpalias et al. [2] have proven a drastically different behavior in a related setting, where the size of monochromatic neighborhoods is exponential in the window size. Later, Barmpalias et al. [3] analyzed a two-dimensional variant where both agent types have different tolerance parameters and agents may change their type if they are discontent. Finally, Immorlica et al. [14] considered the random Schelling dynamics on a two-dimensional toroidal grid for a tolerance parameter slightly perturbed away from one half. Their main result is a proof that the average size of monochromatic neighborhoods is exponential in the window size.
Not much work has been done on the game-theoretic side. To the best of our knowledge, only the model by Zhang [24] is game-theoretic and closely related to Schelling's model. In this game, agents are placed on a toroidal grid and are endowed with a noisy single-peaked utility function which depends on the ratio of the two agent types in any local neighborhood. The highest utility is attained in perfectly balanced neighborhoods, and agents slightly prefer being in the majority over being in the minority. In contrast to our model, Zhang's model [24] assumes transferable utilities, and it can happen that after a randomly chosen swap one or both agents are worse off. Moreover, Zhang's model does not incorporate the threshold behavior at the tolerance parameter. However, despite the different model, Zhang [24] uses a potential function similar to the one we use in this paper.
We note that hedonic games [11, 6] are also remotely related to Schelling’s model, but there the utility of an agent only depends on her chosen coalition. In Schelling’s model the neighborhood of an agent could be considered as her coalition, but then not all agents in a coalition derive the same utility from it.
1.2 Model and Notation
We consider a network G = (V, E), where V is the set of nodes and E is the set of edges, which is connected, unweighted and undirected. If every node of G has the same degree, i.e., the same number of incident edges, then we call G regular. The distance between two nodes u and v in G is the number of edges on a shortest path between u and v. The diameter of G is the length of the longest shortest path between any pair of nodes. For a given node v, let N_w(v) be the set of nodes which are in distance at most w from node v. We call N_w(v) the neighborhood of v, and w is the window size. We will omit the subscript w whenever a statement holds for all window sizes.
Agents of two different types are located on the nodes of the network. There are two disjoint sets of agents, A and B, and we say that all agents in A are of type A and all agents in B are of type B. In each state of our game, there is an injective mapping between agents and nodes which we call a placement. In any placement, a node of the network can be occupied by exactly one agent, either from A or from B, or the node can be empty. Consider any placement and let a and b, with a in A and b in B, be agents which are neighbors under this placement. In this case, we call (a, b) a colored pair.
For any agent occupying a node v, we define her set of same-type neighbors as the set of other nodes in the neighborhood of v which are occupied by agents of the same type as the agent on v, and her set of other-type neighbors as the corresponding set of nodes occupied by agents of the other type. If both sets are empty, then the agent has no neighboring agents and we say that the agent is isolated.
Let τ be the tolerance parameter. Similar to Schelling's model, we say that a non-isolated agent is happy or content with a placement if at least a fraction τ of the agents which occupy the nodes in her neighborhood under the placement are of the same type as her; otherwise the agent is unhappy or discontent with the placement. Moreover, we will assume that isolated agents are always unhappy. We call the fraction of same-type agents among all agents in the neighborhood the local happiness ratio of the agent. Besides having preferences about the neighborhood structure, every agent may have a favorite node fav in the network.
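The neighborhood and the local happiness ratio can be computed directly from the graph. A minimal sketch follows; the dictionary-based graph representation and the function names are our own, not notation from the paper:

```python
from collections import deque

def neighborhood(adj, v, w):
    """Nodes within graph distance at most w of v (excluding v itself),
    computed by breadth-first search on the adjacency dict `adj`."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if dist[u] == w:
            continue  # do not expand beyond the window size
        for x in adj[u]:
            if x not in dist:
                dist[x] = dist[u] + 1
                q.append(x)
    return set(dist) - {v}

def happiness_ratio(adj, placement, agent_type, v, w):
    """Local happiness ratio of the agent of type `agent_type` on node v:
    same-type occupied neighbors divided by all occupied neighbors.
    Returns None for an isolated agent (always treated as unhappy)."""
    occ = [u for u in neighborhood(adj, v, w) if placement.get(u) is not None]
    if not occ:
        return None
    same = sum(1 for u in occ if placement[u] == agent_type)
    return same / len(occ)
```

An agent with tolerance parameter `tau` is then happy exactly when `happiness_ratio(...)` is not `None` and is at least `tau`.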
The cost function of our agents is based on two main assumptions:
An agent’s high priority goal is to find a location where she is happy.
An agent’s low priority goal is to find a location which is as close as possible to her favorite location.
Thus, a happy agent strives for locations where she remains happy but is as close as possible to her favorite node fav. If an agent is unhappy, she will try to improve her local happiness ratio. If this is not possible, then she will select a location which has the maximum possible local happiness ratio and which is closest to fav.
We incorporate these assumptions in our cost function as follows: the cost of a non-isolated agent under a given placement is a two-dimensional vector whose first entry is zero if and only if she is happy (and otherwise measures how far her local happiness ratio falls below the tolerance parameter), and whose second entry is her distance to her favorite node plus one. For an isolated agent, the first entry takes its maximum possible value. Thus, an agent is happy with a placement if and only if the first entry of her cost vector is zero. Note that we use the distance plus one instead of the distance itself as the second component of the cost vector for technical reasons; this has no influence on the behavior of the agents.
We compare cost vectors by the lexicographic order. Agents want to minimize their cost vector lexicographically, i.e., it is more important for an agent to be happy than to be close to her favorite node.
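The lexicographic comparison of cost vectors is illustrated below. The exact first entry of the paper's cost vector is not reproduced here; we use the shortfall max(0, tau − ratio) as a stand-in that is zero exactly when the agent is happy, which is all the lexicographic order depends on:

```python
def cost_vector(ratio, dist_to_fav, tau=0.5):
    """Hypothetical cost vector.  First entry: shortfall of the local
    happiness ratio below the tolerance parameter tau (zero iff happy);
    this concrete formula is our stand-in, not the paper's definition.
    Second entry: distance to the favorite node plus one, as in the model.
    A ratio of None marks an isolated agent, who is always unhappy."""
    unhappiness = tau if ratio is None else max(0.0, tau - ratio)
    return (unhappiness, dist_to_fav + 1)
```

Python tuples already compare lexicographically, so `cost_vector(0.6, 5) < cost_vector(0.4, 0)` holds: being happy dominates being close to the favorite node, and distance only breaks ties among equally (un)happy positions.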
The social cost of a placement is the vector consisting of the number of unhappy agents and the sum of all agents' distance terms.
The strategy space of an agent is the set of all nodes of the network. A strategy vector is feasible if all of its entries are pairwise distinct. Clearly, there is a bijection between feasible strategy vectors and placements, and we will use them interchangeably. For the possible strategy changes of an agent there are two versions, which yield the Swap Schelling Game and the Jump Schelling Game.
The Swap Schelling Game: In the Swap Schelling Game (SSG) only pairs of agents can jointly change their strategies by swapping locations. Two agents agree to swap their nodes only if both of them strictly decrease their cost by swapping. A placement is stable if no pair of agents can improve their costs via swapping. Hence, stable placements correspond to 2-coalitional pure Nash equilibria. Since locations can only be swapped, we will assume throughout the paper that there are no empty nodes, that is, the placement is also surjective.
The Jump Schelling Game: In the Jump Schelling Game (JSG) an agent can change her strategy to any currently empty node, which constitutes a "jump" to that node. An agent will jump to an empty node if this strictly decreases her cost. Here a stable placement corresponds to a pure Nash equilibrium.
Different Variants: Besides the variant in which every agent has her own individual favorite node, we will consider two additional variants of the SSG and the JSG, depending on the favorite nodes of the agents. If the agents do not have a favorite node, then we call these versions uniform (uSSG or uJSG) and we simply ignore the second entry in the cost vector. Note that the uniform versions are very close to Schelling's original model. If all agents have the same favorite node, then we call these games common favorites (cfSSG or cfJSG). Observe that this variant is especially interesting since it models the case where some particular location is intrinsically more attractive than others to all agents, e.g. it could be the most popular location in a city.
Dynamic Properties: We will use ordinal potential functions. Such a function maps placements to real numbers with the property that whenever a placement results from an improving move by a (pair of) agent(s), the potential function value strictly decreases. If an ordinal potential function exists for some special case of the game, then this special case has the finite improvement property (FIP), which states that any sequence of improving moves must be finite. Having the FIP is equivalent to the game being a potential game [15]. Such games have many attractive properties, like the guaranteed existence of pure equilibria and often fast convergence to such a stable state. Moreover, a potential function is useful for analyzing the quality of equilibria. In contrast, if an infinite sequence of improving moves, usually called an improving response cycle (IRC), exists, then there cannot exist an ordinal potential function.
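On small instances, the existence of an improving response cycle (equivalently, the failure of the FIP) can be checked mechanically: build the directed graph whose arcs are the improving moves between placements and test it for a directed cycle. The sketch below is generic; the dictionary representation of the move graph is our own, and its construction for a concrete game is left abstract:

```python
def has_improving_cycle(graph):
    """graph: dict mapping each state to the list of states reachable by
    one improving move.  Returns True iff the improving-move relation
    contains a directed cycle, i.e., iff the FIP fails.  Standard
    three-color depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in graph}

    def dfs(s):
        color[s] = GRAY
        for t in graph[s]:
            if color[t] == GRAY:       # back edge: cycle found
                return True
            if color[t] == WHITE and dfs(t):
                return True
        color[s] = BLACK
        return False

    return any(color[s] == WHITE and dfs(s) for s in graph)
```

If the move graph is acyclic, every improving-response sequence is finite; a back edge found by the search is exactly an improving response cycle.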
1.3 Our Contribution
We introduce the first agent-based model for Schelling segregation where the agents strategically choose their locations. For this, we consider a generalization of Schelling's model where agents, besides having preferences over their local neighborhood structure, also have preferences over the possible locations. This introduces the important aspect of individual location differentiation, which has a significant influence on residential decisions in real life.
Our main contribution is a thorough investigation of the convergence properties of many variants of our model. See Table 1 for details.
uSSG | cfSSG | SSG | uJSG | cfJSG & JSG
(T.1) | reg. (T.3) | ring (T.5) | ring (T.6) | IRC (T.7)
(T.1) | reg. (T.3) | reg. (T.4) | ring (T.6) | IRC (T.7)
reg. (T.2) | reg. (T.3) | reg. (T.4) | ring (T.6) | IRC (T.7)
In particular, we prove guaranteed convergence to an equilibrium for the uSSG, which essentially is Schelling's model, if tolerant agents are restricted to location swaps or if the underlying network is regular. In contrast, previous work [7, 2, 3, 14] has only established that the process converges with high probability. Moreover, the (cf)SSG also behaves nicely on regular networks. In contrast to this, we show that location preferences have a severe impact in the (cf)JSG, since improving response cycles exist, which implies that there cannot exist a potential function.
Furthermore, we investigate basic properties of stable placements and their efficiency in the (u)SSG. In particular, we prove tight bounds on a variant of the Price of Anarchy for the (u)SSG.
2 Dynamic Properties
We analyze the convergence behavior of the Schelling game. Our main goal is to investigate under which conditions an ordinal potential function exists.
2.1 Dynamic Properties of the Swap Schelling Game
We prove for various special cases of the SSG that they are actually potential games. For this, we analyze the change in the value of a suitably chosen potential function for an arbitrary location swap of two agents. Such a swap changes the current placement only in the locations of the two involved agents and yields a new placement.
Theorem 1.
If the tolerance parameter is at most one half, then the uSSG is a potential game for any window size.
Proof.
We prove the statement by showing that the number of colored pairs under the current placement is an ordinal potential function. First of all, notice that a swap between two agents will only be executed if both agents are unhappy and of different types: a swap between agents of the same type cannot be an improvement for at least one of the involved agents, and a happy agent has no possibility to improve, so she has no incentive to swap. An agent decreases her cost if and only if she is unhappy and reduces the ratio of neighbors of a different type by swapping.
Hence, since the tolerance parameter is at most one half, both swapping agents strictly decrease their number of neighbors of the other type. As only colored pairs involving the two swapped nodes are affected by the swap, the total number of colored pairs strictly decreases. This implies that the change in the potential function value is negative.
∎
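The key step of this proof, that an improving swap strictly decreases the number of colored pairs, can be replayed on a toy instance. The ring, window size and tolerance value below are our own illustrative choices:

```python
def colored_pairs(types, edges):
    """Number of edges whose endpoints host agents of different types."""
    return sum(1 for (u, v) in edges if types[u] != types[v])

# Ring of 6 nodes with alternating types: every edge is a colored pair,
# and (for window size 1 and tolerance 1/2) every agent is unhappy.
edges = [(i, (i + 1) % 6) for i in range(6)]
types = ['A', 'B', 'A', 'B', 'A', 'B']
before = colored_pairs(types, edges)   # 6 colored pairs

# Improving swap between the unhappy agents on nodes 1 and 2:
# afterwards each of them has one same-type neighbor, so both are happy.
types[1], types[2] = types[2], types[1]
after = colored_pairs(types, edges)    # 4 colored pairs
assert after < before                  # the potential strictly decreases
```

Repeating such swaps can only happen finitely often, since the number of colored pairs is a nonnegative integer that strictly decreases with every improving swap.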
Remark 1.
The number of colored pairs is not a potential function for the (cf)SSG. See Fig. 2 below.
Theorem 2.
For any tolerance parameter, the uSSG on regular networks is a potential game.
Proof.
We prove the statement by showing that the number of colored pairs is an ordinal potential function. Analogously to the proof of Theorem 1, there is no incentive for a happy agent to swap, or for an unhappy agent to swap with an agent of her own type. So an unhappy agent will only swap with another unhappy agent of a different type.
Since we consider the uSSG on regular networks, both agents have the same neighborhood size. Hence, for any improving swap between two agents of different types, each of them strictly increases her number of same-type neighbors.
We have to distinguish between two cases:

Assume the two swapping agents are in different neighborhoods, i.e., neither occupies a node in the neighborhood of the other. Observe that all agents who were same-type neighbors of one of the swapped nodes before the swap are other-type neighbors of that node after the swap, and vice versa. Hence, since both agents strictly increase their number of same-type neighbors, the number of colored pairs incident to the two swapped nodes strictly decreases.

Assume the two swapping agents are in the same neighborhood. All agents other than the two swapping agents who were same-type neighbors of one of the swapped nodes before the swap are other-type neighbors of that node after the swap, and vice versa; the two swapping agents themselves still occupy neighboring nodes and form a colored pair before and after the swap. Hence, also in this case the number of colored pairs incident to the two swapped nodes strictly decreases.
Since a swap between two agents only affects colored pairs in which one of the swapped nodes is involved, the total number of colored pairs, and thus the potential function value, strictly decreases. ∎
Remark 2.
The number of colored pairs is not a potential function for the uSSG on non-regular networks. See Fig. 3.
Theorem 3.
For any tolerance parameter, the cfSSG is a potential game on regular networks.
Proof.
Analogously to the proof of Theorem 2, we prove the statement by showing that the vector consisting of the number of colored pairs and the sum of all agents' distance costs, compared lexicographically, is an ordinal potential function.
Again, note that there is no incentive for an agent to swap with another agent of the same type, since such a swap is a setback for at least one of the involved agents, as she either becomes unhappy or increases her distance cost. Hence, without loss of generality, we consider a swap between two agents of different types.
If both agents are happy before the swap, they will not swap either: either at least one of them becomes unhappy or, since both have the same favorite node fav, the decrease in the distance cost of one agent is equal to the increase in the distance cost of the other. Thus the swap is not an improving move for at least one of the involved agents.
Hence, we just have to consider the case where at least one agent is unhappy. First, we assume without loss of generality that one agent is unhappy before the swap and the other is happy. Again, since we consider regular networks, both agents have the same neighborhood size. The happy agent is only willing to swap if she stays happy and gets strictly closer to her favorite node fav. For this reason, a possible swap will increase the unhappy agent's distance cost. Therefore, since a swap has to be an improvement for all involved agents, the unhappy agent must strictly improve her local happiness ratio.
Analogously to the proof of Theorem 2, we have to distinguish between two subcases:

Assume the two agents are in different neighborhoods. Then, exactly as in the proof of Theorem 2, the number of colored pairs incident to the two swapped nodes strictly decreases.

Assume the two agents are in the same neighborhood. Also in this case, the counting from the proof of Theorem 2 shows that the number of colored pairs incident to the two swapped nodes strictly decreases.
Since a swap between two agents only affects colored pairs in which one of the swapped nodes is involved, the first entry of the potential function strictly decreases, and hence the potential function decreases lexicographically.
The second case we have to consider is when both agents are unhappy before the swap. If the swap does not change the respective distances to fav, then we already showed in the proof of Theorem 2 that such a swap decreases the number of colored pairs and hence the potential function. Otherwise one agent increases her distance cost by the swap; since the swap must still be an improvement for her, she has to strictly improve her local happiness ratio, which leads us back to the previous case. ∎
Theorem 4.
For any tolerance parameter and any window size, the SSG is a potential game on regular networks.
Proof.
We prove the theorem by showing that the vector consisting of the number of colored pairs and the sum of all agents' distance costs, compared lexicographically, is an ordinal potential function, i.e., that its value decreases lexicographically whenever two agents make an improving swap.
Since we consider regular networks, both swapping agents have the same neighborhood size.
From the proof of Theorem 2 we already know that whenever an agent involved in the swap improves her local happiness ratio, the number of colored pairs decreases. Also, from the definition of the SSG we know that every agent first tries to maximize her local happiness ratio. This implies that the potential function decreases lexicographically in this case.
So we just have to consider the cases where two agents swap only to decrease their distance costs.
When two agents of the same type swap, the number of colored pairs stays the same, so the first entry of the potential function does not change. Since the agents only swap when both decrease their distance costs, the sum of all distance costs decreases, which reduces the second entry of the potential function.
Two happy agents of different types can only swap if they are in different neighborhoods and both remain happy; otherwise at least one of them would become unhappy after the swap and would not agree to it. In such a swap the number of colored pairs again stays the same, so the first entry of the potential function does not change. However, since the swap improves the distance costs of both agents, the sum of all distance costs decreases, which reduces the second entry of the potential function.
Finally, assume without loss of generality that one of the swapping agents is unhappy. She only swaps if her local happiness ratio stays the same: if it increased, we would be in the case already discussed further above, and if it decreased, the swap would not be an improvement for her. Since the graph is regular, the same holds for the other agent, and the number of colored pairs is unchanged. Again, both agents swap to decrease their distance to their favorite node, which leads to a decrease in the sum of all distance costs and thus reduces the second entry of the potential function. ∎
Theorem 5.
If the tolerance parameter is at most one half and the window size is 1, then the SSG on a ring is a potential game.
Proof.
We use an argument similar to the one in the proof of Theorem 4 and prove that the potential function from that proof, the vector consisting of the number of colored pairs and the sum of all distance costs, decreases lexicographically with every improving swap.
Since the tolerance parameter is at most one half, an agent on the ring becomes happy as soon as she has at least one neighbor of her own type. We have already shown in the proof of Theorem 2 that if a swap improves the local happiness ratio of an involved agent, then this kind of swap always reduces the first entry of the potential function, so the potential function decreases lexicographically.
Now we look at the cases when an agent reduces her distance cost by swapping. The case where at least one unhappy agent is involved in the swap is analogous to the proof of Theorem 4. Thus, we are left to consider the case where two happy agents swap.
Two happy agents will swap if and only if they remain happy after the swap and if they get closer to their favorite node by swapping. We show that after such a swap the number of colored pairs stays the same. This implies that the potential function decreases lexicographically under such a swap, since its first entry stays the same but its second entry decreases.
If two happy agents of the same type swap, then trivially nothing changes. If two happy agents of different types swap, then each of them must have exactly one neighboring agent of her own type and one of the other type, and the two agents have to be in different neighborhoods. Thus, also in this case the number of colored pairs does not change. ∎
2.2 Dynamic Properties of the Jump Schelling Game
Now we consider the JSG. Remember that in the JSG agents can only decrease their cost by jumping to empty nodes; such a jump changes the current placement only in the location of the jumping agent. We prove that the uJSG on a ring network is a potential game. Furthermore, we show that the cfJSG and the JSG are not potential games.
Theorem 6.
If the window size is 1 and the underlying graph is a ring network, then the uJSG is a potential game.
Proof.
For any ring network we define a suitable weight for every edge, depending on the types of the agents occupying its endpoints (or on their emptiness), and use the sum of all edge weights as the potential function. We prove that this sum decreases whenever an agent makes an improving jump to some other node of the ring.
An agent will jump to a new node only if this improves her local happiness ratio when comparing the placements before and after the jump. Since the underlying graph is a ring, this implies that an unhappy agent jumps to a different node only if she increases the number of neighboring agents of her own type. Since the degree of every node in a ring is 2, the jump of an agent can affect the weights of at most four edges: the two edges incident to her old node and the two edges incident to her new node. Hence, the change in the weights of the involved edges equals the total change in the potential function value.
Without loss of generality, let the jumping agent be an unhappy agent who executes an improving jump. Depending on her neighboring nodes, we have the following cases:

Both neighboring agents of the jumping agent are of the other type.

Both neighboring nodes are empty.

One neighboring node is empty and the other one is occupied by an agent of the other type.

One neighboring agent is of the jumping agent's type and the other is of the other type. (This case is only interesting for a tolerance parameter above one half.)
It is easy to check that in all cases the potential function decreases; see Fig. 4.
∎
Theorem 7.
There cannot exist an ordinal potential function for the cfJSG or the JSG, for any tolerance parameter.
Proof.
We prove the statement by giving two examples of improving response cycles for the cfJSG on a grid, covering different ranges of the tolerance parameter. Since the cfJSG is a special case of the JSG, the statement holds for both variants.

Consider Fig. 5. The improving response cycle consists of six steps. We have green and blue agents and several (gray) empty nodes, and all agents have the same favorite node, shown in purple. In the first step the blue agent is unhappy; by jumping she becomes happy and reduces her cost. The green agent is happy, and by jumping she gets closer to the common favorite node while remaining happy, so she also reduces her cost. Now the green agent jumps to the common favorite node. The blue agent then jumps back to her old node and reduces her cost. Because of the movement of the blue agent, the green agent becomes unhappy and jumps back to her old node. In the next step the green agent also jumps back to her old node and reduces her cost, which brings us back to the same situation as in the beginning.

Consider Fig. 6, which shows an analogous improving response cycle of six steps for a different range of the tolerance parameter. Again we have green and blue agents, several (gray) empty nodes, and a common favorite node shown in purple. In the first step the blue agent is unhappy; by jumping she becomes happy and reduces her cost. The green agent is happy, and by jumping she gets closer to the common favorite node and reduces her cost. Now the green agent jumps to the common favorite node. The blue agent then jumps back to her old node and reduces her cost. Because of the movement of the blue agent, the green agent becomes unhappy and jumps back to her old node. In the next step the green agent also jumps back to her old node and reduces her cost, which brings us back to the same situation as in the beginning.
∎
3 Efficiency of Stable Placements
In this section we investigate the properties of stable placements. In particular, we investigate their (in)efficiency.
We start by proving that stable placements exist for many of our versions.
Theorem 8.
Stable placements exist for the uSSG, the cfSSG and the uJSG.
Proof.
We prove the existence separately.

uSSG: Fix a suitable starting node. Place all agents of one type, say type A, sequentially in the following way: (1) the first agent of type A is placed at the starting node; (2) each further agent of type A is placed at an empty node which maximizes her local happiness ratio; (3) the agents of type B are placed on the remaining nodes. No happy agent will swap, as this is not an improving move. Moreover, there is no unhappy agent of type A who wants to swap, since the procedure ensures that all agents of type A have the maximum possible local happiness ratio. Thus the placement is stable.

cfSSG: The placement procedure is analogous to the one we introduced for the uSSG. The only difference is that we now take the common favorite node into account. Again we place all agents of type A sequentially in the following way: (1) the first agent of type A is placed at the favorite node; (2) in each further step, an agent of type A is placed at an empty node which, among all empty nodes maximizing her local happiness ratio, is closest to the favorite node. Then all agents of type B are placed on the remaining nodes.
It holds that every agent of type A is either happy and as close as possible to her favorite node, or unhappy with the maximum possible local happiness ratio. Thus no agent of type A will swap. Since all agents have the same favorite node, no agents of type B will swap either. Therefore we have a stable placement.

uJSG: Fix two suitable starting nodes, one for each type. Place all agents of type A sequentially in the following way: (1) the first agent of type A is placed at the first starting node; (2) each further agent of type A is placed at an empty node which maximizes her local happiness ratio. Then all agents of type B are sequentially placed in an analogous way, starting from the second starting node and using the local happiness ratio of each type-B agent.
All agents of type A are either happy, or unhappy with the maximum possible local happiness ratio. The same holds for the agents of type B. Thus, the placement is stable.
∎
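The greedy constructions above can be sketched in code. The version below covers the uSSG case with window size 1; the concrete tie-breaking, the adjacency-dict representation and the choice of the starting node are our own simplifications, not part of the proof:

```python
def greedy_placement(adj, n_a, seed_node):
    """Greedy construction in the spirit of Theorem 8 (uSSG, window size 1):
    place type-A agents one by one, each on the empty node with the most
    already-placed type-A neighbors, then fill all remaining nodes with B.
    adj: adjacency dict; n_a: number of type-A agents; seed_node: start."""
    placed = {seed_node: 'A'}
    for _ in range(n_a - 1):
        empty = [v for v in adj if v not in placed]
        # greedily maximize the number of same-type neighbors
        best = max(empty, key=lambda v: sum(1 for u in adj[v]
                                            if placed.get(u) == 'A'))
        placed[best] = 'A'
    for v in adj:
        placed.setdefault(v, 'B')
    return placed
```

On a path graph this produces one contiguous block of type-A agents followed by type-B agents, matching the intuition that the construction packs each type as densely as possible.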
Now we move on to proving basic properties of stable placements.
Theorem 9.
Consider a stable placement for the SSG on some graph. The following statements hold:

If the tolerance parameter is at most one half, then at most one type of agents can be unhappy. Moreover, there exists a stable placement with unhappy agent(s).

Otherwise, there exist stable placements with unhappy agents of both types.

There is a graph such that some stable placement has a better total distance cost than the socially optimal placement.

For certain combinations of the tolerance parameter and the window size, there exists a graph such that there is no placement in which at least one agent is happy.
Proof.

We prove the first statement by contradiction. Assume that there exists a stable placement in which an agent of each type is unhappy. Since the tolerance parameter is at most one half, each of the two unhappy agents has a majority of agents of the other type in her neighborhood. Thus, a swap between the two agents will ensure that both agents are happy afterwards and therefore strictly decreases both their costs. This contradicts the fact that the placement is stable. See Fig. 7 for an example of a stable placement with an unhappy agent; the placement is stable since all but one agent are content.