Topological Bounds on the Price of Anarchy of Clustering Games on Networks

We consider clustering games in which the players are embedded into a network and want to coordinate (or anti-coordinate) their strategy with their neighbors. The goal of a player is to choose a strategy that maximizes her utility given the strategies of her neighbors. Recent studies show that even very basic variants of these games exhibit a large Price of Anarchy: a large gap between the total utility generated in centralized outcomes and in equilibrium outcomes in which players selfishly maximize their utility. Our main goal is to understand how structural properties of the network topology impact the inefficiency of these games. We derive topological bounds on the Price of Anarchy for different classes of clustering games. These topological bounds provide a more informative assessment of the inefficiency of these games than the corresponding worst-case Price of Anarchy bounds. More specifically, depending on the type of clustering game, our bounds reveal that the Price of Anarchy depends on the maximum subgraph density or the maximum degree of the graph. Among other things, these bounds enable us to derive bounds on the Price of Anarchy for clustering games on Erdős-Rényi random graphs. Depending on the graph density, these bounds stand in stark contrast to the known worst-case Price of Anarchy bounds. Additionally, we characterize the set of distribution rules that guarantee the existence of a pure Nash equilibrium or the convergence of best-response dynamics. These results are in a similar spirit to the work of Gopalakrishnan et al. [19] and complement work of Anshelevich and Sekar [4].


INTRODUCTION
Clustering games on networks constitute a class of strategic games in which the players are embedded into a network and want to coordinate, or anti-coordinate, their choices with their neighbors. These games capture several key characteristics encountered in applications such as opinion formation, technology adoption, information diffusion, or virus spreading on various types of networks, e.g., the Internet, social networks, and biological networks.
Different variants of clustering games have recently been studied intensively in the algorithmic game theory literature, both with respect to the existence and the inefficiency of equilibria; see, e.g., [4,5,16,20,21,24,30,33]. Unfortunately, several of these studies reveal that the strategic choices of the players may lead to equilibrium outcomes that are highly inefficient. Arguably the most prominent notion to assess the inefficiency of equilibria is the Price of Anarchy (PoA) [29], which refers to the worst-case ratio of the optimal social welfare to the social welfare of a (pure) Nash equilibrium. It is known that even the most basic clustering games exhibit a large, sometimes even unbounded, Price of Anarchy (see below for details). These negative results naturally trigger the following questions: Is this high inefficiency inevitable in clustering games on networks? Or can we trace more precisely what causes a large inefficiency? These questions constitute the starting point of our investigations: Our main goal in this paper is to understand how structural properties of the network topology impact the Price of Anarchy in clustering games. In general, our idea is that a more fine-grained analysis may reveal topological parameters of the network which can be used to derive more accurate bounds on the Price of Anarchy. Given the many applications of clustering games on different types of networks, our hope is that such topological bounds are more informative than the corresponding worst-case bounds.
Clearly, this hope is elusive for a number of fundamental games on networks whose inefficiency is known to be independent of the network topology. Arguably, the most prominent example is selfish routing games, whose Price of Anarchy has been analyzed in the seminal works of Roughgarden and Tardos [36] and Roughgarden [35]. But, in contrast to these games, clustering games exhibit a strong locality property induced by the network structure: the utility of each player is affected only by the choices of her direct neighbors in the network. This observation also motivates our choice of quantifying the inefficiency by means of topological parameters rather than other parameters of the game.
In this paper, we derive topological bounds on the Price of Anarchy for different classes of clustering games. Our bounds reveal that the Price of Anarchy depends on different topological parameters in the case of symmetric and asymmetric strategy sets of the players: While they exhibit a dependency on the maximum subgraph density for symmetric clustering games, they reveal a dependency on the maximum degree for asymmetric clustering games. Using these topological bounds, we are able to derive improved bounds for certain special graph classes as simple corollaries.
We also use our topological bounds to obtain a precise understanding of the Price of Anarchy of clustering games on Erdős-Rényi random graphs [18]. Our results reveal that, depending on the density of the graph, the Price of Anarchy improves significantly over the known worst-case bounds. To the best of our knowledge, this is also the first work that addresses the inefficiency of equilibria on random graphs. The applicability of our topological Price of Anarchy bounds is not limited to the class of Erdős-Rényi random graphs. The main reason for using these graphs is that their structural properties are well-understood. In particular, our topological bounds can be applied directly to any graph class of interest for which the above topological parameters are well-understood.
Apart from our topological Price of Anarchy bounds, we also give a complete characterization of the distribution rules, which determine how the utility generated by two adjacent players in the network is split when they (anti-)coordinate, that guarantee the convergence of best-response dynamics in symmetric clustering games. These results extend and complement the results of Gopalakrishnan et al. [19] and Anshelevich and Sekar [4].
Altogether, our results give a more fine-grained view on clustering and coordination games.

Our Clustering Games
We study a generalization of the unifying model of clustering games introduced by Feldman and Friedler [16]: We are given an undirected graph G = (V, E) on n = |V| nodes whose edge set E = E_c ∪ E_a is partitioned into a set of coordination edges E_c and a set of anti-coordination edges E_a. The game is called a coordination game if all edges are coordination edges and an anti-coordination game (or cut game) if all edges are anti-coordination edges. Further, we are given a set [c] = {1, . . . , c} of c > 1 colors and non-negative edge weights w = (w_e)_{e∈E}. Each node i ∈ V corresponds to a player who chooses a color s_i from a set of colors S_i ⊆ [c] which are available to her.
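For concreteness, this model can be sketched in code. The following is a toy encoding of our own (all names and data layouts are illustrative, not from the model's formal definition): colors are arbitrary labels, each edge carries a weight and a kind ("coord" or "anti"), and f[(i, e_idx)] is player i's share of the weight of edge e_idx.

```python
# A minimal sketch of a clustering game (hypothetical encoding):
# edges is a list of (u, v, weight, kind) with kind "coord" or "anti",
# f maps (player, edge index) to that player's share of the edge weight,
# q[i][color] is player i's individual preference for a color.
def satisfied(profile, u, v, kind):
    """An edge is satisfied if its endpoints (anti-)coordinate as required."""
    return (profile[u] == profile[v]) if kind == "coord" else (profile[u] != profile[v])

def utility(i, profile, edges, f, q):
    """Utility of player i: individual preference plus shares of satisfied edges."""
    total = q.get(i, {}).get(profile[i], 0.0)
    for idx, (u, v, w, kind) in enumerate(edges):
        if i in (u, v) and satisfied(profile, u, v, kind):
            total += f[(i, idx)] * w
    return total
```

For instance, with a single coordination edge of weight 2 and an equal split, each endpoint earns 1 when both choose the same color and 0 otherwise.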
1. Topological bounds on the Price of Anarchy. We show that the Price of Anarchy for symmetric clustering games is bounded as a function of the maximum subgraph density of G, which is defined as ρ(G) = max{|E[S]|/|S| : ∅ ≠ S ⊆ V}, where |E[S]| is the number of edges in the subgraph induced by S. More specifically, we prove that PoA ≤ 1 + (1 + δ)ρ(G), where δ denotes the maximum disparity of the distribution rule, and that this bound is tight already for coordination games. Using this topological bound, we are able to show that the Price of Anarchy is at most 4 + 3δ for clustering games on planar graphs and 1 + 2ρ(G) for coordination games with equal-split distribution rule. We also derive a (qualitatively) refined bound of PoA ≤ 2 + 2ρ(G[E_c]) for clustering games with equal-split distribution rule. In particular, this bound reveals that the maximum subgraph density with respect to the graph G[E_c] induced by the coordination edges E_c only is the crucial topological parameter determining the Price of Anarchy. These bounds provide more refined insights than the known (tight) bound of PoA ≤ c (the number of colors) on the Price of Anarchy for (i) symmetric coordination games with individual preferences and arbitrary distribution rule [4], and (ii) clustering games without individual preferences and equal-split distribution rule [16], both being special cases of our model. An important point to notice here is that this bound indicates that the Price of Anarchy is unbounded if the number of colors c = c(n) grows as a function of n. In contrast, our topological bounds are independent of c. In particular, they provide improved bounds when c is large while the maximum subgraph density is small. Moreover, our refined bound of 2 + 2ρ(G[E_c]) mentioned above provides a nice qualitative bridge between the facts that for max-cut (or anti-coordination) games the Price of Anarchy is known to be constant, whereas for coordination games the Price of Anarchy might grow large.
Table 1. Overview of our topological Price of Anarchy bounds for symmetric and asymmetric clustering games. A "+" or "1" in the column "Distr." indicates whether the distribution rule is positive or equal-split, respectively. δ is the maximum disparity, and c is the number of colors. The parameters ρ(G) and Δ(G) refer to the maximum subgraph density and the maximum degree of G, respectively. The stated bounds for random graphs hold with high probability.
2. Price of Anarchy for Random Coordination Games. By using our topological bounds, we are able to derive bounds on the Price of Anarchy for coordination games on random graphs. We focus on the Erdős-Rényi random graph model [18], also known as the G(n, p)-model, where each graph consists of n nodes and every edge is present independently with probability p ∈ [0, 1]. More specifically, we show that with high probability the Price of Anarchy is constant for coordination games on sparse random graphs, i.e., p = d/n for some constant d > 0, with equal-split distribution rule. In contrast, we show that with high probability the Price of Anarchy remains Ω(c) for dense random graphs, i.e., p = d for some constant 0 < d ≤ 1. We leave it as an interesting open question to understand what happens for p in the intermediate regime between the sparse and dense extremes.
Note that our constant bound on the Price of Anarchy for sparse random graphs stands in stark contrast to the deterministic bound of PoA = c [4,16], which could increase with the size of the network. On the other hand, our bound for dense random graphs reveals that we cannot significantly improve upon this bound through randomization of the graph topology.
It is worth mentioning that all our results for random graphs hold against an adaptive adversary who can fix the input of the clustering game knowing the realization of the random graph. To obtain these results, we need to exploit some deep probabilistic results on the maximum subgraph density and the existence of perfect matchings in random graphs.

Convergence of Best-Response Dynamics.
In general, pure Nash equilibria are not guaranteed to exist in clustering games with arbitrary distribution rules f, even if the game is symmetric [4]. While some sufficient conditions for the existence of pure Nash equilibria, or the convergence of best-response dynamics, are known (see also [4]), a complete characterization is elusive so far.
In this work, we instead obtain a complete characterization of the class of distribution rules that guarantee the convergence of best-response dynamics in clustering games on a fixed network topology. We prove that best-response dynamics converges if and only if f is a generalized weighted Shapley distribution rule. Our proof relies on the fact that there needs to be some form of cyclic consistency similar to the one used in [19]. In fact, our characterization results regarding the existence of pure Nash equilibria and the convergence of best-response dynamics are conceptually similar to the work of Chen et al. [11] and Gopalakrishnan et al. [19]. We refer to Section 4 for more details.
Prior to our work, the existence of pure Nash equilibria was known for certain special cases of coordination games only, namely for symmetric coordination games with individual preferences and c = 2 [4], and for symmetric coordination games without individual preferences [16]. To the best of our knowledge, this is the first characterization of distribution rules in terms of best-response dynamics; in particular, it applies to the settings in which pure Nash equilibria are guaranteed to exist for every distribution rule [4,16].

Related Work
The literature on clustering and coordination games is vast; we only include the references most relevant to our model here. The model proposed above is a mixture of special cases of existing models considered in [4,5,16,33].
Anshelevich and Sekar [4] consider symmetric coordination games with individual preferences and (general) distribution rules. They show the existence of α-approximate k-strong equilibria, (α, k)-equilibria for short, for various combinations of α and k; in particular, (2, k)-equilibria always exist for any k. Moreover, they show that the number of colors c is an upper bound on the PoA. Apt et al. [5] study asymmetric coordination games with unit weights, zero individual preferences, and equal-split distribution rules. They derive an almost complete picture of the existence of (1, k)-equilibria for different values of k. Feldman and Friedler [16] introduce a unified framework (as introduced above) for studying the (strong) Price of Anarchy in clustering games with individual preferences set to zero and equal-split distribution rules. In particular, they show that the number of colors c is an upper bound on the PoA and that 2(n − 1)/(k − 1) is an upper bound on the (1, k)-PoA. Rahn and Schäfer [33] consider the more general setting of polymatrix coordination games with equal-split distribution rule, of which our asymmetric coordination games with individual preferences are a special case. They show a bound of 2α(n − 1)/(k − 1) on the (α, k)-PoA and that an (α, k)-equilibrium is guaranteed to exist for any α ≥ 2 and any k.
There is also a vast literature on different variants of anti-coordination (or cut) games, see, e.g., [21,24] and the references therein, which are also captured by our clustering games. In a recent paper, Carosi and Monaco [10] consider so-called k-coloring games. Moreover, clustering and coordination games have also been studied on directed graphs [5,9]. Finally, certain coordination and clustering games can be seen as special cases of hedonic games [14]; we refer the reader to [7] for, in particular, a survey of recent literature on (fractional) hedonic games. Identifying topological inefficiency bounds for these types of games, as well as for clustering games on directed graphs, could be an interesting direction for future work.
Regarding the study of the inefficiency of equilibria on random graphs, closest to our work is the work by Valiant and Roughgarden [37]. They study the Braess paradox on large Erdős-Rényi random graphs and show that in certain settings the Braess paradox occurs with high probability as the size of the network grows large. The study of randomness in games has also received some attention in other settings; see, e.g., [2,6]. These are mostly settings with small strategy sets and random utility functions, and are not comparable to ours. We only focus on randomness in the graph topology of the game.
In the case of equal-split distribution rules, our clustering games can also be modelled as congestion games [34]. The inefficiency of pure Nash equilibria in congestion games has received a lot of attention; see, e.g., [1,8,12,13,26,28] and the references therein. However, none of these results are directly applicable to the clustering games considered in this work. Finally, our games are also a special case of so-called distributed welfare games, as studied, e.g., by Marden and Wierman [32].

PRELIMINARIES
An instance of a clustering game is given by Γ = (G, c, f, w, q, S), where G = (V, E) is the underlying graph, c is the number of colors, f is the distribution rule, w = (w_e)_{e∈E} are the edge weights, q = (q_i)_{i∈V} are the individual preference functions, and S = ×_{i∈V} S_i is the set of strategy profiles. Whenever we refer to a clustering game below, we assume that all of the above input parameters are non-trivial; we specify the respective restrictions otherwise. Each node i ∈ V corresponds to a player whose goal is to choose a color s_i ∈ S_i from the set of colors available to her to maximize her utility function u_i. We say that an edge {i, j} ∈ E is satisfied in a strategy profile s ∈ S if either (i) s_i = s_j and {i, j} is a coordination edge, or (ii) s_i ≠ s_j and {i, j} is an anti-coordination edge. Given a strategy profile s = (s_1, . . . , s_n) ∈ S, the utility of player i is defined as her individual preference q_i(s_i) plus her shares f_ie · w_e of the weights of all satisfied edges e incident to i. We assume that the distribution rule satisfies f_ie + f_je > 0 for every edge e = {i, j} ∈ E; in particular, not both i and j have a zero split for edge e. We say that f is positive if f_ie > 0 and f_je > 0 for all e = {i, j} ∈ E; we also write f > 0. Further, f is called the equal-split distribution rule if f_ie = f_je for all e = {i, j} ∈ E; we also indicate this by f = 1. The disparity of an edge e = {i, j} is defined as δ_e = max{f_ie/f_je, f_je/f_ie}, and we use δ = max_{e∈E} δ_e to denote the maximum disparity.
We say that the clustering game is symmetric if S_i = [c] for every player i ∈ V and asymmetric otherwise. If we focus on symmetric clustering games, we omit the explicit reference to the strategy sets S = ×_{i∈V} S_i with S_i = [c]. A clustering game is called a coordination game if E_a = ∅ and an anti-coordination game, or cut game, if E_c = ∅. We use n = |V| to refer to the number of players.
A strategy profile s = (s_1, . . . , s_n) ∈ S is an α-approximate k-strong equilibrium, with α ≥ 1 and k ∈ [n], or (α, k)-equilibrium for short, if for every set of players K ⊆ V with |K| ≤ k and every deviation s'_K = (s'_i)_{i∈K}, there is at least one player i ∈ K such that α · u_i(s) ≥ u_i(s_{−K}, s'_K). That is, for any joint deviation of the players in K from strategy profile s, there is at least one player in K that cannot improve her utility by more than a factor α.
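The single-player case k = 1 of this definition is an α-approximate (pure) Nash equilibrium. A small self-contained checker for this case, restricted for simplicity to coordination edges with equal split (a toy encoding of our own, not the paper's notation), might look as follows:

```python
def coord_utility(i, profile, edges):
    """Equal-split utility of player i from satisfied coordination edges.
    `edges` is a list of (u, v, weight) tuples."""
    return sum(w / 2.0 for (u, v, w) in edges
               if i in (u, v) and profile[u] == profile[v])

def is_alpha_nash(profile, edges, colors, alpha=1.0):
    """True if no single player can improve by more than a factor alpha."""
    for i in profile:
        current = coord_utility(i, profile, edges)
        for col in colors:
            deviated = {**profile, i: col}
            if coord_utility(i, deviated, edges) > alpha * current + 1e-12:
                return False
    return True
```

For a single coordination edge, the profile in which both endpoints agree is a Nash equilibrium, while any disagreeing profile is not.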

Graph theory
We consider undirected simple graphs G = (V, E), where E = E_c ∪ E_a ⊆ {{u, v} : u, v ∈ V} is a partition of the edges into coordination and anti-coordination edges. We usually write n = |V|. For a subset F ⊆ E, we write G[F] for the subgraph of G formed by the edges of F. For a subset S ⊆ V, we write G[S] = (S, E[S]) for the induced subgraph on S, where E[S] = {{u, v} ∈ E : u, v ∈ S}. We say that a node v ∈ V is adjacent to an edge e ∈ E if v ∈ e. A graph is complete if E = {{u, v} : u, v ∈ V}. Furthermore, a graph is triangle-free if it contains no complete induced subgraph on three nodes. Finally, a graph is planar if, informally speaking, it can be drawn in R² without crossings; see, e.g., [38] for a formal definition.
A matching M ⊆ E is a collection of edges such that every node in V is adjacent to at most one edge in M. A perfect matching is a collection of edges such that every node in V is adjacent to precisely one edge; in particular, this means that |M| = n/2. A maximum matching is a matching such that no other matching in G has larger cardinality.
The degree of a node v ∈ V is defined as deg(v) = |{u : {u, v} ∈ E}|, and the maximum degree of a graph G is defined as Δ(G) = max_{v∈V} deg(v). The maximum subgraph density of G is defined as ρ(G) = max{|E[S]|/|S| : ∅ ≠ S ⊆ V}.
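Both quantities can be computed directly from their definitions; the following brute-force sketch (helper names are our own, and the subset enumeration is exponential, so it is only meant for small graphs) makes the definitions concrete:

```python
from itertools import combinations

def max_degree(nodes, edges):
    """Delta(G): the largest number of edges incident to any single node."""
    return max(sum(1 for e in edges if v in e) for v in nodes)

def max_subgraph_density(nodes, edges):
    """rho(G) = max over nonempty S of |E[S]| / |S|, by brute force."""
    best = 0.0
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            s = set(S)
            m = sum(1 for (u, v) in edges if u in s and v in s)
            best = max(best, m / len(s))
    return best
```

For the complete graph K_4, for example, the densest subgraph is the whole graph, giving ρ(K_4) = 6/4 = 1.5.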

Random Clustering Games
In our probabilistic framework to study the Price of Anarchy of random clustering games, we use the well-known Erdős-Rényi random graph model [18], denoted by G(n, p): There are n nodes, and every undirected edge is present independently with probability p = p(n) ∈ [0, 1]. Although this model was first introduced by Gilbert, it is often referred to as the Erdős-Rényi random graph model. We say that a random graph is sparse if p = d/n for some constant d > 0, and it is dense if p = d for some constant 0 < d < 1. In this paper, we focus on random graph instances with equal-split distribution rules. Some of our results naturally extend to more general distribution rules, but we omit the details here because they do not provide additional insights. We continue with defining the Price of Anarchy for games with a random graph topology. Fix some probability p = p(n) ∈ [0, 1] and let c = c(n) be a given function. Define G_c = {Γ : Γ = (G, c(n), f, w, q)} as the set of all clustering games on a random graph G ∼ G(n, p) with at most c(n) available colors. We say that the Price of Anarchy for random clustering games is at most b with high probability, or PoA(G_c) ≤ b for short, if

Pr[ PoA(Γ) ≤ b for all Γ ∈ G_c ] = 1 − o(1),   (1)

where the asymptotics in (1) is with respect to n → ∞. We use a similar definition if we want to lower bound the Price of Anarchy.
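Sampling from G(n, p) is straightforward; the following helper of our own illustrates the two regimes (the expected-edge-count comments follow from linearity of expectation over the C(n, 2) possible edges):

```python
import random
from itertools import combinations

def gnp(n, p, rng=random.Random(0)):
    """Sample an Erdos-Renyi graph G(n, p): each of the C(n, 2) possible
    undirected edges is present independently with probability p."""
    return [e for e in combinations(range(n), 2) if rng.random() < p]

# Sparse regime p = d/n versus dense regime p = d (constant), here n = 200:
sparse = gnp(200, 3 / 200)   # expected number of edges = C(200, 2) * 3/200 = 298.5
dense = gnp(200, 0.5)        # expected number of edges = C(200, 2) * 0.5 = 9950
```

The sparse sample has a linear number of edges in expectation, while the dense sample has a quadratic number, which is exactly the distinction the two Price of Anarchy regimes below hinge on.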
Finally, for a constant b independent of n and p, we say that the Price of Anarchy for random clustering games is b with high probability, or PoA(G_c) → b for short, if for all ε > 0,

Pr[ |PoA(Γ) − b| ≤ ε for all Γ ∈ G_c ] = 1 − o(1),

where again the asymptotics is with respect to n → ∞.
All our results for clustering games on random graphs hold with high probability.

Shapley Distribution Rules
We adapt the definition of Shapley distribution rules for resource allocation games [19] to our setting. A distribution rule f corresponds to a generalized weighted Shapley distribution rule if and only if there exist a permutation π of the players in V and a weight vector ω ∈ R^n_{≥0} such that the following two conditions are satisfied for every edge e = {i, j}: (i) if ω_i + ω_j > 0, then f_ie = ω_i/(ω_i + ω_j) and f_je = ω_j/(ω_i + ω_j); (ii) if ω_i = ω_j = 0, then the full share of edge e goes to the one of the two players i and j that comes first in the permutation π. If all weights are strictly positive, then the resulting distribution rule is a weighted Shapley distribution rule. If ω_i = ω_j for all i, j ∈ V, the resulting distribution rule is an unweighted Shapley distribution rule. Note that this case corresponds to an equal-split distribution rule.
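For a single edge, this definition reduces to a proportional split with the permutation breaking ties when both weights are zero. The following sketch encodes that two-player case under this reading (the function name and the zero-weight tie-breaking convention are our own; as noted above, the choice of convention is immaterial since the permutation is existentially quantified):

```python
def gws_share(i, j, omega, order):
    """Share of edge {i, j} assigned to player i under a generalized weighted
    Shapley rule with weight vector `omega` and tie-breaking permutation
    `order` (a list of players; earlier = higher priority when weights vanish)."""
    wi, wj = omega[i], omega[j]
    if wi + wj > 0:
        return wi / (wi + wj)            # proportional split
    # both weights zero: the whole share goes to the higher-priority player
    return 1.0 if order.index(i) < order.index(j) else 0.0
```

With equal positive weights this recovers the equal-split rule, i.e., a share of 1/2 for each endpoint.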

REFINED BOUNDS ON THE PRICE OF ANARCHY
In this section, we first establish our topological bound on the Price of Anarchy for symmetric clustering games and then use it to derive new bounds for some special cases as well as for random clustering games.

Topological Price of Anarchy Bound
Our topological bound depends on the maximum subgraph density of G, which is defined as ρ(G) = max{|E[S]|/|S| : ∅ ≠ S ⊆ V}, where |E[S]| is the number of edges in the subgraph induced by S. Recall that δ refers to the maximum disparity.

Theorem 1. Let Γ = (G, c, f, w, q) be a symmetric clustering game with positive distribution rule f. Then PoA(Γ) ≤ 1 + (1 + δ)ρ(G), and this bound is tight already for coordination games.
Proof. Let s and s* be a Nash equilibrium and a social optimum, respectively. Consider an edge e = {i, j} ∈ E and assume without loss of generality that u_i(s) ≤ u_j(s), so that u_i(s) = min{u_i(s), u_j(s)}. If {i, j} is a coordination edge, then u_i(s) ≥ u_i(s_{−i}, s_j) ≥ q_i(s_j) + f_ie · w_e, where (s_{−i}, s_j) is the strategy profile in which player i deviates to the color of player j and all other players play according to s. Suppose {i, j} is an anti-coordination edge. If s_i ≠ s_j, then we trivially have u_i(s) ≥ q_i(s_i) + f_ie · w_e by the non-negativity of the weights and individual preferences. If s_i = s_j, then the same inequality holds by using the Nash condition for some color which is not s_j. Recall that such a color exists, because we assume that |S_i| ≥ 2 for all i. In either case, we conclude that

min{u_i(s), u_j(s)} ≥ f_ie · w_e ≥ w_e/(1 + δ).

Moreover, by exploiting that s is a Nash equilibrium and the non-negativity of the edge weights, we obtain for every i ∈ V, u_i(s) ≥ u_i(s_{−i}, s*_i) ≥ q_i(s*_i). Using that the sum of the weights of all satisfied edges in s* is at most the sum of all edge weights, we obtain

SW(s*) ≤ Σ_{i∈V} q_i(s*_i) + Σ_{e∈E} w_e ≤ SW(s) + (1 + δ) · Σ_{e={i,j}∈E} min{u_i(s), u_j(s)}.

If we can find a value t such that

Σ_{e={i,j}∈E} min{u_i(s), u_j(s)} ≤ t · Σ_{i∈V} u_i(s),   (2)

then it follows that SW(s*) ≤ (1 + (1 + δ)t) · SW(s); note that if u_i(s) = 1 for all i ∈ V, then the left-hand side of (2) equals |E|. We can assume without loss of generality that Σ_{i∈V} u_i(s) = 1, since the expression in (2) is invariant under multiplication with a constant positive scalar. Moreover, the players may be renamed such that x_1 ≥ x_2 ≥ · · · ≥ x_n, where we write x_i = u_i(s). We continue by showing that t = ρ(G) is an upper bound for the linear program (P) that maximizes Σ_{e={i,j}∈E} min{x_i, x_j} subject to Σ_{i∈V} x_i = 1 and x ≥ 0, in which the variables are the x_i, and the graph is considered fixed. We work with the dual program (D) of its standard linearization.
We now construct a feasible dual solution for (D). Set λ* = max_{1≤k≤n} |E[{1, . . . , k}]|/k. We will often use that (k − 1) · λ* ≥ |E[{1, . . . , k − 1}]| for any fixed k. In particular, with k = n − 1, we find λ* ≥ |E[{1, . . . , n − 1}]|/(n − 1), so that y*_{n−1} := (n − 1) · λ* − |E[{1, . . . , n − 1}]| ≥ 0. Using induction it then easily follows that y*_k := y*_{k+1} + λ* − d_{k+1} ≥ 0 for all k = 1, . . . , n − 2 as well, where d_{k+1} denotes the number of edges between node k + 1 and the nodes 1, . . . , k. We have constructed a feasible dual solution with objective function value λ*. Using weak duality it follows that for any feasible primal solution x = (x_1, . . . , x_n), we have

Σ_{e={i,j}∈E} min{x_i, x_j} ≤ λ* = max_{1≤k≤n} |E[{1, . . . , k}]|/k ≤ ρ(G),

since the term in the middle is precisely the maximum density of an induced subgraph on the nodes 1, . . . , k. This completes the proof of the upper bound.
We continue with showing tightness, even for coordination games. Let G = (A ∪ B, E) be a complete bipartite graph between node sets A and B, with |A| = ℓ and |B| = r, and assume that all edges in E are coordination edges. We show tightness using a weighted Shapley distribution rule. That is, for any value of the maximum disparity δ, there is also some weighted Shapley distribution rule that attains this value. The nodes in A get a fixed weight α, with α ≥ 1, and the nodes in B get a fixed weight 1.
We define the color set as C = C_A ∪ C_B ∪ {c_0}, where C_A contains the colors {a_1, . . . , a_ℓ} and C_B = {b_1, . . . , b_r}. Assume that every node i ∈ A has an individual preference of q_i(a_i) = q_i(c_0) = α/(1 + α) for colors a_i and c_0, every player j ∈ B has an individual preference of q_j(b_j) = q_j(c_0) = 1/(1 + α) for colors b_j and c_0, and all other individual preferences are zero. Furthermore, all edge weights are set to w_e = 1. Consider the strategy profile s in which every player i ∈ A plays a_i and every player j ∈ B plays b_j. This profile is a Nash equilibrium with

SW(s) = ℓ · α/(1 + α) + r · 1/(1 + α).

A social optimum evolves when every player plays color c_0. The resulting profile s* has social welfare

SW(s*) = ℓ · α/(1 + α) + r · 1/(1 + α) + ℓ · r.

Here the first two terms arise from the individual preferences of the nodes in A and B, respectively. The last term arises because coordination takes place between all pairs of nodes (a, b) ∈ A × B; remember that we consider a complete bipartite graph. It then follows that

PoA ≥ SW(s*)/SW(s) = 1 + ℓ · r · (1 + α)/(ℓα + r).

By letting r → ∞, we find a lower bound of 1 + ℓ · (1 + α). Note that for ℓ and α fixed, the densest subgraph is the whole graph and has density ℓ · r/(ℓ + r), which converges to ℓ as r → ∞. Since the maximum disparity of this distribution rule is δ = α, the bound of Theorem 1 is attained in the limit. □

We use our topological bound to derive deterministic bounds on the Price of Anarchy for two special cases of clustering games. Note that these bounds cannot be deduced from [4,16]. For planar graphs that are also triangle-free, we can give a slightly better bound. The result in Corollary 3 also shows that the linear dependence on δ in Corollary 2 is necessary.
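The welfare ratio of the bipartite tightness construction above can be checked numerically; the following sketch evaluates it using the stated preference values and unit edge weights (the function name is our own):

```python
def welfare_ratio(l, r, alpha):
    """SW(s*) / SW(s) for the complete bipartite construction K_{l,r}:
    in the equilibrium s every player collects only her individual preference;
    in the optimum s* all l*r unit-weight edges are additionally satisfied."""
    sw_nash = l * alpha / (1 + alpha) + r * 1 / (1 + alpha)
    sw_opt = sw_nash + l * r
    return sw_opt / sw_nash
```

As r grows with ℓ and α fixed, the ratio approaches 1 + ℓ · (1 + α), matching the limit derived above; for instance, welfare_ratio(2, 10**6, 3.0) is already very close to 9.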
Corollary 3 (Planar triangle-free clustering games). Let Γ = (G, c, f, w, q) be a symmetric clustering game on a triangle-free planar graph G with f > 0. If a pure Nash equilibrium exists for the game Γ, then PoA(Γ) ≤ 3 + 2δ, and this bound is tight in general.
We emphasize that the bound in Corollary 4 is tight on every fixed graph topology G, rather than only in the value of ρ(G). It is known that the Price of Anarchy of anti-coordination games is 2 (see, e.g., [24]), which is not reflected by our bound in Theorem 1. Intuitively, this suggests that a large Price of Anarchy is caused by the coordination edges of the graph. Theorem 5 reveals that this intuition is correct: it shows that the maximum subgraph density with respect to the coordination edges only is the determining topological parameter. Note that it captures the bound of 2 for anti-coordination games.
Theorem 5. Let Γ = (G, c, 1, w, q) be a symmetric clustering game with equal-split distribution rule. Then

PoA(Γ) ≤ 2 + 2ρ(G[E_c]),

where G[E_c] is the subgraph induced by the coordination edges E_c.
Proof. The proof is a modification of the proof of Theorem 1. Let s be a Nash equilibrium and s* a socially optimal strategy profile. For notational convenience, we write u_i = u_i(s) for i ∈ V. The proof relies on the following two claims.
Claim 1: For any coordination edge e = {i, j} ∈ E_c, it holds that

w_e ≤ 2 · min{u_i, u_j}.   (3)

Proof. Assume without loss of generality that u_i ≤ u_j. Then u_i ≥ u_i(s_{−i}, s_j) ≥ w_e/2, where (s_{−i}, s_j) is the strategy profile in which player i deviates to the color of player j and all others play their strategy in s. Rewriting gives w_e ≤ 2 · min{u_i, u_j}. □

Claim 2: It holds that

SW(s*) ≤ 2 · SW(s) + Σ_{e∈E_c} w_e.   (4)

Proof. First note that for the Nash equilibrium s it holds that SW(s) = Σ_{i∈V} u_i. Also, for every player i and some fixed color ℓ with ℓ ≠ s*_i, it holds that

2u_i ≥ u_i(s_{−i}, s*_i) + u_i(s_{−i}, ℓ) ≥ q_i(s*_i) + Σ_{e={i,j}∈E_a} w_e/2.   (7)

The last inequality is true as s_j differs from at least one of the two distinct colors s*_i and ℓ, so that every anti-coordination edge {i, j} is satisfied in at least one of the two deviation profiles. Adding up (7) for every player then yields (4). To see this, one should observe that every edge {i, j} ∈ E_a appears in precisely two summations, that of player i and that of player j. □

Combining (3) and (4), we find

SW(s*) ≤ 2 · SW(s) + Σ_{e={i,j}∈E_c} 2 · min{u_i, u_j} ≤ 2 · SW(s) + 2ρ(G[E_c]) · Σ_{i∈V} u_i = (2 + 2ρ(G[E_c])) · SW(s),

where the final step follows from similar arguments as in the proof of Theorem 1.
The lower bound can be achieved using a similar construction as in the proof of Corollary 4. □

Price of Anarchy for Random Coordination Games
We now turn to our bounds for random coordination games. Recall that for random graphs we consider equal-split distribution rules only. We first show that for sparse random graphs the Price of Anarchy is constant with high probability.
Corollary 6 (Sparse random coordination games). Let d > 0 be a constant. Let G be the set of all symmetric coordination games Γ = (G, c, 1, w, q) on a graph G ∼ G(n, d/n) with equal-split distribution rule. Then there is a constant b = b(d) such that PoA(G) → b.
Proof. Anantharam and Salez [3] prove that the maximum subgraph density of a sparse random graph approaches a constant with high probability; approximations of this constant can be found in [22]. Combining this with the bound in Corollary 4 proves the claim. □

As we show in Theorem 7, the result of Corollary 6 does not hold for sufficiently dense random graphs if the number of available colors grows large.
Theorem 7 (Dense random coordination games). Let (c_n)_{n∈N} → ∞ be a sequence of numbers of available colors and let 0 < d ≤ 1 be a constant independent of n. Let G(c_n) be the set of all symmetric coordination games Γ = (G, c_n, 1, w, 0) on a graph G ∼ G(n, d) with c_n common colors, equal-split distribution rule, and no individual preferences. Then there is a constant b = b(d) such that PoA(G(c_n)) ≥ b · min{c_n, n} with high probability.
We note that this lower bound holds even for coordination games without individual preferences (as studied in [16]). Basically, this bound implies that for dense graph topologies we cannot significantly improve upon the Price of Anarchy bound of c from [4,16], even if we randomize the graph topology.
Proof of Theorem 7. We first construct a deterministic instance Γ with Price of Anarchy Ω(c) and then show that we can embed this construction into a random graph with high probability.
Consider a graph G = (V, E) and let c be the number of available colors. Let M = {e_1, . . . , e_ℓ} ⊆ E be a matching of size at most c. Let U be the set of nodes which are matched in M. Define the weight of an edge e ∈ E as w_e = 2 if e ∈ M, w_e = 1 if e ∈ E[U] \ M, and w_e = 0 otherwise, where E[U] is the set of edges of the induced subgraph on U = {i : i ∈ e for some e ∈ M}, i.e., the induced subgraph of the nodes that are matched in M. Consider the strategy profile s in which the nodes in e_j play color j, for j = 1, . . . , ℓ, and all other nodes play an arbitrary color, say, color ℓ. Note that ℓ distinct colors are used in this profile, which is possible because ℓ ≤ c by assumption. Furthermore, we have SW(s) = 2ℓ, as every edge in M is satisfied and contributes 1 to the utility of each of its two endpoints. We claim that s is a pure Nash equilibrium. To see this, first note that all nodes i ∉ U are only adjacent to edges with weight zero, and so playing color ℓ is a best response for them, independently of the colors played by the nodes in U. We next consider a node i ∈ e_j for an arbitrary j = 1, . . . , ℓ. We write v for the other node in e_j, i.e., e_j = {i, v}. Because w_{e_j} = 2, all individual preferences are zero, we only have coordination edges, and we consider an equal-split distribution rule, the utility of player i in profile s is u_i(s) = 1. In order to see that color j is a best response for player i, note that if she deviates to another color j′ ∈ [ℓ] \ {j}, then she derives a utility of 1/2 from every node in e_{j′} she is adjacent to, as edges e ∈ E[U] \ M have weight w_e = 1. Since |e_{j′}| = 2, this means the maximum utility she can obtain is 1, which happens in case she is adjacent to both nodes in e_{j′}.
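On a complete graph whose 2ℓ nodes are all matched, this construction can be checked numerically. The sketch below uses our reading of the weights (2 on matching edges, 1 on the remaining edges inside E[U], 0 elsewhere); under this reading the equilibrium welfare is 2ℓ, while coloring all nodes the same yields welfare 2ℓ², i.e., a ratio of exactly ℓ:

```python
from itertools import combinations

def construction_ratio(l):
    """Welfare ratio of the matching construction on the complete graph with
    2l nodes: matching edges {2j, 2j+1} get weight 2, all other edges among
    matched nodes weight 1 (our reading of the construction above)."""
    nodes = range(2 * l)
    matching = {(2 * j, 2 * j + 1) for j in range(l)}
    weight = {e: (2.0 if e in matching else 1.0)
              for e in combinations(nodes, 2)}
    sw_eq = 2.0 * l                # each matched pair coordinates in isolation
    sw_opt = sum(weight.values())  # all edges satisfied under a common color
    return sw_opt / sw_eq
```

Since ℓ can be taken as large as min{c, n/2}, this is the Ω(c) gap claimed in the first part of the proof.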
Note that 2⌊n/2⌋ = n − 1 if n is odd, and 2⌊n/2⌋ = n if n is even. Because we will be working with a matching, it is convenient to work with an even number of nodes; hence this definition of n′ = 2⌊n/2⌋. Note that n′ → ∞ whenever n → ∞.
By applying this result to the random (induced) subgraph on U and using that |U| approaches infinity as n → ∞, property 2) follows. Now, if two events A and B, our properties 1) and 2), respectively, happen with high probability as n → ∞, then their intersection A ∩ B also happens with high probability as n → ∞. This means that the subgraph on U contains Ω(n²) edges and a perfect matching, with high probability. Using the deterministic construction of the first part of the proof on this subgraph then gives the claimed lower bound on the Price of Anarchy. This completes the proof. □

CONVERGENCE OF BEST-RESPONSE DYNAMICS
In this section, we derive our characterization results for the convergence of best-response dynamics in symmetric clustering games and for the existence of pure Nash equilibria in symmetric coordination games. Recall that best-response dynamics is said to converge if any sequence of player deviations, where in each step the deviating player chooses a most profitable deviation, converges in a finite number of steps to a pure Nash equilibrium.
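For the equal-split rule in a coordination game, convergence is easy to see directly: the game admits an exact potential (the sum of the individual preferences plus half the weight of each satisfied edge), so every best-response step strictly increases the potential and the dynamics must terminate. A small self-contained simulation of our own illustrates this:

```python
def util(i, prof, edges, q):
    """Equal-split utility: preference plus half of each satisfied edge weight.
    `edges` is a list of (u, v, weight) coordination edges."""
    base = q.get(i, {}).get(prof[i], 0.0)
    return base + sum(w / 2.0 for (u, v, w) in edges
                      if i in (u, v) and prof[u] == prof[v])

def br_dynamics(colors, edges, q, max_rounds=10**4):
    """Best-response dynamics for an equal-split coordination game; terminates
    because each improving step raises the exact potential described above."""
    players = sorted({v for e in edges for v in e[:2]} | set(q))
    profile = {i: colors[0] for i in players}
    for _ in range(max_rounds):
        improved = False
        for i in players:
            best = max(colors, key=lambda col: util(i, {**profile, i: col}, edges, q))
            if util(i, {**profile, i: best}, edges, q) > util(i, profile, edges, q) + 1e-12:
                profile[i] = best
                improved = True
        if not improved:
            return profile
    raise RuntimeError("no convergence")
```

The characterization below shows that such convergence guarantees extend exactly to the generalized weighted Shapley rules, and to no other distribution rules.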
Basically, for symmetric clustering games our characterization shows that best-response dynamics is guaranteed to converge to a pure Nash equilibrium if and only if f is a generalized weighted Shapley distribution rule. For the special case of symmetric coordination games with c ≥ 3, we can further strengthen this characterization result and show that a pure Nash equilibrium is guaranteed to exist if and only if f is a generalized weighted Shapley distribution rule. This complements a result of Anshelevich and Sekar [4].

Symmetric Clustering Games
We provide a characterization of distribution rules that guarantee the convergence of best-response dynamics in symmetric clustering games.
Theorem 8 (Best-response convergence).Let G ,, be the set of all symmetric clustering games Γ = (, , , , ) on a ixed graph with common colors and distribution rule .Then best-response dynamics is guaranteed to converge to a pure Nash equilibrium for every clustering game in G ,, if and only if corresponds to a generalized weighted Shapley distribution rule.
In general, this characterization does not hold if the condition of łguaranteed convergence of best-response dynamicsž is replaced by łguaranteed existence of a pure Nash equilibriumž (as in [19] or [11]): There are settings where on a ixed graph , a pure Nash equilibrium is guaranteed to exist even if is not a generalized weighted Shapley distribution rule, e.g., in the case of = 2, or in coordination games with no individual preferences.
The proof of Theorem 8 relies on the following lemma.In the proofs of Lemma 9 and Theorem 8, all player or edge indices are always modulo .Lemma 9. Consider a symmetric clustering game (, 2, , , ) on a cycle = ⟨1, . . ., ⟩ with players and = 2 colors.If for every strategy proile it is a best-response for every player to choose a color that satisies at least edge {, + 1}, then there exists a best-response sequence that does not converge to a Nash equilibrium.
Proof.We irst construct an initial state 0 using only colors 1 and 2 .Set 0 1 = 1 , and iteratively, for = 2, . . ., − 1, set 0 such that edge { − 1, } is satisied (using only colors 1 and 2 ).The color for is chosen in such a way that at least one of the edges { − 1, } or {, 1} is not satisied (this can always be done, since if color 1 would satisfy both edges, that color 2 would satisfy neither, and vice versa).Now, either ) precisely − 1 edges of the cycle are satisied in 0 , or ) precisely − 2 edges are satisied in 0 (and two consecutive edges are not).
Case i): There are precisely − 1 edges satisied.This case is illustrated in Figure 1.Note that currently edge {, 1} is not satisied.Therefore, by assumption, it is a best-response for player to switch to its other color.But the situation after this switch is isomorphic to the starting proile 0 , that is, if we would have started the numbering at node instead of node 1.Therefore, we can repeat the same argument, and in particular, after 2 of such best-response steps we are back in 0 .Roughly speaking, during this process the unsatisied edge moves over the whole cycle twice.After steps, we are essentially also in the same situation as 0 , but now with the roles of 1 and 2 interchanged.
Case ii): There are precisely − 2 edges satisied except for the two consecutive edges { − 1, } and {, 1}.This case is illustrated in Figure 2. If player would switch to its other color (which is a best-response move) then we would ind a Nash equilibrium, however, we do not choose player .Instead we let player − 1 switch to its other color (which is a best-response move by assumption), then afterwards, it is a best-response for player − 2 ACM Trans.Econ.Comput.to switch as well (in order to satisfy edge { − 2, − 1}, and we continue this in decreasing player order up until (and including) player 2. In particular, we are then in the situation were again precisely − 2 edges are satisied except two consecutive edges, which are now {, 1} and {1, 2}.This situation is equivalent to the starting state 0 , and in particular by repeating this process times, we are back in 0 .This completes the proof.□ Proof of Theorem 8.If corresponds to a generalized weighted Shapley distribution rule, then the convergence of best-response dynamics follows immediately from the fact that the game can be modeled as a resource allocation game (see Appendix B).Such games, with generalized weighted Shapley distribution rules, are potential games, see, e.g., [19] for details.We now continue with the other direction, i.e., assume that best-response dynamics always converge.

Price of Anarchy of Clustering Games on
The idea of the proof is to show that the pair (, ) has a certain nice structure if best-response dynamics is guaranteed to converge, from which it will be easy to conclude that is a generalized weighted Shapley distribution rule.In order to deine this nice structure, we need the help of an auxiliary digraph = (, ).The nodes in the set = { 1 , . . ., } form a partition of the players in , that is, = 1 ∪ 2 ∪ • • • ∪ .This partition is constructed by looking at the subgraph >0 = ( , >0 ) of given by edges {, } from which both and get strictly positive utility if it is satisied and the weight is non-zero, i.e, >0 () = {{, } : , > 0 and > 0}.
We start with showing that condition 1) is true.For sake of contradiction, irst suppose that = (, ) has a selfloop, that is, ( , ) ∈ for some .This means that there are nodes ℎ , 1 ∈ such that ℎ 1 > 0 and 1 ℎ = 0. We write ℎ = { ℎ , 1 } for the edge in the original graph .Furthermore, as >0 is a connected component there exists a simple path ( 1 , . . ., ℎ ) in >0 where both +1 , +1 > 0, for the edges = { , +1 } for = 1, . . ., ℎ − 1.We write = ⟨ 1 , 2 , . . ., ℎ−1 , ℎ ⟩ for the resulting simple cycle in the original graph obtained by the path ( 1 , . . ., ℎ ) concatenated with the edge { ℎ , 1 }.We will next create an instance of a symmetric clustering game, of which the relevant part takes place on the cycle , for which best-response dynamics is not guaranteed to converge.Set 1 2 = 1 and iteratively deine the weights +1 so that for = 2, . . ., ℎ mod ℎ.All other edge weights, of edges not in , are set to zero.Individual preferences ( ) for every player are set to = ∈ for two ixed colors 1 and 2 , and zero otherwise.In particular this means that for all players 1 , . . ., ℎ on the cycle , only colors 1 and 2 can potentially be a best response, in any strategy proile.By construction, it will always be a best response for player to satisfy edge { , +1 }.For player 2 , . . 
., ℎ this claim follows directly from (10).For player 1 it is true as she derives zero utility from edge { ℎ , 1 } if it is satisied, because 1 ℎ = 0, and strictly positive utility from { 1 , 2 } if it is satisied.Because of the fact that for every player it is always a best response to satisfy { , +1 }, we are in the situation of Lemma 9, and, thus, we may conclude that best-response dynamics is not guaranteed to converge.Secondly, suppose that contains a directed cycle.The argument showing that best-response dynamics is not guaranteed to converge is very similar to the case of a self-loop.We can now construct a cycle in that traverses multiple components >0 , and, in particular, contains multiple edges from the set \ >0 ().Because of the fact that the edges in \ >0 () form a directed cycle in , the procedure in (10) still can be used to deine the edge weights for the edges in to make sure that every player always tries to satisfy the edge { , +1 }.
In particular, for any edge { , +1 } in which one of the two players receives the full share, this will always be , because the edges of that are contained in \ >0 () form a directed cycle in .If the cycle would not have been directed, then this would mean that for some edge { , +1 } player +1 would get the full share, i.e., +1 = 0, and then the procedure in ( 10) cannot be used as for that , the right hand side in ( 10) would be zero.Individual preferences and remaining edge weights, of edges not in , can be chosen just as in the case of a self-loop in .
We continue with proving condition 2).The proof is the same for every component so ix any >0 .To make things easier notation-wise, we will write = {1, . . ., }.The goal is to show that there exist numbers 1 , . . ., such that for every {, } ∈ .It is clear that if we can ind such a vector ( 1 , . . ., ), and multiply every with a ixed constant > 0, then the weights • also satisfy (11).In particular, this implies that we may ix 1 as we like without loss of generality.We next ix a spanning tree in >0 in order to obtain the values of 2 , . . ., .We root the tree at the node 1, and set where ( 1 , 2 , . . ., ℎ ) is the unique path from player 1 = 1 to player = ℎ in .Setting in this way guarantees that ( 11) is satisied for the edges of the spanning tree .To see this, note that equation ( 11) for an edge {, } ∈ ( ) is equivalent to Now, if there is some edge = { 1 , 2 } ∈ ( >0 ) \ ( ) with the property that ( 11) is not satisied, then we get Let = ⟨ 1 , . . ., ℎ ⟩ be the unique cycle in ∪ containing edge .It then follows that which can be seen by multiplying (13) with the equations in (12) for the edges on ( ) \ .The expression ( ) = 1 is analogue to the cyclic consistency property of Gopalakrishnan et al. [19].We will show that ( ) ≠ 1 implies that best-response dynamics is not guaranteed to converge.
We let = { , +1 } for = 1, . . ., ℎ (mod ℎ).Assume that ( ) < 1.This can be done without loss of generality by changing the orientation of the cycle if needed.Then there exists a constant > 0 so that Set 1 2 = 1 and iteratively deine the weights so that for = 2, . . ., ℎ.We then deine the weights ′ +1 = (1 + ) • +1 for = 1, . . ., ℎ.All other edge weights, of edges not in , are set to zero.Individual preferences ( ) for every player are set to = ∈ for two ixed colors 1 and 2 , and zero otherwise.This is similar to what we did in the proof of condition 1).For players 2 , . . ., ℎ it is now always a best-response to choose, among 1 and 2 , the color that satisies edge {, + 1}.This follows from the fact that the weights ′ +1 satisfy This is the same idea used in the proof of condition 1), based on the inequality in (10).For player 1 the argument is slightly more involved.Suppose it is not a best response for 1 to satisfy the edge { 1 , 2 }.Then If we multiply all equalities in (14) with each other for = 2, . . ., ℎ, and the result also with ( 15), then after simpliication we ind (1 + ) ( ) ≥ 1, which contradicts the choice of .As for all players 1 , . . ., ℎ it is a best response to satisfy edge { , +1 }, we are again in the situation of Lemma 9.This leads to a contradiction and concludes the proof.□ Corollary 10.The characterization in Theorem 8 also is true if G ,, is replaced by either: i) The set H ,, ,0 = {Γ : Γ = (, , , , 0) with = ∅} of symmetric coordination games on , with common colors, and distribution rule , but without individual preferences .ii) The set G ,2, ,0 = {Γ : Γ = (, 2, , , 0)} of symmetric clustering games on graph with 2 common colors, and distribution rule , but without individual preferences .
The irst setting corresponds to certain models in [4,16].Also note that the second setting cannot hold true with ≥ 3, by considering a cycle of length three with only anti-coordination edges.That is, if there are no individual preferences , any best-response sequence will in at most three steps end up in a strategy proile in which all three players have a diferent color.Such a proile is always a pure Nash equilibrium in case there are no individual preferences, and so any best-response sequence will converge.
Proof of Corollary 10.In order to prove the irst case, one should observe that if there are only coordination edges in the proof of Theorem 8, it is not necessary to set the individual preferences ( ) to the sum of the for two chosen colors 1 and 2 , and zero otherwise.Instead, one can set all individual preferences equal to zero, i.e., let them play no role in the game.This is allowed, because when all players start with either color 1 or 2 then no player can have some other color ∈ [] \ { 1 , 2 } as a best response, since there are only coordination edges and no individual preferences.
For the second case, note that when there are only two colors 1 and 2 available to the players, then every player will always play either of these colors.There is then no need to use the individual preferences in order to force players to always use one of these colors as a best response.□

Symmetric Coordination Games
We next consider the special case of symmetric coordination games in which the common strategy set contains ≥ 4 colors.We can strengthen the characterization result of Theorem 8 in this case.We prove in Theorem 11 that a pure Nash equilibrium is guaranteed to exist if and only if is a generalized weighted Shapley distribution rule.This complements a result of Anshelevich and Sekar [4].
Theorem 11.Let G ,, = {Γ : Γ = (, , , , )} be the set of all symmetric coordination games on , with common strategy set {1, . . ., } for ≥ 4 and distribution rule .Then a pure Nash equilibrium is guaranteed to exist for every game in G ,, if and only if corresponds to a generalized weighted Shapley distribution rule.
Our arguments are conceptually similar to those of Gopolakrishnan et al. [19], however, they are technically diferent.We elaborate on the connection between Theorem 11 and the work in [19] in Appendix B. We essentially show a similar result as in [19], but for a more restricted setting than the resource allocation games considered there.Nevertheless, the result in Theorem 11 allows us to fully characterize which distribution guarantee equilibrium existence, thereby completing results of Anshelevich and Sekar [4], who only partially address this question.
In particular, Anshelevich and Sekar [4] provide an example showing that for general distribution rules, pure Nash equilibria are not guaranteed to exist.On the positive side, they show that if the distribution rule has the so-called correlated coordination condition, then pure Nash equilibria are guaranteed to exist.This condition is actually the same as saying that the local distribution rule corresponds to a weighted Shapley distribution rule, and the proof of Theorem 1 in [4] is essentially a direct consequence of the work of Hart and Mas-Collel [23] who characterize the (weighted) Shapley value in terms of a (weighted) potential function.Theorem 11 allows us to precisely characterize which distribution rules guarantee (pure) equilibrium existence in symmetric coordination games, for arbitrary weight functions and individual preferences .In particular, we note that generalized weighted Shapley distribution rules (see preliminaries) still guarantee equilibrium existence.This follows from [19] by observing that these coordination games are resource allocation games (see Appendix B).We then show that these distribution rules are also necessary in a certain sense, already in the case of four colors.
Proof of Theorem 11.The idea of the proof is similar to that of Theorem 8. We again consider the graph >0 , with its connected components >0 for = 1, . . ., , and the auxiliary graph = (, ) which is deined in the same way.Again, the goal will be to show that 1) The digraph does not have a directed cycle or self-loop, i.e., it is acyclic.2) Within every component = { 1 , . . ., } there exist numbers 1 , . . ., such that + = + whenever , ∈ .For similar reasons as in the proof of Theorem 8, this is suicient to conclude that is a generalized weighted Shapley distribution rule.In order to prove conditions 1) and 2) the idea is again for both conditions to create an instance on a cycle , and derive a contradiction.For the proof here, we are trying to derive a stronger contradiction than that in the proof of Theorem 8, namely that a pure Nash equilibrium does not exist as opposed to showing that best-response dynamics is not guaranteed to converge.On the other hand, we only have coordination edges and no anti-coordination edges, which makes the situation manageable.
We again start with the proof of condition 1) for the case that contains a self-loop corresponding to a cycle = ⟨ 1 , . . ., ℎ ⟩.We start by ixing 1 2 = 1 and then repeatedly choose +1 so that for = 2, . . ., ℎ.Compared to the proof of Theorem 8 the individual preferences of the players will play an important role in this proof.Let = {1, . . ., } and let = .For some arbitrarily ixed > 0, we deine The constant is used to, roughly speaking, mimic asymmetric strategy sets.It will be important, later on, that can be chosen arbitrarily close to zero.The individual preferences for players 2 , . . ., ℎ−2 are chosen diferently.players ℎ−2 , . . ., 1 will play their preferred color.In particular, this means that 1 will play color 4 in .However, then it will be optimal for player ℎ to deviate to color 4 as well.This contradicts that is a pure Nash equilibrium.
The completes the proof that cannot have a self-loop.The case that contains a cycle proceeds along similar lines, because of the same reasons as why these cases are similar in the proof of Theorem 8.
We continue with the proof of condition 2).Set 1 2 = 1 and iteratively deine the weights so that for = 2, . . ., ℎ.We then deine the weights , where is chosen similary as in the proof of Theorem 8.All other edge weights, of edges not in , are set to zero as well as individual preferences of all nodes not in .The weights The individual preferences are set similarly as in the the proof of condition 1) above, but this time we choose , for = 2, . . ., ℎ − 2, to satisfy By choosing , used in the deinition of the individual preferences of ℎ−1 , ℎ and 1 , suiciently small, we can carry out exactly the same argument as in the proof of condition 2) of Theorem 8 in order to make sure that 1 prefers to satisfy { 1 , 2 } over satisfying { ℎ , 1 }.Using the same reasoning as in the case of condition 1) above, it can then be shown that the resulting instance has no pure Nash equilibrium.□

RESULTS FOR ASYMMETRIC COORDINATION GAMES
In this section, we present our results for asymmetric coordination games.We focus on coordination games with equal-split distribution rule and no individual preferences.

Approximate Nash Equilibria
Apt et al. [5] show that the (1, 1)-PoA of coordination games is unbounded if ≥ + 1. Notably, this holds for arbitrary graph topologies with unit weights and without individual preferences.We slightly generalize this observation.We show that the Price of Anarchy is unbounded if and only if ≥ () + 1, where () is the chromatic number of .
If < () then in every strategy proile there is at least one edge ∈ () such that its endpoints have the same color in .This show boundedness.
It remains to proof the upper bound.Consider an instance Γ = (, , ( ) ∈ , 1, , 0).Let be an (, )equilibrium and let * be an optimal strategy proile.Let = ∪ be a partition of the node set, where = { ∈ : () > 0} and = { ∈ : () = 0}.Let , ∈ and suppose that = {, } ∈ .We claim that either = 0, or is unsatisied in * .Suppose that > 0 and is satisied in * .Then, in particular, it follows that and both have a color ′ in their strategy set, i.e., ∩ ≠ ∅.Since () = () = 0, this means that they can (jointly) proitably deviate to ′ , contradicting the fact that is a -equilibrium.That is, either one the players chose ′ in , in which case the other player can deviate to ′ to improve her utility, or and can jointly deviate to ′ which is feasible because ≥ 2.
The above implies that , where ( * ) is the set of satisied edges in * .We now show that the latter summation is at most 2Δ() • (), which completes the proof.First, let ∈ and ∈ , and suppose that = {, } is satisied in * with > 0. The fact that is satisied in * implies that and have a common color ′ in their strategy sets.By deinition, we have () = 0, so it must be that • () ≥ /2 otherwise and could (jointly) proitably deviate to ′ .Secondly, let ∈ and ∈ , and suppose that = {, } is satisied in * with > 0. Similar arguments imply that either • () ≥ /2 or • () ≥ /2 (or both).
In particular, this implies that the edges in { ∈ ( * ) : > 0 and ∩ ≠ ∅} can be partitioned into sets 1 , . . ., | | deined as = {{, } : ≺ ∈ () and • () ≥ /2} for all ∈ , where ≺ is some total ordering on the nodes in .That is, in case both () ≥ /2 and () ≥ /2 we assign edge {, } to the node which is lower in the ordering ≺.Note that | | ≤ Δ().By deinition of the set , we now have that ︁ {, } ∈ ( * ):{, }∩≠∅ and >0 ≤ 2 where the last equality holds because () = 0 for all ∈ = \ .□ We now use this result to bound the (, )-Price of Anarchy for random graphs.Note that by exploiting the topological bound of Theorem 14 it suices to bound the maximum degree of the corresponding random graph.The maximum degree of random graphs drawn according to the Erdős-Rényi random graph model is well understood; see, e.g., the work of Frieze and Karonski [17].
In contrast, we obtain an improved bound for sparse random graphs.Proof.The bounds follow directly from Theorem 14 and the fact that for a random graph ∼ (, /) with = /, we have Δ( ) ≈ O (ln()/ln ln()) with high probability, see, e.g., [17,Chapter 3].□ If, in addition, the strategy sets are drawn according to a sequence of distributions that satisfy the so-called common color property, and all weights are equal to one, corresponding to the games studied in [5], then we can even prove that the (, )-Price of Anarchy is bounded by a constant.
Deinition 16 (Common color property).For a sequence of integers ( ) ∈N , we say that a sequence of probability distributions (F ) ∈N over 2 [ ] \ ∅ satisies the common color property if there exists some constant 0 > 0, independent of , such that for 1  , 2 ∼ F , inf Intuitively, the common color property requires that with positive probability any two players have a color in common in their strategy sets.In particular, this condition is satisied if we draw the strategy sets uniformly at random from 2 [ ] \ ∅ with 0 = 1 2 .We remark that in the deterministic setting the Price of Anarchy does not improve if all players have a color in common [33].
The proof of Theorem 17 relies on the following probabilistic result regarding the maximum size of a matching in Erdős-Rényi random graphs [25].Lemma 18 ([25]).Let > 0 be ixed.Then there is a constant * = * () such that where ( ) is the size of a maximum matching in .
Proof of Theorem 17.Let ( ) be a maximum matching in of size ( ).For a ixed edge = {, } ∈ ( ), the probability that the strategy sets and of players and satisfy ∩ ≠ ∅ is at least 0 by the common color property.Here we implicitly use that the strategy sets are drawn independently from the graph topology.Combining this with the lemma above, it follows that there exists a constant 0 = 0 ( 0 ) such that with high probability there exist 0 pairwise node-disjoint edges in for which the players corresponding to the endpoints have a common color in their strategy set.This follows from standard Chernof bound arguments similaryly as in the proof of Theorem 7. As a consequence, () + () ≥ 1/(2) for any (, )-equilibrium because otherwise player and could jointly deviate, as ≥ 2, to their common color.
This implies that there exists a constant 1 = 1 ( 0 , ) such that with high probability Finally, using again standard Chernof bound arguments as in the proof of Theorem 7, it follows that ( )/ ≤ 2 () with high probability.This completes the proof.□ The statement of Theorem 17 does not hold for = 1.To see this, consider the uniform distribution over strategy sets { 0 , 1 }, . . ., { 0 , }.In the strategy proile where every player picks her color diferent from 0 , at most a constant number of edges will be satisied with high probability.Thus, (, 1)-PoA ≥ for some with high probability.the same color, then the shares are 1/4 12 for player 1, and 3/4 12 for player 2. Roughly speaking, although player 3 never receives a share from edge 12 , he does in fact inluences how the edge-weight is split between players 1 and 2.
For any ixed number of colors, and sets of individual preferences, and any weight 12 , it can be shown that any better-response sequence converges to a pure Nash equilibrium.This claim follows by observing that player 3 is not inluenced by players 1 and 2 and so in any best response sequence she will appear at most once (in the corresponding step she will deviate to the color yielding the highest individual preference).After this step, players 1 and 2 will always converge to a pure Nash equilibirum as their distribution rule corresponds to a weighted Shapley distribution rule.

A.2 Hypergraph coordination games
Another natural extension would be to consider hypergraph coordination games where edges can have size larger than two as well.However, here the Price of Anarchy immediately becomes unbounded already on instances with one hyperedge of size three.This can easily be seen by constructing a symmetric instance with edge weight 1, without individual preferences, and three common colors.If all three players choose a diferent color then that strategy proile is a pure Nash equilibrium with () = 0.If all players play the same color, the resulting strategy proile * is a social optimum with ( * ) = 1.

A.3 Color-dependent edge weights
Another possible extension would be to introduce color-dependent edge weights, so that the edge-weights split between two players might can difer depending on the common color that they have.The characterizations in Theorems 8 and 11 still hold.The results in Theorems 8 and 11 are even stronger, since we can obtain the characterization already in the special case that the edge-weights are actually color-independent.However, the Price of Anarchy becomes unbounded already for symmetric coordination games on a graph with one edge.To see this claim, assume that the players forming the endpoints of the edge have no individual preferences, and that = 2.For the irst color, we set the edge weight ,1 = 0, and for the second color, we set ,2 = 1.If both players play color 1, we obtain a pure Nash equilibrium with () = 0, and if both players play color 2, we obtain the social optimum * with ( * ) = 1.
A well-known result from cooperative game theory states that for any ixed welfare function , there exist real numbers ( ) ⊆ such that where, for ⊆ 2 , : 2 → R is the welfare functions given by () = 1 if ⊆ and zero otherwise.A distribution rule is said to have a base decomposition [19] if it can be written as where (, ) is given by (, ) = 0 if ⊈ , and, if ⊆ , (, ) = where () = 0 if ∉ , and () > 0 for at least one ∈ .This is equivalent to saying that the distribution rules for the welfare functions is a generalized weighted Shapley distribution rule [19].

B.1 Clustering games as resource allocation games
For a ixed graph = ( , ∪ ), distribution rule , and ∈ N, any game in G(, , ) can be modeled as a resource allocation game.That is, for every Γ ∈ G, there exists a resource allocation game Ψ = ( , , ( ) ∈ , , ) with a one-to-one correspondence between the strategy proiles of Γ and Ψ that preserves improving moves.
Here, every resource is equipped with welfare function where = 1 if ∈ or ∈ with ( ) = 1, and = −1 if ∈ with ( ) = 0. Note that the welfare function is independent of and .Moreover, the distribution rule has a base decomposition given by .That is, the value for = {} with ∈ is always given to player , and for ∈ , the corresponding weight ∈ {−1, 1} is split among the players in according to (note that this yields an eicient distribution rule).The modeling of a clustering game as a resource allocation game is done by including many copies of a single resource, a technique also used by Gopalakrishnan et al. [19].The details of this procedure are not hard to derive and left to the reader at this point.

B.2 Interpretation of Theorem 11.
Gopalakrishnan et al. [19] show the impressive result that, for any ixed welfare function , if a distribution rule guarantees the existence of a pure Nash equilibrium in every resource allocation game ( , , ( ) ∈ , , ), for arbitrary , , and ( ) ∈ ), then the distribution rule must be a generalized weighted Shapley distribution rule.We refer the reader to [19] for the formal deinition of generalized weighted Shapley distribution rules for general resource allocation games.Moreover, any generalized weighted Shapley distribution rule guarantees pure Nash equilibrium existence [19].
Roughly speaking, they irst show that if an equilibrium is guaranteed to exist in every game where resources are equipped with welfare function , then the distribution rule must have a base-decomposition (as introduced above).They then continue by showing that generalized weighted Shapley distribution rules (which are base-decomposable by deinition) are the only ones guaranteeing existence among all base-decomposable distribution rules.
In Theorem 11 we essentially give an alternative, but also stronger, proof for this inal step of the proof of Gopalakrishnan et al. [19], in the (very) special case where is of the form (19) and > 0 for all ∈ { , }.That is, we show that if a pure Nash equilibrium is always guaranteed to exist in a coordination game with individual preferences, where there are three common strategies (colors), then the distribution rule must be a generalized weighted Shapley distribution rule.This then implies the result of Gopalakrishnan et al. [19], since coordination games with individual preferences essentially form a subclass of all resource allocation games where resources are equipped with (using the modeling of clustering games as resource allocation games mentioned before).However, Example 19 below illustrates that, in general, this is not true if < 0 for some ∈ .That is, if certain coeicients are negative, then in general it does not suice to focus on the subclass of corresponding clustering games, in order to derive that must be a generalized weighted Shapley distribution rule.In this case, one has to make use of more complex resource allocation games, i.e., more complex than clustering games with individual preferences, in order to guarantee that is a generalized weighted Shapley distribution rule (the resource allocation games used by Gopalakrishnan et al. [19] for this inal step are indeed more complex than clustering games in this case).
Example 19.Consider the instance in Figure 4, and let be some arbitrary local distribution rule.Fix arbitrary weights 12 , 23 and 31 and individual preferences for = 1, 2, 3 and ∈ = {1, . . ., ′ }. (We use ′ here to denote the number of strategies in the common strategy set instead of .)We claim that a pure Nash equilibrium always exists.
Consider the strategy profile in Figure 4 and assume without loss of generality that $a$ is the color for which player 1's individual preference is maximal, i.e., $a = \arg\max\{q_1(x) : x \in S_1\}$. If there is some profile $\sigma' = (a, b, c)$ with $b, c \neq a$ (but possibly $b = c$) in which $b$ and $c$ are best responses for players 2 and 3, respectively, then we have found a pure Nash equilibrium by definition of $a$: both anti-coordination edges incident to player 1 are satisfied, so player 1 obtains her maximum possible utility.

It now suffices to show that for any profile of the form $(a, b, c)$ in which either player 2 or player 3 plays $a$ as a best response, we can always perform a sequence of best-response moves that ends in a pure Nash equilibrium. Assume without loss of generality that $a$ is a best response for player 2, i.e., consider the profile $(a, a, c)$. This in particular implies that $a = \arg\max\{q_2(x) : x \in S_2\}$. We now consider player 3 in the profile $(a, a, c)$.

1) Player 3 only has $a$ as a best response. Then we let player 3 switch to $a$ to obtain the profile $(a, a, a)$. Note that $a$ is still a best response for player 2 as well, since $w_{23}$ is non-negative and edge $\{2, 3\}$ is now satisfied as well. To summarize, both players 2 and 3 play a best response in the profile $(a, a, a)$. Now, if player 1 has a best response different from $a$, say $b$, then in particular $a$ remains a best response for both players 2 and 3 in the profile $(b, a, a)$, since edges $\{1, 2\}$ and $\{1, 3\}$ are anti-coordination edges and their weights are non-negative. That is, $(b, a, a)$ is a pure Nash equilibrium.

2) Player 3 only has $c$ as a best response. Then both players 2 and 3 play a best response in the profile $(a, a, c)$. Now suppose player 1 has a best response different from $a$.

i) Player 1 has a best response to some color $b \neq c$. Then $a$ is still a best response for player 2 in the profile $(b, a, c)$. If player 3 now has a best response other than $c$, then it must be $a$, since otherwise he would have had a response better than $c$ in the profile $(a, a, c)$ as well. Clearly, in the profile $(b, a, a)$ both players 2 and 3 play a best response. If player 1 still has a better response, it must be $c$ (otherwise he would have had a best response different from $b$ before). The profile $(c, a, a)$ is a pure Nash equilibrium.

ii) Player 1 only has $c$ as a best response. Then $a$ is still a best response for player 2 in $(c, a, c)$. Suppose that player 3 has a best response other than $c$. If $a$ is a best response for player 3, then we reach the pure Nash equilibrium $(c, a, a)$: player 2 clearly plays a best response, and player 1 cannot have a better response, since otherwise deviating to $c$ in the profile $(a, a, c)$ would not have been a best response. Therefore, suppose player 3 has a best response different from $a$, say $b$. Then $c$ is still a best response for player 1. If player 2 has a better response than $a$, then it must be $b$, since otherwise $a$ would not have been a best response in the initial profile. Clearly, player 3 plays a best response in the profile $(c, b, b)$. If player 1 still has a better response than $c$, then it must be $a$, since otherwise $c$ would not have been a best response in the profile $(c, a, b)$. The resulting profile $(a, b, b)$ is a pure Nash equilibrium, since player 3 cannot play a better response, as otherwise $b$ would not have been a best response in the profile $(c, a, b)$.

3) Player 3 has some $b \neq a, c$ as a best response. Player 2 now cannot have a best response to some color other than $b$ in the profile $(a, a, b)$, since otherwise $a$ would not have been a best response in the initial profile $(a, a, c)$. Therefore, suppose that $b$ is a best response for player 2. Then the resulting profile $(a, b, b)$ is a pure Nash equilibrium, since player 1 has maximum possible utility, and player 3 clearly has no better response than $b$, since otherwise $b$ would not have been a best response in $(a, a, b)$. We may thus assume that we are in the profile $(a, a, b)$ in which players 2 and 3 play a best response.

i) Player 1 has a best response to some color $d \neq b$. Then either the resulting profile $(d, a, b)$ is a pure Nash equilibrium, or player 3 still has a best response to $a$, in which case the resulting profile $(d, a, a)$ is a pure Nash equilibrium.

ii) Player 1 has a best response to $b$. Then player 2 still has $a$ as a best response. Suppose that player 3 now has a better response. If it is $a$, then the resulting profile $(b, a, a)$ is a pure Nash equilibrium. Therefore, suppose that player 3 has a better response to some color $d \neq a$. Then $b$ is still a best response for player 1. If $a$ is also still a best response for player 2, then $(b, a, d)$ is a pure Nash equilibrium. Therefore, suppose that player 2 has a better response. Then this must be $d$ (by similar reasoning as before). Clearly, in the profile $(b, d, d)$ player 3 still plays a best response. If player 1 still has a better response, then it must be $a$. The profile $(a, d, d)$ is a pure Nash equilibrium.
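The existence claim established by the case analysis above can be sanity-checked by brute force. The sketch below is illustrative only: it assumes three colors and an equal-split distribution rule (both assumptions of this sketch, not of the proof) and verifies exhaustively that the triangle of Figure 4, with anti-coordination edges {1, 2} and {1, 3} and coordination edge {2, 3}, admits a pure Nash equilibrium for random non-negative weights and individual preferences.

```python
import itertools
import random

COLORS = (0, 1, 2)
PLAYERS = (1, 2, 3)

def util(i, s, w12, w13, w23, q):
    # Equal-split distribution rule: each endpoint receives half the edge weight.
    u = q[i][s[i]]                      # individual preference for own color
    if i in (1, 2) and s[1] != s[2]:
        u += 0.5 * w12                  # anti-coordination edge {1, 2} satisfied
    if i in (1, 3) and s[1] != s[3]:
        u += 0.5 * w13                  # anti-coordination edge {1, 3} satisfied
    if i in (2, 3) and s[2] == s[3]:
        u += 0.5 * w23                  # coordination edge {2, 3} satisfied
    return u

def has_pne(w12, w13, w23, q):
    # Exhaustively search all 27 profiles for a pure Nash equilibrium.
    def is_pne(s):
        return all(util(i, s, w12, w13, w23, q) >= util(i, {**s, i: c}, w12, w13, w23, q)
                   for i in PLAYERS for c in COLORS)
    return any(is_pne(dict(zip(PLAYERS, p))) for p in itertools.product(COLORS, repeat=3))

random.seed(0)
trials = [(random.random(), random.random(), random.random(),
           {i: {c: random.random() for c in COLORS} for i in PLAYERS})
          for _ in range(200)]
assert all(has_pne(*t) for t in trials)
```

Note that this only checks equilibrium existence, as in the proof; it makes no claim about the convergence of arbitrary best-response sequences.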
A clustering game is specified by the following components: • $G = (V, E)$ is an undirected graph whose set of edges $E = E_c \cup E_a$ is partitioned into coordination edges $E_c$ and anti-coordination edges $E_a$; • $c \geq 2$ is the total number of colors that are available; • $S = (S_i)_{i \in V} = \times_{i \in V} S_i$ is the cartesian product of the strategy sets, where $S_i \subseteq [c]$ with $|S_i| \geq 2$ is the subset of colors available to player $i \in V$; • $\alpha = (\alpha_{ij})$ is the distribution rule, where $\alpha_{ij} \geq 0$ specifies a split parameter for every player $i \in V$ and every incident edge $\{i, j\} \in E$; • $w = (w_e)_{e \in E}$ specifies the edge weights, where $w_e \geq 0$ is the weight of edge $e \in E$; • $q = (q_i)_{i \in V}$ defines the players' individual preferences, where $q_i : S_i \to \mathbb{R}_{\geq 0}$ is the individual preference function of player $i \in V$.
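As an illustration of how these components determine a player's payoff, the following sketch computes one player's utility; it is a minimal sketch, and the function name, data layout, and the equal-split values used in the test are hypothetical choices, not the paper's notation.

```python
# Illustrative utility computation for a clustering game: a coordination
# edge contributes when its endpoints choose the SAME color, an
# anti-coordination edge when they choose DIFFERENT colors; the share of
# edge {i, j} credited to player i is alpha[(i, j)] * w[e], on top of
# player i's individual preference for her own color.

def utility(i, sigma, E_c, E_a, alpha, w, q):
    """Utility of player i under profile sigma (a dict: node -> color)."""
    u = q[i].get(sigma[i], 0.0)              # individual preference q_i(sigma_i)
    for e in E_c + E_a:
        if i not in e:
            continue
        j = e[0] if e[1] == i else e[1]      # the other endpoint of the edge
        same = sigma[i] == sigma[j]
        if (e in E_c and same) or (e in E_a and not same):
            u += alpha[(i, j)] * w[e]        # edge e is satisfied
    return u
```

For example, on a triangle with coordination edge (2, 3) of weight 2, anti-coordination edges (1, 2) and (1, 3) of weight 1, equal splits of 0.5, and player 1 preferring "red", the profile {1: "red", 2: "blue", 3: "blue"} yields utility 2.0 for player 1 and 1.5 for players 2 and 3.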

Proof of Corollary 4.
The upper bound follows directly from Theorem 1. We prove the lower bound by constructing an instance of a coordination game as follows: Let $K \subseteq V$ be arbitrary and consider the subgraph induced on $K$. Assume without loss of generality that $K = \{1, \ldots, k\}$ with $k = |K|$. Define the set of colors as $C = \{c_1, \ldots, c_k\} \cup \{c_0\}$. We give every player $i \in K$ an individual preference of one for the colors $c_i$ and $c_0$, and of zero for all other colors. Further, the individual preferences of all nodes in $V \setminus K$ are set to zero. The weight of every edge in $E[K]$ is set to 2, and the weight of every edge in $E \setminus E[K]$ is set to zero. Consider the strategy profile $\sigma$ in which every player $i \in K$ chooses color $c_i$ and every player $i \notin K$ chooses an arbitrary color. Then $\sigma$ is a Nash equilibrium with social welfare $u(\sigma) = |K|$. On the other hand, the strategy profile $\sigma^*$ in which every player chooses color $c_0$ is a social optimum with social welfare $u(\sigma^*) = |K| + 2|E[K]|$. This implies that $u(\sigma^*)/u(\sigma) = 1 + 2|E[K]|/|K|$. The result now follows by choosing $K$ as a subset of maximum subgraph density. □
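To make the bound concrete, the following sketch brute-forces a densest subset $K$ on a small graph and evaluates the ratio $1 + 2|E[K]|/|K|$ from the construction above. The graph (a $K_4$ with a pendant vertex) is a hypothetical example chosen for illustration.

```python
import itertools

# Toy graph: a clique on {1, 2, 3, 4} plus a pendant vertex 5.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]

def induced_edges(K):
    """Edges of the subgraph induced on the vertex set K."""
    return [e for e in E if e[0] in K and e[1] in K]

# Maximum subgraph density: max over nonempty K of |E[K]| / |K|.
best_K = max((K for r in range(1, len(V) + 1)
                for K in itertools.combinations(V, r)),
             key=lambda K: len(induced_edges(K)) / len(K))
rho = len(induced_edges(best_K)) / len(best_K)

# Price of Anarchy of the constructed instance: u(sigma*) / u(sigma).
ratio = 1 + 2 * len(induced_edges(best_K)) / len(best_K)
print(best_K, rho, ratio)   # the densest subset is the K4: density 6/4 = 1.5
```

Here the densest subset is the clique {1, 2, 3, 4} with density 1.5, so the constructed coordination game has a Nash equilibrium that is a factor 4 worse than the social optimum.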

Fig. 1. This is an example with $n = 6$ players. The satisfied edges are bold, and the unsatisfied edges are dashed. The cycle on the left illustrates the initial state $\sigma^0$, the middle one the situation after player $n$ has deviated, and the right one the situation after players $n, n-1, \ldots, 3$ have deviated. The same steps are given for Case ii) in Figure 2.

Fig. 2. This is an example with $n = 6$ players. The satisfied edges are bold, and the unsatisfied edges are dashed. The cycle on the left illustrates the initial state $\sigma^0$, the middle one the situation after player $n-1$ has deviated, and the right one the situation after players $n-1, n-2, \ldots, 2$ have deviated. The same steps are given for Case i) in Figure 1.

Fig. 4. Counter-example for Theorem 1 in the case of a clustering game with anti-coordination edges.