CHAPTER 15
Cost Sharing
Kamal Jain and Mohammad Mahdian
Abstract
The objective of cooperative game theory is to study ways to enforce and sustain cooperation among agents willing to cooperate. A central question in this field is how the benefits (or costs) of a joint effort can be divided among participants, taking into account individual and group incentives, as well as various fairness properties.
In this chapter, we define basic concepts and review some of the classical results in the cooperative game theory literature. Our focus is on games that are based on combinatorial optimization problems such as facility location. We define the notion of cost sharing, and explore various incentive and fairness properties cost-sharing methods are often expected to satisfy. We show how cost-sharing methods satisfying a certain property termed cross-monotonicity can be used to design mechanisms that are robust against collusion, and study the algorithmic question of designing cross-monotonic cost-sharing schemes for combinatorial optimization games. Interestingly, this problem is closely related to linear-programming-based techniques developed in the field of approximation algorithms.
We explore this connection, and explain a general method for designing cross-monotonic cost-sharing schemes, as well as a technique for proving impossibility bounds on such schemes. We will also discuss an axiomatic approach to characterize two widely applicable solution concepts: the Shapley value for cooperative games, and the Nash bargaining solution for a more restricted framework for surplus sharing.
15.1 Cooperative Games and Cost Sharing
Consider a setting where a set A of n agents seek to cooperate in order to generate value.
The value generated depends on the coalition S of agents cooperating. In general, the set of possible outcomes of cooperation among agents in S ⊆ A is denoted by V(S), where each outcome is given by a vector in R^S, whose i'th component specifies the utility that agent i ∈ S derives in this outcome. The set A of agents along with the function V defines what is called a cooperative game (also known as a coalitional game) with nontransferable utilities (abbreviated as an NTU game). A special case, called a cooperative game with transferable utilities (abbreviated as a TU game), is when the value generated by a coalition can be divided in an arbitrary way among the agents in S. In other words, a TU game is defined by specifying a function v : 2^A → R, which gives the value v(S) ∈ R generated by each coalition S. We assume v(∅) = 0. The set of all possible outcomes in such a game is defined as V(S) = {x ∈ R^S : Σ_{i∈S} xi ≤ v(S)}.

[Figure not reproduced: two facilities, 1 and 2, each with opening cost f = 2, and three agents a, b, c; agent a is at distance 2 from facility 1, agent b at distance 2 from facility 1 and 1 from facility 2, and agent c at distance 1 from facility 2.]
Figure 15.1. An example of the facility location game.
The notion of a cooperative game was first proposed by von Neumann and Morgenstern. This notion seeks to abstract away all other aspects of the game except the combinatorial aspect of the coalitions that can form. This is in contrast with noncooperative games, where the focus is on the set of choices (moves) available to each agent.
Note that in the definition of a cooperative game, we did not restrict the values to be nonnegative.1 In fact, the case that all values are nonpositive is the focus of this chapter, as it corresponds to the problem of sharing the cost of a service among those who receive the service (this is by taking the value to be the negative of the cost). Again, the cost-sharing problem can be studied in both the TU and the NTU models. The TU model applies to settings where, for example, a service provider incurs some (monetary) cost c(S) in building a network that connects a set S of customers to the Internet, and needs to divide this cost among the customers in S. In practice, the cost function c is often defined by solving a combinatorial optimization problem. One example, which we will use throughout the chapter, is the facility location game defined below.
1 If all values are nonnegative, the problem is called a surplus sharing problem.
Definition 15.1 In the facility location game, we are given a set A of agents (also known as cities, clients, or demand points), a set F of facilities, a facility opening cost fi for every facility i ∈ F, and a distance dij between every pair (i, j) of points in A ∪ F indicating the cost of connecting j to i. We assume that the distances come from a metric space; i.e., they are symmetric and obey the triangle inequality. For a set S ⊆ A of agents, the cost of this set is defined as the minimum cost of opening a set of facilities and connecting every agent in S to an open facility. More precisely, the cost function c is defined by c(S) = min_{F′⊆F} { Σ_{i∈F′} fi + Σ_{j∈S} min_{i∈F′} dij }.
Example 15.2 Figure 15.1 shows an instance of the facility location game with 3 agents {a, b, c} and 2 facilities {1, 2}. The distances between some pairs are marked in the figure, and other distances can be calculated using the triangle inequality (e.g., the distance between facility 1 and client c is 2+1+1=4).
The cost function defined by this instance is the following:
c({a})=4, c({b})=3, c({c})=3,
c({a, b})=6, c({b, c})=4, c({a, c})=7, c({a, b, c})=8.
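These values can be checked mechanically. The following small Python sketch (not part of the original text) computes c(S) by brute force over the sets of open facilities; the opening costs and the marked distances are those of Figure 15.1 as recovered from Example 15.2 (both opening costs equal 2; a is at distance 2 from facility 1, b at distance 2 from facility 1 and 1 from facility 2, c at distance 1 from facility 2), and the remaining distances, such as the distance 5 between a and facility 2, are the shortest-path values implied by the triangle inequality.

from itertools import combinations

# Instance of Figure 15.1 (distances not marked in the figure are
# shortest-path values implied by the triangle inequality).
opening_cost = {1: 2, 2: 2}
dist = {(1, 'a'): 2, (1, 'b'): 2, (1, 'c'): 4,
        (2, 'a'): 5, (2, 'b'): 1, (2, 'c'): 1}

def cost(S):
    """Brute-force c(S): try every nonempty set of facilities to open."""
    if not S:
        return 0
    best = float('inf')
    facilities = list(opening_cost)
    for r in range(1, len(facilities) + 1):
        for opened in combinations(facilities, r):
            total = sum(opening_cost[i] for i in opened)
            total += sum(min(dist[i, j] for i in opened) for j in S)
            best = min(best, total)
    return best

for S in ['a', 'b', 'c', 'ab', 'bc', 'ac', 'abc']:
    print(set(S), cost(S))   # reproduces the values 4, 3, 3, 6, 4, 7, 8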
Since the monetary cost may be distributed arbitrarily among the agents, it is natural to model the above example as a TU game. An example of a case where the NTU model is more applicable is a network design problem where the cost incurred by an agent i in the set of agents S receiving the service corresponds to the delay this agent suffers. There are multiple designs for the network connecting the customers in S, and each design corresponds to a profile of delays that these agents will suffer. The set of possible outcomes for the coalition S is defined as the collection of all such profiles, and is denoted by C(S). As delays are nontransferable, this setting is best modeled as an NTU cost-sharing game. For another example of an NTU game, see the housing allocation problem in Section 10.3 of this book.
As most of the work on cost sharing in the algorithmic game theory literature has so far focused on TU games, this chapter is mainly devoted to such games; henceforth, by a cost-sharing game we mean a TU game, unless otherwise stated.
15.2 Core of Cost-Sharing Games
A central notion in cooperative game theory is the notion of core. Roughly speaking, the core of a cooperative game is an outcome of cooperation among all agents where no coalition of agents can all benefit by breaking away from the grand coalition. Intuitively, the core of a game corresponds to situations where it is possible to sustain cooperation among all agents in an economically stable manner.
In this section, we define the notion of core for cost-sharing games, and present two classical results on conditions for nonemptiness of the core. We show how the notion of core for TU games can be relaxed to an approximate version suitable for hard combinatorial optimization games, and observe a connection between this notion and the integrality gap of a linear programming relaxation of such problems.
15.2.1 Core of TU Games
Formally, the core of a TU cost-sharing game is defined as follows.
Definition 15.3 Let (A, c) be a TU cost-sharing game. A vector α ∈ R^A (sometimes called a cost allocation) is in the core of this game if it satisfies the following two conditions:
• Budget balance: Σ_{j∈A} αj = c(A).
• Core property: for every S ⊆ A, Σ_{j∈S} αj ≤ c(S).
Example 15.4 As an example, consider the facility location game of Example 15.2 (Figure 15.1). It is not hard to verify that the vector (4, 2, 2) lies in the core of this game. In fact, this is not the only cost allocation in the core of this game; for example, (4, 1, 3) is also in the core. On the other hand, if a third facility with opening cost 3 and distance 1 to agents a and c is added to this game, the resulting game will have an empty core. To see this, note that after adding the third facility, we have c({a, c}) = 5. Now, if there is a vector α in the core of this game, we must have
αa + αb ≤ c({a, b}) = 6
αb + αc ≤ c({b, c}) = 4
αa + αc ≤ c({a, c}) = 5.
By adding the above three inequalities and dividing both sides by 2, we obtain αa + αb + αc ≤ 7.5 < c({a, b, c}). Therefore α cannot be budget balanced.
A classical result in cooperative game theory, known as the Bondareva–Shapley theorem, gives a necessary and sufficient condition for a game to have a nonempty core.
To state this theorem, we need the following definition.
Definition 15.5 A vector λ that assigns a nonnegative weight λS to each subset S ⊆ A is called a balanced collection of weights if for every j ∈ A, Σ_{S : j∈S} λS = 1.
Theorem 15.6 A cost-sharing game (A, c) with transferable utilities has a nonempty core if and only if for every balanced collection of weights λ, we have Σ_{S⊆A} λS c(S) ≥ c(A).
proof By the definition of the core, the game (A, c) has a nonempty core if and only if the solution of the following linear program (LP) is precisely c(A). (Note that this solution can never be larger than c(A).)
Maximize Σ_{j∈A} αj
Subject to ∀S ⊆ A: Σ_{j∈S} αj ≤ c(S).   (15.1)
By strong LP duality, the solution of the above LP is equal to the solution of the following dual program:
Minimize Σ_{S⊆A} λS c(S)
Subject to ∀j ∈ A: Σ_{S : j∈S} λS = 1   (15.2)
∀S ⊆ A: λS ≥ 0.
Therefore, the core of the game is nonempty if and only if the solution of the LP (15.2) is equal to c(A). By definition, feasible solutions of this program are balanced collections of weights. Therefore, the core of the game (A, c) is nonempty if and only if for every balanced collection of weights (λS), Σ_{S⊆A} λS c(S) ≥ c(A).
As an example, the proof of emptiness of the core given in Example 15.4 can be restated by defining a vector λ as follows: λ{a,b} = λ{b,c} = λ{a,c} = 1/2 and λS = 0 for every other set S. It is easy to verify that λ is a balanced collection of weights and Σ_{S⊆A} λS c(S) < c(A).
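For small games, emptiness of the core can also be checked numerically by solving the LP (15.2) directly. The following sketch (not from the original text, and assuming SciPy is available) does this for the modified game of Example 15.4, with the third facility added; the LP optimum comes out to 7.5 < c(A) = 8, so by Theorem 15.6 the core is empty.

from scipy.optimize import linprog

agents = ['a', 'b', 'c']
# Cost function of Example 15.4 after the third facility is added.
c = {frozenset('a'): 4, frozenset('b'): 3, frozenset('c'): 3,
     frozenset('ab'): 6, frozenset('bc'): 4, frozenset('ac'): 5,
     frozenset('abc'): 8}
subsets = list(c)                      # one variable lambda_S per nonempty S

# LP (15.2): minimize sum_S lambda_S c(S)
# subject to  sum_{S : j in S} lambda_S = 1 for every agent j, lambda_S >= 0.
objective = [c[S] for S in subsets]
A_eq = [[1.0 if j in S else 0.0 for S in subsets] for j in agents]
b_eq = [1.0] * len(agents)
res = linprog(objective, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(subsets))
print(res.fun)                         # 7.5, strictly less than c(A) = 8

An optimal solution puts weight 1/2 on each of the three two-agent coalitions, which is exactly the balanced collection of weights described above.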
15.2.2 Approximate Core
As we saw in Example 15.4, a difficulty with the notion of core is that the core of many cost-sharing games, including most combinatorial optimization games based on computationally hard problems, is often empty. Furthermore, when the underlying cost function is hard to compute (e.g., in the facility location game), even deciding whether the core of the game is empty is often computationally intractable. This motivates the following definition.
Definition 15.7 A vector α ∈ R^A is in the γ-approximate core (or γ-core, for short) of the game (A, c) if it satisfies the following two conditions:
• γ-Budget balance: γ c(A) ≤ Σ_{j∈A} αj ≤ c(A).
• Core property: for every S ⊆ A, Σ_{j∈S} αj ≤ c(S).
For example, in the facility location game given in Example 15.4 (with the third facility added), the vector (3.5, 2.5, 1.5) is in the (7.5/8)-core of the game. Note that the argument given to show the emptiness of the core of this game actually proves that for every γ > 7.5/8, the γ-core of this game is empty.
The Bondareva–Shapley theorem can be easily generalized to the following approximate version.
Theorem 15.8 For every γ ≤ 1, a cost-sharing game (A, c) with transferable utilities has a nonempty γ-core if and only if for every balanced collection of weights λ, we have Σ_{S⊆A} λS c(S) ≥ γ c(A).
The proof is similar to the proof of the Bondareva–Shapley theorem and is based on the observation that by LP duality, the γ-core of the game is nonempty if and only if the solution of the LP (15.2) is at least γ c(A). Note that if the cost function c is subadditive (i.e., c(S1 ∪ S2) ≤ c(S1) + c(S2) for any two disjoint sets S1 and S2), then the optimal integral solution of the LP (15.2) is precisely c(A). Therefore,
Corollary 15.9 For any cost-sharing game (A, c) with a subadditive cost function, the largest value γ for which the γ-core of this game is nonempty is equal to the integrality gap of the LP (15.2).
As it turns out, for many combinatorial optimization games such as set cover, vertex cover, and facility location, the LP formulation (15.2) is in fact equivalent to the standard (polynomial-size) LP formulation of the problem, and hence Corollary 15.9 translates into a statement about the integrality gap of the standard LP formulation of the problem. Here, we show this connection for the facility location game.
We start by formulating the facility location problem as an integer program. In this formulation, xi and yij are variables indicating whether facility i is open, and whether agent j is connected to facility i.
Minimize Σ_{i∈F} fi xi + Σ_{i∈F} Σ_{j∈A} dij yij
Subject to ∀j ∈ A: Σ_{i∈F} yij ≥ 1
∀i ∈ F, j ∈ A: xi ≥ yij
∀i ∈ F, j ∈ A: xi, yij ∈ {0, 1}.   (15.3)
By relaxing the integrality constraints to xi, yij ≥ 0 we obtain an LP whose dual can be written as follows:
Maximize Σ_{j∈A} αj
Subject to ∀i ∈ F, j ∈ A: βij ≥ αj − dij
∀i ∈ F: Σ_{j∈A} βij ≤ fi
∀i ∈ F, j ∈ A: αj, βij ≥ 0.   (15.4)
Note that we may assume without loss of generality that in a feasible solution of the above LP, βij = max(0, αj − dij). Thus, to specify a dual solution it is enough to give α. We now observe that the above dual LP captures the core constraint of the facility location game; i.e., it is equivalent to the LP (15.1).
Proposition 15.10 For any feasible solution (α, β) of the LP (15.4), α satisfies the core property of the facility location game.
proof We need to show that for every set S ⊆ A, Σ_{j∈S} αj ≤ c(S), where c(S) is the cost of the facility location problem for agents in S. First we note that for any facility i and set of agents R ⊆ A, by adding the first and the second inequalities of the LP (15.4) for facility i and every j ∈ R we obtain
Σ_{j∈R} αj ≤ fi + Σ_{j∈R} dij.   (15.5)
Now, consider an optimal solution for the set of agents S, and assume i1, . . . , ik are the facilities that are open and Rl is the set of agents served by facility il in this solution. Summing Inequality (15.5) for every pair (il, Rl), l = 1, . . . , k, yields the result.
By the above proposition, the solution of the dual LP (15.4) (which, by LP duality, is the same as the LP relaxation of (15.3)) is equal to the solution of the LPs (15.1) and (15.2). Furthermore, the optimal integral solution of (15.3) is c(A). Therefore, the integrality gap of (15.3) is the same as that of (15.2) and gives the best budget balance factor that a cost allocation satisfying the core property can achieve. The best known results in the field of approximation algorithms show that this gap (in the worst case) is between 1.463 and 1.52.
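To make this connection concrete, one can solve the dual LP (15.4) for the instance of Figure 15.1 and read cost shares off an optimal α. The sketch below is illustrative only and not from the original text (it assumes SciPy and the distances stated earlier for this instance); for this instance the LP value equals c(A) = 8, so the α returned is a budget-balanced cost allocation satisfying the core property.

from scipy.optimize import linprog

facilities, agents = [1, 2], ['a', 'b', 'c']
f = {1: 2, 2: 2}
d = {(1, 'a'): 2, (1, 'b'): 2, (1, 'c'): 4,
     (2, 'a'): 5, (2, 'b'): 1, (2, 'c'): 1}

# Variables: alpha_j for each agent, followed by beta_ij for each pair (i, j).
n = len(agents)
pairs = [(i, j) for i in facilities for j in agents]
num_vars = n + len(pairs)
a_idx = {j: k for k, j in enumerate(agents)}
b_idx = {p: n + k for k, p in enumerate(pairs)}

obj = [-1.0] * n + [0.0] * len(pairs)          # maximize sum_j alpha_j
A_ub, b_ub = [], []
for (i, j) in pairs:                           # alpha_j - beta_ij <= d_ij
    row = [0.0] * num_vars
    row[a_idx[j]], row[b_idx[(i, j)]] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(d[(i, j)])
for i in facilities:                           # sum_j beta_ij <= f_i
    row = [0.0] * num_vars
    for j in agents:
        row[b_idx[(i, j)]] = 1.0
    A_ub.append(row)
    b_ub.append(f[i])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * num_vars)
alpha = dict(zip(agents, res.x[:n]))
print(alpha, -res.fun)                         # cost shares summing to 8 = c(A)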
15.2.3 Core of NTU Games
We conclude this section with a classical theorem due to Scarf, which gives a sufficient condition for the nonemptiness of the core of NTU games similar to the one given in Theorem 15.6 for TU games. However, unlike in the case of TU games, the condition given in Scarf’s theorem is not necessary for the nonemptiness of the core.
Formally, the core of an NTU cost-sharing game (A, C) is the collection of all cost allocations α ∈ C(A) such that there is no nonempty coalition S ⊆ A and cost allocation x ∈ C(S) for which xj < αj for all j ∈ S. Note that this definition coincides with Definition 15.3 in the case of a TU game. In the following theorem the support of a balanced collection of weights λ denotes the collection of all sets S with λS > 0.
Theorem 15.11 Let (A, C) be a cost-sharing game with nontransferable utilities. Assume for every balanced collection of weights λ and every vector x ∈ R^A the following property holds: if for every set S in the support of λ, the restriction of x to the coordinates in S is in C(S), then x ∈ C(A). Then (A, C) has a nonempty core.
The proof of the above theorem, which is beyond the scope of this chapter, uses an adaptation of the Lemke–Howson algorithm for computing Nash equilibria (described in Section 3.4 of this book), and is considered an early and important contribution of the algorithmic viewpoint in game theory. However, the worst case running time of this algorithm (like the Lemke–Howson algorithm) is exponential in |A|. This is in contrast to the proof of the Bondareva–Shapley theorem, which gives a polynomial-time algorithm2 for computing a point in the core of the game, if the core is nonempty.
15.3 Group-Strategyproof Mechanisms and Cross-Monotonic Cost-Sharing Schemes
The cost-sharing problem defined in this chapter models the pricing problem for a service provider with a given set of customers. In settings where the demand is sensitive to the price, an alternative choice for the service provider is to conduct an auction between the potential customers to select the set of customers who can receive the service based on their willingness to pay and the cost structure of the problem. The goal is to design an auction mechanism that provides incentives for individuals as well as groups of agents to bid truthfully. In this section, we study this problem and exhibit its connection to cost sharing.
We start with the definition of the setting. Let A be a set of n agents interested in receiving a service. The cost of providing the service is given by a cost function c : 2^A → R+ ∪ {0}, where c(S) specifies the cost of providing the service for agents in S. Each agent i has a value ui ∈ R for receiving the service; that is, she is willing to pay at most ui to get the service. We further assume that the utility of agent i is given by ui qi − xi,
2 This is assuming a suitable representation for the cost function c, e.g., by a separation oracle for (15.1), or in combinatorial optimization games where statements like Proposition 15.10 hold.
where qi is an indicator variable that indicates whether she has received the service or not, and xi is the amount she has to pay. A cost-sharing mechanism is an algorithm that elicits a bid bi ∈ R from each agent, and based on these bids, decides which agents should receive the service and how much each of them has to pay. More formally, a cost-sharing mechanism is a function that associates to each vector b of bids a set Q(b) ⊆ A of agents to be serviced, and a vector p(b) ∈ R^n of payments. When there is no ambiguity, we write Q and p instead of Q(b) and p(b), respectively. We assume that a mechanism satisfies the following conditions:
• No Positive Transfer (NPT): The payments are nonnegative (i.e., pi ≥ 0 for all i).
• Voluntary Participation (VP): An agent who does not receive the service is not charged (i.e., pi = 0 for i ∉ Q), and an agent who receives the service is not charged more than her bid (i.e., pi ≤ bi for i ∈ Q).
• Consumer Sovereignty (CS): For each agent i, there is some bid b*i such that if i bids b*i, she will get the service, no matter what others bid.
Furthermore, we want the mechanisms to be approximately budget-balanced. We call a mechanism γ-budget-balanced with respect to the cost function c if the total amount the mechanism charges the agents is between γ c(Q) and c(Q) (i.e., γ c(Q) ≤ Σ_{i∈Q} pi ≤ c(Q)).
We look for mechanisms, called group-strategyproof mechanisms, which satisfy the following property in addition to NPT, VP, and CS. Let S ⊆ A be a coalition of agents, and u, u′ be two vectors of bids satisfying ui = u′i for every i ∉ S (we think of u as the values of the agents, and u′ as a vector of strategically chosen bids). Let (Q, p) and (Q′, p′) denote the outputs of the mechanism when the bids are u and u′, respectively.
A mechanism is group strategyproof if for every coalition S of agents, whenever the inequality ui q′i − p′i ≥ ui qi − pi holds for every i ∈ S (where qi and q′i indicate membership in Q and Q′), it holds with equality for every i ∈ S. In other words, there should not be any coalition S and vector u′ of bids such that if the members of S announce u′ instead of u (their true values) as their bids, then every member of the coalition S is at least as happy as in the truthful scenario, and at least one member is strictly happier.
Moulin showed that cost-sharing methods satisfying an additional property termed cross-monotonicity can be used to design group-strategyproof cost-sharing mechanisms. The cross-monotonicity property captures the notion that agents should not be penalized as the serviced set grows. To define this property, we first need to define the notion of a cost-sharing scheme.
Definition 15.12 Let (A, c) denote a cost-sharing game. A cost-sharing scheme is a function that for each set S ⊆ A, assigns a cost allocation for S. More formally, a cost-sharing scheme is a function ξ : A × 2^A → R such that, for every S ⊆ A and every i ∉ S, ξ(i, S) = 0. We say that a cost-sharing scheme ξ is γ-budget balanced if for every set S ⊆ A, we have γ c(S) ≤ Σ_{i∈S} ξ(i, S) ≤ c(S).
Definition 15.13 A cost-sharing scheme ξ is cross-monotone if for all S, T ⊆ A and i ∈ S, ξ(i, S) ≥ ξ(i, S ∪ T).
Mechanism Mξ
Initialize S ← A.
Repeat
    Let S ← {i ∈ S : bi ≥ ξ(i, S)}.
Until for all i ∈ S, bi ≥ ξ(i, S).
Return Q = S and pi = ξ(i, S) for all i.
Figure 15.2. Moulin’s group-strategyproof mechanism.
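The iteration of Figure 15.2 is easy to implement once a cross-monotonic scheme is available. The following Python sketch (not from the original text; the scheme is passed in as a function xi(i, S)) mirrors the mechanism: it repeatedly drops the agents whose bids fall below their current cost share and charges the surviving agents ξ(i, S).

def moulin_mechanism(agents, bids, xi):
    """Moulin's mechanism M_xi of Figure 15.2.

    agents: iterable of agent identifiers
    bids:   dict mapping each agent to her bid b_i
    xi:     cross-monotonic cost-sharing scheme; xi(i, S) is the cost share
            of agent i when the serviced set is the frozenset S
    Returns the serviced set Q and the payment vector p."""
    S = frozenset(agents)
    while True:
        kept = frozenset(i for i in S if bids[i] >= xi(i, S))
        if kept == S:                 # every remaining agent can pay her share
            break
        S = kept
    payments = {i: (xi(i, S) if i in S else 0.0) for i in agents}
    return S, payments

By Proposition 15.15 below, the loop stops at the unique maximal set S with bi ≥ ξ(i, S) for all i ∈ S, which is exactly the set the mechanism is supposed to return.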
The following proposition shows that cross-monotonicity is a stronger requirement than the core property.
Proposition 15.14 Let ξ be a γ-budget-balanced cross-monotonic cost-sharing scheme for the cost-sharing game (A, c). Then ξ(·, A) is in the γ-core of this game.
proof We need to verify only that ξ(·, A) satisfies the core property, i.e., for every set S ⊆ A, Σ_{i∈S} ξ(i, A) ≤ c(S). By the cross-monotonicity property, for every i ∈ S, ξ(i, A) ≤ ξ(i, S). Therefore, Σ_{i∈S} ξ(i, A) ≤ Σ_{i∈S} ξ(i, S) ≤ c(S), where the last inequality follows from the γ-budget balance property of ξ.
Given a cross-monotonic cost-sharing scheme ξ for the cost-sharing game (A, c), we define a cost-sharing mechanism Mξ as presented in Figure 15.2. The following proposition provides an alternative way to view the mechanism Mξ.
Proposition 15.15 Assume ξ is a cross-monotonic cost-sharing scheme for the cost-sharing game (A, c), and bi ∈ R+ ∪ {0} for every i ∈ A. Then there is a unique maximal set S ⊆ A satisfying the property that for every i ∈ S, bi ≥ ξ(i, S). The mechanism Mξ returns this set.
proof Assume that two different maximal sets S1 and S2 satisfy the stated property, i.e., bi ≥ ξ(i, S1) for every i ∈ S1 and bi ≥ ξ(i, S2) for every i ∈ S2. Then for every i ∈ S1, bi ≥ ξ(i, S1) ≥ ξ(i, S1 ∪ S2), where the last inequality follows from cross-monotonicity of ξ. Similarly, for every i ∈ S2, bi ≥ ξ(i, S1 ∪ S2). Therefore, the set S1 ∪ S2 also satisfies this property. This contradicts the maximality of S1 and S2.
Let S* denote the unique maximal set satisfying bi ≥ ξ(i, S*) for all i ∈ S*. We claim that Mξ never eliminates any of the agents in S* from the serviced set S. Consider, for contradiction, the first step where it eliminates an agent i ∈ S* from the serviced set S. This means that we must have bi < ξ(i, S). However, since S* ⊆ S, by cross-monotonicity we have ξ(i, S) ≤ ξ(i, S*), and hence bi < ξ(i, S*), contradicting the definition of S*. Therefore, the set Q returned by Mξ contains S*. By maximality of S*, it cannot contain any other agent, that is, Q = S*.
We are now ready to prove the following theorem of Moulin.
Theorem 15.16 If ξ is a γ-budget-balanced cross-monotonic cost-sharing scheme, then Mξ is group-strategyproof and γ-budget balanced.
proof Assume, for contradiction, that there is a coalition T of agents that benefits from bidding according to the vector u′ instead of their true values u.
Agents in T can be partitioned into two sets T+ and T− based on whether their bid in u′ is greater than their bid in u, or not. First, we claim that it can be assumed, without loss of generality, that T+ is empty, i.e., agents cannot benefit from overbidding. To see this, we start from a bid vector where every agent in T bids according to u′ (and others bid truthfully), and reduce the bids of the agents in T+ to their true values one by one. If at any step, e.g., when the bid of agent i ∈ T+ is reduced from u′i to ui, the outcome of the auction changes, then by Proposition 15.15, i must be a winner when she bids according to u′, and not a winner when she bids according to u. This means that u′i ≥ ξ(i, Si) > ui, where Si is the set of winners when i bids according to u′. However, this means that the agent i must pay an amount greater than her true value in the scenario where every agent in T bids according to u′. This is in contradiction with the assumption that agents in T all benefit from the collusion. By this argument, the bid of every agent in T+ can be lowered to her true value without changing the outcome of the auction. Therefore, we assume without loss of generality that T+ is empty.
Now, let S′ and S denote the sets of winners in the untruthful and the truthful scenarios (i.e., when agents bid according to u′ and u), respectively. As the bid of each agent in u′ is less than or equal to her bid in u, by Proposition 15.15, S′ ⊆ S. By cross-monotonicity, this implies that the payment of each agent in the untruthful scenario is at least as much as her payment in the truthful scenario. Therefore, no agent can be strictly happier in the untruthful scenario than in the truthful scenario.
Moulin’s theorem shows that cross-monotonic cost-sharing schemes give rise to group-strategyproof mechanisms. An interesting question is whether the converse also holds, i.e., is there a way to construct a cross-monotonic cost-sharing scheme given a group-strategyproof mechanism? The answer to this question is negative (unless the cost function is assumed to be submodular), as there are examples where a cost function has a group-strategyproof mechanism but no cross-monotonic cost-sharing scheme. A partial characterization of the cost-sharing schemes that correspond to group-strategyproof mechanisms in terms of a property called semi-cross-monotonicity is known; however, finding a complete characterization of cost-sharing schemes arising from group-strategyproof mechanisms remains an open question.
15.4 Cost Sharing via the Primal-Dual Schema
In Section 15.2.2, we discussed how a cost allocation in the approximate core of a game can be computed by solving an LP, and noted that for many combinatorial optimization
games, this LP is equivalent to the dual of the standard LP relaxation of the problem and the cost shares correspond to the dual variables. In this section, we explain how a technique called the primal-dual schema can be used to compute cost shares that not only are in the approximate core of the game, but also satisfy the cross-monotonicity property, and hence can be used in the mechanism described in the previous section. The primal-dual schema is a standard technique in the field of approximation algorithms, where the focus is on computing an approximately optimal primal solution, and the dual variables (cost shares) are merely a by-product of the algorithm.
The idea of the primal-dual schema, which is often used to solve cost minimization problems, is to write the optimization problem as a mathematical program that can be relaxed into an LP (we refer to this LP as the primal LP). The dual of this LP gives a lower bound on the value of the optimal solution for the problem. Primal-dual algorithms simultaneously construct a solution to the primal problem and its dual.
This is generally done by initially setting all dual variables to zero, and then gradually increasing these variables until some constraint in the dual program goes tight. This constraint hints at an object that can be paid for by the dual to be included in the primal solution. After this, the dual variables involved in the tight constraint are frozen, and the algorithm continues by increasing other dual variables. The algorithm ends when a complete solution for the primal problem is constructed. The analysis is based on proving that the values of the constructed primal and dual solutions are close to each other, and therefore they are both close to optimal.3
We will elaborate on two examples in this section: submodular games, where a simple primal-dual algorithm with no modification yields cross-monotonic cost-shares, and the facility location game, where extra care needs to be taken to obtain a cross-monotonic cost-sharing scheme. In the latter case, we introduce a rather general technique of using “ghost duals” to turn the standard primal-dual algorithm for the problem into an algorithm that returns a cross-monotonic cost-sharing scheme.
15.4.1 Submodular Games
Let us start with a definition of submodular games.
Definition 15.17 A cost-sharing game (A, c) is called a submodular game if the cost function c satisfies
∀S, T ⊆ A: c(S) + c(T) ≥ c(S ∪ T) + c(S ∩ T).
The above condition is equivalent to the condition of decreasing marginal cost, which says that for every two agents i and j and every set of agents S ⊂ A \ {i, j}, the marginal cost of adding i to S (i.e., c(S ∪ {i}) − c(S)) is no less than the marginal cost of adding i to S ∪ {j} (i.e., c(S ∪ {i, j}) − c(S ∪ {j})). Recall that we always assume c(∅) = 0.
3 In many primal-dual algorithms, a postprocessing step is required to bring the cost of the primal solution down.
However, this step often does not change the cost shares.
Submodular games (also known as concave games) constitute an important class of cost-sharing games with many interesting properties. One example in this class is the multicast problem discussed in Section 14.2.2 of this book.
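Definition 15.17 is easy to test numerically on small instances; the sketch below is illustrative only and not from the original text (the cost function is given as a dictionary indexed by frozensets), and checks the submodularity inequality for all pairs of subsets.

from itertools import chain, combinations

def all_subsets(ground):
    ground = list(ground)
    return [frozenset(s) for s in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]

def is_submodular(c, ground):
    """Check c(S) + c(T) >= c(S | T) + c(S & T) for all S, T (Definition 15.17)."""
    subsets = all_subsets(ground)
    return all(c[S] + c[T] >= c[S | T] + c[S & T]
               for S in subsets for T in subsets)

# A two-agent example: c({1}) = c({2}) = 3, c({1, 2}) = 4 is submodular,
# while raising c({1, 2}) to 7 would violate decreasing marginal costs.
c = {frozenset(): 0, frozenset({1}): 3, frozenset({2}): 3, frozenset({1, 2}): 4}
print(is_submodular(c, [1, 2]))   # True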
Consider a submodular game (A, c), and the LPs (15.2) and (15.1) as the primal and the dual programs for this game, respectively. It is not hard to see that by submodularity of c, the solution of the primal program is always c(A), giving a trivial optimal solution for this LP. However, the dual LP (15.1) is nontrivial and its optimal solutions correspond to cost allocations in the core of the game. Let α be a feasible solution of this LP. We say that a set S ⊆ A is tight if the corresponding inequality in the LP is tight, i.e., if Σ_{j∈S} αj = c(S). We need the following lemma to describe the algorithm.
Lemma 15.18 Let α be a feasible solution of the linear program (15.1). If two sets S1, S2 ⊆ A are tight, then so is S1 ∪ S2.
proof Since α is feasible, we have Σ_{j∈S1∩S2} αj ≤ c(S1 ∩ S2). This, together with the submodularity of c and the tightness of S1 and S2, implies
c(S1 ∪ S2) ≤ c(S1) + c(S2) − c(S1 ∩ S2)
           ≤ Σ_{j∈S1} αj + Σ_{j∈S2} αj − Σ_{j∈S1∩S2} αj
           = Σ_{j∈S1∪S2} αj.
Therefore, S1 ∪ S2 is tight.
Corollary 15.19 There is a unique maximal tight set. It is simply the union of all the tight sets.
We are now ready to state the algorithm that computes the cost shares. This algorithm is presented in Figure 15.3. Notice that by Lemma 15.18, when a new set goes tight, the new maximal tight set contains the old one, and therefore once an element i ∈ T is included in the frozen set F, it will stay in this set until the end of the algorithm. Thus, the cost share αj at the end of the algorithm is precisely the first time at which the element j is frozen. Furthermore, note that the algorithm never allows α to become an infeasible solution of the LP (15.1), and stops only when the set T goes tight. Hence, the cost shares computed by the algorithm satisfy the budget balance and the core property. All that remains is to show that they also satisfy the cross-monotonicity property.
Algorithm SubmodularCostShare
Input: a submodular cost-sharing game (A, c), and a set T ⊆ A of agents that receive the service
Output: cost shares αj for every j ∈ T
For every j, initialize αj to 0.
Let F = ∅.
While T \ F ≠ ∅ do
    Increase all αj's for j ∈ T \ F at the same rate, until a new set goes tight.
    Let F be the maximal tight set.
Figure 15.3. Algorithm for computing cost shares in a submodular game.

Theorem 15.20 The cost-sharing scheme defined by the algorithm SubmodularCostShare (Figure 15.3) is cross-monotone.

proof Let T1 ⊂ T2 ⊆ A. We simultaneously run the algorithm for T1 and T2, and call these two runs the T1-run and the T2-run. It is enough to show that at any moment, the set of frozen elements in the T1-run is a subset of that of the T2-run. Consider a time t (i.e., the moment when all unfrozen cost shares in both runs are equal to t), and let α^l and F^l denote the values of the variables and the frozen set at this moment in the Tl-run, for l = 1, 2. We have
c(F^1 ∪ F^2) ≤ c(F^1) + c(F^2) − c(F^1 ∩ F^2)
            ≤ Σ_{i∈F^1} α^1_i + Σ_{i∈F^2} α^2_i − Σ_{i∈F^1∩F^2} α^1_i
            = Σ_{i∈F^1\F^2} α^1_i + Σ_{i∈F^2} α^2_i
            ≤ Σ_{i∈F^1∪F^2} α^2_i,
where the first inequality follows from the submodularity of c, the second follows from the tightness of F^l with respect to α^l (l = 1, 2) and the feasibility of α^1, and the last follows from the fact that for every i ∈ F^1 \ F^2, since i ∈ T1 ⊂ T2 and i is not frozen at time t in the T2-run, we have α^2_i = t ≥ α^1_i. The above inequality implies that F^1 ∪ F^2 is tight with respect to α^2. Since by definition F^2 is the maximal tight set with respect to α^2, we must have F^1 ∪ F^2 = F^2. Hence, F^1 ⊆ F^2, as desired.
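For completeness, here is an illustrative Python rendering of the algorithm of Figure 15.3 (not from the original text). It simulates the continuous process event by event: at any moment all unfrozen shares equal the current time, so a set S goes tight at time (c(S) − Σ_{j∈S∩F} αj)/|S \ F|, and the algorithm jumps to the earliest such time. The enumeration over all subsets is exponential and meant only for small examples.

from itertools import chain, combinations

def nonempty_subsets(T):
    T = list(T)
    return [frozenset(s) for s in chain.from_iterable(
        combinations(T, r) for r in range(1, len(T) + 1))]

def submodular_cost_share(c, T, eps=1e-9):
    """SubmodularCostShare (Figure 15.3) for a submodular cost function c,
    given as a function on frozensets.  Returns the shares alpha_j, j in T."""
    T = frozenset(T)
    alpha = {j: 0.0 for j in T}
    frozen = frozenset()
    while frozen != T:
        # Earliest time at which some set containing an unfrozen agent goes tight.
        times = [((c(S) - sum(alpha[j] for j in S & frozen)) / len(S - frozen), S)
                 for S in nonempty_subsets(T) if S - frozen]
        t, witness = min(times, key=lambda pair: pair[0])
        for j in T - frozen:              # all unfrozen shares have grown to t
            alpha[j] = t
        tight = [S for S in nonempty_subsets(T)
                 if abs(sum(alpha[j] for j in S) - c(S)) <= eps]
        frozen = witness.union(*tight)    # maximal tight set (Corollary 15.19)
    return alpha

# Example: the concave (hence submodular) cost c(S) = min(|S|, 2)
c = lambda S: min(len(S), 2)
print(submodular_cost_share(c, {1, 2}))      # shares 1.0 and 1.0
print(submodular_cost_share(c, {1, 2, 3}))   # each share drops to 2/3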
15.4.2 The Facility Location Game
We now turn to our second example, the facility location game, and observe how the standard primal-dual scheme for this problem fails to satisfy cross-monotonicity.
Recall the LP formulation (15.3) of the facility location problem, and the observation that one can assume, without loss of generality, that in a solution to this program, we have βij = max(0, αj − dij). In designing a primal-dual algorithm for the facility location game, we think of the quantity max(0, αj − dij) as the contribution of agent j toward facility i, and say that a facility i is tight with respect to the dual solution α if the total contribution i receives in α equals its opening cost, i.e., if Σ_{j∈A} max(0, αj − dij) = fi.
Following the general paradigm of the primal-dual schema, let us consider the following algorithm for computing cost shares in the facility location game: initialize the cost shares αj to zero and gradually increase them until one of these two events occurs: a facility i goes tight, in which case this facility is opened, and the cost shares of all agents j with positive contribution to i are frozen (i.e., no longer increased); or for an agent j and a facility i that is already opened, αj = dij, in which case the cost share of j is frozen. This process continues until all cost shares are frozen.
To illustrate this algorithm, consider the facility location game in Example 15.2 (Figure 15.1). In this example, at time 2, facility 2 goes tight as each of b and c makes one unit of contribution toward this facility. Therefore, the cost shares of b and c are frozen at 2. The cost share of the agent a continues to increase to 4, at which point facility 1 also goes tight and the algorithm stops. In this example, the cost allocation (4, 2, 2) computed by the algorithm is budget balanced and satisfies the core property.
In fact, it is known that in every instance, with a postprocessing step that closes some of the open facilities, the cost of the primal solution can be brought down to at most 3 times the sum of cost shares, and therefore the cost shares are 1/3-budget-balanced on every instance of the facility location game. However, unfortunately the cross-monotonicity property fails, as can be seen in the example in Figure 15.1. In this example, if only agents a and b are present, they will both increase their cost shares to 3, at which point both facilities go tight and the algorithm stops. Hence, agent a has a smaller cost share in the set {a, b} than in {a, b, c}.
Intuitively, the reason for the failure of cross-monotonicity in the above example is that without c, b helps a pay for the cost of facility 1. However, when c is present, c helps b pay for the cost of facility 2. This, in turn, hurts a, as b stops helping a as soon as facility 2 is opened. This suggests the following way to fix the problem: we modify the algorithm so that even after an agent is frozen, she continues to grow her ghost cost share. This ghost cost share is not counted toward the final cost share of the agent, but it can help other agents pay for the opening cost of facilities. For example, in the instance in Figure 15.1 when all three agents are present, even though agents b and c stop increasing their cost shares at time 2, their ghost shares continue to grow, until at time 3, facility 1 is opened with contributions from agents a and b. At this point, the cost share of agent a is also frozen and the algorithm terminates. The final cost shares will be 3, 2, and 2. The pseudo-code of this algorithm is presented in Figure 15.4. The variables ᾱ in this pseudo-code represent the ghost cost shares.
With this modification, it is an easy exercise to show that the cost shares computed by the algorithm are cross-monotone. However, it is not clear that the budget balance property is preserved. In fact, it is natural to expect that having ghost cost shares that contribute toward opening facilities in the primal solution but do not count toward the final cost shares could hurt the budget balance (see, e.g., Exercise 15.3). For the facility location problem, the following theorem shows that this is not the case.
Theorem 15.21 The cost allocation computed by the algorithm FLCostShare (Figure 15.4) is 1/3-budget balanced.
Algorithm FLCostShare
Input: facility location game (A, c) defined by facility opening costs fi and distances dij, and a set T ⊆ A of agents that receive the service
Output: cost shares αj for every j ∈ T
For every j, initialize both αj and ᾱj to 0.
Let F = ∅.
While T \ F ≠ ∅ do
    Increase all αj's for j ∈ T \ F and ᾱj's for j ∈ T at the same rate, until
    • for an unopened facility i, Σ_{j∈T} max(0, ᾱj − dij) = fi: in this case, open facility i, and add every agent j with a positive contribution toward i to F;
    • for an open facility i and an agent j, αj = dij: in this case, add j to F.
Figure 15.4. Algorithm for computing cost shares in the facility location game.
proof It is enough to show that for every instance of the facility location problem, it is possible to construct a solution whose cost is at most three times the sum of the cost shares computed by FLCostShare. To do this, we perform the following postprocessing step on the solution computed by FLCostShare.
Let ti denote the time at which facility i is opened by FLCostShare, and order all facilities that are opened by this algorithm in the order of increasing ti's. We proceed in this order and for any facility i, check if there is any open facility i′ that comes before i in this order and is within distance 2ti of i. If such a facility exists, we close facility i; otherwise, we keep it open. After processing all facilities in this order, let F∗ denote the set of facilities that remain open, and connect each agent in T to its closest facility in F∗. We now show that Σ_{j∈T} αj is enough to pay for at least one third of the cost of this solution.
Let Si denote the set of agents within distance ti of facility i. First, observe that for any two facilities i and i′ in F∗, Si and Si′ are disjoint. This is because if there is an agent j in Si ∩ Si′, the distance between i and i′ is at most ti + ti′ ≤ 2 max(ti, ti′), and therefore one of i and i′ must have been closed in the above postprocessing step. To complete the proof, it is enough to show two statements. First, we show that for every facility i ∈ F∗, the cost shares of the agents in Si are enough to pay for at least a third of their distances to i (and hence, to their closest facility in F∗) plus the opening cost of i. Second, we show that each agent j that is not in ∪_{i∈F∗} Si can pay for at least one third of its distance to its closest facility in F∗.
To prove the first statement, note that for every facility i ∈ F∗, the ghost cost share of every agent contributing to i at the time of its opening is precisely ti; hence, Σ_{j∈Si} (ti − dij) = fi. Therefore, if we show that for every j ∈ Si, αj ≥ ti/3, we would get Σ_{j∈Si} αj ≥ (1/3)(fi + Σ_{j∈Si} dij). Assume, for contradiction, that there is an agent j with αj < ti/3 and consider the facility i1 with t_{i1} = αj (i.e., the facility whose opening has caused the cost share of j to freeze). There must be a facility i2 ∈ F∗ that is within distance 2t_{i1} of i1 (if i1 ∈ F∗, we can let i2 = i1). Therefore, the distance between i and i2 is at most dij + d_{i1 j} + 2t_{i1} ≤ ti + 3αj < 2ti. This contradicts the assumption that i ∈ F∗, since i comes after i2 in the ordering and i2 ∈ F∗.
To show the second statement, consider an agent j ∈ T \ ∪_{i∈F∗} Si, and let i be the facility with ti = αj. There must be a facility i′ in F∗ that is within distance 2ti of i (i′ can be the same as i). Therefore, the distance from j to its closest facility in F∗ is at most dij + 2ti ≤ 3αj.
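An illustrative Python rendering of FLCostShare (not from the original text) is given below. It simulates the process event by event: every ghost share equals the current time t, so an unopened facility i goes tight at the smallest t with Σ_{j∈T} max(0, t − dij) = fi, a value that can be computed by sorting the distances from i to the agents in T.

def fl_cost_share(f, d, T):
    """FLCostShare (Figure 15.4): cross-monotonic cost shares for facility
    location.  f: facility -> opening cost (assumed positive, at least one
    facility); d: (facility, agent) -> distance; T: agents to be served.
    Illustrative sketch only."""
    T = list(T)
    alpha = {}                              # freeze time = final cost share
    unopened, opened = set(f), set()

    def opening_time(i):
        # Smallest t with sum_j max(0, t - d[i, j]) = f[i] (ghosts of all of T).
        dists = sorted(d[i, j] for j in T)
        prefix = 0.0
        for k, dk in enumerate(dists, start=1):
            prefix += dk
            t = (f[i] + prefix) / k
            if t >= dk and (k == len(dists) or t <= dists[k]):
                return t
        return float('inf')

    t = 0.0
    while len(alpha) < len(T):
        events = [(opening_time(i), 'open', i) for i in unopened]
        events += [(max(t, d[i, j]), 'freeze', j)
                   for i in opened for j in T if j not in alpha]
        t, kind, x = min(events, key=lambda e: e[0])
        if kind == 'open':
            unopened.remove(x)
            opened.add(x)
            for j in T:          # freeze agents with positive ghost contribution
                if j not in alpha and t - d[x, j] > 0:
                    alpha[j] = t
        else:
            alpha[x] = t
    return alpha

On the instance of Figure 15.1 with T = {a, b, c} this returns the shares 3, 2, 2 computed above, and with T = {a, b} it returns 3, 3, matching the discussion earlier in this section.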
15.5 Limitations of Cross-Monotonic Cost-Sharing Schemes
As we saw in Section 15.3, a cost-sharing scheme that is cross-monotone also satisfies the core property. As a result, for any combinatorial cost-sharing game, an upper bound on the budget balance factor of cross-monotonic cost-sharing schemes can be obtained using Theorem 15.8 and the integrality gap examples of the corresponding LP. As we will see in this section, for many combinatorial optimization games, the cross-monotonicity property is strictly stronger than the core property, and better upper bounds on the budget balance factor of such games can be obtained using a technique based on the probabilistic method.
The high-level idea of this technique is as follows: Fix any cross-monotonic cost-sharing scheme. We explicitly construct an instance of the game and look at the cost-sharing scheme on various subsets of this instance. We need to argue that there is a subset S of agents such that the total cost share of the elements of S is small compared to the cost of S. This is done using the probabilistic method: we pick a subset S at random from a certain distribution and show that in expectation, the ratio of the recovered cost to the cost of S is low. Therefore, there is a manifestation of S for which this ratio is low. To bound the expected value of the sum of cost shares of the elements of S, we use cross-monotonicity and bound the cost share of each agent i ∈ S by the cost share of i in a substructure Ti of S. Bounding the expected cost share of i in Ti is done by showing that for every substructure T, every i ∈ T has the same probability of occurring in a structure S in which Ti = T. This implies that the expected cost share of i in Ti (where the expectation is over the choice of S) is at most the cost of Ti divided by the number of agents in Ti. Summing up these values for all i gives us the desired bound.
In the following, we show how this technique can be applied to the facility location problem to show that the factor 1/3 obtained in the previous section is the best possible.
We start by giving an example on which the algorithm FLCostShare in Figure 15.4 recovers only a third of the cost. This example will be used as the randomly chosen structure in our proof.
Lemma 15.22 Let I be an instance of the facility location problem consisting of m + k agents a1, . . . , am, a′1, . . . , a′k and m facilities f1, . . . , fm, each of opening cost 3. For every i and j, the connection costs between fi and ai and between fi and a′j are all 1, and other connection costs are obtained by the triangle inequality. See Figure 15.5a. Then if m = ω(k) and k tends to infinity, the optimal solution for I has cost 3m + o(m).

[Figure 15.5, not reproduced: panel (a) shows the instance I of Lemma 15.22 with facilities f1, . . . , fm and agents a1, . . . , am, a′1, . . . , a′k; panel (b) shows the sets A1, . . . , Ak used in the proof of Theorem 15.23.]
Figure 15.5. Facility location sample distribution.
proof The solution which opens just one facility, say f1, has cost 3m + k + 1 = 3m + o(m). We show that this solution is optimal. Consider any feasible solution that opens f facilities. The first opened facility can cover k + 1 agents with connection cost 1. Each additional facility can cover 1 additional client with connection cost 1. Thus, the number of agents with connection cost 1 is k + f. The remaining m − f agents have connection cost 3. Therefore, the cost of the solution is 3f + k + f + 3(m − f) = 3m + k + f. As f ≥ 1, this shows that any feasible solution costs at least as much as the solution we constructed.
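The optimal cost of the instance I can be confirmed by brute force for small parameters. The sketch below is illustrative only and not from the original text (it is exponential in m); it builds I for given m and k, with the unspecified distances set to their shortest-path values, and checks that the optimum equals 3m + k + 1.

from itertools import combinations

def opt_cost_I(m, k):
    """Brute-force optimum of the instance I of Lemma 15.22 (k >= 1)."""
    facilities = range(m)
    agents = [('a', i) for i in range(m)] + [('ap', j) for j in range(k)]
    def dist(fac, ag):
        # a'_j is at distance 1 from every facility; a_i is at distance 1 from
        # f_i and at shortest-path distance 3 (via f_i and some a'_j) otherwise.
        if ag[0] == 'ap':
            return 1
        return 1 if ag[1] == fac else 3
    best = float('inf')
    for r in range(1, m + 1):
        for opened in combinations(facilities, r):
            cost = 3 * r + sum(min(dist(i, a) for i in opened) for a in agents)
            best = min(best, cost)
    return best

for m, k in [(3, 1), (4, 2), (5, 2)]:
    print((m, k), opt_cost_I(m, k), 3 * m + k + 1)   # the last two values agree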
Theorem 15.23 Any cross-monotonic cost-sharing scheme for facility location is at most 1/3-budget-balanced.
proof Consider the following instance of the facility location problem. There are k sets A1, . . . , Ak of m agents each, where m = ω(k) and k = ω(1). For every subset B of agents containing exactly one agent from each Ai (|B ∩ Ai| = 1 for all i), there is a facility fB with connection cost 1 to each agent in B. The remaining connection costs are defined by extending the metric; that is, the cost of connecting an agent i to a facility fB for i ∉ B is 3. The facility opening costs are all 3.
We pick a random set S of agents in the above instance as follows: Pick a random i from {1, . . . , k}, and for every j ≠ i, pick an agent aj uniformly at random from Aj. Let T = {aj : j ≠ i} and S = Ai ∪ T. See Figure 15.5b for an example. It is easy to see that the set S induces an instance of the facility location problem almost identical to the instance I in Lemma 15.22 (the only difference is that here we have more facilities, but it is easy to see that the only relevant facilities are the ones that are present in I). Therefore, the cost of the optimal solution on S is 3m + o(m).
We show that for any cross-monotonic cost-sharing scheme ξ, the average recovered cost over the choice of S is at most m + o(m), and thus conclude that there is some S whose recovered cost is at most m + o(m). We start by bounding the expected total cost share using linearity of expectation and cross-monotonicity:
E_S[ Σ_{a∈S} ξ(a, S) ] = E[ Σ_{a∈Ai} ξ(a, S) ] + E[ Σ_{j≠i} ξ(aj, S) ]
                      ≤ E[ Σ_{a∈Ai} ξ(a, {a} ∪ T) ] + E[ Σ_{j≠i} ξ(aj, T) ].
Notice that the set T has a facility location solution of cost 3 + k − 1, and thus by the budget-balance condition the second term in the above expression is at most k + 2. The first term in the above expression can be written as m·E_{S,a}[ξ(a, {a} ∪ T)], where the expectation is over the random choice of S and the random choice of a from Ai. This is equivalent to the following random experiment: From each Aj, pick an agent aj uniformly at random. Then pick i from {1, . . . , k} uniformly at random and let a = ai and T = {aj : j ≠ i}. From this description it is clear that the expected value of ξ(a, {a} ∪ T) is equal to (1/k) Σ_{j=1}^{k} ξ(aj, {a1, . . . , ak}). This, by the budget-balance property and the fact that {a1, . . . , ak} has a solution of cost k + 3, cannot be more than (k + 3)/k. Therefore,
E_S[ Σ_{a∈S} ξ(a, S) ] ≤ m · (k + 3)/k + (k + 2) = m + o(m),   (15.6)
when m = ω(k) and k = ω(1). Therefore, the expected value of the ratio of recovered cost to total cost is at most 1/3 + o(1), and hence there is a choice of S for which this ratio is at most 1/3 + o(1).
15.6 The Shapley Value and the Nash Bargaining Solution
One of the problems with the notion of core in cost-sharing games is that it rarely assigns a unique cost allocation to a game: as illustrated in Example 15.4, the core of a game is often either empty (making it useless in deciding how the cost of a service should be shared among the agents), or contains more than one point (making it necessary to have a second criterion for choosing a cost allocation). In this section, we study a solution concept called the Shapley value that assigns a single cost allocation to any given cost-sharing game. We also discuss a solution concept known as the Nash bargaining solution for a somewhat different but related framework for surplus sharing.
In both cases, the solution concept can be uniquely characterized in terms of a few natural axioms it satisfies. These theorems are classical examples of the axiomatic approach in economic theory.
Both the Shapley value and the Nash bargaining solution are widely applicable concepts. For example, an application of the Shapley value in combination with the Moulin mechanism to multicasting is discussed elsewhere in this book (see Section 14.2.2). Also, the Nash solution is related to Kelly’s notion of proportional fairness discussed in Section 5.12, and the Eisenberg-Gale convex program of Section 6.2.