OPTIMAL PRICING IN MARKETS WITH NON-CONVEX COSTS
4.3 Proposed Scheme: Equilibrium-Constrained Pricing
4.3.2 An Efficient Approximation Algorithm
See the appendix for proofs.
Note that there were two potential alternatives to the two-stage optimization in (4.7) for picking a minimum-uplift solution. One could have attempted to enforce uniformity as a constraint. The problem with this is that imposing, for example, box constraints on the uplifts $u$ requires knowledge of reasonable upper bounds on the uplifts, which may not be available; on the other hand, insisting on exact uniformity makes the problem infeasible in most non-convex cases. The other alternative is to minimize a weighted combination of the two objectives in (4.6) and (4.7). In this case, the weighted objective becomes $\sum_{i=1}^{n}\big(p(q_i;\alpha,\beta_i) + \gamma u_i\big)$ for some appropriate constant $\gamma$, and it is not hard to show that the solution is the same as that of the proposed two-stage optimization.
We call a candidate solution $(q_1, \dots, q_n, \alpha, \beta_1, \dots, \beta_n)$ an $\epsilon$-approximate solution to the optimal pricing problem (4.5) if it satisfies

$$\sum_{i=1}^{n} q_i = d, \qquad \text{(Market Clearing)}$$

$$p(q_i;\alpha,\beta_i) - c_i(q_i) + \epsilon \;\ge\; 0, \quad i=1,\dots,n, \qquad (\epsilon\text{-Revenue Adequacy})$$

$$p(q_i;\alpha,\beta_i) - c_i(q_i) + \epsilon \;\ge\; p(q_i';\alpha,\beta_i) - c_i(q_i'), \quad \forall q_i' \ne q_i,\;\; i=1,\dots,n, \qquad (\epsilon\text{-Competitive Equilibrium})$$

and

$$\sum_{i=1}^{n} p(q_i;\alpha,\beta_i) \;\le\; p^* + n\epsilon. \qquad (\epsilon\text{-Economic Efficiency})$$

Given this notion of an approximate solution, we can move towards designing the algorithm. The optimization problem (4.5) looks highly coupled at first, since the constraints share many common variables. However, one can see that, for a fixed value of $\alpha$, the objective becomes additively separable in $(q_i, \beta_i)$. Furthermore, again for fixed $\alpha$, constraints (4.5c) and (4.5d) involve only the $i$-th variables $(q_i, \beta_i)$ for each $i$. Although the Market Clearing condition still couples the variables together, this observation allows us to reformulate (4.5) as
$$p^* \;=\; \min_{q_1,\dots,q_n,\;\alpha\in\mathcal{A}} \;\; \sum_{i=1}^{n} \bar{p}_i(q_i;\alpha) \qquad \text{(4.9a)}$$

$$\text{s.t.} \quad \sum_{i=1}^{n} q_i = d, \qquad \text{(4.9b)}$$
where

$$\bar{p}_i(q;\alpha) \;=\; \min_{\beta_i\in\mathcal{B}} \;\; p(q;\alpha,\beta_i) \qquad \text{(4.10a)}$$

$$\text{s.t.} \quad p(q;\alpha,\beta_i) - c_i(q) \;\ge\; 0, \qquad \text{(4.10b)}$$

$$\qquad\;\; p(q;\alpha,\beta_i) - c_i(q) \;\ge\; p(q';\alpha,\beta_i) - c_i(q'), \quad \forall q' \ne q, \qquad \text{(4.10c)}$$

for all $i=1,\dots,n$.
Figure 4.2: An example of the binary tree defined by Algorithm 1 for $n=8$. The faded circles correspond to the added dummy nodes.

Therefore, for any fixed value of $\alpha$ and $q_i$, the optimization over $\beta_i$ can be done individually, as in (4.10). What remains to be addressed, however, is the coupling of the variables through the Market Clearing constraint. One naive approach would be to simply try all possible choices of $(q_1, \dots, q_n)$ and pick the one with the minimum objective value, but this is very inefficient. Instead, we take a dynamic programming approach: we group pairs of variables together, defining a new variable as their parent. We then group the parents together, and continue this process until we reach the root, i.e., where there is only one node. During this procedure, at each new node $j$, we need to solve the following (small) problem
$$\bar{p}_j(q;\alpha) \;=\; \min_{q_r,\,q_l} \;\; \bar{p}_r(q_r;\alpha) + \bar{p}_l(q_l;\alpha) \quad \text{s.t.} \quad q_r + q_l = q, \qquad \text{(4.11)}$$

for every $j$, where $r$ and $l$ are the children of $j$. At the root of the tree, we will be able to compute $\bar{p}_{\mathrm{root}}(d;\alpha)$. Figure 4.2 shows an example of the binary tree created by this procedure for $n=8$. This procedure can be repeated for different values of $\alpha$, and the optimal value $p^*$ can be computed as $\min_{\alpha} \bar{p}_{\mathrm{root}}(d;\alpha)$.
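The combination step (4.11) takes a simple computational form once the children's value functions are tabulated on a common quantity grid (the discretization introduced next): it is a (min, +) convolution of the two tables. Below is a minimal sketch under the assumption of a uniform grid $0, \delta, 2\delta, \dots$, so that the $k$-th grid quantity splits as the $i$-th plus the $(k-i)$-th; the array-based representation and names are choices of this sketch, not the chapter's notation.

import numpy as np

def combine(pbar_r, pbar_l):
    """Recursion (4.11)/(4.13) on a uniform quantity grid: the parent's value
    at the k-th grid point is the cheapest way to split that quantity between
    the two children.  Also records the minimizing split for backtracking."""
    m = len(pbar_r)
    parent = np.full(m, np.inf)
    split = np.zeros(m, dtype=int)        # grid index assigned to child r
    for k in range(m):
        for i in range(k + 1):            # q_r = i*delta, q_l = (k - i)*delta
            val = pbar_r[i] + pbar_l[k - i]
            if val < parent[k]:
                parent[k], split[k] = val, i
    return parent, split

In this sketch, each call costs on the order of the grid size squared, and at most $n$ such calls are needed per value of $\alpha$.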
The problem with recursion (4.11) is that it requires an infinite-dimensional computation at every step, since the values of $\bar{p}_j(q;\alpha)$ need to be computed for every $q$. To get around this issue, we note that the variables $q_i$ live in the bounded set $[0, d]$, and hence can be discretized to lie in a finite set $D$, such that every possible $q_i$ is at most $\delta(\epsilon)$ away from some point in $D$. Similarly, if $\alpha$ and the $\beta_i$'s are continuous variables, we can discretize the bounded sets $\mathcal{A}$ and $\mathcal{B}$ into finite sets $\mathcal{A}'$ and $\mathcal{B}'$, such that every point in $\mathcal{A}$ (or $\mathcal{B}$) is at most $\delta(\epsilon)$ away, in the infinity-norm sense, from some point in $\mathcal{A}'$ (or $\mathcal{B}'$). See the appendix for details.
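For concreteness, the discretization could be carried out as follows; the interval bounds and the spacing delta are placeholder numbers here, since the appendix specifies how $\delta$ must depend on $\epsilon$ and the Lipschitz constants.

import numpy as np

def uniform_grid(lo, hi, delta):
    """Discretize [lo, hi] so that every point of the interval is within
    delta of some grid point."""
    num_points = int(np.ceil((hi - lo) / delta)) + 1
    return np.linspace(lo, hi, num_points)

# Example (illustrative numbers): quantity grid D on [0, d] and a 1-D grid
# standing in for the discretized parameter set A'.
d, delta = 10.0, 0.1
q_grid = uniform_grid(0.0, d, delta)
alpha_grid = uniform_grid(0.0, 5.0, delta)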
For finding an $\epsilon$-approximate solution, (4.10) is relaxed to

$$\bar{p}_i(q;\alpha) \;=\; \min_{\beta_i\in\mathcal{B}'} \;\; p(q;\alpha,\beta_i) \qquad \text{(4.12a)}$$

$$\text{s.t.} \quad p(q;\alpha,\beta_i) - c_i(q) + \epsilon \;\ge\; 0, \qquad \text{(4.12b)}$$

$$\qquad\;\; p(q;\alpha,\beta_i) - c_i(q) + \epsilon \;\ge\; p(q';\alpha,\beta_i) - c_i(q'), \quad \forall q' \ne q, \qquad \text{(4.12c)}$$

for all $i=1,\dots,n$, and (4.11) remains the same, except that the variables $(q_r, q_l)$ now take values in $D$, i.e.,
$$\bar{p}_j(q;\alpha) \;=\; \min_{q_r,\,q_l\in D} \;\; \bar{p}_r(q_r;\alpha) + \bar{p}_l(q_l;\alpha) \quad \text{s.t.} \quad q_r + q_l = q, \qquad \text{(4.13)}$$

for all $j > n$. We denote the optimizer of (4.12) by $b_i(q;\alpha)$, and the optimizer of (4.13), which is a pair of quantities $(q_r, q_l)$, by $x_j(q;\alpha)$. The full procedure is summarized in pseudocode in Algorithm 1.

Algorithm 1 Find an $\epsilon$-approximate solution to the optimal pricing problem (4.5)
1: Input: $d$, $c_1(\cdot), \dots, c_n(\cdot)$, $p(\cdot\,;\cdot)$, $\epsilon$
2: for $\alpha$ in $\mathcal{A}'$ do
3:   $J = 1\!:\!n$
4:   for $j$ in $J$ do  ▷ for the leaves
5:     compute $\bar{p}_j(q;\alpha)$ for all $q$ in $D$, using (4.12)
6:   end for
7:   while $|J| > 2$ do  ▷ while not reached the root
8:     $J_{\mathrm{new}} = J(\mathrm{end})+1 \,:\, J(\mathrm{end}) + \lceil |J|/2 \rceil$
9:     for $j$ in $J_{\mathrm{new}}$ do  ▷ for the intermediate nodes
10:      $[r, l]$ = indices of children of $j$
11:      if $r = \emptyset$ then $\bar{p}_j(\cdot\,;\alpha) = \bar{p}_l(\cdot\,;\alpha)$
12:      else compute $\bar{p}_j(q;\alpha)$ for all $q$ in $D$, using (4.13)  ▷ it has two children
13:      end if
14:    end for
15:    $J = J_{\mathrm{new}}$
16:  end while
17:  $[r, l] = J$
18:  compute $\bar{p}_{\mathrm{root}}(d;\alpha)$, using (4.13)  ▷ at the root
19: end for
20: $\alpha^* = \arg\min_{\alpha\in\mathcal{A}'} \bar{p}_{\mathrm{root}}(d;\alpha)$
21: $q^*_{\mathrm{root}} = d$
22: for $j = \mathrm{root} : -1 : n+1$ do
23:   $[q^*_r, q^*_l] = x_j(q^*_j;\alpha^*)$, where $[r, l]$ = indices of children of $j$
24: end for
25: for $i = n : -1 : 1$ do
26:   $\beta^*_i = b_i(q^*_i;\alpha^*)$
27: end for
28: return $(q^*_1, \dots, q^*_n, \alpha^*, \beta^*_1, \dots, \beta^*_n)$
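Lines 21-27 of Algorithm 1 form a backward pass over the same tree, reading off the stored minimizers. The sketch below illustrates that pass under an assumed container layout consistent with the earlier sketches: children[j] = (r, l) gives the child indices of internal node j (with r = None for a dummy node), split[j][k] stores the minimizer of (4.13) at node j when it carries the k-th grid quantity (the grid index assigned to child r), and b_opt[i][k] stores the minimizer $b_i$ of (4.12) for supplier i. These container names are assumptions of this sketch, not the chapter's notation.

def recover_solution(root, children, split, b_opt, n, q_grid, d_index):
    """Backward pass of Algorithm 1: starting from the root, which carries the
    full demand (grid index d_index), walk down the tree assigning to each
    child the quantity recorded by the minimizer of (4.13); at the leaves,
    read off each supplier's price parameter from the minimizer of (4.12)."""
    q_index = {root: d_index}            # node -> index of the quantity it carries
    for j in range(root, n, -1):         # internal nodes, processed top-down
        r, l = children[j]
        k = q_index[j]
        if r is None:                    # dummy node: pass the quantity through
            q_index[l] = k
        else:
            q_index[r] = split[j][k]
            q_index[l] = k - split[j][k]
    q_star = [q_grid[q_index[i]] for i in range(1, n + 1)]       # leaves 1..n
    beta_star = [b_opt[i][q_index[i]] for i in range(1, n + 1)]
    return q_star, beta_star

Together with the leaf and combination sketches above, this covers essentially all of Algorithm 1 apart from the outer loop over $\alpha \in \mathcal{A}'$ and the final argmin.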
While not immediately clear, the proposed approximation algorithm can be shown to run in time that is polynomial in both $n$ and $1/\epsilon$ (in fact, linear in $n$). Further, the solution it provides is $\epsilon$-accurate under a mild smoothness assumption on the cost and price functions, which holds true for almost any function considered in the literature. These two results are summarized in the following theorem, which is proven in the appendix.
Theorem 20. Consider $c_i(\cdot)$ and $p(\cdot\,;\cdot)$ that have at most a finite number of discontinuities and are Lipschitz on each continuous piece of their domain. Algorithm 1 finds an $\epsilon$-approximate solution to the optimal pricing problem (4.5) with running time $dn\,(1/\epsilon)^{k_1+k_2+2}$, where $n$ is the number of suppliers, and $k_1$ and $k_2$ are the number of shared and individual parameters in the price, respectively.
It is worth emphasizing that while there are $k_1 + nk_2$ variables in the price functions in total, the parameters $k_1$ and $k_2$ do not scale with $n$, and are typically very small constants. For example, for the so-called linear-plus-uplift price functions, $k_1 = k_2 = 1$.
Therefore, the algorithm is very efficient.
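As an illustration of how $k_1$ and $k_2$ are counted, one price family with a single shared and a single individual parameter is a uniform per-unit price plus a supplier-specific lump-sum uplift paid when the supplier is dispatched. The snippet below is only a schematic stand-in; the exact linear-plus-uplift form used in this chapter is the one defined earlier in the text.

def price(q, alpha, beta_i):
    """Schematic price with k1 = 1 shared parameter (the per-unit price alpha)
    and k2 = 1 individual parameter (the uplift beta_i, paid only if the
    supplier is dispatched).  A stand-in, not the chapter's exact definition."""
    return alpha * q + (beta_i if q > 0 else 0.0)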
We should also remark that if one requires the total payment in Definition 1 to be at most $\epsilon$ (rather than $n\epsilon$) away from the optimal $p^*$, the running time of our algorithm will still be polynomial in both $n$ and $1/\epsilon$, i.e., $dn^3(1/\epsilon)^{k_1+k_2+2}$. See the appendix for details.