OPTIMAL PRICING IN MARKETS WITH NON-CONVEX COSTS
4.7 Concluding Remarks
Figure 4.7: An example of two connected markets with a constraint on the flow capacity. (a) Total payments as a function of flow capacity. (b) Total uplifts as a function of flow capacity. (c) Flow between the two markets as a function of flow capacity.
of distributed optimization algorithms, but such approaches have not been possible in non-convex settings due to the lack of pricing rules with the desirable properties laid out in Section 4.2.2. It is now possible to explore whether EC prices can be used as the basis for distributed optimization algorithms in the non-convex setting.
4.A Supplement to Section 4.3.1
In this section, we formally prove the reduction of the optimization problem (4.5) to (4.6) for the class of linear-plus-uplift functions, and then prove Propositions 18 and 19.
4.A.1 Reduction
Here, we show that for the class of linear-plus-uplift price functions \(p(q_i;\lambda,u_i,\hat{q}_i)=\lambda q_i+u_i 1_{q_i=\hat{q}_i}\), one can assume \(\hat{q}_i=q_i^*\) without loss of generality, and therefore the optimization problem (4.5) reduces to (4.6) for this class. The optimization problem (4.5) for the price function \(p(q_i;\lambda,u_i,\hat{q}_i)=\lambda q_i+u_i 1_{q_i=\hat{q}_i}\), with \(\lambda, u_1,\dots,u_n \ge 0\), is as follows:
\[
\begin{aligned}
p^*_{\text{uplift}} = \min_{\substack{q_1,\dots,q_n \\ \lambda \ge 0,\ u_1,\dots,u_n \ge 0 \\ \hat{q}_1,\dots,\hat{q}_n}} \quad & \sum_{i=1}^{n} \left( \lambda q_i + u_i 1_{q_i=\hat{q}_i} \right) && (4.25\text{a}) \\
\text{s.t.} \quad & \sum_{i=1}^{n} q_i = d && (4.25\text{b}) \\
& \lambda q_i + u_i 1_{q_i=\hat{q}_i} - c_i(q_i) \ge 0, \qquad i=1,\dots,n && (4.25\text{c}) \\
& \lambda q_i + u_i 1_{q_i=\hat{q}_i} - c_i(q_i) \ge \max_{q_i' \ne q_i}\ \lambda q_i' + u_i 1_{q_i'=\hat{q}_i} - c_i(q_i'), \qquad i=1,\dots,n && (4.25\text{d})
\end{aligned}
\]
The following lemma shows that this optimization problem can be reduced to (4.6), and that the optimal uplifts of (4.6) are no larger than those of (4.25).
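To make the structure of (4.25) concrete, the following sketch evaluates a linear-plus-uplift price and checks the two constraints (4.25c) and (4.25d) for a single supplier on a discrete quantity grid. The quadratic-free startup cost, the grid, and all numbers are hypothetical stand-ins, not from the text:

```python
# Sketch: linear-plus-uplift price p(q; lam, u, q_hat) = lam*q + u*1{q == q_hat},
# and the two constraints of (4.25) for one supplier, on a discrete grid.
# The cost function and all numbers are illustrative.

def price(q, lam, u, q_hat):
    return lam * q + (u if q == q_hat else 0.0)

def cost(q):
    # hypothetical non-convex cost: startup cost 2.0 plus unit marginal cost
    return 0.0 if q == 0 else 2.0 + 1.0 * q

grid = [i * 0.5 for i in range(9)]  # quantities 0, 0.5, ..., 4.0

def satisfies_constraints(q, lam, u, q_hat):
    profit = price(q, lam, u, q_hat) - cost(q)
    # (4.25c): individual rationality (non-negative profit at q)
    ir = profit >= 0
    # (4.25d): q maximizes the supplier's profit over all other quantities
    ic = all(profit >= price(qp, lam, u, q_hat) - cost(qp)
             for qp in grid if qp != q)
    return ir and ic

# With lam = 1.0 the supplier loses the startup cost at q = 4 unless an
# uplift u = 2.0 is paid at the target quantity q_hat = 4:
assert not satisfies_constraints(4.0, 1.0, 0.0, 4.0)
assert satisfies_constraints(4.0, 1.0, 2.0, 4.0)
```

The uplift is paid only at the single quantity \(\hat{q}_i\), which is why the reduction below can move \(\hat{q}_i\) onto \(q_i^*\) without changing the payment.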
Lemma 22. Given any solution \((\mathbf{q}^*, \lambda^*, \mathbf{u}^*, \hat{\mathbf{q}}^*)\) to the optimization problem (4.25), \((\mathbf{q}^*, \lambda^*, \bar{\mathbf{u}}, \mathbf{q}^*)\) is also a solution, where
\[
\bar{u}_i = \begin{cases} u_i^*, & \text{if } \hat{q}_i^* = q_i^* \\ 0, & \text{otherwise.} \end{cases}
\]
Proof of Lemma 22. Let us first show the feasibility of \((\mathbf{q}^*, \lambda^*, \bar{\mathbf{u}}, \mathbf{q}^*)\). For any \(i\) with \(\hat{q}_i^* = q_i^*\), the constraints are unchanged, since \(\bar{u}_i = u_i^*\). For any \(i\) such that \(\hat{q}_i^* \ne q_i^*\), we have that
\[
\begin{aligned}
\lambda^* q_i^* - c_i(q_i^*) &\ge 0 \\
\lambda^* q_i^* - c_i(q_i^*) &\ge \max_{q_i' \ne q_i^*}\ \lambda^* q_i' + u_i^* 1_{q_i'=\hat{q}_i^*} - c_i(q_i') \ \ge\ \max_{q_i' \ne q_i^*}\ \lambda^* q_i' - c_i(q_i'),
\end{aligned}
\]
which implies
\[
\begin{aligned}
\lambda^* q_i^* + \bar{u}_i 1_{q_i^*=q_i^*} - c_i(q_i^*) &\ge 0 \\
\lambda^* q_i^* + \bar{u}_i 1_{q_i^*=q_i^*} - c_i(q_i^*) &\ge \max_{q_i' \ne q_i^*}\ \lambda^* q_i' + \bar{u}_i 1_{q_i'=q_i^*} - c_i(q_i'),
\end{aligned}
\]
because \(\bar{u}_i = 0\). Therefore \((\mathbf{q}^*, \lambda^*, \bar{\mathbf{u}}, \mathbf{q}^*)\) is feasible.
The objective value of \((\mathbf{q}^*, \lambda^*, \bar{\mathbf{u}}, \mathbf{q}^*)\) is
\[
\sum_{i=1}^{n} (\lambda^* q_i^* + \bar{u}_i) \;=\; \sum_{i:\, \hat{q}_i^* = q_i^*} (\lambda^* q_i^* + u_i^*) \;+\; \sum_{i:\, \hat{q}_i^* \ne q_i^*} \lambda^* q_i^* \;=\; \sum_{i=1}^{n} \left( \lambda^* q_i^* + u_i^* 1_{q_i^*=\hat{q}_i^*} \right),
\]
which is the same as that of \((\mathbf{q}^*, \lambda^*, \mathbf{u}^*, \hat{\mathbf{q}}^*)\), and is therefore optimal. □
Based on this lemma, the optimization problem (4.25) can be reduced to (4.6).
4.A.2 Closed-Form Solutions
Proof of Proposition 18. In the optimization problem (4.6), the order of the variables in the minimizations does not matter, and further, for every fixed \(q_1,\dots,q_n\) and \(\lambda\), the minimization over each \(u_i\) can be done separately. Therefore, this program can be massaged into the following form:
\[
\begin{aligned}
p^*_{\text{uplift}} = \min_{q_1,\dots,q_n}\ \min_{\lambda \ge 0}\ & \sum_{i=1}^{n} \bar{p}_i(q_i;\lambda) && (4.26\text{a}) \\
\text{s.t.} \quad & \sum_{i=1}^{n} q_i = d, && (4.26\text{b})
\end{aligned}
\]
where
\[
\begin{aligned}
\bar{p}_i(q_i;\lambda) = \min_{u_i \ge 0}\ & \lambda q_i + u_i && (4.27\text{a}) \\
\text{s.t.} \quad & \lambda q_i + u_i - c_i(q_i) \ge 0, && (4.27\text{b}) \\
& \lambda q_i + u_i - c_i(q_i) \ge \max_{q_i' \ne q_i}\ \lambda q_i' - c_i(q_i'), && (4.27\text{c})
\end{aligned}
\]
for all \(i=1,\dots,n\). Constraints (4.27b) and (4.27c) can be expressed as
\[
\lambda q_i + u_i \ge c_i(q_i), \qquad \lambda q_i + u_i \ge c_i(q_i) + \max_{q_i' \ne q_i}\ \lambda q_i' - c_i(q_i').
\]
It follows that
\[
\bar{p}_i(q_i;\lambda) = \lambda q_i + u_i^* = c_i(q_i) + \max\Big\{0,\ \max_{q_i' \ne q_i}\ \lambda q_i' - c_i(q_i')\Big\},
\]
which is, of course, a function of \(\lambda\) and \(q_i\). Therefore, we have
\[
\min_{\lambda \ge 0} \sum_{i=1}^{n} \bar{p}_i(q_i;\lambda) = \sum_{i=1}^{n} c_i(q_i),
\]
and the minimizers \(\lambda^*\) are all values \(\lambda\) for which \(\max_{q_i' \ne q_i} \lambda q_i' - c_i(q_i') \le 0\), which are exactly the elements of \(\Lambda = \{\lambda \ge 0 \mid \lambda q \le c_i(q),\ \forall q, \forall i\}\) (Figure 4.1 provides a pictorial description of these values). Finally, we have the last minimization, which is
\[
\begin{aligned}
\min_{q_1,\dots,q_n}\ & \sum_{i=1}^{n} c_i(q_i) && (4.28\text{a}) \\
\text{s.t.} \quad & \sum_{i=1}^{n} q_i = d, && (4.28\text{b})
\end{aligned}
\]
and therefore has \(q_i^* = q_i^\circ,\ \forall i\), as its optimizer. We also have \(u_i^* = c_i(q_i^*) - \lambda^* q_i^*,\ \forall i\). □

Proof of Proposition 19. The steps of the proof are exactly the same as in the previous one, except that the additional minimizer picks the \(\lambda\) with the smallest total uplift \(\sum_{i=1}^{n} u_i(\lambda)\), which corresponds to the largest element of \(\Lambda\). □
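On a discrete grid, the closed-form recipe of Propositions 18 and 19 can be sketched as follows. The two cost functions, the demand, and the per-supplier capacity (added here only so that the uplifts come out strictly positive) are all hypothetical; the grid searches stand in for the continuous minimizations:

```python
# Sketch of the closed form: q* minimizes total cost subject to market clearing,
# lambda* is the largest element of Lambda = {lam >= 0 : lam*q <= c_i(q), all q, i},
# and u_i* = c_i(q_i*) - lambda* * q_i*.  Costs, demand, capacity are illustrative.
from itertools import product

def c1(q):  # hypothetical supplier 1: startup 1.0, marginal 1.0
    return 0.0 if q == 0 else 1.0 + 1.0 * q

def c2(q):  # hypothetical supplier 2: startup 4.0, marginal 0.5
    return 0.0 if q == 0 else 4.0 + 0.5 * q

costs = [c1, c2]
d = 3                        # demand
cap = 2                      # illustrative per-supplier capacity
grid = range(0, cap + 1)

# Efficient dispatch q*: minimize c1(q1) + c2(q2) subject to q1 + q2 = d.
q_star = min((pair for pair in product(grid, repeat=2) if sum(pair) == d),
             key=lambda pair: c1(pair[0]) + c2(pair[1]))

# Largest lambda with lam*q <= c_i(q) for every positive q and i (Prop. 19).
lam_star = min(c(q) / q for c in costs for q in grid if q > 0)

# Uplifts u_i* = c_i(q_i*) - lam* q_i* make every committed supplier whole.
u_star = [costs[i](q_star[i]) - lam_star * q_star[i] for i in range(2)]
```

Here `q_star == (1, 2)`, `lam_star == 1.5` (set by supplier 1's average cost at capacity), and both uplifts are positive because the startup costs are not covered by the linear price alone.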
4.B Supplement to Section 4.3.2
In this section, we prove Theorem 20 in two parts. First, we show that there exist finite sets \(Q, \mathcal{A}', \mathcal{B}'\) for which Algorithm 1 finds an \(\epsilon\)-approximate solution, and we quantify the sizes of these sets as a function of \(\epsilon\). In the second part, we analyze the running time of Algorithm 1.
4.B.1 \(\epsilon\)-Accuracy
Let us first state a simple but useful lemma.
Lemma 23 (\(\delta\)-discretization). Given a set \(\mathcal{C} \subseteq [l_1, b_1] \times \cdots \times [l_n, b_n]\), for any \(\delta > 0\), there exists a finite set \(\mathcal{C}'\) such that
\[
\forall z \in \mathcal{C},\ \exists z' \in \mathcal{C}' \ \text{s.t.}\ \|z - z'\|_\infty \le \delta,
\]
and further, \(\mathcal{C}'\) contains at most \(V/\delta^n\) points, where \(V = \prod_{i=1}^{n}(b_i - l_i)\) is a constant (the volume of the box). \(\mathcal{C}'\) is said to be a \(\delta\)-discretization of \(\mathcal{C}\).
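A minimal sketch of such a \(\delta\)-discretization for a two-dimensional box (the bounds and \(\delta\) below are illustrative) builds an axis-aligned grid and spot-checks the sup-norm covering property:

```python
# Sketch: build a delta-grid of a box [l_1,b_1] x ... x [l_n,b_n] and spot-check
# the covering property of Lemma 23; the grid has on the order of V/delta^n points.
from itertools import product
import math
import random

def discretize(lows, highs, delta):
    """Axis-aligned grid with spacing delta (endpoints included)."""
    axes = []
    for l, b in zip(lows, highs):
        steps = math.ceil((b - l) / delta)
        axes.append([min(l + k * delta, b) for k in range(steps + 1)])
    return list(product(*axes))

lows, highs, delta = (0.0, -1.0), (2.0, 1.0), 0.25
grid = discretize(lows, highs, delta)

# Covering: every point of the box is within delta (sup-norm) of a grid point.
random.seed(0)
for _ in range(100):
    z = [random.uniform(l, b) for l, b in zip(lows, highs)]
    dist = min(max(abs(zi - gi) for zi, gi in zip(z, g)) for g in grid)
    assert dist <= delta
```

With grid spacing \(\delta\), the nearest grid point is in fact within \(\delta/2\) per coordinate, so the sup-norm condition holds with room to spare.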
Let \(Q\), \(\mathcal{A}'\), and \(\mathcal{B}'\) denote some \(\delta\)-discretizations of the sets \([0, d]\), \(\mathcal{A}\), and \(\mathcal{B}\), respectively. In other words, for every \(q \in [0,d]\), \(\alpha \in \mathcal{A}\), and \(\beta \in \mathcal{B}\), there exist \(q' \in Q\), \(\alpha' \in \mathcal{A}'\), and \(\beta' \in \mathcal{B}'\), such that \(|q - q'| \le \delta\), \(\|\alpha - \alpha'\|_\infty \le \delta\), and \(\|\beta - \beta'\|_\infty \le \delta\). We can combine all these inequalities as
\[
\|(q, \alpha, \beta) - (q', \alpha', \beta')\|_\infty \le \delta.
\]
On the other hand, given that the cost function \(c_i(\cdot)\) for each \(i\) is Lipschitz on each continuous piece of its domain, there exists a positive constant \(L_c\) such that \(|c_i(q) - c_i(q')| \le L_c |q - q'|\), which implies
\[
|c_i(q) - c_i(q')| \le L_c \delta. \qquad (4.29)
\]
Similarly, Lipschitz continuity of \(p(\cdot;\cdot)\) implies the existence of a positive constant \(L\) such that \(|p(q,\alpha,\beta) - p(q',\alpha',\beta')| \le L \|(q,\alpha,\beta) - (q',\alpha',\beta')\|_\infty\), which yields
\[
|p(q,\alpha,\beta) - p(q',\alpha',\beta')| \le L\delta. \qquad (4.30)
\]
Using Eqs. (4.29) and (4.30), we can see that for any solution \(q_1^*,\dots,q_n^*, \alpha^*, \beta_1^*,\dots,\beta_n^*\) to optimization (4.5), there exists a point \(q_1,\dots,q_n, \alpha, \beta_1,\dots,\beta_n\) with \(q_1,\dots,q_n \in Q\), \(\alpha \in \mathcal{A}'\), and \(\beta_1,\dots,\beta_n \in \mathcal{B}'\), for which constraints (4.5c) and (4.5d) are violated at most by \((L+L_c)\delta\) and \((2L+2L_c)\delta\), respectively, and the objective is larger than \(p^*\) at most by \(nL\delta\). As a result, this point will be an \(\epsilon\)-approximate solution if
\[
\begin{aligned}
(L+L_c)\delta &\le \epsilon \bar{c}, && (4.31) \\
2(L+L_c)\delta &\le \epsilon \bar{c}, && (4.32) \\
nL\delta &\le \epsilon d. && (4.33)
\end{aligned}
\]
These constraints altogether enforce an upper bound on the value of \(\delta\) as \(\delta \le C\epsilon\), for some constant \(C\). Therefore, if we pick
\[
\delta = \frac{d}{\left\lceil \frac{d}{C\epsilon} \right\rceil}, \qquad (4.34)
\]
our algorithm is guaranteed to encounter an \(\epsilon\)-approximate solution while enumerating the points, and \(Q = \{0, \delta, 2\delta, \dots, d\}\) is a valid \(\delta\)-discretization of \([0,d]\), which has \(N_q = \lceil \frac{d}{C\epsilon} \rceil + 1 = O(\frac{1}{\epsilon})\) points. The nice thing about this particular choice of \(\delta\) is that now \(d\) can be written as a sum of \(n\) elements in \(Q\) (because all the elements, including \(d\), are multiples of \(\delta\)), which allows us to satisfy the Market Clearing condition exactly. Based on Lemma 23, \(\mathcal{A}'\) and \(\mathcal{B}'\) contain \(N_\alpha = O(\frac{1}{\delta^{n_1}}) = O(\frac{1}{\epsilon^{n_1}})\) and \(N_\beta = O(\frac{1}{\delta^{n_2}}) = O(\frac{1}{\epsilon^{n_2}})\) points.
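The choice (4.34) can be sketched numerically; the values of \(d\), \(C\), and \(\epsilon\) below are illustrative stand-ins for the constants in the bounds:

```python
# Sketch of Eq. (4.34): pick delta = d / ceil(d / (C*eps)) so that delta <= C*eps
# while d is an exact integer multiple of delta, letting the market-clearing
# condition hold exactly on the grid Q = {0, delta, ..., d}.  Numbers illustrative.
import math

d, C, eps = 10.0, 0.8, 0.1          # demand, constant from the bounds, accuracy
delta = d / math.ceil(d / (C * eps))

assert delta <= C * eps             # the bound delta <= C*eps still holds
N_q = math.ceil(d / (C * eps)) + 1  # |Q| = O(1/eps) grid points
Q = [k * delta for k in range(N_q)]
assert abs(Q[-1] - d) < 1e-9        # the grid ends exactly at d
```

Rounding \(\delta\) down to an exact divisor of \(d\) only shrinks the grid spacing, so the accuracy conditions (4.31)–(4.33) are preserved.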
Finally, if there are any discontinuities in the cost or price functions, we can simply add them to our discrete sets \(Q\), \(\mathcal{A}'\), and \(\mathcal{B}'\), and since there are at most a finite number of them, the sizes of the sets remain of the same order, i.e., \(N_q = O(\frac{1}{\epsilon})\), \(N_\alpha = O(\frac{1}{\epsilon^{n_1}})\), and \(N_\beta = O(\frac{1}{\epsilon^{n_2}})\). Next, we calculate the time complexity of Algorithm 1 running on these discrete sets.
4.B.2 Run-Time Analysis
In this section, we show that Algorithm 1 has a time complexity of \(O\big(n (\frac{1}{\epsilon})^{n_1+n_2+2}\big)\). For every fixed \(\alpha\), we have the following computations:
1. The leaves: We need to compute \(\bar{p}_i(q;\alpha)\) for every \(i\) and every \(q \in Q\). Computing each \(\bar{p}_i(q;\alpha)\) (i.e., for fixed \(i\), \(q\), \(\alpha\)) takes \(O(N_\beta N_q)\). The reason is that we have to search over all \(\beta_i \in \mathcal{B}'\), and for each one there are \(N_q + 1\) constraints to check. More explicitly, we need to (a) check \(O(N_\beta N_q)\) constraints, (b) compute \(N_\beta\) objectives, and (c) find the minimum among those \(N_\beta\) values. All these steps together take \(O(N_\beta N_q)\), and repeating for every \(i\) and \(q\) makes it \(O(n N_\beta N_q^2)\).
2. The intermediate nodes: In each new level, there are at most half as many (+1) nodes as in the previous level. For each node \(k\) in this level, we need to compute \(\bar{p}_k(q;\alpha)\) for every \(q \in Q\). For every fixed \(q\), there are \(O(N_q)\) possible pairs of \((q_l, q_r)\) that add up to \(q\), and therefore we need to (a) sum \(O(N_q)\) pairs of objective values, and (b) find the minimum among them, which takes \(O(N_q)\). Hence, the computation for each node takes \(O(N_q^2)\). There are \(O(\frac{n}{2} + \frac{n}{4} + \cdots + 2) = O(n)\) intermediate nodes in total, and therefore the total complexity of this part is \(O(n N_q^2)\).
3. The root: Finally, at the root, we need to compute \(\bar{p}_{\text{root}}(d;\alpha)\). There are \(N_q\) possible pairs of \((q_l, q_r)\) that add up to \(d\). Therefore, we need to compute \(N_q\) sums and find the minimum among the resulting \(N_q\) values, which takes \(O(N_q)\).
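The combination step in items 1–3 can be sketched as the forward pass of a dynamic program over a binary tree, for one fixed \(\alpha\). The integer grid and the leaf tables below are hypothetical; this is a structural sketch in the style of Algorithm 1, not the algorithm itself:

```python
# Sketch: forward pass of a tree DP for one fixed alpha.  Each node stores a
# table over the quantity grid {0, ..., d}; an internal node combines its two
# children by minimizing over all splits (q_l, q_r) with q_l + q_r = q, which
# costs O(N_q) per entry and hence O(N_q^2) per node.
d = 4
grid = range(d + 1)

def combine(left, right):
    """Parent table: parent[q] = min over q_l + q_r = q of left[q_l] + right[q_r]."""
    return [min(left[ql] + right[q - ql] for ql in range(q + 1))
            for q in grid]

# Hypothetical leaf tables p_bar_i(q; alpha) for three suppliers on the grid.
leaf1 = [0, 3, 4, 5, 6]
leaf2 = [0, 2, 4, 6, 8]
leaf3 = [0, 5, 6, 7, 8]

# Binary tree: (leaf1, leaf2) share an intermediate node, which pairs with leaf3.
inner = combine(leaf1, leaf2)
root = combine(inner, leaf3)
total_payment = root[d]   # minimal total payment meeting demand d at this alpha
```

Repeating this pass for every \(\alpha \in \mathcal{A}'\) and keeping the best root value mirrors the outer enumeration in the count below.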
Putting the pieces together, the computation for all values of \(\alpha\) takes \(N_\alpha \times \big( O(n N_\beta N_q^2) + O(n N_q^2) + O(N_q) \big)\), which in turn is \(O(n N_\alpha N_\beta N_q^2)\). Finally, finding the minimum among the \(N_\alpha\) values simply takes \(O(N_\alpha)\).
The backward procedure, which finds the quantities \(q_i\) and the parameters \(\beta_i\), takes just \(O(n)\), since it is just a substitution for every node. As a result, the total running time is \(O(n N_\alpha N_\beta N_q^2)\), which based on the first part (Section 4.B.1) is \(O\big(n (\frac{1}{\epsilon})^{n_1+n_2+2}\big)\).
4.B.3 Remark on the \(\epsilon\)-Approximation
As mentioned at the end of Section 4.3.2, if one requires the total payment in Definition 1 to be at most \(\epsilon\) (rather than \(\epsilon d\)) away from the optimal \(p^*\), the running time of our algorithm will still be polynomial in both \(n\) and \(1/\epsilon\), i.e., \(O\big(n^3 (\frac{1}{\epsilon})^{n_1+n_2+2}\big)\). To see that, notice that in this case (4.31) and (4.32) remain the same, and (4.33) changes to \(nL\delta \le \epsilon\). Therefore, the upper bound enforced by the constraints will be \(\delta \le \frac{C\epsilon}{n}\), for some constant \(C\). In this case, our choice of \(\delta\) would be \(\delta = d / \lceil \frac{dn}{C\epsilon} \rceil\), and hence \(N_q = O(\frac{n}{\epsilon})\). \(N_\alpha\) and \(N_\beta\) remain the same as before. The running time is \(O(n N_\alpha N_\beta N_q^2)\), as computed previously, which in this case would be \(O\big(n^3 (\frac{1}{\epsilon})^{n_1+n_2+2}\big)\).
Figure 4.8: The transformation of an arbitrary-degree tree to a binary tree.
4.C Supplement to Section 4.4
In this section, we first show the transformation of the problem on a tree to one on a binary tree, and then prove Theorem 21.
4.C.1 Transformation into Binary Tree
Lemma 24. Given any tree with \(n\) nodes (suppliers), there exists a binary tree with additional nodes which has the same solution \((q_1^*, \dots, q_n^*, \alpha^*, \beta_1^*, \dots, \beta_n^*)\) for those nodes as the original network. The binary tree has \(O(n)\) nodes.
Proof. Take any node \(i\) that has \(n_i > 2\) children. For any two children, introduce a dummy parent node. For any two dummy parent nodes, introduce a new level of dummy parent nodes. Continue this process until there are two or fewer nodes in the uppermost layer, and then connect them to node \(i\) (see Fig. 4.8). The capacities of the lines immediately connected to the children are the same as those in the original graph. The capacities of the new lines are infinite.
The total number of dummy nodes introduced by this procedure is \(O(\frac{n_i}{2} + \frac{n_i}{4} + \cdots + 2) = O(n_i)\). Since there are \(1 + n_1 + n_2 + \cdots + n_n = n\) nodes in total in the original tree, the number of introduced additional nodes is \(O(n_1 + \cdots + n_n) = O(n)\). Therefore, the total number of nodes in the new (binary) tree is \(O(n)\). □
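The pairing construction in the proof of Lemma 24 can be sketched as follows, with nodes as nested dicts. The representation and the names (`"dummy"` ids, `to_binary`) are hypothetical conveniences; capacities are omitted since the dummy lines are uncapacitated:

```python
# Sketch of Lemma 24's construction: repeatedly pair up the children of any
# node with more than two children under dummy parent nodes, until at most
# two subtrees remain to attach to the original node.

def to_binary(node):
    """node = {'id': ..., 'children': [...]}; returns an equivalent binary tree."""
    children = [to_binary(c) for c in node.get("children", [])]
    while len(children) > 2:
        paired = []
        # pair children two at a time under new dummy parents
        for i in range(0, len(children) - 1, 2):
            paired.append({"id": "dummy", "children": children[i:i + 2]})
        if len(children) % 2 == 1:       # odd child out moves up unpaired
            paired.append(children[-1])
        children = paired
    return {"id": node["id"], "children": children}

def max_degree(node):
    kids = node.get("children", [])
    return max([len(kids)] + [max_degree(c) for c in kids]) if kids else 0

# A node with 5 children becomes a binary tree containing the same leaves.
tree = {"id": "r", "children": [{"id": k, "children": []} for k in "abcde"]}
binary = to_binary(tree)
assert max_degree(binary) <= 2
```

Each pairing level at most halves the number of subtrees, matching the \(O(n_i/2 + n_i/4 + \cdots) = O(n_i)\) count of dummy nodes in the proof.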
4.C.2 Proof of Theorem 21
Most of the proof is similar to the one presented in Section 4.B. For this reason, we only highlight the main points. The proof consists of \(\epsilon\)-accuracy and run-time parts, as before.
4.C.2.1 \(\epsilon\)-Accuracy
Let \(Q_1, \dots, Q_n\), \(F_1, \dots, F_n\), \(\mathcal{A}'\), \(\mathcal{B}'\) denote some \(\delta\)-discretizations of the sets \([0,\ d_1 + \bar{f}_{\mathrm{ch}_1(1)} + \bar{f}_{\mathrm{ch}_2(1)} - \underline{f}_1], \dots, [0,\ d_n + \bar{f}_{\mathrm{ch}_1(n)} + \bar{f}_{\mathrm{ch}_2(n)} - \underline{f}_n]\), \([\underline{f}_1, \bar{f}_1], \dots, [\underline{f}_n, \bar{f}_n]\), \(\mathcal{A}\), and \(\mathcal{B}\), respectively. Note that if any line capacities are infinite, the corresponding intervals can be replaced by \([0, \sum_{i=1}^{n} d_i]\) instead. As in Section 4.B, the constraints enforce an upper bound on the value of \(\delta\) as \(\delta \le C\epsilon\), for some constant \(C\). Based on Lemma 23, the sizes of the sets will be \(N_{Q_i} = O(\frac{1}{\epsilon})\ \forall i\), \(N_{F_i} = O(\frac{1}{\epsilon})\ \forall i\), \(N_\alpha = O(\frac{1}{\epsilon^{n_1}})\), and \(N_\beta = O(\frac{1}{\epsilon^{n_2}})\).
4.C.2.2 Run-Time Analysis
For every fixed \(\alpha\), the run-time of the required computations is as follows.
1. The time complexity of computing \(\bar{p}_i(q_i;\alpha)\) for each node \(i\) and each fixed value of \(q_i\) is \(O(N_\beta N_{Q_i})\). Therefore, computing it for all nodes and all values takes \(O(n N_\beta N_Q^2)\).
2. Computing \(h_i(q_i;\alpha)\) for each node \(i\) and each fixed value of \(q_i\) takes \(O(N_Q^2)\), because there are \(O(N_Q) \times O(N_Q)\) pairs of values for \((q_{\mathrm{ch}_1(i)}, q_{\mathrm{ch}_2(i)})\) (\(f_i\) is automatically determined as the closest point in \(F_i\) to \(q_i + f_{\mathrm{ch}_1(i)} + f_{\mathrm{ch}_2(i)} - d_i\)). Therefore, its overall computation for all nodes and all values takes \(O(n N_Q^3)\).
As a result, the overall computation takes \(N_\alpha \times \big( O(n N_\beta N_Q^2) + O(n N_Q^3) \big)\), which is \(O\big(n (\frac{1}{\epsilon})^{n_1+n_2+2}\big) + O\big(n (\frac{1}{\epsilon})^{n_1+3}\big)\), or equivalently \(O\big(n (\frac{1}{\epsilon})^{n_1+\max\{n_2,1\}+2}\big)\).
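The flow-snapping step in item 2, determining \(f_i\) as the grid point closest to the conservation value \(q_i + f_{\mathrm{ch}_1(i)} + f_{\mathrm{ch}_2(i)} - d_i\), can be sketched as follows (the flow interval, its discretization, and all numbers are illustrative):

```python
# Sketch: at node i, flow conservation fixes the flow toward the parent as
# f_i = q_i + f_ch1 + f_ch2 - d_i, and the DP rounds it to the closest point
# of the discretized flow interval F_i.  All values below are illustrative.
def snap_flow(q_i, f_ch1, f_ch2, d_i, F_i):
    target = q_i + f_ch1 + f_ch2 - d_i
    return min(F_i, key=lambda f: abs(f - target))

F_i = [round(-1.0 + 0.25 * k, 2) for k in range(9)]   # grid of [-1, 1], step 0.25
f = snap_flow(q_i=1.3, f_ch1=0.4, f_ch2=0.2, d_i=1.6, F_i=F_i)
```

Because the flow is determined rather than enumerated, each node costs \(O(N_Q^2)\) instead of \(O(N_Q^2 N_F)\), which is what keeps the exponent at \(n_1 + \max\{n_2, 1\} + 2\).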
Chapter 5