by definition must be a global optimum as well. The argument on local optimality follows directly from the proof of Theorem 1.

2.6 Applications

Global Optimality for Single-Phase Networks
The power flow equations are:
\[
v_i = v_j + 2\,\mathrm{Re}\big(z_{ij} S_{ij}^{H}\big) - |z_{ij}|^2 \ell_{ij}, \quad \forall (i,j) \in \mathcal{E} \tag{2.8a}
\]
\[
v_i = \frac{|S_{ij}|^2}{\ell_{ij}}, \quad \forall (i,j) \in \mathcal{E} \tag{2.8b}
\]
\[
s_j = \sum_{k:\, j \to k} S_{jk} - \sum_{i:\, i \to j} \big(S_{ij} - z_{ij}\ell_{ij}\big), \quad \forall j \in \mathcal{V}. \tag{2.8c}
\]
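To make the branch flow model concrete, the following sketch builds a feasible point of (2.8) for a hypothetical two-bus line directly from complex voltages and checks (2.8a)–(2.8c) numerically; the impedance and voltage values are assumptions chosen for illustration, not data from this chapter.

```python
import numpy as np

# Hypothetical two-bus line 1 -> 2 with assumed per-unit values.
z = 0.01 + 0.02j                      # line impedance z_12
V1, V2 = 1.0 + 0.0j, 0.98 - 0.01j     # complex bus voltages

I = (V1 - V2) / z                     # line current from bus 1 to bus 2
S12 = V1 * np.conj(I)                 # sending-end power flow S_12
ell12 = abs(I) ** 2                   # squared current magnitude ell_12
v1, v2 = abs(V1) ** 2, abs(V2) ** 2   # squared voltage magnitudes

# (2.8a): v_1 = v_2 + 2 Re(z_12 S_12^H) - |z_12|^2 ell_12
assert np.isclose(v1, v2 + 2 * (z * np.conj(S12)).real - abs(z) ** 2 * ell12)
# (2.8b): v_1 = |S_12|^2 / ell_12
assert np.isclose(v1, abs(S12) ** 2 / ell12)
# (2.8c) at bus 2: s_2 = -(S_12 - z_12 ell_12), the received power with sign
s2 = -(S12 - z * ell12)
assert np.isclose(s2, -V2 * np.conj(I))   # equals -V_2 I^H as expected
print("branch flow identities hold")
```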
Given a cost function $f(s) : \mathbb{C}^n \to \mathbb{R}$, we are interested in the following OPF problem:
\[
\begin{aligned}
\underset{x = (s, v, \ell, S)}{\text{minimize}} \quad & f(s) && \text{(2.9a)}\\
\text{subject to} \quad & \text{(2.8)} && \text{(2.9b)}\\
& \underline{v}_i \le v_i \le \overline{v}_i && \text{(2.9c)}\\
& \underline{s}_j \le s_j \le \overline{s}_j && \text{(2.9d)}\\
& \ell_{ij} \le \overline{\ell}_{ij}. && \text{(2.9e)}
\end{aligned}
\]
All inequalities between complex numbers in this section are enforced for both the real and imaginary parts.
Definition 13. A function $g : \mathbb{R} \to \mathbb{R}$ is strongly increasing if there exists a real $c > 0$ such that for any $b > a$, we have $g(b) - g(a) \ge c(b - a)$.

We now make the following assumptions on OPF:
(i) The underlying graph $\mathcal{G}$ is a tree.
(ii) The cost function $f$ is convex, and is strongly increasing in $\mathrm{Re}(s_j)$ (or $\mathrm{Im}(s_j)$) for each $j \in \mathcal{V}$ and non-decreasing in $\mathrm{Im}(s_j)$ (or $\mathrm{Re}(s_j)$).
(iii) Problem (2.9) is feasible.
(iv) The line current limit satisfies $\overline{\ell}_{ij} \le \underline{v}_i |y_{ij}|^2$ for every $(i,j) \in \mathcal{E}$.
Assumption (i) is generally true for distribution networks, and assumption (iii) is typically mild. As for (ii), $f$ is commonly assumed to be convex and increasing in $\mathrm{Re}(s_j)$ and $\mathrm{Im}(s_j)$ in the literature (e.g., [31, 74]). Assumption (ii) is only slightly stronger, since one can always perturb an increasing function by an arbitrarily small linear term to achieve strong monotonicity. Assumption (iv) is not common in the literature but is also mild, for the following reason. Typically $V_i = (1+\epsilon_i)e^{\mathbf{i}\theta_i}$ in per unit, where $\epsilon_i \in [-0.1, 0.1]$, and the angle difference $\theta_{ij} := \theta_i - \theta_j$ between two neighboring buses $i, j$ typically has a small magnitude. Thus the maximum value of $|V_i - V_j|^2 = |(1+\epsilon_i)e^{\mathbf{i}\theta_{ij}} - (1+\epsilon_j)|^2$, which corresponds to $\overline{\ell}_{ij}/|y_{ij}|^2$, should be much smaller than $\underline{v}_i$, which is $\approx 1$ per unit.
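As a rough numerical illustration of this argument, with assumed worst-case values of $\pm 10\%$ magnitude deviation and a 5-degree angle difference, $\overline{\ell}_{ij}/|y_{ij}|^2$ is indeed an order of magnitude below $\underline{v}_i$:

```python
import numpy as np

# Worst-case |V_i - V_j|^2 under assumed +/-10% magnitude deviations and an
# assumed 5-degree angle difference; compare with v_lower ~ (0.9)^2 per unit.
eps_i, eps_j = 0.1, -0.1
theta_ij = np.deg2rad(5.0)
gap = abs((1 + eps_i) * np.exp(1j * theta_ij) - (1 + eps_j)) ** 2
print(f"lbar/|y|^2 ~ {gap:.3f}  vs  v_lower ~ {0.9**2:.2f}")  # ~0.048 << 0.81
```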
Problem (2.9) is non-convex, as constraint (2.8b) is not convex. Denote by $\mathcal{X}$ the set of $(s, v, \ell, S)$ that satisfy (2.9b)–(2.9e), so (2.9) is in the form of (2.1). We can relax (2.9) by convexifying (2.8b) into a second-order cone [27]:
\[
\begin{aligned}
\underset{x = (s, v, \ell, S)}{\text{minimize}} \quad & f(s) && \text{(2.10a)}\\
\text{subject to} \quad & \text{(2.8a)}, \text{(2.8c)}, \text{(2.9c)–(2.9e)} && \text{(2.10b)}\\
& |S_{ij}|^2 \le v_i \ell_{ij}. && \text{(2.10c)}
\end{aligned}
\]
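For concreteness, here is one way to pose a single-line instance of (2.10) with cvxpy; the impedance, load, and limits are hypothetical, and the rotated second-order-cone form of (2.10c) is the standard reformulation. This is a sketch under those assumptions, not the chapter's implementation.

```python
import cvxpy as cp

# Hypothetical single-line instance of relaxation (2.10): bus 1 -> bus 2.
r, x = 0.01, 0.02            # z_12 = r + ix (assumed)
pd, qd = 0.5, 0.2            # load at bus 2, i.e. s_2 = -(pd + i*qd) (assumed)

P = cp.Variable()            # Re(S_12), sending-end flow
Q = cp.Variable()            # Im(S_12)
v1 = cp.Variable()           # |V_1|^2
v2 = cp.Variable()           # |V_2|^2
ell = cp.Variable(nonneg=True)  # ell_12 = |I_12|^2

constraints = [
    # (2.8a) in real arithmetic: v_1 = v_2 + 2 Re(z S^H) - |z|^2 ell
    v1 == v2 + 2 * (r * P + x * Q) - (r**2 + x**2) * ell,
    # (2.8c) at bus 2: pd + i*qd = S_12 - z_12 ell_12
    P - r * ell == pd,
    Q - x * ell == qd,
    # (2.9c), (2.9e) with assumed limits
    0.81 <= v1, v1 <= 1.21, 0.81 <= v2, v2 <= 1.21, ell <= 1.0,
    # (2.10c): |S_12|^2 <= v_1 ell_12, as a rotated second-order cone
    cp.SOC(v1 + ell, cp.hstack([2 * P, 2 * Q, v1 - ell])),
]
# Cost increasing in Re(s_1) = P per assumption (ii); no lower bound on s_1.
prob = cp.Problem(cp.Minimize(P), constraints)
prob.solve()
# Exactness check: (2.10c) should be tight at the optimum, recovering (2.8b).
print(prob.status, (P.value**2 + Q.value**2) - v1.value * ell.value)
```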
Similarly, denote by $\tilde{\mathcal{X}}$ the set of $(s, v, \ell, S)$ that satisfy (2.10b) and (2.10c). It is proved in [27] that if $\underline{s}_j = -\infty - \mathbf{i}\infty$ for all $j \in \mathcal{V}$ (i.e., the injection lower bounds are absent), then (2.10) is exact, meaning that any optimal solution of (2.10) is also feasible, and hence globally optimal, for (2.9).
We now show that the same condition also guarantees that any local optimum of (2.9) is globally optimal. This implies that a local search algorithm, such as the primal-dual interior point method, produces a global optimum as long as it converges.
Theorem 5. If $\underline{s}_j = -\infty - \mathbf{i}\infty$ for all $j \in \mathcal{V}$, then any local optimum of (2.9) is a global optimum.
Proof. Our strategy is to construct an appropriate $V$ and paths $\{h_x\}$, and then prove that this construction satisfies both Condition (C) and Condition (C′). Let
\[
V(x) := \sum_{(i,j)\in\mathcal{E}} v_i \ell_{ij} - |S_{ij}|^2. \tag{2.11}
\]
Clearly, $V$ is a valid Lyapunov-like function satisfying Definition 10.
For each $x = (s, v, \ell, S) \in \tilde{\mathcal{X}} \setminus \mathcal{X}$, let $\mathcal{M}$ be the set of $(i,j) \in \mathcal{E}$ such that $|S_{ij}|^2 < v_i \ell_{ij}$. For $(i,j) \in \mathcal{M}$, the quadratic function
\[
f_{ij}(\theta) := \frac{|z_{ij}|^2}{4}\,\theta^2 + \big(v_i - \mathrm{Re}(z_{ij} S_{ij}^{H})\big)\,\theta + |S_{ij}|^2 - v_i \ell_{ij}
\]
must have a unique positive root, since its leading coefficient is positive and $f_{ij}(0) < 0$. We define $\Delta_{ij}$ to be this positive root if $(i,j) \in \mathcal{M}$, and $\Delta_{ij} := 0$ otherwise.
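Computationally, $\Delta_{ij}$ is just a quadratic-formula root; a minimal helper (the name and inputs are ours, for illustration) might look as follows.

```python
import numpy as np

def delta_ij(z, S, v_i, ell):
    """Positive root of (|z|^2/4) t^2 + (v_i - Re(z S^H)) t + |S|^2 - v_i*ell."""
    a = abs(z) ** 2 / 4
    b = v_i - (z * np.conj(S)).real
    c = abs(S) ** 2 - v_i * ell
    if c >= 0:                 # (i, j) not in M: (2.10c) already tight
        return 0.0
    # c < 0 and a > 0 guarantee exactly one positive root
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```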
Assumption (iv) implies $\ell_{ij} \le \overline{\ell}_{ij} \le \underline{v}_i |y_{ij}|^2 \le v_i |y_{ij}|^2$, and therefore
\[
v_i - \mathrm{Re}\big(z_{ij} S_{ij}^{H}\big) \;\ge\; v_i - |z_{ij}||S_{ij}| \;\ge\; v_i - |z_{ij}|\sqrt{v_i \ell_{ij}} \;\ge\; v_i - |z_{ij}|\sqrt{v_i^2 |y_{ij}|^2} \;=\; 0,
\]
where the second inequality uses (2.10c) and the last step uses $|z_{ij}||y_{ij}| = 1$. It follows that $f_{ij}(\theta)$ is strictly increasing for $\theta \in [0, \Delta_{ij}]$.
Now consider the path $h_x(t) := (\tilde{s}(t), \tilde{v}(t), \tilde{\ell}(t), \tilde{S}(t))$ for $t \in [0,1]$, where
\[
\begin{aligned}
\tilde{s}_j(t) &= s_j - \frac{t}{2}\sum_{k:\, j \to k} z_{jk}\Delta_{jk} - \frac{t}{2}\sum_{i:\, i \to j} z_{ij}\Delta_{ij}, && \text{(2.12a)}\\
\tilde{v}_i(t) &= v_i, && \text{(2.12b)}\\
\tilde{\ell}_{ij}(t) &= \ell_{ij} - t\Delta_{ij}, && \text{(2.12c)}\\
\tilde{S}_{ij}(t) &= S_{ij} - \frac{t}{2}\, z_{ij}\Delta_{ij}. && \text{(2.12d)}
\end{aligned}
\]
Clearly $h_x(0) = x$. It can easily be checked that $h_x(t)$ is feasible for (2.10) for $t \in [0,1]$ and that $h_x(1)$ is feasible for (2.9). Therefore, $h_x$ is indeed a map $[0,1] \to \tilde{\mathcal{X}}$ with $h_x(1) \in \mathcal{X}$.
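The following sketch traces (2.12) on a single line with assumed per-unit data: the SOC gap $v_i\tilde{\ell}_{ij}(t) - |\tilde{S}_{ij}(t)|^2 = -f_{ij}(t\Delta_{ij})$ shrinks to zero at $t = 1$, while (2.8a) remains satisfied with $\tilde{v}$ held fixed.

```python
import numpy as np

# Walk the path (2.12) on one line; all numbers are assumed per-unit values.
z, v1 = 0.01 + 0.02j, 1.0
S = 0.5 + 0.2j
ell = abs(S) ** 2 / v1 + 0.05                 # inflate ell so (2.8b) is slack
v2 = v1 - 2 * (z * np.conj(S)).real + abs(z) ** 2 * ell   # satisfy (2.8a)

# Delta_12: the positive root of the quadratic f_12 from the proof
a = abs(z) ** 2 / 4
b = v1 - (z * np.conj(S)).real
c = abs(S) ** 2 - v1 * ell
D = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

for t in (0.0, 0.5, 1.0):
    S_t = S - 0.5 * t * z * D                 # (2.12d)
    ell_t = ell - t * D                       # (2.12c)
    gap = v1 * ell_t - abs(S_t) ** 2          # equals -f_12(t * Delta_12)
    res = v1 - (v2 + 2 * (z * np.conj(S_t)).real - abs(z) ** 2 * ell_t)
    print(f"t={t:.1f}  SOC gap={gap:.6f}  (2.8a) residual={abs(res):.1e}")
# The gap decreases monotonically to ~0 at t = 1; the residual stays ~0.
```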
Since $z_{ij} > 0$ (both the resistance and the reactance are positive), both the real and imaginary parts of $\tilde{s}_j(t)$ are strictly decreasing whenever bus $j$ is incident to some line in $\mathcal{M}$, and stay unchanged otherwise. By assumption (ii), $f(\tilde{s}(t))$ is thus strictly decreasing. To show that $V(h_x(t))$ is also decreasing, notice that $V(h_x(t))$ equals
\[
\begin{aligned}
\sum_{(i,j)\in\mathcal{E}} \tilde{v}_i(t)\tilde{\ell}_{ij}(t) - |\tilde{S}_{ij}(t)|^2
&= \sum_{(i,j)\in\mathcal{M}^c} v_i \ell_{ij} - |S_{ij}|^2 + \sum_{(i,j)\in\mathcal{M}} \tilde{v}_i(t)\tilde{\ell}_{ij}(t) - |\tilde{S}_{ij}(t)|^2\\
&= \sum_{(i,j)\in\mathcal{M}^c} v_i \ell_{ij} - |S_{ij}|^2 - \sum_{(i,j)\in\mathcal{M}} f_{ij}(t\Delta_{ij}).
\end{aligned}
\]
As $f_{ij}(\theta)$ is strictly increasing for $\theta \in [0,\Delta_{ij}]$, we conclude that $V(h_x(t))$ is strictly decreasing for $t \in [0,1]$.
By Corollary 1, the set $\{h_x\}_{x \in \tilde{\mathcal{X}} \setminus \mathcal{X}}$ is uniformly bounded and uniformly equicontinuous, as each $h_x(t)$ is linear in $t$. In summary, Condition (C) is satisfied.
Finally, we show that Condition (C′) also holds. By assumption (ii), there exists some real $c > 0$ independent of $x$ such that for any $0 \le a < b \le 1$,
\[
f(\tilde{s}(a)) - f(\tilde{s}(b)) \;\ge\; c \sum_{j\in\mathcal{V}} \mathrm{Re}\big(\tilde{s}_j(a) - \tilde{s}_j(b)\big) + \mathrm{Im}\big(\tilde{s}_j(a) - \tilde{s}_j(b)\big) \;=\; c\,\|\tilde{s}(a) - \tilde{s}(b)\|_{\mathrm{m}},
\]
where $\|\cdot\|_{\mathrm{m}}$ is defined by $\|\mathbf{a}\|_{\mathrm{m}} := \sum_i |\mathrm{Re}(a_i)| + |\mathrm{Im}(a_i)|$ over the complex vector space; the last equality holds because every component of $\tilde{s}$ is non-increasing in $t$. It is easy to check that $\|\cdot\|_{\mathrm{m}}$ is a valid norm.
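In code, this norm is a one-liner; the helper name below is ours.

```python
import numpy as np

# The norm used in Condition (C'): ||a||_m = sum_i |Re(a_i)| + |Im(a_i)|.
def norm_m(a):
    a = np.asarray(a)
    return np.abs(a.real).sum() + np.abs(a.imag).sum()
```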
On the other hand, by (2.12) we have $\|\tilde{v}(a) - \tilde{v}(b)\|_{\mathrm{m}} \equiv 0$ and
\[
\|\tilde{\ell}(a) - \tilde{\ell}(b)\|_{\mathrm{m}} \le \frac{1}{\min_{(i,j)\in\mathcal{E}} \|z_{ij}\|_{\mathrm{m}}}\,\|\tilde{s}(a) - \tilde{s}(b)\|_{\mathrm{m}}, \qquad
\|\tilde{S}(a) - \tilde{S}(b)\|_{\mathrm{m}} \le \frac{1}{2}\,\|\tilde{s}(a) - \tilde{s}(b)\|_{\mathrm{m}}.
\]
Therefore,
\[
\|h_x(a) - h_x(b)\|_{\mathrm{m}} \le \left(\frac{3}{2} + \frac{1}{\min_{(i,j)\in\mathcal{E}} \|z_{ij}\|_{\mathrm{m}}}\right) \|\tilde{s}(a) - \tilde{s}(b)\|_{\mathrm{m}},
\]
and hence there exists $\tilde{c} > 0$, independent of $x$, $a$, and $b$, such that
\[
f(\tilde{s}(a)) - f(\tilde{s}(b)) \ge \tilde{c}\,\|h_x(a) - h_x(b)\|_{\mathrm{m}}.
\]
Therefore Condition (C′) is also satisfied, and by Theorem 2, any local optimum of (2.9) is a global optimum.
The results in this subsection apply only to radial networks, which are the underlying topology of balanced distribution systems. Transmission systems and unbalanced distribution systems are usually highly meshed. It has been observed that for most meshed test cases, both convex relaxation and local search algorithms also yield the global optimum [34, 41]. Theorem 3 therefore suggests that similar Lyapunov-like functions and paths may also exist for meshed networks; finding them would be an interesting direction for extending the results of this chapter.
Low Rank Semidefinite Program
This subsection proves a known result from [17] using a different approach. Adopting the same notation as [17], we consider the following problem:
\[
\begin{aligned}
\underset{X \succeq 0}{\text{minimize}} \quad & \mathrm{tr}(CX) && \text{(2.13a)}\\
\text{subject to} \quad & \mathrm{tr}(A_i X) = b_i, \quad i = 1,\cdots,m && \text{(2.13b)}\\
& \mathrm{rank}(X) \le r. && \text{(2.13c)}
\end{aligned}
\]
Here, $C$, $A_i$, and $X$ are all $n$-by-$n$ matrices. We assume the problem is feasible and that $\{X \succeq 0 \mid \text{(2.13b)}\}$ is compact.
Theorem 6. If $(r+1)(r+2)/2 > m+1$, then any local optimum of (2.13) is either a global optimum or a pseudo local optimum.
Before proving Theorem 6, we consider the convex relaxation of (2.13):
\[
\begin{aligned}
\underset{X \succeq 0}{\text{minimize}} \quad & \mathrm{tr}(CX) && \text{(2.14a)}\\
\text{subject to} \quad & \mathrm{tr}(A_i X) = b_i, \quad i = 1,\cdots,m. && \text{(2.14b)}
\end{aligned}
\]
As a side note, the results in [5, 61] show that if $(r+1)(r+2)/2 > m$, then (2.14) is weakly exact with respect to (2.13). While our theorem is the same as that in [17], some of the insights for finding $V$ and $\{h_X\}$ also come from the structures first proposed in [5, 61].
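As a quick empirical companion to this side note, one can solve a small instance of (2.14) and inspect the numerical rank of the optimizer; all data and tolerances below are synthetic assumptions, and the rank bound guarantees only that *some* optimum of low rank exists, not that the solver returns it.

```python
import cvxpy as cp
import numpy as np

# Solve a small random instance of the relaxation (2.14).
rng = np.random.default_rng(1)
n, m = 6, 3

M = rng.normal(size=(n, n))
C = M @ M.T + np.eye(n)        # positive definite cost keeps tr(CX) bounded
As = []
for _ in range(m):
    A = rng.normal(size=(n, n))
    As.append((A + A.T) / 2)
X0 = rng.normal(size=(n, n)); X0 = X0 @ X0.T    # a feasible PSD point
bs = [np.trace(A @ X0) for A in As]

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(As[i] @ X) == bs[i] for i in range(m)])
prob.solve()

# Per [5, 61], some optimum has rank r with (r+1)(r+2)/2 > m, here r <= 2;
# the numerical rank depends on the solver and the tolerance chosen.
print(prob.status, np.linalg.matrix_rank(X.value, tol=1e-6))
```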
Proof. Clearly, (2.13) can be reformulated in the form of (2.1) by setting $f(X) = \mathrm{tr}(CX)$, $\mathcal{X} = \{X \succeq 0 \mid \text{(2.13b)}, \text{(2.13c)}\}$, and $\tilde{\mathcal{X}} = \{X \succeq 0 \mid \text{(2.13b)}\}$. Define $V$ as
\[
V(X) := \sum_{i=r+1}^{n} \lambda_i(X),
\]
where $\lambda_i(X)$ is the $i$-th eigenvalue of $X$ in decreasing order. This function $V$ satisfies Definition 10 and is concave.
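Numerically, this Lyapunov-like function is just a tail eigenvalue sum; a minimal helper (name ours):

```python
import numpy as np

# V(X) = sum of the eigenvalues of X beyond the r largest (decreasing order).
# For X >= 0 this is zero exactly when rank(X) <= r.
def V_sdp(X, r):
    lam = np.linalg.eigvalsh(X)[::-1]   # eigvalsh sorts ascending; reverse
    return lam[r:].sum()
```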
For fixed $X \in \tilde{\mathcal{X}} \setminus \mathcal{X}$, denote $\mathrm{rank}(X)$ by $r_0 > r$. We first construct $r_0 - r$ paths labeled $h^1, h^2, \cdots, h^{r_0-r}$. When we construct $h^k$ for $k > 1$, we assume the path $h^{k-1}$ has already been constructed, and let $X_{k-1} := h^{k-1}(1)$; we also let $X_0 = X$. For $k \ge 1$, if $\mathrm{rank}(X_{k-1}) \le r_0 - k$, then we let $h^k(t) \equiv X_{k-1}$ for $t \in [0,1]$. Otherwise, we decompose $X_{k-1}$ as $U\Sigma U^H$, where $\Sigma$ is a $p$-by-$p$ positive definite diagonal matrix with $p = \mathrm{rank}(X_{k-1}) > r_0 - k$. The linear system
\[
\begin{aligned}
\mathrm{tr}(CUWU^H) &= 0\\
\mathrm{tr}(A_iUWU^H) &= 0, \quad i = 1,\cdots,m
\end{aligned} \tag{2.15}
\]
must have a non-zero solution for the Hermitian matrix $W \in \mathbb{C}^{p\times p}$. To see this, note that $p \ge r_0 - k + 1 \ge r + 1$, and thus $p(p+1)/2 \ge (r+1)(r+2)/2 > m+1$; as a result, (2.15) has more unknown variables than equations. Denote such a non-zero solution by $W$; for any $\alpha \in \mathbb{R}$, $\alpha W$ is also a solution of (2.15). The concavity of $V$ implies that $V(U(\Sigma + \alpha W)U^H)$ is concave in $\alpha$ when $U$ and $\Sigma$ are fixed. Since $\Sigma \succ 0$, one of the following two scenarios must be true.

• There exists $\beta < 0$ such that $V(U(\Sigma+\alpha W)U^H)$ is non-decreasing in $\alpha$, $\mathrm{rank}(U(\Sigma+\alpha W)U^H) \le p$ for $\alpha \in [\beta, 0]$, and $\mathrm{rank}(U(\Sigma+\beta W)U^H) \le p-1$.

• There exists $\beta > 0$ such that $V(U(\Sigma+\alpha W)U^H)$ is non-increasing in $\alpha$, $\mathrm{rank}(U(\Sigma+\alpha W)U^H) \le p$ for $\alpha \in [0, \beta]$, and $\mathrm{rank}(U(\Sigma+\beta W)U^H) \le p-1$.

Without loss of generality, we suppose $V(U(\Sigma+\alpha W)U^H)$ is non-increasing for $\alpha \in [0, \beta]$ (otherwise we take $-W$ instead). We then construct $h^k$ as $h^k(t) = U(\Sigma + t\beta W)U^H$ for $t \in [0,1]$. By construction, $V(h^k(t))$ is non-increasing and $f(h^k(t))$ stays constant.
Finally, we construct $h_X$ as the concatenation of the paths $h^1,\cdots,h^{r_0-r}$. That is,
\[
h_X(t) := h^k\big((r_0-r)t - k + 1\big) \quad \text{for } t \in \left[\tfrac{k-1}{r_0-r},\, \tfrac{k}{r_0-r}\right].
\]
It is easy to see that $h_X$ is continuous and $h_X(0) = h^1(0) = X$. To see $h_X(1) \in \mathcal{X}$, we prove that $\mathrm{rank}(X_k) \le r_0 - k$ for all $k$. First, $\mathrm{rank}(X_0) = \mathrm{rank}(X) = r_0$. For $k \ge 1$, we have $\mathrm{rank}(X_k) = \mathrm{rank}(X_{k-1})$ if $\mathrm{rank}(X_{k-1}) \le r_0 - k$, and $\mathrm{rank}(X_k) \le \mathrm{rank}(X_{k-1}) - 1$ otherwise. By induction, $\mathrm{rank}(X_k) \le r_0 - k$ always holds. As a result, $\mathrm{rank}(h_X(1)) = \mathrm{rank}(h^{r_0-r}(1)) \le r$, and thus $h_X(1) \in \mathcal{X}$. By construction, $h^k(t)$ never violates (2.13b) and thus lies in $\tilde{\mathcal{X}}$; so does $h_X(t)$ for all $t$. Since $V(h^k(t))$ and $f(h^k(t))$ are non-increasing for each $k$, $V(h_X(t))$ and $f(h_X(t))$ are also non-increasing. Therefore, (C3) is satisfied. By Corollary 1, (C2) also holds for $\{h_X\}$. This completes the proof (by Theorem 4).
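To see the mechanics of one step of this construction, the sketch below (synthetic data; helper names are ours) solves (2.15) in an eigenbasis $X = U\Sigma U^H$ and moves along $W$ until the rank drops, leaving $\mathrm{tr}(CX)$ and every $\mathrm{tr}(A_iX)$ unchanged. The sign of $W$ here is chosen simply to reach the boundary of the PSD cone; the proof additionally uses the concavity of $V$ to pick the sign that keeps $V$ non-increasing.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 6, 3, 4

def rand_herm(k):
    A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    return (A + A.conj().T) / 2

C = rand_herm(n)
As = [rand_herm(n) for _ in range(m)]
B = rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))
X = B @ B.conj().T                            # feasible X with rank p

lam, U_all = np.linalg.eigh(X)
U, Sigma = U_all[:, -p:], np.diag(lam[-p:])   # X = U Sigma U^H

def herm_basis(p):
    """Real-linear basis of Hermitian p x p matrices (p^2 elements)."""
    E = []
    for i in range(p):
        D = np.zeros((p, p), complex); D[i, i] = 1; E.append(D)
    for i in range(p):
        for j in range(i + 1, p):
            D = np.zeros((p, p), complex); D[i, j] = D[j, i] = 1; E.append(D)
            D = np.zeros((p, p), complex); D[i, j] = 1j; D[j, i] = -1j; E.append(D)
    return E

# One row of (2.15) per matrix in {C, A_1, ..., A_m}; p^2 > m + 1 unknowns.
basis = herm_basis(p)
rows = [[np.trace(M @ U @ E @ U.conj().T).real for E in basis]
        for M in [C] + As]
W = sum(c * E for c, E in zip(np.linalg.svd(np.array(rows))[2][-1], basis))

# Largest step alpha keeping Sigma + alpha W >= 0, via G = S^-1/2 W S^-1/2.
d = 1 / np.sqrt(np.diag(Sigma))
G = d[:, None] * W * d[None, :]
mu = np.linalg.eigvalsh(G)
if mu[0] > -1e-12:                            # G >= 0: use -W to hit boundary
    W, G, mu = -W, -G, np.linalg.eigvalsh(-G)
alpha = -1 / mu[0]                            # an eigenvalue of I+alpha*G hits 0

X_new = U @ (Sigma + alpha * W) @ U.conj().T
print(np.linalg.matrix_rank(X, tol=1e-8),
      np.linalg.matrix_rank(X_new, tol=1e-8))            # rank drops: p -> p-1
print(np.isclose(np.trace(C @ X), np.trace(C @ X_new)),  # cost unchanged
      all(np.isclose(np.trace(A @ X), np.trace(A @ X_new)) for A in As))
```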
Remark 4. In [17], Theorem 3.4 states that any local optimum of (2.13) is also globally optimal unless it is harbored in some positive-dimensional face of the SDP feasible set. The result in this chapter further asserts that if a local optimum is indeed harbored in such a face, then there must be some point on the edge of this face whose cost can be further reduced in its neighborhood (i.e., the local optimum is a pseudo local optimum rather than a spurious local optimum, as illustrated in Fig. 2.1).
Table 2.2: Sufficient and necessary conditions (l.o. = local optimum, p.l.o. = pseudo local optimum, g.o. = global optimum)

Condition               | Relaxation exactness | Local optimality
------------------------|----------------------|------------------------
Sufficient conditions:  |                      |
(C1), (C2)              | Strong exactness     | l.o. is p.l.o. or g.o.
(C3), (C2)              | Weak exactness       | l.o. is p.l.o. or g.o.
(C′)                    | Strong exactness     | l.o. is g.o.
Necessary condition:    |                      |
(C1), (C2)              | Strong exactness     | l.o. is g.o.