3.6 Proof of Sufficient Conditions
The existing works [15, 67] prove that the optimal solution of the SDP relaxation is of rank 1 in single-phase networks. A crucial step in their proof uses strong duality to show that the product of the primal optimal solution $W^*$ and the dual matrix $A^*$ is a zero matrix, and hence the rank of $W^*$ cannot exceed the dimension of $A^*$'s null space. Under certain conditions, [15, 67] prove that $A^*$'s null space is of dimension at most 1. Hence the optimal primal solution $W^*$ must be of rank at most 1.
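The key linear-algebra step here can be checked numerically. The sketch below (illustrative only; the matrices are arbitrary stand-ins, not OPF data) shows that for Hermitian PSD matrices $A \succeq 0$, $W \succeq 0$ with $\operatorname{tr}(AW) = 0$, we get $AW = 0$, so the range of $W$ lies in $\operatorname{null}(A)$ and $\operatorname{rank}(W) \le \dim\operatorname{null}(A)$.

```python
import numpy as np

# Build a PSD dual-like matrix A with a 1-dimensional null space,
# and a PSD W supported on that null space, so that tr(AW) = 0.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 2))
A = B @ B.T                      # PSD, rank 2 -> null space of dimension 1

_, _, vh = np.linalg.svd(A)
v = vh[-1]                       # basis vector of null(A)
W = np.outer(v, v)               # PSD, rank 1, range(W) contained in null(A)

assert np.isclose(np.trace(A @ W), 0.0)
assert np.allclose(A @ W, 0.0)   # tr(AW) = 0 with A, W PSD forces AW = 0
assert np.linalg.matrix_rank(W) <= 3 - np.linalg.matrix_rank(A)
```

The last assertion is exactly the rank bound used in the argument: $\operatorname{rank}(W) \le \dim\operatorname{null}(A)$.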
This argument however breaks down in a multiphase network for the following two reasons. First, although the underlying graph of an $m$-phase network is still a tree, each bus now has $m$ different phases and might have $m$ unbalanced voltages in general. If we extend each phase to a separate vertex in the new graph and connect every phase pair between every two neighboring buses, then the $m$-phase network with $n$ buses will be transformed into an $(mn)$-node meshed network with multiple cycles [7, 22, 44]. Hence the theory for single-phase radial networks is not applicable. Second, in an $m$-phase network, it is unknown whether the null space of $A^*$ at the optimal point is still of dimension 1. It is therefore not clear how to prove $\operatorname{rank}(W^*) = 1$ via analyzing the dimension of $\operatorname{null}(A^*)$.

³For example, if $\operatorname{Re}(s^{\phi}_j)$ is minimized in the objective function, then the lower bound of $\operatorname{Re}(s^{\phi}_j)$ should not be active in the constraints.
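The lifting described above can be made concrete with a small count (bus and phase numbers here are arbitrary examples): expanding each bus into its phases and connecting every phase pair between adjacent buses produces more edges than any tree on the same vertex set can have, so the lifted graph is necessarily meshed.

```python
# Lift an m-phase radial network into its (m*n)-node single-phase graph.
# Bus-level topology: a 3-bus path 0-1-2 (a tree). Each bus j expands into
# m phase-nodes (j, phi); every phase pair between adjacent buses is an edge.
m = 3                                    # phases per bus
n = 3                                    # number of buses
bus_edges = [(0, 1), (1, 2)]             # radial (tree) bus-level topology

nodes = [(j, phi) for j in range(n) for phi in range(m)]
edges = [((j, a), (l, b)) for (j, l) in bus_edges
         for a in range(m) for b in range(m)]

print(len(nodes))   # 9 nodes
print(len(edges))   # 18 edges
# A tree on 9 nodes would have exactly 8 edges; 18 > 8, so the lifted
# graph contains cycles and is no longer radial.
assert len(edges) > len(nodes) - 1
```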
In the following argument, we use a similar proof framework to that in [15], but the proof will be based on the eigenvectors of $W^*$ instead of the dimension of $\operatorname{null}(A^*)$.
From now on, we suppose A1, A2, A3, and A5 hold.
Preliminaries
Our strategy is to prove the exactness of the perturbed OPF problem and then use Lemma 12 to show (3.4) is also exact. It is important to make sure that all the non-active constraints will remain non-active in the perturbation neighborhood.
Lemma 13. For any nonzero $C_1$, there exists a positive sequence $\epsilon \to 0$ such that for each $\epsilon$ in the sequence, one can collect $(\underline{\lambda}^{\phi}_j(\epsilon), \overline{\lambda}^{\phi}_j(\epsilon), \underline{\mu}^{\phi}_j(\epsilon), \overline{\mu}^{\phi}_j(\epsilon))$ from at least one of its KKT multiplier tuples satisfying

$\sigma_P(j,\phi) = 0 \implies \underline{\lambda}^{\phi}_j(\epsilon) = \overline{\lambda}^{\phi}_j(\epsilon) = 0$  (3.14a)
$\sigma_P(j,\phi) \neq 0 \implies \sigma_P(j,\phi) \cdot (\overline{\lambda}^{\phi}_j(\epsilon) - \underline{\lambda}^{\phi}_j(\epsilon)) \geq 0$  (3.14b)
$\sigma_Q(j,\phi) = 0 \implies \underline{\mu}^{\phi}_j(\epsilon) = \overline{\mu}^{\phi}_j(\epsilon) = 0$  (3.14c)
$\sigma_Q(j,\phi) \neq 0 \implies \sigma_Q(j,\phi) \cdot (\overline{\mu}^{\phi}_j(\epsilon) - \underline{\mu}^{\phi}_j(\epsilon)) \geq 0.$  (3.14d)
Proof. First consider any positive sequence $\{\epsilon_k\}_{k=1}^{\infty}$ such that $\lim_{k\to\infty} \epsilon_k = 0$. Suppose the optimal solution to (3.5) under $\epsilon_k$ is $W_k$ (if there are multiple solutions then select one of them). As (3.5b) prescribes a compact set, using a similar argument as in the proof of Lemma 12 we know there must be a subsequence of $\{\epsilon_k\}_{k=1}^{\infty}$, denoted by $\{\epsilon_{z_t}\}_{t=1}^{\infty}$, such that $W_{z_t}$ converges to $W^*$ in the max norm. The difference $\|W_{z_t} - W^*\|_{\infty}$ can be arbitrarily small for sufficiently large $t$. When $t$ is large enough, the non-active constraints in (3.5b) under $W^*$ will remain non-active under $W_{z_t}$, and the corresponding KKT multipliers will remain 0. As a result,

$\sigma_P(j,\phi) = 0 \implies \underline{p}^{\phi}_j < \operatorname{tr}(\Phi^{\phi}_j W^*) < \overline{p}^{\phi}_j \implies \underline{p}^{\phi}_j < \operatorname{tr}(\Phi^{\phi}_j W_{z_t}) < \overline{p}^{\phi}_j \implies \underline{\lambda}^{\phi}_j(\epsilon_{z_t}) = \overline{\lambda}^{\phi}_j(\epsilon_{z_t}) = 0,$

$\sigma_P(j,\phi) = +1 \implies \underline{p}^{\phi}_j < \operatorname{tr}(\Phi^{\phi}_j W^*) \implies \underline{p}^{\phi}_j < \operatorname{tr}(\Phi^{\phi}_j W_{z_t}) \implies \underline{\lambda}^{\phi}_j(\epsilon_{z_t}) = 0 \implies \sigma_P(j,\phi) \cdot (\overline{\lambda}^{\phi}_j(\epsilon_{z_t}) - \underline{\lambda}^{\phi}_j(\epsilon_{z_t})) \geq 0,$

$\sigma_P(j,\phi) = -1 \implies \operatorname{tr}(\Phi^{\phi}_j W^*) < \overline{p}^{\phi}_j \implies \operatorname{tr}(\Phi^{\phi}_j W_{z_t}) < \overline{p}^{\phi}_j \implies \overline{\lambda}^{\phi}_j(\epsilon_{z_t}) = 0 \implies \sigma_P(j,\phi) \cdot (\overline{\lambda}^{\phi}_j(\epsilon_{z_t}) - \underline{\lambda}^{\phi}_j(\epsilon_{z_t})) \geq 0$

all hold. A similar argument can also be applied to prove (3.14c) and (3.14d).
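The persistence of strictly inactive constraints used in this proof has a simple scalar analogue (illustrative only, unrelated to the OPF data): a small perturbation of the objective moves the optimizer only slightly, so a constraint that is strictly inactive at the limit stays strictly inactive along the tail of the sequence, and its KKT multiplier stays 0 by complementary slackness.

```python
# Toy analogue: minimize (x - 0.5)^2 + eps * x over the box [0, 1].
# At eps = 0 the optimizer x* = 0.5 keeps both box constraints strictly
# inactive. For small eps the optimizer x(eps) = 0.5 - eps/2 stays in the
# interior, so both bound multipliers remain 0.
def argmin_box(eps, lo=0.0, hi=1.0):
    x = 0.5 - eps / 2.0          # unconstrained minimizer of the perturbed cost
    return min(max(x, lo), hi)   # project onto the box

for eps in [0.1, 0.01, 0.001]:
    x = argmin_box(eps)
    assert 0.0 < x < 1.0         # both constraints still strictly inactive
    assert abs(x - 0.5) <= eps   # x(eps) -> x* as eps -> 0
```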
Properties of the Dual Matrix $A^*(\epsilon)$
In order to apply Lemma 12, we construct $C_1 \in \mathbb{C}^{mn \times mn}$ in the following manner:

$[C_1]_{jj} = 0 \in \mathbb{C}^{m \times m}$, for $j \in V$; $\quad [C_1]_{jl} = 0 \in \mathbb{C}^{m \times m}$, for $(j,l) \notin E$.

When $(j,l) \in E$, we assume $j < l$. If neither $j$ nor $l$ is in $S_o \cup S_c$, then we construct $[C_1]_{jl} = Y_{jl}$.

If $j \in S_o \cup S_c$, then A3 guarantees $l \notin S_o \cup S_c$. $\forall \phi \in M$, we set $[C_1]^{\phi,:}_{jl}$ to $Y^{\phi,:}_{jl}$ if $c^{\phi}_{j,\mathrm{r}} = c^{\phi}_{j,\mathrm{i}} = \sigma_P(j,\phi) = \sigma_Q(j,\phi) = 0$, and to $(\sigma_P(j,\phi) + \sigma_Q(j,\phi)i)\, Y^{\phi,:}_{jl}$ otherwise.

If $l \in S_o \cup S_c$, then A3 guarantees $j \notin S_o \cup S_c$. $\forall \phi \in M$, we similarly set $[C_1]^{:,\phi}_{jl}$ to $(Y^{\phi,:}_{jl})^{\mathrm H}$ if $c^{\phi}_{l,\mathrm{r}} = c^{\phi}_{l,\mathrm{i}} = \sigma_P(l,\phi) = \sigma_Q(l,\phi) = 0$, and to $(\sigma_P(l,\phi) - \sigma_Q(l,\phi)i)\,(Y^{\phi,:}_{jl})^{\mathrm H}$ otherwise.

Finally, we set $[C_1]_{lj} := [C_1]^{\mathrm H}_{jl}$ for all $j < l$ to make $C_1$ Hermitian.
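The blockwise construction can be mirrored in a short numeric sketch (the edge blocks here are random stand-ins for the $Y_{jl}$ defined in the chapter, and the bus/phase counts are arbitrary): fill the upper off-diagonal blocks, then mirror them so the result is Hermitian.

```python
import numpy as np

m, n = 3, 3                               # phases per bus, number of buses
edges = [(0, 1), (1, 2)]                  # bus-level tree, j < l on each edge
rng = np.random.default_rng(1)

C1 = np.zeros((m * n, m * n), dtype=complex)
for (j, l) in edges:
    # Stand-in for the edge block assigned to [C1]_{jl}.
    Yjl = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    C1[m*j:m*(j+1), m*l:m*(l+1)] = Yjl            # upper block [C1]_{jl}
    C1[m*l:m*(l+1), m*j:m*(j+1)] = Yjl.conj().T   # mirror: [C1]_{lj} = [C1]_{jl}^H

# Diagonal blocks stay zero and the mirroring makes C1 Hermitian.
assert np.allclose(C1, C1.conj().T)
```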
The next theorem provides a key intermediate result to prove Theorem 10. Suppose under such $C_1$, the sequence guaranteed by Lemma 13 is $\{\epsilon_k\}_{k=1}^{\infty}$.
Theorem 11. Under A1, A2, A3, and A5, for each $\epsilon_k$, the dual matrix $A^*(\epsilon_k)$ is G-invertible.⁴

⁴If the KKT multiplier tuple at $\epsilon_k$ is non-unique, then $A^*(\epsilon_k)$ is evaluated at the multiplier tuple in Lemma 13 satisfying (3.14).
Proof. The value of $A^*(\epsilon_k)$ is the same as the right hand side of (3.11) when all dual variables take values at their corresponding KKT multipliers (with respect to $\epsilon_k$). If not otherwise specified, all the $(\underline{\lambda}^{\phi}_j, \overline{\lambda}^{\phi}_j, \underline{\mu}^{\phi}_j, \overline{\mu}^{\phi}_j)$ in this proof refer to the tuple in Lemma 13 with respect to $\epsilon_k$. Since for all $j \neq l$, $[\boldsymbol{E}_{j\phi}]_{jl}$ and $[\Pi(j)]_{jl}$ are always zero matrices, it is sufficient to show

$E := \sum_{j,\phi} (\overline{\lambda}^{\phi}_j - \underline{\lambda}^{\phi}_j)\Phi^{\phi}_j + (\overline{\mu}^{\phi}_j - \underline{\mu}^{\phi}_j)\Psi^{\phi}_j + C_0 + \epsilon_k C_1$

satisfies the two conditions in Definition 15.⁵
For $j \neq l$ and $(j,l) \notin E$, recall that $C_0$ is a linear combination of the $\Phi^{\phi}_j$ and $\Psi^{\phi}_j$. When $(j,l) \notin E$, $Y_{jl}$ is a zero matrix and so are all $[\Phi^{\phi}_j]_{jl}$ and $[\Psi^{\phi}_j]_{jl}$. The construction of $C_1$ also guarantees that $[C_1]_{jl}$ is all zero. Hence $[E]_{jl}$ is all zero as well.
Now assume $j < l$. If $(j,l) \in E$, we have

$[E]_{jl} = \sum_{\phi} (\overline{\lambda}^{\phi}_j - \underline{\lambda}^{\phi}_j + c^{\phi}_{j,\mathrm{r}})[\Phi^{\phi}_j]_{jl} + (\overline{\mu}^{\phi}_j - \underline{\mu}^{\phi}_j + c^{\phi}_{j,\mathrm{i}})[\Psi^{\phi}_j]_{jl}$
$\qquad + \sum_{\phi} (\overline{\lambda}^{\phi}_l - \underline{\lambda}^{\phi}_l + c^{\phi}_{l,\mathrm{r}})[\Phi^{\phi}_l]_{jl} + (\overline{\mu}^{\phi}_l - \underline{\mu}^{\phi}_l + c^{\phi}_{l,\mathrm{i}})[\Psi^{\phi}_l]_{jl} + \epsilon_k [C_1]_{jl}.$  (3.15)
If neither $j$ nor $l$ is in $S_o \cup S_c$, then by definition, for all $\phi \in M$ there must be

$c^{\phi}_{j,\mathrm{r}} = c^{\phi}_{j,\mathrm{i}} = \sigma_P(j,\phi) = \sigma_Q(j,\phi) = 0,$  (3.16a)
$c^{\phi}_{l,\mathrm{r}} = c^{\phi}_{l,\mathrm{i}} = \sigma_P(l,\phi) = \sigma_Q(l,\phi) = 0.$  (3.16b)

Equation (3.15) and Lemma 13 imply $[E]_{jl} = \epsilon_k [C_1]_{jl}$. By construction, $[C_1]_{jl} = Y_{jl}$ is invertible, and so is $[E]_{jl}$.
If $j \in S_o \cup S_c$, then A3 guarantees $l \notin S_o \cup S_c$. Thus (3.16b) holds for all $\phi \in M$. For a given $\phi \in M$, if (3.16a) holds, then by construction, we have $[E]^{\phi,:}_{jl} = \epsilon_k [C_1]^{\phi,:}_{jl} = \epsilon_k Y^{\phi,:}_{jl}$. If (3.16a) does not hold for the given $\phi$, then we have

$[E]^{\phi,:}_{jl} = \dfrac{\overline{\lambda}^{\phi}_j - \underline{\lambda}^{\phi}_j + c^{\phi}_{j,\mathrm{r}} + 2\epsilon_k\sigma_P(j,\phi)}{2}\, Y^{\phi,:}_{jl} + \dfrac{\overline{\mu}^{\phi}_j - \underline{\mu}^{\phi}_j + c^{\phi}_{j,\mathrm{i}} + 2\epsilon_k\sigma_Q(j,\phi)}{2}\, Y^{\phi,:}_{jl}\, i.$
⁵The matrix $E$ itself might not be G-invertible as $E$ might not be positive semidefinite, but $A^* \succeq 0$ always holds.
Note that Condition A5 and Lemma 13 imply that $\{\overline{\lambda}^{\phi}_j - \underline{\lambda}^{\phi}_j,\ \sigma_P(j,\phi),\ c^{\phi}_{j,\mathrm{r}}\}$ and $\{\overline{\mu}^{\phi}_j - \underline{\mu}^{\phi}_j,\ \sigma_Q(j,\phi),\ c^{\phi}_{j,\mathrm{i}}\}$ are both sign-semidefinite sets. When (3.16a) does not hold, at least one of $\{c^{\phi}_{j,\mathrm{r}},\ c^{\phi}_{j,\mathrm{i}},\ \sigma_P(j,\phi),\ \sigma_Q(j,\phi)\}$ is non-zero. As a result, there exists some non-zero $d^{\phi,:}_{jl} \in \mathbb{C}$ such that $[E]^{\phi,:}_{jl} = d^{\phi,:}_{jl} Y^{\phi,:}_{jl}$. In short, in the case $j \in S_o \cup S_c$, $[E]^{\phi,:}_{jl}$ is always a non-zero multiple of $Y^{\phi,:}_{jl}$. The invertibility of $Y_{jl}$ indicates all the $Y^{\phi,:}_{jl}$ are independent for $\phi \in M$, so $[E]_{jl}$ is also invertible.
If $l \in S_o \cup S_c$, then A3 guarantees $j \notin S_o \cup S_c$. Then (3.16a) holds for all $\phi \in M$. For a given $\phi \in M$, if (3.16b) holds, then by construction, we have $[E]^{:,\phi}_{jl} = \epsilon_k [C_1]^{:,\phi}_{jl} = \epsilon_k (Y^{\phi,:}_{jl})^{\mathrm H}$. If (3.16b) does not hold, then similar to the previous case, there exists some non-zero $d^{:,\phi}_{jl} \in \mathbb{C}$ such that $[E]^{:,\phi}_{jl} = d^{:,\phi}_{jl} (Y^{\phi,:}_{jl})^{\mathrm H}$. Hence $[E]^{:,\phi}_{jl}$ is always a non-zero multiple of $(Y^{\phi,:}_{jl})^{\mathrm H}$. The invertibility of $Y_{jl}$ indicates all the $Y^{\phi,:}_{jl}$ are independent for $\phi \in M$, so $[E]_{jl}$ is also invertible.
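The final step of the proof rests on a simple linear-algebra fact: a matrix whose rows are non-zero scalar multiples of the rows of an invertible matrix is itself invertible. A quick numeric check (with an arbitrary stand-in for the edge block and arbitrary scalars):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 3
# Stand-in for an invertible m x m edge block.
Y = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
assert np.linalg.matrix_rank(Y) == m

# Scale each row by a different non-zero complex scalar, mimicking how
# each row of the block is a non-zero multiple of the corresponding row
# of the invertible edge block.
d = np.array([1.0 + 2.0j, -0.5j, 3.0])
E_block = np.diag(d) @ Y

# Row scaling by non-zero scalars preserves invertibility.
assert np.linalg.matrix_rank(E_block) == m
```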
Proof of Theorem 10
Theorem 11 and Corollary 9 imply that (3.5) is exact under conditions A1, A2, A3, and A5 for each $\epsilon_k$. By Lemma 12, Theorem 10 is proved.
3.7 Discussion and Example