constraint can be written as
$$\big|\mathrm{Im}\{x_i x_j^H\}\big| \leq \tan\theta_{ij}^{\max} \times \mathrm{Re}\{x_i x_j^H\},$$
or equivalently
$$-\tan\theta_{ij}^{\max} \times \mathrm{Re}\{x_i x_j^H\} + \mathrm{Re}\{(+\mathrm{i})\, x_i x_j^H\} \leq 0,$$
$$-\tan\theta_{ij}^{\max} \times \mathrm{Re}\{x_i x_j^H\} + \mathrm{Re}\{(-\mathrm{i})\, x_i x_j^H\} \leq 0. \tag{6.40}$$
Since (6.39) and (6.40) are quadratic in $x$, they can easily be incorporated into Optimization (6.38) and its relaxations. However, the edge set $\{c_{ij}^1, c_{ij}^2, c_{ij}^3, c_{ij}^4\}$ should be extended to $\{c_{ij}^1, c_{ij}^2, c_{ij}^3, c_{ij}^4, -1, \mathrm{i}, -\mathrm{i}\}$ for every $(i,j) \in G$. It is interesting to note that this set is still sign definite, and therefore the conclusion drawn earlier about the exactness of various relaxations remains valid under this generalization.
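Since $\mathrm{Re}\{\pm\mathrm{i}\,z\} = \mp\mathrm{Im}\{z\}$, the pair of inequalities in (6.40) is equivalent to the angle bound $|\mathrm{Im}\{x_i x_j^H\}| \leq \tan\theta_{ij}^{\max}\,\mathrm{Re}\{x_i x_j^H\}$ whenever the real part is positive. The following sketch checks this equivalence numerically on random phasors (the variable names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_max = 0.3  # assumed angle limit in (0, pi/2)

for _ in range(1000):
    # z plays the role of x_i * conj(x_j) for unit-magnitude phasors
    xi = np.exp(1j * rng.uniform(-np.pi, np.pi))
    xj = np.exp(1j * rng.uniform(-np.pi, np.pi))
    z = xi * np.conj(xj)
    if z.real <= 0:
        continue  # the equivalence is claimed for Re{z} > 0
    # the two constraints of (6.40), linear in Re{z} and Im{z}
    c1 = -np.tan(theta_max) * z.real + (1j * z).real <= 0
    c2 = -np.tan(theta_max) * z.real + (-1j * z).real <= 0
    # the intended interpretation: an angle limit on z
    angle_ok = abs(np.angle(z)) <= theta_max
    assert (c1 and c2) == angle_ok
```

Near the boundary $|\angle z| = \theta_{ij}^{\max}$ the two boolean tests could disagree by floating-point error, but continuous random samples land there with probability zero.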
Another interesting case is the optimization of active power flows for lossless networks.
In this case, $g_{ij}$ is equal to zero for every $(i,j) \in G$. Hence, $p_{ji}(x)$ can simply be replaced by $-p_{ij}(x)$. Motivated by this observation, define the reduced vector of active powers as $p_r(x) = \big[\, p_{ij}(x) \,\big]_{(i,j)\in G}$, and consider the optimization
$$\min_{x\in\mathbb{C}^n} \ \bar h_0(p_r(x), y(x)) \quad \text{s.t.} \quad \bar h_j(p_r(x), y(x)) \leq 0, \quad j = 1, 2, \ldots, m$$
for some functions $\bar h_0(\cdot,\cdot), \ldots, \bar h_m(\cdot,\cdot)$, which are assumed to be increasing in their first vector argument. Now, each edge $(i,j)$ of the graph $G$ is accompanied by the singleton weight set $\{-\mathrm{i}\, b_{ij}\}$. Due to Theorem 10, the SDP and reduced SDP relaxations are exact if $G$ is weakly cyclic. This is the generalization of the result obtained in [105] for optimization over lossless networks.
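The antisymmetry $p_{ji}(x) = -p_{ij}(x)$ for lossless lines can be sanity-checked numerically. The line-flow formula used below, $p_{ij} = \mathrm{Re}\{(|x_i|^2 - x_i x_j^H)\, y_{ij}^H\}$ with series admittance $y_{ij} = g_{ij} + \mathrm{i}\, b_{ij}$, is a standard model assumed for illustration; this excerpt does not define $p_{ij}$ itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def line_flow(xi, xj, g, b):
    """Active power sent from node i toward node j over a line with
    series admittance y = g + i*b (assumed standard model, not from
    the text)."""
    y = g + 1j * b
    return ((abs(xi) ** 2 - xi * np.conj(xj)) * np.conj(y)).real

# for a lossless line (g = 0), the flow is antisymmetric: p_ji = -p_ij
for _ in range(100):
    xi = rng.normal() + 1j * rng.normal()
    xj = rng.normal() + 1j * rng.normal()
    b = rng.normal()
    p_ij = line_flow(xi, xj, 0.0, b)
    p_ji = line_flow(xj, xi, 0.0, b)
    assert abs(p_ij + p_ji) < 1e-9
```

With $g_{ij} \neq 0$ the identity fails, reflecting resistive losses, which is why the replacement of $p_{ji}$ by $-p_{ij}$ is specific to lossless networks.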
Example 1: Consider the problem of minimizing the bivariate polynomial
$$f_0(x_1, x_2) = x_1^4 + a x_2^2 + b x_1^2 x_2 + c x_1 x_2 \tag{6.41}$$
with the real-valued variables $x_1$ and $x_2$, where the parameters $a, b, c \in \mathbb{R}$ are known. In order to find the global minimum of this optimization, standard convex optimization techniques cannot readily be used due to the non-convexity of $f_0(x_1, x_2)$ for generic values of $a$, $b$ and $c$. To address this issue, the above unconstrained minimization problem will be converted to a constrained quadratic optimization. More precisely, the problem of minimizing $f_0(x_1, x_2)$ can be reformulated in terms of $x_1$, $x_2$ and two auxiliary variables $x_3$, $x_4$ as:
$$\min_{x\in\mathbb{R}^4} \ x_3^2 + a x_2^2 + b x_3 x_2 + c x_1 x_2 \tag{6.42a}$$
$$\text{subject to} \quad x_1^2 - x_3 x_4 = 0, \qquad x_4^2 - 1 = 0 \tag{6.42b}$$
where $x = \big[\, x_1 \ \ x_2 \ \ x_3 \ \ x_4 \,\big]^H$. The above optimization can be recast as follows:
$$\min_{x\in\mathbb{R}^4,\ X\in\mathbb{R}^{4\times4}} \ X_{33} + a X_{22} + b X_{23} + c X_{12} \tag{6.43a}$$
$$\text{subject to} \quad X_{11} - X_{34} \leq 0, \qquad X_{44} - 1 = 0 \tag{6.43b}$$
and subject to the additional constraint $X = xx^H$. Note that $X_{11} - X_{34} \leq 0$ should have been $X_{11} - X_{34} = 0$, but this modification does not change the solution. To eliminate the non-convexity induced by the constraint $X = xx^H$, one can use an SDP relaxation obtained by replacing this constraint with the convex SDP constraint $X = X^H \succeq 0$. To understand the exactness of this relaxation, the weighted graph $G$ capturing the structure of Optimization (6.42) should be constructed. This graph is depicted in Figure 6.3(b). Due to Corollary 1, since $G$ is acyclic, the SDP relaxation is exact for all values of $a$, $b$, $c$. Note that this does not imply that every solution $X$ of the SDP relaxation has rank 1. However, there is a simple systematic procedure for recovering a rank-1 solution from an arbitrary optimal solution of this relaxation. Note also that one can use an SOCP relaxation instead.
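The equivalence of (6.41) and (6.42) can be seen directly: on the feasible set, $x_4 = \pm 1$ and $x_3 = x_1^2 x_4$, and the quadratic objective (6.42a) then evaluates to $f_0(x_4 x_1, x_4 x_2)$; since $(x_1, x_2) \mapsto (x_4 x_1, x_4 x_2)$ is a bijection of $\mathbb{R}^2$, the two problems share the same optimal value. A quick numerical check of this identity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

def f0(x1, x2, a, b, c):
    # quartic objective (6.41)
    return x1**4 + a * x2**2 + b * x1**2 * x2 + c * x1 * x2

def quad_obj(x1, x2, x3, x4, a, b, c):
    # quadratic objective (6.42a)
    return x3**2 + a * x2**2 + b * x3 * x2 + c * x1 * x2

for _ in range(100):
    a, b, c = rng.normal(size=3)
    x1, x2 = rng.normal(size=2)
    x4 = rng.choice([-1.0, 1.0])
    x3 = x1**2 * x4  # satisfies x1^2 - x3*x4 = 0 together with x4^2 = 1
    lhs = quad_obj(x1, x2, x3, x4, a, b, c)
    rhs = f0(x4 * x1, x4 * x2, a, b, c)
    assert abs(lhs - rhs) < 1e-8
```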
Figure 6.4: Function $f_0(x_1, x_2)$ given in (6.41) for $a = 3$, $b = -2$ and $c = 3$.
Now, assume that a set of constraints
$$f_j(x_1, x_2) = x_1^4 + a_j x_2^2 + b_j x_1^2 x_2 + c_j x_1 x_2 \leq d_j, \qquad j = 1, \ldots, m$$
has been added to Optimization (6.41) for given coefficients $a_j, b_j, c_j, d_j$. In this case, the graph $G$ depicted in Figure 6.3(b) needs to be modified by replacing its edge sets $\{b\}$ and $\{c\}$ with $\{b, b_1, \ldots, b_m\}$ and $\{c, c_1, \ldots, c_m\}$, respectively. Due to Corollary 1, the SDP relaxation corresponding to the new optimization is exact as long as the sets $\{c, c_1, \ldots, c_m\}$ and $\{b, b_1, \ldots, b_m\}$ are both sign definite. Moreover, in light of Theorem 4, if these sets are not sign definite, then the SDP relaxation will still have a low-rank (rank-1 or rank-2) solution.
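For real weight sets, sign definiteness amounts to all elements sharing one sign; this reading is inferred from the usage in this excerpt (the general definition for complex sets is broader). A minimal helper of this kind:

```python
def is_sign_definite(weights):
    """Check whether a set of real edge weights is sign definite,
    here taken to mean all elements are nonnegative or all are
    nonpositive (assumed real-set reading of the definition)."""
    return all(w >= 0 for w in weights) or all(w <= 0 for w in weights)

print(is_sign_definite([-2.0, -1.0, -0.5]))  # all nonpositive -> True
print(is_sign_definite([3.0, -1.0, 2.0]))    # mixed signs -> False
```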
Example 2: Consider the optimization
$$\min_{x\in\mathbb{C}^7} \ x^H M x \quad \text{s.t.} \quad |x_i| = 1, \quad i = 1, 2, \ldots, 7 \tag{6.44}$$
where $M$ is a given Hermitian matrix. Assume that the weighted graph $G$ depicted in Figure 6.1(c) captures the structure of this optimization, meaning that (i) $M_{ij} = 0$ for every pair $(i,j) \in \{1, 2, \ldots, 7\}$ such that $(i,j) \notin G$, $(j,i) \notin G$, and $i \neq j$, and (ii) $M_{ij}$ is equal to the edge weight $c_{ij}$ for every $(i,j) \in G$. The SDP relaxation of this optimization is as follows:
$$\min_{X\in\mathbb{C}^{7\times7}} \ \mathrm{Trace}\{MX\} \quad \text{s.t.} \quad X_{11} = \cdots = X_{77} = 1, \quad X = X^H \succeq 0.$$
Define $O_1$ and $O_2$ as the cycles induced by the vertex sets $\{1,2,3\}$ and $\{1,4,5\}$, respectively. Now, the reduced SDP and SOCP relaxations can be obtained by replacing the constraint $X = X^H \succeq 0$ in the above optimization with certain small-sized constraints based on $O_1$ and $O_2$, as mentioned before. In light of Theorem 11, the following statements hold:
• The SDP, reduced SDP and SOCP relaxations are all exact in the case when $c_{12}, c_{13}, c_{14}, c_{15}, c_{23}, c_{45}$ are real numbers satisfying the inequalities $c_{12} c_{13} c_{23} \leq 0$ and $c_{14} c_{15} c_{45} \leq 0$.
• The SDP and reduced SDP relaxations are exact in the case when $c_{12}, c_{13}, c_{14}, c_{15}, c_{23}, c_{45}$ are imaginary numbers (note that the SOCP relaxation may not be tight).
• The SDP, reduced SDP and SOCP relaxations are all exact in the case when each of the sets $\{c_{12}, c_{13}, c_{23}\}$ and $\{c_{14}, c_{15}, c_{45}\}$ has at least one zero element.
The above results demonstrate how the combined effect of the graph topology and the edge weights makes various relaxations exact for the quadratic optimization (6.44).
Example 3: Consider the optimization
$$\min_{x\in\mathbb{C}^n} \ x^H M x \quad \text{s.t.} \quad |x_j| = 1, \quad j = 1, 2, \ldots, m \tag{6.45}$$
where $M$ is a symmetric real-valued matrix. It has been proven in [119] that this problem is NP-hard even in the case when $M$ is restricted to be positive semidefinite. Consider the graph $G$ associated with the matrix $M$. As an application of Theorem 8, the SDP and reduced SDP relaxations are exact for this optimization, and therefore this problem is polynomial-time solvable, provided that $G$ is bipartite and weakly cyclic. To understand how well the SDP relaxation works, we pick $G$ as a cycle with 4 vertices. Consider a randomly generated matrix $M$:
$$M = \begin{bmatrix} 0 & -0.0961 & 0 & -0.1245 \\ -0.0961 & 0 & -0.1370 & 0 \\ 0 & -0.1370 & 0 & 0.7650 \\ -0.1245 & 0 & 0.7650 & 0 \end{bmatrix}.$$
After solving the SDP relaxation numerically, an optimal solution $X^*$ is obtained as
$$X^* = \begin{bmatrix} 1.0000 & 0.1767 & -0.5516 & 0.6505 \\ 0.1767 & 1.0000 & 0.7235 & -0.6327 \\ -0.5516 & 0.7235 & 1.0000 & -0.9923 \\ 0.6505 & -0.6327 & -0.9923 & 1.0000 \end{bmatrix}.$$
This matrix has rank 2, and thus it seems as if the SDP relaxation is not exact. However, the relaxation in fact has a hidden rank-1 solution. To recover that solution, one can write $X^*$ as the sum of two rank-1 matrices, i.e., $X^* = (u_1)(u_1)^H + (u_2)(u_2)^H$ for two real vectors $u_1$ and $u_2$. It is straightforward to verify that the complex-valued rank-1 matrix $(u_1 + u_2 \mathrm{i})(u_1 + u_2 \mathrm{i})^H$ is another solution of the SDP relaxation. Thus, $x^* = u_1 + u_2 \mathrm{i}$ is an optimal solution of Optimization (6.45).
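The recovery step can be reproduced from the printed $M$ and $X^*$. Since the displayed entries are rounded to four decimals, $X^*$ is only numerically rank 2, so the tolerances below are deliberately loose:

```python
import numpy as np

M = np.array([
    [ 0.0,    -0.0961,  0.0,    -0.1245],
    [-0.0961,  0.0,    -0.1370,  0.0   ],
    [ 0.0,    -0.1370,  0.0,     0.7650],
    [-0.1245,  0.0,     0.7650,  0.0   ],
])
X = np.array([
    [ 1.0000,  0.1767, -0.5516,  0.6505],
    [ 0.1767,  1.0000,  0.7235, -0.6327],
    [-0.5516,  0.7235,  1.0000, -0.9923],
    [ 0.6505, -0.6327, -0.9923,  1.0000],
])

# X is (numerically) rank 2: keep its two dominant eigenpairs,
# scaled so that X = u1 u1^T + u2 u2^T up to rounding
vals, vecs = np.linalg.eigh(X)   # eigenvalues in ascending order
u1 = np.sqrt(vals[-1]) * vecs[:, -1]
u2 = np.sqrt(vals[-2]) * vecs[:, -2]

# hidden rank-1 solution: x = u1 + i*u2
x = u1 + 1j * u2

# unit-modulus constraints hold up to rounding in the printed data
assert np.all(np.abs(np.abs(x) - 1.0) < 1e-2)
# and x attains the same objective value as the rank-2 solution X
assert abs((x.conj() @ M @ x).real - np.trace(M @ X)) < 1e-2
```

The reason this works is that $(u_1 + \mathrm{i} u_2)(u_1 + \mathrm{i} u_2)^H$ has the same diagonal and, for symmetric real $M$, the same objective value as $u_1 u_1^T + u_2 u_2^T$.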
Example 4: Consider the optimization
$$\min_{x\in\mathbb{C}^n} \ x^H M_0 x \quad \text{s.t.} \quad x^H M_j x \leq 0, \quad j = 1, 2, \ldots, m$$
where $M_0, \ldots, M_m$ are symmetric real matrices, while $x$ is an unknown complex vector.
Similar to what was done in Example 1, a generalized weighted graph $G$ can be constructed for this optimization. Regardless of the edge weights, as long as the graph $G$ is acyclic, the SDP, reduced SDP and SOCP relaxations are all tight (see Theorem 6). As a result, this class of optimization problems is polynomial-time solvable.
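As an illustration, the graph can be read off from the off-diagonal sparsity patterns of $M_0, \ldots, M_m$, with edge $(i,j)$ carrying the weight set $\{(M_k)_{ij} : (M_k)_{ij} \neq 0\}$, and acyclicity can be tested with a union-find pass. The matrices below are made up for the sketch, not taken from the text:

```python
import numpy as np

def build_graph(matrices):
    """Collect edges and their weight sets from the off-diagonal
    sparsity patterns of the given symmetric matrices."""
    n = matrices[0].shape[0]
    edges = {}
    for M in matrices:
        for i in range(n):
            for j in range(i + 1, n):
                if M[i, j] != 0:
                    edges.setdefault((i, j), set()).add(M[i, j])
    return edges

def is_acyclic(n, edges):
    """Union-find acyclicity test on the undirected edge set."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False  # this edge closes a cycle
        parent[ri] = rj
    return True

# hypothetical data whose shared sparsity pattern is the path 0-1-2-3
M0 = np.array([[1., 2., 0., 0.],
               [2., 1., 3., 0.],
               [0., 3., 1., 4.],
               [0., 0., 4., 1.]])
M1 = np.array([[ 0., -1., 0., 0.],
               [-1.,  0., 0., 0.],
               [ 0.,  0., 0., 0.],
               [ 0.,  0., 0., 0.]])

edges = build_graph([M0, M1])
print(sorted(edges))          # [(0, 1), (1, 2), (2, 3)]
print(is_acyclic(4, edges))   # True
```

For this acyclic pattern the relaxations are tight regardless of the weight sets, e.g. here edge $(0,1)$ carries the mixed-sign set $\{2, -1\}$.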