
SYSTEM LEVEL SYNTHESIS FOR LARGE-SCALE SYSTEMS

7.3 Convex Localized Separable System Level Synthesis Problems

7.3.1 Partially Separable Problems

We begin by simplifying the notation of (7.6). We use

$$\Phi = \begin{bmatrix} R & N \\ M & L \end{bmatrix}$$

to represent the system response we want to optimize for. Denote by $Z_{AB}$ the transfer matrix $\begin{bmatrix} zI - A & -B_2 \end{bmatrix}$ and by $Z_{AC}$ the transfer matrix $\begin{bmatrix} zI - A^\top & -C_2^\top \end{bmatrix}^\top$. Let $J_B$ be the matrix on the right-hand side of (7.3a) and $J_C$ the matrix on the right-hand side of (7.3b). For ease of presentation, we also include the subspace constraint (7.3c) in the convex set component $\mathcal{X}$. The localized SLS problem (7.6) can be written as

$$\begin{aligned}
\underset{\Phi}{\text{minimize}} \quad & g(\Phi) && (7.17a) \\
\text{subject to} \quad & Z_{AB}\Phi = J_B && (7.17b) \\
& \Phi Z_{AC} = J_C && (7.17c) \\
& \Phi \in \mathcal{S} && (7.17d)
\end{aligned}$$

with $\mathcal{S} = \mathcal{L} \cap \mathcal{F}_T \cap \mathcal{X}$. We assume throughout the rest of the section that (7.17) is a convex problem. The goal of this section is to identify the technical conditions on $g(\cdot)$ and $\mathcal{S}$ in (7.17) under which the convex localized SLS problem (7.17) can be solved in a localized and scalable way.

The primary difference between the output feedback SLS problem (7.17) and its state feedback simplification (7.8) is the additional constraint (7.17c). The column-wise separation technique does not apply directly to the output feedback problem (7.17) because the constraint (7.17c) does not admit a column-wise separation.

However, similar to the observation made for the LLQG problem (cf. Chapter 6), we note that the optimization variable $\Phi$ in constraint (7.17c) can be solved for one row at a time. Our strategy is to use distributed optimization techniques such as ADMM to decouple (7.17) into a row-wise separable component and a column-wise separable component. If the SLO (7.17a) and the SLC (7.17d) can be split into a row-wise separable component and a column-wise separable component, then we can solve the convex localized SLS problem (7.17) in a localized yet iterative way, using an algorithm similar to the one for LLQG.

From the definition of the system response (7.2), the number of rows and columns of $\Phi$ in (7.17) are given by $(n_x + n_u)$ and $(n_x + n_y)$, respectively. Let $\{r_1, \dots, r_q\}$ be a partition of the set $\{1, \dots, n_x + n_u\}$, and $\{c_1, \dots, c_p\}$ be a partition of the set $\{1, \dots, n_x + n_y\}$. We define a partially separable SLO as follows.
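For concreteness, the index bookkeeping above can be sketched in a few lines of NumPy; the dimensions and the particular partitions below are hypothetical, chosen only for illustration:

```python
import numpy as np

nx, nu, ny = 4, 2, 3                  # hypothetical state/input/output dimensions
Phi = np.zeros((nx + nu, nx + ny))    # system response: (nx+nu) rows, (nx+ny) columns

rows = [[0, 1, 2], [3, 4, 5]]         # a row-wise partition {r1, r2} of the row indices
cols = [[0, 1], [2, 3, 4], [5, 6]]    # a column-wise partition {c1, c2, c3} of the column indices

# A valid partition is disjoint and covers the whole index set.
assert sorted(i for r in rows for i in r) == list(range(nx + nu))
assert sorted(j for c in cols for j in c) == list(range(nx + ny))

# Phi(r_j, :) and Phi(:, c_j) are the submatrices the separable pieces act on.
assert Phi[rows[0], :].shape == (3, nx + ny)
assert Phi[:, cols[1]].shape == (nx + nu, 3)
```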

Definition 22. The convex system level objective $g(\Phi)$ in (7.17a) is said to be partially separable with respect to the row-wise partition $\{r_1, \dots, r_q\}$ and the column-wise partition $\{c_1, \dots, c_p\}$ if $g(\Phi)$ can be written as the sum of two convex objectives $g^{(r)}(\Phi)$ and $g^{(c)}(\Phi)$, where $g^{(r)}(\Phi)$ is row-wise separable with respect to the row-wise partition $\{r_1, \dots, r_q\}$ and $g^{(c)}(\Phi)$ is column-wise separable with respect to the column-wise partition $\{c_1, \dots, c_p\}$. Specifically, we have

$$g(\Phi) = g^{(r)}(\Phi) + g^{(c)}(\Phi), \qquad g^{(r)}(\Phi) = \sum_{j=1}^{q} g_j^{(r)}(\Phi(r_j, :)), \qquad g^{(c)}(\Phi) = \sum_{j=1}^{p} g_j^{(c)}(\Phi(:, c_j)) \tag{7.18}$$

for some convex functionals $g_j^{(r)}(\cdot)$ for $j = 1, \dots, q$, and $g_j^{(c)}(\cdot)$ for $j = 1, \dots, p$.
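As a hedged numerical sketch of (7.18): the squared Frobenius norm is row-wise separable and an entrywise $\ell_1$ penalty is column-wise separable, so their sum is partially separable. The matrix and partitions below are arbitrary illustrations, not tied to any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 5))
rows = [[0, 1], [2, 3, 4, 5]]       # row-wise partition {r1, r2}
cols = [[0, 1, 2], [3, 4]]          # column-wise partition {c1, c2}

# g(Phi) = g_r(Phi) + g_c(Phi), each side computed block by block as in (7.18)
g_r = sum(np.sum(Phi[r, :] ** 2) for r in rows)      # sum_j g_j^(r)(Phi(r_j, :))
g_c = sum(np.sum(np.abs(Phi[:, c])) for c in cols)   # sum_j g_j^(c)(Phi(:, c_j))

# The blockwise sums agree with the global objective evaluated all at once.
g_full = np.sum(Phi ** 2) + np.sum(np.abs(Phi))
assert np.isclose(g_r + g_c, g_full)
```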

Note that the column/row-wise separable SLOs defined in Definitions 16 and 19 are special cases of partially separable SLOs. We define a partially separable SLC (7.17d) as follows.

Definition 23. The convex system level constraint $\mathcal{S}$ in (7.17d) is said to be partially separable with respect to the row-wise partition $\{r_1, \dots, r_q\}$ and the column-wise partition $\{c_1, \dots, c_p\}$ if $\mathcal{S}$ can be written as the intersection of two convex sets $\mathcal{S}^{(r)}$ and $\mathcal{S}^{(c)}$, where $\mathcal{S}^{(r)}$ is row-wise separable with respect to the row-wise partition $\{r_1, \dots, r_q\}$ and $\mathcal{S}^{(c)}$ is column-wise separable with respect to the column-wise partition $\{c_1, \dots, c_p\}$. Specifically, we have

$$\mathcal{S} = \mathcal{S}^{(r)} \cap \mathcal{S}^{(c)}, \qquad \Phi \in \mathcal{S}^{(r)} \iff \Phi(r_j, :) \in \mathcal{S}_j^{(r)} \;\text{ for } j = 1, \dots, q, \qquad \Phi \in \mathcal{S}^{(c)} \iff \Phi(:, c_j) \in \mathcal{S}_j^{(c)} \;\text{ for } j = 1, \dots, p \tag{7.19}$$

for some convex sets $\mathcal{S}_j^{(r)}$ for $j = 1, \dots, q$ and $\mathcal{S}_j^{(c)}$ for $j = 1, \dots, p$.

Similarly, the column/row-wise separable SLCs defined in Definitions 17 and 20 are special cases of partially separable SLCs.

Remark 26. Recall that the SLC set $\mathcal{S}$ in (7.17) is given by $\mathcal{S} = \mathcal{L} \cap \mathcal{F}_T \cap \mathcal{X}$. The localized constraint $\mathcal{L}$ and the FIR constraint $\mathcal{F}_T$ are partially separable with respect to arbitrary row-wise and column-wise partitions. Therefore, the partial separability of the SLC $\mathcal{S}$ is determined by the partial separability of the set $\mathcal{X}$. If $\mathcal{X}$ is partially separable, we can express the original SLC $\mathcal{S}$ as an intersection of the sets $\mathcal{S}^{(r)} = \mathcal{L} \cap \mathcal{F}_T \cap \mathcal{X}^{(r)}$ and $\mathcal{S}^{(c)} = \mathcal{L} \cap \mathcal{F}_T \cap \mathcal{X}^{(c)}$, where $\mathcal{X}^{(r)}$ is a row-wise separable component of $\mathcal{X}$ and $\mathcal{X}^{(c)}$ a column-wise separable component of $\mathcal{X}$. Note that the localized constraint and the FIR constraint are included in both $\mathcal{S}^{(r)}$ and $\mathcal{S}^{(c)}$. As will be shown later, this is the key point that allows the subroutines of the ADMM algorithm to be solved in a scalable way.
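The point of Remark 26 can be checked numerically: a sparsity-support (locality) constraint acts entry by entry, so Euclidean projection onto it can be carried out one row at a time or one column at a time with identical results. The support pattern below is a hypothetical stand-in for $\mathcal{L}$:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((4, 4))
support = rng.random((4, 4)) < 0.5          # hypothetical locality pattern for L

# Projection onto {Phi : supp(Phi) within support} just zeroes the masked entries.
proj_global = Phi * support
proj_by_row = np.vstack([Phi[i] * support[i] for i in range(4)])
proj_by_col = np.column_stack([Phi[:, j] * support[:, j] for j in range(4)])

# Row-wise and column-wise projections coincide with the global one,
# which is why L can be included in both S^(r) and S^(c).
assert np.allclose(proj_global, proj_by_row)
assert np.allclose(proj_global, proj_by_col)
```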

We now formally define partially separable SLS problems as follows.

Definition 24. The output feedback system level synthesis problem (7.17) is said to be a partially separable problem if the SLO (7.17a) and the SLC (7.17d) are both partially separable with respect to a row-wise partition $\{r_1, \dots, r_q\}$ and a column-wise partition $\{c_1, \dots, c_p\}$.

The definition of the class of CLS-SLS problems is then straightforward.

Definition 25. An SLS problem (7.5) is said to be a CLS-SLS problem if it is convex, localized, and partially separable.

For a CLS-SLS problem (7.17), let $\Psi$ be a duplicate of the optimization variable $\Phi$. We define extended-real-valued functionals $h^{(r)}(\Phi)$ and $h^{(c)}(\Psi)$ by

$$h^{(r)}(\Phi) = \begin{cases} g^{(r)}(\Phi) & \text{if (7.17c) holds and } \Phi \in \mathcal{S}^{(r)} \\ \infty & \text{otherwise,} \end{cases} \qquad h^{(c)}(\Psi) = \begin{cases} g^{(c)}(\Psi) & \text{if (7.17b) holds and } \Psi \in \mathcal{S}^{(c)} \\ \infty & \text{otherwise.} \end{cases} \tag{7.20}$$

The CLS-SLS problem (7.17) can then be reformulated as

$$\begin{aligned} \underset{\{\Phi, \Psi\}}{\text{minimize}} \quad & h^{(r)}(\Phi) + h^{(c)}(\Psi) \\ \text{subject to} \quad & \Phi = \Psi. \end{aligned} \tag{7.21}$$

Problem (7.21) can be solved via the standard ADMM approach [5]. Specifically, the ADMM algorithm for (7.21) is given by

$$\begin{aligned}
\Phi^{k+1} &= \underset{\Phi}{\operatorname{argmin}} \; h^{(r)}(\Phi) + \frac{\rho}{2}\,\big\|\Phi - \Psi^k + \Lambda^k\big\|_{\mathcal{H}_2}^2 && (7.22a) \\
\Psi^{k+1} &= \underset{\Psi}{\operatorname{argmin}} \; h^{(c)}(\Psi) + \frac{\rho}{2}\,\big\|\Psi - \Phi^{k+1} - \Lambda^k\big\|_{\mathcal{H}_2}^2 && (7.22b) \\
\Lambda^{k+1} &= \Lambda^k + \Phi^{k+1} - \Psi^{k+1}, && (7.22c)
\end{aligned}$$

where the square of the $\mathcal{H}_2$ norm is computed by (7.10). As the FIR constraint $\mathcal{F}_T$ is imposed on $\Phi$ and $\Psi$ through the SLC sets $\mathcal{S}^{(r)}$ and $\mathcal{S}^{(c)}$, subroutines (7.22a) and (7.22b) are both finite dimensional optimization problems.
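A minimal numerical sketch of the iterations (7.22a)–(7.22c) on a static toy problem, assuming stand-in data rather than the actual SLS operators: the left-multiplied constraint $A\Phi = B$ plays the role of (7.17b) (column-coupling), the right-multiplied constraint $\Phi c = d$ plays the role of (7.17c) (row-coupling), and we take $g^{(r)}(\Phi) = \tfrac{1}{2}\|\Phi\|_F^2$ with $g^{(c)} = 0$. All matrices here are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, rho = 6, 5, 1.0
Phi0 = rng.standard_normal((m, n))   # ground-truth point used to make the data consistent
A = rng.standard_normal((2, m))
c = rng.standard_normal((n, 1))
B = A @ Phi0                         # so that A Phi = B is feasible (column-coupling)
d = Phi0 @ c                         # so that Phi c = d is feasible (row-coupling)

def phi_update(V):
    """(7.22a) analogue: for each row i, minimize 0.5||phi||^2 + rho/2 ||phi - v_i||^2
    subject to c^T phi = d_i, via the KKT system of the equality-constrained QP."""
    Phi = np.empty((m, n))
    K = np.block([[(1 + rho) * np.eye(n), c], [c.T, np.zeros((1, 1))]])
    for i in range(m):
        Phi[i] = np.linalg.solve(K, np.concatenate([rho * V[i], d[i]]))[:n]
    return Phi

def psi_update(W):
    """(7.22b) analogue: project each column of W onto {psi : A psi = b_j}
    (written matrix-wise, but it acts on every column independently)."""
    return W - A.T @ np.linalg.solve(A @ A.T, A @ W - B)

Phi = np.zeros((m, n)); Psi = np.zeros((m, n)); Lam = np.zeros((m, n))
for _ in range(1000):
    Phi = phi_update(Psi - Lam)      # row-wise separable subproblem
    Psi = psi_update(Phi + Lam)      # column-wise separable subproblem
    Lam = Lam + Phi - Psi            # element-wise dual update (7.22c)
```

Each iterate satisfies its own coupling constraint exactly ($\Phi c = d$ and $A\Psi = B$), while the consensus gap $\Phi - \Psi$ shrinks across iterations, mirroring how (7.22a) and (7.22b) each handle one of the two constraints in (7.17).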

Subroutines (7.22a) and (7.22b) can be further decomposed row-wise and column-wise, respectively. For instance, the optimization problem corresponding to subroutine (7.22b) is given by

$$\begin{aligned}
\underset{\Psi}{\text{minimize}} \quad & g^{(c)}(\Psi) + \frac{\rho}{2}\,\big\|\Psi - \Phi^{k+1} - \Lambda^k\big\|_{\mathcal{H}_2}^2 && (7.23a) \\
\text{subject to} \quad & Z_{AB}\Psi = J_B && (7.23b) \\
& \Psi \in \mathcal{S}^{(c)}. && (7.23c)
\end{aligned}$$

This problem has the same form as (7.8): the right-hand side of (7.23b) does not affect the column-wise separability of the problem. The $\mathcal{H}_2$ norm regularizer in (7.23a) is column-wise separable with respect to an arbitrary column-wise partition. As the objective $g^{(c)}(\cdot)$ and the constraint $\mathcal{S}^{(c)}$ are column-wise separable with respect to a given column-wise partition, we can use the column-wise separation technique described in the previous section to decompose problem (7.23) into parallel subproblems. Recall that we have $\mathcal{S}^{(c)} = \mathcal{L} \cap \mathcal{F}_T \cap \mathcal{X}^{(c)}$. We can then exploit the localized constraint $\mathcal{L}$ and use the technique described in Section 7.2.1 and Appendix 7.A to further reduce the dimension of each subproblem from global scale to local scale. Overall, subroutine (7.22b) can be solved in a localized and scalable manner. Similarly, subroutine (7.22a) can be solved via row-wise separation. Equation (7.22c) can be computed element-wise since it is a matrix addition. Therefore, (7.22a)–(7.22c) constitute a scalable algorithm for solving the CLS-SLS problem (7.17), applying row-wise and column-wise separation alternately and iteratively.
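The column-wise separability claimed for (7.23b) can be verified on a toy instance. Here $Z$, $J$, and the offset $W$ are random stand-ins for $Z_{AB}$, $J_B$, and $\Phi^{k+1} + \Lambda^k$, and the FIR structure is ignored so a single static matrix suffices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.standard_normal((4, 6))     # stand-in for Z_AB (full row rank almost surely)
J = rng.standard_normal((4, 3))     # stand-in for J_B
W = rng.standard_normal((6, 3))     # stand-in for Phi^{k+1} + Lambda^k

# min ||Psi - W||_F^2  subject to  Z Psi = J : projection onto an affine set.
G = Z @ Z.T
Psi_joint = W - Z.T @ np.linalg.solve(G, Z @ W - J)        # all columns at once
Psi_cols = np.column_stack(                                 # one column at a time
    [W[:, j] - Z.T @ np.linalg.solve(G, Z @ W[:, j] - J[:, j]) for j in range(3)])

# The constraint acts on each column independently, so the two computations agree.
assert np.allclose(Psi_joint, Psi_cols)
assert np.allclose(Z @ Psi_joint, J)
```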

We prove that the ADMM algorithm (7.22a)–(7.22c) converges to an optimal solution of (7.17) (or, equivalently, (7.21)) under the following assumptions. The details of the proof, as well as the stopping criteria of the algorithm, can be found in Appendix 7.B.

Assumption 4. Problem (7.17) has a feasible solution in the relative interior of the set $\mathcal{S}$.

Assumption 5. The functionals $g^{(r)}(\cdot)$ and $g^{(c)}(\cdot)$ are closed, proper, and convex.

Assumption 6. The sets $\mathcal{S}^{(r)}$ and $\mathcal{S}^{(c)}$ are closed and convex.