
SYSTEM LEVEL SYNTHESIS FOR LARGE-SCALE SYSTEMS

7.3 Convex Localized Separable System Level Synthesis Problems

7.3.2 Examples of CLS-SLS Problems

We prove that the ADMM algorithm (7.22a)–(7.22c) converges to an optimal solution of (7.17) (or equivalently, (7.21)) under the following assumptions. The details of the proof, as well as the stopping criteria of the algorithm, can be found in Appendix 7.B.

Assumption 4. Problem (7.17) has a feasible solution in the relative interior of the set $\mathcal{S}$.

Assumption 5. The functionals $g^{(r)}(\cdot)$ and $g^{(c)}(\cdot)$ are closed, proper, and convex.

Assumption 6. The sets $\mathcal{S}^{(r)}$ and $\mathcal{S}^{(c)}$ are closed and convex.

matrix $\begin{bmatrix} M & L \end{bmatrix}$ row-wise sparse, i.e., have many zero rows. Recall from Theorem 2 that the controller achieving the desired system response can be implemented by (7.4). If the $i$th row of the transfer matrix $\begin{bmatrix} M & L \end{bmatrix}$ is a zero row, then the $i$th component of the control action $u_i$ is always equal to zero. This means that we can remove the control action $u_i$ without changing the closed-loop response. It is clear that the actuator norm defined in (7.24) is row-wise separable with respect to an arbitrary row-wise partition. This still holds true when the actuator norm is defined by the $\ell_1/\ell_\infty$ norm. Similarly, consider the weighted sensor norm given by

$$\left\| \begin{bmatrix} N \\ L \end{bmatrix} \lambda \right\|_{\mathcal{Y}} = \sum_{i=1}^{n_y} \lambda_i \left\| \begin{bmatrix} N \\ L \end{bmatrix} e_i \right\|_{\mathcal{H}_2}, \qquad (7.25)$$

where $\lambda$ is a diagonal matrix with $\lambda_i$ being its $i$th diagonal entry. The sensor norm in (7.25) is a regularizer to make the transfer matrix $\begin{bmatrix} N^\top & L^\top \end{bmatrix}^\top$ column-wise sparse.

Using the controller implementation (7.4), the sensor norm can be treated as a regularizer on the measurement $y$. For instance, if the $i$th column of the transfer matrix $\begin{bmatrix} N^\top & L^\top \end{bmatrix}^\top$ is identically zero, then we can ignore the measurement $y_i$ without changing the closed-loop response. The sensor norm defined in (7.25) is column-wise separable with respect to any column-wise partition.
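As a concrete illustration, the weighted sensor norm (7.25) and the column-sparsity check described above can be sketched in a few lines of Python. The FIR array layout (`NL[t]` as the $t$th impulse-response coefficient of the stacked transfer matrix) and the function name are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def sensor_norm_and_dead_sensors(NL, lam, tol=1e-9):
    """Weighted sensor norm (7.25) plus the sparsity check (a sketch).

    NL  : array of shape (T+1, p, n_y) -- assumed FIR coefficients of the
          stacked transfer matrix [N; L], with NL[t] the t-th coefficient.
    lam : length-n_y vector of diagonal weights lambda_i.

    The H2 norm of an FIR transfer matrix is the l2 norm of its stacked
    coefficients, so each column norm is a Frobenius norm over (t, row).
    A (numerically) zero column means measurement y_i can be ignored
    without changing the closed-loop response.
    """
    col_h2 = np.sqrt((NL ** 2).sum(axis=(0, 1)))   # ||[N; L] e_i||_H2
    norm = float(np.dot(lam, col_h2))              # value of (7.25)
    dead = np.where(col_h2 < tol)[0]               # ignorable measurements
    return norm, dead
```

Computing the norm column by column in this way is exactly what makes (7.25) column-wise separable: each term depends on a single column of the stacked transfer matrix.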

Example 15. From Definition 22, it is straightforward to see that the class of partially separable SLOs with the same partition is closed under summation.

Therefore, we can combine all the partially separable SLOs described above, and the resulting SLO is still partially separable. For instance, consider the SLO given by

$$g(R, M, N, L) = \left\| \begin{bmatrix} C_1 & D_{12} \end{bmatrix} \begin{bmatrix} R & N \\ M & L \end{bmatrix} \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} \right\|_{\mathcal{H}_2}^2 + \left\| \mu \begin{bmatrix} M & L \end{bmatrix} \right\|_{\mathcal{U}} + \left\| \begin{bmatrix} N \\ L \end{bmatrix} \lambda \right\|_{\mathcal{Y}}, \qquad (7.26)$$

where $\mu$ and $\lambda$ are the relative penalties between the $\mathcal{H}_2$ performance and the actuator and sensor regularizers, respectively. If there exists a permutation matrix $\Pi$ such that the matrix $\begin{bmatrix} C_1 & D_{12} \end{bmatrix} \Pi$ is block diagonal, then the SLO (7.26) is partially separable. Specifically, the $\mathcal{H}_2$ norm and the actuator regularizer belong to the row-wise separable component, and the sensor regularizer belongs to the column-wise separable component.

Partially Separable Constraints

We consider some examples of partially separable SLCs besides the localized constraint $\mathcal{L}$ and the FIR constraint $\mathcal{F}_T$ described in Remark 26.

Example 16. Consider the $\mathcal{L}_1$ norm [11] of an FIR transfer matrix $G \in \mathcal{F}_T$, which is given by

$$\|G\|_{\mathcal{L}_1} = \max_i \sum_j \sum_{t=0}^{T} |g_{ij}[t]|. \qquad (7.27)$$
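For a finite impulse response, (7.27) reduces to a maximum of absolute row sums over all coefficients, which is easy to sketch numerically. The array layout (`G[t]` as the $t$th impulse-response coefficient matrix) is an assumption for illustration.

```python
import numpy as np

def fir_l1_norm(G):
    """L1 norm (7.27) of an FIR transfer matrix (a sketch).

    G : array of shape (T+1, p, q) with G[t] the t-th impulse-response
        coefficient matrix (assumed layout).

    ||G||_L1 = max_i sum_j sum_t |g_ij[t]| -- the largest absolute row
    sum, i.e., the l_infty-to-l_infty induced norm of the FIR map.
    """
    # Sum absolute coefficients over time (axis 0) and input index (axis 2),
    # then take the maximum over output rows.
    return float(np.abs(G).sum(axis=(0, 2)).max())
```

Because the maximum distributes over any grouping of the rows, this computation also makes the row-wise separability of the constraint (7.28) apparent.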

The $\mathcal{L}_1$ norm is the induced norm from an $\ell_\infty$ input signal to an $\ell_\infty$ output signal. Suppose that the $\ell_\infty$ norm of the disturbance $w$ in (7.2) is bounded, and we want to bound the $\ell_\infty$ norm of the state vector $x$ and the control action $u$. We can impose the constraint

$$\left\| \begin{bmatrix} R & N \\ M & L \end{bmatrix} \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} \right\|_{\mathcal{L}_1} \le \gamma \qquad (7.28)$$

in the optimization problem (7.17) for some $\gamma$. The solutions of (7.28) form a convex set because it is a sublevel set of a convex function. Therefore, (7.28) is a convex SLC. From the definition (7.27), the SLC (7.28) is row-wise separable with respect to any row-wise partition.

Example 17. From Definition 23, the class of partially separable SLCs with the same partition is closed under intersection. Therefore, we can combine all the partially separable SLCs described above, and the resulting SLC is still partially separable. For instance, the combination of the localized constraint $\mathcal{L}$, the FIR constraint $\mathcal{F}_T$, and the $\mathcal{L}_1$ constraint in (7.28) is partially separable. This property is extremely useful because it provides a unified framework to deal with all kinds of partially separable constraints at once.

Partially Separable Problems

With some examples of partially separable SLOs and SLCs, we now consider two CLS-SLS problems: (i) the localized $\mathcal{H}_2$ optimal control problem with sensor and actuator regularization, and (ii) the localized mixed $\mathcal{H}_2/\mathcal{L}_1$ optimal control problem. These two problems are used in Section 7.4 as case study examples.

Example 18. The localized $\mathcal{H}_2$ optimal control problem with sensor and actuator regularization is formulated as

$$\begin{array}{rll} \underset{\{R, M, N, L\}}{\text{minimize}} & (7.26) & \qquad (7.29\text{a}) \\ \text{subject to} & (7.3\text{a})\text{--}(7.3\text{c}) & \qquad (7.29\text{b}) \\ & \begin{bmatrix} R & N \\ M & L \end{bmatrix} \in \mathcal{C} \cap \mathcal{L} \cap \mathcal{F}_T, & \qquad (7.29\text{c}) \end{array}$$

where $\mathcal{C}$ is the communication delay SLC. If there exists a permutation matrix $\Pi$ such that the matrix $\begin{bmatrix} C_1 & D_{12} \end{bmatrix} \Pi$ or the matrix $\begin{bmatrix} B_1^\top & D_{21}^\top \end{bmatrix} \Pi$ is block diagonal, then (7.29) is partially separable.

Remark 27. When the penalties on the sensor and actuator norms are zero, problem (7.29) reduces to an LLQG problem (cf. Chapter 6). If the system is state feedback and we only have the actuator regularizer, then (7.29) reduces to the LLQR problem with actuator regularization (cf. Section 5.3).

Problem (7.29) can be used to co-design a localized optimal controller and its sensing and actuation interface by choosing the relative weights among the $\mathcal{H}_2$ performance, actuator norm, and sensor norm. We emphasize the fact that (7.29) is a CLS-SLS problem, and thus can be solved by the ADMM algorithm (7.22) with $O(1)$ parallel computational complexity. In other words, we can co-design a localized optimal controller and its sensing and actuation interface in a localized and scalable way. The LLQR with actuator regularization problem described in Section 5.3 is a special case of (7.29), and thus can be solved in a localized way using the ADMM algorithm (7.22) as well.

The weights $\mu_i$ and $\lambda_i$ in the regularizers (7.24) and (7.25) can be properly chosen to further enhance row/column-wise sparsity. For instance, we can use the reweighted $\ell_1$ algorithm proposed in [6] to iteratively set the weights and solve (7.29) multiple times. Let $\mu_i^{(0)} = \mu_0$ for $i = 1, \ldots, n_u$ and $\lambda_i^{(0)} = \lambda_0$ for $i = 1, \ldots, n_y$. Let $(R^{(k)}, M^{(k)}, N^{(k)}, L^{(k)})$ be the optimal solution of (7.29) when the weights are given by $\{\mu_i^{(k)}\}_{i=1}^{n_u}$ and $\{\lambda_i^{(k)}\}_{i=1}^{n_y}$. We update the weights at iteration $(k+1)$ by

$$\mu_i^{(k+1)} = \left( \left\| e_i^\top \begin{bmatrix} M^{(k)} & L^{(k)} \end{bmatrix} \right\|_{\mathcal{H}_2} + \epsilon \right)^{-1}, \qquad \lambda_i^{(k+1)} = \left( \left\| \begin{bmatrix} N^{(k)} \\ L^{(k)} \end{bmatrix} e_i \right\|_{\mathcal{H}_2} + \epsilon \right)^{-1} \qquad (7.30)$$

for some small $\epsilon > 0$. It is shown in [6] that this reweighting scheme usually results in a sparser solution.
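The reweighting step (7.30) can be sketched as follows. The FIR array layouts and the function name are assumptions for illustration; in practice each update would be interleaved with a solve of (7.29).

```python
import numpy as np

def reweight(ML, NL, eps=1e-3):
    """One reweighted-l1 update (7.30) (a sketch; array layouts assumed).

    ML : (T+1, n_u, q) FIR coefficients of [M^(k)  L^(k)]
    NL : (T+1, p, n_y) FIR coefficients of the stacked matrix [N^(k); L^(k)]

    Rows/columns with small H2 norm receive large weights, pushing them
    toward exact zero at the next solve of (7.29):
        mu_i^(k+1)     = 1 / (||e_i^T [M L]||_H2 + eps)
        lambda_i^(k+1) = 1 / (||[N; L] e_i||_H2 + eps)
    """
    row_h2 = np.sqrt((ML ** 2).sum(axis=(0, 2)))   # H2 norm of each row
    col_h2 = np.sqrt((NL ** 2).sum(axis=(0, 1)))   # H2 norm of each column
    return 1.0 / (row_h2 + eps), 1.0 / (col_h2 + eps)
```

The small $\epsilon$ plays its usual role in reweighted $\ell_1$ schemes: it keeps the weights finite when a row or column is already exactly zero.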

Next, we consider the localized mixed $\mathcal{H}_2/\mathcal{L}_1$ optimal control problem, given as follows:

$$\begin{array}{rll} \underset{\{R, M, N, L\}}{\text{minimize}} & \left\| \begin{bmatrix} R & N \\ M & L \end{bmatrix} \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} \right\|_{\mathcal{H}_2}^2 & \qquad (7.31\text{a}) \\ \text{subject to} & (7.3\text{a})\text{--}(7.3\text{c}), (7.28), (7.29\text{c}). & \qquad (7.31\text{b}) \end{array}$$

The localized mixed $\mathcal{H}_2/\mathcal{L}_1$ optimal control problem can be used to design the tradeoff between average-case performance and worst-case performance. Specifically, the $\mathcal{H}_2$ objective in (7.31a) is the expected value of the energy of the state and control for AWGN disturbances, which measures the average-case performance of the closed-loop response. The $\mathcal{L}_1$ constraint in (7.28) is the $\ell_\infty$-to-$\ell_\infty$ induced norm, which measures the worst-case state and control deviation for $\ell_\infty$-bounded disturbances.
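To make the average-case/worst-case distinction concrete, the two measures can be compared on a toy scalar FIR response; the numbers below are purely illustrative and assume unit-variance AWGN for the average case and a unit-peak disturbance for the worst case.

```python
import numpy as np

# Toy scalar FIR impulse response g[t] (illustrative values only).
g = np.array([1.0, 0.5, 0.25])

# Average case: the squared H2 norm equals the steady-state output
# variance when the disturbance is unit-variance AWGN.
h2 = np.sqrt((g ** 2).sum())

# Worst case: the L1 norm is the peak output over all |w[t]| <= 1,
# attained by the sign-matched disturbance.
l1 = np.abs(g).sum()

print(h2, l1)   # l1 >= h2 always: the worst-case measure is more conservative
```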