
Chapter IV: Distributed Structured Robust Control

4.3 Separability and Computation

We present definitions of separability and compatibility, extending concepts from [3]. We describe how separable objectives and constraints in (4.1) translate to distributed computation for both optimal and robust control.

Separability

Definition 4.1. Define transfer matrices Φ = argmin 𝑓(Φ) and Ψ, where (Ψ)𝑖,𝑗 = argmin 𝑓𝑖𝑗((Ψ)𝑖,𝑗) and 𝑓, 𝑓𝑖𝑗 are some functionals. 𝑓 is an element-wise separable functional if there exist sub-functionals 𝑓𝑖𝑗 such that 𝑓(Φ) = 𝑓(Ψ).

Relevant element-wise separable functionals include ∥·∥H2 and ∥·∥1→∞ (i.e. maximum absolute element). The latter is the norm associated with 𝜈 robustness [13].
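As a concrete check, both functionals decompose into per-element sub-functionals. A minimal numpy sketch (the matrix values are our own illustration, not from the chapter):

```python
import numpy as np

# Element-wise separability of the two norms named above: minimizing each
# element independently also minimizes the full functional.
Phi = np.array([[1.0, -3.0], [2.0, 0.5]])

# ||.||_{1->inf}: maximum absolute element -- the max of per-element
# sub-functionals f_ij(Phi_ij) = |Phi_ij|.
one_to_inf = np.max(np.abs(Phi))
assert one_to_inf == max(abs(Phi[i, j]) for i in range(2) for j in range(2))

# Squared H2 (Frobenius) norm: a sum of per-element sub-functionals
# f_ij(Phi_ij) = Phi_ij**2.
h2_sq = np.linalg.norm(Phi, 'fro') ** 2
assert np.isclose(h2_sq, sum(Phi[i, j] ** 2 for i in range(2) for j in range(2)))
```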

Definition 4.2. Let P represent some constraint set. P is an element-wise separable constraint if there exist sub-constraint sets P𝑖𝑗 such that Φ ∈ P ⇔ (Φ)𝑖,𝑗 ∈ P𝑖𝑗. Sparsity constraints on Φ are element-wise separable.

Definition 4.3. Define transfer matrices Φ = argmin 𝑓(Φ) and Ψ, where (Ψ)𝑖,: = argmin 𝑓𝑖((Ψ)𝑖,:) and 𝑓, 𝑓𝑖 are some functionals. 𝑓 is a row-separable functional if there exist sub-functionals 𝑓𝑖 such that 𝑓(Φ) = 𝑓(Ψ).

Definition 4.4. Let P represent some constraint set. P is a row separable constraint if there exist sub-constraint sets P𝑖 such that Φ ∈ P ⇔ (Φ)𝑖,: ∈ P𝑖.

Column separable functionals and constraints are defined analogously. Clearly, any element-wise separable functional or constraint is also both row and column separable. A relevant row separable functional is ∥·∥∞→∞ (i.e. maximum absolute row sum), which is the norm associated with L1 robustness. A relevant column separable functional is ∥·∥1→1 (i.e. maximum absolute column sum), which is the norm associated with L∞ robustness. We also see that constraints of the form 𝐺Φ = 𝐻 are column separable, while constraints of the form Φ𝐺 = 𝐻 are row separable. Thus, the state feedback achievability constraint is column separable, and the full control achievability constraint is row separable. For the output feedback achievability constraints, (4.7a) is column separable, (4.7b) is row separable, and (4.7c) is element-wise separable.
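The column separability of constraints of the form 𝐺Φ = 𝐻 can be checked numerically: the constraint couples only entries within a single column of Φ, so each column can be recovered independently. A small numpy sketch (𝐺 and 𝐻 are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # well-conditioned, invertible
H = rng.standard_normal((3, 2))

# G @ Phi = H constrains each column of Phi separately, so solving
# column-by-column reproduces the full solution (column separability).
Phi_full = np.linalg.solve(G, H)
Phi_cols = np.column_stack(
    [np.linalg.solve(G, H[:, j]) for j in range(H.shape[1])]
)
assert np.allclose(Phi_full, Phi_cols)

# ||.||_{inf->inf} (max absolute row sum) is a max of per-row
# sub-functionals, hence row separable.
inf_norm = np.max(np.sum(np.abs(Phi_full), axis=1))
assert np.isclose(
    inf_norm, max(np.sum(np.abs(Phi_full[i, :])) for i in range(3))
)
```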

Definition 4.5. Optimization problem (4.1) is a fully row separable optimization when all objectives and constraints are row separable. It is a fully column separable optimization when all objectives and constraints are column separable.

Definition 4.6. Optimization problem (4.1) is a partially separable optimization when all objectives and constraints are either row or column separable, but the overall problem is not fully separable.

The separability of the SLS state feedback, full control, and output feedback problems is inherently limited by their achievability constraints, since these must always be enforced. The state feedback problem can be fully column separable, but not fully row separable; the opposite is true for the full control problem¹. The output feedback problem cannot be fully separable, since it includes a mix of column and row separable achievability constraints. We now describe the computational implications of separability.

Scalability and computation

A fully separable optimization is easily solved via distributed computation. For instance, if problem (4.1) is fully row separable, we can solve the subproblem

min_{(Φ)𝑖,:} 𝑓𝑖((Φ)𝑖,:) s.t. (Φ)𝑖,: ∈ S𝑎𝑖 ∩ P𝑖    (4.8)

at each subsystem 𝑖 in the system, in parallel.

We often enforce sparsity on Φ to constrain inter-subsystem communication to local neighborhoods of size 𝑑. This translates to (Φ)𝑖,𝑗 = 0 ∀𝑗 ∉ N𝑑(𝑖), where N𝑑(𝑖) is the set of subsystems that are in subsystem 𝑖's local neighborhood, and |N𝑑(𝑖)| depends on 𝑑. Then, the size of the decision variable in subproblem (4.8) scales with 𝑑. Each subsystem solves a subproblem in parallel; overall, computational complexity scales with 𝑑 instead of system size 𝑁. This is highly beneficial for large systems, where we can choose 𝑑 much smaller than 𝑁.
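The following sketch illustrates how the per-subsystem subproblem (4.8) shrinks under locality constraints. The row-separable objective ∥Φ − 𝑀∥²_F and the chain-graph neighborhood are our own illustrative assumptions, chosen so the row subproblems have closed-form solutions:

```python
import numpy as np

N, d = 6, 1  # system size and locality parameter; here d << N
rng = np.random.default_rng(1)
M = rng.standard_normal((N, N))

def neighborhood(i):
    # d-hop neighborhood on a chain graph: subsystems within distance d.
    return [j for j in range(N) if abs(i - j) <= d]

Phi = np.zeros((N, N))
for i in range(N):  # each iteration can run in parallel at subsystem i
    nbr = neighborhood(i)
    # The decision variable has only |N_d(i)| = O(d) entries, not N.
    # For this objective the minimizer on the support is just M restricted
    # to the neighborhood; entries outside the support stay zero.
    Phi[i, nbr] = M[i, nbr]

# Locality holds: entries outside each neighborhood are zero.
assert all(
    Phi[i, j] == 0 for i in range(N) for j in range(N) if abs(i - j) > d
)
```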

For partially separable problems, we apply the alternating direction method of multipliers (ADMM) [14]. We first rewrite (4.1) in terms of row separable and column separable objectives and constraints

min_Φ 𝑓(row)(Φ) + 𝑓(col)(Φ)
s.t. Φ ∈ S𝑎(row) ∩ S𝑎(col) ∩ P(row) ∩ P(col)    (4.9)

¹The only exception is if all system matrices are diagonal; in this case, all achievability constraints are element-wise separable.

Note that an element-wise separable objective can appear in both 𝑓(row) and 𝑓(col); similarly, an element-wise separable constraint P can appear in both P(row) and P(col). We now introduce duplicate variable Ψ and dual variable Λ. The ADMM algorithm is iterative; for each iteration 𝑘, we perform the following computations:

Ξ¦π‘˜+1 =argmin

Ξ¦

𝑓(row)(Ξ¦) + 𝛾

2βˆ₯Ξ¦βˆ’Ξ¨π‘˜ +Ξ›π‘˜βˆ₯𝐹

s.t. Φ∈ Sπ‘Ž(row) ∩ P(row)

(4.10a)

Ξ¨π‘˜+1=argmin

Ξ¨

𝑓(col)(Ξ¨) + 𝛾

2βˆ₯Ξ¦π‘˜+1βˆ’Ξ¨+Ξ›π‘˜βˆ₯𝐹

s.t. Ξ¨ ∈ Sπ‘Ž(col) ∩ P(col)

(4.10b) Ξ›π‘˜+1=Ξ›π‘˜+Ξ¦π‘˜+1βˆ’Ξ¨π‘˜+1 (4.10c) ADMM separates (4.1) into a row separable problem (4.10a) and a column separable problem (4.10b), and encourages consensus betweenΞ¦andΞ¨(i.e. Ξ¦ =Ξ¨) via the 𝛾-weighted objective and (4.10c). When both βˆ₯Ξ¦π‘˜+1βˆ’Ξ¨π‘˜+1βˆ₯𝐹 and βˆ₯Ξ¦π‘˜+1βˆ’Ξ¦π‘˜βˆ₯𝐹

are sufficiently small, the algorithm converges;Ξ¦π‘˜ is the optimal solution to (4.1).

Optimizations (4.10a) and (4.10b) are fully separable; as described above, they enjoy complexity that scales with local neighborhood size instead of global system size.

Additionally, due to sparsity constraints on Φ and Ψ, only local communication is required between successive iterations. Thus, partially separable problems also enjoy computational complexity that scales with local neighborhood size instead of global system size [3], [15]. However, partially separable problems require iterations, making them more computationally complex than fully separable problems.
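To make the iteration concrete, here is a toy instance of the splitting in (4.10): a quadratic row-step objective ∥Φ − 𝐴∥²_F paired with a column separable constraint (each column of Ψ sums to zero). The objective, constraint, and 𝛾 value are our own illustrative choices; in the real SLS problems, the two closed-form updates below are replaced by the localized row and column solves described earlier:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
gamma = 1.0
Phi = np.zeros_like(A)
Psi = np.zeros_like(A)
Lam = np.zeros_like(A)

for _ in range(100):
    # (4.10a): row step -- closed form for the quadratic objective
    # ||Phi - A||_F^2 plus the gamma-weighted consensus term.
    Phi = (2 * A + gamma * (Psi - Lam)) / (2 + gamma)
    # (4.10b): column step -- project Phi + Lam onto the zero-column-sum
    # set; each column is handled independently (column separable).
    V = Phi + Lam
    Psi = V - V.mean(axis=0)
    # (4.10c): dual update drives consensus Phi = Psi.
    Lam = Lam + Phi - Psi

# Consensus is reached; for this toy problem the limit is A with its
# column means removed (the projection of A onto the constraint set).
assert np.linalg.norm(Phi - Psi) < 1e-8
assert np.allclose(Phi, A - A.mean(axis=0), atol=1e-6)
```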

The separability of objectives and constraints in (4.1) can also affect convergence rate, as we will describe next.

Definition 4.7. Let ℜ, ℭ ⊆ Z+ indicate some sets of indices. Define 𝑓sub, a functional of (Φ)ℜ,ℭ, and 𝑓, a functional of Φ. Define transfer matrices Φ and Ψ, and let (Φ)𝑖,𝑗 = (Ψ)𝑖,𝑗 ∀(𝑖,𝑗) ∉ (ℜ,ℭ). 𝑓sub is a compatible sub-functional of functional 𝑓 if minimizing 𝑓sub is compatible with minimizing 𝑓, i.e. 𝑓sub((Φ)ℜ,ℭ) ≤ 𝑓sub((Ψ)ℜ,ℭ) ⇒ 𝑓(Φ) ≤ 𝑓(Ψ).

Definition 4.8. Let ℜ, ℭ ⊆ Z+ indicate some sets of indices. Constraint set Psub is a compatible sub-constraint of constraint set P if Φ ∈ P ⇒ (Φ)ℜ,ℭ ∈ Psub for some set of elements (Φ)ℜ,ℭ.

Compatibility complements separability. If functional 𝑓 is separable into sub- functionals 𝑓𝑖, then each 𝑓𝑖 is a compatible sub-functional of 𝑓; similar arguments apply for constraints.
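A numerical illustration of Definition 4.7: for 𝑓 = ∥·∥1→∞, restricting 𝑓 to any index block gives a compatible sub-functional, since the full maximum is the max of the block maximum and the (shared) off-block maximum. A small sketch (the random matrices and block indices are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
R, C = [0, 2], [1, 3]  # an arbitrary block of row/column indices

def f(X):
    # f = ||.||_{1->inf}: maximum absolute element of the full matrix.
    return np.max(np.abs(X))

def f_sub(X):
    # Candidate compatible sub-functional: f restricted to the (R, C) block.
    return np.max(np.abs(X[np.ix_(R, C)]))

for _ in range(100):
    Phi = rng.standard_normal((4, 4))
    Psi = Phi.copy()
    # Phi and Psi agree outside the (R, C) block, as the definition requires.
    Psi[np.ix_(R, C)] = rng.standard_normal((2, 2))
    # Compatibility: ordering of f_sub implies the same ordering of f.
    if f_sub(Phi) <= f_sub(Psi):
        assert f(Phi) <= f(Psi)
    if f_sub(Psi) <= f_sub(Phi):
        assert f(Psi) <= f(Phi)
```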

Definition 4.9. Let ℜ, ℭ ⊆ Z+ indicate some sets of indices. The partially separable problem (4.9) can be decomposed into row and column sub-functionals and sub-constraints. We say (4.9) is a balanced ADMM problem if, for any set of elements (Φ)ℜ,ℭ which appear together in a sub-constraint, there exists a matching sub-functional 𝑓sub that is compatible with 𝑓 and depends only on (Φ)ℜ,ℭ.

Intuitively, a balanced partially separable problem converges faster than an unbalanced one. For example, consider an output feedback problem with a row separable objective. This is an unbalanced partially separable problem; though all row sub-constraints have a matching sub-objective, none of the column sub-constraints have matching sub-objectives. Thus, we are only able to minimize the objective in the row computation (4.10a); this results in slow convergence, and places more burden on consensus between Φ and Ψ than a balanced problem would. More unbalanced examples include a state feedback problem with a row separable objective, or a full control problem with a column separable objective. To balance an output feedback problem, we require an element-wise separable objective 𝑓.

To summarize: for both fully separable and partially separable problems, computational complexity scales independently of system size. However, partially separable problems require iteration, while fully separable problems do not. For partially separable problems, we prefer a balanced problem to an unbalanced problem due to faster convergence. Thus, element-wise separability (e.g. H2 optimal control, 𝜈 robustness) is desirable for two reasons: firstly, for state feedback and full control, element-wise separable objectives give rise to fully separable problems. Secondly, for output feedback, where ADMM iterations are unavoidable, element-wise separable objectives give rise to balanced problems. Finally, we remark that H∞ robust control problems are not at all separable, and make for highly un-scalable computations; this motivates our use of L1, L∞, and 𝜈 robustness.