Chapter IV: Distributed Structured Robust Control
4.4 Robust Stability
Compatibility complements separability. If functional $g$ is separable into sub-functionals $g_j$, then each $g_j$ is a compatible sub-functional of $g$; similar arguments apply for constraints.
Definition 4.9. Let $\mathfrak{R}, \mathfrak{C} \subset \mathbb{Z}_+$ indicate some sets of indices. The partially separable problem (4.9) can be decomposed into row and column sub-functionals and sub-constraints. We say (4.9) is a balanced ADMM problem if, for any set of elements $(\Phi)_{\mathfrak{R},\mathfrak{C}}$ which appear together in a sub-constraint, there exists a matching sub-functional $g^{\text{sub}}$ that is compatible with $g$ and depends only on $(\Phi)_{\mathfrak{R},\mathfrak{C}}$.
Intuitively, a balanced partially separable problem converges faster than an unbalanced one. For example, consider an output feedback problem with a row separable objective. This is an unbalanced partially separable problem; though all row sub-constraints have a matching sub-objective, none of the column sub-constraints have matching sub-objectives. Thus, we are only able to minimize the objective in the row computation (4.10a); this results in slow convergence, and places more burden on consensus between $\Phi$ and $\Psi$ than a balanced problem would. More unbalanced examples include a state feedback problem with a row separable objective, or a full control problem with a column separable objective. To balance an output feedback problem, we require an element-wise separable objective $g$.
To summarize: for both fully separable and partially separable problems, computational complexity scales independently of system size. However, partially separable problems require iteration, while fully separable problems do not. For partially separable problems, we prefer a balanced problem to an unbalanced one due to faster convergence. Thus, element-wise separability (e.g. $\mathcal{H}_2$ optimal control, $\nu$ robustness) is desirable for two reasons: firstly, for state feedback and full control, element-wise separable objectives give rise to fully separable problems. Secondly, for output feedback, where ADMM iterations are unavoidable, element-wise separable objectives give rise to balanced problems. Finally, we remark that $\mathcal{H}_\infty$ robust control problems are not at all separable, and make for highly unscalable computations; this motivates our use of $\mathcal{L}_1$, $\mathcal{L}_\infty$, and $\nu$ robustness.
Figure 4.1: Feedback interconnection of transfer matrix $G$ and uncertainty $\Delta$. $G$ is the nominal closed-loop response from disturbance $w$ to regulated output $z$.
We use the SLS formulation to pose robust control problems as distributed optimization problems.
Robust stability conditions
Let transfer matrix $G$ map disturbance $w$ to regulated output $z$. Generally, $z$ is a linear function of state $x$ and input $u$, i.e. $G = H\Phi$ for some constant matrix $H$. Thus, $G$ is strictly causal and LTI. Assume that we have some fixed closed-loop map $\Phi$, and therefore fixed $G$. We construct the positive constant magnitude matrix $M = \sum_{k=1}^{\infty} |G(k)|$, where $G(k)$ are the spectral elements of $G$, and $|\cdot|$ denotes the element-wise absolute value. Let $\mathcal{D}$ be the set of positive diagonal matrices
$$\mathcal{D} = \{D \in \mathbb{R}^{n \times n} : (D)_{i,j} = 0 \ \forall i \neq j, \ (D)_{i,i} > 0 \ \forall i\} \qquad (4.11)$$

Lemma 4.1. The interconnection in Figure 4.1 is robustly stable in the $\mathcal{L}_1$ sense for all DNLTV $\Delta$ such that $\|\Delta\|_{\infty\to\infty} < \frac{1}{\beta}$ if and only if $\inf_{D\in\mathcal{D}} \|DMD^{-1}\|_{\infty\to\infty} \le \beta$. Proof in [7].
Lemma 4.2. The interconnection in Figure 4.1 is robustly stable in the $\mathcal{L}_\infty$ sense for all DNLTV $\Delta$ such that $\|\Delta\|_{1\to 1} < \frac{1}{\beta}$ if and only if $\inf_{D\in\mathcal{D}} \|DMD^{-1}\|_{1\to 1} \le \beta$. Proof: equivalent to applying Lemma 4.1 to $M^\top$.
Lemma 4.3. The interconnection in Figure 4.1 is robustly stable in the $\nu$ sense for all DNLTV $\Delta$ such that $\|\Delta\|_{\infty\to 1} < \frac{1}{\beta}$ if $\inf_{D\in\mathcal{D}} \|DMD^{-1}\|_{1\to\infty} \le \beta$. Additionally, if $\exists D \in \mathcal{D}$ s.t. $DMD^{-1}$ is diagonally maximal, then this condition is both sufficient and necessary. Proof: Theorem 4 in [13].
Definition 4.10. Matrix $A \in \mathbb{R}^{n \times n}$ is diagonally maximal if $\exists k$ s.t. $|(A)_{k,k}| = \max_{i,j} |(A)_{i,j}|$, i.e. the maximum-magnitude element of $A$ lies on its diagonal.
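Diagonal maximality is cheap to verify numerically. The following is our own minimal sketch (the helper name is hypothetical, not from the original text):

```python
import numpy as np

def is_diagonally_maximal(A: np.ndarray) -> bool:
    """Definition 4.10: the largest-magnitude element of A lies on its diagonal."""
    abs_A = np.abs(A)
    return np.max(np.diag(abs_A)) >= np.max(abs_A)

# The maximum-magnitude element (3.0) lies on the diagonal, so this prints True.
print(is_diagonally_maximal(np.array([[3.0, 1.0],
                                      [2.0, 0.5]])))
```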
In general, computing the $\|\cdot\|_{\infty\to 1}$ norm is NP-hard; for diagonal $\Delta$, $\|\Delta\|_{\infty\to 1} = \sum_i |(\Delta)_{i,i}|$ [13].
Let the nominal performance be $\|Q\Phi\|_{\text{perf}}$ for some norm $\|\cdot\|_{\text{perf}}$ and some constant matrix $Q$. Then, leveraging the robust analysis results from Lemmas 4.1–4.3, we can pose the synthesis problem for nominal performance and robust stability as
$$
\begin{aligned}
\min_{\Phi,\, M,\, D} \quad & \|Q\Phi\|_{\text{perf}} + \|DMD^{-1}\|_{\text{stab}} \\
\text{s.t.} \quad & M = \sum_{k=1}^{T} |H\Phi(k)|, \quad \Phi \in \mathcal{S}_a \cap \mathcal{P}, \quad D \in \mathcal{D}
\end{aligned}
\qquad (4.12)
$$

where for ease of computation we assume that $\Phi$ is finite impulse response (FIR) with horizon $T$. We intentionally leave the norms ambiguous; $\|\cdot\|_{\text{stab}}$ can be $\|\cdot\|_{\infty\to\infty}$, $\|\cdot\|_{1\to 1}$, or $\|\cdot\|_{1\to\infty}$ for $\mathcal{L}_1$, $\mathcal{L}_\infty$, and $\nu$ robust stability, respectively. $\|DMD^{-1}\|_{\text{stab}}$ corresponds to the robust stability margin $\frac{1}{\beta}$; robust stability is guaranteed for all $\Delta$ such that $\|\Delta\| < \frac{1}{\beta}$, for the appropriate norm on $\Delta$. Smaller $\beta$ corresponds to stability guarantees for a larger set of $\Delta$. The nominal performance norm $\|\cdot\|_{\text{perf}}$ may differ from $\|\cdot\|_{\text{stab}}$.
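To make the norm choices concrete, the sketch below (our own illustration; all variable names are hypothetical) builds the magnitude matrix $M$ from FIR spectral elements and evaluates the three candidate stability norms of $DMD^{-1}$: $\|\cdot\|_{\infty\to\infty}$ is the maximum absolute row sum, $\|\cdot\|_{1\to 1}$ the maximum absolute column sum, and $\|\cdot\|_{1\to\infty}$ the maximum absolute element.

```python
import numpy as np

def stab_norm(A: np.ndarray, kind: str) -> float:
    """Evaluate the three candidate stability norms of a constant matrix A."""
    abs_A = np.abs(A)
    if kind == "L1":    # ||A||_{inf->inf}: max absolute row sum
        return abs_A.sum(axis=1).max()
    if kind == "Linf":  # ||A||_{1->1}: max absolute column sum
        return abs_A.sum(axis=0).max()
    if kind == "nu":    # ||A||_{1->inf}: max absolute element
        return abs_A.max()
    raise ValueError(kind)

rng = np.random.default_rng(0)
G_fir = rng.normal(size=(3, 4, 4))     # hypothetical FIR spectral elements G(1..T), T = 3
M = np.abs(G_fir).sum(axis=0)          # magnitude matrix M = sum_k |G(k)|, cf. (4.12)
D = np.diag([1.0, 2.0, 0.5, 1.0])      # a scaling from the set D in (4.11)
DMD = D @ M @ np.linalg.inv(D)
for kind in ("L1", "Linf", "nu"):
    beta = stab_norm(DMD, kind)
    print(f"{kind}: beta = {beta:.3f}, stability margin 1/beta = {1/beta:.3f}")
```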
D-Φ iteration
Problem (4.12) is nonconvex, and does not admit a convex reformulation. Inspired by the D-K iteration method from $\mathcal{H}_\infty$ robust control [8], we adopt an iterative approach. We heuristically minimize (4.12) by iteratively fixing $D$ and optimizing over $\Phi$ in the "Φ step", then fixing $\Phi$ and optimizing (or randomizing) over $D$ in the "D step".
We remark that problem (4.12) poses both nominal performance and robust stability as objectives. If we already know the desired robust stability margin $\beta_{\max}^{-1}$, we can omit the stability objective and instead enforce the constraint $\|DMD^{-1}\|_{\text{stab}} \le \beta_{\max}$, as is done in D-K iteration; similarly, if we already know the desired nominal performance $\alpha$, we can omit the performance objective and enforce $\|Q\Phi\|_{\text{perf}} \le \alpha$. The Φ step solves the following problem
$$
\begin{aligned}
\Phi, M = \operatorname*{argmin}_{\Phi,\, M} \quad & \|Q\Phi\|_{\text{perf}} \\
\text{s.t.} \quad & \|DMD^{-1}\|_{\text{stab}} \le \beta, \quad M = \sum_{k=1}^{T} |H\Phi(k)|, \quad \Phi \in \mathcal{S}_a \cap \mathcal{P}
\end{aligned}
\qquad (4.13)
$$
for some fixed value $\beta$ and scaling matrix $D \in \mathcal{D}$. The separability of problem (4.13) is characterized by the separability of its objective and constraints. As previously mentioned, $\mathcal{P}$ typically consists of sparsity constraints on $\Phi$; this constraint is element-wise separable. The separability of the other constraints and objectives in (4.13) is as follows:
• If $Q$ is separably diagonal, $\|Q\Phi\|_{\text{perf}}$ has the same separability as $\|\cdot\|_{\text{perf}}$; if not, it is column separable if and only if $\|\cdot\|_{\text{perf}}$ is column separable.
• $\|DMD^{-1}\|_{\text{stab}} \le \beta$ has the same separability as $\|\cdot\|_{\text{stab}}$.
• If $H$ is separably diagonal, $M = \sum_{k=1}^{T} |H\Phi(k)|$ is element-wise separable; if not, it is column separable.
• $\Phi \in \mathcal{S}_a$ is column separable for state feedback, row separable for full control, and partially separable for output feedback.
Definition 4.11. For state feedback, the product $Q\Phi$ may be written as $Q_x\Phi_x + Q_u\Phi_u$ for some matrices $Q_x$ and $Q_u$. $Q$ is separably diagonal if both $Q_x$ and $Q_u$ are diagonal matrices. Analogous definitions apply to the full control and output feedback cases.
Table 4.1 summarizes the separability of (4.13) for state feedback, full control, and output feedback problems with $\mathcal{H}_\infty$, $\mathcal{L}_1$, $\mathcal{L}_\infty$, and $\nu$ robustness, where we assume that $Q$ and $H$ are separably diagonal and $\|\cdot\|_{\text{perf}}$ has the same type of separability as $\|\cdot\|_{\text{stab}}$. Note that $\mathcal{H}_\infty$ is not separable for any problem. For state feedback, $\mathcal{L}_\infty$ and $\nu$ are the preferred stability criteria; for full control, $\mathcal{L}_1$ and $\nu$ are the preferred stability criteria. For output feedback, $\nu$ is the only criterion that produces a balanced partially separable problem. Overall, $\nu$ robustness is preferable in all three cases, resulting in either fully separable formulations that require no iterations, or balanced partially separable formulations, which have preferable convergence properties. Though convergence properties vary, the Φ step (4.13) can be computed with complexity that scales with local neighborhood size $d$ instead of global system size in all non-$\mathcal{H}_\infty$ cases.
The D step solves the following problem

$$D = \operatorname*{argmin}_{D} \ \|DMD^{-1}\|_{\text{stab}} \quad \text{s.t.} \ D \in \mathcal{D} \qquad (4.14)$$
Table 4.1: Separability of Φ step of D-Φ iteration

         State Feedback        Full Control          Output Feedback
H∞       No                    No                    No
L1       Partial, Unbalanced   Full                  Partial, Unbalanced
L∞       Full                  Partial, Unbalanced   Partial, Unbalanced
ν        Full                  Full                  Partial, Balanced
for some fixed magnitude matrix $M$. For $\nu$ robustness, the minimizing D step (4.14) can be recast as a linear program

$$
\begin{aligned}
l, \eta = \operatorname*{argmin}_{l,\, \eta} \quad & \eta \\
\text{s.t.} \quad & \log(M)_{i,j} + l_i - l_j \le \eta \quad \forall (M)_{i,j} \neq 0, \ 1 \le i, j \le n
\end{aligned}
\qquad (4.15)
$$
The optimal solution $D$ can be recovered as $D = \operatorname{diag}(\exp(l_1), \exp(l_2), \ldots, \exp(l_n))$. Problem (4.15) can be distributedly computed using ADMM consensus [14]. Let

$$x_i = \begin{bmatrix} \eta_i \\ L_{@i} \end{bmatrix}$$

be the variable at subsystem $i$, where $\eta_i$ is subsystem $i$'s value of $\eta$, and $L_{@i}$ is a vector containing $l_{j@i}$: subsystem $i$'s values of $l_j$ for all $j \in \mathcal{N}_d(i)$. The goal is for all subsystems to reach consensus on $\eta$ and $l_j$, $1 \le j \le n$. We introduce dual variable $y_i$ and averaging variable $\bar{x}_i = \begin{bmatrix} \bar{\eta}_i \\ \bar{L}_{@i} \end{bmatrix}$. For each iteration $k$, subsystem $i$ performs the following computations
$$
x_i^{k+1} = \operatorname*{argmin}_{x_i} \ \eta_i + (y_i^k)^\top (x_i - \bar{x}_i^k) + \gamma \|x_i - \bar{x}_i^k\|_2^2 \quad \text{s.t.} \ \log(M)_{i,j} + l_{i@i} - l_{j@i} \le \eta_i \ \ \forall j \in \mathcal{N}_d(i) \qquad (4.16a)
$$

$$
\bar{\eta}_i^{k+1} = \frac{1}{|\mathcal{N}_d(i)|} \sum_{j \in \mathcal{N}_d(i)} \eta_j, \qquad \bar{l}_j = \frac{1}{|\mathcal{N}_d(j)|} \sum_{i \in \mathcal{N}_d(j)} l_{j@i}, \quad \bar{l}_{j@i}^{k+1} = \bar{l}_j \ \ \forall j \in \mathcal{N}_d(i) \qquad (4.16b)
$$

$$
y_i^{k+1} = y_i^k + \frac{\gamma}{2} \left( x_i^{k+1} - \bar{x}_i^{k+1} \right) \qquad (4.16c)
$$
where $\gamma$ is a user-determined parameter, and iterations stop when consensus is reached, i.e. when differences between relevant variables are sufficiently small. The size of optimization variable $x_i$ depends only on local neighborhood size $d$; thus, the complexity of (4.16a) scales independently of global system size. Computation (4.16b) requires communication, but only within the local neighborhood. Also, by the definition of $\Phi$ and the construction of $M$, $(M)_{i,j} = 0 \ \forall j \notin \mathcal{N}_d(i)$. Thus, for a fully connected system, solving (4.16) is equivalent to solving (4.15). Additionally, consensus problems are balanced as per Definition 4.9, so (4.16) converges relatively quickly.
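For intuition, a centralized version of LP (4.15) can be solved with an off-the-shelf solver; the ADMM updates (4.16) distribute the same computation across subsystems. The sketch below is our own illustration under that reading (function and variable names are hypothetical), using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def nu_d_step(M: np.ndarray):
    """Centralized LP (4.15): minimize eta subject to
    log(M)_{ij} + l_i - l_j <= eta for all nonzero (M)_{ij}.
    Returns D = diag(exp(l)) and the achieved norm ||D M D^{-1}||_{1->inf}."""
    n = M.shape[0]
    rows, cols = np.nonzero(M)
    A_ub = np.zeros((len(rows), n + 1))   # decision variables: [l_1, ..., l_n, eta]
    b_ub = np.zeros(len(rows))
    for r, (i, j) in enumerate(zip(rows, cols)):
        A_ub[r, i] += 1.0                 # + l_i
        A_ub[r, j] -= 1.0                 # - l_j  (cancels when i == j)
        A_ub[r, n] = -1.0                 # - eta
        b_ub[r] = -np.log(M[i, j])
    c = np.zeros(n + 1)
    c[n] = 1.0                            # objective: minimize eta
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    l, eta = res.x[:n], res.x[n]
    return np.diag(np.exp(l)), np.exp(eta)

M = np.array([[0.5, 2.0],
              [0.1, 0.3]])
D, beta = nu_d_step(M)
# The achieved norm matches the maximum element of the scaled matrix.
print(beta, np.max(np.abs(D @ M @ np.linalg.inv(D))))
```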
For $\mathcal{L}_1$ robustness, (4.14) is solved by setting $D = \operatorname{diag}(v_1, v_2, \ldots, v_n)^{-1}$, where $v$ is the eigenvector corresponding to the largest-magnitude eigenvalue of $M$ [7]. This computation does not lend itself to scalable distributed computation. To ameliorate this, we propose an alternative formulation that randomizes over $D$ instead of minimizing. This can be written in the form of an optimization problem as
$$D = \operatorname*{argmin}_{D} \ 0 \quad \text{s.t.} \ \|DMD^{-1}\|_{\text{stab}} \le \beta \qquad (4.17)$$

The randomizing formulation lends itself to distributed computation. Also, we remark that (4.14) can be solved by iteratively solving (4.17) to search for the lowest feasible value of $\beta$.
Define vectors

$$v = \begin{bmatrix} (D)_{1,1} \\ (D)_{2,2} \\ \vdots \\ (D)_{n,n} \end{bmatrix}, \qquad v^{-1} = \begin{bmatrix} (D)_{1,1}^{-1} \\ (D)_{2,2}^{-1} \\ \vdots \\ (D)_{n,n}^{-1} \end{bmatrix} \qquad (4.18)$$
We can rewrite the constraint $\|DMD^{-1}\|_{\text{stab}} \le \beta$ as $M v^{-1} \le \beta v^{-1}$ for $\mathcal{L}_1$ stability, and as $M^\top v \le \beta v$ for $\mathcal{L}_\infty$ stability. Then, problem (4.17) can be formulated as a scalable distributed ADMM consensus problem using techniques similar to those used for (4.15).
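The following sketch (our own illustration, assuming the conventions of (4.18); nothing here is from the original text) checks the eigenvector-based $\mathcal{L}_1$ D step and the vector forms of the scaled-norm constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(4, 4))   # a positive magnitude matrix
beta = 2.5

# L1 D step via the largest-magnitude eigenvalue of M (cf. [7]): D = diag(v)^{-1}.
eigvals, eigvecs = np.linalg.eig(M)
v = np.abs(eigvecs[:, np.argmax(np.abs(eigvals))].real)
D = np.diag(1.0 / v)
DMD = np.abs(D @ M @ np.linalg.inv(D))
print(DMD.sum(axis=1))                    # every row sum equals the Perron eigenvalue of M

# Vector forms of ||D M D^{-1}||_stab <= beta, with v as in (4.18), v_i = (D)_{i,i}:
v_diag = np.diag(D)
ok_L1 = np.all(M @ (1.0 / v_diag) <= beta / v_diag)    # L1:   M v^{-1} <= beta v^{-1}
ok_Linf = np.all(M.T @ v_diag <= beta * v_diag)        # Linf: M^T v <= beta v
print("L1 feasible:", ok_L1, "| Linf feasible:", ok_Linf)
```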
Both versions of the D step ((4.14) and (4.17)) for D-Φ iteration are simpler than the D step in D-K iteration [8], which requires a somewhat involved frequency-fitting process. Also, all separable versions of the proposed D step are less computationally intensive than the Φ step (4.13), since the decision variable in the D step is much smaller. Table 4.2 summarizes the scalability of different versions of the D step for different robustness criteria. "Minimize" refers to solving (4.14) directly; "Iteratively Minimize" refers to solving (4.14) by iteratively solving (4.17) to search for the lowest feasible $\beta$; "Randomize" refers to solving (4.17). A ✓ indicates that we can use scalable distributed computation, and a ✗ indicates that no scalable distributed formulation is available; by scalable, we mean complexity that scales independently of global system size. For iterative minimization, there is the obvious caveat of iterations incurring additional computation time; however, for $\mathcal{L}_1$ and $\mathcal{L}_\infty$, iterative minimization is more scalable than direct minimization. Additionally, we show in
Table 4.2: Scalability of D step of D-Φ iteration

         Minimize   Iteratively Minimize   Randomize
H∞       ✗          ✗                      ✗
L1       ✗          ✓                      ✓
L∞       ✗          ✓                      ✓
ν        ✓          ✓                      ✓
the next section that algorithms using the randomizing D step perform similarly to algorithms using the minimizing D step; thus, iterative minimization may be unnecessary. Overall, $\nu$ robustness appears to be preferable for scalability purposes for both the Φ step and the D step.
We now present two algorithms for D-Φ iteration. Algorithm 4.1 is based on minimizing over $D$ (4.14), while Algorithm 4.2 is based on randomizing over $D$ (4.17). Both algorithms compute the controller $\Phi$ which achieves optimal nominal performance for some desired robust stability margin $\beta_{\max}^{-1}$.
Algorithm 4.1: D-Φ iteration with minimizing D step
input: $\beta_{\text{step}} > 0$, $\beta_{\max} > 0$
output: $\Phi$, $\beta$
1: Initialize $\beta^{k=0} \leftarrow \infty$, $k \leftarrow 1$
2: Set $\beta^k \leftarrow \beta^{k-1} - \beta_{\text{step}}$. Solve (4.13) to obtain $\Phi^k$, $M^k$
   if (4.13) is infeasible: return $\Phi^{k-1}$, $\beta^{k-1}$
3: Solve (4.14) to obtain $D$. Set $\beta^k \leftarrow \|D M^k D^{-1}\|_{\text{stab}}$
   if $\beta^k \le \beta_{\max}$: return $\Phi^k$, $\beta^k$
4: Set $k \leftarrow k + 1$ and return to step 2
In Algorithm 4.1, we alternate between minimizing over $\Phi$ and minimizing over $D$, and stop when no more progress can be made or when $\beta_{\max}$ is attained. No initial guess of $D$ is needed; at iteration $k = 1$, $\beta^k = \infty$, and the $\|DMD^{-1}\|_{\text{stab}} \le \beta$ constraint in (4.13) of step 2 is trivially satisfied. In Algorithm 4.2, we alternate between minimizing over $\Phi$ and randomizing over $D$. There are two main departures from Algorithm 4.1 due to the use of the randomizing D step:
1. An initial guess of $D$ is required to generate $\beta^{k=1}$, which is then used as an input to the randomizing D step. $D = I$ is a natural choice, although we may also randomize or minimize over the initial $D$.
Algorithm 4.2: D-Φ iteration with randomizing D step
input: $\beta_{\text{step}} > 0$, $\beta_{\max} > 0$
output: $\Phi$, $\beta$
1: Initialize $\beta^{k=0} \leftarrow \infty$, $k \leftarrow 1$, $D \leftarrow I$
2: Set $\beta^k \leftarrow \beta^{k-1} - \beta_{\text{step}}$. Solve (4.13) to obtain $\Phi^k$, $M^k$
   if $k = 1$: set $\beta^k \leftarrow \|D M^k D^{-1}\|_{\text{stab}}$
   if (4.13) is infeasible:
      Solve (4.17)
      if (4.17) is infeasible: return $\Phi^{k-1}$, $\beta^{k-1}$
      else: Solve (4.13) to obtain $\Phi^k$, $M^k$
   if $\beta^k \le \beta_{\max}$: return $\Phi^k$, $\beta^k$
3: Solve (4.17) to obtain $D$
4: Set $k \leftarrow k + 1$ and return to step 2
2. In step 2, when we cannot find a new $\Phi$ to make progress on $\beta$, instead of stopping, we attempt to find a new $D$ to make progress on $\beta$. If such a $D$ can be found, we find the new optimal $\Phi$, then continue iterating.
Parameter $\beta_{\text{step}}$ appears in both algorithms, and indicates the minimum robust stability margin improvement per step. For both algorithms, computational complexity is dominated by the Φ step problem (4.13) and the D step problem (4.14) or (4.17). All of these problems can be distributedly computed, and all enjoy complexity that scales independently of global system size; thus, the complexity of the overall algorithm also scales independently of global system size.
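To summarize the control flow, here is a minimal sketch of Algorithm 4.1's outer loop, assuming hypothetical solver callbacks phi_step (solving (4.13)) and d_step (solving (4.14)) are available; this is our own illustration of the loop structure, not the authors' implementation:

```python
import numpy as np

def d_phi_iteration(phi_step, d_step, stab_norm, beta_step, beta_max):
    """Outer loop of Algorithm 4.1 (minimizing D step).
    phi_step(beta, D) solves (4.13), returning (Phi, M) or None if infeasible;
    d_step(M) solves (4.14) for the scaling D; stab_norm evaluates ||.||_stab."""
    beta_prev, result_prev = np.inf, None
    D = None                     # no initial guess needed: the stability constraint
    while True:                  # in (4.13) is vacuous while beta is infinite
        beta = beta_prev - beta_step
        result = phi_step(beta, D)
        if result is None:       # step 2: no further progress possible
            return result_prev, beta_prev
        Phi, M = result
        D = d_step(M)            # step 3: rescale, then re-evaluate the margin
        beta = stab_norm(D @ M @ np.linalg.inv(D))
        if beta <= beta_max:     # desired robust stability margin attained
            return (Phi, M), beta
        beta_prev, result_prev = beta, (Phi, M)
```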