
Chapter IV: Distributed Structured Robust Control

4.4 Robust Stability

Compatibility complements separability. If functional 𝑓 is separable into sub-functionals 𝑓𝑖, then each 𝑓𝑖 is a compatible sub-functional of 𝑓; similar arguments apply for constraints.

Definition 4.9. Let $\mathfrak{R}, \mathfrak{C} \subseteq \mathbb{Z}_+$ indicate some sets of indices. The partially separable problem (4.9) can be decomposed into row and column sub-functionals and sub-constraints. We say (4.9) is a balanced ADMM problem if, for any set of elements $(\Phi)_{\mathfrak{R},\mathfrak{C}}$ which appear together in a sub-constraint, there exists a matching sub-functional $f_{\text{sub}}$ that is compatible with 𝑓 and depends only on $(\Phi)_{\mathfrak{R},\mathfrak{C}}$.

Intuitively, a balanced partially separable problem converges faster than an unbalanced one. For example, consider an output feedback problem with a row separable objective. This is an unbalanced partially separable problem; though all row sub-constraints have a matching sub-objective, none of the column sub-constraints have matching sub-objectives. Thus, we are only able to minimize the objective in the row computation (4.10a); this results in slow convergence, and places more burden on consensus between Φ and Ψ than a balanced problem would. More unbalanced examples include a state feedback problem with a row separable objective, or a full control problem with a column separable objective. To balance an output feedback problem, we require an element-wise separable objective 𝑓.

To summarize: for both fully separable and partially separable problems, computational complexity scales independently of system size. However, partially separable problems require iteration, while fully separable problems do not. For partially separable problems, we prefer a balanced problem to an unbalanced problem due to faster convergence. Thus, element-wise separability (e.g. H2 optimal control, 𝜈 robustness) is desirable for two reasons: firstly, for state feedback and full control, element-wise separable objectives give rise to fully separable problems. Secondly, for output feedback, where ADMM iterations are unavoidable, element-wise separable objectives give rise to balanced problems. Finally, we remark that H∞ robust control problems are not at all separable, and make for highly un-scalable computations; this motivates our use of L1, L∞, and 𝜈 robustness.

๐šซ

๐ณ ๐† ๐ฐ

Figure 4.1: Feedback interconnection of transfer matrixGand uncertaintyโˆ†. Gis the nominal closed-loop response from disturbancewto regulated outputz.

We now use the SLS formulation to pose robust control problems as distributed optimization problems.

Robust stability conditions

Let transfer matrix G map disturbance w to regulated output z. Generally, z is a linear function of state x and input u, i.e. G = 𝐻Φ for some constant matrix 𝐻. Thus, G is strictly causal and LTI. Assume that we have some fixed closed-loop map Φ, and therefore fixed G. We construct the positive constant magnitude matrix $M = \sum_{p=1}^{\infty} |\mathbf{G}(p)|$, where G(𝑝) are the spectral elements of G, and $|\cdot|$ denotes the element-wise absolute value. Let $\mathcal{D}$ be the set of positive diagonal matrices

D ={๐ท โˆˆR๐‘›ร—๐‘› : (๐ท)๐‘–, ๐‘— =0 โˆ€๐‘– โ‰  ๐‘— ,(๐ท)๐‘–,๐‘– >0 โˆ€๐‘–} (4.11) Lemma 4.1. The interconnection in Figure 4.1 is robustly stable in theL1sense for all DNLTVโˆ†such that โˆฅโˆ†โˆฅโˆžโ†’โˆž < 1

๐›ฝ if and only ifinf๐ทโˆˆDโˆฅ๐ท ๐‘€ ๐ทโˆ’1โˆฅโˆžโ†’โˆž โ‰ค ๐›ฝ. Proof in [7].

Lemma 4.2. The interconnection in Figure 4.1 is robustly stable in the L∞ sense for all DNLTV Δ such that $\|\Delta\|_{1\to 1} < 1/\beta$ if and only if $\inf_{D \in \mathcal{D}} \|DMD^{-1}\|_{1\to 1} \leq \beta$. Proof: equivalent to applying Lemma 4.1 to $M^\top$.

Lemma 4.3. The interconnection in Figure 4.1 is robustly stable in the 𝜈 sense for all DNLTV Δ such that $\|\Delta\|_{\infty\to 1} < 1/\beta$ if $\inf_{D \in \mathcal{D}} \|DMD^{-1}\|_{1\to\infty} \leq \beta$. Additionally, if $\exists D \in \mathcal{D}$ s.t. $DMD^{-1}$ is diagonally maximal, then this condition is both sufficient and necessary. Proof: Theorem 4 in [13].

Definition 4.10. Matrix $A \in \mathbb{R}^{n \times n}$ is diagonally maximal if $\exists k$ s.t. $|(A)_{k,k}| = \max_{i,j} |(A)_{i,j}|$, i.e. the maximum element in 𝐴 lies on its diagonal.

In general, computing the $\|\cdot\|_{\infty\to 1}$ norm is NP-hard; for diagonal Δ, $\|\Delta\|_{\infty\to 1} = \sum_k |(\Delta)_{k,k}|$ [13].

Let the nominal performance be $\|Q\Phi\|_{\text{perf}}$ for some norm $\|\cdot\|_{\text{perf}}$ and some constant matrix 𝑄. Then, leveraging the robust analysis results from Lemmas 4.1-4.3, we can pose the synthesis problem for nominal performance and robust stability as

$$\min_{\Phi, M, D} \;\; \|Q\Phi\|_{\text{perf}} + \|DMD^{-1}\|_{\text{stab}} \quad \text{s.t.} \;\; M = \sum_{p=1}^{T} |H\Phi(p)|, \;\; \Phi \in \mathcal{S}_a \cap \mathcal{P}, \;\; D \in \mathcal{D} \qquad (4.12)$$

where for ease of computation we assume that Φ is finite impulse response (FIR) with horizon 𝑇. We intentionally leave norms ambiguous; $\|\cdot\|_{\text{stab}}$ can be $\|\cdot\|_{\infty\to\infty}$, $\|\cdot\|_{1\to 1}$, or $\|\cdot\|_{1\to\infty}$ for L1, L∞, and 𝜈 robust stability, respectively. $\|DMD^{-1}\|_{\text{stab}}$ corresponds to the robust stability margin $1/\beta$; robust stability is guaranteed for all Δ such that $\|\Delta\| < 1/\beta$, for the appropriate norm on Δ. Smaller β corresponds to stability guarantees for a larger set of Δ. The nominal performance norm $\|\cdot\|_{\text{perf}}$ may be different from $\|\cdot\|_{\text{stab}}$.

D-ฮฆiteration

Problem (4.12) is nonconvex, and does not admit a convex reformulation. Inspired by the D-K iteration method from H∞ robust control [8], we adopt an iterative approach. We heuristically minimize (4.12) by iteratively fixing 𝐷 and optimizing over Φ in the "Φ step", then fixing Φ and optimizing (or randomizing) over 𝐷 in the "D step".

We remark that problem (4.12) poses both nominal performance and robust stability as objectives. If we already know the desired robust stability margin $\beta_{\max}^{-1}$, we can omit the stability objective and instead enforce a constraint $\|DMD^{-1}\|_{\text{stab}} \leq \beta_{\max}$, as is done in D-K iteration; similarly, if we already know the desired nominal performance α, we can omit the performance objective and enforce $\|Q\Phi\|_{\text{perf}} \leq \alpha$. The Φ step solves the following problem

ฮฆ, ๐‘€=argmin

ฮฆ, ๐‘€

โˆฅ๐‘„ฮฆโˆฅperf

s.t. โˆฅ๐ท ๐‘€ ๐ทโˆ’1โˆฅstab โ‰ค ๐›ฝ ๐‘€ =

๐‘‡

โˆ‘๏ธ

๐‘=1

|๐ปฮฆ(๐‘) |, ฮฆ โˆˆ S๐‘Žโˆฉ P

(4.13)

for some fixed value β and scaling matrix $D \in \mathcal{D}$. The separability of problem (4.13) is characterized by the separability of its objective and constraints. As previously mentioned, $\mathcal{P}$ typically consists of sparsity constraints on Φ; this constraint is element-wise separable. The separability of the other constraints and objectives in (4.13) is as follows:

โ€ข If๐‘„ is separably diagonal, โˆฅ๐‘„ฮฆโˆฅperf has the same separability as โˆฅยทโˆฅperf; If not, it is column separable if and only ifโˆฅยทโˆฅperf is column separable.

โ€ข โˆฅ๐ท ๐‘€ ๐ทโˆ’1โˆฅstab < ๐›ฝhas the same separability as โˆฅยทโˆฅstab.

โ€ข If๐ป is separably diagonal, ๐‘€ = ร๐‘‡

๐‘=1|๐ปฮฆ(๐‘) | is element-wise separable; if not, it is column separable.

โ€ข ฮฆโˆˆ S๐‘Žis column separable for state feedback, row separable for full control, and partially separable for output feedback.

Definition 4.11. For state feedback, the product 𝑄Φ may be written as $Q_x \Phi_x + Q_u \Phi_u$ for some matrices $Q_x$ and $Q_u$. 𝑄 is separably diagonal if both $Q_x$ and $Q_u$ are diagonal matrices. Analogous definitions apply to the full control and output feedback cases.

Table 4.1 summarizes the separability of (4.13) for state feedback, full control, and output feedback problems with H∞, L1, L∞, and 𝜈 robustness, where we assume that 𝑄 and 𝐻 are separably diagonal and $\|\cdot\|_{\text{perf}}$ has the same type of separability as $\|\cdot\|_{\text{stab}}$. Note that H∞ is not separable for any problem. For state feedback, L∞ and 𝜈 are the preferred stability criteria; for full control, L1 and 𝜈 are the preferred stability criteria. For output feedback, 𝜈 is the only criterion that produces a balanced partially separable problem. Overall, 𝜈 robustness is preferable in all three cases, resulting in either fully separable formulations that require no iterations, or balanced partially separable formulations, which have preferable convergence properties. Though convergence properties vary, the Φ step (4.13) can be computed with complexity that scales with local neighborhood size 𝑑 instead of global system size for all non-H∞ cases.
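As an illustration of the Φ step, the following cvxpy sketch solves a small state feedback instance of (4.13) with 𝜈 robustness, assuming 𝐻 selects the state (so $M = \sum_p |\Phi_x(p)|$), an H2-style performance norm, and a fixed scaling 𝐷 from the previous D step; all problem data are illustrative, and the locality constraints $\mathcal{P}$ on the supports of $\Phi_x, \Phi_u$ are omitted for brevity.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 4, 2, 10                      # states, inputs, FIR horizon
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
beta = 5.0                              # current robust stability target
d = np.ones(n)                          # diagonal of the fixed scaling D

Phi_x = [cp.Variable((n, n)) for _ in range(T)]   # spectral elements Phi_x(p)
Phi_u = [cp.Variable((m, n)) for _ in range(T)]
M = cp.Variable((n, n), nonneg=True)              # magnitude matrix

# SLS achievability constraints (Phi in S_a) for FIR state feedback.
cons = [Phi_x[0] == np.eye(n)]
cons += [Phi_x[p + 1] == A @ Phi_x[p] + B @ Phi_u[p] for p in range(T - 1)]
cons += [A @ Phi_x[T - 1] + B @ Phi_u[T - 1] == 0]

# M >= sum_p |Phi_x(p)| elementwise; tight at optimality because the
# stability constraint below is monotone in the entries of M.
cons += [M >= sum(cp.abs(Phi_x[p]) for p in range(T))]

# nu robustness: ||D M D^{-1}||_{1->inf} = max_{i,j} (d_i / d_j) M_{ij} <= beta.
cons += [cp.max(cp.multiply(np.outer(d, 1.0 / d), M)) <= beta]

# H2-style nominal performance ||Q Phi||_perf with Q = I.
perf = sum(cp.sum_squares(Phi_x[p]) + cp.sum_squares(Phi_u[p]) for p in range(T))
prob = cp.Problem(cp.Minimize(perf), cons)
prob.solve()
print(prob.status, prob.value)
```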

The D step solves the following problem

$$D = \operatorname*{argmin}_{D} \;\; \|DMD^{-1}\|_{\text{stab}} \quad \text{s.t.} \;\; D \in \mathcal{D} \qquad (4.14)$$

Table 4.1: Separability of Φ step of D-Φ iteration

       State Feedback        Full Control          Output Feedback
H∞     No                    No                    No
L1     Partial, Unbalanced   Full                  Partial, Unbalanced
L∞     Full                  Partial, Unbalanced   Partial, Unbalanced
𝜈      Full                  Full                  Partial, Balanced

for some fixed magnitude matrix 𝑀. For 𝜈 robustness, the minimizing D step (4.14) can be recast as a linear program

๐‘™๐‘–, ๐œ‚=argmin

๐‘™๐‘–,๐œ‚

๐œ‚ s.t. log(๐‘€)๐‘–, ๐‘— +๐‘™๐‘–โˆ’๐‘™๐‘— โ‰ค ๐œ‚

โˆ€(๐‘€)๐‘–, ๐‘— โ‰ 0, 1 โ‰ค๐‘–, ๐‘— โ‰ค ๐‘›

(4.15)
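In centralized form, (4.15) is a small LP; a minimal cvxpy sketch with an illustrative magnitude matrix 𝑀:

```python
import cvxpy as cp
import numpy as np

M = np.array([[1.0, 0.2, 0.0],
              [2.0, 0.5, 0.1],
              [0.0, 0.3, 0.4]])         # illustrative magnitude matrix
n = M.shape[0]

l = cp.Variable(n)
eta = cp.Variable()
cons = [np.log(M[i, j]) + l[i] - l[j] <= eta
        for i in range(n) for j in range(n) if M[i, j] != 0]
prob = cp.Problem(cp.Minimize(eta), cons)
prob.solve()

# Recover D = diag(exp(l_1), ..., exp(l_n)); exp(eta) is the achieved
# max-entry norm ||D M D^{-1}||_{1->inf}.
D = np.diag(np.exp(l.value))
print(np.max(D @ M @ np.linalg.inv(D)), np.exp(prob.value))   # these agree
```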

The optimal solution 𝐷 can be recovered as $D = \operatorname{diag}(\exp(l_1), \exp(l_2), \ldots, \exp(l_n))$. Problem (4.15) can be distributedly computed using ADMM consensus [14]. Let $x_i = \begin{bmatrix} \eta_i \\ L_{j@i} \end{bmatrix}$ be the variable at subsystem 𝑖, where $\eta_i$ is subsystem 𝑖's value of η, and $L_{j@i}$ is a vector containing $l_{j@i}$: subsystem 𝑖's values of $l_j$ for all $j \in \mathcal{N}_d(i)$. The goal is for all subsystems to reach consensus on η and $l_i$, $1 \leq i \leq N$. We introduce dual variable $y_i$ and averaging variable $\bar{x}_i = \begin{bmatrix} \bar{\eta}_i \\ \bar{L}_{j@i} \end{bmatrix}$. For each iteration 𝑘, subsystem 𝑖 performs the following computations

๐‘ฅ๐‘˜+1

๐‘– =argmin

๐‘ฅ๐‘–

๐œ‚๐‘–+ (๐‘ฆ๐‘˜

๐‘–)โŠค(๐‘ฅ๐‘–โˆ’ยฏ๐‘ฅ๐‘˜

๐‘–) +๐›พโˆฅ๐‘ฅ๐‘–โˆ’๐‘ฅยฏ๐‘˜

๐‘–โˆฅ22 s.t. โˆ€๐‘— โˆˆ N๐‘‘(๐‘–),(๐‘€)๐‘–, ๐‘— +๐‘™๐‘–@๐‘–โˆ’๐‘™๐‘—@๐‘– โ‰ค ๐œ‚๐‘–

(4.16a)

ยฏ ๐œ‚๐‘˜+1

๐‘– = 1

|N๐‘‘(๐‘–) |

โˆ‘๏ธ

๐‘—โˆˆN๐‘‘(๐‘–)

๐œ‚๐‘—

ยฏ๐‘™๐‘– = 1

|N๐‘‘(๐‘–) |

โˆ‘๏ธ

๐‘—โˆˆN๐‘‘(๐‘–)

๐‘™๐‘–@๐‘—, ยฏ๐‘™๐‘˜+1

๐‘—@๐‘– =๐‘™ยฏ๐‘—,โˆ€๐‘— โˆˆ N๐‘‘(๐‘–)

(4.16b)

๐‘ฆ๐‘˜+1

๐‘– =๐‘ฆ๐‘˜

๐‘– + ๐›พ 2(๐‘ฅ๐‘˜+1

๐‘– โˆ’๐‘ฅยฏ๐‘˜+1

๐‘– ) (4.16c)

where ๐›พ is a user-determined parameter, and iterations stop when consensus is reached, i.e. differences between relevant variables are sufficiently small. The size of optimization variable ๐‘ฅ๐‘– depends only on local neighborhood size ๐‘‘; thus, the complexity of (4.16a) scales independently of global system size. Computation (4.16b) requires communication, but only within the local neighborhood. Also, by

the definition ofฮฆand the construction of ๐‘€, (๐‘€)๐‘–, ๐‘— =0 โˆ€๐‘— โˆ‰N๐‘‘(๐‘–). Thus, for a fully connected system, solving (4.16) is equivalent to solving (4.15). Additionally, consensus problems are balanced as per Definition 4.9, so (4.16) converges relatively quickly.

For L1 robustness, (4.14) is solved by setting $D = \operatorname{diag}(v_1, v_2, \ldots, v_n)^{-1}$, where 𝑣 is the eigenvector corresponding to the largest-magnitude eigenvalue of 𝑀 [7]. This computation does not lend itself to scalable distributed computation. To ameliorate this, we propose an alternative formulation that randomizes instead of minimizing over 𝐷. This can be written in the form of an optimization problem as

๐ท =argmin

๐ท

0 s.t. โˆฅ๐ท ๐‘€ ๐ทโˆ’1โˆฅstab โ‰ค ๐›ฝ (4.17) The randomizing formulation lends itself to distributed computation. Also, we remark that (4.14) can be solved by iteratively solving (4.17) to search for the lowest feasible value of ๐›ฝ.

Define vectors

$$v = \begin{bmatrix} (D)_{1,1} \\ (D)_{2,2} \\ \vdots \\ (D)_{n,n} \end{bmatrix}, \qquad v^{-1} = \begin{bmatrix} (D)_{1,1}^{-1} \\ (D)_{2,2}^{-1} \\ \vdots \\ (D)_{n,n}^{-1} \end{bmatrix} \qquad (4.18)$$

We can rewrite constraint $\|DMD^{-1}\|_{\text{stab}} \leq \beta$ as $M v^{-1} \leq \beta v^{-1}$ for L1 stability, and $M^\top v \leq \beta v$ for L∞ stability. Then, problem (4.17) can be formulated as a scalable distributed ADMM consensus problem using similar techniques as (4.15).
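In the L∞ case, for instance, (4.17) becomes the linear feasibility problem of finding $v > 0$ with $M^\top v \leq \beta v$; a cvxpy sketch with an illustrative 𝑀 and β (by Perron-Frobenius theory, any β above the spectral radius of 𝑀 is feasible):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
M = np.abs(rng.standard_normal((4, 4)))            # nonnegative magnitude matrix
beta = 1.1 * np.max(np.abs(np.linalg.eigvals(M)))  # feasible target

v = cp.Variable(4)
# The constraints are homogeneous in v, so v >= 1 loses no generality.
prob = cp.Problem(cp.Minimize(0), [M.T @ v <= beta * v, v >= 1.0])
prob.solve()

if prob.status == cp.OPTIMAL:
    D = np.diag(v.value)    # any feasible v gives a valid scaling for (4.17)
```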

Both versions of the D step ((4.14) and (4.17)) for D-Φ iteration are simpler than the D step in D-K iteration [8], which requires a somewhat involved frequency fitting process. Also, all separable versions of the proposed D step are less computationally intensive than the Φ step (4.13), since the decision variable in the D step is much smaller. Table 4.2 summarizes the scalability of different versions of the D step for different robustness criteria. "Minimize" refers to solving (4.14) directly; "Iteratively Minimize" refers to solving (4.14) by iteratively solving (4.17) to search for the lowest feasible β; "Randomize" refers to solving (4.17) for a fixed β. ✓ indicates that we can use scalable distributed computation, and ✗ indicates that no scalable distributed formulation is available; by scalable, we mean complexity that scales independently of global system size. For iterative minimization, there is the obvious caveat of iterations incurring additional computational time; however, for L1 and L∞, iterative minimization is more scalable than direct minimization.

Table 4.2: Scalability of D step of D-Φ iteration

       Minimize   Iteratively Minimize   Randomize
H∞     ✗          ✗                      ✗
L1     ✗          ✓                      ✓
L∞     ✗          ✓                      ✓
𝜈      ✓          ✓                      ✓

Additionally, we show in the next section that algorithms using the randomizing D step perform similarly to algorithms using the minimizing D step; thus, iterative minimization may be unnecessary. Overall, 𝜈 robustness appears to be preferable for scalability purposes for both the Φ step and the D step.

We now present two algorithms for D-Φ iteration. Algorithm 4.1 is based on minimizing over 𝐷 (4.14), while Algorithm 4.2 is based on randomizing over 𝐷 (4.17). Both algorithms compute the controller Φ which achieves optimal nominal performance for some desired robust stability margin $\beta_{\max}^{-1}$.

Algorithm 4.1 D-Φ iteration with minimizing D step
input: $\beta_{\text{step}} > 0$, $\beta_{\max} > 0$
output: Φ, β
1: Initialize $\beta^{k=0} \leftarrow \infty$, $k \leftarrow 1$
2: Set $\beta^k \leftarrow \beta^{k-1} - \beta_{\text{step}}$. Solve (4.13) to obtain $\Phi^k, M^k$
   if (4.13) is infeasible: return $\Phi^{k-1}, \beta^{k-1}$
3: Solve (4.14) to obtain 𝐷. Set $\beta^k \leftarrow \|D M^k D^{-1}\|_{\text{stab}}$
   if $\beta^k \leq \beta_{\max}$: return $\Phi^k, \beta^k$
4: Set $k \leftarrow k+1$ and return to step 2

In Algorithm 4.1, we alternate between minimizing over Φ and minimizing over 𝐷, and stop when no more progress can be made or when $\beta_{\max}$ is attained. No initial guess of 𝐷 is needed; at iteration $k = 1$, $\beta^k = \infty$, and the $\|DMD^{-1}\|_{\text{stab}} \leq \beta$ constraint in (4.13) of step 2 is trivially satisfied. In Algorithm 4.2, we alternate between minimizing Φ and randomizing 𝐷. There are two main departures from Algorithm 4.1 due to the use of the randomizing D step:

1. An initial guess of 𝐷 is required to generate $\beta^{k=1}$, which is then used as an input to the randomizing D step. $D = I$ is a natural choice, although we may also randomize or minimize over the initial 𝐷.

Algorithm 4.2D-ฮฆiteration with randomizing D step input : ๐›ฝ

step > 0, ๐›ฝ

max > 0 output : ฮฆ, ๐›ฝ

1: Initialize ๐›ฝ๐‘˜=0 โ† โˆž,๐‘˜ โ†1, ๐ท โ† ๐ผ

2: Set ๐›ฝ๐‘˜ โ† ๐›ฝ๐‘˜โˆ’1โˆ’๐›ฝ

step. Solve (4.13) to obtainฮฆ๐‘˜, ๐‘€๐‘˜ if ๐‘˜ =1:

Set ๐›ฝ๐‘˜ โ† โˆฅ๐ท ๐‘€๐‘˜๐ทโˆ’1โˆฅstab if (4.13) is infeasible:

Solve (4.17)

if (4.17) is infeasible: returnฮฆ๐‘˜โˆ’1, ๐›ฝ๐‘˜โˆ’1 else :

Solve (4.13) to obtainฮฆ๐‘˜, ๐‘€๐‘˜ if ๐›ฝ๐‘˜ โ‰ค ๐›ฝ

max: returnฮฆ๐‘˜, ๐›ฝ๐‘˜

3: Solve (4.17) to obtain ๐ท

4: Set ๐‘˜ โ† ๐‘˜+1and return to step 2

2. In step 2, when we cannot find a new Φ to make progress on β, instead of stopping, we attempt to find a new 𝐷 to make progress on β. If such a 𝐷 can be found, we find the new optimal Φ, then continue iterating.

Parameter๐›ฝ

stepappears in both algorithms, and indicates the minimal robust stability margin improvement per step. For both algorithms, computational complexity is dominated by theฮฆstep problem (4.13) and D step problem (4.14) or (4.17). All of these problems can be distributedly computed, and all enjoy complexity that scales independently of global system size; thus, the complexity of the overall algorithm also scales independently of global system size.