6.3 Localized Controller Synthesis
In this section, we propose a scalable algorithm to solve the LLQG problem (6.6) under Assumption 3 in a localized way. Note that the LLQR decomposition technique cannot be applied to optimization problem (6.6) for plants (6.2) corresponding to output feedback problems. This is because the constraints (6.5b) and (6.5c) admit incompatible decompositions: constraint (6.5b) can be decomposed column-wise, whereas constraint (6.5c) can be decomposed row-wise, introducing a coupling between all optimization variables. The ADMM has proven very useful in "breaking" such coupling between optimization variables, allowing large-scale problems to be decomposed and solved efficiently. Our approach to developing a scalable solution to the LLQG problem (6.6) is therefore to combine the ADMM technique with the LLQR decomposition introduced in the previous chapter.
To reduce notational clutter, we assume that $B_1 = \begin{bmatrix} I & 0 \end{bmatrix}$ and $D_{21} = \begin{bmatrix} 0 & \sigma_y I \end{bmatrix}$, where $\sigma_y$ is the relative magnitude between the process disturbance and the sensor disturbance.¹ Using these values for $B_1$ and $D_{21}$, the LLQG problem (6.6) can be written as
$$\begin{array}{rll}
\underset{\{R,\,M,\,N,\,L\}}{\text{minimize}} & \left\| \begin{bmatrix} C_1 & D_{12} \end{bmatrix} \begin{bmatrix} R & \sigma_y N \\ M & \sigma_y L \end{bmatrix} \right\|_{\mathcal{H}_2}^2 & \quad (6.8a) \\[1.5ex]
\text{subject to} & (6.5b)-(6.5d) & \quad (6.8b) \\[1ex]
& \begin{bmatrix} R & N \\ M & L \end{bmatrix} \in \mathcal{C} \cap \mathcal{L}_d \cap \mathcal{F}_T. & \quad (6.8c)
\end{array}$$
¹The methods in this section extend in a natural way to the case where $\begin{bmatrix} B_1^\top & D_{21}^\top \end{bmatrix}^\top$ is block-diagonal. By solving the transpose of the LLQG problem, the method also extends to the case where $\begin{bmatrix} C_1 & D_{12} \end{bmatrix}$ is block-diagonal.
6.3.1 ADMM Algorithm
We now make a series of observations that motivate the use of the ADMM algorithm to solve the LLQG problem (6.8). First, notice that if we remove constraint (6.5c) from problem (6.8), then the resulting optimization problem admits a column-wise LLQR separation, which, as described in the previous chapter, allows the global problem to be decomposed into subproblems of size defined by the $(d+2)$-outgoing sets of the subsystems. Through a dual argument, we can show that verifying the feasibility of constraint (6.5c) can be done one row at a time, resulting in a feasibility problem that admits a row-wise LLQR separation, once again allowing the global problem to be decomposed into easily solved subproblems. In order to exploit the decomposition properties of each of these modified problems, we leverage the standard ADMM technique of shifting the coupling from the difficult-to-enforce constraints (6.5b) and (6.5c) to a simple equality constraint through the introduction of a redundant variable: we make this approach precise in what follows.
We use
$$\Phi = \begin{bmatrix} R & N \\ M & L \end{bmatrix}$$
to denote the system response that we are solving for. Let $\Psi$ be a duplicate of the optimization variable $\Phi$. Following [5], we define the extended-real-valued functionals $h^{(r)}(\Phi)$ and $h^{(c)}(\Psi)$ by
$$h^{(r)}(\Phi) = \begin{cases} 0 & \text{if } (6.5c), (6.5d), (6.8c) \text{ hold} \\ \infty & \text{otherwise,} \end{cases} \qquad h^{(c)}(\Psi) = \begin{cases} (6.8a) & \text{if } (6.5b), (6.5d), (6.8c) \text{ hold} \\ \infty & \text{otherwise.} \end{cases} \qquad (6.9)$$
Here, $h^{(r)}(\Phi)$ can be considered as a row-wise separable component of (6.8), and $h^{(c)}(\Psi)$ can be considered as a column-wise separable component of (6.8).
Remark 20. Note that the constraints (6.5d) and (6.8c) are included in the definitions of both $h^{(r)}(\cdot)$ and $h^{(c)}(\cdot)$. This is key to allowing the subroutines of the ADMM algorithm to be solved in a localized way, as shown in the following.
Using these definitions, we can rewrite the LLQG optimization problem (6.8) as
$$\begin{array}{rl}
\underset{\{\Phi,\,\Psi\}}{\text{minimize}} & h^{(r)}(\Phi) + h^{(c)}(\Psi) \\
\text{subject to} & \Phi = \Psi. \end{array} \qquad (6.10)$$
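For reference, and following the standard development in [5], ADMM in scaled form operates on the augmented Lagrangian of (6.10), with $\Lambda$ the scaled Lagrange multiplier associated with the constraint $\Phi = \Psi$:
$$\mathcal{L}_\rho(\Phi, \Psi, \Lambda) = h^{(r)}(\Phi) + h^{(c)}(\Psi) + \frac{\rho}{2} \left\| \Phi - \Psi + \Lambda \right\|_{\mathcal{H}_2}^2 - \frac{\rho}{2} \left\| \Lambda \right\|_{\mathcal{H}_2}^2.$$
Each pass of the iteration below minimizes $\mathcal{L}_\rho$ over $\Phi$ with $\Psi$ and $\Lambda$ held fixed, then over $\Psi$, and finally takes a dual ascent step in $\Lambda$.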
Optimization problem (6.10) is precisely in the form required by the ADMM approach [5], and can be solved via the iteration
$$\begin{aligned}
\Phi^{k+1} &= \underset{\Phi}{\operatorname{argmin}} \;\; h^{(r)}(\Phi) + \frac{\rho}{2} \left\| \Phi - \Psi^k + \Lambda^k \right\|_{\mathcal{H}_2}^2 && (6.11a)\\
\Psi^{k+1} &= \underset{\Psi}{\operatorname{argmin}} \;\; h^{(c)}(\Psi) + \frac{\rho}{2} \left\| \Psi - \Phi^{k+1} - \Lambda^k \right\|_{\mathcal{H}_2}^2 && (6.11b)\\
\Lambda^{k+1} &= \Lambda^k + \Phi^{k+1} - \Psi^{k+1}. && (6.11c)
\end{aligned}$$
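As a concrete illustration, the following is a minimal sketch of iteration (6.11) in Python/NumPy, with the FIR transfer matrices identified with a finite matrix representation, as discussed next. The proximal subroutines `prox_row` and `prox_col`, which evaluate the minimizations (6.11a) and (6.11b) respectively, are hypothetical helpers supplied by the caller; localized implementations of these subroutines are discussed in the remainder of this section.

```python
import numpy as np

def admm_llqg(prox_row, prox_col, shape, rho=1.0, max_iter=100, tol=1e-6):
    """Scaled-form ADMM iteration (6.11), following [5].

    prox_row(V): evaluates argmin_Phi h_r(Phi) + (rho/2)||Phi - V||^2  (6.11a)
    prox_col(V): evaluates argmin_Psi h_c(Psi) + (rho/2)||Psi - V||^2  (6.11b)
    shape:       dimensions of the matrix representation of the FIR
                 system response (spectral components stacked side by side).
    """
    Phi, Psi, Lam = (np.zeros(shape) for _ in range(3))
    for _ in range(max_iter):
        Psi_prev = Psi
        Phi = prox_row(Psi - Lam)      # (6.11a): row-wise separable subproblem
        Psi = prox_col(Phi + Lam)      # (6.11b): column-wise separable subproblem
        Lam = Lam + Phi - Psi          # (6.11c): scaled dual update
        # crude stopping test; see Section 6.3.3 for the criteria from [5]
        if (np.linalg.norm(Phi - Psi) < tol
                and rho * np.linalg.norm(Psi - Psi_prev) < tol):
            break
    return Psi
```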
Recall that the system responses $\Phi$ and $\Psi$ are constrained to lie in the FIR subspace $\mathcal{F}_T$, and hence are finite dimensional variables: it follows that each of the problems specified by the ADMM algorithm (6.11) can be formulated as a finite dimensional optimization problem by associating the FIR transfer matrices with their matrix representations. We now focus on the problem specifying the $\Psi^{k+1}$ iterate (6.11b), which can be written as
$$\begin{array}{rl}
\underset{\{R,\,M,\,N,\,L\}}{\text{minimize}} & \left\| \begin{bmatrix} C_1 & D_{12} \end{bmatrix} \begin{bmatrix} R & \sigma_y N \\ M & \sigma_y L \end{bmatrix} \right\|_{\mathcal{H}_2}^2 + \dfrac{\rho}{2} \left\| \begin{bmatrix} R & N \\ M & L \end{bmatrix} - \Phi^{k+1} - \Lambda^k \right\|_{\mathcal{H}_2}^2 \\[2ex]
\text{subject to} & \begin{bmatrix} zI - A & -B_2 \end{bmatrix} \begin{bmatrix} R & N \\ M & L \end{bmatrix} = \begin{bmatrix} I & 0 \end{bmatrix} \\[1.5ex]
& R,\, M,\, N \in \tfrac{1}{z}\mathcal{RH}_\infty, \quad L \in \mathcal{RH}_\infty \\[1ex]
& \begin{bmatrix} R & N \\ M & L \end{bmatrix} \in \mathcal{C} \cap \mathcal{L}_d \cap \mathcal{F}_T.
\end{array} \qquad (6.12)$$
From the form of this problem, it is apparent that an argument analogous to that presented in Chapter 5 applies: we first perform a column-wise separation of the optimization problem (6.12), then exploit the $d$-localized SLC $\mathcal{L}_d$ to reduce the dimension of each subproblem from global scale to the localized region defined by the $(d+2)$-outgoing set of each disturbance. Similarly, subproblem (6.11a) admits a row-wise LLQR separation, and the Lagrange multiplier update equation (6.11c) decomposes element-wise. Thus, if the ADMM weight $\rho$ is shared between subsystems prior to the synthesis procedure, the optimization problems specifying the ADMM algorithm (6.11) decompose into subproblems specified by the $(d+2)$-outgoing and $(d+2)$-incoming sets of the system; a sketch of the resulting column-wise update is given below.
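To make the column-wise separation concrete, the following sketch evaluates the $\Psi^{k+1}$ update by looping over localized subproblems, under the same hypothetical conventions as before. The index sets `rows_j` and `cols_j`, encoding the $(d+2)$-outgoing set of subsystem $j$ and the columns associated with its disturbances $\delta_{x_j}$ and $\delta_{y_j}$, and the per-subproblem solvers are assumed to be supplied; in an actual implementation, the loop would execute in parallel across subsystems.

```python
def prox_col_localized(V, subproblems):
    """Column-wise separable evaluation of the Psi-update (6.11b).

    subproblems: iterable of (rows_j, cols_j, solve_j), where rows_j is the
    (d+2)-outgoing set of subsystem j, cols_j indexes the columns of the
    disturbances at subsystem j, and solve_j maps the restricted problem
    data V[rows_j, cols_j] to the restricted optimizer (hypothetical helpers).
    Entries outside every localized region are fixed to zero by the SLC L_d.
    """
    Psi = np.zeros_like(V)
    for rows_j, cols_j, solve_j in subproblems:
        Psi[np.ix_(rows_j, cols_j)] = solve_j(V[np.ix_(rows_j, cols_j)])
    return Psi
```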
An added benefit of the LLQG framework is the ability to perform real-time re-synthesis of optimal controllers. In particular, suppose that the dynamics (6.1) of a collection $\mathcal{D}$ of subsystems change. In order to suitably update the LLQG optimal controller, only the components of the system response $(R, M, N, L)$ corresponding to the responses of subsystems $j$ satisfying $\mathbf{In}_j(d+2) \cap \mathcal{D} \neq \emptyset$ or $\mathbf{Out}_j(d+2) \cap \mathcal{D} \neq \emptyset$ need to be updated, as the sketch below illustrates.
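The bookkeeping needed to identify which subsystems must re-synthesize their controller components reduces to set intersections; a minimal sketch, assuming the incoming and outgoing sets are represented as Python sets, follows.

```python
def subsystems_to_update(in_sets, out_sets, changed):
    """Return the subsystems j whose localized responses must be re-synthesized
    after the dynamics of the subsystems in `changed` are modified.

    in_sets[j], out_sets[j]: the (d+2)-incoming and (d+2)-outgoing sets of
    subsystem j, each given as a Python set of subsystem indices.
    """
    changed = set(changed)
    return [j for j in range(len(in_sets))
            if in_sets[j] & changed or out_sets[j] & changed]
```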
Next we show that the problems specifying the iterates $\Phi^{k+1}$ and $\Psi^{k+1}$ can be solved in closed form, allowing the update equations (6.11a) and (6.11b) to be implemented via matrix multiplication. We end the section with a discussion of conditions guaranteeing the convergence of the iterates $\Phi^{k+1}$ and $\Psi^{k+1}$ to the optimal solution of the LLQG problem (6.8).
Remark 21. The ADMM approach specified in (6.11) can be used with other objective functions that admit a column-wise and/or a row-wise separation. We generalize the algorithm (6.11) to a broader class of problems, the convex localized separable SLS (CLS-SLS) problems, in Chapter 7. An interesting special case is that we can solve problem (6.8) for arbitrary $B_1$ and $D_{21}$ if $\begin{bmatrix} C_1 & D_{12} \end{bmatrix}$ is block-diagonal; in particular, this means that an LLQG controller can be synthesized in a scalable way using our proposed algorithm even if the process and sensor noise are globally correlated, so long as the subsystems' performance objectives are decoupled.
6.3.2 Analytic Solution
We now focus on optimization problem (6.12), which specifies the iterates $\Psi^{k+1}$. Following the LLQR method described in Section 5.2, we perform a column-wise separation of the objective and constraints of (6.12), and exploit the $d$-localized SLC of the system response to reduce the dimensionality of each resulting subproblem.
Specifically, for each disturbance $\delta_{x_j}$ or $\delta_{y_j}$ at subsystem $j$, we solve an optimization problem of the same form as (6.12), except with all decision variables, state-space parameters, and constraints restricted to the $(d+2)$-outgoing set of subsystem $j$. The result is an optimization problem similar to (5.12). We also note that optimization problem (5.12) and the dimensionality-reduced version of optimization problem (6.12) are least-squares problems subject to affine constraints. Consequently, the optimal solution is specified as an affine function of the problem data $\Phi^{k+1}$ and $\Lambda^k$ in the reduced dimension. The affine function for each dimensionality-reduced subproblem only needs to be evaluated once, after which the updates to the iterates (6.11b) can be performed via matrix multiplication, all in the reduced dimension; a sketch of this caching strategy is given below. We defer a detailed mathematical derivation of the analytic solution of the ADMM updates to Appendix 7.C in Chapter 7. Note that the $\Phi^{k+1}$ iterate (6.11a) can be carried out using matrix multiplication in a similar fashion.
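The following sketch illustrates the caching strategy for one dimensionality-reduced subproblem, written for a generic equality-constrained least-squares problem $\min_x \|Ax\|^2 + \frac{\rho}{2}\|x - v\|^2$ subject to $Gx = g$, where $x$ vectorizes the restricted decision variables and $v$ vectorizes the restricted problem data $\Phi^{k+1} + \Lambda^k$; the matrices $A$, $G$, $g$ are stand-ins for the restricted objective and constraint data of (6.12), not the exact quantities derived in Appendix 7.C. The KKT matrix is inverted once, and every subsequent ADMM iteration reduces to a matrix-vector product.

```python
def make_affine_update(A, G, g, rho):
    """Precompute the affine map v -> argmin_x ||A x||^2 + (rho/2)||x - v||^2
    subject to G x = g, by solving the KKT system once.

    Stationarity and primal feasibility give the linear system
        [2 A^T A + rho I   G^T] [x     ]   [rho v]
        [G                 0  ] [lambda] = [g    ].
    """
    n, m = A.shape[1], G.shape[0]
    K = np.block([[2 * A.T @ A + rho * np.eye(n), G.T],
                  [G, np.zeros((m, m))]])
    K_inv = np.linalg.inv(K)  # evaluated once; reused at every iteration

    def update(v):
        rhs = np.concatenate([rho * v, g])
        return (K_inv @ rhs)[:n]

    return update
```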
Thus, using this approach to solving the iterate updates (6.11a) and (6.11b), the LLQG optimization problem (6.8) can be solved nearly as quickly as the state-feedback problem, as the update equations require first solving least-squares problems defined on the $(d+2)$-incoming and $(d+2)$-outgoing sets of the system, and then using matrix multiplication; a hypothetical end-to-end assembly of the sketches above follows.
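Putting the pieces together, glue code combining the sketches above might look as follows, where `local_data` collects the restricted problem data $(A_j, G_j, g_j)$ and index sets for each subsystem; all names here are illustrative rather than part of the method's specification.

```python
# Hypothetical assembly: build one cached affine map per localized
# subproblem, then run the ADMM iteration using only matrix products.
solvers = []
for rows_j, cols_j, A_j, G_j, g_j in local_data:
    update_j = make_affine_update(A_j, G_j, g_j, rho)
    # adapt the vectorized affine map to the matrix-valued restricted data
    solve_j = lambda V_loc, u=update_j: u(V_loc.ravel()).reshape(V_loc.shape)
    solvers.append((rows_j, cols_j, solve_j))

Psi_opt = admm_llqg(prox_row,                                  # (6.11a), built analogously row-wise
                    lambda V: prox_col_localized(V, solvers),  # (6.11b)
                    shape, rho)
```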
6.3.3 Convergence and Stopping Criteria
Assume that the optimization problem (6.10) is feasible, and let $\Psi^*$ be an optimal solution. Further assume that the matrix $\begin{bmatrix} C_1 & D_{12} \end{bmatrix}$ has full column rank and $\begin{bmatrix} B_1 \\ D_{21} \end{bmatrix}$ has full row rank. In this case, the objective function is strongly convex with respect to $\Psi$, and hence the optimal solution $\Psi^*$ is unique.
As the extended-real-valued functions $h^{(r)}(\cdot)$ and $h^{(c)}(\cdot)$ specified in (6.9) are closed, proper, and convex, we have that strong duality holds and that optimization problem (6.10) satisfies the convergence conditions stated in [5]. From [5], the objective of (6.10) converges to its optimal value. As the objective function is a continuous function of $\Psi$ and the optimal solution $\Psi^*$ is unique, it follows that the primal variable iterates converge to $\Psi^*$, i.e., $\Psi^k \to \Psi^*$ and $\Phi^k \to \Psi^*$. Note that the rank condition on the objective function matrices is only a sufficient condition for primal variable convergence. A less restrictive condition for the convergence of the ADMM algorithm will be discussed in Appendix 7.B in Chapter 7. The design of the stopping criteria for the ADMM algorithm (6.11) can also be found in Appendix 7.B; a sketch of the standard residual-based test from [5] follows.
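For concreteness, the following is a sketch of the stopping test of [5], specialized to the consensus constraint $\Phi = \Psi$ with scaled multiplier $\Lambda$; the criteria actually used for (6.11) are those designed in Appendix 7.B.

```python
def converged(Phi, Psi, Psi_prev, Lam, rho, eps_abs=1e-4, eps_rel=1e-3):
    """Residual-based ADMM stopping test from [5] for the constraint Phi = Psi."""
    r = np.linalg.norm(Phi - Psi)             # primal residual
    s = rho * np.linalg.norm(Psi - Psi_prev)  # dual residual
    n = np.sqrt(Phi.size)
    eps_pri = n * eps_abs + eps_rel * max(np.linalg.norm(Phi),
                                          np.linalg.norm(Psi))
    # rho * Lam is the unscaled dual variable in the scaled-form iteration
    eps_dual = n * eps_abs + eps_rel * rho * np.linalg.norm(Lam)
    return r <= eps_pri and s <= eps_dual
```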