Chapter III: Recursive Feasibility, Convergence, and Asymptotic Stability
3.4 Feasible and Stable DLMPC
We first show 1). The extended-real-valued functional $h(\tilde{\Phi})$ is defined for this algorithm as
$$
h(\tilde{\Phi}) =
\begin{cases}
f(E_1\tilde{\Phi}\{1\}x_0) & \text{if } Z_{AB}E_2\tilde{H}^{\dagger}\tilde{\Phi} = I,\ \tilde{\Phi}\in\mathcal{L}_d,\ \tilde{\Phi}x_0\in\mathcal{P},\\
\infty & \text{otherwise,}
\end{cases}
$$
where $\tilde{H}^{\dagger}$ is the left inverse of $\tilde{H}$ from (2.11); $\tilde{H}$ has full column rank. When formulating the DLMPC problem (3.1) as an ADMM problem, we perform variable duplication to obtain problem (2.11). We can write (2.11) in terms of $h(\tilde{\Phi})$ with the constraint $\tilde{\Phi} = \tilde{H}\tilde{\Psi}$:
$$
\min_{\tilde{\Phi},\tilde{\Psi}}\ h(\tilde{\Phi}) \quad \text{s.t.} \quad \tilde{\Phi} = \tilde{H}\tilde{\Psi}.
$$
By assumption, $f(E_1\tilde{\Phi}x_0)$ is closed, proper, and convex, and $\mathcal{P}$ is a closed and convex set. The remaining constraints $Z_{AB}E_2\tilde{H}^{\dagger}\tilde{\Phi} = I$ and $\tilde{\Phi}\in\mathcal{L}_d$ are also closed and convex. Hence, $h(\tilde{\Phi})$ is closed, proper, and convex.
We now show 2). This condition is equivalent to showing that strong duality holds [68]. Since problem (3.1) is assumed to have a feasible solution in the relative interior of $\mathcal{P}$ by means of Lemma 4 (given that the first iteration is feasible), Slater's condition is automatically satisfied, and therefore the unaugmented Lagrangian of the problem has a saddle point.

Both conditions of the ADMM convergence result from [52] are satisfied: Algorithm 1 converges in residue, objective function, and dual variable. □

Remark 6. It is important to note that the communication network might be a directed graph, depending on the elements in the incoming and outgoing sets. The proof outlined here also holds for the case where the communication graph is directed. For additional details on ADMM convergence over directed networks, readers are referred to [69] and references therein.
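The two-block structure underlying this proof can be illustrated numerically. The toy numpy sketch below is not the DLMPC problem itself: all matrices are invented, $f(\Phi) = \tfrac{1}{2}\|\Phi - M\|_F^2$ stands in for a generic strongly convex cost, and `H` plays the role of $\tilde{H}$ (tall, full column rank, so it has a left inverse). It runs scaled-form ADMM on $\min h(\Phi)$ s.t. $\Phi = H\Psi$ and checks that the primal residual vanishes and the iterates reach the optimum, as the convergence result guarantees:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: f(Phi) = 0.5 * ||Phi - M||_F^2, constraint Phi = H @ Psi,
# with H tall and full column rank (so it admits a left inverse H^+).
H = rng.standard_normal((6, 3))
M = rng.standard_normal((6, 4))
H_pinv = np.linalg.pinv(H)          # left inverse: H_pinv @ H = I

rho = 1.0                           # ADMM penalty parameter
Phi = np.zeros_like(M)
Psi = np.zeros((3, 4))
Lam = np.zeros_like(M)              # scaled dual variable

for _ in range(500):
    # Phi-step: argmin 0.5||Phi - M||^2 + (rho/2)||Phi - H Psi + Lam||^2
    Phi = (M + rho * (H @ Psi - Lam)) / (1.0 + rho)
    # Psi-step: argmin (rho/2)||Phi - H Psi + Lam||^2  ->  Psi = H^+ (Phi + Lam)
    Psi = H_pinv @ (Phi + Lam)
    # dual ascent on the residual Phi - H Psi
    Lam = Lam + Phi - H @ Psi

# The minimizer of f over range(H) is the orthogonal projection of M onto col(H).
P = H @ H_pinv
assert np.linalg.norm(Phi - H @ Psi) < 1e-6      # primal residual vanishes
assert np.allclose(Phi, P @ M, atol=1e-4)        # iterates reach the optimum
```

Note that the $\Psi$-step is an unconstrained least-squares problem whose solution is exactly an application of the left inverse, which is why full column rank of $\tilde{H}$ matters in the proof.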
The algorithms provided are distributed and localized, with computational complexity that is independent of the global system size.
Offline synthesis of the terminal set $\mathcal{X}_T$
Our first result is an offline algorithm to compute the terminal set $\mathcal{X}_T$ from (3.2) in a distributed and localized manner. As discussed in §3.3, we compute the maximal robust positive invariant set for the unconstrained localized closed-loop system. We use SLS-based techniques to obtain a localized closed-loop map $\Phi$.
Our algorithm is based on Algorithm 10.4 from [1]. The advantage of implementing this algorithm in terms of the localized closed-loop map $\Phi$ is twofold: i) we can work with locally bounded and polytopic disturbance sets $\mathcal{W}$ by leveraging Lemmas 1 and 2, and ii) the resulting robust invariant set is automatically localized, given the localized structure of the closed-loop map.
We start by finding a localized closed-loop map $\Phi$ for system (2.1). To do this, we need to solve
$$
\min_{\Phi}\ f(\Phi) \quad \text{s.t.} \quad Z_{AB}\Phi = I,\ \Phi\in\mathcal{L}_d. \qquad (3.5)
$$
This is an SLS problem with a separable structure for most standard cost functions $f$ [51]. The separable structure admits localized and distributed computation. Even when separability is not apparent, a relaxation can often be found that allows for distributed computation (see for example [70]). The infinite-horizon solution for a quadratic cost is presented in [71]. For other costs, computing an infinite-horizon solution to (3.5) remains an open question; in these cases, we can use finite impulse response SLS with a sufficiently long time horizon.
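For intuition about the achievability constraint $Z_{AB}\Phi = I$: in its usual finite-horizon state-feedback form (up to the terminal boundary condition) it is equivalent to the block-row recursion $\Phi_x[1] = I$, $\Phi_x[t+1] = A\Phi_x[t] + B\Phi_u[t]$. A quick numpy check, with a system and gain invented purely for illustration, verifies that the closed-loop maps induced by any static feedback $u = Kx$ satisfy this recursion:

```python
import numpy as np

# Small chain system (illustrative values only).
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.2, 0.8]])
B = np.eye(3)
K = -0.4 * np.eye(3)          # a simple stabilizing static feedback
T = 8

# Closed-loop maps of u = K x: Phi_x[t] = (A+BK)^(t-1), Phi_u[t] = K Phi_x[t].
A_cl = A + B @ K
Phi_x = [np.linalg.matrix_power(A_cl, t - 1) for t in range(1, T + 1)]
Phi_u = [K @ Px for Px in Phi_x]

# Achievability, written out block-row by block-row:
assert np.allclose(Phi_x[0], np.eye(3))
for t in range(T - 1):
    assert np.allclose(Phi_x[t + 1], A @ Phi_x[t] + B @ Phi_u[t])
```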
Once a localized closed-loop map $\Phi$ has been found, we can compute the associated maximal positive invariant set. In the robust case, we leverage results from Lemmas 1 and 2, which use duality arguments to tackle specific formulations of $\mathcal{W}$. We denote by $\sigma$ the upper bound on $\|[\delta]_j\|_p$ for all $j$, where $\|\cdot\|_*$ is the dual norm of $\|\cdot\|_p$, and each $e_j$ is the $j$th vector of the standard basis. Depending on the structure of $\mathcal{W}$, the set $\mathcal{S}$ takes one of the following forms:
• Nominal (i.e., no disturbance):
$$
\mathcal{S} := \{x_0\in\mathbb{R}^n :\ \Phi\{1\}x_0\in\mathcal{P}\}.
$$
• Locally bounded disturbance:
$$
\mathcal{S} := \Big\{x_0\in\mathbb{R}^n :\ [H]_i[\Phi\{1\}]_i[x_0]_i + \sum_j \sigma\,\big\|e_j^\top[H]_i[\Phi\{2{:}T\}]_i\big\|_* \le [h]_i\ \ \forall i\Big\}.
$$
• Polytopic disturbance:
$$
\mathcal{S} := \Big\{x_0\in\mathbb{R}^n :\ H\Phi\{1\}x_0 + \Xi g \le h,\ \text{where}\ \Xi_{(s,:)} = \min_{\Xi_s\ge 0}\,\Xi_s g\ \ \text{s.t.}\ H_{(s,:)}\Phi\{2{:}T\} = \Xi_s G\ \ \forall s\Big\}.
$$
As per Lemma 4, we calculate $\mathcal{S}$ by iteratively computing $\mathcal{S}_{k+1}$ from $\mathcal{S}_k$.¹ Assume $\mathcal{S}_k$ can be written as
$$
\mathcal{S}_k = \{x_0\in\mathbb{R}^n :\ \tilde{H}x_0 \le \tilde{h}\}.
$$
Then, we can write $\mathcal{S}_{k+1}$ as
$$
\mathcal{S}_{k+1} = \{x_0\in\mathbb{R}^n :\ \Phi_{x,1}[1]\,x_0\in\mathcal{F}(\mathcal{S}_k),\ \Phi_{u,0}[0]\,x_0\in\mathcal{U}\}, \qquad (3.6)
$$
where $\mathcal{F}(\mathcal{S}_k) := \mathcal{S}_k$ in the nominal case. In the case of a locally bounded disturbance,
$$
\mathcal{F}(\mathcal{S}_k) := \Big\{x\in\mathbb{R}^n :\ [\tilde{H}]_i[x]_i + \sum_j \sigma\,\big\|e_j^\top[\tilde{H}]_i\big\|_* \le [\tilde{h}]_i\ \ \forall i\Big\},
$$
and for a polytopic disturbance,
$$
\mathcal{F}(\mathcal{S}_k) := \Big\{x\in\mathbb{R}^n :\ \tilde{H}x + \Xi g \le \tilde{h},\ \text{where}\ \Xi_{(s,:)} = \min_{\Xi_s\ge 0}\,\Xi_s g\ \ \text{s.t.}\ \tilde{H}_{(s,:)} = \Xi_s G\ \ \forall s\Big\}.
$$
These formulations use the simplifying fact that $\Phi_{x,0}[0] = I$ for all feasible closed-loop dynamics. Also, notice that by using $\mathcal{S}_k$ to calculate $\mathcal{S}_{k+1}$, the only elements of $\Phi$ that we require are $\Phi_{x,1}[1]$ and $\Phi_{u,0}[0]$.
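To make the iteration (3.6) concrete, consider the special case of a static feedback $u = Kx$, for which the two maps the iteration needs reduce to $A + BK$ and $K$. The sketch below (all numbers are illustrative) runs the nominal iteration in halfspace form, simply stacking constraints without redundancy removal, and spot-checks invariance of the resulting set:

```python
import numpy as np

# Nominal iteration (3.6) for a static feedback u = K x, where the one-step
# maps reduce to A_cl = A + B K and K. We keep S_k as {x : H x <= h} and stack
# constraints H_x A_cl^j x <= h_x, |K A_cl^j x| <= 1 for j = 0, 1, ..., J.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.5, -1.0]])
A_cl = A + B @ K                       # spectral radius ~0.89 (stable)

H_x = np.vstack([np.eye(2), -np.eye(2)]); h_x = 5.0 * np.ones(4)  # |x|_inf <= 5
H_u = np.array([[1.0], [-1.0]]);          h_u = np.ones(2)        # |u| <= 1

H_list, h_list = [H_x, H_u @ K], [h_x, h_u]
Aj = np.eye(2)
for _ in range(50):                    # enough steps for the set to settle
    Aj = Aj @ A_cl
    H_list += [H_x @ Aj, H_u @ K @ Aj]
    h_list += [h_x, h_u]
H, h = np.vstack(H_list), np.concatenate(h_list)

def member(x):
    return bool(np.all(H @ x <= h + 1e-9))

# Spot checks: origin inside; a point violating the input constraint outside;
# points inside map back into the set under the closed-loop dynamics.
assert member(np.zeros(2))
assert not member(np.array([5.0, 5.0]))          # |K x| = 7.5 > 1
for x in [np.array([1.0, 0.0]), np.array([0.0, 0.5]), np.array([-1.0, 0.3])]:
    if member(x):
        assert member(A_cl @ x)
```

A production implementation would prune redundant halfspaces (e.g., via linear programming) and use the exact termination test of Algorithm 2 rather than a fixed iteration count.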
The conditions stated in (3.6) are row- and column-wise separable in $\Phi$ (and $\Xi$); this allows us to calculate $\mathcal{S}_k$ in a distributed way. Furthermore, $\mathcal{F}(\mathcal{S}_k)$ and $\mathcal{U}$ are localizable by Assumption 1, in both nominal and robust settings. Since $\Phi$ is also localized, we can exploit its structure and rewrite $\mathcal{S}_{k+1}$ as the intersection of local sets, i.e., $\mathcal{S}_{k+1} = \mathcal{S}^1_{k+1}\cap\cdots\cap\mathcal{S}^N_{k+1}$, where
$$
\mathcal{S}^i_{k+1} = \Big\{[x_0]_{\text{in}_i(d)}\in\mathbb{R}^{[n]_i} :\ \big[\Phi_{x,1}[1]\big]_{\text{in}_i(d)}[x_0]_{\text{in}_i(d)}\in\mathcal{F}\big(\mathcal{S}^{\text{in}_i(d)}_k\big),\ \big[\Phi_{u,0}[0]\big]_{\text{in}_i(d)}[x_0]_{\text{in}_i(d)}\in\mathcal{U}_{\text{in}_i(d)}\Big\}. \qquad (3.7)
$$
We now present Algorithm 2 to compute the terminal set $\mathcal{X}_T := \mathcal{S}$ in a distributed and localized manner. Each subsystem $i$ computes its own local terminal set $\mathcal{S}^i$ using only local information exchange. This algorithm is inspired by Algorithm 10.4 in [1].
¹For simplicity of presentation, we write out $\mathcal{S}$ only for a finite-horizon $\Phi$; the proposed algorithm to synthesize $\mathcal{S}$ works for an infinite-horizon $\Phi$ as well.
Algorithm 2: Subsystem $i$ terminal set computation
input: $[\Phi_{x,1}[1]]_{\text{in}_i(d)}$, $[\Phi_{u,0}[0]]_{\text{in}_i(d)}$, $\mathcal{U}_{\text{in}_i(d)}$; for polytopic noise, also $[G]_{\text{in}_i(d)}$, $[g]_{\text{in}_i(d)}$
1: $\mathcal{S}^i_0 \leftarrow \mathcal{X}_i$, $k \leftarrow -1$
2: repeat:
3:   $k \leftarrow k+1$
4:   Share $\mathcal{S}^i_k$ with $\text{out}_i(d)$. Receive $\mathcal{S}^j_k$ from all $j\in\text{in}_i(d)$.
5:   Compute $\mathcal{S}^i_{k+1}$ via (3.7).
6:   $\mathcal{S}^i_{k+1} \leftarrow \mathcal{S}^i_k \cap \mathcal{S}^i_{k+1}$
7: until: $\mathcal{S}^i_{k+1} = \mathcal{S}^i_k$ for all $i$
8: $\mathcal{X}^i_T \leftarrow \mathcal{S}^i_k$
output: $\mathcal{X}^i_T$
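A deliberately minimal sketch of the loop structure of Algorithm 2: three decoupled scalar subsystems (all numbers invented), so each local set is an interval, step 5 has a closed form, and the neighbor exchange of step 4 carries no information here; what survives is the repeat/compute/intersect/until skeleton that the algorithm prescribes:

```python
import numpy as np

# Decoupled toy: x_i+ = a_i x_i + b_i u_i with u_i = k_i x_i,
# constraints |x_i| <= 5 and |u_i| <= 1. Local sets are intervals [-m_i, m_i].
a = np.array([0.9, 0.7, 0.8])
b = np.array([1.0, 1.0, 1.0])
k = np.array([-0.4, -0.2, -0.5])
a_cl = a + b * k                        # all closed loops stable

m = 5.0 * np.ones(3)                    # S_0^i = X_i  (step 1)
for it in range(100):                   # repeat (step 2)
    # Step 5, (3.7) specialized to intervals:
    #   {x : a_cl x in S_k} = {|x| <= m/|a_cl|},  {x : |k x| <= 1} = {|x| <= 1/|k|}
    # Step 6 is the outer np.minimum with the current m.
    m_next = np.minimum(m, np.minimum(m / np.abs(a_cl), 1.0 / np.abs(k)))
    if np.allclose(m_next, m):          # until: S_{k+1}^i = S_k^i for all i
        break
    m = m_next

# Fixed point: for these gains the input constraint is the only active cut.
assert np.allclose(m, np.minimum(5.0, 1.0 / np.abs(k)))
assert np.all(np.abs(a_cl) * m <= m + 1e-12)     # each local set is invariant
```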
If the state and input constraints $\mathcal{X}$ and $\mathcal{U}$ induce no coupling (as assumed in Algorithm 1), the resulting terminal set $\mathcal{X}_T$ will be $d$-localized. If $\mathcal{X}$ and $\mathcal{U}$ induce $d$-local coupling, the terminal set will be $2d$-localized since, in the presence of coupling, Algorithm 2 requires communication not only with local patch neighbors, but also with neighbors of those neighbors. Convergence is guaranteed for system (2.1) if it is stable when $u = 0$ and $w = 0$, and when the constraint and disturbance sets $\mathcal{X}$, $\mathcal{U}$, $\mathcal{W}$ are bounded and contain the origin, as per [72].
Computational complexity of the algorithm: In Algorithm 2, step 4 consists of communication exchanges, while steps 5, 6, and 7 are computational steps. The computations at each local subsystem $i$ are of order $O(Tm_i)$, where $m_i$ is the number of constraints on subsystem $i$. Notice that in the coupled scenario the number of constraints $m_i$ at subsystem $i$ is in general larger than in the uncoupled case, since additional constraints couple neighboring subsystems. Notice also that the communication exchanges occur only within each $d$-local neighborhood, and therefore the maximal robust positive invariant set can be computed in a purely distributed and localized manner. Hence, scalability is independent of the size of the whole system. It is also worth noting that this computation occurs offline, where both communication and computational complexity are less critical.
Online synthesis of DLMPC
DLMPC Algorithm 1 was developed under the assumption that no coupling is introduced through the cost or the constraints. We now extend this algorithm to accommodate local coupling, which is allowed as per Assumption 1. This allows us to incorporate the terminal set and cost introduced in the previous sections, both of which generally induce local coupling. We will use notation that assumes all constraints and costs (including the terminal set) are $d$-localized. In the case that the terminal set is $2d$-localized, subsystems will need to exchange information with neighbors up to $2d$ hops away; simply replace $\{\text{out}_i(d),\text{in}_i(d)\}$ with $\{\text{out}_i(2d),\text{in}_i(2d)\}$ wherever they appear in Algorithms 1, 2, and 3.
Appending the terminal set (3.2) and terminal cost (3.4) to the DLMPC problem (2.11) gives:
$$
\begin{aligned}
\min_{\tilde{\Phi},\tilde{\Psi},\gamma}\quad & f(E_1\tilde{\Phi}\{1\}x_0) + \gamma \qquad (3.8)\\
\text{s.t.}\quad & Z_{AB}E_2\tilde{\Psi} = I,\ x_0 = x(t),\ \tilde{\Phi},\tilde{\Psi}\in\mathcal{L}_d,\\
& \tilde{\Phi}\in\tilde{\mathcal{P}},\ \tilde{\Phi} = \tilde{H}\tilde{\Psi},\\
& E_1\tilde{\Phi}_T\{1\}x_0\in\gamma\mathcal{X}_T,\ 0\le\gamma\le 1,
\end{aligned}
$$
where $\tilde{\Phi}_T$ represents the block rows of $\tilde{\Phi}$ that correspond to time horizon $T$ (e.g., in the nominal setting, the last block row of $\Phi_x$), and the set $\tilde{\mathcal{P}}$ is defined so that it translates the constraint $\tilde{\Phi}x_0\in\mathcal{P}$ into a constraint directly on $\tilde{\Phi}$.
Remark 7. In the robust setting, we incorporate the terminal constraint into $\tilde{\mathcal{P}}$ to ensure robust satisfaction of the terminal constraint. However, $\mathcal{X}_T$ still appears as a nominal constraint in order to integrate the definition of the terminal cost (3.4) into the DLMPC formulation (3.1).
In the original formulation (2.11), we assume no coupling; as a result, all expressions involving $\tilde{\Phi}$ are row-separable, and all expressions involving $\tilde{\Psi}$ are column-separable. If we allow local coupling as per Assumption 1, we lose row-separability; row-separability is also lost in the new formulation (3.8) due to local coupling in the terminal set. This is because without coupling, $[x]_i$ and $[u]_i$ (which correspond to a fixed set of rows in $\Phi$) can only appear in $f_i$ and $\mathcal{P}_i$; thus, each row of $\Phi$ (and $\tilde{\Phi}$) is solved for by exactly one subsystem in step 3 of Algorithm 1. With coupling, $[x]_i$ and $[u]_i$ can appear in $f_j$ and $\mathcal{P}_j$ for any $j\in\text{in}_i(d)$; now, each row of $\Phi$ is solved for by multiple local subsystems at once, and row-separability is lost. This only affects step 3 of Algorithm 1 (i.e., the row-wise problem); other steps remain unchanged.
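The value of row-separability can be seen in a toy problem: when both the objective and the constraints decouple by rows, stacking the per-row solutions recovers the joint solution exactly, which is why each row can be assigned to a single subsystem. A small numpy illustration with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Row-separable toy: min ||Phi - M||_F^2 s.t. each row of Phi sums to 1.
# Objective and constraints both decouple by rows, so each row can be
# solved independently "by one subsystem".
M = rng.standard_normal((4, 5))
ones = np.ones(5)

def solve_row(m):
    # Projection of m onto the hyperplane {r : r @ ones = 1}.
    return m + (1.0 - m @ ones) / 5.0 * ones

Phi_rows = np.vstack([solve_row(M[i]) for i in range(4)])

# Joint solution for comparison: the same projection applied to all rows at once.
Phi_joint = M + np.outer((1.0 - M @ ones) / 5.0, ones)

assert np.allclose(Phi_rows, Phi_joint)           # per-row == joint
assert np.allclose(Phi_rows @ ones, np.ones(4))   # constraints hold row-wise
```

With coupling, the analogous decomposition fails because a single row appears in several subsystems' objectives, which is exactly what the consensus machinery below addresses.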
We write out the row-wise problem corresponding to (3.8):
$$
\begin{aligned}
\min_{\tilde{\Phi}}\quad & f(E_1\tilde{\Phi}\{1\}x_0) + \gamma + g(\tilde{\Phi},\tilde{\Psi}^k,\boldsymbol{\Lambda}^k) \qquad (3.9)\\
\text{s.t.}\quad & \tilde{\Phi}\in\tilde{\mathcal{P}}\cap\mathcal{L}_d,\ E_1\tilde{\Phi}_T\{1\}x_0\in\gamma\mathcal{X}_T,\\
& 0\le\gamma\le 1,\ x_0 = x(t),
\end{aligned}
$$
where $g(\tilde{\Phi},\tilde{\Psi},\boldsymbol{\Lambda}) = \frac{\rho}{2}\big\|\tilde{\Phi}-\tilde{H}\tilde{\Psi}+\boldsymbol{\Lambda}\big\|^2_F$, and $\boldsymbol{\Lambda}$ is the additional variable introduced by ADMM.
Note that if $\mathcal{P}$ induces coupled linear inequality constraints, row-separability is maintained. Though $[x]_i$ and $[u]_i$ appear in multiple $\mathcal{P}_j$, this corresponds to distinct constraint rows, each solved for by one subsystem, as opposed to one row of $\Phi$ being solved for by multiple subsystems. A similar idea applies if the coupling is induced by a quadratic cost, i.e., $f(\Phi x_0) = \|C\Phi x_0\|^2_F$ for some matrix $C$ that induces coupling. In this case, we can modify $\tilde{H}$ so that we enforce $\Phi = C\Psi$, and rewrite the cost as $\|\Phi x_0\|^2_F$; now, each row of $\Phi$ is solved for by exactly one subsystem, and row-separability is maintained.² However, this technique does not apply to coupling in the general case, nor does it apply to the coupling induced by the terminal cost: in general, each row of $\tilde{\Phi}$ will need to be solved for by several subsystems.
We introduce a new vector variable
$$
\mathbf{X} := E_1\tilde{\Phi}\{1\}x_0.
$$
This variable facilitates consensus between subsystems that share the same row(s) of $\tilde{\Phi}$. Each subsystem solves for the components $[\mathbf{X}]_{\text{in}_i(d)}$, and comes to a consensus with its neighboring subsystems on the value of these components. In the interest of efficiency, we directly enforce consensus on the elements of $\mathbf{X}$ instead of enforcing consensus on the rows of $\tilde{\Phi}$. We introduce a similar variable for the terminal cost $\gamma$; we define the vector $\boldsymbol{\alpha} := [\gamma_1,\dots,\gamma_N]^\top$. Subsystem $i$ solves for $\gamma_i$, which is its own copy of $\gamma$. It then comes to a consensus on this value with its neighbors, i.e., $\gamma_j$ has the same value $\forall j\in\text{in}_i(d)$. Assuming that $\mathcal{G}(A,B)$ is connected, this guarantees that $\gamma_i$ has the same value $\forall i\in\{1,\dots,N\}$, i.e., all subsystems agree on the value of $\gamma$. We combine these two consensus-facilitating variables into the augmented vector variable
$$
\tilde{\mathbf{X}} := \big[\mathbf{X}^\top\ \boldsymbol{\alpha}^\top\big]^\top.
$$
²We would also need to use $[\Psi_{u,0}[0]]_i$ for the algorithm output instead of $[\Phi_{u,0}[0]]_i$.

With this setup, we follow a variable duplication strategy and apply ADMM for consensus [73]. In particular, we duplicate variables so that each subsystem has its own copy of the components of $\tilde{\mathbf{X}}$. Problem (3.9) becomes:
$$
\begin{aligned}
\min_{\tilde{\Phi},\tilde{\mathbf{X}},\tilde{\mathbf{Y}}}\quad & \tilde{f}(\tilde{\mathbf{X}}) + g(\tilde{\Phi},\tilde{\Psi}^k,\boldsymbol{\Lambda}^k) \qquad (3.10)\\
\text{s.t.}\quad & \tilde{\Phi},\tilde{\mathbf{X}}\in\mathcal{Q},\ \tilde{\mathbf{X}}\in\tilde{\mathcal{X}}_T,\ \tilde{\Phi}\in\mathcal{L}_d,\ x_0 = x(t),\\
& [\tilde{\mathbf{X}}]_i = [E_1\tilde{\Phi}]_{ii}[x_0]_i,\quad [\tilde{\mathbf{X}}]_j = [\tilde{\mathbf{Y}}]_j\ \ \forall j\in\text{in}_i(d)\ \forall i,
\end{aligned}
$$
where we define $\tilde{f}$, $\tilde{\mathcal{X}}_T$, and $\mathcal{Q}$ as follows:
• $\tilde{f}(\tilde{\mathbf{X}}) := f(\mathbf{X}) + \frac{1}{N}\sum_{i=1}^N \gamma_i$,
• $\tilde{\mathbf{X}}\in\tilde{\mathcal{X}}_T$ if and only if $E_1\tilde{\Phi}_T\{1\}x_0\in\gamma_i\mathcal{X}_T\ \forall i$,
• $\tilde{\Phi},\tilde{\mathbf{X}}\in\mathcal{Q}$ if and only if $0\le\boldsymbol{\alpha}\le 1$ and $\{\tilde{\Phi}\in\tilde{\mathcal{P}}$ if $\tilde{\mathcal{P}}$ induces linear inequalities, or $\tilde{\mathbf{X}}\in\mathcal{P}$ otherwise$\}$.³
The structure of problem (3.10) allows us to solve it in a distributed and localized manner via ADMM-based consensus. In particular, each subsystem $i$ solves:
$$
\left\{ [\tilde{\Phi}]^{k+1,n+1}_{ir},\ [\tilde{\mathbf{X}}]^{n+1}_{\text{in}_i(d)} \right\} =
\left\{
\begin{aligned}
\operatorname*{argmin}_{[\tilde{\Phi}]_{ir},\,[\tilde{\mathbf{X}}]_{\text{in}_i(d)}}\ \ & \tilde{f}^{\,i}_{\mu}(\tilde{\mathbf{X}},\tilde{\Phi}) + \frac{\mu}{2}\,h_i(\tilde{\mathbf{X}},\tilde{\mathbf{Y}}^{n},\tilde{\mathbf{Z}}^{n})\\
\text{s.t.}\ \ & [\tilde{\Phi}]_{ir}\in\mathcal{Q}_i,\ [\tilde{\mathbf{X}}_T]_i\in\tilde{\mathcal{X}}^{i}_{T}\cap\mathcal{Q}_i,\\
& [\tilde{\mathbf{X}}]_i = [E_1\tilde{\Phi}]_{ii}[x_0]_i
\end{aligned}
\right\}
\qquad (3.11a)
$$
$$
[\tilde{\mathbf{Y}}]^{n+1}_i = \frac{h_i(\tilde{\mathbf{X}}^{n+1},0,\tilde{\mathbf{Z}}^{n})}{|\text{in}_i(d)|}, \qquad (3.11b)
$$
$$
[\tilde{\mathbf{Z}}]^{n+1}_{ij} = [\tilde{\mathbf{Z}}]^{n}_{ij} + [\tilde{\mathbf{X}}]^{n+1}_{j} - [\tilde{\mathbf{Y}}]^{n+1}_{j}, \qquad (3.11c)
$$
where, to simplify notation, we define
$$
\tilde{f}^{\,i}_{\mu}(\tilde{\mathbf{X}},\tilde{\Phi}) := \tilde{f}_i\big([\tilde{\mathbf{X}}]_{\text{in}_i(d)}\big) + g\big([\tilde{\Phi}]_{ir},[\tilde{\Psi}]^{k}_{ir},[\boldsymbol{\Lambda}]^{k}_{ir}\big),
\qquad
h_i(\tilde{\mathbf{X}},\tilde{\mathbf{Y}}_i,\tilde{\mathbf{Z}}_i) := \sum_{j\in\text{in}_i(d)} \big\| [\tilde{\mathbf{X}}]_j - [\tilde{\mathbf{Y}}]^{i}_{j} + [\tilde{\mathbf{Z}}]^{i}_{j} \big\|^2_F.
$$
Consensus iterations are denoted by $n$, outer-loop (i.e., Algorithm 1) iterations are denoted by $k$, and $\mu$ is the ADMM consensus parameter. Intuitively, $\tilde{f}^{\,i}_{\mu}$ represents the original objective from (3.9), and $h_i$ represents the consensus objective.
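The updates (3.11a)-(3.11c) instantiate consensus ADMM over $d$-local neighborhoods. The sketch below runs the simpler textbook global-consensus variant on scalar copies, mirroring how each subsystem keeps its own copy $\gamma_i$ of the terminal cost and the copies are driven to agreement; the local costs $a_i$ are invented, and at the fixed point all copies equal the minimizer of the summed cost:

```python
import numpy as np

# Global-consensus ADMM: node i keeps a local copy g[i] of a shared scalar,
# with local cost f_i(g) = 0.5*(g - a_i)^2. At consensus, all copies agree
# on the minimizer of sum_i f_i, i.e., the average of a.
a = np.array([1.0, 4.0, -2.0, 3.0])
mu = 1.0                                # consensus (penalty) parameter
g = np.zeros(4)                         # local copies (the gamma_i analogue)
z = 0.0                                 # consensus variable
u = np.zeros(4)                         # scaled dual variables

for _ in range(300):
    g = (a + mu * (z - u)) / (1.0 + mu)   # local minimizations (closed form)
    z = np.mean(g + u)                    # gather/average step
    u = u + g - z                         # dual updates

assert np.allclose(g, np.mean(a) * np.ones(4), atol=1e-6)  # copies agree
assert abs(z - np.mean(a)) < 1e-6
```

In the actual subroutine the averaging in (3.11b) runs only over $\text{in}_i(d)$, so each consensus problem stays small regardless of the global network size.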
The subroutine described by (3.11) allows us to accommodate local coupling induced by the cost and constraints (including terminal ones), and can be implemented in a distributed and localized manner. As stated above, this subroutine solves the row-wise problem (3.9), corresponding to step 3 of Algorithm 1. Thus, in order to accommodate local coupling (including the terminal set and cost), we need only replace step 3 of Algorithm 1 with the subroutine defined by Algorithm 3 below. Convergence is guaranteed by an argument similar to Lemma 6.

³By the assumptions in [53], $\mathcal{P}$ (and therefore $\tilde{\mathcal{P}}$) must induce linear inequalities in the robust setting. The only case where $\tilde{\mathcal{P}}$ may induce something other than linear inequalities is in the nominal setting.
Algorithm 3: Subsystem $i$ implementation of step 3 in Algorithm 1 when subject to localized coupling
input: tolerance parameters $\epsilon_x, \epsilon_z, \mu > 0$.
1: $n \leftarrow 0$.
2: Solve optimization problem (3.11a).
3: Share $[\tilde{\mathbf{X}}]^{n+1}_i$ with $\text{out}_i(d)$. Receive the corresponding $[\tilde{\mathbf{X}}]^{n+1}_j$ from $j\in\text{in}_i(d)$.
4: Perform update (3.11b).
5: Share $[\tilde{\mathbf{Y}}]^{n+1}_i$ with $\text{out}_i(d)$. Receive the corresponding $[\tilde{\mathbf{Y}}]^{n+1}_j$ from $j\in\text{in}_i(d)$.
6: Perform update (3.11c).
7: if $\big\|[\tilde{\mathbf{X}}]^{n+1}_i - [\tilde{\mathbf{Z}}]^{n+1}_i\big\|_F < \epsilon_x$ and $\big\|[\tilde{\mathbf{Z}}]^{n+1}_i - [\tilde{\mathbf{Z}}]^{n}_i\big\|_F < \epsilon_z$: Go to step 4 in Algorithm 1.
   else: Set $n \leftarrow n+1$ and return to step 2.
Computational complexity of the algorithm: The presence of coupling increases computational complexity compared to the uncoupled scenario (i.e., Algorithm 1); however, the scalability properties of the uncoupled scenario still apply. Complexity is determined by steps 2, 4, and 6 of Algorithm 3 and steps 5 and 7 of Algorithm 1. Except for step 2 of Algorithm 3, all of these steps can be solved in closed form. The sub-problems solved in step 2 involve $O(d^2T^2)$ optimization variables in the robust setting ($O(dT^2)$ in the nominal setting) and $O(d^2T)$ constraints. All other steps enjoy lower complexity, since their evaluation reduces to multiplication of matrices of dimension $O(d^2T^2)$ in the robust setting and $O(d^2T)$ in the nominal setting. The difference in complexity between the nominal and robust settings is consistent with the uncoupled scenario. Compared to the uncoupled scenario, an additional computational burden is incurred by the consensus subroutine in Algorithm 3, which increases the total number of iterations. The consensus subroutine also induces additional communication between subsystems, as it requires local information exchange. However, this exchange is limited to a $d$-local region, resulting in small consensus problems that converge quickly, as we illustrate empirically in §3.5. As with the original uncoupled algorithm, the complexity of this new algorithm is determined by the size of the local neighborhood and does not increase with the size of the global network.