Chapter V: Efficient Distributed Model Predictive Control
5.4 Algorithmic Implementation of Optimal Locality Selection
$$X^\top = \begin{bmatrix} 0_3 & X_1 & 0 & X_2 & X_1 & X_2 & 0_3 & X_1 & 0 & X_2 & X_1 & X_2 & 0_3 & X_1 & 0 & X_2 & X_1 & X_2 \end{bmatrix} \tag{5.25}$$

$$X^\top(I - F^\dagger F) = \begin{bmatrix} 0_3 & X_1 & 0_2 & X_1 & 0_4 & X_1 & 0 & X_2 & X_1 & X_2 & 0_5 & X_2 & 0 & X_2 \end{bmatrix} \tag{5.26}$$

$$(X_2)_{:,c}(I - H^\dagger H) = \begin{bmatrix} 0_2 & X_1 & 0 & X_1 & 0_3 & X_1 & 0 & X_2 & X_1 & X_2 & 0_3 & X_2 & X_2 \end{bmatrix} \tag{5.27}$$
Two observations are in order. First, the rank of $X^\top$ (= 2) is low compared to its number of nonzero columns (= 12), especially when $x_0$ is dense. Additionally, the structure of $X^\top$ is highly repetitive: the only two linearly independent columns are $X_1$ and $X_2$, and each appears 6 times in $X^\top$. Furthermore, the specific values of $x_0$ do not affect the rank of these matrices; only the placement of zeros and nonzeros in $x_0$ matters.
Second, notice that post-multiplying $X^\top$ by $(I - F^\dagger F)$ effectively zeros out columns of $X^\top$. However, due to the repetitive structure of $X^\top$, this does not decrease the rank of $X^\top(I - F^\dagger F)$. In fact, it is difficult to find a feasible locality constraint that results in decreased rank. This observation is corroborated by the simulations in the later portion of this chapter, in which we find that feasible locality constraints also typically preserve global performance. For more complex systems and longer predictive horizons, post-multiplication of $X^\top$ by $(I - F^\dagger F)$ no longer corresponds cleanly to zeroing out columns, but similar intuition applies.
We now apply the locality-first formulation. To check the conditions of Theorem 5.2, we must construct the matrix $(X_2)_{:,c}(I - H^\dagger H)$ and check its rank. This matrix is written out in equation (5.27). As expected, its rank is also equal to two. Additionally, notice that $(X_2)_{:,c}(I - H^\dagger H)$ contains the same nonzero columns as $X^\top(I - F^\dagger F)$: $X_1$ and $X_2$ are each repeated four times, in slightly different orders. This is unsurprising, as the two formulations are equivalent.
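To make the first observation concrete, the following minimal NumPy sketch builds a matrix with the same repeated-column layout and post-multiplies it by $(I - F^\dagger F)$ for a randomly generated $F$. The column vectors and $F$ here are stand-ins rather than the example's actual data, so the sketch only illustrates the generic behavior:

```python
import numpy as np

# Random stand-ins for the two independent columns; not the data of (5.25).
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(2), rng.standard_normal(2)
z = np.zeros(2)

# Mimic the repetitive layout [0_3  X_1  0  X_2  X_1  X_2], repeated 3 times.
XT = np.column_stack([z, z, z, x1, z, x2, x1, x2] * 3)

# Projector onto the null space of a random full-row-rank F.
F = rng.standard_normal((10, XT.shape[1]))
proj = np.eye(XT.shape[1]) - np.linalg.pinv(F) @ F

print(np.linalg.matrix_rank(XT))         # 2: only two independent columns
print(np.linalg.matrix_rank(XT @ proj))  # generically still 2
```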
It is preferable to use small values of $d$ to minimize computational complexity. For a given system and predictive horizon length, Algorithm 5.2 will return the optimal locality size $d$: the smallest value of $d$ that attains optimal global performance.
As previously described, the specific values of $x_0$ do not matter; only the placement of zeros and nonzeros in $x_0$ does. We restrict ourselves to considering dense values of $x_0$. For simplicity, our algorithm will work with the vector of ones as $x_0$; the resulting performance guarantees hold for any dense $x_0$.
To check whether a given locality constraint preserves global performance, we must check the two conditions of Theorem 5.2. First, we must check whether there exists some $\Psi$ that satisfies both the dynamics and locality constraints; this is equivalent to checking whether (5.19) is feasible. We propose to check whether

$$\|H(H^\dagger P) - P\|_\infty \le \epsilon \tag{5.28}$$

for some tolerance $\epsilon$. Condition (5.28) can be computed in a distributed manner due to the block-diagonal structure of $H$. Define partitions $[H]_i$ such that

$$H = \mathrm{blkdiag}([H]_1, [H]_2, \ldots, [H]_N). \tag{5.29}$$

In general, $H$ has $N_x$ blocks, where $N_x$ is the number of states. Since $N \le N_x$ and one subsystem may contain more than one state, we are able to partition $H$ into $N$ blocks as well. Then, (5.28) is equivalent to

$$\|[H]_i([H]_i^\dagger [P]_i) - [P]_i\|_\infty \le \epsilon \quad \forall i. \tag{5.30}$$
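As a quick sanity check on the equivalence of (5.28) and (5.30), the sketch below assembles a block-diagonal $H$ from randomly generated blocks (sizes and data are arbitrary stand-ins, not a real system) and confirms that the global residual coincides with the largest per-block residual:

```python
import numpy as np
from scipy.linalg import block_diag

# Random stand-in blocks; sizes are arbitrary.
rng = np.random.default_rng(1)
Hs = [rng.standard_normal((4, 3)) for _ in range(3)]
Ps = [rng.standard_normal((4, 2)) for _ in range(3)]
H, P = block_diag(*Hs), np.vstack(Ps)

# Global test (5.28) vs. the largest per-block test (5.30); both are
# measured here as a max-abs residual for simplicity.
global_res = np.abs(H @ (np.linalg.pinv(H) @ P) - P).max()
block_res = max(np.abs(Hi @ (np.linalg.pinv(Hi) @ Pi) - Pi).max()
                for Hi, Pi in zip(Hs, Ps))
print(np.isclose(global_res, block_res))  # True: the two checks agree
```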
If this condition is satisfied, then it remains to check the second condition of Theorem 5.2. To do so, we must construct the matrix $J := (X_2)_{:,c}(I - H^\dagger H)$ and check its rank. Notice that $J$ can be partitioned into submatrices $J_i$, i.e. $J := [J_1 \; J_2 \; \ldots \; J_N]$, where each block $J_i$ can be constructed using only information from subsystem $i$, i.e. $[H]_i$, $[P]_i$, etc. Thus, $J$ can be constructed in parallel: each subsystem $i$ performs Subroutine 5.1 to construct $J_i$.
Subroutine 5.1 checks whether the dynamics and locality constraints are feasible by checking (5.30) and, if so, returns the appropriate submatrix $J_i$. Notice that the pseudoinverse $[H]_i^\dagger$ is used in both the feasibility check and in $J_i$, so it need only be computed once. Also, $x_0$ does not appear, as we are using the vector of ones in its place.
Having obtained the $J$ corresponding to a given locality constraint, we need to check its rank to verify whether global performance is preserved, i.e. whether rank$(J) = N_u T$, as per Theorem 5.2.
Subroutine 5.1: Local sub-matrix for subsystem $i$
input: $[H]_i$, $[P]_i$, $\epsilon$
output: $J_i$
1: Compute $w = [H]_i^\dagger [P]_i$
2: if $\|[H]_i w - [P]_i\|_\infty > \epsilon$: $J_i \leftarrow$ False
   else: $J_i \leftarrow I - [H]_i^\dagger [H]_i$
return $J_i$
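For concreteness, here is a direct Python transcription of Subroutine 5.1; the function and variable names are ours, and NumPy's `pinv` stands in for the pseudoinverse:

```python
import numpy as np

def local_submatrix(H_i, P_i, eps=1e-9):
    """Sketch of Subroutine 5.1 for one subsystem.

    Returns False if the local feasibility test (5.30) fails; otherwise
    returns the local block J_i.
    """
    H_pinv = np.linalg.pinv(H_i)      # computed once, reused twice
    w = H_pinv @ P_i
    if np.abs(H_i @ w - P_i).max() > eps:        # feasibility check (5.30)
        return False
    return np.eye(H_i.shape[1]) - H_pinv @ H_i   # J_i
```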
As previously described, we restrict ourselves to locality constraints of the $d$-local neighborhood type, preferring smaller values of $d$, as these correspond to lower complexity for the localized MPC algorithm [27]. Thus, in Algorithm 5.2, we start with the smallest possible value, $d = 1$, i.e. subsystems communicate only with their immediate neighbors. If $d = 1$ does not preserve global performance, we iteratively increment $d$, construct $J$, and check its rank, until optimal global performance is attained.
Algorithm 5.2: Optimal local region size
input: $A$, $B$, $T$, $\epsilon$
output: $d$
for $d = 1 \ldots N$:
1: for $i = 1 \ldots N$: construct $[H]_i$, $[P]_i$ and run Subroutine 5.1 to obtain $J_i$;
   if $J_i$ is False: continue to the next value of $d$
2: construct $J = [J_1 \; \ldots \; J_N]$
3: if rank$(J) = N_u T$: return $d$
In step 1 of Algorithm 5.2, we call Subroutine 5.1 to check for feasibility and to construct the submatrices $J_i$. If infeasibility is encountered, or if optimal global performance is not attained, we increment $d$; otherwise, we return the optimal locality size $d$.
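A Python sketch of the overall loop of Algorithm 5.2 is given below, reusing `local_submatrix` from the previous sketch. The helper `build_blocks(A, B, T, d, i)`, which would assemble $[H]_i$ and $[P]_i$ for locality parameter $d$, is a hypothetical placeholder for the construction described earlier in this chapter, and we assume the returned blocks are conformable so that the $J_i$ can be concatenated:

```python
import numpy as np

def optimal_locality_size(A, B, T, eps, build_blocks, N, Nu):
    """Sketch of Algorithm 5.2: smallest d that preserves global performance."""
    for d in range(1, N + 1):
        blocks = [local_submatrix(*build_blocks(A, B, T, d, i), eps)
                  for i in range(N)]
        if any(J_i is False for J_i in blocks):
            continue                   # step 1: infeasible, try a larger d
        J = np.hstack(blocks)          # step 2: J = [J_1 ... J_N]
        if np.linalg.matrix_rank(J) == Nu * T:
            return d                   # step 3: optimal locality size
    return None                        # no d-local constraint qualifies
```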
Complexity
To analyze complexity, we first make some simplifying scaling assumptions. Assume that the number of states $N_x$ and the number of inputs $N_u$ are proportional to the number of subsystems $N$, i.e. $O(N_x + N_u) = O(N)$. Also, assume that the numbers of nonzeros $n_d$ for locality constraints corresponding to different parameters $d$ are related to one another as $O(n_d) = O(d \cdot n_1)$.
Steps 1 and 3 determine the complexity of the algorithm. In step 1, each subsystem performs operations on the matrix $[H]_i$, which has size of approximately $N_x(T+1)$ by $n_d$; the complexity will vary depending on the underlying implementations of the pseudoinverse and matrix manipulations, but will generally scale according to $O((n_d)^2 N_x T)$, or $O(d^3 T^2 N)$. In practice, $d$ and $T$ are typically much smaller than $N$, and this step is extremely fast; we show this in the next section.
In step 3, we perform a rank computation on a matrix of size $(N_x + N_u)T$ by $n_d$. The complexity of this operation, if Gaussian elimination is used, is $O((N_x + N_u)^{1.38} T^{1.38} n_d)$, or $O(N^{2.38} T^{2.38} d)$. Some speedups can be attained by using techniques from [41], which leverage sparsity; typically, $J$ is quite sparse, with less than 5% of its entries being nonzero. In practice, step 3 is the dominating step in terms of complexity.
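To see why step 3 dominates, a back-of-envelope cost model can be plugged into Python; the exponents are taken from the scalings above, constants and lower-order terms are ignored, and the numbers are purely illustrative:

```python
# Illustrative cost model only, not a benchmark.
def step1_cost(N, T, d):
    return d**3 * T**2 * N        # per-subsystem pseudoinverse work (step 1)

def step3_cost(N, T, d):
    return N**2.38 * T**2.38 * d  # global rank computation (step 3)

for d in (1, 2, 4):
    print(d, step1_cost(100, 10, d), step3_cost(100, 10, d))
# For N = 100 and T = 10, step 3 exceeds step 1 by orders of magnitude.
```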
We remark that this algorithm needs only to be run once, offline, for any given localized MPC problem. Given a system and predictive horizon, practitioners should first determine the optimal locality size $d$ using Algorithm 5.2, then run the appropriate online algorithm from [27].