Chapter VI: Explicit Solution and GPU Parallelization
6.3 An Explicit Solution for DLMPC
In this section, we derive an explicit solution for DLMPC. First, we use the results from Chapter 2 to reformulate problem (6.1) via the SLS parametrization. Next, we leverage the resulting optimization problem to provide an explicit analytic solution.
System Level Synthesis reformulation
By virtue of Chapter 2, the MPC subproblem (6.1) can equivalently be reformulated as:
\[
\begin{aligned}
\min_{\boldsymbol{\Phi}_{\{0\}}} \quad & \left\| \begin{bmatrix} C & \\ & D \end{bmatrix} \boldsymbol{\Phi}_{\{0\}}\, x_0 \right\|_F^2 \\
\text{s.t.} \quad & x_0 = x(\tau), \quad Z_{AB}\,\boldsymbol{\Phi}_{\{0\}} = I, \\
& \begin{bmatrix} \mathbf{x}_{\min} \\ \mathbf{u}_{\min} \end{bmatrix} \le \boldsymbol{\Phi}_{\{0\}}\, x_0 \le \begin{bmatrix} \mathbf{x}_{\max} \\ \mathbf{u}_{\max} \end{bmatrix}, \quad \boldsymbol{\Phi}_{\{0\}} \in \mathcal{L}_d,
\end{aligned}
\tag{6.2}
\]
where the matrices $C$ and $D$ are constructed by arranging $Q_t^{1/2}$ and $R_t^{1/2}$ for all $t = 1, \dots, T$, respectively, in block diagonal form. We leverage the result in [105], by which the cost function in problem (6.2) encodes the $\mathcal{H}_2$-norm of the system responses. For the remainder of the chapter we once again overload notation and write $\boldsymbol{\Phi}$ in place of $\boldsymbol{\Phi}_{\{0\}}$: since no driving noise is present, only the first block column of the system responses needs to be computed.
As shown in Chapter 2, problem (6.2) can be separated by virtue of Assumption 2 and distributed through ADMM [52]. The resulting distributed subroutine to be solved by each subcontroller $i$ in the case of problem (6.1) becomes:
\[
[\boldsymbol{\Phi}]^{k+1}_{i_r} = \left\{
\begin{aligned}
\operatorname*{argmin}_{[\boldsymbol{\Phi}]_{i_r}} \;\; & \big\| [\hat{C}]_i [\boldsymbol{\Phi}]_{i_r} [x_0]_{i_c} \big\|_F^2 + \frac{\rho}{2} \big\| [\boldsymbol{\Phi}]_{i_r} - [\boldsymbol{\Psi}]^{k}_{i_r} + [\boldsymbol{\Lambda}]^{k}_{i_r} \big\|_F^2 \\
\text{s.t.} \;\; & \begin{bmatrix} \mathbf{x}_{\min} \\ \mathbf{u}_{\min} \end{bmatrix}_{i_r} \le [\boldsymbol{\Phi}]_{i_r} [x_0]_{i_c} \le \begin{bmatrix} \mathbf{x}_{\max} \\ \mathbf{u}_{\max} \end{bmatrix}_{i_r}, \quad [x_0]_{i_c} = [x(\tau)]_{i_c}
\end{aligned}
\right\}
\tag{6.3a}
\]
\[
[\boldsymbol{\Psi}]^{k+1}_{i_c} = [\boldsymbol{\Phi}]^{k+1}_{i_c} + [\boldsymbol{\Lambda}]^{k}_{i_c} + [Z_{AB}]^{\dagger}_{i_c} \Big( [I]_{i_c} - [Z_{AB}]_{i_c} \big( [\boldsymbol{\Phi}]^{k+1}_{i_c} + [\boldsymbol{\Lambda}]^{k}_{i_c} \big) \Big),
\tag{6.3b}
\]
\[
[\boldsymbol{\Lambda}]^{k+1}_{i_r} = [\boldsymbol{\Lambda}]^{k}_{i_r} + [\boldsymbol{\Phi}]^{k+1}_{i_r} - [\boldsymbol{\Psi}]^{k+1}_{i_r},
\tag{6.3c}
\]
where we define $[\hat{C}]_i := \begin{bmatrix} [C]_i & \\ & [D]_i \end{bmatrix}$, and the partitions $[\boldsymbol{\Phi}]_{i_r}$ and $[\boldsymbol{\Phi}]_{i_c}$ are chosen according to $\mathcal{L}_d$ for the rows and the columns of $\boldsymbol{\Phi}$, respectively.
Notice that subroutine (6.3) can be solved via Algorithm 1, where the subproblems solved by each subcontroller $i$ are of dimension $d \ll N$. However, the subproblem (6.3a) update (step 3 in Algorithm 1) requires solving an optimization problem online, which is the bottleneck in terms of computational overhead. In the next subsection, we show how to obtain an explicit analytical solution for subproblem (6.3a).
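For intuition about the structure of iteration (6.3), the sketch below runs the three updates on a single, unpartitioned instance (ignoring the row/column partitions, locality, and the state/input constraints): the $\boldsymbol{\Psi}$-step (6.3b) and the dual step (6.3c) are transcribed directly, while the constrained $\boldsymbol{\Phi}$-step (6.3a) is replaced by its unconstrained ridge counterpart purely as a placeholder, since its explicit constrained solution is the subject of the next subsection. All names, sizes, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, n_cols = 8, 4        # rows of Phi (stacked states/inputs), columns (dim of x0)
rho = 1.0
Z_AB = rng.standard_normal((n_cols, n_rows))  # illustrative stand-in for Z_{AB}
x0 = rng.standard_normal(n_cols)

Phi = np.zeros((n_rows, n_cols))
Psi = np.zeros((n_rows, n_cols))
Lam = np.zeros((n_rows, n_cols))
Z_pinv = np.linalg.pinv(Z_AB)
# M = (2 x0 x0^T + rho I)^{-1} is constant across ADMM iterations.
M = np.linalg.inv(2.0 * np.outer(x0, x0) + rho * np.eye(n_cols))

for k in range(100):
    # (6.3a), placeholder: unconstrained minimizer of ||Phi x0||_F^2
    # + rho/2 ||Phi - (Psi - Lam)||_F^2, i.e. Phi = rho (Psi - Lam) M row by row.
    Phi = rho * (Psi - Lam) @ M
    # (6.3b): project Phi + Lam onto {Psi : Z_AB Psi = I} via the pseudo-inverse.
    V = Phi + Lam
    Psi = V + Z_pinv @ (np.eye(n_cols) - Z_AB @ V)
    # (6.3c): dual update on the consensus residual.
    Lam = Lam + Phi - Psi

print("feasibility ||Z_AB Psi - I||:", np.linalg.norm(Z_AB @ Psi - np.eye(n_cols)))
print("consensus   ||Phi - Psi||   :", np.linalg.norm(Phi - Psi))
```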
Explicit solution
We start by introducing the following algebraic result:
Lemma 16. Let $\Phi$ and $a$ be row vectors, $x_0$ a column vector of compatible dimension, and $b_1$, $b_2$ scalars. Then, the optimal solution to
\[
\min_{\Phi} \; |\Phi x_0|^2 + \frac{\rho}{2}\|\Phi - a\|_2^2 \quad \text{s.t.} \quad b_2 \le \Phi x_0 \le b_1,
\tag{6.4}
\]
is
\[
\Phi^\star = \big(\rho\, a - \mu\, x_0^\top\big) M,
\tag{6.5}
\]
where
\[
\mu =
\begin{cases}
\dfrac{\rho\, a M x_0 - b_1}{x_0^\top M x_0} & \text{if } \rho\, a M x_0 - b_1 > 0, \\[2ex]
\dfrac{\rho\, a M x_0 - b_2}{x_0^\top M x_0} & \text{if } \rho\, a M x_0 - b_2 < 0, \\[2ex]
0 & \text{otherwise},
\end{cases}
\]
and $M := \big(2\, x_0 x_0^\top + \rho I\big)^{-1}$.
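Before turning to the proof, formula (6.5) is straightforward to transcribe into code. The following minimal sketch (the helper name and the test data are ours, not the chapter's) evaluates it on a random instance:

```python
import numpy as np

def explicit_row_solution(a, x0, b1, b2, rho):
    """Explicit minimizer of |phi @ x0|^2 + rho/2 * ||phi - a||^2
    subject to b2 <= phi @ x0 <= b1 (Lemma 16, equation (6.5))."""
    n = x0.size
    M = np.linalg.inv(2.0 * np.outer(x0, x0) + rho * np.eye(n))
    s = rho * a @ M @ x0              # rho * a * M * x0 (scalar)
    alpha = x0 @ M @ x0               # x0^T M x0 > 0 for x0 != 0
    if s - b1 > 0:                    # upper bound active
        mu = (s - b1) / alpha
    elif s - b2 < 0:                  # lower bound active
        mu = (s - b2) / alpha
    else:                             # both constraints inactive
        mu = 0.0
    return (rho * a - mu * x0) @ M    # Phi* = (rho*a - mu*x0^T) M

# Tiny example with made-up data.
rng = np.random.default_rng(2)
a, x0 = rng.standard_normal(5), rng.standard_normal(5)
b1, b2, rho = 0.3, -0.3, 1.0
phi = explicit_row_solution(a, x0, b1, b2, rho)
print("phi @ x0 =", phi @ x0, "(should lie in [-0.3, 0.3])")
```

In the clamped cases the returned row satisfies $\Phi^\star x_0 = b_1$ or $\Phi^\star x_0 = b_2$ exactly, as one expects from complementary slackness.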
Proof. Apply the KKT conditions to optimization (6.4). In particular, the stationarity condition is:
\[
\nabla_{\Phi}\big( |\Phi^\star x_0|^2 \big) + \frac{\rho}{2}\nabla_{\Phi}\big( \|\Phi^\star - a\|_2^2 \big) + \mu_1 \nabla_{\Phi}\big( \Phi^\star x_0 - b_1 \big) + \mu_2 \nabla_{\Phi}\big( {-\Phi^\star x_0} + b_2 \big) = 0,
\]
where $\mu_1$ and $\mu_2$ represent two scalar Lagrange multipliers whose values are unknown. This leads to the following expression for the optimal $\Phi$ as a function of the unknown $\mu_1$ and $\mu_2$:
\[
\Phi^\star = \big(\rho\, a - (\mu_1 - \mu_2)\, x_0^\top\big)\big(2\, x_0 x_0^\top + \rho I\big)^{-1}.
\tag{6.6}
\]
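Expanding each gradient (with $\Phi$ a row vector) makes the step to (6.6) explicit:
\[
2\,(\Phi^\star x_0)\, x_0^\top + \rho\,(\Phi^\star - a) + \mu_1 x_0^\top - \mu_2 x_0^\top = 0
\;\;\Longleftrightarrow\;\;
\Phi^\star \big( 2\, x_0 x_0^\top + \rho I \big) = \rho\, a - (\mu_1 - \mu_2)\, x_0^\top,
\]
and right-multiplying by $\big(2\, x_0 x_0^\top + \rho I\big)^{-1}$ gives (6.6).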
Notice that by Slater's condition (Chapter 5 in [68]), strong duality holds for problem (6.4) unless $b_1 = b_2$, in which case a closed form can be found directly with a proximity operator (we omit this degenerate case in the following discussion). Hence, we can make use of the dual problem to find the optimal solution. The dual problem can be written as:
\[
\max_{\mu_1, \mu_2 \ge 0} \;\; |\Phi^\star x_0|^2 + \frac{\rho}{2}\|\Phi^\star - a\|_2^2 - \mu_1\big(b_1 - \Phi^\star x_0\big) - \mu_2\big({-b_2} + \Phi^\star x_0\big).
\]
After substituting $\Phi^\star$ into the dual problem above, the cost function becomes a quadratic function of $\mu := [\mu_1 \;\; \mu_2]^\top$. In particular, after some algebraic manipulations one can conclude that the dual problem is a quadratic program equivalent to:
\[
\max_{\mu \ge 0} \;\; \mu^\top W_2\, \mu + W_1\, \mu,
\tag{6.7}
\]
where $W_2 = \dfrac{1}{2}\, x_0^\top M x_0 \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}$ and $W_1 = \begin{bmatrix} \rho\, a M x_0 - b_1 & \; -\rho\, a M x_0 + b_2 \end{bmatrix}$.
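One way to carry out these manipulations, sketched here for completeness rather than as the chapter's own derivation, is to write (6.4) in standard quadratic-program form and read off its dual:
\[
\min_{\Phi}\; \tfrac{1}{2}\,\Phi H \Phi^\top - \rho\, a\, \Phi^\top
\quad \text{s.t.} \quad G\,\Phi^\top \le h,
\qquad
H := 2\, x_0 x_0^\top + \rho I,\;\;
G := \begin{bmatrix} x_0^\top \\ -x_0^\top \end{bmatrix},\;\;
h := \begin{bmatrix} b_1 \\ -b_2 \end{bmatrix}.
\]
The dual of a quadratic program in this form is $\max_{\mu \ge 0} -\tfrac{1}{2}\mu^\top G H^{-1} G^\top \mu - \mu^\top\big(h - G H^{-1} (\rho\, a^\top)\big)$, up to an additive constant. Since $G H^{-1} G^\top = x_0^\top M x_0 \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$ and $G H^{-1} (\rho\, a^\top) = \rho\, a M x_0 \begin{bmatrix} 1 \\ -1 \end{bmatrix}$, this is exactly (6.7) with the stated $W_2$ and $W_1$.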
In order to compute the value of $\mu$, we exploit complementary slackness:
\[
\mu_1\big(\Phi x_0 - b_1\big) = 0, \quad \text{and} \quad \mu_2\big({-\Phi x_0} + b_2\big) = 0.
\]
These conditions make it evident that $\mu_1$ and $\mu_2$ cannot both be nonzero, since by assumption $b_2 < b_1$. Hence, let us assume without loss of generality that $\mu_2 = 0$.
The solution to problem (6.7) for $\mu_1$ is then:
\[
\mu_1 =
\begin{cases}
\dfrac{\rho\, a M x_0 - b_1}{x_0^\top M x_0} & \text{if } \rho\, a M x_0 - b_1 > 0, \\[1.5ex]
0 & \text{otherwise}.
\end{cases}
\]
The case $\mu_1 = 0$ (with $\mu_2$ possibly nonzero) follows a similar structure. Notice that the matrix $M$ is positive definite by definition. Hence, $x_0^\top M x_0 > 0$ for all $x_0 \neq 0$, and the sign of $\mu_1$ is determined purely by the sign of $\rho\, a M x_0 - b_1$. This allows us to define the closed-form solution for $\mu$, and therefore for $\Phi$, in a piecewise manner depending on the region in which $x_0$ lies. The criteria are specified in Table 6.1.
Recall from optimization (6.4) that the problem is only feasible if $b_2 < b_1$ (the degenerate case $b_1 = b_2$ having been excluded); hence the regions defined in Table 6.1 are disjoint and well-defined. Leveraging the entries of Table 6.1 and equation (6.6), one can find the explicit solution (6.5). $\square$

Remark 9. Given the structure of the matrix $2\, x_0 x_0^\top + \rho I$, $M$ can be computed in a very efficient manner using the Sherman–Morrison formula.
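As a sanity check of Remark 9, the rank-one update form of $M$ can be written out and compared against a direct inverse (a small sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(3)
x0, rho = rng.standard_normal(6), 1.5
n = x0.size

# Sherman-Morrison applied to (rho*I + 2*x0 x0^T)^{-1}:
#   M = I/rho - (2/rho^2) * x0 x0^T / (1 + 2*||x0||^2 / rho),
# i.e., O(d^2) work instead of a generic O(d^3) inverse.
M_sm = np.eye(n) / rho - (2.0 / rho**2) * np.outer(x0, x0) / (1.0 + 2.0 * (x0 @ x0) / rho)
M_direct = np.linalg.inv(2.0 * np.outer(x0, x0) + rho * np.eye(n))
print("max abs difference:", np.abs(M_sm - M_direct).max())   # ~ machine precision
```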
\begin{center}
\begin{tabular}{l l}
Region in which $x_0$ lies & Corresponding solution for $\mu$ \\ \hline
$\rho\, a M x_0 - b_1 > 0$ & $\mu_1 = \dfrac{\rho\, a M x_0 - b_1}{x_0^\top M x_0}, \quad \mu_2 = 0$ \\[1.5ex]
$-\rho\, a M x_0 + b_2 > 0$ & $\mu_1 = 0, \quad \mu_2 = \dfrac{-\rho\, a M x_0 + b_2}{x_0^\top M x_0}$ \\[1.5ex]
$\rho\, a M x_0 - b_1 < 0, \;\; -\rho\, a M x_0 + b_2 < 0$ & $\mu_1 = 0, \quad \mu_2 = 0$
\end{tabular}
\end{center}
Table 6.1: Partition of the space of $x_0$ into the different regions that lead to different solutions for $\mu$.

We now apply Lemma 16 to subproblem (6.3a). Given the separability properties of the Frobenius norm and of the constraints, this optimization problem can further be separated into single rows of $[\boldsymbol{\Phi}]_{i_r}$ and $[\boldsymbol{\Psi}]^{k}_{i_r} - [\boldsymbol{\Lambda}]^{k}_{i_r}$. Notice that this is true for the first term of the objective function as well, since $[\hat{C}]_i$ is a diagonal matrix by Assumption 2, so its components can be treated as factors multiplying each of the rows accordingly.
It is important to note that, by definition of $[\boldsymbol{\Phi}]_{i_r}$, $[\mathbf{x}]_i = [\boldsymbol{\Phi}]_{i_r}[x_0]_{i_c}$. Hence, each row of $[\boldsymbol{\Phi}]_{i_r}$ multiplied with $[x_0]_{i_c}$ corresponds precisely to a given component of a state or input instance, i.e., $[x_t]_i$ or $[u_t]_i$. We can now consider one of the single-row subproblems resulting from this separation and rename its variables, where $\Phi$ represents the given row of $[\boldsymbol{\Phi}]_{i_r}$, $a$ represents the corresponding row of $[\boldsymbol{\Psi}]^{k}_{i_r} - [\boldsymbol{\Lambda}]^{k}_{i_r}$, and $x_0$, $b_1$ and $b_2$ correspond to the elements of $[x_0]_{i_c}$, $[x^{\max}_t]_i$ / $[u^{\max}_t]_i$ and $[x^{\min}_t]_i$ / $[u^{\min}_t]_i$, respectively. Without loss of generality, we set each nonzero component of $[\hat{C}]_i$ equal to $1$. By noting that for the scalar product $\Phi x_0$ it holds that $\|\Phi x_0\|_F^2 = |\Phi x_0|^2$, and that for any vector the Frobenius norm coincides with the $2$-norm, i.e., $\|\Phi - a\|_F^2 = \|\Phi - a\|_2^2$, we can directly apply Lemma 16 to each of the single-row subproblems into which subproblem (6.3a) can be separated.
Hence, by Lemma 16, an explicit solution exists for optimization (6.3a). Thus, step 3 in Algorithm 1 can be solved explicitly for problem (6.1). Notice that all other computation steps in Algorithm 1 have closed-form solutions.
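As an informal numerical check, not part of the chapter's argument, one can compare the explicit row solution of Lemma 16 against a general-purpose solver on a randomly generated single-row subproblem of the form (6.4); the data below are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
d, rho = 5, 1.0
a, x0 = rng.standard_normal(d), rng.standard_normal(d)
b1, b2 = 0.5, -0.5                      # upper / lower bound on phi @ x0

# Explicit solution of Lemma 16, written inline for self-containedness.
M = np.linalg.inv(2.0 * np.outer(x0, x0) + rho * np.eye(d))
s, alpha = rho * a @ M @ x0, x0 @ M @ x0
mu = (s - b1) / alpha if s - b1 > 0 else (s - b2) / alpha if s - b2 < 0 else 0.0
phi_explicit = (rho * a - mu * x0) @ M

# Generic constrained solver on the same single-row subproblem.
obj = lambda phi: (phi @ x0) ** 2 + 0.5 * rho * np.sum((phi - a) ** 2)
cons = [{"type": "ineq", "fun": lambda phi: b1 - phi @ x0},
        {"type": "ineq", "fun": lambda phi: phi @ x0 - b2}]
phi_numeric = minimize(obj, np.zeros(d), method="SLSQP", constraints=cons).x

print("max abs difference:", np.abs(phi_explicit - phi_numeric).max())
```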
Computational complexity: In terms of complexity reduction, the solution presented in Lemma 16 consists of a point location problem followed by a matrix multiplication. Solving for each row of $[\boldsymbol{\Phi}]_{i_r}$ therefore has complexity $O(d^2)$, since the point location problem involves only $3$ regions and the matrices involved are of size $O(d^2)$. Given that each subsystem performs this operation sequentially for each of the rows of $[\boldsymbol{\Phi}]_{i_r}$, the complexity of subproblem (6.3a) is $O(d^2 T)$. This is in contrast with the general solution in Chapter 2, where step 3 consists of solving an optimization problem with $O(d^2 T)$ optimization variables and $O(d T)$ constraints. The significant overhead reduction afforded by the explicit solution (6.5) is due to the simplicity of the point location problem: the solution space is partitioned into only $3$ regions per state/input instance, independently of the size of the global system $N$, the size of the locality region $d$, and the total number of constraints. Hence, the complexity is dominated solely by the matrix multiplication needed to compute the explicit solution $\Phi^\star$.
Differences with standard explicit MPC: Contrary to conventional explicit MPC, where regions are computed offline and the online problem reduces to a point location problem, our approach is to solve step 3 in Algorithm 1 explicitly. This ensures that all steps in the algorithm are solved either in closed form or via an explicit solution, hence we refer to our approach as explicit MPC. Another difference between standard explicit MPC and our formulation is that, in our case, the regions are not defined by polytopes in $x_0$ [48]. In our case, $M$ depends on $x_0$; thus, at each MPC iteration, new regions are computed as an explicit function of $x_0$, and for the subsequent ADMM iterations (within each MPC iteration) the parameter of the optimization problem is the corresponding row of $[\boldsymbol{\Psi}]^{k}_{i_r} - [\boldsymbol{\Lambda}]^{k}_{i_r}$, denoted as $a$ in (6.5). The regions are indeed affine with respect to this parameter. Note that $x_0$ remains fixed within each MPC iteration. This idea is illustrated in Figure 6.1, where we show the different regions involved in the computation of a given row of $[\boldsymbol{\Phi}]_{i_r}$ over two MPC iterations. In order not to overload notation, in this example we use the same symbols for a single row of the matrices $[\boldsymbol{\Phi}]_{i_r}$ and $[\boldsymbol{\Psi}]^{k}_{i_r} - [\boldsymbol{\Lambda}]^{k}_{i_r}$ as for the matrices themselves, i.e., $[\boldsymbol{\Phi}]_{i_r}$ and $[\boldsymbol{\Psi}]^{k}_{i_r} - [\boldsymbol{\Lambda}]^{k}_{i_r}$ denote a row of the homonymous matrices.
[Figure 6.1: two panels in a two-dimensional parameter space, whose axes are the first and second components of the row parameter, showing the MPC region partition and the ADMM iterates (starting at $k = 0$) at two consecutive time steps.]
Figure 6.1: Illustration of the regions and parameter location over two MPC iterations, and of the ADMM iterations needed until convergence within each MPC iteration. For simplicity of the representation, the parameters are taken to be two-dimensional.