
Distributed and Localized Model Predictive Control


Academic year: 2023



Full text

Introduction

Prior work

Prior to this thesis, no closed-loop DMPC algorithm existed in the literature that (i) computes structured feedback policies via convex optimization and (ii) can be solved at scale via distributed optimization. Moreover, since these approaches require the collection of past trajectories of the global system, their scalability and applicability in distributed settings are hindered.

Contributions and Roadmap

Through numerical experiments, we validate these results and further confirm that the complexity of the subproblems solved by each subcontroller does not scale with the size of the global system. We use the separability of the DLMPC problem to provide explicit analytical solutions to the optimization problem solved by each subcontroller.

Notation

Next, we show that imposing X𝑇 as the terminal set of the DLMPC problem (3.1) ensures recursive feasibility. We also examine the cost achieved by the optimal controller as the size of the system-response locality region varies.

Synthesis and Implementation

Introduction

The first approach, which we use here, is to compute dynamic structured closed-loop policies via a suitable parameterization. Such methods allow distributed closed-loop control policies to be synthesized via convex optimization; however, the resulting optimization problem lacks structure and is not amenable to distributed optimization techniques.

Problem Formulation

Note that the introduction of 𝛾𝑡(·) makes the formulation closed-loop, at the expense of increasing its complexity. We can therefore enforce a 𝑑-local information exchange constraint in the MPC problem (2.3), where the size of the local neighborhood 𝑑 is a design parameter, by requiring the policy of each subcontroller to respect it.
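To make the 𝑑-local information exchange constraint concrete, here is a minimal sketch (our own illustration, not code from the thesis) that builds a 𝑑-hop neighborhood mask from the support of the dynamics matrix 𝐴 of a chain system like the one in Figure 2.1; restricting each subcontroller's policy to the entries allowed by such a mask is what the locality constraint amounts to.

```python
import numpy as np

def d_hop_mask(A_support, d):
    """Boolean mask of subsystem pairs within d hops on the interaction
    graph induced by the support of A (self-loops included)."""
    n = A_support.shape[0]
    adj = ((A_support != 0) | np.eye(n, dtype=bool)).astype(int)
    reach = np.eye(n, dtype=int)
    for _ in range(d):
        reach = (reach @ adj > 0).astype(int)  # expand the neighborhood one hop
    return reach.astype(bool)

# Chain topology: subsystem i interacts with subsystems i-1 and i+1.
n, d = 6, 2
A_support = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
mask = d_hop_mask(A_support, d)
# Entry (i, j) is True iff subcontroller i may use information from subsystem j.
print(mask.astype(int))
```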

Figure 2.1: Schematic representation of a system with a chain topology.

Localized MPC via System Level Synthesis

Therefore 𝚵 lies in L𝑑𝐻, and together with the dual reformulation of the robust polytopic constraint 𝐺𝜹 ≤ 𝑔 this gives rise to problem (2.10). The perturbation-based parameterization (M, v) defined in Section 4 of [15] as u = Mw + v is a special case of the SLS parameterization (2.5).
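To make the parameterization concrete, the following numerical sketch (our own generic finite-horizon SLS construction, not the exact notation of (2.5)) builds the closed-loop system responses of a static state-feedback law and verifies that they satisfy the affine achievability constraint that the reformulations in this section exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nu, T = 3, 2, 5

A = rng.standard_normal((nx, nx))
B = rng.standard_normal((nx, nu))
K = rng.standard_normal((nu, nx))          # any static feedback u_t = K x_t

Z = np.kron(np.eye(T, k=-1), np.eye(nx))   # block down-shift operator
A_bd = np.kron(np.eye(T), A)
B_bd = np.kron(np.eye(T), B)
K_bd = np.kron(np.eye(T), K)

# Closed-loop system responses: maps from the disturbance w to x and u.
Phi_x = np.linalg.inv(np.eye(nx * T) - Z @ (A_bd + B_bd @ K_bd))
Phi_u = K_bd @ Phi_x

# Affine achievability constraint satisfied by any valid system response.
lhs = (np.eye(nx * T) - Z @ A_bd) @ Phi_x - Z @ B_bd @ Phi_u
print(np.allclose(lhs, np.eye(nx * T)))    # True
```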

Figure 2.3: Graphical representation of the scenario in Example 3.

Distributed and Localized MPC Based on ADMM

This means that the lengths of the rows and columns of 𝚽 that each subsystem solves for are much smaller than the dimensions of 𝚽 itself. Note that the complexity of the subproblems largely depends on whether noise is taken into account.
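To illustrate where the row/column separability comes from, here is a toy ADMM sketch under our own simplifying assumptions (a minimum-norm objective, a fully actuated chain, and a consensus splitting 𝚽 = 𝚿 of our own naming; this is not the DLMPC algorithm itself): the 𝚽-update decouples entrywise under the locality support constraint, while the 𝚿-update is a projection onto the affine achievability constraint that decouples across columns.

```python
import numpy as np

# Toy chain system, fully actuated (B = I), horizon T, scalar subsystems.
n, T, d, rho = 5, 4, 1, 1.0
A = 0.5 * (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
Z = np.kron(np.eye(T, k=-1), np.eye(n))                 # block down-shift operator
A_bd, B_bd = np.kron(np.eye(T), A), np.eye(n * T)
M = np.hstack([np.eye(n * T) - Z @ A_bd, -Z @ B_bd])    # achievability map: M @ Phi = I

# Support mask: causal in time (block lower triangular) and d-localized in space.
hop = np.linalg.matrix_power((np.abs(A) > 0).astype(int), d) > 0
S_half = np.kron(np.tril(np.ones((T, T), dtype=bool)), hop)
S = np.vstack([S_half, S_half])                         # same mask for Phi_x and Phi_u

Minv = np.linalg.inv(M @ M.T)                           # small; factor once offline
Phi = np.zeros((2 * n * T, n * T)); Psi = Phi.copy(); Lam = Phi.copy()
for _ in range(300):
    # Phi-step: closed form, separable across entries (hence across rows).
    Phi = np.where(S, rho * (Psi - Lam) / (2.0 + rho), 0.0)
    # Psi-step: projection onto {Psi : M @ Psi = I}, separable across columns.
    V = Phi + Lam
    Psi = V - M.T @ (Minv @ (M @ V - np.eye(n * T)))
    Lam = Lam + Phi - Psi                               # dual update

print(np.linalg.norm(M @ Phi - np.eye(n * T)))          # small: achievability holds
print(np.linalg.norm(Phi - Psi))                        # small: consensus reached
```

Each row of the 𝚽-step and each column of the 𝚿-step only touches its own small slice of data, which is the separability this section exploits.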

Simulation Experiments

We observe the trajectories of the closed-loop system under DLMPC for different noise realizations. The same holds for the size of the locality region, which is much smaller than the size of the network.

Figure 2.4: Evolution of the states of subsystem 4 under a nominal (green) and a robust (purple) controller

Conclusion

We provide a distributed and localized algorithmic solution whose complexity does not grow with the size of the network. First, we show that the runtime of our method scales well with the size of the global system.

Figure 2.5: Average runtime per DLMPC iteration with network size (left), locality parameter (middle), and time horizon (right)

Recursive Feasibility, Convergence, and Asymptotic Stability

Introduction

Moreover, the imposed structure inevitably leads to a potentially conservative inner approximation of the maximal control invariant set (the least conservative choice of terminal set). Our first important contribution is to provide the first exact, fully distributed computation of the maximal positive invariant set used as the terminal set.
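For reference, the textbook centralized iteration for the maximal positive invariant set of an autonomous closed loop 𝑥+ = 𝐴cl𝑥 under polytopic constraints is sketched below (our own generic illustration; the contribution of this chapter is precisely to avoid this centralized computation and carry it out exactly and distributedly).

```python
import numpy as np
from scipy.optimize import linprog

def max_positive_invariant(A_cl, H, h, max_iter=50, tol=1e-9):
    """Textbook iteration Omega_{k+1} = Omega_k intersect Pre(Omega_k) for
    x+ = A_cl x with polytopic state constraints {x : H x <= h}."""
    Hk, hk = H.copy(), h.copy()
    for _ in range(max_iter):
        new_H, new_h = [], []
        for a, b in zip(Hk @ A_cl, hk):            # candidate pre-set constraints
            # Is a @ x <= b already implied by Omega_k? Maximize a @ x over Omega_k.
            res = linprog(-a, A_ub=Hk, b_ub=hk, bounds=[(None, None)] * len(a))
            if res.status != 0 or -res.fun > b + tol:
                new_H.append(a)                    # not redundant: keep the constraint
                new_h.append(b)
        if not new_H:                              # nothing new: Omega_k is invariant
            return Hk, hk
        Hk, hk = np.vstack([Hk, new_H]), np.concatenate([hk, new_h])
    raise RuntimeError("no convergence within max_iter")

# Example: a stable 2-state closed loop with box constraints |x_i| <= 1.
A_cl = np.array([[0.9, 0.4], [0.0, 0.8]])
H = np.vstack([np.eye(2), -np.eye(2)])
h = np.ones(4)
H_inf, h_inf = max_positive_invariant(A_cl, H, h)
print(H_inf.shape[0], "facets describe the maximal positive invariant set")
```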

Problem Formulation

We show that it can be expressed in terms of closed-loop system responses, as defined in the System Level Synthesis (SLS) framework [51]. In this chapter, we explicitly treat the formulation and synthesis of the terminal set X𝑇 and the terminal cost 𝑓𝑇, and we use these to provide the aforementioned feasibility and stability guarantees.

Feasibility and Stability Guarantees

By the ADMM convergence result in [52], convergence of the residual, the objective function, and the dual variable is guaranteed provided two conditions hold for this algorithm. Both conditions of the ADMM convergence result from [52] are met, so Algorithm 1 converges in residual, objective function, and dual variable.

Feasible and Stable DLMPC

If X and U induce a 𝑑-local coupling, the terminal set will be 2𝑑-localized because of the presence of coupling. In the robust setting, we impose the terminal constraint on P̃ to ensure that it is robustly satisfied.

Simulation Experiments

The addition of a terminal set and terminal cost to the DLMPC algorithm introduces minimal conservatism in both the nominal and robust settings. Methods 2 and 3 behave very similarly; adding the terminal cost introduces little change.

Figure 3.1: Relative difference of the optimal cost obtained (i) with terminal set (pink) and (ii) with both terminal set and cost (yellow) compared to the optimal cost computed without terminal set and cost

Conclusion

In this section, we analyze the effect of locality constraints 𝚽 ∈ L on the optimal performance of the MPC problem (2.7). In Figure 5.2, we show how the optimal cost varies with the size of the locality region in the same system.

Global Performance Guarantees

Introduction

We find that, under reasonable assumptions, globally optimal performance can be achieved with relatively tight local communication constraints. Local communication can facilitate faster computation [28] and convergence [29], especially in the presence of delays [75]; however, this usually comes at the cost of suboptimal global performance [30].

Problem Statement

For the rest of the chapter, we abuse notation and refer to the first block column of the full matrix 𝚽, denoted 𝚽{1}, simply as 𝚽, dropping the {1}. For this reason, we investigate here conditions under which the locality parameter 𝑑 can be kept small without affecting the overall optimal performance of the system.

Global Performance for Localized MPC

We now parameterize (4.2b) and combine it with Lemma 8, which gives a closed-form expression for the localized trajectory set YL(𝑥0). (Image-space representation of the localized trajectory set.) Assume there exists some 𝚽 satisfying constraints (4.2a) and (4.2b). A subtlety is that this definition takes the whole of 𝚽 into account, while Definition 7 omits the first 𝑁𝑥 rows of 𝚽.

Algorithmic Implementation of Optimal Locality Selection

If 𝑑 = 1 does not preserve global performance, we iteratively increase 𝑑, construct 𝐽, and check its rank until optimal global performance is achieved. Assume also that the number of nonzeros 𝑁𝔐 allowed by the locality constraint corresponding to parameter 𝑑 satisfies 𝑂(𝑁𝔐) = 𝑂(𝑁𝑑𝑇).
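The search loop itself is simple; a sketch follows. Note that `build_rank_matrix` and `target_rank` are hypothetical placeholders for the construction of 𝐽 and the rank condition described in the text, which we do not reproduce here.

```python
import numpy as np

def smallest_performance_preserving_d(build_rank_matrix, target_rank, d_max):
    """Sketch of the search described above: increase the locality parameter d
    until the rank test certifies that global performance is preserved."""
    for d in range(1, d_max + 1):
        J = build_rank_matrix(d)                      # Step 1: matrix construction
        if np.linalg.matrix_rank(J) == target_rank:   # Step 3: rank determination
            return d                                  # smallest d with no performance loss
    return None                                       # no d <= d_max passes the test
```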

Simulations

We characterize how the optimal locality size changes as a function of system size and horizon length; the results are summarized in Figure 4.3. From the previous section, we found that with 100% actuation, the optimal locality size is always 1.

Figure 4.2: Runtime of matrix construction (Step 1, green) and rank determination (Step 3, pink) of Algorithm 4 vs

Conclusion

We note that the runtime has only increased 2×, while the size of the system has increased more than 12×. First, we provide an explicit solution to each of the subproblems arising from the DLMPC scheme.

Data-driven Approach in the Noiseless Case

Introduction

However, these approaches require the collection of past trajectories of the global system, which hinders their scalability and calls into question their applicability in the distributed environment. We address this gap and present a data-driven version of the model-based Distributed Localized MPC (DLMPC) algorithm for linear time-invariant systems in a noise-free environment.

Problem Formulation

Through numerical experiments, we validate these results and further confirm that the complexity of the subproblems solved by each subsystem does not scale with the full system size. In what follows, we show that problem (2.3) admits a distributed solution and implementation that requires only local data and no explicit estimation of the system model.

Data-driven System Level Synthesis

In particular, the reachability constraint (2.6) can be replaced by a data-driven representation, applying Willems' fundamental lemma [84] column-wise to the system responses. The key insight is that, by the definition of the system responses (2.5), 𝚽𝑖𝑥 and 𝚽𝑖𝑢 are the impulse responses of x and u to [𝑥0]𝑖, which are themselves valid system trajectories and can therefore be described by Willems' fundamental lemma.
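As a self-contained illustration of this idea (a generic sketch of Willems' fundamental lemma on a small LTI system of our own choosing, not the exact construction used for the system-response columns): with a persistently exciting input, every length-𝐿 state-input trajectory lies in the column space of the block Hankel matrices built from a single recorded trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nu, L, T_data = 2, 1, 4, 60
A = np.array([[0.7, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])

def simulate(x0, u_seq):
    """Roll the LTI dynamics forward; returns states aligned with the inputs."""
    xs = [x0]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs[:-1])

def block_hankel(sig, depth):
    """Block Hankel matrix: column j stacks sig[j], ..., sig[j+depth-1]."""
    T = sig.shape[0]
    return np.column_stack([sig[j:j + depth].reshape(-1) for j in range(T - depth + 1)])

# One recorded trajectory with a persistently exciting (random) input.
u_data = rng.standard_normal((T_data, nu))
x_data = simulate(rng.standard_normal(nx), u_data)
H = np.vstack([block_hankel(x_data, L), block_hankel(u_data, L)])

# Any fresh length-L trajectory lies in the column space of H.
u_new = rng.standard_normal((L, nu))
x_new = simulate(rng.standard_normal(nx), u_new)
traj = np.concatenate([x_new.reshape(-1), u_new.reshape(-1)])
g, *_ = np.linalg.lstsq(H, traj, rcond=None)
print(np.linalg.norm(H @ g - traj))                # ~0 by Willems' fundamental lemma
```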

Localized Data-driven System Level Synthesis

The sparsity pattern of the partition follows directly from the definition of the augmented localized subsystems and the subsystem dynamics (5.4). We can now show that equation (5.8) holds for each 𝑖th block column and that Φ𝑖 is a feasible impulse response of the system.

Distributed and Localized Algorithm for Data-driven MPC

We also note that the trajectory length should only scale with the localized system size rather than the global system size. Furthermore, such a cost can be included in the D3LMPC formulation in the same way as in the model-based DLMPC problem, and the structure of the resulting problem is analogous.

Simulation Experiments

We observe that the trajectory generated by our controller matches the trajectory of the optimal controller with the same locality region size. We observe that the cost of the distributed controllers decreases as the size of the locality region grows, approaching the cost of the centralized controller.

Figure 5.1: The trajectory generated by the model-based DLMPC algorithm (solid orange line) vs

Conclusion


Explicit Solution and GPU Parallelization

Introduction

In this area, several works propose an explicit MPC strategy, moving most of the computational load offline so that the computational overhead of the online algorithm is significantly reduced [91]. In this chapter, we propose an explicit MPC solution applicable to large networks and show how it enables GPU parallelization of the DLMPC algorithm.

Problem Formulation

Moreover, we show that the communication constraints between GPU computing threads resemble the communication patterns of controllers for large networks, and we take advantage of the local communication constraints already built into the DLMPC algorithm so that the explicit solution maps well onto this hardware while keeping the overhead of inter-thread communication low. In this chapter, we examine these issues and provide an accelerated solution to problem (6.1), both algorithmically and through a parallel hardware implementation.

An Explicit Solution for DLMPC

Thus, each row of [𝚽]𝑖𝑟 is multiplied by [𝑥0]𝑖; the first and second components of ([𝚽]𝑖𝑟 + [𝚲]𝑖𝑟) enter the expression, evaluated at the current iterate of ([𝚽]𝑖𝑟 + [𝚲]𝑖𝑟) at each ADMM iteration 𝑘.

Table 6.1: Partition of the space of 𝑥0 into the different regions that lead to different solutions for 𝜆.
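The online lookup implied by such a partition can be sketched as follows (our own illustration with a hypothetical data layout; the actual regions and the closed-form expressions for 𝜆 are those derived in this section): determine which region contains 𝑥0 and apply the corresponding affine expression.

```python
import numpy as np

def explicit_lookup(x0, regions):
    """Evaluate a precomputed piecewise-affine solution. `regions` is a list of
    tuples (H, h, F, g), hypothetical here: H @ x0 <= h identifies the region
    and lam = F @ x0 + g is the corresponding closed-form solution."""
    for H, h, F, g in regions:
        if np.all(H @ x0 <= h + 1e-9):
            return F @ x0 + g
    raise ValueError("x0 lies outside the precomputed partition")
```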

A GPU Parallelization for DLMPC

For each ADMM step, we assign each of the subproblems below to a single GPU thread. First, the threads have different runtimes because of the different lengths of the rows and columns they compute.
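Schematically, the assignment looks as follows (a Python sketch with a thread pool standing in for GPU threads; `solve_row_subproblem` is a hypothetical placeholder for the explicit per-row update): the subproblems are independent, so they can be dispatched concurrently, and the gather acts as the synchronization point between ADMM steps.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_admm_step(row_data, solve_row_subproblem):
    """Dispatch each independent row subproblem to its own worker, mimicking
    the one-subproblem-per-thread assignment described in the text."""
    with ThreadPoolExecutor() as pool:
        # Workers finish at different times (rows have different lengths), so
        # collecting the results below is the synchronization barrier before
        # the next ADMM step, just as on the GPU.
        return list(pool.map(solve_row_subproblem, row_data))
```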

Figure 6.2: The DLMPC algorithm consists of different computation steps. A pre-computation step is carried out to compute necessary matrices that stay constant throughout all MPC iterations

Simulation Experiments

We note that the runtime is measured for the entire simulation rather than just the ADMM part of the algorithm. We observe that GPU computation strategies scale much better with the size of the network than CPU implementations.

Figure 6.7: Comparison of the runtimes obtained by different computing strategies for different parameter regimes

Conclusion and Future work


Conclusion

Challenge Addressed

Our goal is to achieve this in a computationally efficient and non-conservative way, making DMPC applicable in real-world scenarios. Doing so requires additional considerations to provide efficient and non-conservative theoretical guarantees, as well as a thorough understanding of the impact of local communication constraints on performance.

Main Contributions

We extend these results to the purely data-driven scenario in the noise-free case, eliminating the need for a system model. Furthermore, the extension to a data-driven approach enables the application of these results to real-world distributed systems that lack an accurate system model.

Future Research Directions


