
Mobile Edge Computing (2022), Springer

Tody Ariefianto Wibowo

Academic year: 2023


Images or other third-party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. The book's broad coverage of MEC principles and applications allows for easy cross-referencing.

Mobile Cloud Computing (MCC)

However, the distances between the cloud center and end users in MCC can be hundreds of kilometers and even span countries or continents. In MEC systems, on the other hand, the servers' computing capabilities are allocated to a limited number of end users within their coverage.

Fig. 1.1 The architecture of MCC and MEC
Table 1.1 Comparison of MCC and MEC

Overview of MEC

The data is then managed and processed by the cloud providers, such as Amazon and Microsoft. Many MEC servers can be privately operated and owned by the users in environments such as home clouds.

Fig. 1.2 The MEC framework

Book Organization

Next, we present applications of MEC in typical edge computing and edge caching scenarios. Applications of MEC in the IoV for task and computation offloading are presented in Chapter 5.

A Hierarchical Architecture of Mobile Edge Computing

Given the proximity of edge servers to end users, compute-intensive and latency-sensitive tasks can be offloaded and performed with low latency and high efficiency. As edge servers are distributed ubiquitously, their computing and cache resource capacity is usually limited.

Fig. 2.1 Hierarchical MEC architecture

Computation Model

Computation Model of Local Execution

In practice, the value of fm is limited by a maximum value fmax, which reflects the limited computing power of the mobile device. A computational task can be described as D(d, c, T), where d denotes the data size of the task, c is the number of CPU cycles required to process one bit of the task, and T is the maximum delay allowed to complete the task.
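The local-execution model above can be sketched in a few lines. This is an illustrative implementation, not the book's code: it assumes the standard formulation in which a task D(d, c, T) needs d·c CPU cycles, so the local latency at frequency f is d·c/f, with f capped at fmax.

```python
def local_execution(d, c, T, f, f_max):
    """Local-execution check for a task D(d, c, T).

    d: task data size in bits; c: CPU cycles per bit;
    T: maximum allowed delay in seconds; f: chosen CPU frequency
    (cycles/s), capped at the device limit f_max.
    Returns (latency, deadline_met).
    """
    f = min(f, f_max)        # f is limited by the device's maximum frequency
    latency = d * c / f      # d*c total cycles executed at frequency f
    return latency, latency <= T

# Example: a 1 Mbit task at 100 cycles/bit on a device capped at 1 GHz
lat, ok = local_execution(d=1e6, c=100, T=0.2, f=2e9, f_max=1e9)
# lat = 0.1 s, within the 0.2 s deadline
```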

Computation Model of Full Offloading

According to [14], the energy consumption for each CPU cycle is given by ςfm², where ς is the effective switched capacitance, which depends on the chip architecture. Since the task requires cd CPU cycles in total, the corresponding energy consumption for completing the device's offloaded computation task can be expressed as ςcdfm².

A Computation Model for Partial Offloading

Since the total computational resources of the edge server are limited, there is a computational resource constraint (i.e., Σi fe,i ≤ Fe). For example, if the power or computing resources of the device are almost exhausted, offloading the task to the edge server is desirable (i.e., the offloading ratio should be close to one).
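The constraint and the ratio heuristic above can be sketched as follows. The constraint check mirrors Σi fe,i ≤ Fe from the text; the ratio rule is a deliberately simplified toy heuristic of my own, not the book's optimization formulation.

```python
def feasible_allocation(f_edge, F_e):
    """Check the edge-server resource constraint: sum_i f_e,i <= F_e."""
    return sum(f_edge) <= F_e

def offload_ratio(battery_level, local_load, low_battery=0.1):
    """Toy heuristic only: push the offloading ratio toward 1 when the
    device is nearly exhausted, as the text suggests; otherwise offload
    in proportion to the local load (illustrative, not from the book)."""
    if battery_level < low_battery:
        return 1.0
    return min(1.0, max(0.0, local_load))

# Two users requesting 1 GHz and 2 GHz from a 4 GHz edge server is feasible
feasible = feasible_allocation([1e9, 2e9], F_e=4e9)
ratio = offload_ratio(battery_level=0.05, local_load=0.3)  # nearly drained -> 1.0
```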

Offloading Policy

Binary Offloading

In [26], the authors proposed an algorithm for computing the offloading decision that trades off energy consumption against execution delay, minimizing both the overall task-execution latency and the overall energy consumption of mobile devices.

Partial Offloading

Analyzing the optimality of full offloading, the authors concluded that full offloading cannot be optimal under dynamic voltage scaling of the device. The authors in [34] considered the trade-off between energy consumption and execution delay in a multi-user scenario.

Challenges and Future Directions

Regarding the offloading decision, we reviewed current research on computational offloading, including the problems of binary offloading and partial offloading.

Introduction

In this chapter, we present the architecture of the edge caching mechanism and introduce metrics for evaluating caching performance.

The Architecture of Mobile Edge Caching

The characteristics of the user nodes can affect the data caching performance, including the node movement speed, wireless transmission power, and communication topology. Since the caching scheduling depends on the cooperation of different types of entities, including both the data servers and the requesters, this module is implemented across multiple layers.

Fig. 3.1 Hierarchical mobile edge caching architecture

Caching Performance Metrics

  • Hit Rate Ratio
  • Content Acquisition Latency
  • Quality of Experience (QoE)
  • Caching System Utility

To improve the timeliness of news, the latest news should be pushed to the user side as quickly as possible. Furthermore, because an edge caching system includes both users and services, we define the system utility as the sum of the user-side and service-side utilities.
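Two of the metrics listed above can be made concrete with a short sketch. The function names and the simple additive utility form are illustrative; the book only states that the system utility is the sum of the user-side and service-side utilities.

```python
def hit_rate(requests, cache):
    """Hit rate ratio: fraction of requests served from the edge cache."""
    hits = sum(1 for r in requests if r in cache)
    return hits / len(requests)

def system_utility(user_utilities, service_utilities):
    """System utility as the sum of user-side and service-side utilities."""
    return sum(user_utilities) + sum(service_utilities)

rate = hit_rate(["a", "b", "a", "c"], cache={"a"})      # 2 of 4 requests hit
utility = system_utility([1.0, 2.0], [3.0])             # 1 + 2 + 3
```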

Caching Service Design and Data Scheduling Mechanisms

Edge Caching Based on Network Infrastructure

Xu, Tao, and Shen [35] used small base stations as edge caching servers and investigated cache-layout optimization to reduce the long-term system transmission delay without knowing users' data preferences. The authors in [36] presented an edge caching scenario for mobile video streaming in which base stations distributed across a city provide video storage capacity.

Edge Caching Based on D2D Services

In order to minimize the normalized delivery time, an edge caching scheme based on compression and forwarding and a D2D communication scheme were proposed, which was shown to be information-theoretically optimal. The authors in [39] exploited the relationships between caching-capable vehicles in content delivery services and proposed a social-aware mobile edge caching scheme that leverages deep reinforcement learning to organize socially aware vehicles for content processing and caching and to maximize delivery utility.

Hybrid Service–Enabled Edge Caching

As the hybrid edge caching mechanism is a key technique to improve data delivery efficiency, it has attracted much research interest. However, content subscribers must pay operators for caching and transmission services, so infrastructure-backed edge caching typically has high costs.

Table 3.1 Comparison of edge caching modes

Case Study: Deep Reinforcement Learning–Empowered

System Model

DDPG is a deep policy-gradient reinforcement learning algorithm that learns a policy and a value function simultaneously. Accordingly, the DDPG-based content dispatching scheme jointly learns an action-value function and dispatching policies.
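One characteristic mechanism of DDPG is the soft (Polyak-averaged) update of its target actor and critic parameters, which stabilizes the simultaneous learning of policy and value functions. The sketch below shows only this update step on plain lists of parameters; it is not the book's dispatching scheme.

```python
def soft_update(target, source, tau=0.005):
    """Polyak-average the online parameters into the target parameters,
    as DDPG does for both its actor and critic target networks:
    theta_target <- tau * theta + (1 - tau) * theta_target."""
    return [tau * s + (1.0 - tau) * t for t, s in zip(target, source)]

target = [0.0, 0.0, 0.0]   # target-network parameters (toy 3-dim example)
source = [1.0, 1.0, 1.0]   # online-network parameters
for _ in range(1000):
    target = soft_update(target, source)
# after many updates the target parameters approach the online ones
```

The small tau keeps the target network changing slowly, which is what makes the bootstrapped value targets stable during training.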

Fig. 3.3 Architecture of the DDPG-based scheme

Numerical Results

Moreover, the architecture and processes of the MEC-supported edge model sharing system are presented to show how MEC and 6G networks can be integrated.

Fundamental Characteristics of 6G

Summary: This chapter first introduces the mobile edge computing (MEC) paradigm in 5G and 6G networks. Since the generated data contains users' private information, the risk of data leakage during data transmission and storage is a major threat to 6G networks.

Integrating Mobile Edge Computing (MEC) into 6G

Use Cases of Integrating MEC into 6G

The processed data is then transmitted by the MEC server to the cloud servers, which significantly reduces the edge-to-cloud transmission load and reduces the computational load of the centralized cloud servers. Caching the content of MEC apps reduces download latency and improves the user experience.

Applications of Integrating MEC into 6G

The MEC server collects the information of end users under its coverage and optimally determines the caching strategy by analyzing and predicting the popularity of content among distributed users. With MEC, the data can be analyzed at the edge of networks or even at the end-user end.
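The popularity-driven caching decision described above can be sketched simply: count requests observed under the server's coverage and cache the most popular items. This is a minimal illustration of the idea, not the book's prediction-based strategy, and the function name is my own.

```python
from collections import Counter

def popular_cache(request_log, capacity):
    """Cache the `capacity` most-requested contents observed at the edge.
    A stand-in for the popularity-prediction step described in the text."""
    counts = Counter(request_log)
    return {content for content, _ in counts.most_common(capacity)}

# "a" and "b" are the two most popular contents in this request log
cache = popular_cache(["a", "b", "a", "c", "a", "b"], capacity=2)
```

A real system would predict future popularity rather than just counting past requests, e.g. by weighting recent requests more heavily.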

Fig. 4.3 Applications of MEC for 6G
Fig. 4.3 Applications of MEC for 6G

Challenges of Integrating MEC into 6G

With the growing concern about data security and privacy, conventional cloud mechanisms pose serious threats of user data leakage. Empowered by emerging paradigms such as federated learning [52], MEC will significantly improve data privacy in data analysis of 6G applications.

Case Study: MEC-Empowered Edge Model Sharing for 6G

Sharing at the Edge: From Data to Model

More privacy-preserving machine learning algorithms and security cooperation mechanisms are required to improve the security and privacy of MEC systems.

Architecture of Edge Model Sharing

Processes of Edge Model Sharing

Nodes in the blockchain verify their received data and package it into candidate blocks. Nodes that receive the right to generate blocks add their candidate blocks to the blockchain.

Fig. 4.6 The processes of MEC-empowered model sharing

Introduction

To address these challenges, in this chapter we explore the characteristics of edge computing from an application and service perspective and introduce a hierarchical framework of edge computing. In the VEC system, high vehicle speeds and rapid changes in network topology lead to unique features that distinguish it from traditional edge computing systems, which are designed for hand-held mobile smart terminals.

Challenges in VEC

Moreover, these characteristics lead to new challenges and require the application of key techniques in MEC architecture design, computing service planning, and resource management, which are investigated and described as follows. However, since the vehicles in the network are mobile and distributed, a centralized control mechanism is spectrum inefficient and time consuming.

Architecture of VEC

It is worth noting that, in some cases, cross-collaboration can be implemented between heterogeneous resources. This case can be seen as using the cost of communication resources in exchange for computational resources.

Fig. 5.1 Architecture of a VEC system

Key Techniques of VEC

Task Offloading

They further proposed a joint communication and computation resource allocation scheme that minimizes the weighted-sum task offloading delay. To ensure the high reliability of completing vehicular application tasks, Hou, Ren, et al.

Heterogeneous Edge Server Cooperation

Integrating the servers equipped on infrastructures with built-in servers results in a heterogeneous collaboration mode for edge services [73]. This mode takes full advantage of the large coverage and strong capabilities of infrastructure servers and uses built-in servers to compensate for the infrastructure's lack of flexibility.

AI-Empowered VEC

Another cross-cutting research issue in AI-empowered VEC is the suitability of learning models in the context of complex vehicular networks. Considering that offloading service management may have multiple optimization objectives and that a single learning model can only meet part of the requirements, incorporating multiple models into the learning process is a promising approach.

A Case Study

Predictive Task Offloading for Fast-Moving Vehicles

To improve the efficiency of wireless backhaul transmission, the task input file should not be transmitted between RSUs. The time and resource costs of multi-hop relay transmission seriously degrade the efficiency of task transmission.

Fig. 5.3 Vehicle mobility-aware predictive task data transmission

Deep Q-Learning for Vehicular Computation

In each iteration, the Q-function value is updated with the standard rule Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]. In addition, the offloading system state includes the amount of computation waiting in the queues of the MEC servers, which is a continuous value.
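The Q-learning iteration can be sketched in tabular form. Note this is a simplification: the book's scheme uses deep Q-learning precisely because the queue state is continuous, whereas the toy states and action names ("local"/"offload") below are illustrative.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning iteration:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

# Toy offloading MDP with two states and two actions
Q = {"s0": {"local": 0.0, "offload": 0.0},
     "s1": {"local": 0.0, "offload": 0.0}}
Q = q_update(Q, s="s0", a="offload", r=1.0, s_next="s1")
# Q["s0"]["offload"] becomes 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```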

Fig. 5.6 Average utilities under different offloading schemes

Unmanned Aerial Vehicle–Assisted Mobile Edge Computing

In addition, details for resource allocation and optimization are presented in three scenarios of UAV-assisted MEC networks. Recent research has focused on advances in the use of UAV-assisted MEC to assist mobile users on the ground.

Fig. 6.1 A UAV-assisted MEC network framework

Joint Trajectory and Resource Optimization in UAV-Assisted

Resource Allocation and Optimization in the Scenario

Serve as a computing server: When terrestrial MEC networks are not reliably established, the UAV acts as a MEC server to assist ground mobile users in performing computing tasks. The energy consumption in the case of a UAV exploiting the computing capabilities of the MEC is a result of the UAV task offloading process, the local computing process, and the UAV flight.

Resource Allocation and Optimization in the Scenario

The energy consumption in the case of a UAV serving as a computing server arises from the local computing process, the task offloading process, and the UAV's flight. For the UAV's flight, the energy consumed must account for the UAV's speed, acceleration, and flight time.

Resource Allocation and Optimization in the Scenario

The minimization of the task completion time is studied in [92] while assuming the condition of a minimum number of computational bits. Unlike the individual optimization of computation latency, energy consumption, and the number of computation bits, computation efficiency is defined as the ratio of the total number of computation bits to the total energy consumption, to achieve a good trade-off between the number of computation bits and energy consumption.
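The computation-efficiency metric defined above is a simple ratio, sketched here with an illustrative function name:

```python
def computation_efficiency(total_bits, total_energy_j):
    """Computation efficiency: total number of computed bits divided by
    the total energy consumption (bits per joule), trading off throughput
    against energy as described in the text."""
    return total_bits / total_energy_j

# 1 Mbit computed for 2 J -> 500,000 bits per joule
eff = computation_efficiency(total_bits=1e6, total_energy_j=2.0)
```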

Case Study: UAV Deployment and Resource Optimization

UAV Deployment for MEC at a Wind Farm

The total number of turbines at the wind farm is T. The coordinates of the kth turbine are qk = [xk, yk]. Then, with a known flight sequence, the topology of the UAVs at the wind farm can be designed.
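As a rough illustration of designing a flight sequence over the turbine coordinates qk = [xk, yk], the sketch below uses a greedy nearest-neighbour heuristic. This is my own stand-in, not the book's optimization method (which the case study solves jointly with resource allocation).

```python
import math

def greedy_sequence(coords, start=0):
    """Greedy nearest-neighbour visiting order over turbine coordinates.
    coords: list of (x_k, y_k) tuples; start: index of the first turbine.
    Heuristic only; the resulting order is generally suboptimal."""
    unvisited = set(range(len(coords)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        cur = coords[order[-1]]
        nxt = min(unvisited, key=lambda k: math.dist(cur, coords[k]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Four turbines on a line: the UAV hops to the nearest unvisited one each time
order = greedy_sequence([(0, 0), (5, 0), (1, 0), (2, 0)])
# visiting order: 0 -> 2 -> 3 -> 1
```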

Joint Trajectory and Resource Optimization

For comparison with the proposed method, a branch-and-bound method is used to find the optimal path for the UAV according to (6.6). Furthermore, Fig. 6.6 compares the computational energy consumption of the proposed method with that of the branch-and-bound method.

The UAV is placed at the turbine whose code is B110. Figure 6.5 shows the resulting trajectory.

Conclusions

Moreover, the computational energy consumption of the two methods increases as the detection task size increases. The integration of blockchain and federated learning is also presented to improve the security and privacy of the federated learning-based MEC scheme.

The Integration of Blockchain and Mobile Edge Computing

The Blockchain Structure

Adjacent leaves are concatenated in pairs, and the hash of each concatenation forms the parent node. This structure also helps improve the traceability and transparency of the data stored in the blockchain.
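The pairwise hashing described above is the Merkle tree construction used in block structures. A minimal sketch, assuming SHA-256 and the common convention of duplicating the last hash on odd-sized levels (conventions vary between blockchains):

```python
import hashlib

def merkle_root(leaves):
    """Compute a Merkle root: hash each leaf, then repeatedly concatenate
    adjacent hashes in pairs and hash the concatenation to form the
    parent, until a single root hash remains."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
```

Changing any single transaction changes the root, which is what makes tampering with stored data detectable.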

Fig. 7.2 Block structure

Blockchain Classification

Centralization: The main difference between the three types of blockchains is that a public blockchain is decentralized, a consortium blockchain is partially centralized, and a private blockchain is fully centralized because it is controlled by a single group. A private blockchain is associated with low energy consumption and short consensus latencies due to centralization.

Integration of Blockchain and MEC

Caching providers provide feedback to the base station about the availability of caching resources and their future plans. In the consensus process, one of the base stations is chosen as the leader to create a new block.

Fig. 7.3 Blockchain-empowered secure content caching

Edge Intelligence: The Convergence of AI and MEC

Federated Learning in MEC

The end users in the system act as the federated learning clients, and the edge servers act as the federated learning aggregation servers. However, new challenges have also arisen in implementing federated learning in MEC systems.
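The aggregation step the edge server performs can be sketched as a FedAvg-style weighted average of client parameters. This is a generic illustration of federated averaging on plain lists, with hypothetical names, rather than the book's specific scheme.

```python
def fed_avg(client_updates, client_sizes):
    """FedAvg-style aggregation at the edge server: average client model
    parameters, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_updates[0])
    return [sum(w[i] * n for w, n in zip(client_updates, client_sizes)) / total
            for i in range(dim)]

# Two clients; the second has 3x as much local data, so it dominates
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# -> [2.5, 3.5]
```

Only model parameters leave the clients, which is why federated learning helps with the data-privacy concerns discussed earlier.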

Fig. 7.4 Federated learning–empowered MEC

Transfer Learning in MEC

The quality of service in MEC systems can be significantly improved by using federated transfer learning techniques. Federated transfer learning can mitigate the requirement for large amounts of data to train machine learning models.

MEC in Other Applications

MEC in Pandemics

With federated transfer learning, knowledge can be transferred between different users based on trained machine learning models. Since machine learning models can be trained with small amounts of data, federated transfer learning reduces training computation and the communication overhead of data transmission.

Fig. 7.8 MEC in pandemics

MEC in the Industrial IoT (IIoT)

Moreover, in addition to traditional types of energy that damage the environment, renewable energy sources such as solar, wind and tidal energy are beginning to be widely used in industrial production. The time-varying and unstable characteristics of renewable energy supply also require MEC analytical monitoring and adaptive planning.

Fig. 7.9 MEC in the IIoT

MEC in Disaster Management

Representative related works include the study by Ning et al. on the energy–latency tradeoff for energy-aware offloading in mobile edge computing networks, and Liu's deep reinforcement learning (DRL)-based device-to-device (D2D) caching scheme that combines blockchain and mobile edge computing.

