Data management has become a strong focus across the industry, driven by the speed and volume of data acquisition and by new technologies. In this work, the framework is implemented to provide minimum latency to user devices at the edge of the network, based on the SLA between the service provider and the network provider. It provides a service chain placement algorithm, monitors the network for potential bottlenecks that may arise in the future, and removes them.
To tackle the high demand for Cloud Computing technologies, researchers in both academia and industry are converging on the view that it is better to have smaller instances of data centers closer to the user devices. This not only reduces the response time but also reduces the load on the network. For the end user, this means being served by services closer to them than the Cloud.
These types of scenarios need to be detected and handled in advance to provide seamless services to the users.
Challenges
For example, a user may request VNFs that require a 1 GHz CPU, 8 GB of RAM, and 10 Mbps connections between compute nodes, with three geographically distributed network endpoints connecting users. VNF requests are dynamic: they can arrive at any time and remain in the network for any length of time. Because the capacities of the Edge and Fog layers are limited, some VNF requests may be denied, or some VNFs may need to migrate to the Cloud or use resource-sharing scheduling techniques to ensure guaranteed resource availability.
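As a concrete illustration, the following Python sketch models such a request. The field names (cpu_ghz, ram_gb, bw_mbps, endpoints, max_latency_ms, priority) and the example latency bound are illustrative assumptions, not notation taken from this work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VNFRequirement:
    cpu_ghz: float  # per-VNF CPU demand, e.g. 1 GHz
    ram_gb: float   # per-VNF RAM demand, e.g. 8 GB

@dataclass
class SLARequest:
    chain: List[VNFRequirement]  # the requested VNF service chain
    bw_mbps: float               # bandwidth between compute nodes, e.g. 10 Mbps
    endpoints: List[str]         # geographically distributed user entry points
    max_latency_ms: float        # end-to-end latency bound from the SLA
    priority: int = 0            # higher value = place closer to the users

# The example from the text: 1 GHz CPU, 8 GB RAM, 10 Mbps links,
# and three distributed endpoints connecting users.
req = SLARequest(
    chain=[VNFRequirement(cpu_ghz=1.0, ram_gb=8.0)],
    bw_mbps=10.0,
    endpoints=["ep-1", "ep-2", "ep-3"],
    max_latency_ms=20.0,  # assumed value for illustration
)
```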
Some SLAs will have high priority and must be placed close to the end users to provide low-latency service. This increases the network load and indirectly affects other SLAs that share the same node, link, or both. For example, a streaming service broadcasting an El Clásico soccer match will create a high load on the network; temporarily moving or replicating its VNFs closer to the user devices would reduce the core network load.
Related Work
SLAs can have different priorities depending on the negotiations between the service provider and the network provider. The system can be thought of as an Orchestrator built on top of an SDN controller. When an online SLA request is received, the placement algorithm (Algorithm 1) places the VNFs so that the SLA constraints are met.
System Block Diagram
OpenFlow v1.5 is used to communicate with the OVS switches, and a client-server protocol is used between the system and the distributed Hypervisors. The Orchestrator uses the information from the in-memory database and runs either the placement or the bottleneck-removal algorithm. It can also set up or remove instances of VNFs on a Hypervisor if needed.
The Monitor module collects switch statistics from the OVS switches and measures connection latency (Cite) and Hypervisor CPU usage using the client-server protocol mentioned above. The Flow Manager, on cue from the Orchestrator, adds or removes flow rules on the underlying OVS switches. Periodic network state information is maintained in the in-memory database by the Monitor module.
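A minimal sketch of such a monitoring loop is shown below, with the OpenFlow statistics request, the client-server CPU query, and the latency probe abstracted as injected callables; these names (poll_switch, poll_cpu, probe_latency) and the 5-second polling period are assumptions, not the system's actual interfaces.

```python
import time
from typing import Callable, Dict, Iterable, Tuple

def monitor_loop(
    switches: Iterable[str],
    hypervisors: Iterable[str],
    db: Dict[Tuple, object],                # stand-in for the in-memory database
    poll_switch: Callable[[str], object],   # wraps the OpenFlow statistics request
    poll_cpu: Callable[[str], float],       # wraps the client-server CPU query
    probe_latency: Callable[[str], float],  # wraps the connection latency probe
    interval_s: float = 5.0,                # assumed polling period
) -> None:
    """Periodically collect statistics and keep them in the in-memory store."""
    while True:
        for sw in switches:
            db[("switch_stats", sw)] = poll_switch(sw)  # per-switch OVS counters
            db[("latency", sw)] = probe_latency(sw)     # connection latency
        for hv in hypervisors:
            db[("cpu", hv)] = poll_cpu(hv)              # Hypervisor CPU usage
        time.sleep(interval_s)
```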
In addition, given the advantages of Edge-Fog-Cloud Computing, placing VNFs on the Edge or Fog layer reduces the response time and the network load on the upper layers. The algorithm accepts online SLA requests and computes a VNF service chain placement that meets the SLA constraints. It ensures that VNFs are placed close to the user devices, providing low network latency for user traffic originating from any user endpoint specified in the SLA.
Placement Algorithm
Notation:
Seen_x: list of entry-point nodes that have visited node x.
d_u[v]: delay incurred to node v since starting from entry point u.
L_(u,v): delay introduced by link (u, v).
Start: node where the first VNF of the service chain is placed.
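One plausible reading of this notation is a shortest-delay expansion from every user entry point, with a node becoming a candidate Start once every entry point has reached it. The Python sketch below implements that reading; it is an interpretation of the notation, not the paper's exact pseudocode, and the tie-breaking rule (minimize the worst-case entry-point delay) is an assumption.

```python
import heapq
from typing import Dict, Hashable, List, Tuple

def choose_start(graph: Dict[Hashable, List[Tuple[Hashable, float]]],
                 entry_points: List[Hashable]) -> Hashable:
    """Pick the Start node for the first VNF of the service chain.

    graph[v] lists (neighbor, link_delay) pairs, i.e. L_(v, neighbor).
    d[u][v] mirrors d_u[v]; seen[x] mirrors Seen_x from the notation.
    """
    d: Dict[Hashable, Dict[Hashable, float]] = {}
    for u in entry_points:
        # Dijkstra from entry point u over link delays.
        dist = {u: 0.0}
        pq = [(0.0, u)]
        while pq:
            delay, v = heapq.heappop(pq)
            if delay > dist.get(v, float("inf")):
                continue
            for w, link_delay in graph.get(v, []):
                nd = delay + link_delay
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(pq, (nd, w))
        d[u] = dist

    # Seen_x: entry points whose expansion has visited node x.
    seen = {x: [u for u in entry_points if x in d[u]] for x in graph}
    candidates = [x for x in graph if len(seen[x]) == len(entry_points)]
    # Assumption: minimize the maximum delay any entry point incurs to Start.
    return min(candidates, key=lambda x: max(d[u][x] for u in entry_points))
```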
Complexity
Working Illustration
Some network resources may be overutilized or underutilized depending on the placement of VNFs. In the worst case, even if resources are available at the Edge or Fog layer, placing a VNF there may not be possible because it would violate some SLA constraint. This results in placing VNFs in the Cloud, forgoing the advantages of the Edge and Fog layers.
Our algorithm does not aim for optimal use of network resources; it aims to place VNFs close to user devices. Suppose the number of requests for certain VNFs increases tremendously, resulting in high CPU utilization on the hosting node and increased load on the user-to-VNF connections. A bottleneck may thus arise from high utilization of any network resource, which may in turn violate one or more SLAs.
Therefore, a proactive approach requires detecting possible future bottlenecks using threshold-based detection. The cost of a threshold is some underutilization of resources, but it avoids SLA violations. We consider node bottlenecks caused by high CPU utilization of a Hypervisor and migrate one or more VNFs away from that Hypervisor to reduce its CPU load.
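A minimal sketch of this threshold check, assuming an 80% CPU threshold (the text does not state the actual value) and the monitored readings from the in-memory database:

```python
CPU_THRESHOLD = 0.80  # assumed value; kept below saturation so action precedes violation

def detect_bottlenecks(db, hypervisors):
    """Flag Hypervisors whose monitored CPU utilization crosses the threshold.

    Acting at the threshold trades some resource headroom for avoiding
    SLA violations, as discussed above.
    """
    return [hv for hv in hypervisors if db[("cpu", hv)] >= CPU_THRESHOLD]
```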
High CPU Utilization: Some VNFs on a Hypervisor may become heavily used, increasing CPU utilization on the Hypervisor, which affects the performance of the other VNFs.
High Link Throughput: An increase in traffic for a particular SLA can lead to high load on the links between the VNFs. VNFs in the Service Chain are selected in descending order of their utilization of the link.
The algorithm also avoids a ping-pong effect of traffic in the network due to the service chain, and it considers several cases during migration without violating the SLA constraints.
Algorithm
To avoid looping, the algorithm also considers migrating part of the service chain of VNFs, if possible, to the neighbor node hosting either the predecessor or the successor of that part of the service chain. Higher available RAM at the node and higher link bandwidth increase the chances of migration, while link latency reduces them. When part of the service chain is installed on the same node, the VNFs at the two ends of that part of the chain are preferred, to avoid loops of traffic, as shown in Fig. 5.1.
Nodes that host the predecessor or successor of this part of the service chain are also preferred, as shown in Fig. 5.2. If a migration does not violate any SLA constraints, the candidate with the highest Priority Score is retained (line 8). If no migration is possible due to insufficient resources at the neighboring nodes, resources are created by migrating a VNF away from a neighboring node.
Victim VNFs are therefore considered at both the bottleneck node and its neighboring nodes (lines 9-16). If no migration is possible even then, the network is assumed to be fully utilized, and a victim VNF, together with all the other VNFs of its service chain, is migrated to the Cloud.
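The sketch below condenses this selection logic into one plausible form: VNFs on the bottlenecked node are examined in descending link utilization, a Priority Score grows with the neighbor's free RAM and link bandwidth and shrinks with link latency, and the fallback order (direct migration, eviction at a neighbor, whole-chain migration to the Cloud) mirrors the cases above. The score formula, the attribute names on neighbor objects, and the helper callables are assumptions.

```python
def priority_score(free_ram_gb: float, bw_mbps: float, latency_ms: float) -> float:
    # Assumption: free RAM and bandwidth raise the score, link latency lowers it.
    return (free_ram_gb * bw_mbps) / (1.0 + latency_ms)

def plan_migration(vnfs, neighbors, fits, make_room, to_cloud):
    """Return a migration decision for a bottlenecked node.

    vnfs holds (vnf, link_utilization) pairs; fits(vnf, n) checks the SLA
    constraints at neighbor n; make_room(vnf, n) tries to evict a victim VNF
    from n first; to_cloud(vnf) moves the VNF with its whole service chain.
    """
    # Consider VNFs in descending order of their link utilization.
    for vnf, _util in sorted(vnfs, key=lambda p: p[1], reverse=True):
        # Case 1: direct migration to the best-scoring feasible neighbor.
        feasible = [n for n in neighbors if fits(vnf, n)]
        if feasible:
            best = max(feasible, key=lambda n: priority_score(
                n.free_ram_gb, n.bw_mbps, n.latency_ms))
            return ("migrate", vnf, best)
        # Case 2: create resources by evicting a victim VNF at a neighbor.
        for n in neighbors:
            if make_room(vnf, n):
                return ("migrate_after_eviction", vnf, n)
    # Case 3: network fully utilized; move a victim VNF together with
    # all other VNFs of its service chain to the Cloud.
    victim = max(vnfs, key=lambda p: p[1])[0]
    return ("migrate_chain_to_cloud", to_cloud(victim))
```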
Complexity
Working Illustration
Migration to Neighbor
Migration by creating resources at the Neighbor
This chapter presents the performance evaluation of the Placement and Migration Algorithms on both simulation and emulation testbeds. SLAs with different service chain lengths are randomly generated, with Users on different Edge Hypervisors. No VNF of an SLA is placed on the same Hypervisor that hosts the User Container.
Due to the limited resources of the host machine, only a limited number of SLAs could be deployed. With dedicated user entry points, the placement algorithm converges at the same point and compares the same conditions for each SLA, increasing the placement time uniformly. In this experiment, we measure the ratio of the number of SLAs placed on Hypervisors at the Edge and Fog layers to the total number of SLAs to be placed, for different service chain lengths.
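In symbols, the reported metric can be written as:

\[
  \text{Placement Ratio} =
  \frac{\lvert \text{SLAs placed on Edge/Fog Hypervisors} \rvert}
       {\lvert \text{SLAs to be placed} \rvert}
\]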
Initially, as VNFs/SLAs are deployed, network resources at the Edge and Fog layers are consumed. For subsequent VNFs/SLAs, the resources at the Edge and Fog layers are exhausted, and those VNFs are therefore deployed in the Cloud. Increasing the length of the service chain also reduces the fraction of SLAs placed in the Edge-Fog layers.
The CDF also shows the frequency of bottlenecks handled by the different cases of the migration algorithm. When multiple SLAs are deployed in the network, resources at the Edge and Fog layers become occupied; therefore, when a bottleneck is detected, it can happen that no VNF on the bottlenecked Hypervisor can be migrated (Cases 1 and 2 are not applicable).
In that event, a VNF on this Hypervisor is migrated, together with all the other VNFs of its service chain, to the Cloud. Overall, the placement algorithm placed VNFs closer to the user devices by placing smaller instances at the Edge and the larger or heavier instances at the upper layers.
Cumulative Placement Time
Simulation
Edge Hypervisor RAM: 16 GB
Cloud Hypervisor CPU cores: 1024 each
Fog Hypervisor CPU cores: 32 each
Edge Hypervisor CPU cores: 8 each
Emulation
Placement Ratio Over Edge-Fog Network
Simulation
Emulation
CDF of the Bottleneck Removal Algorithm
Simulation
Edge Hypervisor RAM: 32 GB
Cloud Hypervisor CPU cores: 1024
Fog Hypervisor CPU cores: 64
Edge Hypervisor CPU cores: 32

In this work, we have investigated the handling of online SLA requests from Service Providers by placing VNFs closer to the user devices, which reduces both the network delay and the load on the network. Upon detection of a possible node bottleneck by the threshold-based system, it was removed with the Bottleneck Removal algorithm by migrating VNFs.