
Thesis submitted in partial fulfillment of the requirements for the degree of


Academic year: 2023


The locations of the hypervisors and controllers together determine the latency of network elements in a virtualized, software-defined network. The first strategy fixes the hypervisor(s) in the physical network and then determines the placement of controllers in each virtual network.

Figure 1.1: High-level overview of HyperFlow [1].

Motivation of the Research Work

Constraint (1.4) is known as the demand constraint, which ensures that the total demand of switches assigned to a controller does not exceed its capacity. Constraint (1.8) ensures that the total demand of switches redistributed to controller k does not exceed its remaining capacity Uk0.
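To make the capacity bookkeeping concrete, here is a minimal sketch of the demand constraint check; the switch and controller names, demands, and capacities are hypothetical values of our own:

```python
# Hypothetical check of the demand constraint (1.4): the total demand of the
# switches assigned to a controller must not exceed that controller's capacity.

def satisfies_demand_constraint(assignment, demand, capacity):
    """assignment maps switch -> controller; demand and capacity are dicts."""
    load = {}
    for switch, controller in assignment.items():
        load[controller] = load.get(controller, 0) + demand[switch]
    return all(load[c] <= capacity[c] for c in load)

# Toy instance: two controllers with capacity 10 each.
demand = {"s1": 4, "s2": 5, "s3": 6}
capacity = {"c1": 10, "c2": 10}
print(satisfies_demand_constraint({"s1": "c1", "s2": "c1", "s3": "c2"}, demand, capacity))  # True: loads 9 and 6
print(satisfies_demand_constraint({"s1": "c1", "s2": "c1", "s3": "c1"}, demand, capacity))  # False: load 15 > 10
```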

Contributions of the Thesis

  • Failure Foresight Capacitated Controller Placement in SDNs
  • Cooperative Game Theory based Network Partition for Controller Placement
  • Controller and Hypervisor Placement in Virtualized SDNs
  • Organisation of the Thesis

We demonstrate that the worst-case switch-to-controller latency of the proposed model is better than that of CCP in the case of controller failure. The worst-case latency of the solutions induced by cooperative k-means outperforms standard k-means and is very close to the optimal solution.

Limitations of traditional network architecture

However, it increases the complexity of the network due to the heterogeneous nature and large number of network elements. The lack of standard, open interfaces limits the ability of network operators to tailor the network to their individual needs.

Software Defined Network

Upon packet arrival, the packet header is matched against the flow table entries. The controllers are responsible for maintaining a global view of the network using information gathered from the forwarding devices.
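The lookup just described can be sketched as follows; the field names and the table-miss action label are illustrative, not OpenFlow wire format:

```python
# Minimal sketch of flow-table matching: on packet arrival the header fields
# are compared against each flow entry; None in an entry acts as a wildcard.

def match_entry(entry_match, packet):
    return all(v is None or packet.get(k) == v for k, v in entry_match.items())

def lookup(flow_table, packet):
    """Return the action of the first matching entry, or a table-miss action."""
    for match, action in flow_table:
        if match_entry(match, packet):
            return action
    return "send_to_controller"  # table miss: ask the controller

table = [
    ({"ip_dst": "10.0.0.2", "tcp_dst": 80}, "forward:port2"),
    ({"ip_dst": "10.0.0.2", "tcp_dst": None}, "forward:port3"),
]
print(lookup(table, {"ip_dst": "10.0.0.2", "tcp_dst": 80}))  # forward:port2
print(lookup(table, {"ip_dst": "10.0.0.9", "tcp_dst": 22}))  # send_to_controller
```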

Figure 2.1: Basic structure of a network element

Precursors to SDN

Programmability of the control plane

Control plane and data plane separation

A PCE is an application that runs inside a network element or on an external entity (outside the network). A network element may not have complete topology information to compute a route to a destination; in such cases, it requests the PCE to compute the path.

OpenFlow

The statistics section of a flow entry keeps track of information such as the number of packets matched by each flow and the time since the last packet matched the flow.
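A flow entry's statistics section can be modeled roughly like this; the field and method names are ours, not the OpenFlow specification's:

```python
from dataclasses import dataclass, field
import time

# Illustrative flow-entry record: the statistics section tracks the packet
# count and the time since the last packet matched the flow.

@dataclass
class FlowEntry:
    match: dict
    action: str
    packet_count: int = 0
    last_matched: float = field(default_factory=time.monotonic)

    def on_match(self):
        self.packet_count += 1
        self.last_matched = time.monotonic()

    def idle_time(self):
        return time.monotonic() - self.last_matched

e = FlowEntry({"ip_dst": "10.0.0.2"}, "forward:port2")
e.on_match()
print(e.packet_count)  # 1
```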

Table 2.1: List of commodity switches that support the OpenFlow standard (Vendor, Switch Model)

Related Work

Controller Placement

A controller placement strategy that minimizes the number of controllers in the network is formulated as an integer linear program [84]. The authors in [87] proposed a controller placement strategy that minimizes the number of controllers in the network while limiting the maximum switch-to-controller latency and inter-controller latency.
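For a toy instance, this minimize-the-number-of-controllers placement can be solved by brute force; real instances use an ILP solver, and the latency matrix below is invented for illustration:

```python
from itertools import combinations

# Brute-force sketch of the placement objective in [84]/[87]: choose the
# fewest controller locations such that every switch is within a latency
# bound of some controller.

def min_controllers(latency, bound):
    nodes = list(latency)
    for k in range(1, len(nodes) + 1):
        for subset in combinations(nodes, k):
            if all(min(latency[s][c] for c in subset) <= bound for s in nodes):
                return set(subset)  # first feasible subset of minimum size
    return None

# 4-node toy network: symmetric latencies in ms, two natural clusters.
latency = {
    "a": {"a": 0, "b": 2, "c": 9, "d": 9},
    "b": {"a": 2, "b": 0, "c": 8, "d": 9},
    "c": {"a": 9, "b": 8, "c": 0, "d": 3},
    "d": {"a": 9, "b": 9, "c": 3, "d": 0},
}
print(min_controllers(latency, bound=3))  # two controllers, one per cluster
```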

Figure 2.4: Taxonomy of controller placement problems in SDNs

Hypervisor Placement in SDNs

Summary

The worst-case latency of CCP without and with failures when three controllers are deployed in the Internet2 OS3E topology [98] is shown in Fig. 3.1. Therefore, we see a need for planning ahead to avoid disconnections, repeated administrative intervention, and a drastic increase in the worst-case latency in case of controller failures. To the best of our knowledge, this is the first work that plans ahead for controller failure to avoid a drastic increase in worst-case latency and disconnections.

The goal of our first model is to minimize worst-case latency between switches and their second reference controllers, while satisfying capacity and allocation constraints.

Figure 3.1: Worst case Latency of Internet 2 OS3E topology without planning a) Without failure

Problem Formulation

  • Input parameters
  • Assumptions
  • FFCCP for single controller failure
  • FFCCP for multiple failures
  • FFCCP with Combined Objective
  • Controller Failover

First, it ensures that j is either the first reference controller or the second reference controller of switch i. Constraint (3.6) ensures that the first reference controller of a switch must be the controller closest to it. Constraint (3.8) ensures that the objective value is greater than or equal to the delay from each switch to its second reference controller.

Constraint (3.16) ensures that the first reference controller of a switch i must be the controller that is closest to the switch i [100].
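The first/second reference controller assignment implied by these constraints can be sketched directly; the latency values below are toy numbers of our own:

```python
# Sketch of the assignment implied by constraints (3.6)/(3.16) and (3.8):
# the first reference controller of a switch is its closest deployed
# controller, and the second reference controller is the next closest.

def reference_controllers(latency, controllers, switch):
    ranked = sorted(controllers, key=lambda c: latency[switch][c])
    return ranked[0], ranked[1]  # (first, second) reference controllers

latency = {
    "s1": {"c1": 2, "c2": 7, "c3": 5},
    "s2": {"c1": 6, "c2": 1, "c3": 4},
}
print(reference_controllers(latency, ["c1", "c2", "c3"], "s1"))  # ('c1', 'c3')
print(reference_controllers(latency, ["c1", "c2", "c3"], "s2"))  # ('c2', 'c3')
```

The FFCCP worst-case objective then bounds, over all switches, the latency to the second entry of this pair.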

Table 3.1: Notations used in FFCCP (Input Parameter, Description)

Performance metrics

Numerical Results

Evaluation Setup

Results

A comparison of the performance of CCP, FFCCP, and CO-FFCCP with one controller failure (i.e., µ = 2) for different values of p, starting with the minimum required number of controllers, is shown in Fig. 3.2. CO-FFCCP performs better than FFCCP in the case of no failures and better than CCP in the case of failures because it minimizes the worst-case latency both with and without failures. It can also be observed that the worst-case latency of CO-FFCCP is close to CCP in the case of no failures and close to FFCCP in the case of failures.

On the other hand, the number of controllers required by FFCCP and CO-FFCCP is larger than that of CCP because they must also provision a second reference controller for every switch.

Figure 3.2: Worst case switch to controller latency of CCP, FFCCP, and CO-FFCCP with one controller failure

Complexity analysis

The latency between the most distant controllers increases with the number of controllers in the network. Therefore, the maximum inter-controller latency of FFCCP is lower than that of CCP and CO-FFCCP when fewer controllers are deployed in the network, and it is close to CCP and CO-FFCCP when more than the required number of controllers are deployed, as shown in Fig. However, the cost of the network increases when many more than the required number of controllers are deployed.

Therefore, the designer of the network typically avoids deploying more than the required number of controllers.

Conclusion

  • Input Parameters
  • Assumptions
  • CNCP Formulation with two-indexed variables
  • CNCP Formulation with three-indexed variables
  • CNCP Formulation for multiple failures
  • Controller Failover

Note that if a controller is installed at location j, then it is the first reference controller for switch j. The first reference controller of a switch i is the controller closest to i, and the lth (l > 1) reference controller of a switch i is the controller closest to the (l − 1)th reference controller of i. The first term on the right-hand side of (4.31) corresponds to the latency from the switch to its first reference controller.

The lth reference controller of a switch becomes the master when the first (l − 1) reference controllers of the switch fail, for 2 ≤ l ≤ µ.
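The resulting failover chain can be sketched as follows; excluding already chosen controllers from later picks, and the toy latency values, are our own assumptions for the sketch:

```python
# Sketch of the CNCP reference chain: the first reference controller of a
# switch is the controller closest to it; the lth (l > 1) reference
# controller is the controller closest to the (l-1)th reference controller,
# excluding controllers already in the chain.

def reference_chain(latency, controllers, switch, mu):
    chain = [min(controllers, key=lambda c: latency[switch][c])]
    while len(chain) < mu:
        remaining = [c for c in controllers if c not in chain]
        chain.append(min(remaining, key=lambda c: latency[chain[-1]][c]))
    return chain  # chain[l-1] becomes master after l-1 failures

latency = {
    "s1": {"c1": 2, "c2": 6, "c3": 9},
    "c1": {"c1": 0, "c2": 3, "c3": 8},
    "c2": {"c1": 3, "c2": 0, "c3": 4},
}
print(reference_chain(latency, ["c1", "c2", "c3"], "s1", mu=3))  # ['c1', 'c2', 'c3']
```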

Table 4.1: Decision variables used in the CNCP formulation (Variable, Description)

Performance metrics

Heuristic Solution

In each iteration, a random neighbor S′ of the current solution S is generated in step 8 and its objective value is calculated in step 9. The new solution is accepted, and becomes the current solution for the next iteration, with a probability P(S′, ∆, T). If it is not accepted, another neighbor of the current solution is generated, and the process continues until the temperature reaches the final temperature.

On the other hand, a new solution that is worse than the current one is accepted if exp(−∆/T) ≥ r, where r is a random number drawn uniformly from [0, 1).
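The acceptance rule above, together with geometric cooling, gives the following minimal simulated annealing loop; the toy objective and the cooling parameters are ours, not the thesis's:

```python
import math
import random

# Minimal simulated-annealing loop: a worse neighbor (objective increase
# delta > 0) is accepted when exp(-delta/T) >= r for a uniform random r;
# the temperature is cooled geometrically until it reaches t_final.

def simulated_annealing(initial, objective, neighbor,
                        t0=100.0, t_final=0.01, alpha=0.95, seed=0):
    rng = random.Random(seed)
    current, best = initial, initial
    temp = t0
    while temp > t_final:
        candidate = neighbor(current, rng)
        delta = objective(candidate) - objective(current)
        if delta <= 0 or math.exp(-delta / temp) >= rng.random():
            current = candidate
            if objective(current) < objective(best):
                best = current
        temp *= alpha  # geometric cooling schedule
    return best

# Toy objective: minimize (x - 3)^2 over integers using +/-1 moves.
best = simulated_annealing(
    initial=20,
    objective=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
)
print(best)
```

At high temperatures almost any move is accepted, which matches the exploratory behavior discussed later; at low temperatures only improving moves survive.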

Numerical Results

Evaluation Setup

Results

However, the worst-case latency of CNCP does not follow this trend in failure-free cases, which can be observed in Fig. Worst-case latency alone does not characterize the distribution of switch-to-controller latency in the network. The results comparing the worst-case latency and the maximum worst-case latency of CCP and CNCP when the maximum inter-controller latency is limited to 60% of the network diameter are shown in Fig. 4.1.

There is an increase in the maximum worst-case latency of CNCP when we limit the maximum latency between controllers.

Figure 4.1: Worst case latency and maximum worst case latency of CCP and CNCP on various networks.

Complexity analysis

We can observe that the simulated annealing heuristic takes less than half the time that the ILP takes. The progress of the simulated annealing heuristic on the GEANT network for p = 3 and p = 4 is shown in Fig. The gap between the heuristic objective value and the optimal value is larger at higher temperatures because the simulated annealing heuristic explores the search space by accepting worse solutions with a high probability at higher temperatures.

Since the probability of accepting worse solutions decreases with temperature, the simulated annealing heuristic converges to the optimal value at lower temperatures.

Conclusion

  • Cooperative game
  • Core of the game
  • Shapley value
  • Convex game
  • Shapley value of a convex game

Motivated by this, we propose a controller placement strategy that partitions the network using the k-means algorithm with cooperative game theory-based initialization. Dividing the network into subnetworks is modeled as a cooperative game where all switches are the players of the game. In this section we present the background concepts of cooperative game theory used in formulating the problem.

The core of a cooperative game is a set of all payoff allocations that are individually rational, coalitionally rational, and collectively rational.
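The Shapley value introduced above can be computed by averaging each player's marginal contributions over all orderings of the players; the three-player characteristic function below is a toy example of ours:

```python
from itertools import permutations

# Shapley value by enumeration: each player's payoff is its average marginal
# contribution v(S + {p}) - v(S) over all orderings of the players. Feasible
# only for small player sets, since there are n! orderings.

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Toy convex game: v(S) = |S|^2, so marginal contributions grow with
# coalition size (the defining property of a convex game).
v = lambda S: len(S) ** 2
print(shapley(["a", "b", "c"], v))  # symmetric players each get 9/3 = 3.0
```

For a convex game like this one, the Shapley value lies in the core, which is why it is a natural allocation to use here.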

Figure 5.1: Worst case latency of standard k-means on various topologies when executed for 100 times.

Problem Formulation

  • Input Parameters
  • Network Partitioning
  • Cooperative k-means Network Partitioning
  • Load aware Cooperative k-means Network Partitioning

Let P be the finite, non-empty set of potential locations for deploying controllers in the network. The algorithm takes as input a set of potential locations for deploying controllers, the all-pairs shortest-path latency matrix, and a threshold parameter, and returns the initial controller locations as output. Each switch is a potential location for deploying a controller. Therefore, step 1 of the algorithm initializes P with S and starts with an empty set of initial controller locations C0.

The algorithm takes as input the set of switches in the network, the all-pairs shortest-path delay matrix, and a threshold parameter, and returns the final controller locations and the switch-to-controller assignment.
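A simplified sketch of this partitioning loop, under our own assumptions rather than the thesis's exact pseudocode (greedy threshold-based seeding, then k-means-style reassignment with a worst-case-latency medoid update):

```python
# Simplified cooperative-k-means-style partitioning (our assumptions):
# seed controller locations so that every chosen pair is at least
# `threshold` apart, then alternate (1) assign each switch to its closest
# controller and (2) move each controller to the cluster member that
# minimizes the cluster's worst-case latency.

def seed_controllers(latency, switches, threshold):
    seeds = [switches[0]]
    for s in switches[1:]:
        if all(latency[s][c] >= threshold for c in seeds):
            seeds.append(s)
    return seeds

def cooperative_kmeans(latency, switches, threshold, iters=20):
    controllers = seed_controllers(latency, switches, threshold)
    for _ in range(iters):
        clusters = {c: [] for c in controllers}
        for s in switches:  # assignment step
            clusters[min(controllers, key=lambda c: latency[s][c])].append(s)
        controllers = [  # medoid update step
            min(members, key=lambda m: max(latency[s][m] for s in members))
            for members in clusters.values() if members
        ]
    return controllers

# Toy line topology a-b-c-d-e with unit-hop latencies.
nodes = ["a", "b", "c", "d", "e"]
latency = {x: {y: abs(nodes.index(x) - nodes.index(y)) for y in nodes} for x in nodes}
print(sorted(cooperative_kmeans(latency, nodes, threshold=3)))  # ['a', 'd']
```

The deterministic seeding is what distinguishes this from standard k-means, whose random initialization causes the run-to-run variance shown in Fig. 5.1.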

Numerical Results

System Setup

Results

We can observe that the worst-case latency of the solution induced by cooperative k-means outperforms standard k-means and is very close to the optimal solution in all the networks. The worst-case latency distribution of cooperative k-means and standard k-means, when evaluated 100 times on the BT North America network, is shown in Fig. Similar trends can be observed in the distribution of worst-case latencies on the Chinanet and Interoute networks.

We also evaluated the performance of the capacitated cooperative k-means and equipartition cooperative k-means strategies on the BT North America, Chinanet, and Interoute networks.

Figure 5.3: Worst case latency of standard k-means and cooperative k-means strategies on various networks.

Summary

On the other hand, the placement of controllers in each virtual network obtained while fixing the hypervisors in the physical network is shown in Fig. 6.1. Motivated by this, we present different strategies for determining the placement of hypervisors and controllers in VSDNs.

We also present an approach to jointly optimize the placement of hypervisors and controllers in VSDN.
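Under our reading of the latency model, a control query travels node → hypervisor → controller, so for a fixed hypervisor the best controller of a virtual network can be found by enumeration; the toy line topology and virtual network below are invented for illustration:

```python
# Sketch of the VSDN latency model: a control query from a virtual node v
# travels v -> hypervisor h -> controller c, so its latency is
# d(v, h) + d(h, c). We evaluate the worst case over one virtual network.

def worst_case_latency(latency, vnet_nodes, hypervisor, controller):
    return max(latency[v][hypervisor] + latency[hypervisor][controller]
               for v in vnet_nodes)

def best_controller(latency, vnet_nodes, hypervisor):
    """VCPP-style step: fix the hypervisor, pick the controller location
    (among the virtual network's nodes) minimizing worst-case latency."""
    return min(vnet_nodes,
               key=lambda c: worst_case_latency(latency, vnet_nodes, hypervisor, c))

# Toy line topology a-b-c-d with unit-hop latencies; one virtual network.
nodes = ["a", "b", "c", "d"]
latency = {x: {y: abs(nodes.index(x) - nodes.index(y)) for y in nodes} for x in nodes}
c = best_controller(latency, ["a", "b", "d"], hypervisor="b")
print(c, worst_case_latency(latency, ["a", "b", "d"], "b", c))  # b 2
```

Joint optimization (JHCP) would additionally search over the hypervisor location instead of fixing it.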

Figure 6.1: Placement of Hypervisor and controllers in Internet 2 OS3E topology.

Problem Formulation

  • Input parameters
  • Assumptions
  • Controller Placement in VSDNs
  • Joint Hypervisor and Controller Placement
  • Generalized JHCP
  • Other Objectives

The variable rmv,k is set to one if the virtual node v ∈ Vm is assigned to the controller at k ∈ Vm, and to zero otherwise. A second indicator variable is set to one if the query from the virtual node v ∈ Vm passes through the hypervisor deployed at potential location j ∈ H to reach its controller, and to zero otherwise. Constraint (6.17) ensures that the query from each virtual node v ∈ Vm passes through exactly one hypervisor to reach the controller.

Constraint (6.19) sets a hypervisor node j to be the controller of the physical node i (ti,j = 1) if the virtual node v ∈ Vm.

Table 6.1: Notations (Symbol, Description)

Numerical Results

  • Evaluation Setup
  • Performance analysis of VCPP in static scenario
  • Performance analysis of VCPP in dynamic scenario
  • Performance analysis of JHCP
  • Impact of number of hypervisors and controllers
  • Complexity analysis

The gap between the worst-case latencies of VCPP and HPP increases with the number of virtual networks. The worst-case latency of JHCP and HPP increases with the number of virtual networks. Moreover, the gap between JHCP and HPP worst-case latencies increases with the number of virtual networks.

Since H = N and Vm ⊆ N, the number of decision variables in JHCP for the worst-case latency objective and in HPP is asymptotically equal, i.e., O(∑m∈M.

Figure 6.2: Performance of VCPP and HPP while optimized for the worst case latency. (a) AT&T network

Conclusion

First, we addressed the problem of controller placement in SDNs that enables failover while avoiding disconnections, repeated administrative intervention, and a drastic increase in worst-case latency in the event of controller failures. The proposed models achieve a significant improvement in the worst-case latency in the case of failures and in the inter-controller latency. The second approach jointly determines the placement of hypervisors in the physical network and controllers in each virtual network.

Our proposed models perform better not only in terms of worst-case failure latency, but also in terms of maximum and average inter-controller latency.

Future Directions

Guo, "On the capacitated controller placement problem in software-defined networks," IEEE Communications Letters, vol.
St-Hilaire, "Optimal model for the controller placement problem in software-defined networks," IEEE Communications Letters, vol.
[80] ——, "Extensional model for the controller placement problem in software-defined networks," IEEE Communications Letters, vol.

Simha, “Optimal controller placement in software-defined networks (sdn) using a nonzero-sum game,” in Proc.

High-level overview of HyperFlow [1]

Partitioning WAN into multiple domains

Basic structure of a network element

Software Defined Network architecture [2]

Block diagram of an OpenFlow switch

Taxonomy of controller placement problems in SDNs

Worst case Latency of Internet 2 OS3E topology without planning

Worst case switch to controller latency of CCP, FFCCP, and CO-FFCCP

Worst case switch to controller latency of CCP, FFCCP, and CO-FFCCP

Worst case inter controller latency of CCP, FFCCP, and CO-FFCCP

Average inter controller latency of CCP, FFCCP, and CO-FFCCP

Worst case inter-controller latency with two controller failures

Average inter-controller latency with two controller failures

Worst case latency and maximum worst case latency of CCP and

Switch to controller latency distribution of CCP and CNCP on

Maximum inter controller latency of CCP and CNCP on various

Average inter controller latency of CCP and CNCP on various networks.

Figure 1.2: Partitioning WAN into multiple domains.
Figure 2.2: Software Defined Network architecture [2]
Figure 2.3: Block diagram of an OpenFlow switch
Table 2.3: Header fields used to define the flow
Figure 2.4: Taxonomy of controller placement problems in SDNs
Referensi

Dokumen terkait

06, Special Issue 05, IC-LLCP-2021July 2021 IMPACT FACTOR: 7.98 INTERNATIONAL JOURNAL 72 4 ENVIRONMENTAL SUSTAINABILITY  Green Business  Use of Green Public Transport  Use of