SDN is a promising approach to address these challenges: highly granular flow rule deployments can provide fine-grained, flow-based statistics. However, each SDN switch has its own capacity limitations, notably its limited TCAM (Ternary Content-Addressable Memory) rule space, which can be exhausted by a large number of very granular flow rules. This thesis introduces an intelligent framework, called liteFlow, which partitions flow rule installations into two kinds: monitoring flow rules and forwarding flow rules.
On receiving such a packet-in message, the controller must decide which flow rule to install on the switch. Installing a 5-tuple IP-based flow rule on all path switches for every flow would exhaust the TCAM rule space while leaving the L2 table rule space largely unused.
Thesis Outline
It also tries to distribute IP monitoring responsibilities among switches while occupying TCAM rule space optimally. The Software Defined Networking (SDN) paradigm promises to simplify the governance and management of the network. Network intelligence is centralized in the SDN controller, which maintains a global view of the network.
With a global view of the network at the controller, applications and policy engines built on top of the controller see network devices as a single logical switch. One of the advantages of OpenFlow and its vendor independence is the emergence of the concept of virtual switches.
OpenFlow Protocol
The OpenFlow protocol defines the fields that flow rules can match on. When an OpenFlow switch receives a packet and has no matching flow rule for it, it forwards the packet to the controller in a packet-in message. The controller then either installs a flow rule on the switch by sending a flow-mod message or sends a packet-out message.
Once a flow rule is installed on a switch, no packet-in message is sent for packets matching that flow rule unless the rule's actions explicitly request it. When a flow rule matches a packet, the counters associated with that flow are updated and the corresponding actions are executed on the packet.
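To make this interaction concrete, the following is a minimal sketch of a controller handling a packet-in, installing a flow rule via flow-mod, and emitting a packet-out. It uses the Ryu framework and OpenFlow 1.3 purely for illustration; the match fields, priority, and flooding decision are placeholder assumptions and not the rules actually used by liteFlow.

```python
# Minimal sketch of the packet-in / flow-mod / packet-out interaction
# described above (Ryu, OpenFlow 1.3). The forwarding decision here is a
# placeholder (flood), chosen only to keep the example self-contained.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # Placeholder decision: flood (hub behaviour).
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]

        # Install a flow rule so later packets are handled on the switch
        # without further packet-in messages.
        match = parser.OFPMatch(in_port=in_port)
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))

        # Send the current packet out via a packet-out message.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))
```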
Pipeline Processing
New flow rules will then be installed in the software tables of the switches, resulting in very slow packet matching and switching. A considerable amount of work has been done in the area of flow-based network monitoring for traditional networks. DevoFlow [28] aims to reduce the controller-switch interaction and the number of TCAM entries in the switch.
DIFANE [29] also proposes a mechanism for reducing the controller load by keeping the traffic in the data plane. In their approach, the controller runs a partitioning algorithm that divides the flow rules into high-level and low-level flow rules. As in [30], we also change the destination MAC address to a MAC label at the edge switches and forward packets based on these labels in the core switches.
With an intelligent mechanism for installing flow rules on switches, coupled with the central view of the network elements in SDN, this thesis aims to design and implement a platform that can provide fine-grained, unsampled, application-level statistics useful for a plethora of network monitoring applications. The IP-based flow rules are stored in the TCAM, while the MAC-based forwarding rules are stored in the L2 MAC tables. Each path in the core network is assigned a unique MAC label, based on which forwarding decisions are made.
Exploiting this fact, the forwarding approach works by changing the destination MAC address of packets at the ingress switch to the appropriate path label, called the PseudoMAC address, and forwarding packets through the core (non-edge) switches by matching on destination-MAC-based flow rules. At the egress switch, destination-IP-based flow rules change the destination MAC address back from the PseudoMAC to that of the host. Note that the egress switch therefore does hold flow rules in its TCAM rule space, but their number depends only on the destination-IP-based rules installed at that switch.
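The three PseudoMAC rule types (ingress rewrite, core label forwarding, egress restore) can be sketched as below. This is a hedged illustration with OpenFlow 1.3 and Ryu; the helper names and priority values are assumptions and not taken from the liteFlow implementation.

```python
# Illustrative sketch of PseudoMAC forwarding (OpenFlow 1.3 / Ryu).
# Helper names and priorities are assumptions made for this example.

def _flow_mod(dp, match, actions, priority=100):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    return parser.OFPFlowMod(datapath=dp, priority=priority,
                             match=match, instructions=inst)


def install_ingress_rewrite(dp, dst_ip, pseudo_mac, out_port):
    """Ingress switch: rewrite destination MAC to the path's PseudoMAC label (IP-based, TCAM)."""
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=dst_ip)
    actions = [parser.OFPActionSetField(eth_dst=pseudo_mac),
               parser.OFPActionOutput(out_port)]
    dp.send_msg(_flow_mod(dp, match, actions))


def install_core_forward(dp, pseudo_mac, out_port):
    """Core switch: forward purely on the destination MAC label (L2 MAC table)."""
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_dst=pseudo_mac)
    dp.send_msg(_flow_mod(dp, match, [parser.OFPActionOutput(out_port)]))


def install_egress_restore(dp, dst_ip, host_mac, out_port):
    """Egress switch: restore the host's real MAC based on destination IP (TCAM)."""
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=dst_ip)
    actions = [parser.OFPActionSetField(eth_dst=host_mac),
               parser.OFPActionOutput(out_port)]
    dp.send_msg(_flow_mod(dp, match, actions))
```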
Flow Partitioner
- Naive Approach
- Ingress Switch Approach
- Load Balanced Approach
- LiteFlow Approach
This is a basic approach in which 5-tuple flow rules are installed on all path switches between hosts. Since 5-tuple flow rules are installed on all path switches, there will be a packet-in message for each new TCP connection between a HostPair. Thus, between the HostPair H1 → H3, two TCP connections give rise to two 5-tuple flow rules on all path switches.
Therefore, this approach gives us robustness, but the flow rules installed are redundant in nature.
A potential pitfall of this approach is that the ingress switch's capacity limits will be exhausted more quickly as more TCP flows arrive for a HostPair. In this approach, the ingress-switch problem described in the previous subsection is solved using a randomization algorithm. Again, note that the forwarding and monitoring flow rules for the same HostPair cannot be on the same switch.
Here, in addition to the forwarding and monitoring flow rules, we have a third type of 2-tuple flow rule called the action-control flow rule. We also use the flow rule priority concept of OpenFlow to implement this scheme: packets are matched first against the monitoring flow rules, then against the action-control rules, and finally against the forwarding flow rules.
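A hedged sketch of how the three rule types could be layered by OpenFlow priority is given below. The priority values (300/200/100) and the exact match granularities are assumptions chosen for illustration, not liteFlow's actual constants.

```python
# Illustrative priority layering of the three rule types (OpenFlow 1.3 / Ryu).
# Priority values and match fields are assumptions made for this sketch.
MONITOR_PRIORITY = 300   # 5-tuple monitoring rules: matched first
CONTROL_PRIORITY = 200   # 2-tuple action-control rules for a HostPair
FORWARD_PRIORITY = 100   # coarse forwarding rules: matched last


def _flow_mod(dp, priority, match, actions):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    return parser.OFPFlowMod(datapath=dp, priority=priority,
                             match=match, instructions=inst)


def install_monitoring_rule(dp, src_ip, dst_ip, sport, dport, out_port):
    """5-tuple monitoring rule for a TCP flow: counted per flow, then forwarded."""
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src_ip, ipv4_dst=dst_ip,
                            ip_proto=6, tcp_src=sport, tcp_dst=dport)
    dp.send_msg(_flow_mod(dp, MONITOR_PRIORITY, match,
                          [parser.OFPActionOutput(out_port)]))


def install_action_control_rule(dp, src_ip, dst_ip):
    """2-tuple action-control rule: unseen flows of this HostPair go to the controller."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src_ip, ipv4_dst=dst_ip)
    dp.send_msg(_flow_mod(dp, CONTROL_PRIORITY, match,
                          [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                                  ofp.OFPCML_NO_BUFFER)]))


def install_forwarding_rule(dp, dst_ip, out_port):
    """Low-priority forwarding rule: used when no finer-grained rule matches."""
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=dst_ip)
    dp.send_msg(_flow_mod(dp, FORWARD_PRIORITY, match,
                          [parser.OFPActionOutput(out_port)]))
```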
Benefits of liteFlow Approach
Flow Migration for HostPair
We propose a different migration mechanism in which the old AS keeps recording statistics for the current monitoring flow rules, while the new AS records statistics for future monitoring flow rules. For example, the HostPair H1 → H3 has 100 monitoring flows with S1 as its AS, and the HostPair H2 → H3 has 50 monitoring flow rules with S3 as its AS. This will not install any new monitoring flow rules on the old AS, because no action-control flow rule remains installed on it for the HostPair in question, and its packets will match the lower-priority rules installed on the switch.
By doing this, we avoid the delay caused by migrating flow rules, as described above. At the new AS, the old monitoring flows will also trigger monitoring rule installations, since an action-control flow rule for the relevant HostPair is now present there. This overhead can be significant for a HostPair with a large number of monitoring flows, since monitoring rules for all of them will be installed at the new AS as a result of the action-control flow rule migration.
A possible solution to this problem is to use a hard-timeout value in the monitoring flow rules. After the hard timeout, the monitoring flow rules are removed from the old AS and are then installed at the new AS.
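The hard-timeout idea can be expressed as a short sketch: monitoring rules are installed with a hard_timeout so that the old AS sheds them automatically after migration. The helper name and timeout value below are illustrative assumptions.

```python
# Hedged sketch: monitoring rules with a hard timeout (OpenFlow 1.3 / Ryu).
# The timeout value, priority, and helper name are assumptions for illustration.
MONITOR_HARD_TIMEOUT = 60  # seconds; tune to the expected flow lifetime


def install_monitoring_rule_with_timeout(dp, match, out_port,
                                          hard_timeout=MONITOR_HARD_TIMEOUT):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    actions = [parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=dp, priority=300, match=match,
                            hard_timeout=hard_timeout, instructions=inst,
                            # request a flow-removed notification so the
                            # controller can re-install the rule at the new AS
                            flags=ofp.OFPFF_SEND_FLOW_REM)
    dp.send_msg(mod)
```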
Forwarding in L2 MAC Table
Changing Paths due to TCAM Load
System Design and Implementation
- Design
- PseudoMac Forwarding
- Authority Switch Selection
- Flow Migration
- Flow Partitioner
- Load Balancing Property
- Flow Rule Count
- Flow Migration in liteFlow
Migrating the HostPair with the least number of monitoring flows seems most logical, because monitoring flow rules will be duplicated on the new AS, as discussed in Section 3.4.1. The selection iterates over the per-AS HostPair counts (HostPair_Count_list ← AS_HostPair_Count_db.get(AS); while HostPair_Count_list.hasNext() do ...). Upon initiation of a new path, the PseudoMACForwarder immediately installs destination-MAC-based flow rules in the network core using the FlowInstaller module of the SDN controller.
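A hedged Python reconstruction of this selection step is shown below; AS_HostPair_Count_db, its layout, and the HostPair representation are illustrative assumptions rather than the thesis's actual data structures.

```python
# Hedged reconstruction of the HostPair-selection step sketched above: pick
# the HostPair on an authority switch with the fewest monitoring flows, so
# that migration duplicates as few monitoring rules as possible.

def pick_hostpair_to_migrate(AS_HostPair_Count_db, authority_switch):
    """Return the (HostPair, count) with the least monitoring flows on this AS."""
    # hostpair_counts maps HostPair -> number of monitoring flow rules
    hostpair_counts = AS_HostPair_Count_db.get(authority_switch, {})
    if not hostpair_counts:
        return None
    return min(hostpair_counts.items(), key=lambda kv: kv[1])


# Example with hypothetical data: S1 is the AS for two HostPairs.
db = {'S1': {('H1', 'H3'): 100, ('H2', 'H3'): 50}}
print(pick_hostpair_to_migrate(db, 'S1'))   # (('H2', 'H3'), 50)
```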
Switches S2 to S5 must each have at most as many 2-tuple forwarding flow rule installations as there are HostPairs, i.e. 160. LoadBalanced Approach: theoretical calculations show that, at the upper end, each switch acts on average as the authority switch for about 27 host pairs, considering 6 switches and 160 host pairs. Each switch will also have 2-tuple forwarding rules for the remaining host pairs, 133 in number.
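As a quick check of the quoted averages, a minimal calculation assuming only the stated topology (6 switches, 160 host pairs) is:

```python
# Worked arithmetic for the figures quoted above (6 switches, 160 host pairs).
switches, host_pairs = 6, 160
avg_as_pairs = round(host_pairs / switches)       # 160 / 6 ≈ 27 host pairs per AS
remaining_forwarding = host_pairs - avg_as_pairs  # 160 - 27 = 133 forwarding rules
print(avg_as_pairs, remaining_forwarding)         # 27 133
```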
Since the approach is randomized, we may not obtain these numbers exactly, but over a number of experiments we should see roughly this many flow rules on each switch. Additionally, each authority switch will have one 2-tuple action-control flow rule for each HostPair for which it acts as the AuthoritySwitch, accounting for 27 2-tuple action-control flow rules on each switch. In addition, the core switches S2 to S5 must each hold 2 PseudoMAC rules, and switches S1 and S6 must each hold 80 source-destination-MAC-based rules in the L2 MAC table, plus 20 and 4 destination-IP-based flow rules respectively in the TCAM.
These calculations give an estimate of at most 108 flow rules on switches S2 through S5, while the ingress switch S1 has approximately 128 and switch S6 approximately 112 flow rules at most in their respective TCAMs. We also see that the red and black curves dominate the percentage plot, owing to the destination-IP-based TCAM flow rules required for PseudoMAC forwarding at the ingress switch. This number is insignificant compared to the actual number of flows carried for each host.
S1 is the switch connected to 20 hosts, so 20 more flow rules will be installed there due to PseudoMAC forwarding. Therefore, we conclude that the liteFlow approach is not only load-balanced but also installs almost 50% fewer flow rules compared to the LoadBalanced approach.
Trials in IIT Hyderabad Campus Network
It shows the TCAM capacity as the number of flow rules that can be installed in the TCAM of the corresponding switch. The data in Table 6.7 is from when the peak occurred for the HP 3800. As can be seen, the load on switch C1 is mostly due to destination-IP-based flow rules used for PseudoMAC forwarding.
To achieve this, fine-grained flow rules are needed on the switches, which can exhaust the switches' TCAM resources. This thesis demonstrated a proof-of-concept implementation of liteFlow in a small testbed at IIT Hyderabad, showing that it significantly reduces the maximum number of flow rules installed at each SDN switch. Choosing different HostPair selection policies for migration may have different consequences for the TCAM of the new AS.
Clark, "Resonance: Dynamic access control for enterprise networks", in Proceedings of the 1st ACM Workshop on Research on Enterprise Networking, pp.
Carter, "PAST: Scalable Ethernet for data centers", in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, pp.
Zhang, "Revisiting the case for a minimalist approach for network flow monitoring", in Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, pp.
Papagiannaki, "How healthy are today's enterprise networks?", in Proceedings of the 8th ACM SIGCOMM Conference on Internet Measurement, pp.
Carter, "Shadow MACs: Scalable label switching for commodity Ethernet", in Proceedings of the Third Workshop on Hot Topics in Software Defined Networking, pp.
Pagh, "Uniform hashing in constant time and linear space", in Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, pp.