4.8 Evaluation
4.8.5 Experiment 3
In real networks, a large number of updates take place simultaneously, affecting multiple flows. A switch may undergo more than one update at a time, and a single update may modify multiple flows on the same set of switches. The delay between updates for a single flow may also be greater than in Experiment 2. Realistic flow arrival, inter-packet arrival and controller-switch delay distributions also need to be simulated.
The goal of this experiment is to determine how PPCU affects the throughput of large flows, and the number of small flows and total flows completed, compared to random updates, during the usable duration of the network (defined in Section 4.7.4). To increase the predictability of results, all the flows in the network are affected by an update: 75% of the switches are affected by at least one update and 25% of the switches are affected by two disjoint updates, the latter causing updates of different entries in the same P4 table for two different RUs. The updates performed are: 1) edge to aggregate link changes: the flows from h00 to h20, h31 to h50 and h40 to h60 are switched from the old path to the new path, as shown in Figure 4.9. Paths are switched from the old to the new and back, in the given sequence, in a loop, with the sequence repeating forever. 2) aggregate to core link changes: all traffic destined to *.*.*.67 on the set of red links is moved to the set of green links in a single update, and then back, forever, as shown in Figure 4.10. The affected destination hosts are shown, and the affected traffic from edges to aggregates is shown in black. All four updates are disjoint but occur simultaneously, within one PPCU; a sketch of the controller loop that drives this sequence is given below.

Figure 4.10: Experiment 3: Changing core to aggregate links
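To make the update sequence concrete, the following Python sketch shows how a controller loop might drive it. The `install_ppcu` helper and the rule-move descriptions are hypothetical placeholders for the actual controller API; the sketch only captures the alternation between old and new paths, with all four disjoint updates issued within one PPCU per round.

```python
import itertools

# Illustrative (old, new) rule moves for the four disjoint updates;
# the real entries live in the P4 tables described earlier.
UPDATES = [
    ("h00->h20 old path", "h00->h20 new path"),      # edge to aggregate
    ("h31->h50 old path", "h31->h50 new path"),
    ("h40->h60 old path", "h40->h60 new path"),
    ("*.*.*.67 red links", "*.*.*.67 green links"),  # aggregate to core
]

def install_ppcu(rule_moves):
    """Hypothetical controller call: issues one PPCU that applies all
    the given (old, new) rule moves simultaneously."""
    raise NotImplementedError

def run_updates_forever():
    # Alternate forever between old->new and new->old, issuing all
    # four disjoint updates within a single PPCU each round.
    for flipped in itertools.cycle([False, True]):
        moves = [(new, old) if flipped else (old, new)
                 for old, new in UPDATES]
        install_ppcu(moves)
```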
After the network is initialised by the controller, a flow generator starts multi-threaded servers on the destination hosts, as per the configuration file described in Section 4.8.2.1. Next, it starts flows one by one, at the configured flow rate, by starting clients on the desired hosts until the maximum number of flows for each host pair is reached, after which flows begin on the next host pair. To increase the predictability of results, we include only affected flows, and the maximum number of flows is kept the same across host pairs. The clients open connections to the multi-threaded servers on the destination hosts. The flow generator sends a token to each client according to the packet inter-arrival time and distribution. Upon receiving a token, the client sends a message of fixed size to the destination server.
Figure 4.11: Throughput, small and total flows completed for flow rate=0.033 flows/s. Panels: (a) throughput (Kbits/s), (b) number of successful small flows, and (c) total number of successful flows, each plotted against trial number for PPCU and random updates.
Figure 4.12: Throughput, small and total successful flows for flow rate=0.33 flows/s, maximum flows per host pair=2. Panels: (a) number of successful small flows, (b) total number of successful flows, and (c) throughput (Kbits/s) with per-scheme means, each plotted against trial number for PPCU and random updates.
Figure 4.13: Throughput, small and total successful flows for flow rate=0.33 flows/s, maximum flows per host pair=4. Panels: (a) number of successful small flows, (b) total number of successful flows, and (c) throughput (Kbits/s) with per-scheme means, each plotted against trial number for PPCU and random updates.
Figure 4.14: Throughput, small and total successful flows for flow rate=0.33 flows/s, maximum flows per host pair=2, controller-switch delay: mean=400 ms, s.d.=300 ms. Panels: (a) number of successful small flows, (b) total number of successful flows, and (c) throughput (Kbits/s), each plotted against trial number for PPCU and random updates.
Figure 4.15: Experimental setup

The server running on the destination host creates one log per flow. An online post-processor operates on the logs to identify failed flows, checks whether the usable duration R has exceeded its upper limit, and generates the test results. If R has exceeded the upper limit, the simulation is stopped. This workflow is illustrated in Figure 4.15.
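To make the token-driven client mechanism concrete, the sketch below shows one possible client structure in Python. The socket setup, message size, and token queue are illustrative assumptions; the actual generator described above also enforces per-host-pair flow limits and the configured arrival distribution.

```python
import socket
import queue

MESSAGE_SIZE = 1024  # assumed fixed message size in bytes

def run_client(server_addr, token_queue: queue.Queue):
    """One flow: connect to the multi-threaded server, then send one
    fixed-size message per token received from the flow generator."""
    with socket.create_connection(server_addr) as sock:
        while True:
            token = token_queue.get()   # generator paces the tokens
            if token is None:           # None marks the end of the flow
                break
            sock.sendall(b"x" * MESSAGE_SIZE)
```

Each such client would run under the flow generator's control, one per flow, with the server side logging every flow it receives.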
Before the experiment begins, we tune the flow arrival rate and inter-packet delay to find values at which at least one large flow completes during the usable duration of the network, in the presence of continuous PPCU updates. These values are 0.033 flows per second and 15 ms with a standard deviation of 0.1 ms, respectively.
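The inter-packet pacing can be sketched as follows, assuming (as an illustration) that delays are drawn from a normal distribution with the mean and standard deviation above; the actual distribution is set in the configuration file of Section 4.8.2.1.

```python
import random
import time

MEAN_DELAY_S = 0.015  # 15 ms mean inter-packet delay
SD_DELAY_S = 0.0001   # 0.1 ms standard deviation

def pace_tokens(token_queue, n_packets):
    # Emit one token per packet, spaced by normally distributed
    # delays; negative samples are clamped to zero.
    for _ in range(n_packets):
        time.sleep(max(0.0, random.gauss(MEAN_DELAY_S, SD_DELAY_S)))
        token_queue.put(object())
    token_queue.put(None)  # tell the client the flow is over
```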
We initialise the network and start all the updates simultaneously, using PPCUs for all of them, at the flow and packet arrival rates discovered above. We stop the simulation when R ≥ 20% and measure the number of small flows completed, the number of large flows completed, their source-destination pairs and their individual throughputs. The experiment is then repeated with random updates.
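A hypothetical outline of the post-processing pass is given below; the log field names and the `usable_fraction` helper (which would compute R from the logs as defined in Section 4.7.4) are assumptions for illustration.

```python
from collections import defaultdict

R_LIMIT = 0.20  # stop the simulation once R >= 20%

def usable_fraction(flow_logs):
    """Hypothetical helper: computes R (Section 4.7.4) from the logs."""
    raise NotImplementedError

def summarise(flow_logs):
    """Count completed small and large flows and sum the throughput
    of large flows per (source, destination) pair."""
    small, large = 0, 0
    throughput = defaultdict(float)
    for log in flow_logs:
        if not log["completed"]:
            continue  # failed flows are identified but not counted
        if log["size"] == "large":
            large += 1
            throughput[(log["src"], log["dst"])] += log["kbits_per_s"]
        else:
            small += 1
    return small, large, dict(throughput)

def should_stop(flow_logs):
    return usable_fraction(flow_logs) >= R_LIMIT
```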
For a low flow arrival rate, since more large flows, or larger flows, are expected to complete successfully, the collected data is sorted on the sum of the throughputs of successful large flows and is shown in Figure 4.11. With PPCU updates, the sum of throughputs, the number of successful small flows and the total number of successful flows are all higher than with random updates.
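For instance, the per-trial sort behind Figure 4.11 can be written as follows; the trial record fields are illustrative:

```python
def sort_trials(trials):
    # Order trials by the summed throughput of their successful
    # large flows, as used when plotting Figure 4.11.
    return sorted(
        trials,
        key=lambda t: sum(f["kbits_per_s"] for f in t["large_flows_ok"]),
    )
```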
The experiment is repeated with a controller-switch delay, and the results are shown in Figure 4.14. Since no large flow completes in most cases, the output is sorted by the number of small flows completed. We find that PPCU updates result in a higher number of small flows completing in all the trials; the total number of flows and the throughput are higher as well.
Operating within the usable duration is not by itself sufficient: “a reasonable number” of large flows must also complete at a “reasonably high” throughput.
If we increase the flow arrival rate while the network still operates within the usable duration, two issues arise: no large flow completes, and/or the simulation cannot maintain the flow rate because not enough flows finish. (Even though flows should be scheduled [2] [4] [17] to avoid this situation, we conduct two experiments to check the behaviour of the network in this situation as well.) In the first case, instead of comparing the throughputs of large flows, we compare the number of small flows completed. In the second case, we discard the reading. Moreover, the throughputs of any large flows that do manage to complete are much smaller than the throughputs of large flows at the lower flow rate. We first conduct the experiment with a flow rate of 0.33 flows/s and the maximum number of flows per host pair set to 2. The results in Figure 4.12 demonstrate that with PPCU updates, the number of small flows and the total number of flows successfully completed are higher. The same holds when the maximum number of flows per host pair is increased to 4, as shown in Figure 4.13. For both random and PPCU updates, the throughputs of the large flows that complete are comparable.