
Multipath Transport and Flow Division for Multihomed Hosts


There have been several attempts in the past to distribute a data flow over multiple paths, and various approaches have been presented in the literature to enable multipath transmission at different layers.

Figure 1.1: Internet Network Architecture and Layer Interaction.

SCTP - a multihomed transport protocol

This means that the content of one stream can be delivered to the application at the receiving end even if another stream has experienced packet loss. In addition, if the U (unordered) flag is set for a chunk, SCTP bypasses the ordering mechanism entirely and delivers the data immediately to the upper layer.
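As a rough illustration of this receive-side behaviour, the following is a minimal sketch (not the lksctp implementation; the structure, the toy per-stream state, and the helpers are assumptions) of the delivery decision for a DATA chunk, using the standard U-bit value 0x04 from the DATA chunk flags:

    /* Minimal sketch of SCTP per-stream delivery with the U (unordered) flag.
     * Types and state are illustrative, not the kernel implementation. */
    #include <stdint.h>
    #include <stdio.h>

    #define SCTP_DATA_UNORDERED 0x04       /* U bit in the DATA chunk flags */
    #define MAX_STREAMS 8                  /* toy limit; assumes stream_id < 8 */

    struct data_chunk {
        uint8_t  flags;                    /* chunk flags, including the U bit */
        uint16_t stream_id;                /* stream identifier */
        uint16_t ssn;                      /* stream sequence number */
    };

    static uint16_t next_ssn[MAX_STREAMS]; /* next expected SSN per stream */

    static void deliver(const struct data_chunk *c)
    {
        printf("deliver: stream %u, ssn %u\n", c->stream_id, c->ssn);
    }

    void handle_data_chunk(const struct data_chunk *c)
    {
        if (c->flags & SCTP_DATA_UNORDERED) {
            deliver(c);                               /* bypass ordering entirely */
        } else if (c->ssn == next_ssn[c->stream_id]) {
            deliver(c);                               /* in order within its stream */
            next_ssn[c->stream_id]++;
        }
        /* else: buffer until the per-stream gap is filled (omitted here);
         * a gap in one stream never blocks delivery on another stream. */
    }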

Figure 1.2: (a) SCTP Handshake Mechanism (b) TCP Handshake Mechanism.

Motivation of the Present Work

Along with flow division, the congestion control mechanism for such protocols also needs to be revisited. Current congestion control mechanisms, such as TCP congestion control, are designed to keep only one path in use at a time.

Table 1.1: Features of various Transport Protocols

Thesis Contribution

This congestion control uses bandwidth estimation-based resource pooling to adjust the congestion window for the source node and is included in our proposed scheme. However, this is very computationally intensive because the complete network statistics must be known at the source.

Thesis Organization

On the Internet, TCP [12] and UDP [20] are the most widely used transport protocols for data communication. However, these signaling messages cannot use UDP as their transport protocol because UDP has no reliability mechanism.

Figure 1.5: Thesis Organization.

Multipath Protocols

In addition, this scheme requires a transmission sequence number (TSN) to be maintained at the transmitter after path sequence mapping, which will adversely affect the scalability of the protocol. Moreover, the choice of 4 bits for the path number is conservative because using more paths in parallel does not improve the performance of the protocol.

Figure 2.1: LS-SCTP Architecture Diagram

Flow Division at Source

SCTP

Congestion Control

In an ideal situation, we would like our network to operate somewhere between points 'A' and 'B' as shown in Figure 2.7.

LOAD

Introduction

We also compare the performance of these optimizations for end-to-end and link-by-link error recovery approaches. Optimizing flow division at the source requires dividing the original incoming flow among the available candidate paths so that selected end-to-end Quality of Service (QoS) parameters are satisfied.

Delay Sensitive Applications

The basic motivation behind the Min-Max approach is to minimize the worst-case end-to-end delay over the candidate paths. When a flow is split into multiple sub-flows, the path with the highest end-to-end delay largely dictates the overall system performance. Let W = {w1, w2, ..., wN} be the set of end-to-end delays of the N disjoint candidate paths, where candidate path i carries a sub-flow of λi packets per second, so that the flow division is {λ1, λ2, ..., λN}.
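Under the assumption that the total input rate λ must be split across the N candidate paths, one way to write the Min-Max problem implied by this description (the exact constraints used in the thesis may differ) is:

    \min_{\lambda_1,\ldots,\lambda_N} \; \max_{1 \le i \le N} \; w_i(\lambda_i)
    \quad \text{subject to} \quad \sum_{i=1}^{N} \lambda_i = \lambda, \qquad \lambda_i \ge 0,

where w_i(λ_i) is the end-to-end delay of path i when it carries λ_i packets per second. Minimizing the worst w_i tends to equalize the average delays across the paths that carry traffic.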

Delay Insensitive Applications

Network Model for Flow Optimization

As shown in Figure 3.1, we assume that xi,j is the background traffic (in packets/second) entering the jth node of the ith path (ni,j) and destined for link (j, j+1). In end-to-end error recovery, every packet lost on any link of the path is retransmitted from the source. In link-by-link error recovery, each node keeps track of errors on its outgoing link and, unlike end-to-end recovery, retransmits the packet locally when a packet error occurs.

For the network of Figure 3.1, assuming perfect link-by-link error recovery, the corresponding node offers an effective flow of λi,j to be carried on the link.
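To make the distinction concrete, suppose link (j, j+1) on path i drops packets independently with probability p_{i,j} (a modelling assumption used only for illustration; the thesis' exact expressions may differ). With link-by-link recovery each lost packet is retransmitted locally, whereas with end-to-end recovery it must survive every link on the path, so roughly:

    \lambda^{\mathrm{link}}_{i,j} \approx \frac{\lambda_i}{1 - p_{i,j}}
    \qquad \text{versus} \qquad
    \lambda^{\mathrm{e2e}}_{i} \approx \frac{\lambda_i}{\prod_{j}\left(1 - p_{i,j}\right)},

i.e. end-to-end recovery loads every link on the path with retransmissions caused by losses anywhere along it, which is why it supports a lower data rate for the same delay target.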

Figure 3.1: Network Graph for Flow Division between Source and Destination

Performance and Results

As expected, the link-by-link recovery scheme performs better than the end-to-end error recovery approach. For this system, Figure 3.12 also shows that the end-to-end error recovery approach supports a lower data rate under the same network conditions.

The data rate supported by the network for the same target delay is also lower for end-to-end error recovery.

Figure 3.3: Percentage Flow Division for varying Input Packet Rate (S to D) with the Min-Max Approach for end-to-end error recovery

Conclusion

Introduction

Proposed Changes in SCTP header

Because PSNs are path-dependent, a packet cannot simply be re-routed onto a different path while retaining its original PSN. In MPSCTP, each path independently acknowledges the packets received on that path with a SACK sent back on the same path.

In contrast to standard SCTP, here the SACK carries Gap Ack reports on the PSNs (as shown in Figure 4.2) for the corresponding path, along with the list of duplicate PSNs received on that path.
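A rough C rendering of the per-path report carried in such a SACK, based only on the fields named above (field names, widths, and ordering are assumptions, not the exact wire format of Figure 4.2), might be:

    /* Illustrative layout of a per-path MPSCTP SACK report; field names and
     * widths are assumptions based on the description of Figure 4.2. */
    #include <stdint.h>

    struct gap_ack_block {
        uint16_t start;   /* offset of first PSN in the gap, relative to the cum ack */
        uint16_t end;     /* offset of last PSN in the gap */
    };

    struct mpsctp_sack_path_report {
        uint32_t cumulative_psn_ack;   /* highest in-sequence PSN on this path */
        uint32_t a_rwnd;               /* advertised receiver window */
        uint16_t num_gap_ack_blocks;   /* N: gap ack blocks that follow */
        uint16_t num_dup_psns;         /* X: duplicate PSNs that follow */
        /* followed by N gap_ack_block entries and X uint32_t duplicate PSNs */
    };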

Figure 4.2: SACK Structure for MPSCTP

MPSCTP Algorithms

Performance and Results

We varied the probability of packet loss because packet loss has a significant impact on protocol performance and throughput. For our tests, we assume an FTP application transferring a 60 MB file using both MPSCTP and CMT, with varying packet loss probabilities for path-2. We note that MPSCTP outperforms CMT in every case, especially when the probability of packet loss is high.

We also investigated the performance of our proposed protocol for different numbers of paths between the source and destination and for different packet loss probabilities.

Figure 4.6 shows the average time taken for this transfer along with the 95% confidence intervals for both the symmetric and asymmetric cases using the RTX-CWND packet retransmission strategy.

Bandwidth Estimation Based Resource Pooling (BERP) Congestion Control

Currently, most congestion control algorithms are based on the standard TCP congestion control algorithm with some minor modifications. Standard TCP exhibits exponential growth of the congestion window (cwnd) until the slow-start threshold (ssthresh) is reached. However, during the congestion detection phase, the combined congestion control cuts the congestion window in half.

The first part of (4.1) tells us how to increase the congestion window in the congestion avoidance phase.
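Since equation (4.1) itself is not reproduced in this excerpt, the following is only a hedged sketch of the general idea of a bandwidth-estimation-based, resource-pooling style window increase: each path's congestion-avoidance growth is weighted by that path's share of the total estimated bandwidth. All names and the exact weighting are assumptions, not the thesis' equation (4.1).

    /* Hedged sketch of a bandwidth-weighted, resource-pooling style window
     * increase; variable names and the weighting rule are illustrative only. */
    struct path_state {
        double cwnd;        /* congestion window (bytes) */
        double ssthresh;    /* slow-start threshold (bytes) */
        double est_bw;      /* estimated bandwidth of this path (bytes/s) */
        double mtu;         /* path MTU (bytes) */
    };

    void on_sack_update(struct path_state *p, double total_est_bw)
    {
        if (p->cwnd < p->ssthresh) {
            p->cwnd += p->mtu;                        /* slow start: exponential growth */
        } else {
            double share = p->est_bw / total_est_bw;  /* this path's bandwidth share */
            p->cwnd += share * p->mtu * p->mtu / p->cwnd;  /* weighted avoidance step */
        }
    }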

Performance Analysis of BERP congestion control

In Figure 4.11, we show how the congestion window at the source evolves when the packet loss probability is 10%. Figure 4.14 shows how the congestion window evolves in this scenario for both congestion control schemes when the bottleneck link has a 10% packet loss probability. For the scenario of Figure 4.15, we consider all links to be error-free.

The evolution of the congestion window for this scenario is also shown in Figure 4.17 for an average UDP traffic load of 0.9.

Figure 4.9: Network Graph for disjoint paths with similar characteristics.

MPSCTP Fast Retransmission Algorithm

In this section, we study the performance of MPSCTP with the modified fast retransmission algorithm. The bandwidth of both paths is assumed to be 2 Mbps and the propagation delays for the two paths are 50 ms. Path 1 has a fixed packet loss probability of 1%, but the packet loss probability on path 2 varies from 1% to 10%.

We observe that the MPSCTP strategy with RTX-CWND performs significantly better than CMT with RTX-CWND.
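The modified fast retransmission algorithm itself (Figure 4.22) is not reproduced in this excerpt; the sketch below only illustrates the standard SCTP-style approach of counting missing reports from gap acks, applied to per-path PSNs. The data structures and the threshold of three reports are assumptions, and the thesis' modification may differ in detail.

    /* Hedged sketch of gap-report-driven fast retransmit over per-path PSNs. */
    #include <stdbool.h>
    #include <stddef.h>

    #define FAST_RTX_THRESHOLD 3

    struct outstanding_pkt {
        unsigned int psn;       /* path sequence number of this packet */
        unsigned int miss_cnt;  /* times reported missing by gap reports */
        bool         acked;     /* covered by cumulative ack or a gap block */
    };

    /* Called when a SACK for this path arrives; highest_gap_psn is the highest
     * PSN covered by any gap ack block in that SACK. */
    void process_gap_reports(struct outstanding_pkt *q, size_t n,
                             unsigned int highest_gap_psn,
                             void (*fast_retransmit)(unsigned int psn))
    {
        for (size_t i = 0; i < n; i++) {
            if (!q[i].acked && q[i].psn < highest_gap_psn) {
                if (++q[i].miss_cnt >= FAST_RTX_THRESHOLD) {
                    fast_retransmit(q[i].psn);   /* retransmit without waiting for RTO */
                    q[i].miss_cnt = 0;
                }
            }
        }
    }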

Figure 4.22: Modified Fast retransmission algorithm for gap reports in MPSCTP.

Comparative Study of CMT, MPSCTP and MPTCP

The bit rate and delay of each link are shown in Figure 4.24. In Figure 4.25, we compare the time required to download this file with CMT, MPSCTP, and MPTCP. The bit rates and delays of the various links in the network are shown in Figure 4.28.

The average transfer times obtained by MPTCP, CMT and MPSCTP are shown in Figure 4.29.

Figure 4.24: Network topology for disjoint paths.

Conclusion

Introduction

Note that Min-Max optimization leads to the same average delay on all available paths. BERP introduces changes in the congestion avoidance and congestion detection phases based on the bandwidth of the available paths.

Heuristic for Min-Max Optimization

Implementation using BERP Algorithm

Simulation Results

The values are chosen such that the Bandwidth-Delay Product (BDP) differs significantly between the two paths. The results in Figure 5.2 show that the difference in packet delay between the two paths is smaller for BERP than for CMT-SCTP. When the delay for path-2 is around 400 ms, the packet delays on the two paths are similar and at their lowest values for both schemes.

The end-to-end delays of the packets are plotted in Figure 5.4 and Figure 5.5 for MPSCTP-BERP and CMT-SCTP respectively.

Figure 5.1: Network Topology.

Heuristic for Delay Insensitive Optimization

Therefore, adjusting the cwnd based on RTT measurements can be used to control the delay that a packet experiences in the network. We therefore propose to modify the congestion control algorithm, as shown in Figure 5.8, to take the delay experienced by packets in the network into account. This means that the maximum delay jitter in the network is no greater than 10% of the target delay.

Unlike the standard congestion control algorithm, where the protocol keeps increasing the load on the network until a loss occurs, our proposed algorithm does not allow the traffic on the network to increase once the end-to-end delay starts to approach the target delay.
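A minimal sketch of such a delay-aware congestion-avoidance step is given below, assuming the 10% margin mentioned above is used as the point at which window growth stops. The variable names and the exact growth rule are assumptions; the actual algorithm of Figure 5.8 is not reproduced here.

    /* Hedged sketch of a delay-aware congestion-avoidance step: freeze cwnd
     * growth as the measured end-to-end delay approaches the target delay. */
    struct delay_cc {
        double cwnd;          /* congestion window (bytes) */
        double mtu;           /* path MTU (bytes) */
        double target_delay;  /* application target delay (seconds) */
    };

    void on_rtt_sample(struct delay_cc *cc, double measured_delay)
    {
        double headroom = cc->target_delay - measured_delay;

        if (headroom <= 0.10 * cc->target_delay) {
            /* within 10% of the target delay: stop adding traffic */
            return;
        }
        /* otherwise, grow roughly one MSS per RTT as in standard avoidance */
        cc->cwnd += cc->mtu * cc->mtu / cc->cwnd;
    }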

Figure 5.3: Packet delay variance for MPSCTP-BERP and CMT-SCTP for various link delays of path-2.

Simulation Results

However, there is no significant increase in throughput when the target delay increases beyond 260 ms. This is due to the fact that the network is running at maximum capacity for this target delay. The results show that delay optimization reduces the variance in packet delay and the maximum delay experienced in the network.

Note that Figure 5.10 indicates that there is a throughput loss when using the delay-insensitive optimization.

Figure 5.10: Average Throughput for MPSCTP for various Target Delays.

Conclusions

Introduction

Implementation Details

The function protocol.c:sctp_v4_get_dst() is the interface function between the IP layer and MPSCTP. To assign PSN values we have added a new function, sm_make_chunk.c:sctp_chunk_assign_psn(), as shown below. Handling of cumulative acknowledgements is performed by the function sm_make_chunk.c:sctp_make_sack().

The corresponding updates on the sender side are handled by the outqueue.c:sctp_outq_sack() function.
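The listing referred to above is not included in this excerpt. Purely as an illustration of what a per-path PSN assignment helper could look like (the structures and field names are hypothetical, and this is not the thesis' sctp_chunk_assign_psn()):

    /* Illustrative sketch only: assigns the next per-path (per-transport)
     * PSN to an outgoing chunk; all names are hypothetical. */
    struct mpsctp_transport {
        unsigned int next_psn;       /* next path sequence number to assign */
    };

    struct mpsctp_chunk {
        unsigned int psn;            /* path sequence number carried by the chunk */
        struct mpsctp_transport *tp; /* path the chunk is queued on */
    };

    static void sctp_chunk_assign_psn_sketch(struct mpsctp_chunk *chunk)
    {
        chunk->psn = chunk->tp->next_psn++;   /* PSNs are per-path, unlike TSNs */
    }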

Heuristic for Path Selection Policy

For lower error probabilities, there is much more packet reordering in Tx-CWND than in the RTX-CWND strategy. It can also be observed that for higher error probabilities, Tx-CWND performs almost as well as RTX-CWND because Tx-CWND always transmits packets along paths with a higher congestion window. It is observed that with a higher error probability, RTX-CWND has a higher normalized transmission time than the Tx-CWND strategy (Figure 6.3).

This indicates that the performance of RTX-CWND deteriorates much faster than that of the Tx-CWND strategy as the error probability increases.
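A minimal sketch of a Tx-CWND style sender decision is given below, assuming the policy is simply "among paths with free window space, pick the one with the largest cwnd" (the actual policy and tie-breaking in the thesis may differ):

    /* Hedged sketch of Tx-CWND style path selection. */
    #include <stddef.h>

    struct path {
        double cwnd;       /* congestion window (bytes) */
        double in_flight;  /* bytes currently outstanding on this path */
    };

    /* Returns the index of the selected path, or -1 if no path has room. */
    int select_tx_cwnd_path(const struct path *paths, size_t n, size_t pkt_len)
    {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (paths[i].in_flight + pkt_len > paths[i].cwnd)
                continue;                       /* no window space on this path */
            if (best < 0 || paths[i].cwnd > paths[(size_t)best].cwnd)
                best = (int)i;                  /* prefer the larger cwnd */
        }
        return best;
    }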

Figure 6.1: Network Layout

Issues related to SACK Reordering

Performance Evaluation

The results in Figure 6.5 show that the delay trends for the MPSCTP implementation are similar to those obtained in the ns-2 simulations. We have also plotted an example of the cwnd evolution in the emulated network with a 10% packet error probability on path-2 in Figure 6.7. As the results show, the transfer times in the IIT Guwahati LAN are significantly higher than those in the emulated network.

It is clear from the cwnd plot in Figure 6.11 that the connection experiences more packet losses than in the scenario where the emulated network is used (Figure 6.7).

Figure 6.4: Network Diagram for MPSCTP implementation in Linux kernel

Conclusion

We have proposed a Tx-CWND strategy and compared the results with the RTX-CWND strategy proposed in the literature. We found that at lower error probabilities, the average file transfer time with the RTX-CWND strategy is slightly lower than that of the Tx-CWND strategy. This indicates that the performance of RTX-CWND degrades faster than that of the Tx-CWND strategy with increasing packet error probabilities.

This implementation can be further extended in the future to support flow division as well as BERP congestion control.

Figure 6.11: cwnd evolution of path–2 in IIT Guwahati LAN for 10% probability of error

Summary of Contributions and Discussions

In this thesis we have proposed heuristics for flow division optimizations in an Internet-like environment. We have proposed an implementation of the Delay Insensitive Optimization by suitably modifying the congestion control algorithm. Although we have considered Linux for our implementation, it can easily be ported to other operating systems as well.

We have tested the MPSCTP implementation in an emulated network as well as in the IITG LAN environment.

Suggestions for Future Work

References

Pastor-Balbas, "Security Considerations for Signaling Transport (SIGTRAN) Protocols," RFC 3788 (Proposed Standard), Internet Engineering Task Force, Jun. 2004.

Heitz, "Signaling System 7 (SS7) Message Transfer Part 2 (MTP2) - User Adaptation Layer," RFC 3331 (Proposed Standard), Internet Engineering Task Force, Sep. 2002.

Pastor-Balbas, "Signaling System 7 (SS7) Message Transfer Part 3 (MTP3) - User Adaptation Layer (M3UA)," RFC 4666 (Proposed Standard), Internet Engineering Task Force, Sep. 2006.

Postel, "The TCP Maximum Segment Size and Related Topics," RFC 879, Internet Engineering Task Force, Nov. 1983.

