Simulating Classes of Services over an IP/MPLS Backbone VPN Case Study

The key information for an SP about the traffic entering its network at each access point is, as part of the service level agreement (SLA) with the customer, the class of service together with, possibly, the dedicated and peak rates. A minimum lead time is required to plan and carry out a capacity upgrade of the network. The "AF" class is associated with services that offer different levels of quality (and prices) depending on the tariff.
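As an illustration of these SLA parameters, here is a minimal Python sketch (hypothetical names, not part of IPVCoSS) that checks an offered rate against the dedicated and peak rates contracted for a class of service:

```python
from dataclasses import dataclass

@dataclass
class SlaProfile:
    cos: str            # class of service, e.g. "EF", "AF", "BE"
    dedicated_bps: int  # committed rate guaranteed by the SP
    peak_bps: int       # peak rate tolerated at the access point

def conformance(profile: SlaProfile, offered_bps: int) -> str:
    """Relate an offered rate to the contracted rates."""
    if offered_bps <= profile.dedicated_bps:
        return "within dedicated rate"
    if offered_bps <= profile.peak_bps:
        return "burst above dedicated rate, below peak"
    return "exceeds peak rate (subject to drop or remarking)"

print(conformance(SlaProfile("AF", 10_000_000, 15_000_000), 12_000_000))
# -> burst above dedicated rate, below peak
```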

When an IP packet (including the IP header) crosses a port, it is encapsulated in a frame whose format depends on the interface type. In addition, under the BGP/MPLS VPN architecture, one or two 4-byte MPLS shim headers may be inserted before the IP header, depending on the role of the service provider node. CBR traffic profile: an isochronous flow of fixed-length packets generated at regular intervals, the interval depending on the required throughput.
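The CBR profile and the framing overhead can be made concrete with a small sketch. The 26-byte Ethernet overhead comes from the paper's framing figures and the 4-byte MPLS shim size from the architecture described above; the function names are illustrative:

```python
def cbr_interval_s(ip_packet_bytes: int, throughput_bps: float) -> float:
    """Inter-packet interval so the IP flow achieves the target rate."""
    return ip_packet_bytes * 8 / throughput_bps

def frame_bytes(ip_packet_bytes: int, mpls_labels: int = 0,
                l2_overhead_bytes: int = 26) -> int:
    """On-the-wire frame size: IP packet + MPLS shims + layer-2 framing."""
    return ip_packet_bytes + 4 * mpls_labels + l2_overhead_bytes

# A 2 Mb/s CBR flow of 500-byte packets: one packet every 2 ms.
print(cbr_interval_s(500, 2_000_000))   # 0.002 s
print(frame_bytes(500, mpls_labels=2))  # 534 bytes on an Ethernet port
```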

The serialization time depends on the interface rate and the packet size, more precisely on the size of the packet's frame. When an IP packet crosses two nodes, the serialization delay already incurred on the output port of the first node should not be counted again at the input of the adjacent node. The length of the link between the two nodes causes a propagation delay that is directly proportional to the distance and independent of both packet size and interface capacity.
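A quick worked example of these two delay components, assuming the usual approximation of about 5 µs of propagation delay per km of fibre (a textbook figure, not one from the paper); the 1226-byte frame corresponds to a 1200-byte packet plus 26 bytes of Ethernet overhead:

```python
def serialization_s(frame_bytes: int, port_bps: float) -> float:
    """Time to clock the whole frame onto the line: size / rate."""
    return frame_bytes * 8 / port_bps

def propagation_s(distance_km: float, us_per_km: float = 5.0) -> float:
    """Distance-driven delay, independent of packet size and port rate."""
    return distance_km * us_per_km * 1e-6

# A 1226-byte frame on an E3 (34.368 Mb/s) vs a Fast Ethernet port:
print(serialization_s(1226, 34_368_000))   # ~285 µs
print(serialization_s(1226, 100_000_000))  # ~98 µs
print(propagation_s(600))                  # 3 ms over 600 km
```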

The end-to-end path then depends on the topology of the SP network, which may not be optimal with respect to these endpoints.

Figure 1: IPVCoSS basic principles

Physical Topology

Traffic Flows

DiffServ Environment

TE Traffic Trunks Options

Because of the bursty BE traffic, some oversubscription was allowed and the shortest route therefore prevailed. For this first run it is worth examining the traffic profile at each internal OPORT in detail, so the rest of this section goes through the access and backbone links in sequence, with graphs and summary reports. All ingress access links except port 56 experience some congestion during BE traffic bursts. STM-1 ports 61 and 62 are heavily loaded, with minor congestion.

STM-1 ports 71 and 81 carry the same flows and are normally loaded. Port 82 experiences heavy congestion but no packet loss. All egress links are over-dimensioned except port 91, which is rate-limited to 60 Mb/s. Among the BE flows, flow "C" shows many very short bursts. In the subsequent runs we offer the same VPN traffic but introduce changes only at port 71 and port 82.

Figure 12: VPN Case Study – Initial Run Configuration

Traffic Analysis – Reader’s Guide

Traffic Analysis – Ingress Access Links

Since there is no AF flow, BE traffic can use the remaining port capacity. The jitter experienced by the EF flow is greater than on port 51. The reason is that traffic enters the router on a Fast Ethernet input port at a rate much higher than the rate of this E3 output port.

For the same packet size, frames take much longer to serialize on the output port than on the input port, so incoming packets queue for longer. AF traffic can borrow a little (3 Mb/s) of the remaining EF bandwidth, but when BE bursts occur, AF packets must be queued. When the port is temporarily congested, AF packets are queued because the AF traffic is well above the remaining capacity.

This happens for the same reasons as with port 53: the FE input ports are faster than this E3 output port, and 1200-byte packets are still being serialized when new input packets arrive.
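The queue buildup can be checked with simple arithmetic: while the E3 port drains one 1200-byte packet, roughly three such packets can arrive from a Fast Ethernet input (figures indicative):

```python
E3_BPS, FE_BPS = 34_368_000, 100_000_000
PKT_BYTES = 1200

drain_e3 = PKT_BYTES * 8 / E3_BPS  # ~279 µs to send one packet on E3
gap_fe = PKT_BYTES * 8 / FE_BPS    # ~96 µs between back-to-back FE arrivals

print(f"E3 drain: {drain_e3 * 1e6:.0f} us, FE arrival gap: {gap_fe * 1e6:.0f} us")
print(f"arrivals per departure: {drain_e3 / gap_fe:.1f}")  # ~2.9
# -> during a burst, a queue builds up and later packets see extra jitter
```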

Traffic Analysis – Backbone Links

However, there is a risk for the EF flows because the total EF throughput is above the EF bandwidth allocated to that port. Note that, thanks to the low port occupancy (30%) and the very short packet serialization time at these high port rates, each stream experiences very little jitter. The AF traffic is actually above the AF bandwidth, but it can borrow what is missing from the EF bandwidth because there is no EF traffic.
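A minimal sketch of this borrowing behaviour, assuming a simple allocation model rather than the simulator's actual class-based queuing algorithm: a class whose demand exceeds its allocation takes whatever the other classes leave unused on the port.

```python
def grant(port_bps, alloc, demand):
    """alloc/demand: dicts of per-class bandwidth in b/s."""
    granted = {c: min(alloc[c], demand[c]) for c in alloc}
    spare = port_bps - sum(granted.values())
    for c in alloc:  # let unsatisfied classes borrow the leftover
        extra = min(demand[c] - granted[c], spare)
        granted[c] += extra
        spare -= extra
    return granted

# AF demand above its allocation, no EF traffic: AF borrows from EF.
print(grant(34_000_000,
            {"EF": 10_000_000, "AF": 14_000_000, "BE": 10_000_000},
            {"EF": 0, "AF": 20_000_000, "BE": 10_000_000}))
# -> {'EF': 0, 'AF': 20000000, 'BE': 10000000}
```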

Traffic Analysis – Egress Access Links

There is more jitter than with ports 92 and 93, but less than with port 91, because even though the upstream port runs at a higher rate (STM-1), it is heavily loaded (the peaks) without being congested.

Traffic Analysis – EF Flows

Here we run the same VPN flows as before, but in the middle of the run we introduce an additional 50 Mb/s flow that transits from Px to Py via port 71. With this run case we have the same offered traffic as in the previous case, but a different situation with regard to TE traffic trunks and thus flow paths. With the same offered traffic as in the initial run, we are here in the situation that would result from a failure of port 71.

Since many packets are discarded at the end of the TCP sessions, the last packets suffer timeouts (the destinations receive no data and therefore no longer send the duplicate ACKs that would trigger fast recovery). This is a run with the same configuration and generated traffic as the previous run. With a statistical multiplexing system such as IP, it is impossible to define absolute bounds on jitter, given the unpredictability of the traffic served.

All these characteristics of IP traffic can easily be visualized and analyzed with the traces produced by short elementary tests (just a few packets) in IPVCoSS. This article has presented a case study based on the simulation of a fairly advanced network (SP backbone and VPN sites). As for packet scheduling on a port, the method itself, whatever name designates it, remains highly dependent on the vendor's implementation.

An appendix therefore explains, using traces, the class-based queuing method implemented by this simulator. Engineering the access links between a VPN site and the SP backbone is a key point in delivering the most stringent classes of service to VPN users. For example, an E3 leased line implies an E3 interface on which the serialization time of a packet is about three times longer than on a Fast Ethernet interface; it is therefore prone to jitter caused by the faster interfaces, upstream or downstream, in the backbone.

Two further factors matter: the structure of the access network, which may be shared by several customers of the access circuit and may introduce several elements to be crossed, such as ATM switches offering, for example, a multi-LAN service; and the ability of the scheduler in the CE and PE equipment to take the subscribed rate into account instead of the physical interface rate. Even with the best tuning of network and access connectivity, jitter still depends on traffic congestion and network load.

Availability can be significantly improved by techniques such as DS-TE, but there will still be an occasional impact, for example on a real-time video stream, from a path transition following a router or link failure within the network. However, the simulator can certainly complement testing by helping to structure the overall topology of the test platform and to find a first level of tuning for the QoS parameters.

Figure 18: Configuration for testing EF jitter bounds

IPVCoSS – TCP Congestion Control

With CONGESTION AVOIDANCE, cwnd is incremented by only one SMSS per RTT (round-trip time); the usual per-ACK formulation (RFC 5681) is cwnd += SMSS * SMSS / cwnd. The following figure shows the evolution of the usable window that controls the injection of data: usable window = min(cwnd, receiver window) - flight size. We will concentrate only on the first case, as it is closely related to the RED (Random Early Detection) queue management mechanism.
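For context, here is a hedged sketch of the per-ACK cwnd updates as RFC 5681 describes them (IPVCoSS internals may differ): exponential growth during slow start, roughly one SMSS per RTT during congestion avoidance.

```python
SMSS = 1000  # bytes, an illustrative segment size

def on_ack(cwnd: int, ssthresh: int) -> int:
    """Grow cwnd on each new ACK, per RFC 5681."""
    if cwnd < ssthresh:                        # slow start: +SMSS per ACK
        return cwnd + SMSS
    return cwnd + max(1, SMSS * SMSS // cwnd)  # congestion avoidance

cwnd, ssthresh = SMSS, 8 * SMSS
for ack in range(1, 13):
    cwnd = on_ack(cwnd, ssthresh)
    print(ack, cwnd)  # exponential until ssthresh, then ~linear per RTT
```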

We will use the following configuration to examine first the Slow Start mechanism as applied at the beginning of a TCP data transfer phase. The evolution of the cwnd and flightsize (fs) variables appears to the left of the sender-side events. TCP data segments are identified by the IP packet ID together with the first and last byte numbers of the segment.

The "push" event on the receiver side represents the communication, in sequence, of the segment content to the application. An acknowledgment indicates the number of the next byte the receiver expects to receive. The ssthresh variable is initialized to a very high value to trigger the slow start algorithm immediately.

As a result, four new segments are sent at 250 µs intervals, corresponding to a throughput of 32 Mb/s. The first graph, with a short scan interval of 1 ms, shows the successive bursts of TCP segments produced by the Slow Start algorithm. The second graph, with a longer sample period and therefore a larger scan interval, shows the more conventional ramp-up curve associated with slow start.
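The 32 Mb/s figure is easy to verify, assuming 1000-byte segments (an assumption consistent with the stated rate):

```python
seg_bits = 1000 * 8   # assumed 1000-byte segments
interval_s = 250e-6   # 250 microseconds between segments
print(seg_bits / interval_s / 1e6, "Mb/s")  # -> 32.0
```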

The trace runs from the packet drop until shortly after the Fast Recovery procedure completes. Segment A51 and the following segments cannot be delivered to the TCP user and are cached, while A50 is repeatedly announced in the ACKs as the expected segment. The first graph, with a short scan interval of 1 ms, shows in detail the impact of a dropped packet, represented by a red circle, in the middle of the flow.

The second graph, with a longer sample period and thus a larger scan interval, provides a broader overview of the impact of the fast retransmit and fast recovery algorithms.
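For reference, the entry step of fast retransmit / fast recovery as specified in RFC 5681, which underlies the behaviour in these graphs (a sketch, not the simulator's code):

```python
SMSS = 1000  # bytes, illustrative

def on_triple_dup_ack(flightsize: int) -> tuple[int, int]:
    """ssthresh and cwnd after 3 duplicate ACKs signal a lost segment."""
    ssthresh = max(flightsize // 2, 2 * SMSS)
    cwnd = ssthresh + 3 * SMSS  # inflate by the 3 segments that have left
    return ssthresh, cwnd       # the missing segment is retransmitted now

print(on_triple_dup_ack(16_000))  # -> (8000, 11000)
```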

Figure 19: TCP Sender Windows

Figure 3: Ethernet framing
Figure 3 and Figure 4 show respectively the framing structures with Ethernet and PPP. With Ethernet the overhead reaches 26 bytes, with a minimum interframe gap of 12 bytes.
Table 1: Serialization time according to port rate and packet size
