j.
• If $i = j$, then the $(i, j)$th entry of Y is equal to $\frac{1}{z_{ii}} + \sum_{k \in N(i)} \frac{1}{z_{ik}}$, where $z_{ii}$ denotes the value of the resistor connected to the ground at node $i$ and $N(i)$ denotes the set of the neighboring nodes of node $i$ in the circuit.
Let N denote the vector of the currents injected into the nodes from the external devices. In order to relate N to V, we need to define some matrices:
• B: n×m incidence matrix associated with the circuit,
• Ye: m×m diagonal edge admittance matrix,
• Yg: n×n diagonal node-to-ground admittance matrix,
• We: m-dimensional edge current unit white noise vector (corresponding to the series resistors in the circuit),
• Wg: n-dimensional node-to-ground current unit white noise vector (corresponding to the shunt resistors in the circuit).
It turns out that $Y = B Y_e B^T + Y_g$ and
$$N = B\sqrt{Y_e}\,W_e + \sqrt{Y_g}\,W_g \qquad (2.5)$$
(a constant 2KT has been removed from the above modeling to simplify the presentation).
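To make these definitions concrete, the following is a minimal numerical sketch (not from the thesis) for a hypothetical three-node circuit with two series resistors and three node-to-ground resistors; the topology and all resistance values are illustrative assumptions. It assembles $Y = B Y_e B^T + Y_g$, checks the result against the entry-wise description of Y given above, and draws samples of N according to (2.5):

import numpy as np

# Hypothetical 3-node circuit (values are illustrative, not from the text):
# series resistors z12 (between nodes 1,2) and z23 (between nodes 2,3),
# node-to-ground resistors z11, z22, z33.
z12, z23 = 2.0, 4.0
z11, z22, z33 = 1.0, 5.0, 3.0

# n x m incidence matrix B: one column per edge, +1/-1 at its endpoints.
B = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
Ye = np.diag([1.0 / z12, 1.0 / z23])             # edge admittance matrix
Yg = np.diag([1.0 / z11, 1.0 / z22, 1.0 / z33])  # node-to-ground admittance matrix

Y = B @ Ye @ B.T + Yg                            # node admittance matrix

# Entry-wise check of the diagonal: Y_ii = 1/z_ii + sum_{k in N(i)} 1/z_ik.
assert np.isclose(Y[0, 0], 1/z11 + 1/z12)
assert np.isclose(Y[1, 1], 1/z22 + 1/z12 + 1/z23)

# Injected noise currents per (2.5): N = B*sqrt(Ye)*We + sqrt(Yg)*Wg, with
# We, Wg unit white noise.  The sample covariance of V = Y^{-1} N should
# approach Y^{-1} (the identity stated in the next paragraph).
rng = np.random.default_rng(0)
We = rng.standard_normal((2, 100_000))
Wg = rng.standard_normal((3, 100_000))
N = B @ np.sqrt(Ye) @ We + np.sqrt(Yg) @ Wg
V = np.linalg.solve(Y, N)
print(np.max(np.abs(np.cov(V) - np.linalg.inv(Y))))  # small for many samples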
In the circuit, N and V are related through $V = Y^{-1}N$. The vector N can be regarded as the noisy input of the network, which makes V a random vector. It follows from equation (2.5) and $V = Y^{-1}N$ that the covariance of V is equal to $Y^{-1}$.
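This covariance identity can be verified in one line from (2.5), assuming (as is standard for independent physical noise sources) that $W_e$ and $W_g$ are uncorrelated unit white noise vectors and using the symmetry of Y:
$$\mathbb{E}\!\left[NN^T\right] = B\sqrt{Y_e}\,\mathbb{E}\!\left[W_eW_e^T\right]\sqrt{Y_e}\,B^T + \sqrt{Y_g}\,\mathbb{E}\!\left[W_gW_g^T\right]\sqrt{Y_g} = B Y_e B^T + Y_g = Y,$$
$$\mathrm{Cov}(V) = Y^{-1}\,\mathbb{E}\!\left[NN^T\right]Y^{-T} = Y^{-1} Y Y^{-1} = Y^{-1}.$$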
Chapter 3
Buffering Dynamics and Stability of Internet Congestion Control
Many existing fluid-flow models of Internet congestion control algorithms ignore the effects of buffers on the data flows for simplicity. In particular, they assume that all links in the path of a flow are able to see the original source rate. However, a fluid flow in practice is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this chapter, a more accurate model of the behavior of the network under a congestion controller is derived, which takes into account the effect of buffering on output flows. It is shown how this model can be deployed for some well-known service disciplines such as first-in first-out and generalized weighted fair queueing.
Based on the derived model, the dual and primal-dual algorithms are studied under the common pricing mechanisms, and it is shown that these algorithms can become unstable.
Sufficient conditions are provided to guarantee the stability of the dual and primal-dual algorithms. Finally, a new pricing mechanism is proposed under which these congestion control algorithms are both stable.
3.1 Introduction
In computer networks, queues build up when the input rates are larger than the available bandwidth. This causes congestion, leading to packet loss and long delays. Congestion control techniques aim to adjust the transmission rates of competing users in such a way that the network resources are shared efficiently. Internet congestion control has two main components: (i) transmission control protocol (TCP) and (ii) active queue management
(AQM). TCP adapts the sending rate (or window size) of each user in response to the congestion signal from its route, whereas AQM provides congestion information to the users by manipulating the packets on each router’s queue. DropTail and random early detection (RED) are two examples of AQM schemes [45]. TCP-Reno [46, 47], TCP-NewReno [48]
and SACK TCP [49] are different versions of TCP congestion control protocols that have been deployed in the Internet. Mathematical models of these algorithms can be found in [50, 51, 52]. Congestion control protocols are based on either explicit feedback (which requires explicit communications between sources and links) or implicit feedback (which only requires end-to-end communications). For instance, packet loss in TCP Reno and queueing delay in TCP Vegas [53] are two congestion signals that can be provided to the users without needing explicit communication. However, explicit congestion notification (ECN), an extension of TCP, allows each router to write congestion information into the IP header of a packet, so that the congestion signal is explicitly communicated to the users [54].
Since the seminal works [15, 16], a great deal of effort has been devoted to the modeling and synthesis of Internet congestion control. This is often performed for a fluid model of the network by solving a proper resource allocation problem in a distributed way. Different resource allocation algorithms, such as the primal, dual and primal-dual algorithms, have been proposed in the literature, which enable every user to find its optimal transmission rate asymptotically using local feedback from the network. From a dynamical system perspective, each of these congestion control algorithms corresponds to an autonomous distributed system that is globally asymptotically stable and whose unique equilibrium point is a solution to the resource allocation problem [17, 18].
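As a point of reference for the later analysis, a standard form of the dual algorithm in this framework has each source $s$ choose its rate from its path price, $x_s = U_s'^{-1}(q_s)$, while each link $l$ integrates its excess rate, $\dot p_l = \gamma (y_l - c_l)$, projected so that $p_l \ge 0$. The following discrete-time sketch, which ignores the buffering effect studied in this chapter, uses a hypothetical two-link, three-flow topology, logarithmic utilities, and a step size chosen purely for illustration (none of these are taken from the thesis):

import numpy as np

# Classical dual congestion control sketch (Kelly/Low-style fluid model,
# without buffering effects).  Topology, utilities and step size are
# illustrative assumptions.
R = np.array([[1, 1, 0],        # routing matrix: link 1 carries flows 1, 2
              [1, 0, 1]])       #                 link 2 carries flows 1, 3
c = np.array([1.0, 2.0])        # link capacities
w = np.array([1.0, 1.0, 1.0])   # utilities U_s(x) = w_s * log(x)
gamma = 0.05                    # price step size

p = np.ones(2)                  # link prices (dual variables)
for _ in range(5000):
    q = R.T @ p                             # aggregate price on each route
    x = w / np.maximum(q, 1e-6)             # x_s = U_s'^{-1}(q_s) = w_s / q_s
    y = R @ x                               # aggregate rate at each link
    p = np.maximum(p + gamma * (y - c), 0)  # price update, kept nonnegative

print("source rates:", x)                      # approx. [0.42, 0.58, 1.58]
print("link rates:", R @ x, "capacities:", c)  # link rates approach capacities

At the equilibrium the link rates match the capacities and the rates solve the underlying resource allocation problem, which is the global stability property referred to above.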
Despite the progress in the analysis and synthesis of Internet congestion control, an important modeling issue is often neglected for the sake of simplicity. Specifically, most existing fluid models of congestion control assume that all links in the path of a flow see the original source rate. Nonetheless, a fluid flow in practice is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate.
Reference [55] acknowledges such buffering effects on TCP/AQM networks, incorporating the model in [56] to account for the manner in which competing flows pass through congested links. In [57] and [58], the linear stability of such networks was analyzed for RED and PI AQM. Reference [59] proposes a deterministic nonlinear dynamic queue model and
studies how instability of deterministic fluid-flow models for congestion control analysis leads to a significant increase in the variance of the flow in stochastic networks. Although it is possible to study the buffering effects for any given network through simulations, it is very advantageous to develop a fundamental theory, for an arbitrary service discipline, relating the buffering effects to various parameters of the network (say, the routing matrix or the link capacities). Our goal is to derive a closed-form model for the buffer dynamics, based on which the stability of congestion control algorithms can be deduced via simple conditions.
The main objective of this chapter is to study congestion control taking this buffering effect into account. To this end, a general model is derived to account for the time evolution of the buffer sizes. This model can be used for different service disciplines such as weighted fair queueing (WFQ) [19, 20] and first-in first-out (FIFO). Then, the dual and primal-dual algorithms are studied, where the pricing mechanism is considered to be based on either queueing delays or queue sizes. It is shown that although these algorithms are stable when the buffering effect is ignored, they can become unstable otherwise. Several issues arising from the precise modeling of buffers are investigated here. A new pricing mechanism is also proposed to guarantee the global stability of dual and primal-dual algorithms.1