
The emergence of new technologies in mobile communication systems triggered the research community and industry to come up with data-optimized fourth generation (4G) technologies. One of the major technological enhancements in 4G is the shift from the circuit-switched networks of 3G to an all-IP network. An all-IP-based 4G wireless network has intrinsic advantages over its predecessors. The design goal of 4G technology was to deliver a new level of experience to users, giving them the freedom to select any service with the desired QoS level at affordable prices, anywhere, anytime. WiMAX and LTE were the first two commercially available 4G technologies, deployed in Scandinavia by TeliaSonera. Peak download and upload data transfer speeds of 4G LTE can reach up to 100 Mbps and 50 Mbps, respectively, while WiMAX offers peak data rates of 128 Mbps in the downlink and 56 Mbps in the uplink.

2.2 LTE Architecture

Figure 2.1: Overview of an LTE system

LTE systems are designed to meet high data rate targets (specified for the 3GPP Release 8 physical layer [30]) and the low latency demands of multimedia applications. To accomplish these objectives, LTE provides separate packet scheduling infrastructure for the two links, namely a downlink packet scheduler and an uplink packet scheduler, in the Medium Access Control (MAC) layer of the eNodeB [31]. In order to achieve a lower Peak-to-Average Power Ratio (PAPR) on the uplink channel, the uplink packet scheduler is constrained to select, for a single UE, only RBs from consecutive sub-channels in a contiguous manner [32]. This limitation significantly reduces flexibility in resource allocation, particularly when compared to downlink packet scheduling. The flexibility of the downlink packet scheduler, in turn, motivates us to implement a downlink resource allocation framework that can efficiently allocate RBs among RT VBR flows, considering their channel conditions and QoS requirements.
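
To make the contiguity constraint concrete, here is a minimal Python sketch (the function name is ours, purely illustrative) that checks whether a set of RB indices assigned to one UE occupies consecutive sub-channels, as the uplink scheduler requires:

```python
def is_contiguous_allocation(rb_indices):
    """Return True if the RBs assigned to one UE occupy consecutive
    sub-channels, as required on the LTE uplink to keep the PAPR
    of SC-FDMA transmission low."""
    if not rb_indices:
        return True  # an empty allocation is trivially valid
    ordered = sorted(rb_indices)
    # Every neighbouring pair of indices must differ by exactly one.
    return all(b - a == 1 for a, b in zip(ordered, ordered[1:]))

# A valid uplink allocation vs. a scattered (downlink-style) one:
print(is_contiguous_allocation([3, 4, 5, 6]))  # True
print(is_contiguous_allocation([1, 4, 5]))     # False
```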

In LTE, radio resources are allocated in both the frequency and time domains.

In the frequency domain, the total bandwidth is frequency multiplexed into sub-channels of 180 kHz each. LTE supports several total bandwidths, namely 1.4, 3, 5, 10, 15, or 20 MHz, which are multiplexed into 6, 15, 25, 50, 75, or 100 sub-channels, respectively. Each sub-channel is further time multiplexed into 1 ms long intervals called Transmission Time Intervals (TTIs).

Figure 2.2: Combined time-frequency multiplexed resource abstraction in LTE

A TTI is composed of two time slots of length 0.5 ms each. A time/frequency radio resource spanning one time slot in the time domain and one sub-channel in the frequency domain is called a Resource Block (RB) [33], as shown in Figure 2.2, and corresponds to the smallest radio resource unit that may be assigned to a UE for data transmission.
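
The bandwidth-to-sub-channel mapping and the two-slots-per-TTI structure described above can be captured in a few lines; the following Python sketch (the helper names are ours) simply encodes the figures given in the text:

```python
# Number of 180 kHz sub-channels for each LTE bandwidth (MHz),
# as listed in the text.
SUBCHANNELS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def rbs_per_tti(bandwidth_mhz):
    """RBs available in one 1 ms TTI: each sub-channel contributes
    two RBs per TTI because a TTI spans two 0.5 ms slots."""
    return 2 * SUBCHANNELS[bandwidth_mhz]

print(rbs_per_tti(10))  # 100 RBs per TTI at 10 MHz
```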

Resource allocation for each UE is usually based on the comparison of per-RB metrics. The $r$th RB is allocated to the $i$th user if its metric $m_{i,r}$ is the maximum, i.e., if it satisfies

$m_{i,r} = \max_{j} \{ m_{j,r} \}$    (2.1)

The value of the metric for a user depends on the resource allocation objectives.
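
A minimal Python sketch of the allocation rule in Equation (2.1), assuming a precomputed metric matrix metric[i][r] (for instance, the instantaneous achievable rate for a max-C/I objective, or the rate normalized by average throughput for proportional fairness):

```python
def allocate_rbs(metric):
    """Assign each RB r to the user i maximising metric[i][r],
    implementing m_{i,r} = max_j m_{j,r} from Eq. (2.1).

    metric: list of per-user lists; metric[i][r] is user i's metric
    on RB r. Returns a list mapping each RB index to the winner.
    """
    num_users = len(metric)
    num_rbs = len(metric[0])
    allocation = []
    for r in range(num_rbs):
        # argmax over users for this RB
        winner = max(range(num_users), key=lambda j: metric[j][r])
        allocation.append(winner)
    return allocation

# Two users, three RBs: user 0 wins RBs 0 and 2, user 1 wins RB 1.
print(allocate_rbs([[0.9, 0.2, 0.7],
                    [0.4, 0.8, 0.1]]))  # [0, 1, 0]
```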

At the physical layer, it is assumed that the total available transmission power at the eNodeB (equal to 43 dBm) is uniformly spread over all the available sub-channels. Each UE estimates the Signal-to-Interference-plus-Noise Ratio (SINR) of the received reference signals for all downlink sub-channels [34]. The estimated SINR values are then mapped to a corresponding set of CQI feedbacks (integers in the range 1 to 15) and forwarded to the eNodeB periodically using the Physical Uplink Control Channel (PUCCH), as shown in Figure 2.1. The Physical Uplink Shared Channel (PUSCH) carries data and signalling messages from the Uplink Shared Channel (UL-SCH, which belongs to the set of transport channels) and can sometimes also carry uplink control information. The Physical Random Access Channel (PRACH) carries random access transmissions from the Random Access Channel (RACH, also a transport channel) [35].

A CQI represents the quantized version of a corresponding SINR such that a certain maximum Block Error Rate (BLER) may be guaranteed for downlink data transmission (the default value is 10%) [36].
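
Conceptually, this mapping is a threshold quantizer. The Python sketch below illustrates the idea only; the threshold values are placeholders, since real thresholds are receiver-dependent and calibrated so that the 10% BLER target is met:

```python
import bisect

# Hypothetical SINR thresholds (dB) for CQI 1..15; actual values
# depend on the receiver and are tuned to the 10% BLER target.
CQI_THRESHOLDS_DB = [-6.7, -4.7, -2.3, 0.2, 2.4, 4.3, 5.9,
                     8.1, 10.3, 11.7, 14.1, 16.3, 18.7, 21.0, 22.7]

def sinr_to_cqi(sinr_db):
    """Map an estimated SINR to the highest CQI whose threshold it
    meets; a result of 0 means 'out of range' (no transmission)."""
    return bisect.bisect_right(CQI_THRESHOLDS_DB, sinr_db)

print(sinr_to_cqi(9.0))  # CQI 8 with these placeholder thresholds
```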

As shown in Figure 2.1, the downlink packet scheduler at the eNodeB is responsible for allocating RBs to active flows in each TTI. For each scheduled flow, the Adaptive Modulation and Coding (AMC) module selects a proper Modulation and Coding Scheme (MCS) based on the CQI feedback. The information about the allocated RBs and the selected MCS is transmitted to the UEs on the Physical Downlink Control Channel (PDCCH) [10]. The Physical Downlink Shared Channel (PDSCH) is the main downlink data-bearing channel, dynamically multiplexed in frequency and time among the UEs [33]. The Transport Block Size (TBS), that is, the amount of data that a flow can transmit at the MAC layer during a TTI using a sub-channel, is obtained from the selected MCS, taking into account the physical configuration proposed in [37]. For each active flow, the eNodeB maintains a buffer (queue) as the packet container for the flow. Each packet of a flow is time-stamped as it arrives in the queue and subsequently transmitted over the wireless channel according to the first-in first-out (FIFO) principle. At each TTI, the delays (waiting times since arrival) of all packets at the eNodeB are updated. An RT packet of the $i$th flow is dropped from the MAC queue if the packet scheduler fails to transmit it within a stipulated delay bound (denoted by $maxdelay_i$), which results in a packet loss.
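
The per-flow buffering and deadline-based dropping described above can be sketched as a toy Python model (class and variable names are ours; time is counted in TTIs):

```python
from collections import deque

class FlowQueue:
    """Toy per-flow MAC buffer: FIFO, with RT packets dropped once
    their waiting time exceeds the flow's delay bound maxdelay_i."""

    def __init__(self, max_delay_tti):
        self.max_delay = max_delay_tti  # maxdelay_i, in TTIs
        self.queue = deque()            # entries: (arrival_tti, size_bytes)

    def enqueue(self, now, size):
        self.queue.append((now, size))  # time-stamp on arrival

    def drop_expired(self, now):
        """Called each TTI: discard head-of-line packets whose waiting
        time exceeds the deadline; each discard is a packet loss."""
        dropped = 0
        while self.queue and now - self.queue[0][0] > self.max_delay:
            self.queue.popleft()
            dropped += 1
        return dropped

q = FlowQueue(max_delay_tti=10)
q.enqueue(now=0, size=1500)
print(q.drop_expired(now=11))  # 1: the packet missed its deadline
```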

We now present an overview of various radio resource management strategies that have been used in cellular environments.