
2.3 A Brief Survey of Scheduling Algorithms

2.3.2 Real-time Scheduling on Multiprocessor Systems

Traditionally, scheduling of real-time applications (termed tasks) on multiprocessors makes use of either a partitioned or global approach [35] (as discussed in Section 2.2).

2.3.2.1 Partitioned Scheduling Schemes

In a partitioned approach, each task is assigned to a single designated processor on which it executes for its entire lifetime. This approach has the advantage of transforming the multiprocessor scheduling problem into a uniprocessor one. Hence, well known optimal uniprocessor scheduling approaches such as Earliest Deadline First (EDF), Rate Monotonic (RM) [26], etc. may be used. In addition, the overheads of inter-processor task migrations and local cache misses are far smaller than in global scheduling. Finally, because the task-to-processor mapping (which task to schedule on which processor) need not be decided globally at each time-slot, the scheduling overhead associated with a partitioning strategy is lower than that associated with a non-partitioning strategy [8, 9, 27]. However, a major drawback of partitioning is that, in the worst case, no more than half the system capacity may be utilized in order to ensure that all timing constraints are met [8].

Optimal assignment of tasks to processors in partitioning is a bin-packing problem, which can be stated as follows: given a list L of n items with sizes {a1, a2, ..., an}, where ai ∈ (0, 1] (ai represents the weight of task i), pack these items into a minimal number of unit-capacity bins. The problem is known to be NP-hard, and several polynomial time heuristics have been proposed to solve it.

The performance of any bin-packing algorithm is evaluated by a measure called the competitive-ratio (R), which may be defined as follows:

R = lim sup_{n→∞} A(L)/OPT(L),

where L is a list of items {a1, a2, ..., an} of size n, A(L) is the number of bins required by the bin-packing algorithm A when list L is used, and OPT(L) is the best off-line number of bins required. It is easy to see that the use of an infinitely long list in the above measure gives us the worst-case performance ratio; however, there may be many other, smaller lists that also yield the worst-case ratio. We consider below some of the well known approaches [79, 107].

Next Fit (NF): This is one of the simplest of the known heuristics. It starts from the first bin and defines it as the active bin. If the next incoming item fits the active bin, it places it in that bin. Otherwise, it creates a new bin, makes it the new active bin, and packs the item into this bin. Thus, at any given time, there is only one active bin. The NF algorithm has a competitive-ratio of 2.

First Fit (FF): Given a list of bins, the FF algorithm assigns the next item to the first bin that can accept it.

Best Fit (BF): The BF algorithm assigns the next item to the bin that can accept it and will have the minimum remaining spare capacity after the addition.

Worst Fit (WF): WF is the opposite of BF; it places a new item into the bin that will have the largest spare capacity left over after the addition. Of the algorithms discussed above, FF and BF have a competitive-ratio of 1.7, while WF, like NF, has a competitive-ratio of 2.

First Fit Decreasing (FFD): FFD is the same as FF, but the items are considered in non-increasing order of their sizes. In a similar fashion, Best Fit Decreasing (BFD) and Worst Fit Decreasing (WFD) can also be defined. All of these algorithms (FFD, BFD and WFD) have a competitive-ratio of 1.22. Although this competitive-ratio of 1.22 is the best among all the algorithms, their fundamental requirement, that the items of list L be available in non-increasing order, cannot be satisfied in an on-line setting. Thus, FFD, BFD and WFD are generally used as off-line strategies.
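The heuristics above can be sketched in a few lines of Python. This is an illustrative implementation of our own (not from the cited works), treating task weights as item sizes and representing each open bin by its remaining capacity:

```python
def first_fit(items, capacity=1.0):
    """Pack each item into the first open bin with enough spare capacity (FF)."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] = free - item
                break
        else:  # no existing bin fits: open a new one
            bins.append(capacity - item)
    return len(bins)

def first_fit_decreasing(items, capacity=1.0):
    """FFD: consider items in non-increasing order, then apply First Fit."""
    return first_fit(sorted(items, reverse=True), capacity)
```

For example, on the list [0.4, 0.4, 0.4, 0.6, 0.6, 0.6], FF opens four bins while FFD packs the same items into the optimal three, which illustrates why the decreasing variants achieve a better competitive-ratio.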

2.3.2.2 Global and Semi-partitioned Scheduling Schemes

Unlike partitioning, global and semi-partitioned scheduling schemes allow the migration of tasks from one processor to another during execution. Over the years, a few optimal global schemes such as Pfair, ERfair, etc., and optimal semi-partitioned techniques like DP-Fair, have been proposed. All of these scheduling approaches allow the possibility of utilizing the entire capacity of all processors in the system, resulting in high resource utilization. Additionally, they possess many attractive features like flexible resource management, dynamic load distribution, fault resilience, etc. [107].

Most of these global and semi-partitioned scheduling strategies are based on the idea of proportional rate based execution progress for all tasks. Typically, such proportional fairness can be achieved by providing guarantees of the following form for each task:

complete X units of execution for application A out of every Y time units.

Proportionate fair (Pfair) scheduling, introduced by Baruah et al. [16], is known to be the first optimal global scheduler for real-time repetitive tasks with implicit deadlines on a multiprocessor system. Later, Anderson et al. [5] presented a work-conserving version of Pfair, called the Early-Release fair (ERfair) scheduler, which never allows a processor to be idle in the presence of runnable/ready tasks. Since these global schemes attempt to maintain fair proportional progress for all tasks at all time slots, they may incur unrestricted preemption/migration overheads. More recently, DP-Fair [71], an approximate proportional fair scheduler with a more relaxed execution rate constraint, was proposed. DP-Fair is a semi-partitioned scheduling technique which allows restricted preemptions/migrations.

Now, we discuss two underlying fair scheduling schemes used in this thesis, in detail.

ERfair Scheduling [5]: Consider a set of periodic tasks {T1, T2, ..., Tn}. A task, say Ti, may arrive at any time within the schedule length, execute for an arbitrary number of instances and then depart. Each instance of Ti has a computation requirement of ei time units, required to be completed within a period of length pi time units. ERfair schedulers need to manage their task allocation and preemption in such a way that not only are all task deadlines met, but also each task is executed at a consistent rate proportional to its task weight ei/pi. Typically, ERfair algorithms consider discrete time lines and divide the tasks into equal-sized subtasks. Subtasks are scheduled appropriately to ensure fairness.

The fairness accuracy is generally defined in terms of the lag between the amount of time that has been actually allocated to a task and the amount of time that would be allocated to it in an ideal system with a time quantum approaching zero. Formally, the lag of task Ti at time t, denoted lag(Ti, t), is defined as follows:

lag(Ti, t) = (ei/pi) ∗ t − allocated(Ti, t), (2.1)

where allocated(Ti, t) is the amount of processor time allocated to Ti in [0, t). A schedule is ERfair iff:

(∀ T, t :: lag(T, t) < 1) (2.2)

That is, Equation 2.2 implies that the under-allocation associated with each task must always be less than one time quantum. A subtask in an ERfair system becomes eligible for execution immediately after its previous subtask completes execution. Obviously, for such a criterion to be guaranteed, we must have

∑_{i=1}^{n} ei/pi ≤ m (2.3)

where m denotes the number of identical processors in the system. Equation 2.3 states that the total workload (the summation of task weights) should be less than or equal to the full system capacity for a task set to be scheduled effectively. Equations 2.1, 2.2 and 2.3 are taken from [5]. In our energy-aware work (Chapter 3), we use ERfair as the underlying scheduling mechanism.

DP-Fair Scheduling [71]: Unlike ERfair, DP-Fair [71] is an approximate proportional fair scheduler with a more relaxed execution rate constraint. It is an optimal algorithm and enables full resource utilization. That is, given n tasks and m processors, schedulability is ensured provided

∑_{i=1}^{n} ei/pi ≤ m (2.4)

where ei and pi denote the worst-case execution time and period of a task Ti, respectively.

In DP-Fair, time is partitioned into slices, demarcated by the deadlines of all jobs in the system. Within a time slice, each task is allocated a workload equal to its proportional fair share and assigned to one or two processors for scheduling. Job subtasks within a slice are typically scheduled using variations of traditional fairness-ignorant schemes such as Earliest Deadline First (EDF [76]). Through such a scheduling strategy, DP-Fair is able to deliver optimal resource utilization while enforcing strict proportional fairness (ERfairness) only at period/deadline boundaries. DP-Fair is a semi-partitioned scheduling technique which allows at most m − 1 task migrations and n − 1 preemptions within a time slice, and thus incurs much lower overheads compared to ERfair. In our fault-tolerant work (Chapter 4), we use a discrete approximation of DP-Fair as the underlying scheduling mechanism.
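The per-slice allocation step can be sketched as follows. This is an illustrative outline under our own naming (not the implementation of [71]): slices are demarcated by the distinct job deadlines, and within each slice every task receives a workload equal to its weight times the slice length; dispatching those workloads onto the m processors (e.g., via an EDF-like scheme) is left out of the sketch.

```python
def dp_fair_slices(tasks, deadlines):
    """Split time into slices at the distinct deadlines and compute each
    task's proportional fair share (weight * slice length) per slice."""
    boundaries = sorted(set(deadlines))
    slices = list(zip([0] + boundaries[:-1], boundaries))
    plan = []
    for start, end in slices:
        length = end - start
        plan.append([(ei / pi) * length for ei, pi in tasks])
    return slices, plan
```

For two tasks of weight 1/2 each, say (ei, pi) = (2, 4) and (3, 6), with deadlines at 4, 6, 8 and 12, the first slice [0, 4) allocates 2 time units of work to each task, matching its proportional fair share over that interval.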
