The rate of execution of every task is the same on all processors. Hence, the worst-case execution time of a task is not affected by the particular processor on which it executes.
(b) Uniform: The processors are identical but run at different frequencies. Hence, all processors can execute all tasks, but the speed at which a task executes, and therefore its worst-case execution time, varies with the processor on which it runs.
(c) Heterogeneous: The processors are different, i.e., they may have different configurations, frequencies, cache sizes or instruction sets. Some tasks may therefore not be able to execute on some processors of the platform at all, while their execution speeds (and hence their worst-case execution times) may differ across the remaining processors.
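To make the distinction concrete, the following sketch (not taken from the thesis; task names, processor speeds and WCET values are illustrative assumptions) shows how a task's worst-case execution time (WCET) would be modelled under each platform type.

```python
# (a) Identical: one WCET per task, valid on every processor.
wcet_identical = {"t1": 4.0, "t2": 7.0}

# (b) Uniform: processors differ only in speed, so a task's WCET scales
#     inversely with the speed of the processor it runs on.
speeds = {"p1": 1.0, "p2": 2.0}          # relative processor speeds
work = {"t1": 4.0, "t2": 7.0}            # work expressed at unit speed

def wcet_uniform(task, proc):
    return work[task] / speeds[proc]

# (c) Heterogeneous: an arbitrary task-processor WCET matrix; None marks
#     a task that cannot execute on that processor at all.
wcet_hetero = {
    ("t1", "p1"): 4.0, ("t1", "p2"): 9.0,
    ("t2", "p1"): None, ("t2", "p2"): 5.0,
}
```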
2.2 A Classification of Real-time Scheduling Approaches
In contrast, online schedulers make scheduling decisions at runtime, based on information about the tasks that have arrived so far. Although they are flexible and adaptive, they may incur significant overheads because of runtime processing. They are, however, indispensable in systems which do not have enough information before run-time for a schedule to be computed statically. Online scheduling is also referred to as dynamic or runtime scheduling.
Clock-Driven Vs. Event-Driven Scheduling: In clock-driven schedulers, scheduling decisions are made at specific time instants which are chosen a priori, before the system begins execution [84]. Typically, in a system that uses clock-driven scheduling, all parameters of the job set are fixed and known. This is also called a time-driven scheduling approach. A table-driven scheduler is an example of a clock-driven approach.
Here, the schedule is generated off-line and stored in a table. At each scheduling decision time, the system timer kicks off execution of a segment of a task's code by referring to the table at run time.
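As an illustration, the following is a minimal sketch of such a table-driven scheduler; the schedule table, major-cycle length and task functions are illustrative assumptions, not taken from the thesis.

```python
import time

def task_a(): print("A runs")
def task_b(): print("B runs")

# Off-line generated table: (decision time within the major cycle, task).
MAJOR_CYCLE = 4.0
TABLE = [(0.0, task_a), (1.0, task_b), (2.0, task_a), (3.0, task_b)]

def cyclic_executive(cycles=2):
    """Replay the stored schedule for a fixed number of major cycles."""
    start = time.monotonic()
    for cycle in range(cycles):
        for offset, task in TABLE:
            release = cycle * MAJOR_CYCLE + offset
            delay = release - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)   # wait for the next decision time
            task()                  # the timer kicks off this task segment

cyclic_executive()
```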
In the event-driven approach, scheduling points are defined by events such as job releases or completions. Generally, these schedulers assign a priority to each task, and at each scheduling instant the highest-priority task present in the ready queue gets hold of the resource (hence, they are also called priority-driven schedulers). These algorithms leave a resource idle only when no job requiring the resource is ready for execution. The Rate Monotonic (RM) [83, 84] and Earliest Deadline First (EDF) [83, 84] algorithms are examples of the event-driven approach. Event-driven schedulers are more capable than clock-driven schedulers because they can feasibly schedule some task sets that clock-driven schedulers cannot. They are also more flexible, because they can feasibly schedule sporadic and aperiodic tasks in addition to periodic ones, whereas clock-driven schedulers can only handle periodic tasks.
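A minimal sketch of an event-driven dispatcher is given below, using EDF as the priority rule (an earlier absolute deadline means a higher priority); the job parameters are illustrative assumptions.

```python
import heapq

ready = []   # min-heap of (absolute deadline, job id): EDF priority order

def on_release(deadline, job_id):
    """Scheduling point: a job release inserts the job into the ready queue."""
    heapq.heappush(ready, (deadline, job_id))

def dispatch():
    """Scheduling point: pick the highest-priority (earliest-deadline) job;
    the processor idles only if no job is ready."""
    return heapq.heappop(ready) if ready else None

on_release(10, "J1")
on_release(4, "J2")
print(dispatch())   # (4, 'J2'): the job with the earliest deadline runs
```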
This thesis primarily deals with independent periodic real-time task sets which are scheduled using variants of online dynamic priority scheduling policies on both homogeneous and heterogeneous multi-core systems.
Static Priority Vs. Dynamic Priority Scheduling: The distinction between static priority and dynamic priority scheduling is based on the priority management policy adopted by a priority-driven scheduler. In the static priority scheme, each task is assigned an integer priority value that remains fixed for the lifetime of the task. Whenever a task is made ready to run, the active task with the highest priority commences or resumes execution, preempting the currently executing task if need be. In dynamic priority schedulers, priority values may change at run time. Rate Monotonic (RM) [83, 84] and Deadline Monotonic (DM) [6, 84] are examples of static priority scheduling, while Earliest Deadline First (EDF) [83, 84] and Least Slack Time First (LST) [84] are examples of dynamic priority scheduling.
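The following sketch contrasts the two schemes: under RM a task's priority is derived once from its fixed period (a shorter period means a higher priority) and never changes, whereas under EDF each job's priority is its absolute deadline and hence varies from job to job. Task parameters are illustrative assumptions, with relative deadlines taken equal to periods.

```python
tasks = {"t1": {"period": 5}, "t2": {"period": 8}}

def rm_priority(task):
    """Static priority: computed once from the fixed period and valid
    for the task's entire lifetime."""
    return 1.0 / tasks[task]["period"]

def edf_priority(task, release_time):
    """Dynamic priority: recomputed for every job; the priority key is
    the job's absolute deadline (smaller = more urgent)."""
    return release_time + tasks[task]["period"]

print(rm_priority("t1"), rm_priority("t1"))          # identical at all times
print(edf_priority("t1", 0), edf_priority("t1", 5))  # changes per job: 5, 10
```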
Partitioning Vs. Global Scheduling: In the context of multiprocessor scheduling policies, a global scheduler is one which puts all the ready tasks in a single queue and selects the highest priority task at each invocation irrespective of which processor is being scheduled. Thus, a task is allowed to execute on any processor, even when resuming after having been preempted. In a purely partitioned approach, on the other hand, the set of tasks is partitioned into as many disjoint subsets as there are processors available, and each such subset is associated with a unique processor [37, 88, 122]. Thus, all instances of a task get executed on the same processor.
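The sketch below contrasts the two approaches: a first-fit partitioning heuristic (using the uniprocessor EDF feasibility bound of total utilization at most 1 per core, a common textbook choice assumed here) versus a global dispatcher that draws the m highest-priority jobs from a single shared queue. All task and job parameters are illustrative.

```python
def partition_first_fit(utils, m):
    """Partitioned: statically assign each task to one of m processors;
    every job of a task then executes on that processor."""
    load = [0.0] * m
    mapping = {}
    for task, u in utils.items():
        for p in range(m):
            if load[p] + u <= 1.0:       # uniprocessor EDF bound per core
                load[p] += u
                mapping[task] = p
                break
        else:
            raise ValueError(f"{task} does not fit on any processor")
    return mapping

def global_dispatch(ready, m):
    """Global: one shared queue; the m highest-priority ready jobs run on
    whichever processors are free (lower key = higher priority here)."""
    return sorted(ready)[:m]

print(partition_first_fit({"t1": 0.6, "t2": 0.5, "t3": 0.4}, m=2))
print(global_dispatch([(3, "J1"), (1, "J2"), (2, "J3")], m=2))
```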
The main advantage of partitioning is that it reduces the multiprocessor scheduling problem to a set of uniprocessor ones. On each processor, a separate well-known uniprocessor scheduler such as Rate Monotonic (RM) or Earliest Deadline First (EDF) may easily be applied. In addition, the overheads of inter-processor task migrations and local cache misses are far smaller than under global scheduling.
Finally, because the task-to-processor mapping (which task to schedule on which processor) need not be decided globally at each time-slot, the scheduling overhead associated with a partitioning strategy is lower than that associated with a non-partitioning strategy [10, 11, 37]. On the other hand, even though the generic global scheduling methodology may have a higher scheduling complexity and cause an unrestricted number of migrations and cache misses, it possesses many attractive features like flexible