
heuristic schedulers named COST (A Cluster-Oriented Scheduling Technique for Heterogeneous Multi-cores) and HETERO-SCHED (A Low-overhead Heterogeneous Multi-core Scheduler for Real-time Periodic Tasks). COST works in three phases to provide an effective low-overhead heuristic scheduling strategy and restricts the number of migrations for an individual task within a time-slice. On the other hand, HETERO-SCHED is a deadline-partitioning based heuristic strategy which works in two phases to the same end and allows unrestricted inter-core task migrations.

• Chapter 5: Energy-Aware scheduling on heterogeneous multi-core systems In this chapter, we propose a DVFS-based fully-migrative energy-efficient strategy called HEALERS, for scheduling periodic real-time tasks on heterogeneous platforms. HEALERS is composed of two major components: i) COMPUTE-SCHEDULE and ii) COMPUTE-EA-SCHEDULE. These two components work in unison to deliver not only appreciable energy savings but also very high resource utilization.

• Chapter 6: Temperature-Aware resource allocation strategy for real-time systems This chapter proposes a two-level temperature-aware scheduling strategy for multi-core systems called TARTS. At the first level, time-slices are determined based on the deadlines of tasks, and task execution proceeds time-slice by time-slice in a proportionally fair manner. At any time-slice boundary, the shares of all tasks to be executed in the next time-slice are determined. The second level performs intra-time-slice schedule generation.

• Chapter 7: Conclusion and Future Works

The thesis concludes with this chapter. We discuss the work in progress, possible extensions and future work that can be done in this area.

Chapter 2

Energy and Temperature Aware RT Scheduling: Background and State-of-the-Art

Energy has become a first-class design criterion in many of today's real-time embedded systems, which are often powered by limited energy sources such as batteries. Reduction of energy consumption is essential to prolong battery life in these systems. Hence, a lot of research has been conducted towards their power management at various levels of abstraction, from hardware and firmware to the architectural, system and even application levels. Apart from energy, temperature also often plays a critical role in the efficient performance of many real-time and embedded devices prevalent today. An uncontrolled rise in temperature beyond a safe threshold limit not only increases cooling costs but may also reduce system efficiency and life span.

In this chapter, we present a brief introduction to the definitions related to real-time systems and their task models. We first provide an overview of the structure of real-time systems. Then, various scheduling algorithms for both homogeneous and heterogeneous platforms are discussed. Next, we present some energy- and temperature-aware scheduling algorithms for real-time systems.

2.1 Real-time Systems

Typically, real-time systems are composed of the following layers [101]:

• An application layer, which is composed of a set of applications that require execution in the system.

• A real-time scheduler, which takes the scheduling decisions and provides services to the application layer.

• A hardware platform, which includes the processors/cores (among other things such as memories, communication networks, etc.).

We will now present each of these layers in detail and introduce the theoretical models that enable researchers to analyze these systems and design efficient real-time schedulers to execute the application tasks on the hardware platform.

2.1.1 The Application Layer

The application layer contains a set of applications that the system needs to execute. In real-time systems, the applications are often composed of a set of recurrent tasks. Each such task may represent a piece of code (i.e., a program) which is triggered by external events that may occur in its operating environment. Each execution of the task is referred to as a task instance or a job. We now present the set of definitions related to a real-time task.

2.1.1.1 A Real-time Task Model

Figure 2.1: Temporal Characteristics of real-time task Ti

Formally, a real-time task (denoted by Ti; shown in Figure 2.1) can be characterized by the following parameters:


1. Arrival time (ai) is the time at which a task becomes ready for execution. It is also referred to as the request time or release time of the task.

2. Start time is the time at which a task starts its execution.

3. Execution time (ei) is the time required by the processor to finish the computational demand of a task without interruption.

4. Finishing time is the time at which a task finishes its execution in the system.

5. Deadline is the time before which a task should meet its execution requirement. If it is computed with respect to the system start time (at 0), it is called an absolute deadline. If it is computed with respect to the task's arrival time, it is called a relative deadline.

6. Slack time or laxity is the maximum time a task can be delayed after its activation to complete within its deadline: di − ei.

7. Priority is the importance given to a task in the context of the schedule at hand.
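To make these parameters concrete, the following minimal Python sketch models a task with the attributes above. The class and field names are ours, chosen purely for illustration; they are not part of any standard library or of the schedulers proposed in this thesis.

    from dataclasses import dataclass

    @dataclass
    class Task:
        arrival: float      # ai: release/request time
        execution: float    # ei: worst-case execution requirement
        deadline: float     # di: relative deadline
        priority: int = 0   # importance within the schedule at hand

        def absolute_deadline(self) -> float:
            # relative deadline measured from the system start time (t = 0)
            return self.arrival + self.deadline

        def laxity(self) -> float:
            # slack time: maximum delay after activation that still allows
            # the task to complete within its deadline (di - ei)
            return self.deadline - self.execution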

A real-time task Ti can be classified as periodic, aperiodic or sporadic based on the regularity of its activation [35]:

1. Periodic tasks consist of an infinite sequence of identical activities, called instances or jobs, that are always separated by a fixed inter-arrival time. The activation time of the first periodic instance is called the phase (φi). The activation time of the kth instance is given by φi + (k − 1)pi, where pi is the activation period (fixed inter-arrival time) of the task; a short sketch after this list illustrates this rule.

2. Aperiodic tasks also consist of an infinite sequence of identical jobs. However, their activations are not regularly interleaved.

3. Sporadic tasks consist of an infinite sequence of identical jobs with consecutive jobs separated by a minimum inter-arrival time.
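As a small illustration of the periodic activation rule above, the following sketch (the function name is ours) enumerates the first k release times of a periodic task with phase φi and period pi:

    def release_times(phase: float, period: float, k: int) -> list:
        # The activation time of the j-th instance is phase + (j - 1) * period.
        return [phase + (j - 1) * period for j in range(1, k + 1)]

    # A periodic task with phase 2 and period 5 releases jobs at t = 2, 7, 12.
    assert release_times(2, 5, 3) == [2, 7, 12]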

Following are the three levels of constraint related to the deadline of a task:

1. Implicit Deadline: All task deadlines are equal to their periods (di = pi).

2. Constrained Deadline: All task deadlines are less than or equal to their periods (di ≤ pi).

3. Arbitrary Deadline: All task deadlines may be less than, equal to, or greater than their periods.
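These three levels can be checked mechanically. A hypothetical helper sketch, assuming each task is given as a (deadline, period) pair:

    def classify_deadlines(tasks):
        # tasks: iterable of (deadline, period) pairs
        if all(d == p for d, p in tasks):
            return "implicit"       # di = pi for every task
        if all(d <= p for d, p in tasks):
            return "constrained"    # di <= pi for every task
        return "arbitrary"          # some deadline exceeds its period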

We now provide a few other definitions related to tasks and task sets.

Utilization: The utilization of an (implicit-deadline) task Ti is given by ui = ei/pi. In the case of a constrained deadline, ui = ei/di.

Hyperperiod: It is the minimum interval of time after which the schedule repeats itself. For a set of periodic tasks (with periods p1, p2, . . . , pn) activated simultaneously at t = 0, the hyperperiod is given by the least common multiple of the periods.
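For integer task parameters, both quantities are straightforward to compute. A minimal sketch using Python's math.lcm (available from Python 3.9), assuming implicit deadlines:

    from math import lcm

    def utilization(execution: int, period: int) -> float:
        # ui = ei / pi for an implicit-deadline task
        return execution / period

    def hyperperiod(periods) -> int:
        # least common multiple of all task periods
        return lcm(*periods)

    # Tasks with periods 4, 5 and 10 repeat their schedule every 20 time units.
    assert hyperperiod([4, 5, 10]) == 20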

Static and Dynamic Task System: In a static task system, the set of tasks to be executed on the platform is completely defined before the system starts running. In a dynamic task system, some tasks may experience modifications of their properties, while other tasks leave or join the executed task set at run-time.

2.1.2 A Real-time Scheduler

A real-time scheduler acts as an interface between applications and hardware platform. It configures and manages the hardware platform (e.g., manages hardware interrupts, hardware timers, etc.). More importantly, it schedules the tasks using a real-time scheduling algorithm. The set of rules that, at any time, determines the order in which tasks are executed is called a scheduling algorithm.

Given a set of tasks, T = {T1, T2, ..., Tn}, a schedule is an assignment of tasks onto available processors, so that each task is executed until completion. A schedule is said to be feasible if all tasks can be completed according to a set of specified constraints. A set of tasks is said to be schedulable if there exists at least one algorithm that can produce a feasible schedule. A scheduling algorithm is said to be optimal if it is able to find a feasible schedule whenever one exists.

In many cases, optimal schedules are difficult to determine due to complex objectives and/or one or more complicated necessary conditions, especially when the number of tasks/resources becomes large. Such complicated optimal solution strategies therefore often do not scale to large systems, or to scenarios where only partial information about online task/performance behaviour is available. When optimal solutions are hard to derive, it is often possible to obtain efficient solutions by analysing the problem structure and devising a faster, greedier mechanism (compared to the optimal strategy) which strategically explores only a part of the overall solution space. Although such strategies may not guarantee optimality, or even bounds on the maximum deviation from optimality, they often provide good/satisfactory solutions in practical situations. Such solution mechanisms are usually referred to as heuristic strategies. In a recursive scheduling algorithm, the overall task scheduling in the system is broken into a group of time intervals and a schedule is then prepared for each interval. In a work-conserving scheduling algorithm, the processor is never kept idle while there exists a task waiting for execution on the processor.
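To ground the notions of feasibility and work-conserving execution, the following toy sketch simulates a preemptive, work-conserving uniprocessor scheduler that always runs the pending job with the earliest absolute deadline (the classical EDF policy). It is an illustration of the definitions above under our own naming, not one of the schedulers proposed in this thesis.

    def edf_feasible(jobs):
        # jobs: list of (arrival, execution, absolute_deadline) tuples.
        # Returns True iff every job meets its deadline in the EDF schedule.
        jobs = sorted(jobs)        # process arrivals in time order
        pending = []               # [absolute_deadline, remaining_execution]
        t, i = 0.0, 0
        while i < len(jobs) or pending:
            while i < len(jobs) and jobs[i][0] <= t:
                pending.append([jobs[i][2], jobs[i][1]])
                i += 1
            if not pending:
                t = jobs[i][0]     # work-conserving: idle only when no job is ready
                continue
            pending.sort()         # earliest absolute deadline first
            next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
            run = min(pending[0][1], next_arrival - t)   # run until done or preempted
            t += run
            pending[0][1] -= run
            if pending[0][1] == 0:
                if t > pending[0][0]:
                    return False   # deadline miss: this schedule is not feasible
                pending.pop(0)
        return True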

2.1.3 Processing Platform

The term processor refers to a hardware element in the platform which is able to process the execution of a task.

1. Uniprocessor systems can execute only one task at a time and must switch between tasks.

2. Multiprocessor systems range from several separate uniprocessors tightly coupled by a high-speed network to multi-cores. They can be classified as follows:

(a) Homogeneous: The processors are identical, i.e., they all have the same functional units, instruction set architecture, cache sizes and hardware services. The rate of execution of every task is the same on all processors. Hence, the worst-case execution time of a task is not impacted by the particular processor on which it is being executed.

(b) Uniform: The processors are identical but run at different frequencies. Hence, all processors can execute all tasks, but the speed at which tasks are executed, and thus their worst-case execution times, vary based on the processor on which they are executing.

(c) Heterogeneous: The processors are different, i.e., processors may have different configurations, frequencies, cache sizes or instruction sets. Some tasks may therefore not be able to execute on some processors in the platform, while their execution speeds (and their worst-case execution times) may differ on the other processors.
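As a concrete, purely hypothetical illustration of the difference between the last two classes: on a uniform platform a single speed factor per processor scales every task's worst-case execution time (WCET), whereas on a heterogeneous platform the WCET must be tabulated per (task, processor) pair, and some pairs may be infeasible altogether.

    # Uniform: one speed factor per processor scales every task's WCET.
    base_wcet = {"T1": 6.0, "T2": 9.0}       # WCET at unit speed
    speed = {"P1": 1.0, "P2": 1.5}           # P2 runs 1.5x faster than P1
    uniform_wcet = {(t, p): c / s
                    for t, c in base_wcet.items()
                    for p, s in speed.items()}

    # Heterogeneous: an arbitrary per-(task, processor) WCET table;
    # None marks a task that cannot execute on that processor at all.
    hetero_wcet = {
        ("T1", "P1"): 6.0, ("T1", "P2"): 2.0,
        ("T2", "P1"): 9.0, ("T2", "P2"): None,   # T2 unsupported on P2
    }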