2.3 A Brief Survey of Scheduling Algorithms
2.3.3 Energy-aware Real-time Scheduling Strategies
schedulability is ensured provided,

$$\sum_{i=1}^{n} \frac{e_i}{p_i} \;\leq\; m \qquad (2.4)$$

where $e_i$ and $p_i$ denote the worst-case execution time and period of a task $T_i$, respectively.
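As an illustration of this test, the following Python sketch (with hypothetical task parameters) evaluates the utilization bound of Equation (2.4) for a given task set and processor count:

```python
# Illustration of the utilization test in Equation (2.4): a task set is
# schedulable on m processors if the sum of utilizations e_i / p_i
# does not exceed m. Task parameters below are hypothetical.

def is_schedulable(tasks, m):
    """tasks: list of (wcet, period) pairs; m: number of processors."""
    total_utilization = sum(e / p for e, p in tasks)
    return total_utilization <= m

# Example: three tasks on two processors.
tasks = [(2, 5), (3, 10), (4, 8)]   # (worst-case execution time, period)
print(is_schedulable(tasks, m=2))   # True: 0.4 + 0.3 + 0.5 = 1.2 <= 2
```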
In DP-Fair, time is partitioned into slices, demarcated by the deadlines of all jobs in the system. Within a time slice, each task is allocated a workload equal to its proportional fair share and assigned to one or two processors for scheduling. Job subtasks within a slice are typically scheduled using variations of traditional fairness-ignorant schemes such as Earliest Deadline First (EDF) [76]. Through such a scheduling strategy, DP-Fair is able to deliver optimal resource utilization while enforcing strict proportional fairness (ERfairness) only at period/deadline boundaries. DP-Fair is a semi-partitioned scheduling technique which allows at most m−1 task migrations and n−1 preemptions within a time slice and thus incurs much lower overheads compared to ERfair. In our fault-tolerant work (Chapter 4), we use a discrete approximation of DP-Fair as the underlying scheduling mechanism.
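The following Python sketch, under the simplifying assumption of implicit-deadline periodic tasks (all task parameters are hypothetical), illustrates how DP-Fair's slice boundaries and per-slice proportional fair workloads could be computed; the actual packing of these workloads onto at most one or two processors per task is omitted:

```python
# Simplified sketch of DP-Fair's per-slice workload allocation (illustrative
# only; processor assignment and intra-slice EDF scheduling are omitted).

def slice_boundaries(tasks, hyperperiod):
    """Time-slice boundaries are the distinct deadlines (period multiples)
    of all tasks within the hyperperiod."""
    deadlines = {k * p for _, p in tasks for k in range(1, hyperperiod // p + 1)}
    return [0] + sorted(deadlines)

def slice_allocations(tasks, t_start, t_end):
    """Within a slice, each task receives a workload equal to its
    proportional fair share: utilization * slice length."""
    length = t_end - t_start
    return [(i, (e / p) * length) for i, (e, p) in enumerate(tasks)]

tasks = [(2, 4), (3, 6), (4, 12)]                  # (wcet, period)
bounds = slice_boundaries(tasks, hyperperiod=12)   # [0, 4, 6, 8, 12]
for lo, hi in zip(bounds, bounds[1:]):
    print((lo, hi), slice_allocations(tasks, lo, hi))
```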
The dynamic power consumption ($P_d$) of CMOS circuits is given by:

$$P_d = C_{eff} \times V_{dd}^2 \times f$$

where $V_{dd}$ is the supply voltage, $C_{eff}$ is the average switched capacitance per cycle, and $f$ is the clock frequency. Here, $V_{dd}$ may be considered to be roughly proportional to $f$. The major components of static current in a standard inverter are the reverse bias junction current [111] and subthreshold conduction [111]. Hence, the static power consumption, $P_s$, is given by:

$$P_s = \left(V_{dd} \times I_{subn} + |V_{bs}| \times I_j\right) L_g$$

where $V_{bs}$ is the body bias voltage, $I_{subn}$ is the subthreshold leakage current, $I_j$ is the reverse bias junction current, and $L_g$ is the number of devices in the circuit.
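For illustration, the following Python sketch evaluates the two power models above; all parameter values are hypothetical and serve only to show how $P_d$ and $P_s$ are composed:

```python
# Illustrative evaluation of the dynamic and static power models.
# All parameter values are hypothetical and chosen only for demonstration.

def dynamic_power(c_eff, v_dd, f):
    """P_d = C_eff * V_dd^2 * f"""
    return c_eff * v_dd ** 2 * f

def static_power(v_dd, i_subn, v_bs, i_j, l_g):
    """P_s = (V_dd * I_subn + |V_bs| * I_j) * L_g"""
    return (v_dd * i_subn + abs(v_bs) * i_j) * l_g

p_d = dynamic_power(c_eff=1e-9, v_dd=1.1, f=1e9)            # ~1.21 W
p_s = static_power(v_dd=1.1, i_subn=1e-6, v_bs=-0.4,
                   i_j=1e-8, l_g=1e6)                        # ~1.10 W
print(p_d, p_s)
```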
Need for Energy-aware Execution: As technology scales to lower feature sizes, leakage (static) power and energy dissipation are becoming increasingly critical design parameters. With each technology generation, leakage drain is expected to increase by a factor of more than 5 and has already become the major source of power wastage within a chip [60]. The problem of power wastage in general, and leakage drain in particular, has been further aggravated by the advent of high-end portable embedded systems such as Personal Digital Assistants (PDAs), cell phones and car on-board systems, which are powered by limited energy sources like batteries [65]. Hence, techniques for controlling power/energy consumption are being applied at all system levels, from hardware and firmware to the architectural, system and even application levels.
Techniques for Reducing Energy Consumption: At the operating system level, two primary mechanisms are generally used to reduce energy consumption: 1) Dynamic Voltage Scaling (DVS) [89, 108] and 2) Dynamic Power Management (DPM) [21, 67].
The first mechanism reduces dynamic energy consumption by lowering the processor's operating frequency and appropriately scaling its supply voltage when full speed is not required. As the energy dissipated per cycle in CMOS circuits scales quadratically with the supply voltage, this strategy can provide large energy savings on DVS-enabled processors. The DPM mechanism, on the other hand, tries to minimize static energy dissipation by putting a processor into a low-power suspension/sleep mode for as long as possible while still guaranteeing the tasks' timing constraints. However, transitions between idle and active states require a fixed amount of time and energy. Hence, a purely greedy policy is often not acceptable because it degrades performance and may not decrease energy consumption. Thus, one of the primary tasks of suspension-based algorithms is to predict whether an idle period will be long enough to compensate for the transition cost. There has also been an attempt to maximize the duration of idle intervals by delaying task execution using the procrastination scheduling model [13].
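The break-even reasoning underlying such suspension decisions can be sketched as follows; the Python snippet below uses hypothetical power and transition figures and is only a simplified illustration of the idea, not any particular published policy:

```python
# Sketch of the DPM break-even reasoning: sleeping pays off only if the
# predicted idle interval is long enough for the energy saved in the sleep
# state to outweigh the transition overhead. All numbers are hypothetical.

def break_even_time(p_idle, p_sleep, e_transition, t_transition):
    """Shortest idle interval for which entering sleep saves energy."""
    # Energy if we stay idle for time t:  p_idle * t
    # Energy if we sleep instead:         e_transition + p_sleep * (t - t_transition)
    t = (e_transition - p_sleep * t_transition) / (p_idle - p_sleep)
    return max(t, t_transition)

def should_sleep(predicted_idle, p_idle, p_sleep, e_transition, t_transition):
    return predicted_idle >= break_even_time(p_idle, p_sleep,
                                             e_transition, t_transition)

# Example: idle power 0.5 W, sleep power 0.05 W, 0.3 J and 20 ms to transition.
print(should_sleep(predicted_idle=1.0, p_idle=0.5, p_sleep=0.05,
                   e_transition=0.3, t_transition=0.02))   # True
```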
2.3.3.2 A Review of Energy-aware Scheduling on Multiprocessor Systems
Energy-aware scheduling algorithms on multiprocessor systems are mainly grouped into two categories: partitioned scheduling [30, 53] and global scheduling [21, 67–69]. While partition-oriented scheduling strategies maintain separate local ready queues for the tasks on each processor, global scheduling employs only one queue for all tasks, assuming a single system-wide priority space. Partitioning is often the favored approach, primarily due to its lower overheads and ease of implementation using well-known uniprocessor schedulers [76] on individual processors. However, partitioning often suffers from low resource utilization [27]. On the other hand, global scheduling has attractive features such as flexible resource management, dynamic load distribution, fault resilience, high resource utilization, etc. [107]. Chen et al. [30] explored the energy-efficient scheduling of periodic real-time tasks on multiprocessor systems, considering leakage current along with DVS. Huang et al. [53] presented a run-time task reallocation scheme that improves the energy efficiency of leakage-aware DVS on multi-core processors. A two-phase scheduling heuristic for sporadic tasks on heterogeneous multi-cores was presented by Awan and Petters in [12]. Here, the first phase attempts to minimize dynamic energy dissipation by assigning each task to its favorite processor based on the task's dynamic energy consumption affinity towards different processors. The second phase reduces static energy consumption by trading off the higher dynamic energy consumption of a task to enhance the ability of the processors to use more efficient sleep states. All these approaches follow a partition-oriented strategy, and hence, their resource utilizations are often low.
Bhatti et al. [21] presented a DPM strategy for global multiprocessor systems called Assertive Dynamic Power Management (AsDPM). AsDPM first determines the minimum number of active processors needed to fulfill the execution requirement of the released jobs at runtime. It then attempts to cluster the distributed idleness existing on a subset of the active processors into longer continuous idle intervals, so that these intervals may be used to switch some of the processors to deeper low-power states for longer durations. This AsDPM strategy is then used along with global schedulers such as Global-EDF or Global-LLF to achieve better reductions in energy consumption. As both Global-EDF and Global-LLF are known to be sub-optimal (that is, they cannot fully utilize the complete capacity of the set of processors comprising a multiprocessor system), the global strategy presented in [21] is also sub-optimal in nature.
Legout et al. [67] presented an offline power-aware heuristic scheduling algorithm called Linear Programming DPM (LPDPM), which tries to increase the duration of idle periods so that deeper low-power states may be attained. It models processor idle time as an additional task and tries to reduce the number of preemptions (or executions) of this additional task. In [68], Legout et al. improved their previous work by employing an existing online scheduler called Fixed Priority until Zero Laxity (FPZL) to schedule tasks inside intervals delimited by consecutive task releases. The approach uses dynamic slack reclamation in order to activate deeper low-power states online. They then extended their work in [69] to both hard real-time and mixed-criticality (MC) systems. In all these works, a mixed integer linear program is used to compute a partial schedule that optimizes the length of idle periods, and an existing scheduling algorithm (FPZL) is used to further increase the length of the idle periods online. Since FPZL is not optimal for generic periodic tasks having arbitrary period lengths, these algorithms are also not optimal. Moreover, all these algorithms use a hybrid offline-online strategy and hence cannot be employed in completely dynamic scenarios where a task may arrive/depart at any time. A purely online adaptive static power management strategy called Balanced Workload Scheme (BWS) was presented by Chen et al. in [29] for hard real-time pipelined multiprocessor systems. At each adaptation instant, the BWS heuristic attempts to maximize the number of processors that may be switched to sleep mode and exploits the slacks generated at run-time to effectively extend sleep durations.
However, there has not been a significant effort towards the development of energy-efficient proportional fair scheduling methodologies. In Chapter 3, we have chosen ERfair [5], a work-conserving proportional fair scheduler, as our underlying scheduling scheme and developed a novel energy-efficient algorithm called ERfair Scheduler with Suspension on Multiprocessors (ESSM). The ESSM algorithm attempts to locally maximize the total length of suspension intervals while simultaneously reducing the number of such intervals using a novel procrastination mechanism, thus lowering energy consumption in the process.