
Energy Efficient Scheduling of Real Time Tasks on Large


Academic year: 2023


Full text

These scheduling policies are designed based on the urgency points of the real-time tasks for a heterogeneous computing environment. Results show that the clustering technique can be decided based on the value of the critical utilization.

Multiprocessor Scheduling

In the case of real-time tasks, the optimality criteria are normally expressed in terms of the deadlines of the tasks. P|pmtn|Cmax denotes the scheduling problem with M identical machines where a set of tasks must be executed so as to minimize the makespan (total schedule length), with preemption of the tasks allowed.

Classification of Multiprocessor Scheduling Algorithms

When preemption is not allowed, the problem becomes difficult and has been shown to be NP-hard [8]. For the case of two processors (represented as P2||Cmax), the problem maps to the subset-sum (partition) problem, where n numbers (i.e., task lengths) are to be divided into two subsets of nearly equal sum, and this problem is known to be NP-complete [9].
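
As an illustrative sketch (not from the thesis), the decision version of this two-processor problem can be checked with the standard pseudo-polynomial subset-sum dynamic program; the function and variable names below are hypothetical.

```python
# Sketch: pseudo-polynomial check for the decision version of P2||Cmax.
# Given integer task lengths, can the tasks be split across two machines
# so that neither machine exceeds `target`?
def two_machine_feasible(lengths, target):
    total = sum(lengths)
    if total > 2 * target or max(lengths) > target:
        return False
    # Subset-sum DP: which loads are achievable on machine 1?
    achievable = {0}
    for e in lengths:
        achievable |= {s + e for s in achievable if s + e <= target}
    # Machine 2 gets the rest; feasible if the remainder also fits.
    return any(total - s <= target for s in achievable)

if __name__ == "__main__":
    # True: machine 1 gets {7, 4} (load 11), machine 2 gets {3, 5, 2} (load 10)
    print(two_machine_feasible([3, 5, 2, 7, 4], target=11))
```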

Real Time Scheduling

Scheduling of periodic real-time task scheduling

Earliest Deadline First: Earliest Deadline First (EDF) is, by nature, a preemptive scheduling technique that assigns priorities based on task deadlines. EDF is an optimal scheduling technique for single-processor systems in the sense that if a task set is schedulable by any algorithm, then EDF can also schedule that task set.
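
A minimal sketch of the EDF dispatch rule on a single processor follows, assuming a simple tick-driven simulation; the task attributes and the example task set are illustrative, not taken from the thesis.

```python
def edf_schedule(tasks, horizon):
    """tasks: list of dicts with integer 'arrival', 'exec', 'deadline'.
    Returns the task index executed at each time unit (None = idle)."""
    remaining = [t["exec"] for t in tasks]
    trace = []
    for now in range(horizon):
        ready = [i for i, t in enumerate(tasks)
                 if t["arrival"] <= now and remaining[i] > 0]
        if not ready:
            trace.append(None)
            continue
        i = min(ready, key=lambda j: tasks[j]["deadline"])  # earliest deadline first
        remaining[i] -= 1
        trace.append(i)
    return trace

# Example: three tasks; EDF preempts task 0 at t=1 and meets all deadlines.
tasks = [
    {"arrival": 0, "exec": 2, "deadline": 4},
    {"arrival": 1, "exec": 1, "deadline": 2},
    {"arrival": 2, "exec": 2, "deadline": 8},
]
print(edf_schedule(tasks, horizon=6))   # -> [0, 1, 0, 2, 2, None]
```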

Scheduling of aperiodic real-time task scheduling

Bratley's Algorithm: This algorithm tries to find a feasible schedule for a set of non-preemptive, independent tasks on a single-processor system. It is a tree-search algorithm, and for each task the algorithm may need to explore all partial schedules (paths) originating from the corresponding node of the search tree.
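
A rough sketch of a Bratley-style branch-and-bound search is shown below, assuming tasks given as (arrival, execution time, deadline) tuples; the pruning rule used here (abandon a branch as soon as a placed task misses its deadline) is a simplified version of the full algorithm.

```python
# Sketch of a Bratley-style tree search: extend a partial schedule one task
# at a time, pruning any branch in which a scheduled task misses its deadline.
def bratley(tasks, order=None, time=0):
    """tasks: list of (arrival, exec, deadline) tuples still to be scheduled.
    Returns a feasible non-preemptive order, or None if the branch fails."""
    order = order or []
    if not tasks:
        return order                      # all tasks placed: feasible schedule found
    for i, (a, e, d) in enumerate(tasks):
        start = max(time, a)
        finish = start + e
        if finish > d:
            continue                      # this extension violates a deadline: prune
        rest = tasks[:i] + tasks[i + 1:]
        result = bratley(rest, order + [(a, e, d)], finish)
        if result is not None:
            return result
    return None                           # every partial path from this node fails

# Feasible order found by backtracking: (0,2,4), (1,3,5), (0,2,9)
print(bratley([(0, 2, 9), (1, 3, 5), (0, 2, 4)]))
```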

Aperiodic Real-Time Task Scheduling on Multiprocessor Environment

Cloud computing and virtualization

If necessary, a VM with the required configuration is created using the VM OS library. The bottom layer is the hardware layer, which consists of the physical hosts on which the VMs are placed by the virtualizer.

Real-Time Scheduling for Large Systems and Cloud

Workflow Scheduling for Large Systems and Cloud

Scientific applications from various fields such as astronomy, astrophysics, bioinformatics, and high-energy physics are typically expressed as workflows; for these, the emerging cloud computing platform has been found to be a better choice [27, 28].

Energy Consumption in Large Systems

Power consumption models

The power consumption of a processor is commonly expressed as a function of its utilization, operating frequency, etc. Scheduling policies that target the dynamic power consumption of the hosts (or processors) mainly use the DVFS (dynamic voltage and frequency scaling) technique, reducing the frequency and voltage to reduce power consumption.
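
For reference, a commonly used textbook form of the CMOS power model (not a formula quoted from the thesis) is:

```latex
P_{\text{total}} = P_{\text{static}} + P_{\text{dyn}},\qquad
P_{\text{dyn}} \approx C_{\text{eff}}\, V^{2} f
```

Since the supply voltage scales roughly with the operating frequency under DVFS, lowering the frequency reduces the dynamic term approximately cubically, which is why DVFS-based policies target dynamic power.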

Figure 1.2: Component wise power consumption values for a Xeon based server [1]

Impact of high power consumption

In addition to the computing power consumption discussed above, a significant portion of a data center's total power consumption is contributed by cooling equipment, AC/DC conversion equipment, etc. We consider the total computational energy as the total energy of the system and use this throughout the thesis.

Motivation of the Thesis

Of these, the majority of research uses the DVFS technique and thus focuses only on the dynamic energy consumption of the processor. Most of these scheduling policies assume a continuous domain for the computing capacity of the VMs.

Contributions of the Thesis

Scheduling online real-time tasks on LMTMPS

We present the concept of critical utilization, the utilization at which the energy consumption of the host is minimal. However, the proposed policies perform much better than the state-of-the-art policy in terms of migration count.

Summary

Along with system energy consumption, we analyze the migration count and partition count for each policy. The power (or energy) consumption of the system is taken as a function of host utilization, or the number of active threads on a processor, or the summation of the energy consumption of running VMs on a host.

Organization of the Thesis

Non-virtualized system

The work in [31] pioneered research in this direction by connecting power consumption with scheduling and used the DVFS technique to study the power consumption of some scheduling techniques. The authors of [70] used the DVFS technique to design a scheme for high-performance computing (HPC) clusters where program execution is divided into multiple regions and the optimal frequency is selected for each region.

Virtualized system

The applications were scheduled in an energy-efficient manner, and the quality of service (QoS) was measured as meeting the application deadline constraints. However, their approach considered only the dynamic energy consumption of the hosts, and the target utilization of the hosts was 100%.

Coarse Grained Approaches

Non-virtualized system

The authors of [77] investigated the characteristics of the applications running in data centers in order to impose appropriate limits on power consumption. They investigated the effect of collocation on the execution time of the tasks and proposed greedy heuristics for the same.

Virtualized system

The authors of [56] studied the relationship between energy consumption and system performance, which was determined by CPU (or processor) utilization and disk utilization. Most VM consolidation approaches consider only CPU utilization when distributing VMs across hosts.

Energy Efficient Workflow Scheduling

Workflow scheduling on large systems

In [94], the authors used a bidirectional bin-packing approach to place the VMs on the hosts. They used linear regression and K-nearest-neighbor regression prediction models to estimate future resource use.

Energy-efficient scheduling of workflows

The proposed scheduling policies consider a thread-based power consumption model, which is designed based on the power consumption model of some of the latest commercial processors. Existing scheduling algorithms for real-time tasks do not take into account this interesting behavior of the power consumption of processors.

Figure 3.1: Power consumption plot of a few recent commercial processors with number of active threads [2, 3]

System Model

This large amount of power consumption is called the base power consumption (BPC) of the processor in our model. Thus, L·(PBase + r·δ) is the power consumption of L fully utilized active processors, and PBase + i·δ represents the power consumption of one processor with i active hardware threads.
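
A small sketch of this thread-count-based power model follows; the symbols PBase, δ, r, and L are taken from the text above, while the numeric values are placeholders, not measured parameters.

```python
# Thread-based power model sketch (placeholder parameter values).
P_BASE = 100.0   # base power consumption (BPC) of an active processor
DELTA = 10.0     # additional power per active hardware thread (HT)
R = 8            # number of HTs per processor

def processor_power(active_hts):
    """Power of one active processor with `active_hts` active threads."""
    return P_BASE + active_hts * DELTA

def system_power(fully_loaded, partial_hts=0):
    """L fully utilized processors plus one partially loaded processor."""
    power = fully_loaded * processor_power(R)        # L * (P_Base + r*delta)
    if partial_hts:
        power += processor_power(partial_hts)        # P_Base + i*delta
    return power

print(system_power(fully_loaded=3, partial_hts=2))   # 3*180 + 120 = 660.0
```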

Power Consumption Model

Thus, in the considered system, the total power consumption of a processor is not proportional to its number of active HTs (or its utilization). Instead, we model the power consumption of the processor as a function of its number of active HTs.

Task Model: Synthetic Data Sets and Real-World Traces

Synthetic tasks

  • Execution time variation
  • Deadline variation

Different deadline schemes, based on the slack-time variation of the tasks, are described below. In the decreasing scheme, the relative deadline of tasks is initially relaxed and, as time (or the task sequence number) increases, the deadline becomes tighter.

Figure 3.4: Different deadline schemes with µ = 10 and σ = 5

Real-world traces

Mathematically, the decreasing scheme can be represented as di = ai + ei + DEC(i), where i is the sequence number of task ti and the value of the function DEC(i) decreases as i ∈ {1, 2, ...} increases. The common-deadline scheme can be expressed as di = D, where D is the common deadline of all tasks and D ≥ max{ai + ei}.
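
A sketch of how such deadline schemes could be generated is given below; the concrete form of DEC(i) and all parameter values are placeholders, not the thesis' definitions.

```python
import random

def decreasing_deadlines(tasks, slack0=50, step=2):
    """d_i = a_i + e_i + DEC(i), with DEC(i) shrinking as i grows (placeholder form)."""
    return [a + e + max(0, slack0 - step * i) for i, (a, e) in enumerate(tasks, start=1)]

def common_deadline(tasks, margin=5):
    """d_i = D for all tasks, with D >= max(a_i + e_i)."""
    D = max(a + e for a, e in tasks) + margin
    return [D] * len(tasks)

def random_deadlines(tasks, slack_min=5, slack_max=30):
    """Slack drawn from a random distribution (illustrative uniform range)."""
    return [a + e + random.randint(slack_min, slack_max) for a, e in tasks]

tasks = [(0, 10), (3, 8), (7, 12)]        # (arrival, execution time) pairs
print(decreasing_deadlines(tasks))        # [58, 57, 63]
print(common_deadline(tasks))             # [24, 24, 24]
print(random_deadlines(tasks))            # varies from run to run
```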

Objective in the Chapter

Standard Task Scheduling Policies

  • Utilization based allocation policy (UBA)
  • Front work consolidation (FWC)
  • Rear work consolidation (RWC)
  • Utilization based work consolidation (UBWC)
  • Earliest deadline first scheduling policy (EDF)

If all the conditions of Theorem 3.1 hold, the total energy consumption of the system under preconsolidation scheduling is minimal. When the instantaneous power consumption of the system is proportional to processor utilization, the policy consumes the same amount of energy as UBA, and thus its energy consumption is minimal.

Figure 3.5: Illustration of front work consolidation of real-time tasks

Proposed Task Scheduling Policies

  • Smart scheduling policy (Smart)
  • Smart scheduling policy with early dispatch (Smart-ED)
  • Smart scheduling policy with reserve slots (Smart-R)
  • Smart scheduling policy with handling immediate urgency (Smart-HIU)

This is an online scheduling policy, and the policy is invoked whenever an event occurs. The events can be (i) the arrival of a task, (ii) the completion of a task, or (iii) the occurrence of the urgency point of a task.
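
A structural sketch of such an event-driven online scheduler loop is shown below; the handler functions and the queue layout are hypothetical and only illustrate when the policy is invoked, not the actual Smart decision logic.

```python
import heapq
import itertools

ARRIVAL, COMPLETION, URGENCY_POINT = "arrival", "completion", "urgency"
_seq = itertools.count()          # tie-breaker so heapq never compares task dicts

def push(queue, time, event_type, task):
    heapq.heappush(queue, (time, next(_seq), event_type, task))

def run_scheduler(initial_events, handlers):
    """Generic event loop: the policy is invoked only when an event occurs."""
    queue = []
    for time, event_type, task in initial_events:
        push(queue, time, event_type, task)
    while queue:
        time, _, event_type, task = heapq.heappop(queue)
        for ev in handlers[event_type](time, task) or []:
            push(queue, *ev)

# Hypothetical handlers: on arrival, dispatch immediately and schedule completion.
def on_arrival(time, task):
    print(f"t={time}: task {task['id']} arrives and is dispatched")
    return [(time + task["exec"], COMPLETION, task)]

def on_completion(time, task):
    print(f"t={time}: task {task['id']} completes")

def on_urgency(time, task):
    print(f"t={time}: urgency point of task {task['id']} reached")

run_scheduler([(0, ARRIVAL, {"id": 1, "exec": 4})],
              {ARRIVAL: on_arrival, COMPLETION: on_completion, URGENCY_POINT: on_urgency})
```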

Figure 3.6: Figure 3.3 with extra annotated information to explain the smart scheduling policy (C = 100, δ = 10 and r = 8)

Experiment and Results

  • Experimental setup
  • Parameter setup
    • Machine parameters
    • Task parameters
    • Migration overhead
  • Instantaneous power consumption
  • Migration count

Figure 3.14(a) shows the maximum energy reduction of our proposed smart scheduling policies Smart, Smart-ED, Smart-R, and Smart-HIU with respect to the baseline policies for real workload traces. Our proposed policies achieve an energy reduction of up to 44% compared to all baseline policies.

Table 3.2: Different experimental parameter values for execution time schemes

Summary

The model assumes a non-linear relationship between energy consumption and host utilization. We define a threshold for host utilization at which the power consumption jumps sharply from range-II to range-III.

Figure 4.1: Power consumption of a server versus the utilization of the host as reported in [4]

System Model

When a VM executes a task, it consumes some of the resources of the physical machine it resides on. Once the VM completes its assigned task, it becomes idle; its resource usage then becomes negligible, and we ignore it in our work.

Task Model

In the Random deadline scheme, the slack of each task ti is generated from a random distribution and varies over a range starting at Rmin. In a strict-deadline system, the time difference between the deadline and the task's completion time is short.

Energy Consumption Model

As shown in Figure 4.1 and formulated in Equation 4.7, the slope of the power function is much higher when host utilization moves above 70%. THU1 indicates the boundary of the second region (i.e., the point where host utilization moves above 70%), and THU2 indicates the maximum allowable utilization for a host, which is taken as 100%.
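
A sketch of a piecewise host power function of this kind is given below; the idle power and the two slopes are placeholders (Equation 4.7's actual coefficients are not reproduced), with THU1 = 0.7 and THU2 = 1.0.

```python
# Piecewise host power model sketch: the slope is much steeper above THU1 = 70%.
P_IDLE = 100.0       # placeholder idle (static) power in watts
SLOPE_LOW = 80.0     # placeholder slope for utilization in (0, THU1]
SLOPE_HIGH = 300.0   # placeholder, much steeper slope for utilization in (THU1, THU2]
THU1, THU2 = 0.7, 1.0

def host_power(u):
    """Host power as a piecewise-linear function of utilization u in [0, THU2]."""
    if u <= 0:
        return 0.0                       # host switched off
    if u <= THU1:
        return P_IDLE + SLOPE_LOW * u
    u = min(u, THU2)
    return P_IDLE + SLOPE_LOW * THU1 + SLOPE_HIGH * (u - THU1)

for u in (0.0, 0.5, 0.7, 0.9, 1.0):
    print(u, host_power(u))
```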

Objective in the Chapter

The power consumption model shown in Equation 4.7 and used in our work assumes that there is a local power optimization module (DVFS or DPM) in each host. We can safely assume that whenever the host is operating at peak utilization, it is operating at its highest available operating frequency, and that the operating frequency of a host is proportional to the host's utilization.

Scheduling Strategies

  • Scheduling at urgent critical point (SCUP)
  • Scheduling at task completion (STC)
  • Scheduling with consolidation (SWC)

If an active host can accommodate the VM, the VM is placed on that host and the task is assigned to the VM. After the task is assigned to the VM and the VM is placed on the host, the scheduler fills the remaining utilization of the newly powered-on host.

If this step fails, the scheduling policy checks whether the task's deadline is close. If the occurrence of the FUP is not imminent, the scheduler places the task in the GWQ with a scheduling window.

Performance Evaluation

  • Simulation environment and parameter setup
  • Experiments with synthetic data
  • Experiments with real-world data: Metacentrum
  • Experiments with real-world data: Google tracelog

In the case of the random deadline scheme, the proposed policies consume about 15% less energy than EARH. In Figure 4.6 we have plotted the reduction in energy consumption of the two proposed policies relative to all three baseline policies.

Figure 4.4: Normalized energy consumption of various scheduling policies for synthetic dataset

Summary

Here, we first calculate a utilization value for which the energy consumption of the hosts of the cloud system is minimum. In the last chapter, we discussed the scheduling of online real-time tasks for a virtualized cloud system where the computing capacity of the VMs is considered continuous.

System Model

For a VM type vtj, there is a constant uj, which is the amount of utilization that vtj provides for a task. Any such task ti can be described by a 3-tuple ti = (ai, ei, di), where ai is the arrival time, ei is the execution time when the task is executed with maximum utilization (umax = 1), and di is the deadline of the task.

Figure 5.1: System model

Energy Consumption Model

So we can see that the critical utilization value is independent of the execution time (i.e. length) of the task executed by the system. The local optimization module on a host controls the frequency and sleep state of the computing system which may have more than one computing component.
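
Assuming the cubic power model P(u) = Pmin + α·u³ used in this chapter (Equation 5.4), a short reconstruction of why the critical utilization does not depend on task length is given below; W denotes an arbitrary amount of work and is an illustrative symbol, not the thesis' notation.

```latex
% Work W executed at utilization u takes time W/u, so the energy is
E(u) = \frac{W}{u}\bigl(P_{\min} + \alpha u^{3}\bigr)
     = W\Bigl(\frac{P_{\min}}{u} + \alpha u^{2}\Bigr).
% Setting dE/du = 0:
\frac{dE}{du} = W\Bigl(-\frac{P_{\min}}{u^{2}} + 2\alpha u\Bigr) = 0
\quad\Longrightarrow\quad
u_{c} = \Bigl(\frac{P_{\min}}{2\alpha}\Bigr)^{1/3},
% which is independent of the task length W.
```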

Figure 5.2: Energy consumption versus total utilization of the host

Objective in the Chapter

As mentioned in the previous chapter, the energy model used in this chapter also assumes that a local energy optimization module (DVFS or DPM) is present on each host. Since power consumption depends on host utilization, we can safely assume that when the host is running at its highest utilization it is running at its highest possible operating frequency, and that the host's operating frequency is proportional to its utilization.

Classification of cloud systems

Calculation of hot thresholds for the hosts

Let E1 and Enew be the energy consumption of an already active PM and of the new PM turned on for the incoming task, respectively, when the task is scheduled on the new PM, and let E1′ be the energy consumption of the active PM when the task is instead scheduled on it. Then E1, Enew, and E1′ can be written as E1 = t(Pmin + α·uc³), Enew = t(Pmin + α·ut³), and E1′ = t(Pmin + α·(uc + ut)³), respectively.
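
If the comparison is between keeping only the active PM on (energy E1′) and also powering on the new PM (energy E1 + Enew), consolidation onto the active PM is preferable as long as E1′ ≤ E1 + Enew, i.e., α((uc + ut)³ − uc³ − ut³) ≤ Pmin. The sketch below solves this condition numerically for the hot threshold uc + ut of Figure 5.4; the values of Pmin and α are placeholders.

```python
# Sketch: largest total utilization x = uc + ut for which placing the new task
# on the already active PM consumes no more energy than powering on a new PM,
# i.e.  Pmin + a*x**3  <=  2*Pmin + a*(uc**3 + (x - uc)**3).
def hot_threshold(uc, p_min=30.0, alpha=270.0, iters=60):
    def consolidation_ok(x):
        ut = x - uc
        return alpha * (x**3 - uc**3 - ut**3) <= p_min
    lo, hi = uc, 1.0                       # total utilization cannot exceed 1
    if consolidation_ok(hi):
        return hi                          # consolidation is always beneficial
    for _ in range(iters):                 # bisection on the monotone condition
        mid = (lo + hi) / 2
        if consolidation_ok(mid):
            lo = mid
        else:
            hi = mid
    return lo

for uc in (0.1, 0.3, 0.5, 0.7):
    print(uc, round(hot_threshold(uc), 3))
```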

Figure 5.4: Hot threshold (uc + ut) versus uc

Thus, by using the smallest feasible VM, we obtain a minimum-energy schedule. Also, in this case, the energy consumption of a host is directly proportional to the square of its total utilization (Equation 5.4 with Pmin = 0).

Hosts with significantly high static power consumption (uc >

It then assigns all the selected VMs to separate physical machines and executes the tasks on the respectively assigned VMs. This scheme is based on the fact that each physical machine initially has a utilization of 0, and we want its total utilization to be as close to uc as possible.

Scheduling Methodology for the Systems with General Specifications

  • Scheduling n tasks of same type (Case 1: (e, d))
  • Scheduling approach for two types of tasks having same
  • Scheduling approach for the requests with multiple number
  • Scheduling approach for general synced real-time tasks (Case

Let β and γ denote the number of hosts and the number of VMs per host, respectively. A high number (γ) of VMs per host is preferable in certain cases.

Figure 5.6: Scheduling approach for case 3: SC3(ei, d, n)

Performance Evaluation

We normalized the energy consumption of all the clustering techniques with respect to CLT1 for the same task set. Keeping the task set the same, Figure 5.8(c) compares the energy consumption values of all four clustering techniques on a different cloud system.

Summary

For systems with a higher uc, it is better to choose CLT4 as the clustering technique to minimize the energy consumption of the system. In this chapter, we carefully compared the energy consumption against the utilization characteristics of the hosts of the cloud system to minimize its overall energy consumption.

System Model

They considered only the dynamic energy consumption of the hosts and used the DVFS technique to exploit the slack time of a task. The task of the slack distribution engine is to distribute the entire slack of a workflow among the different tasks in that workflow.

Figure 6.1: System architecture

Application Model

The task of the workflow scheduler is to efficiently schedule the tasks of an incoming workflow to the existing state of cloud resources, so that the overall energy of the cloud system is minimized (taking into account the energy consumption model described in section 6.4). The system consists of a consolidation agent, and the agent's job is to consolidate VMs into a minimum number of hosts to reduce the total number of active hosts in the system.

Energy Consumption Model

The sum of these power consumption values over the total time interval gives the energy consumption of the system. Accordingly, the base energy consumption of the hosts in the system model (Figure 6.1) can be expressed as follows.

Objective in the Chapter

Again, the dynamic power consumption of a host is essentially contributed by the active VMs when they run some tasks. We assumed that all hosts reside in a single data center and they are heterogeneous in their computing capacity and energy consumption.

Scheduling Options and Restrictions in Workflow Scheduling

VM placement

A task can be scheduled when a set of hosts in the system can collectively meet the VM requirement of the task. This reduces the number of active hosts, which in turn reduces the overall energy (or power) consumption of the system.

Figure 6.3: System state with different VM allocation type

Migration

This additional energy is added to the total energy consumption from Equation 6.5 to find the total energy consumption of the system. Otherwise, the system must migrate VMs from the current host to the newly assigned hosts.

Slack distribution

In this case, the total slack (idle time) of the workflow is distributed among all tasks in the workflow. A task with a larger length and VM request gets a larger portion of the slack.
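
A minimal sketch of such a proportional slack distribution follows, assuming (as described above) that each task's share is weighted by the product of its length and its VM request; the field names are illustrative.

```python
def distribute_slack(total_slack, tasks):
    """Split `total_slack` among workflow tasks proportionally to length * vm_request.
    tasks: list of dicts with 'length' and 'vms'; returns slack share per task."""
    weights = [t["length"] * t["vms"] for t in tasks]
    total_weight = sum(weights) or 1
    return [total_slack * w / total_weight for w in weights]

workflow = [{"length": 10, "vms": 2}, {"length": 30, "vms": 4}, {"length": 5, "vms": 1}]
print(distribute_slack(total_slack=29, tasks=workflow))   # [4.0, 24.0, 1.0]
```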

Scheduling Policies

Each scheduling policy is named along three dimensions: the VM allocation scheme (e.g., non-splittable VM allocation, NSVM), the migration scheme (e.g., no migration, NM), and the slack distribution scheme (e.g., slack to first task, SFT).

Scheduling with Non-splittable VM Allocation (N SV M)

  • Slack to first task (SFT-NSVM-NM)
  • Slack forwarding (SFW-NSVM-NM)
  • Slack division and forwarding (SDF-NSVM-NM)

The first task of the workflow requires 5 VMs to run, and currently host h3 can easily accommodate it. The first task of a workflow gets full freedom to use the entire slack of the workflow.

Figure 6.5: System state at different time instants (t) under the SFT-NSVM-NM scheduling policy

Scheduling with Splittable VM Allocation (SVM)

  • Slack to first task (SFT-SVM-NM)
  • Slack forwarding (SFW-SVM-NM)
  • Slack division and forwarding (SDF-SVM-NM)

Let us consider the same workflow as shown in Figure 6.9, to be scheduled using the SDF-SVM-NM policy, where the current system state is shown in Figure 6.8(a). At time 15, the task can be scheduled such that H2 hosts 6 VMs and the other 3 VMs are hosted on H3.

Figure 6.7: System state at different time instants (t) under SFT-SVM-NM

When the total VM demand is high enough (greater than the maximum VM capacity of the hosts), the scheduler first selects only the hosts with the maximum VM capacity. Then, for the remaining VM requests, the scheduler checks all possible options to find the hosts with the minimum power consumption.
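
A rough sketch of this two-stage host selection is shown below; the host data structure, the power estimate, and the handling of the residual demand are all simplified placeholders, not the thesis' implementation.

```python
from itertools import combinations

def place_vms(demand, hosts, power_of):
    """hosts: dict host_id -> free VM slots; power_of(host_id, vms) estimates the
    added power of placing `vms` VMs on that host. Returns (placement, best_combo)."""
    placement = {}
    max_cap = max(hosts.values())
    # Stage 1: while demand exceeds the largest host capacity, use max-capacity hosts.
    for h, cap in sorted(hosts.items(), key=lambda kv: -kv[1]):
        if demand <= max_cap:
            break
        if cap == max_cap:
            placement[h] = cap
            demand -= cap
    # Stage 2: for the remaining demand, try all host subsets and pick the
    # feasible option with the minimum added power consumption.
    remaining = [h for h in hosts if h not in placement]
    best, best_power = None, float("inf")
    for r in range(1, len(remaining) + 1):
        for combo in combinations(remaining, r):
            if sum(hosts[h] for h in combo) < demand:
                continue
            power = sum(power_of(h, min(hosts[h], demand)) for h in combo)
            if power < best_power:
                best, best_power = combo, power
    # (Splitting the remaining demand across `best` is omitted in this sketch.)
    return placement, best

hosts = {"h1": 8, "h2": 8, "h3": 4, "h4": 6}
# -> ({'h1': 8, 'h2': 8}, ('h3',)) with a simple linear per-host power estimate
print(place_vms(demand=19, hosts=hosts, power_of=lambda h, v: 50 + 10 * v))
```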

Performance Evaluation

  • Simulation platform and parameter setup
  • Real scientific work-flow
  • Impact of slack distribution
  • Trade-off between energy consumption, migration count and
  • Different mixes of scientific workflows

The figure also reveals that the power consumption of the system is relatively lower with splittable VM allocation than with non-splittable allocation. Scheduling policies under the non-splittable VM allocation scheme perform similarly to the state-of-the-art EnReal policy; however, the proposed policies beat EnReal in terms of migration count.

Figure 6.11: Examples of scientific workflows

Summary

We have designed two energy-efficient scheduling approaches: (i) Urgency Point aware Scheduling (UPS) and (ii) Urgency Point aware Scheduling with Early Scheduling (UPS-ES). In the last contribution of the thesis, we considered the real-time scheduling of a set of dependent tasks in a cloud system.

Scope for Future Work

Gammoudi, “Energy-efficient partitioning and scheduling approach for cloud scientific workflows,” in IEEE Int.
Buyya, “Energy-efficient resource management in virtualized cloud datacenters,” in IEEE/ACM Int.

Task and VM information for λ = 10

Energy reduction of the proposed policies for real-trace data

System model

Energy consumption versus total utilization of the host

Options for scheduling the new task

Energy consumption versus utilization of extreme cases

Description for clustering techniques

Energy consumption of cloud system

System architecture

Application model

System state with different VM allocation type

Examples of scientific workflows

Impact of slack distribution on energy consumption

Normalized values of energy consumption, migration and split count

Figures

Figure 1.1: Cloud computing architecture
Figure 1.2: Component wise power consumption values for a Xeon based server [1]
Figure 3.1: Power consumption plot of a few recent commercial processors with number of active threads [2, 3]
Figure 3.2: Online scheduling of real-time tasks on LMTMPS
