
For our simulation, we modified CloudSim [63] such that VMs and nodes can have attributes, and VMs can have placement constraints based on the attributes of VMs and nodes.

We also added features to create VMs dynamically and deal with VM units, and implemented the six constraint-aware VM placement algorithms discussed in Section III. To provide a comprehensive analysis of the effects of VM placement constraints, we identify various parameters for the VM placement constraints and cluster configurations that may affect the performance of the VM placement algorithms. We then synthetically generate a diversity of constraint scenarios and run simulations of the algorithms over those scenarios.
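To make the constraint model concrete, the sketch below shows what a placement-feasibility check over such attribute-based constraints might look like. It is a minimal illustration in Java (CloudSim's implementation language) under our own class and field names (NodeInfo, VmUnit, canPlaceOn); it is not the actual CloudSim API and not our exact modification.

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Hypothetical simplified model, not the actual CloudSim Host/Vm classes.
class NodeInfo {
    Map<String, Integer> attributes; // e.g., {att0=1, att1=0, att2=1}
    double usedCpu;                  // currently allocated CPU, in percent
    double cpuCapacity;              // Ncore * 100% * max-utilization threshold
}

class VmUnit {
    double cpuRequirement;             // total CPU of the unit, in percent
    Map<String, Integer> vmToNodeMust; // required attribute values (0 = any)
    List<VmUnit> vmToVmMustNot;        // units that must not share a node

    // True if this unit may be placed on 'node' given the current placement.
    boolean canPlaceOn(NodeInfo node, Map<VmUnit, NodeInfo> placement) {
        // Resource check: the whole unit must fit on a single node.
        if (node.usedCpu + cpuRequirement > node.cpuCapacity) return false;
        // VMtoNode-Must: every required attribute value must match (0 = any).
        for (Map.Entry<String, Integer> e : vmToNodeMust.entrySet())
            if (e.getValue() != 0
                    && !Objects.equals(node.attributes.get(e.getKey()), e.getValue()))
                return false;
        // VMtoVM-MustNot: no conflicting unit may already sit on this node.
        for (VmUnit other : vmToVmMustNot)
            if (placement.get(other) == node) return false;
        return true;
    }
}
```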

Cluster Setting. Table 4.1 presents the configurations of nodes and VMs used for our simulations. For the runtime and migration overhead of a VM, we use the values measured from one of the SPEC CPU2006 applications given in Table 4.2.

We assign attributes to the nodes in a cluster as follows. The nodes have $N_{att}$ attributes, $att_0, att_1, \ldots, att_{N_{att}-1}$, where $N_{att} \ge 2$, and each attribute has two possible values. For $att_0$, the possible values are 1 and 2, while for the other attributes, they are 0 and 1. Value 0 indicates that a node does not possess the specified attribute, while different positive values indicate different types of the attribute. Note that $att_0$ represents one kind of attribute, which every node must possess, such as the CPU architecture with Intel and AMD types, and the other attributes represent the other kind, with which only some of the nodes in the cluster are selectively configured, such as SSDs and GPUs.

Table 4.1: Cluster configurations

Total num. of nodes ($N_{node}$): 128, 256, 512
Num. of cores per node ($N_{core}$): 4, 8, 16
Num. of nodes with the same attribute values: 8, 16, 32
VM types: 40% require 100% of one core; 40% require 75% of one core; 20% require 50% of one core
Total CPU requirement of VMs: 75% of the total CPU resources

For each attribute, we assign the possible values to the nodes such that the cluster contains an equal number of nodes with each attribute value. For the default cluster of 256 nodes, $N_{att}$ is three, and the number of nodes with exactly the same attribute values is 32. After assigning values for all the attributes, we sort the nodes in a random order and assign each node an index. We set the maximum CPU utilization threshold of each node to 85% to leave some headroom (as discussed in [64]).
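As one way to realize this assignment scheme, the sketch below partitions the nodes into $2^{N_{att}}$ equal blocks, one per attribute-value combination, and then shuffles the node order; with 256 nodes and three attributes this yields exactly 32 nodes per combination, as stated above. The code is our assumption of how such a generator could be written, not the simulator's actual implementation, and it assumes $N_{node}$ is divisible by $2^{N_{att}}$.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class ClusterGenerator {
    // Returns one int[] of attribute values per node; list position = node index.
    static List<int[]> generateNodeAttributes(int nNode, int nAtt, Random rnd) {
        int combos = 1 << nAtt;          // 2^Natt attribute-value combinations
        int blockSize = nNode / combos;  // e.g., 256 / 8 = 32 nodes per combination
        List<int[]> nodes = new ArrayList<>();
        for (int i = 0; i < nNode; i++) {
            int combo = i / blockSize;   // block number encodes the combination
            int[] atts = new int[nAtt];
            for (int a = 0; a < nAtt; a++) {
                int bit = (combo >> a) & 1;
                atts[a] = (a == 0) ? bit + 1 : bit; // att0 in {1,2}, others in {0,1}
            }
            nodes.add(atts);
        }
        Collections.shuffle(nodes, rnd); // sort the nodes in a random order
        return nodes;
    }
}
```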

Constraint Generation. Each user or user group can own multiple VMs. We call a set of VMs owned by the same user (or user group) a VM group. A user can have multiple VM groups based on his or her needs. For VMs in the same VM group, a subset of the VMs may need to be placed on the same node, forming a VM unit. When generating simulation scenarios, we consider the following four relationships among VM units in the same group (also encoded in the sketch after this list).

1. All the VM units have the VMtoVM-MustNot constraint with respect to each other, and have the same VMtoNode-Must constraint condition.

2. All the VM units have the VMtoVM-MustNot constraint with respect to each other, but have no VMtoNode-Must constraint.

3. All the VM units have the same VMtoNode-Must constraint condition, but have no VMtoVM-MustNot constraint.

4. All the VM units have neither the VMtoVM-MustNot constraint nor the VMtoNode-Must constraint.
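These four cases are the cross product of whether a group has the VMtoVM-MustNot constraint and whether it has the VMtoNode-Must constraint. A hypothetical encoding, for illustration only:

```java
// Hypothetical encoding of the four relationship cases among VM units in a group.
enum GroupRelationship {
    MUSTNOT_AND_MUST(true, true),  // case 1: mutual VMtoVM-MustNot + shared VMtoNode-Must
    MUSTNOT_ONLY(true, false),     // case 2: mutual VMtoVM-MustNot only
    MUST_ONLY(false, true),        // case 3: shared VMtoNode-Must only
    NONE(false, false);            // case 4: no constraints

    final boolean vmToVmMustNot;
    final boolean vmToNodeMust;

    GroupRelationship(boolean vmToVmMustNot, boolean vmToNodeMust) {
        this.vmToVmMustNot = vmToVmMustNot;
        this.vmToNodeMust = vmToNodeMust;
    }
}
```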

Since each VM can have a different CPU resource requirement, we define the VM unit size as the total CPU resource requirement of a VM unit, where the CPU resource of one core is counted as 100%. For example, for a VM unit $u$, if the size of $u$ is 500%, then the total CPU resource requirement of all the VMs in $u$ is 5 cores. In addition, we define the VM group size as the total number of VM units in a VM group. We assign the VM placement constraints of VMs as follows (a code sketch of the procedure follows the list).

1. For all the VMs to be created, we first randomly create multiple VM units and form multiple VM groups from the generated VM units, such that the VM unit sizes and VM group sizes are randomly selected within ranges limited by the maximum VM unit size $M_u$ and the maximum VM group size $M_g$, respectively.

2. Each group may have the VMtoNode-Must and/or VMtoVM-MustNot constraints. If the VMtoNode-Must constraint rate, $CR_{VtoNMust}$, is x%, then x% of the groups have the VMtoNode-Must constraint. Similarly, if the VMtoVM-MustNot constraint rate, $CR_{VtoVMustNot}$, is y%, then y% of the groups have the VMtoVM-MustNot constraint.

3. Each VM group with the VMtoNode-Must constraint has an equal chance of selecting each of the possible values for every node attribute. For an attribute, when value 0 is assigned to a group, all the VM units in the group can be placed on any node with respect to that attribute.
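The sketch below strings the three steps together. $M_u$, $M_g$, the two constraint rates, and the attribute-value choices come from the text above; the 25% granularity of unit sizes and the uniform random draws are our assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hedged sketch of constraint-generation steps 1-3 (sizes in CPU percent).
class ConstraintGenerator {
    static class Group {
        List<Double> unitSizes = new ArrayList<>(); // one entry per VM unit (step 1)
        boolean vmToNodeMust, vmToVmMustNot;        // step 2
        int[] requiredValues;                       // step 3 (0 = any node)
    }

    static List<Group> generate(double totalCpu, double maxUnitSize, int maxGroupSize,
                                double crMust, double crMustNot, int nAtt, Random rnd) {
        List<Group> groups = new ArrayList<>();
        double remaining = totalCpu;                // 75% of total cluster CPU (Table 4.1)
        while (remaining > 0) {
            Group g = new Group();
            int groupSize = 1 + rnd.nextInt(maxGroupSize);   // group size in [1, Mg]
            for (int u = 0; u < groupSize && remaining > 0; u++) {
                // Assumption: unit sizes are multiples of 25% in [50%, Mu],
                // reachable as sums of the 50%/75%/100% VM types.
                int steps = (int) ((maxUnitSize - 50.0) / 25.0) + 1;
                double size = Math.min(50.0 + 25.0 * rnd.nextInt(steps), remaining);
                g.unitSizes.add(size);
                remaining -= size;
            }
            g.vmToNodeMust = rnd.nextDouble() * 100 < crMust;     // rate CR_VtoNMust (x%)
            g.vmToVmMustNot = rnd.nextDouble() * 100 < crMustNot; // rate CR_VtoVMustNot (y%)
            g.requiredValues = new int[nAtt];
            if (g.vmToNodeMust) {
                for (int a = 0; a < nAtt; a++) {
                    // Equal chance over each attribute's possible values:
                    // {1, 2} for att0, {0, 1} for the others (0 = "any node").
                    g.requiredValues[a] = (a == 0) ? 1 + rnd.nextInt(2) : rnd.nextInt(2);
                }
            }
            groups.add(g);
        }
        return groups;
    }
}
```

Note that applying the two constraint rates independently in step 2 reproduces exactly the four relationship cases listed earlier.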

It is important to note that, when generating constraints for VMs, we avoid constraint settings under which placing all the VMs is infeasible. For example, we ensure that the total CPU requirement of the VMs that must be placed on nodes with a certain attribute value is at most the total amount of CPU resources of the nodes that possess that value.
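A minimal sketch of such a feasibility guard, assuming the per-attribute-value demand and capacity have been aggregated beforehand (the map keys and names are hypothetical):

```java
import java.util.Map;

// Hedged sketch of the feasibility guard: for every (attribute, value) pair,
// the CPU demanded by groups pinned to that pair must not exceed the CPU
// capacity of the nodes possessing it. Both maps are assumed precomputed,
// keyed by strings such as "att1=1".
class FeasibilityCheck {
    static boolean feasible(Map<String, Double> demandByValue,
                            Map<String, Double> capacityByValue) {
        for (Map.Entry<String, Double> e : demandByValue.entrySet()) {
            double capacity = capacityByValue.getOrDefault(e.getKey(), 0.0);
            if (e.getValue() > capacity) return false; // infeasible: regenerate constraints
        }
        return true;
    }
}
```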

Metrics. For each simulation run, VM creation requests arrive dynamically at the system following a normal distribution, and the last VM creation is requested at 3,600 seconds, regardless of the total number of VMs. A created VM executes its application iteratively, but once the last VM is created, each VM finishes its remaining execution and does not restart. The performance of each algorithm is evaluated over the whole period. Two different performance metrics are used depending on the optimization goal.
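As one plausible reading of this arrival process, the sketch below draws arrival times from a normal distribution and rescales them so that the last request lands exactly at 3,600 seconds; the mean and standard deviation are our assumptions, since they are not specified here.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch: arrival times drawn from a normal distribution and rescaled
// so the last request lands at t = 3,600 s. Mean and standard deviation are
// assumed values; the text does not specify them.
class ArrivalGenerator {
    static double[] arrivals(int nVm, Random rnd) {
        double mean = 1800.0, std = 600.0;   // assumption, not from the text
        double[] t = new double[nVm];
        for (int i = 0; i < nVm; i++)
            t[i] = Math.max(0.0, mean + std * rnd.nextGaussian());
        Arrays.sort(t);
        double scale = 3600.0 / t[nVm - 1];  // pin the last arrival to 3,600 s
        for (int i = 0; i < nVm; i++)
            t[i] *= scale;
        return t;
    }
}
```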

• The number of active nodes ($N_{active}$): The time-weighted average of the number of active nodes, measured for the ES algorithms.

• Load imbalance ($\sigma_{load}$): The time-weighted average of the standard deviation of the CPU utilization across nodes, computed for the LB algorithms during the period from the time when 20% of the VM creations have been requested to the time when 80% of them have been requested. (In the excluded periods, the load balance of the cluster is not noticeably affected by the choice of algorithm.) See the sketch after this list.
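Both metrics rely on the same time-weighted averaging, in which each sampled value is weighted by the duration for which it held. A minimal sketch under our own naming:

```java
import java.util.List;

// Minimal sketch of a time-weighted average: times.get(i) is the instant at
// which values.get(i) starts to hold, and 'end' closes the last interval.
class TimeWeightedAverage {
    static double compute(List<Double> times, List<Double> values, double end) {
        double weightedSum = 0.0, totalTime = 0.0;
        for (int i = 0; i < values.size(); i++) {
            double next = (i + 1 < times.size()) ? times.get(i + 1) : end;
            double dt = next - times.get(i);
            weightedSum += values.get(i) * dt;  // value weighted by how long it held
            totalTime += dt;
        }
        return weightedSum / totalTime;
    }
}
```

For $\sigma_{load}$, the sampled values would be the standard deviations of per-node CPU utilization recorded at each placement or migration event.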

In addition, we measured the following two metrics for all the algorithms:

• VM creation failure rate ($R_{fail}$): The percentage of VMs that fail to be created because no node satisfies their VM placement constraints and resource requirements.

• Migration overhead ($T_{mig}$): The ratio, as a percentage, of the average additional VM runtime caused by VM migrations to the average runtime of all VMs in the no-constraint case (formalized below).
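Under our reading of this definition, with $\overline{\Delta T}$ denoting the average additional runtime caused by migrations and $\overline{T}_{nc}$ the average VM runtime in the no-constraint case,

$$T_{mig} = \frac{\overline{\Delta T}}{\overline{T}_{nc}} \times 100\%.$$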

The result of each simulation in this section is the average over 200 simulation runs. For the spot migration algorithm, the threshold was selected experimentally, and for the periodic migration algorithm, the periodic dynamic migration of VMs was performed every 100 VM creations.