
5.3.2 Bandwidth and Migration Time

The other consideration for improving application performance is the time required to perform a live migration. A live migration is made in order to improve performance, that is, to shorten the time needed to complete a task. There must therefore be a way of estimating how long a migration will take, to ensure that performing it will actually improve performance.

Figure 5.7: Number of frames processed every second on ffmpeg, with NFS bandwidth limit (by 50Mbps)

The experiment was performed using the same hardware as in Table 5.1, except that the CPU clock on hypervisor 1 was limited to 1.6GHz (133MHz base clock multiplied by 12.0). A FreeBSD dummynet bridge was placed between the hypervisors to limit the bandwidth available for live migration. The hardware specification of the dummynet bridge is shown in Table 5.2, and the topology is shown in Figure 5.8.

Table 5.2: Dummynet bridge hardware specification

Dummynet Bridge
  CPU      AMD Phenom II X4 810, 2.6GHz
  Memory   2GB DDR2
  Chipset  AMD 780G
  Network  Intel 82574L Gigabit Ethernet (PCI-Express)
           Intel 82541PI Gigabit Ethernet (PCI)
  OS       FreeBSD 8.1 i386
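A dummynet bandwidth cap of this kind is configured through ipfw; the short sketch below shows one way it could be driven from a script. The pipe number, rule number, interface name (em0) and the 200Mbit/s value are illustrative assumptions, not details taken from the actual experiment setup.

```python
# Sketch: capping the bandwidth available to migration traffic with FreeBSD
# dummynet, driven from Python. Pipe/rule numbers, the interface name and the
# bandwidth value are illustrative assumptions.
import subprocess

def ipfw(*args):
    """Run an ipfw command and raise if it fails."""
    cmd = ["ipfw", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a dummynet pipe limited to 200Mbit/s (the experiment (4) setting).
ipfw("pipe", "1", "config", "bw", "200Mbit/s")

# Send all IP traffic crossing the bridge interface em0 through the pipe.
ipfw("add", "100", "pipe", "1", "ip", "from", "any", "to", "any", "via", "em0")
```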

The ffmpeg encoding was executed again to observe the relationship between migration and application performance improvement. The results of the experiment are shown in Table 5.3.

Experiment (1) simply runs the ffmpeg video encoding within the virtual machine without performing any live migration.


Figure 5.8: Migration bandwidth pre-experiment map (hypervisor 1 and hypervisor 2 connected through the dummynet bridge for migration traffic and through a switching hub to the NFS server; the virtual machine migrates between the two hypervisors)

Table 5.3: ffmpeg migration pre-experiment

                              Hypervisor 1 (1.6GHz)         Hypervisor 2 (3GHz)
     Migrate at   Bandwidth   ffmpeg  Mig to 2  iperf       ffmpeg  Mig to 1  iperf
                  (Mbps)      (min)   (min)     (Mbps)      (min)   (min)     (Mbps)
(1)  None         -           4:55    -         -           3:22    -         -
(2)  30 sec       1000        3:46    0:35      751         4:37    0:38
(3)  60 sec                   3:38    0:30                  3:38    0:41      667
(4)  30 sec       200         4:03    1:33      179         4:33    1:55      173
(5)  30 sec       100         5:04    3:25      93.1        3:45    4:43      87

Experiments (2), (4), and (5) run the video encoding and start the live migration manually after 30 seconds of wall-clock time. Experiment (3) is the same, except that the live migration was started after 60 seconds. The time taken to complete the video encoding is shown in the “ffmpeg” column, the time between issuing the migration and the migration actually taking place is shown in the “Mig to” column, and the result of iperf, a network throughput benchmark, is shown in the “iperf” column to indicate the actual network bandwidth available.
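As an illustration of how the “ffmpeg” and “Mig to” measurements could be automated, the sketch below waits 30 seconds and then times a virsh-driven live migration; the domain name and destination URI are hypothetical, and in the actual pre-experiment the migration was started manually.

```python
# Sketch: start a live migration 30 seconds into the workload and record how
# long it takes ("Mig to" column). Domain name and destination URI are
# hypothetical placeholders; the pre-experiment triggered migration manually.
import subprocess
import time

DOMAIN = "encoding-vm"                          # hypothetical guest name
DEST = "qemu+ssh://hypervisor2/system"          # hypothetical destination URI

time.sleep(30)                                  # 30 seconds of wall-clock time

start = time.time()
subprocess.run(["virsh", "migrate", "--live", DOMAIN, DEST], check=True)
elapsed = int(time.time() - start)

minutes, seconds = divmod(elapsed, 60)
print(f"migration took {minutes}:{seconds:02d} (min:sec)")
```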

In the table, experiment (1) shows that video encoding performance is affected by the CPU clock speed available on the hypervisor. Experiment (2) then shows that performance is also affected when the virtual machine is migrated to another hypervisor: when the virtual machine migrated from hypervisor 1 to 2, performance improved; when it migrated from hypervisor 2 to 1, whose CPU clock is lower, performance declined.

In experiment (3), the run that started on hypervisor 2, migrating from the hypervisor with the faster CPU to the slower one, improved over experiment (2) because the virtual machine stayed on the faster hypervisor longer. For the migration from hypervisor 1 to hypervisor 2, performance was expected to decline compared to starting the migration after 30 seconds, but it too turned out to be better in this case.

Figure 5.9: Bandwidth consumed by migration, monitored using ifstat (live migration bandwidth in Kbps against time elapsed in seconds)

When a live migration is performed, network bandwidth is consumed as shown in Figure 5.9. The data show that no more than 300Mbps of bandwidth was consumed by the migration. Thus, in experiments (4) and (5), the network bandwidth available for the live migration was limited to 200Mbps and 100Mbps, respectively.
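The per-second trace in Figure 5.9 was taken with ifstat; a rough equivalent that samples the kernel's transmit counter directly is sketched below, assuming a Linux hypervisor and a hypothetical interface name.

```python
# Sketch: report transmitted bandwidth in Kbps once per second, similar to
# the ifstat output used for Figure 5.9. Linux-specific; the interface name
# is a hypothetical placeholder.
import time

IFACE = "eth0"   # hypothetical migration-facing interface

def tx_bytes(iface):
    with open(f"/sys/class/net/{iface}/statistics/tx_bytes") as f:
        return int(f.read())

prev = tx_bytes(IFACE)
while True:
    time.sleep(1)
    cur = tx_bytes(IFACE)
    print(f"{(cur - prev) * 8 / 1000:.0f} Kbps")   # bytes/s -> kilobits/s
    prev = cur
```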

When the network bandwidth was limited to 200Mbps (experiment 4), the migration took roughly three times longer than without a bandwidth limit. The experiment shows that the migration is still effective when the network bandwidth is limited to 200Mbps, but the performance improvement over running without migration was approximately 7% smaller than without the limit. This is a case where migration remained effective even though the network bandwidth was limited.

On the other hand, when the network bandwidth was limited to 100Mbps in experiment (5), performance did not improve. In particular, when the test was run on hypervisor 2 and migrated to hypervisor 1, the video encoding had already finished before the live migration completed. This is a case where the migration became a waste: the synchronization of memory contents was unnecessary, and performance may even have suffered while the migration was in progress. This particular virtual machine had 1024MB of memory allocated, and an average of approximately 75.50Mbps of network bandwidth was consumed during the migration. If Equation 5.1 holds, simply copying the memory contents would take approximately 108 seconds, as shown in Equation 5.2.

T_M = \frac{1024\,\mathrm{(MB)} \times 8}{75.50\,\mathrm{(Mbps)}} \approx 108\ \mathrm{(seconds)} \qquad (5.2)
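The same estimate can be reproduced directly from the numbers above; a minimal sketch, using only the allocated memory size and the measured average bandwidth, is shown below.

```python
# Sketch: naive migration-time estimate corresponding to Equation 5.2 --
# the time to copy the allocated memory once, ignoring pages that are
# dirtied while the copy is in progress.
def naive_migration_time(memory_mb, bandwidth_mbps):
    """Seconds needed to transfer memory_mb of RAM at bandwidth_mbps."""
    return memory_mb * 8 / bandwidth_mbps

print(naive_migration_time(1024, 75.50))   # ~108.5 seconds, matching Equation 5.2
```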


However, the migration took 283 seconds to complete. The test was running a video encoding application, which kept changing memory contents while they were being synchronized. Even during the migration process, applications can modify memory regardless of whether it has already been synchronized; a region that changes after being copied must be synchronized again.

By default, KVM keeps synchronizing memory contents until the pause-and-resume downtime required for the final switchover falls below 30 milliseconds. We can infer that, because the synchronization was too slow to complete, a large portion of the memory contents had been modified again since the first synchronization pass.

Because the final stage of the migration does not take place until the memory changes settle, the synchronization iterated multiple times without ever allowing the final migration step to complete. This phenomenon did not appear when the same test was conducted with no limit on the migration network bandwidth. From this experiment, it can be said that the network bandwidth available for migrating virtual machines affects both the duration and the effectiveness of the migration.
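The behaviour described here can be illustrated with a small model of iterative pre-copy: each round re-sends the memory dirtied during the previous round, and the final stop-and-copy only happens once the remaining data fits within the downtime budget (the 30-millisecond default mentioned above). The dirty rates and bandwidths in the example are hypothetical.

```python
# Sketch: toy model of iterative pre-copy migration. Each round copies the
# data left over from the previous round while the application keeps
# dirtying memory at dirty_rate_mbps. The final switchover happens only when
# the remainder can be sent within the downtime budget (~30 ms by default,
# as described above). All numbers are illustrative.
def precopy_rounds(memory_mb, bandwidth_mbps, dirty_rate_mbps,
                   downtime_s=0.030, max_rounds=30):
    remaining_mb = memory_mb
    total_s = 0.0
    for rounds in range(1, max_rounds + 1):
        copy_s = remaining_mb * 8 / bandwidth_mbps     # time to send this round
        total_s += copy_s
        remaining_mb = dirty_rate_mbps * copy_s / 8    # memory dirtied meanwhile
        if remaining_mb * 8 / bandwidth_mbps <= downtime_s:
            return rounds, total_s                     # converged: final copy fits
    return None, total_s                               # did not converge

# Plenty of headroom: converges in a few rounds.
print(precopy_rounds(1024, 300.0, 50.0))
# Dirty rate close to the available bandwidth: keeps iterating, never converges.
print(precopy_rounds(1024, 75.5, 70.0))
```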

This pre-experiment has shown that live migration does not always improve the performance of the virtual machine. Estimating the duration of a migration from the available network bandwidth and the allocated memory space is possible.

However, the additional overhead of re-synchronizing modified memory must be taken into account, and this duration may not be easily estimated without knowing how much memory is being modified by the applications running inside the virtual machine.