This migration must be transparent to the guest operating system, to applications running on it, and to remote clients of the virtual machine. In the first step of precopy memory migration, all memory pages are transferred to the destination; subsequent iterations resend only the pages dirtied in the meantime.
When the server wants to reclaim memory, it instructs the driver to "inflate" the balloon by allocating pinned physical pages within the VM. The Kernel-based Virtual Machine [Machines 2012], or KVM, is a virtual machine monitor that provides full virtualization for Linux on x86 hardware. In the first iteration of storage migration, all the storage data must be copied to the destination VM.
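A minimal sketch of this balloon mechanism, assuming a cooperative in-guest driver; the `BalloonDriver` class and its page accounting are hypothetical, not the interface of any real hypervisor:

```python
# Illustrative sketch of ballooning: the host asks an in-guest balloon
# driver to pin guest pages so their backing machine memory can be
# reclaimed. All names and numbers here are hypothetical.
PAGE_SIZE = 4096  # bytes per page

class BalloonDriver:
    def __init__(self, guest_free_pages):
        self.guest_free_pages = guest_free_pages  # pages the guest can spare
        self.balloon = 0                          # pages currently pinned

    def inflate(self, n_pages):
        """Pin n_pages inside the guest; the host may reclaim their memory."""
        grabbed = min(n_pages, self.guest_free_pages)
        self.guest_free_pages -= grabbed
        self.balloon += grabbed
        return grabbed * PAGE_SIZE  # bytes made reclaimable

    def deflate(self, n_pages):
        """Return pages to the guest when host memory pressure drops."""
        released = min(n_pages, self.balloon)
        self.balloon -= released
        self.guest_free_pages += released
        return released * PAGE_SIZE

driver = BalloonDriver(guest_free_pages=1024)
print(f"host can reclaim {driver.inflate(256)} bytes")
print(f"guest regains {driver.deflate(128)} bytes")
```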
Postcopy migration transfers the VM's memory contents after its processor state has been sent to the destination host. Active push reduces the duration of residual dependencies on the source host by proactively pushing the VM's pages from the source to the target while the VM continues execution.
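To make the precopy loop concrete, here is a minimal simulation of the iterative phase: every page is sent in round one, each later round resends only pages dirtied during the previous transfer, and a final stop-and-copy ships the residue with the processor state. The dirty-page model (10% of the pages sent per round) is an invented stand-in for real dirty-bit tracking:

```python
import random

# Simulated iterative precopy: round 1 sends every page; later rounds
# resend only pages dirtied during the previous round; a final
# stop-and-copy sends the residue together with the CPU state.
TOTAL_PAGES = 10_000
STOP_THRESHOLD = 100   # stop-and-copy once the dirty set is this small
MAX_ROUNDS = 30

def dirtied_during_transfer(pages_sent):
    # Invented workload model: shorter rounds (fewer pages sent) leave
    # less time for the guest to dirty pages; here, 10% of pages sent.
    return set(random.sample(range(TOTAL_PAGES), k=pages_sent // 10))

dirty = set(range(TOTAL_PAGES))  # round 1: everything is "dirty"
for round_no in range(1, MAX_ROUNDS + 1):
    sent = len(dirty)
    dirty = dirtied_during_transfer(sent)  # pages touched while sending
    print(f"round {round_no}: sent {sent} pages, {len(dirty)} dirtied")
    if len(dirty) <= STOP_THRESHOLD:
        break

# Pause the VM, send the remaining dirty pages plus processor state,
# then resume execution at the destination.
print(f"stop-and-copy: {len(dirty)} pages + CPU state")
```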
SUSPEND/RESUME MIGRATION 1. Important Concepts
- Internet Suspend/Resume
- Capsule
- The Collective
- SoulPad
- Cloudlets
- CloneCloud
The improvement consists of reducing the number of iterations in the precopy phase of Xen live migration to a minimum of two. For example, the virtual machine monitor was VMware in early versions of ISR, but the latest versions can use VMware, KVM, or Xen. The ISR client software encrypts the data of a parcel before it is transmitted to the distributed storage layer.
However, this project implements several optimizations to reduce a capsule's storage requirements, transfer time, and startup time over a network. Ballooning reduces the size of the compressed memory state and thus the startup time of capsules. PCs can be connected to a broadband LAN or WAN, or even disconnected from the network, as with a laptop.
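The ballooning optimization helps because pages reclaimed by the balloon can be zero-filled, and zeros compress to almost nothing, shrinking the capsule's compressed memory state; a rough illustration under that assumption, with invented page counts and contents:

```python
import os
import zlib

# Why ballooning shrinks a capsule's compressed memory image: pages
# reclaimed by the balloon are zero-filled, and zero pages compress
# to almost nothing.
PAGE = 4096
N_PAGES = 512

busy_pages = [os.urandom(PAGE) for _ in range(N_PAGES)]  # incompressible
ballooned = busy_pages[: N_PAGES // 2] + [bytes(PAGE)] * (N_PAGES // 2)

before = len(zlib.compress(b"".join(busy_pages)))
after = len(zlib.compress(b"".join(ballooned)))
print(f"compressed size, no ballooning:  {before} bytes")
print(f"compressed size, half ballooned: {after} bytes")
```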
The Collective offers cache-based system management, which separates a computer's state into two parts: the system state, consisting of the operating system and all installed applications, and the user state, consisting of the user's profile, preferences, and data files. The VAT performs functions such as authenticating users, fetching and running the latest copies of appliances locally, storing user state in the data repository, and managing a cache to reduce the amount of data that must be fetched over the network and to improve performance. SoulPad divides the user's machine into a body (screen, CPU, RAM, I/O) and a soul (session state, software, data, preferences).
A cloudlet is defined as a trusted, resource-rich computer or cluster of computers that is well connected to the Internet and available for use by nearby mobile devices. If a migrated thread reaches a reintegration point, it is suspended, packaged, and returned to the mobile device. A mathematical optimizer selects migration points that minimize total execution time or the mobile device's energy consumption according to the application and the cost model.
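A toy version of such a cost-model optimizer, with exhaustive search standing in for the solver used in practice; the method names and all costs are invented for illustration:

```python
from itertools import combinations

# Toy migration-point optimizer in the spirit described above: choose
# the set of methods to offload that minimizes total cost under a
# simple additive cost model. Names and numbers are invented.
methods = {
    # name:    (local_cost, remote_cost, transfer_cost)
    "decode": (8.0, 1.0, 2.5),
    "render": (5.0, 0.8, 3.0),
    "search": (12.0, 1.5, 1.0),
}

def total_cost(offloaded):
    cost = 0.0
    for name, (local, remote, transfer) in methods.items():
        if name in offloaded:
            cost += remote + transfer  # runs on the remote machine
        else:
            cost += local              # runs on the mobile device
    return cost

# Exhaustive search over all partitions (a real system would use an
# ILP or similar solver instead).
candidates = [set(c) for r in range(len(methods) + 1)
              for c in combinations(methods, r)]
best = min(candidates, key=total_cost)
print(f"offload {sorted(best)}: total cost {total_cost(best):.1f}")
```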
Test results have shown that some applications run up to twenty times faster and consume up to twenty times less energy on the mobile device.
WAN LIVE MIGRATION 1. Important Concepts
- Seamless Live Migration of Virtual Machines over the MAN/WAN
- Live Wide-Area Migration of Virtual Machines
- A Live Storage Migration Mechanism over WAN
- CloudNet
The live migration caused an application downtime of 0.8–1.6 seconds, five to ten times higher than with an intra-LAN configuration. During storage migration, a user-level block device records and forwards any write access to the destination. The system then starts the bulk transfer stage, which pre-copies the VM's disk image to the destination while the VM continues to execute.
The system then invokes Xen live migration, which iteratively copies the dirty pages to the target VM and logs them. Because the VM continues to run at the source during the bulk transfer and Xen live migration stages, any changes to its disk image must be propagated to the target and applied to the image there. The deltas are sent and queued at the destination for later application to the disk image.
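The write-logging idea can be sketched as follows: writes issued while the bulk pre-copy runs are applied locally, recorded as deltas, and later replayed on the destination image so both sides converge. The block devices here are simulated byte arrays, not a real block-device interface:

```python
# Sketch of write logging during a WAN disk pre-copy: guest writes that
# land while the bulk transfer runs are applied at the source, recorded
# as deltas, and replayed on the destination image.
BLOCK = 512

source_disk = bytearray(BLOCK * 16)
dest_disk = bytearray(source_disk)  # state right after the bulk transfer
delta_queue = []                    # (offset, data) records

def vm_write(offset, data):
    """A guest write during migration: apply locally, log a delta."""
    source_disk[offset:offset + len(data)] = data
    delta_queue.append((offset, bytes(data)))

# The guest keeps executing while the bulk copy is in flight...
vm_write(0, b"A" * BLOCK)
vm_write(BLOCK * 3, b"B" * 100)

# ...then the queued deltas are applied to the destination image.
for offset, data in delta_queue:
    dest_disk[offset:offset + len(data)] = data

assert dest_disk == source_disk
print(f"applied {len(delta_queue)} deltas; images consistent")
```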
When the migration is complete, the dynamic DNS entry is updated and new connections are directed to the new VM's IP address. NBD (Network Block Device) connects the source and destination host nodes to the destination and proxy servers, respectively, over TCP/IP. In the final part of the live migration, the VM is restarted at the target site, and I/O operations are then performed at the target site via a proxy server.
The configuration parameters correspond to a network between Tokyo and the West Coast of the United States. Experimental results show that the proposed mechanism relocates disks between source and destination sites with I/O performance comparable to replicated I/O operations in a LAN environment. CloudNet uses the Distributed Replicated Block Device (DRBD) storage system to migrate storage to the destination data center.
Once the remote disk has been brought into a consistent state, CloudNet switches to a synchronous replication scheme and the live migration of the VM's memory state is initiated.
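The switch from asynchronous to synchronous replication can be pictured with the following schematic; it models only the latency trade-off and is not DRBD's actual protocol:

```python
import queue
import threading
import time

# Schematic of switching from asynchronous to synchronous replication:
# async mode acknowledges writes at once and drains them in the
# background; after the switch, each write blocks until replicated.
replica = []
backlog = queue.Queue()
synchronous = False

def replicator():
    while True:
        block = backlog.get()
        if block is None:
            break
        time.sleep(0.01)        # stand-in for WAN round-trip latency
        replica.append(block)
        backlog.task_done()

threading.Thread(target=replicator, daemon=True).start()

def write(block):
    backlog.put(block)
    if synchronous:
        backlog.join()          # synchronous: wait for the replica

for b in range(5):              # bulk phase: asynchronous, low latency
    write(b)
backlog.join()                  # remote disk is now consistent
synchronous = True              # switch to synchronous replication
write("first-sync-write")       # this write waits for replication
print(f"replica holds {len(replica)} blocks")
backlog.put(None)               # shut the worker down
```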
LOAD BALANCING 1. Important Concepts
- Dynamic Placement of Virtual Machines
- HARMONY
HARMONY interacts with the storage virtualization manager (the CLI interface of the IBM SVC) to obtain the virtual storage configuration and to perform non-disruptive data migration. It also interacts with a server virtualization manager to obtain configuration and performance information about VMs and physical servers and to manage live migration of VMs. If a node is overloaded, exceeding a threshold, HARMONY starts the optimization planning component, VectorDot.
It generates recommendations for migrating one or more VMs or Vdisks to alleviate the hotspot. These are processed by the Virtualization Orchestrator using the appropriate server and storage virtualization managers. A switch node (or a switch port) can become overloaded when a large amount of I/O is pushed through it.
If a node exceeds the threshold along any of its resource dimensions, it is considered an overloaded node or trigger node. To resolve the overload, HARMONY's load-balancing algorithm moves one or more VMs or Vdisks from the overloaded node to underloaded nodes. In an example of combined virtual machine and storage migration with HARMONY, the authors obtained the following results: VM2 and its storage are migrated to another physical server.
This happens because of the CPU congestion that live migration causes at the source and destination servers. Live storage migration does not cause CPU congestion and has a smaller overhead of 7.6%, but it takes much longer, as 20 GB of data is migrated to another physical machine. HARMONY is highly scalable, delivering allocation and load-balancing recommendations for more than 5,000 VMs and Vdisks on more than 1,300 nodes in under 100 seconds.
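The trigger-node logic can be reduced to the following sketch: each node's load is a vector over resource dimensions, any dimension above the threshold marks a trigger node, and a VM is moved to the node that stays least loaded after the move. This is a simplification of the idea, not the paper's VectorDot algorithm, and all loads are invented:

```python
# Simplified trigger-node detection and migration choice over several
# resource dimensions (expressed as fractions of capacity).
THRESHOLD = 0.75
DIMS = ("cpu", "net", "io")

nodes = {
    "node-a": {"cpu": 0.90, "net": 0.40, "io": 0.80},  # overloaded
    "node-b": {"cpu": 0.30, "net": 0.20, "io": 0.25},
    "node-c": {"cpu": 0.55, "net": 0.50, "io": 0.45},
}
vms_on = {"node-a": {"vm1": {"cpu": 0.30, "net": 0.10, "io": 0.35}}}

def trigger_nodes():
    """Nodes above the threshold on any resource dimension."""
    return [n for n, load in nodes.items()
            if any(load[d] > THRESHOLD for d in DIMS)]

for node in trigger_nodes():
    vm, usage = next(iter(vms_on[node].items()))
    # Destination whose worst dimension stays lowest after the move.
    dest = min((n for n in nodes if n != node),
               key=lambda n: max(nodes[n][d] + usage[d] for d in DIMS))
    if all(nodes[dest][d] + usage[d] <= THRESHOLD for d in DIMS):
        print(f"migrate {vm}: {node} -> {dest}")
```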
RELATED TOPIC: AUTOMATIC MANAGEMENT OF LIVE MIGRATION 1. Important Concepts
- Sandpiper
- VirtualPower
- Live Migration of VM Based on Full System Trace and Replay
- Record/Replay for Replication
- Lightweight Virtualization Solution
Sandpiper works by moving load from the most congested servers to the least congested ones while minimizing the data copied during migration. If the algorithm cannot find a physical server to host the VM with the highest VSR, it moves to the VM with the next-highest VSR and tries to migrate it in a similar fashion (see the sketch below). VirtualPower exports a rich set of virtualized power states, called Virtual Power Management (VPM) states; the guest then acts on them according to its VM-level power management policies.
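Returning to Sandpiper's selection loop: a hedged sketch of the VSR-ordered fallback described above, using Sandpiper's published volume formula (volume = 1/((1-cpu)(1-net)(1-mem)), VSR = volume/size) with invented utilizations and capacities:

```python
# Sketch of VSR-ordered selection: candidate VMs are tried in
# decreasing volume-to-size order; when no server can host one,
# the algorithm falls back to the next-highest VSR VM.
vms = {
    # name:  (cpu, net, mem utilization, memory size in GB)
    "vm1": (0.80, 0.30, 0.60, 2.0),
    "vm2": (0.50, 0.50, 0.50, 8.0),
    "vm3": (0.20, 0.10, 0.30, 1.0),
}
servers = {"host-b": 0.40, "host-c": 0.70}  # CPU load, capacity 1.0

def vsr(stats):
    cpu, net, mem, size = stats
    volume = 1.0 / ((1 - cpu) * (1 - net) * (1 - mem))
    return volume / size

for name, stats in sorted(vms.items(), key=lambda kv: vsr(kv[1]),
                          reverse=True):
    cpu = stats[0]
    dest = next((s for s, load in servers.items()
                 if load + cpu <= 1.0), None)
    if dest is not None:
        print(f"migrate {name} (VSR {vsr(stats):.2f}) -> {dest}")
        servers[dest] += cpu
        break  # one migration resolves the hotspot in this sketch
    # otherwise fall through to the VM with the next-highest VSR
```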
Time refers to the exact point in the execution flow at which an event occurs. When the last log file has been replayed, there is a consistent replica of the VM on both the source and the target. The state of the replica needs to be synchronized with the primary only when the primary's output becomes externally visible.
Writes to the primary disk are held in a RAM buffer until the corresponding checkpoint is reached. Once the checkpoint is acknowledged, the primary releases the buffered outgoing network traffic, and the buffered disk writes are applied to the backup disk. Thus, output becomes visible only after the associated system state has been committed to the replica.
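A schematic of this buffer-until-commit rule: externally visible output is held in RAM until the checkpoint that produced it is acknowledged by the backup. This sketches the idea only, not the implementation of any particular system:

```python
# Schematic of buffering output until the checkpoint commits: packets
# and disk writes stay in RAM buffers on the primary and are released
# only once the backup acknowledges the epoch's checkpoint.
net_buffer, disk_buffer = [], []
wire, backup_disk = [], []      # externally visible channel, backup state

def guest_sends_packet(pkt):
    net_buffer.append(pkt)      # held: not yet externally visible

def guest_writes_disk(block):
    disk_buffer.append(block)   # held in RAM until the checkpoint

def checkpoint_acknowledged():
    """Backup committed this epoch: release all buffered effects."""
    wire.extend(net_buffer)             # output becomes visible now
    backup_disk.extend(disk_buffer)     # apply writes on the backup
    net_buffer.clear()
    disk_buffer.clear()

guest_sends_packet("reply-1")
guest_writes_disk("block-7")
assert wire == []               # nothing visible before the commit
checkpoint_acknowledged()
assert wire == ["reply-1"] and backup_disk == ["block-7"]
print("output released only after the checkpoint was committed")
```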
After the guest VM resumes, the content stored in the migration buffer is migrated to the replica machine first. At the end of each network-buffer migration cycle, it is known which packets should be replicated to the backup machine and which should be released to clients. Lightweight virtualization approaches [Vaughan-Nichols 2006] allow a single operating system to run multiple instances of the same OS or of different OSes.
Instead, the virtualized OS or application talks to the host OS, which then makes the appropriate calls to the real hardware.
CURRENT TRENDS
All incoming packets are dropped and the processes are frozen, so they stop processing incoming packets. During the first sync, the source container is still in use, so some files may change and the target container may hold outdated files. This phase creates the container and its associated processes on the target server, extracting the information from the dump file.
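The freeze/dump/restore sequence with a two-pass file sync can be sketched as follows; the container, dump file, and sync operation are in-memory stand-ins, not a real container runtime's API:

```python
# Simulated freeze/dump/restore of a container with a two-pass file
# sync: the first sync runs while the container is live (files may
# change under it), the second runs after the freeze to catch up.
source_files = {"app.log": "v1", "data.db": "v1"}
target_files = {}

def sync():
    target_files.update(source_files)   # stand-in for an rsync pass

sync()                                  # first pass: container still live
source_files["app.log"] = "v2"          # a file changes during the sync

# Freeze: drop incoming packets, stop the processes, dump their state.
dump_file = dict(source_files)          # simulated process/memory dump
sync()                                  # second pass: nothing can change now

# Restore: recreate the container and its processes from the dump file.
restored = dict(dump_file)
assert restored == target_files == source_files
print("container restored on target; files consistent")
```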
CONCLUSIONS
With the development of virtualization and its adoption as a means to build highly available systems, migration mechanisms have improved greatly. Process migration, which was born along with distributed systems, was the precursor of VM migration.