Post-copy live migration in QEMU

Hurray! We have finally deployed QEMU 2.1.5 with post-copy live migration support on our servers! But before we get to that, a little bit of context… in our previous blog posts we focused on performance analysis of pre-copy live migration in OpenStack. So far all of our experiments were done using QEMU version 1.2 with KVM acceleration. As we were keen to do some experimentation with post-copy live migration, we had to upgrade to the very new QEMU 2.1.5, which provides post-copy live migration support in one of its branches. (More generally, QEMU has seen significant enhancements since version 1.2, released in November 2012, so we also expected better pre-copy performance and reliability.) This blog post focuses on our first hands-on experience with post-copy live migration in QEMU.

The main difference between post-copy and pre-copy migration is the way in which the VM’s RAM is transferred. Pre-copy spawns a new instance and copies all the memory pages to it (re-sending any pages dirtied along the way) before the old one is turned off, whereas post-copy spawns and runs the new instance immediately and fetches the missing memory pages on demand.

In our earlier experiments, we observed that the pre-copy approach fails to live migrate instances with intensive memory activity. More precisely, the rate at which the VM changes its memory MUST stay below the throughput of the network interface used for the migration, otherwise the transfer never converges. Post-copy live migration, in contrast, terminates in finite time irrespective of the VM’s memory load, thus providing a reliable tool to manage server load between compute hosts in any circumstances.
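To put that convergence condition in concrete terms (our own back-of-the-envelope notation, nothing QEMU reports directly): pre-copy can only finish if

dirty_rate [pages/s] × page_size [bytes] < migration_bandwidth [bytes/s]

For example, a VM dirtying 100,000 4 KiB pages per second produces roughly 410 MB/s of memory to re-transfer, well above the ~125 MB/s a 1 Gbit/s migration link can carry, so pre-copy would never converge on such a link.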

Note that in order to use post-copy, the kernel needs to support the userfaultfd and remap_anon_pages syscalls, which make it possible to handle memory page faults in user space. In the post-copy live migration context this works as follows: when the newly spawned VM touches a memory page that has not yet been transferred, the resulting page fault is handled in user space and the missing page is fetched from the original VM’s memory over the network. These syscalls haven’t been merged into the official kernel release yet, but you can use the available patch by Andrea Arcangeli. After quite some effort we managed to get Ubuntu 12.10 running on a customized 3.18.0-rc3+ kernel with this patch applied.
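If you want a quick sanity check that the patched kernel is actually running (a rough heuristic of ours, not an official test), you can look for the new syscall in the kernel’s symbol table:

$ uname -r
3.18.0-rc3+
$ grep -i userfaultfd /proc/kallsyms

If the grep prints nothing, the running kernel does not expose the syscall and post-copy migration will not initialize.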

Using post-copy live migration in QEMU

So, how can it be used? You can either use the QEMU monitor directly by pressing Alt+2 in the instance’s VNC session (Alt+1 switches back), or, if you are using libvirt, run virsh qemu-monitor-command [domain] --hmp '[command]' instead of using the QEMU monitor directly. Just substitute [domain] with the domain you want to access, as listed by virsh list.
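For example, to run the HMP command info migrate against a hypothetical domain named demo-vm (substitute your own domain name):

$ virsh qemu-monitor-command demo-vm --hmp 'info migrate'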

1. Enable the post-copy capability for the domain (instance) you want to migrate BEFORE initiating the migration.

QEMU < 2.5 (experimental):

(qemu) migrate_set_capability x-postcopy-ram on

QEMU >= 2.5:

(qemu) migrate_set_capability postcopy-ram on

Note that you can check the status of all migration capabilities with:

(qemu) info migrate_capabilities

2. Initiate the live migration as usual (the standard pre-copy mechanism is used at this point). Note that, unlike the other steps, this is a shell command run on the source host, not a monitor command:

$ virsh migrate --live [domain] qemu+ssh://[host-ip]/system

3. Once the live migration is running, you can switch to post-copy (not before):

(qemu) migrate_start_postcopy
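To verify that the switch actually took effect, you can inspect the migration state from the monitor; builds with post-copy support should report the migration status as postcopy-active during the post-copy phase:

(qemu) info migrate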

Note that, somewhat confusingly, the post-copy mechanism is not used by default even after you enable the post-copy capability. Enabling the capability only verifies that the destination system provides all the calls and functions needed to support post-copy mode; if it does not, the live migration is not initialized. The actual switch to post-copy has to be made after the migration has started. As such, the approach is neither pure pre-copy nor pure post-copy, but a hybrid combining aspects of both.

The main reason for this hybrid approach is that in many use cases the VM can be migrated successfully with the pre-copy technique alone, without the higher risk of losing the instance if a network failure occurs. The switch to post-copy can be made at any time, once you conclude that the migration is taking too long and probably won’t converge in reasonable time (or at all); it thus serves as a fallback for scenarios where a VM is stuck in the migrating state due to high memory activity. The main drawback of the post-copy mechanism is that any network interruption during the post-copy phase results in the loss of the VM.
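Putting the whole hybrid workflow together, here is a minimal sketch driven entirely through libvirt from the source host. The domain name demo-vm, the destination dest-host and the 30-second pre-copy grace period are placeholders of ours, and it assumes your libvirt version allows monitor commands while the migration job is running (remember to drop the x- prefix on QEMU >= 2.5):

$ virsh qemu-monitor-command demo-vm --hmp 'migrate_set_capability x-postcopy-ram on'
$ virsh migrate --live demo-vm qemu+ssh://dest-host/system &
$ sleep 30    # give pre-copy a chance to converge on its own first
$ virsh qemu-monitor-command demo-vm --hmp 'migrate_start_postcopy'
$ wait        # block until the background migration job finishes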

Since this is a very new feature, a lot of work is being done to support post-copy live migration in libvirt’s API, and even more work remains to support this functionality in OpenStack, particularly within Nova.

The combination of these more powerful live migration solutions offers the potential for much more fluid and flexible load management in data centers, although their capabilities and performance are still not well understood. Our next work will therefore focus on getting post-copy live migration to work in an OpenStack context.

[UPDATE 23.5.2016]

As pointed out by Md Haris Iqbal in the comment section (thanks!), the postcopy capability is no longer experimental in QEMU version 2.5 and higher. Therefore, to enable it in such cases simply use:

(qemu) migrate_set_capability postcopy-ram on

(without the “x-” prefix)

