Setting up post-copy live migration in OpenStack

We have shed blood, sweat and tears trying to get post-copy live migration working in OpenStack over the last month. This blog post explains all the necessary steps to make it work, and will hopefully save future post-copy pioneers some pain. Our previous blog post focused on setting it up in QEMU; this time we consider the bigger picture, spanning all levels of the stack – from kernel requirements to the right flags in the Nova configuration file – in OpenStack environments running QEMU/KVM with libvirt.

Note that post-copy live migration is a very recent feature that has not yet been officially released in QEMU, libvirt or OpenStack; that means that working with post-copy migration requires patching and compiling the most recent Linux kernel as well as QEMU and libvirt.

Starting at the bottom – the Linux kernel

Currently, QEMU’s implementation of post-copy live migration is based on the Linux “userfault” and “remap_anon_pages” syscalls by Andrea Arcangeli. These system calls support transferring RAM between the source and destination VMs during post-copy live migration. (If you are interested in a more detailed description of these calls, see the official description of the page fault handling in user space feature.)

The easiest way to get the source code for a kernel which includes this patch is using git.

git clone git:// -b userfault
git fetch
git checkout -f origin/userfault
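If the clone-a-branch workflow is new to you, the commands above can be sketched against a throwaway local repository. The remote URL is elided in the commands above, so in this sketch a local path named upstream stands in for it:

```shell
# Sketch: cloning a specific branch. 'upstream' is a stand-in local repo,
# since the real remote URL is not shown above.
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C upstream branch userfault
# -b checks out the named branch straight after cloning:
git clone -q -b userfault upstream kernel-src
git -C kernel-src rev-parse --abbrev-ref HEAD   # prints: userfault
```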

Once you’ve downloaded the patched source code, you need to configure, compile and install the new kernel. Care needs to be taken here – building kernels incorrectly can have bad side-effects in some cases. The crucial configuration parameter, USERFAULTFD, needs to be enabled in the kernel configuration file. It can be found in the following location in menuconfig: General setup -> Configure standard kernel features (expert users) -> Enable madvise/fadvise syscalls. Generally, it is recommended to use your current kernel configuration, usually found in /boot/config-$(uname -r), as a reference file for the other settings.
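After configuring, it is worth checking that the option actually landed in the .config file. The snippet below is a minimal sketch that simulates the check on a throwaway file rather than a real kernel tree; on a real build you would start from a copy of /boot/config-$(uname -r) and use menuconfig:

```shell
# Hypothetical sketch: enabling CONFIG_USERFAULTFD in a kernel .config.
# A minimal stand-in config file is used here instead of a real kernel tree.
cat > .config <<'EOF'
CONFIG_EXPERT=y
# CONFIG_USERFAULTFD is not set
EOF
# Flip the option on, as menuconfig would:
sed -i 's/^# CONFIG_USERFAULTFD is not set$/CONFIG_USERFAULTFD=y/' .config
grep CONFIG_USERFAULTFD .config   # should print CONFIG_USERFAULTFD=y
```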

Also, building the kernel does require having a fairly full set of build tools on the machine (obviously), and these are often not present on cloud hosts. Consequently, it may be necessary to install packages such as make, gcc, ncurses and others. The kernel can then be built using the standard kernel build process:

make menuconfig
make
make modules_install
make install

If all dependencies are met, the kernel is configured properly and the build succeeds, you should be able to boot into a new kernel supporting post-copy live migration.

Climbing up the stack – QEMU

Currently, QEMU support for post-copy live migration is being developed in the wp3-postcopy branch of a github repo as part of the ORBIT project. Again, the easiest way to get the current source code is to use git.

git clone -b wp3-postcopy

In the downloaded QEMU source code folder, run the following commands.

./configure
make
make install

The build might not work out of the box – it may be necessary to install some extra libraries; we had to install glib2.0-dev, libfdt-dev and libpixman-1-dev, for example.

Instead of the plain ./configure command you can use ./configure --target-list=x86_64-softmmu to compile QEMU just for the x86_64 architecture and save some time during the build (assuming you’re on an x86_64 machine).

The binaries of the new version of QEMU are installed in the /usr/local/bin folder by default. At this point your QEMU installation should fully support post-copy live migration via the QEMU monitor, using the approaches described in our previous blog post. To ensure that your brand new QEMU is the default, check the version simply by running qemu-system-x86_64 --version.
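Whether the /usr/local/bin binary wins depends on where that directory sits in your PATH. The sketch below simulates the check with a stand-in script in a temporary directory (the version string is made up), purely to illustrate PATH precedence:

```shell
# Simulation: a stand-in qemu-system-x86_64 in a temp dir represents the
# freshly installed binary; prepending its directory to PATH makes it the
# one that gets resolved first.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "QEMU emulator version 2.2.x (postcopy build)"\n' \
    > "$bindir/qemu-system-x86_64"
chmod +x "$bindir/qemu-system-x86_64"
PATH="$bindir:$PATH" qemu-system-x86_64 --version
```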

The air is getting thinner – Libvirt

From an OpenStack point of view, the most important requirement is post-copy support in libvirt, since it provides the interface between Nova and QEMU. The post-copy patch is being developed in the wp3-postcopy branch of libvirt by Cristian Klein. This patch adds the ability to change QEMU’s domain migration capability and run post-copy live migration by supporting specific post-copy live migration flags. The following new flags relating to post-copy live migration were introduced:

  • VIR_MIGRATE_ENABLE_POSTCOPY, which turns on the x-postcopy-ram flag in QEMU’s domain migration capability (but does not initiate the post-copy live migration itself), and
  • VIR_MIGRATE_POSTCOPY_AFTER_PRECOPY, which specifies that post-copy migration should be started after the first pass of pre-copy live migration.

Note that both of these flags need to be specified to enable post-copy live migration.

Use git to get the post-copy branch of libvirt…

git clone -b wp3-postcopy

After installing the necessary dependencies – in our case libtool, autoconf, autopoint, python-dev, libxml2-utils, xsltproc, libyajl-dev, libxml2-dev, libdevmapper-dev, libpciaccess-dev, libnl-dev, w3c-dtd-xhtml and gettext – the compilation and installation of libvirt should be just a question of:

./ --system
make
make install

And don’t forget (as we did) to recompile and reinstall libvirt-python, libvirt’s Python API bindings, to be able to use the new features in OpenStack.

git clone git://
python setup.py build
python setup.py install

Reaching the clouds – OpenStack

Once all the previous steps are complete, your OpenStack environment is ready to use post-copy live migration. Just add the VIR_MIGRATE_ENABLE_POSTCOPY and VIR_MIGRATE_POSTCOPY_AFTER_PRECOPY flags to the live_migration_flag parameter in the nova.conf file. Restart the nova-compute service and you are set up to post-copy live migrate your OpenStack instances.
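For illustration, the resulting nova.conf entry might look like the following sketch. The first three flags are the common defaults for this option at the time of writing – an assumption on our part, so adjust the list to whatever your deployment already uses:

```ini
[libvirt]
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_ENABLE_POSTCOPY,VIR_MIGRATE_POSTCOPY_AFTER_PRECOPY
```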

As a next step, we will explore post-copy live migration performance in OpenStack and share our hands-on experience in upcoming blog posts.

Acknowledgement: Thanks to Cristian Klein for help with getting this working.


