Tag: juno

Experimental evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces

Up to now, we have published several blog posts focusing on live migration performance in our experimental OpenStack deployment – a performance analysis of post-copy live migration in OpenStack and an analysis of the performance of live migration in OpenStack. While we analyzed the live migration behaviour under the different algorithms (read our previous blog posts on pre-copy and post-copy (hybrid) live migration performance), we observed that both algorithms can easily saturate our 1Gb/s infrastructure – and that is not fast enough, not for us! Fortunately, our friends Robayet Nasim and Prof. Andreas Kassler from Karlstad University, Sweden also like their live migrations as fast and reliable as possible, so they kindly offered their 10Gb/s infrastructure for further performance analysis. Since this topic is very much in line with the objectives of the COST ACROSS action, in which both we (ICCLab!) and Karlstad participate, the analysis was carried out during a 2-week short term scientific mission (STSM) within that action.
This blog post presents a short wrap-up of the results, focusing on the evaluation of post-copy live migration in OpenStack over 10Gb/s interfaces and comparing it with the performance of the 1Gb/s setup. The full STSM report can be found here. Continue reading

Tunneled Hybrid Live Migration

In our previous blog posts we mostly focused on virtual machine live migration performance, comparing pre-copy, post-copy and hybrid approaches in an OpenStack context, rather than exploring other live migration features. Libvirt together with the QEMU hypervisor provides many migration configuration options. One of these options is the possibility to use tunneled live migration. Recently we found that post-copy migration is not supported by the current libvirt tunneling implementation. Consequently, in order to make the post-copy patch more production ready, we decided to support the community and add post-copy tunneled live migration to libvirt ourselves. This blog post tells the whole story of immersing ourselves in the open source community and hacking on an established open source project, since we believe this experience can be generalized. Continue reading
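For context, libvirt requests tunneled migration by combining the VIR_MIGRATE_TUNNELLED flag with VIR_MIGRATE_PEER2PEER, which routes the migration stream over the libvirtd connection instead of a separate QEMU socket. The sketch below shows roughly how this looks through the libvirt Python bindings; the connection URIs and domain name are placeholders, and it does not include a post-copy flag, which at the time of writing was only available as a patch.

```python
# Minimal sketch (placeholder URIs and domain name): requesting a
# tunneled live migration via the libvirt Python bindings.
import libvirt

conn = libvirt.open('qemu:///system')          # source hypervisor
dom = conn.lookupByName('instance-00000001')   # domain to migrate

# VIR_MIGRATE_TUNNELLED sends the migration data over the libvirtd RPC
# channel instead of a separate QEMU TCP socket; it requires
# VIR_MIGRATE_PEER2PEER.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER
         | libvirt.VIR_MIGRATE_TUNNELLED)

# With PEER2PEER set, the URI is the libvirt connection URI of the
# destination host.
dom.migrateToURI('qemu+ssh://destination-host/system', flags, None, 0)
conn.close()
```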

Performance analysis of “post-copy” live migration in OpenStack

Previously we described how to set up post-copy live migration in OpenStack Icehouse (and it should not be a problem to set it up the same way in Juno). Naturally, we were curious to see how it performs. In this blog post we focus on a performance analysis of post-copy live migration in OpenStack Icehouse using QEMU/KVM with libvirt. Continue reading

Setting up Live Migration in OpenStack Icehouse [Juno]

[Update 8.12.2014] Since OpenStack’s Juno release hasn’t introduced any changes regarding live migration, Juno users should be able to follow this tutorial just as Icehouse users can. If you experience any issues, let us know. The same setup can also be used with newer versions of QEMU and libvirt; currently we are using QEMU 2.1.5 with libvirt 1.2.11.

The Green IT theme here at ICCLab is working on monitoring and reducing datacenter energy consumption by leveraging OpenStack’s live migration feature. We had already experimented a little with live migration in the Havana release (mostly without luck), but since live migration is touted as one of the new stable features of the Icehouse release, we decided to investigate how it has evolved. This blog post, largely based on the official OpenStack documentation, provides a step-by-step walkthrough of how to set up and perform virtual machine live migration with servers running the OpenStack Icehouse release and the KVM/QEMU hypervisor with libvirt.

Virtual machine (VM) live migration is a process where a VM instance, comprising its state, memory and emulated devices, is moved from one hypervisor to another with ideally no downtime. It can come in handy in many situations, such as basic system maintenance, VM consolidation, and more complex load management systems designed to reduce data center energy consumption. Continue reading
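As a taste of what the walkthrough leads to, the sketch below shows one way to trigger a live migration of a running instance through the Compute API using python-novaclient, roughly as the client looked around the Icehouse/Juno releases. The credentials, endpoint, instance and host names are placeholders, not values from the original post.

```python
# Minimal sketch (placeholder credentials, endpoint and names): asking
# nova to live-migrate a running instance with python-novaclient.
from novaclient import client as nova_client

# Authenticate against Keystone; all values below are placeholders.
nova = nova_client.Client('2', 'admin', 'ADMIN_PASS', 'admin',
                          'http://controller:5000/v2.0')

server = nova.servers.find(name='test-vm')

# Live-migrate the instance to a chosen compute host.
# block_migration=False assumes shared storage between source and
# destination; set it to True for block (local-disk) migration.
nova.servers.live_migrate(server,
                          host='compute-02',
                          block_migration=False,
                          disk_over_commit=False)
```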