Tag: live migration (page 1 of 2)

Experimental evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces

Up to now, we have published several blog posts focusing on live migration performance in our experimental OpenStack deployment – a performance analysis of post-copy live migration in OpenStack and an analysis of the performance of live migration in OpenStack. While analyzing the live migration behaviour under different live migration algorithms (see our previous blog posts on pre-copy and post-copy (hybrid) live migration performance), we observed that both algorithms can easily saturate our 1Gb/s infrastructure, and that is not fast enough, not for us! Fortunately, our friends Robayet Nasim and Prof. Andreas Kassler from Karlstad University, Sweden also like their live migrations as fast and reliable as possible, so they kindly offered their 10Gb/s infrastructure for further performance analysis. Since this topic is very much in line with the objectives of the COST ACROSS action, in which both we (ICCLab!) and Karlstad participate, this analysis was carried out during a 2-week short term scientific mission (STSM) within the action.
This blog post presents a short wrap-up of the results obtained, focusing on the evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces and comparing it with the performance of the 1Gb/s setup. The full STSM report can be found here. Continue reading

Tunneled Hybrid Live Migration

In our previous blog posts we mostly focused on virtual machine live migration performance, comparing the pre-copy, post-copy and hybrid approaches in an OpenStack context, rather than exploring other live migration features. Libvirt together with the QEMU hypervisor provides many migration configuration options; one of them is the ability to use tunneled live migration. Recently we found that post-copy migration is not supported by the current libvirt tunneling implementation. Consequently, in order to make the post-copy patch more production ready, we decided to support the community and add post-copy support to libvirt's tunneled live migration ourselves. This blog post describes the whole story of immersing ourselves in the open source community and hacking on an established open source project, since we believe this experience can be generalized. Continue reading
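As a rough illustration of what tunneling means at the libvirt level, the sketch below drives a tunneled live migration through the libvirt Python bindings. The connection URIs and the domain name are placeholders, and the post-copy flag itself is omitted here since it only exists in our patched libvirt; treat this as a minimal sketch, not a drop-in script.

    import libvirt

    # Source and destination libvirtd connections (placeholder hostnames).
    src = libvirt.open('qemu:///system')
    dst = libvirt.open('qemu+ssh://destination-host/system')
    dom = src.lookupByName('instance-00000042')

    # VIR_MIGRATE_TUNNELLED pipes the migration stream through the
    # libvirtd-to-libvirtd connection instead of a separate QEMU-to-QEMU
    # channel; tunneled migration is used together with peer-to-peer mode.
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PEER2PEER
             | libvirt.VIR_MIGRATE_TUNNELLED)

    dom.migrate(dst, flags, None, None, 0)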

Performance analysis of “post-copy” live migration in Openstack

Previously we described how to set up post-copy live migration in OpenStack Icehouse (and it should not be a problem to set it up in the same way in Juno). Naturally, we were curious to see how it performs. In this blog post we focus on a performance analysis of post-copy live migration in OpenStack Icehouse using QEMU/KVM with libvirt. Continue reading

Setting up post-copy live migration in OpenStack

We have shed blood, sweat and tears trying to get post-copy live migration working in OpenStack over the last month: this blog post explains all the necessary steps to make it work and will hopefully save future post-copy pioneers some pain. In our previous blog post we focused on setting it up in QEMU. This time we consider the bigger picture, spanning all levels of OpenStack, from system kernel requirements to setting the right flags in the Nova configuration file, for OpenStack environments running QEMU/KVM with libvirt. Continue reading
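To give a flavour of the Nova side, the snippet below shows the kind of nova.conf settings involved on an Icehouse compute node. The exact flag list depends on the deployment, and any post-copy related flag only becomes meaningful with our patched libvirt/QEMU stack, so read this as an illustrative sketch rather than a copy-paste recipe.

    [libvirt]
    virt_type = kvm
    live_migration_uri = qemu+tcp://%s/system
    # Flags handed to libvirt for live migration; a post-copy capable stack
    # additionally needs the corresponding post-copy flag from the patched libvirt.
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE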

Post-copy live migration in QEMU

Hurray! We have finally deployed QEMU 2.1.5 with post-copy live migration support on our servers! But before we get to that, a little bit of context… in our previous blog posts we focused on the performance analysis of pre-copy live migration in OpenStack. So far all of our experiments were done using QEMU version 1.2 with KVM acceleration. As we were keen to experiment with post-copy live migration, we had to upgrade to the very new QEMU 2.1.5, which provides post-copy live migration support in one of its branches. (More generally, there have been significant enhancements in QEMU since version 1.2, released in November 2012, so we expected better performance and reliability in pre-copy as well.) This blog post focuses on our first hands-on experience with post-copy live migration in QEMU.
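As a taster of what triggering post-copy looks like at the QEMU level, here is a minimal sketch that talks QMP to a running QEMU instance. The socket path and destination URI are placeholders, error and event handling is stripped down, and the capability/command names follow current upstream QEMU – the experimental branch we used may still carry experimental "x-" prefixes.

    import json
    import socket

    s = socket.socket(socket.AF_UNIX)
    s.connect('/var/run/qemu-vm1.qmp')       # placeholder QMP socket path
    reader = s.makefile()

    def qmp(command, arguments=None):
        msg = {'execute': command}
        if arguments is not None:
            msg['arguments'] = arguments
        s.sendall(json.dumps(msg).encode() + b'\n')
        return json.loads(reader.readline())  # simplified: ignores async events

    reader.readline()                         # consume the QMP greeting
    qmp('qmp_capabilities')                   # enter command mode

    # Enable the post-copy capability, kick off a normal (pre-copy) migration,
    # then switch the remaining pages over to post-copy.
    qmp('migrate-set-capabilities',
        {'capabilities': [{'capability': 'postcopy-ram', 'state': True}]})
    qmp('migrate', {'uri': 'tcp:destination-host:4444'})
    qmp('migrate-start-postcopy')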

Continue reading

ICCLab vBrownBag Tech Talks @ OpenStack Summit

ICCLab had the privilege to talk about our latest research activities in “Rating, Charging & Billing” and “Performance analysis of live migrations in OpenStack” at the #vBrownBag Tech Talks held as part of the OpenStack Summit in Paris. Here we provide a short summary of each talk and include the captured video for your viewing pleasure!

Continue reading

The impact of ephemeral VM disk usage on the performance of Live Migration in Openstack

In our previous work we presented the performance of live migration in OpenStack Icehouse using various VM flavors and memory loads, and also examined how it performs under network and CPU load (see our previous posts – performance of live migration, performance of block live migration, and performance of both under varying CPU and network load). One factor which was not considered in our earlier work is the impact of the VM ephemeral disk size on the performance of live migration. That is the focus of this post. Continue reading
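For readers unfamiliar with ephemeral disks: their size is part of the flavor definition, so test flavors differing only in ephemeral disk size could be created along the lines of the sketch below (python-novaclient; credentials, names and sizes are placeholders, not the exact flavors used in the post).

    from novaclient import client

    # Placeholder credentials / endpoint.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    # Two flavors identical except for the ephemeral disk size (in GB).
    nova.flavors.create('lm-test-eph0',  ram=2048, vcpus=1, disk=20, ephemeral=0)
    nova.flavors.create('lm-test-eph20', ram=2048, vcpus=1, disk=20, ephemeral=20)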

Performance of Live Migration in Openstack under CPU and network load

Previously, we analyzed the performance of virtual machine (VM) live migration in different scenarios under OpenStack Icehouse. Until now, all our experiments were performed on essentially unloaded servers – clearly, this limits how widely the results apply. Here, we analyze how adding load to the physical hosts and the network impacts the behaviour of both block live migration (BLM) and live migration (LM). (Note that the main difference is that BLM migrates the VM disk over the network, while LM uses shared storage between the source and destination hosts, so the disk is not migrated at all.) Continue reading
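To make the distinction concrete, this is roughly how the two variants are requested through python-novaclient (credentials, host and VM names are placeholders): with block_migration=True the disk is streamed to the destination, otherwise Nova expects the instance directory to live on shared storage.

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')   # placeholder credentials
    server = nova.servers.find(name='test-vm')

    # Live migration (LM): shared storage, only RAM and device state move.
    nova.servers.live_migrate(server, 'compute-2',
                              block_migration=False, disk_over_commit=False)

    # Block live migration (BLM): the VM disk is copied over the network too.
    # nova.servers.live_migrate(server, 'compute-2',
    #                           block_migration=True, disk_over_commit=False)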

An analysis of the performance of live migration in Openstack

We continue our recent work on the performance of live migration in OpenStack Icehouse. Our previous results focused on block live migration in OpenStack, without shared storage configured between compute nodes. In this post we focus on the performance of live migration with a shared file system configured, compare it with block live migration, and try to determine which scenarios are more suitable for each approach. Continue reading

An analysis of the performance of block live migration in Openstack

Since our servers have been set up for live migration with OpenStack Icehouse, we wondered how live migration would perform. We measured the duration of the migration process, the VM downtime and the amount of data transferred over the network during a live migration. All tests were performed across 5 different VM flavors to examine the impact of the flavor. Another point we were curious about is how a higher memory load in the VM impacts migration performance. Here, we present the results of our experiments, which show how live migration works in these different scenarios.
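For context on the downtime numbers, a crude way to approximate VM downtime from the outside is to probe the guest continuously and sum the unanswered intervals, along the lines of the sketch below (the guest address is a placeholder, and this is an illustration rather than the exact measurement setup used for the results in the post).

    import subprocess
    import time

    GUEST_IP = '10.0.0.5'          # placeholder address of the migrating VM
    downtime = 0.0
    try:
        while True:
            start = time.time()
            reachable = subprocess.call(
                ['ping', '-c', '1', '-W', '1', GUEST_IP],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
            if not reachable:
                downtime += time.time() - start
            time.sleep(0.05)
    except KeyboardInterrupt:
        print('approximate downtime: %.2f s' % downtime)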

Continue reading
