Tag: icehouse

Performance analysis of “post-copy” live migration in OpenStack

Previously we described how to set up post-copy live migration in OpenStack Icehouse (and it should not be a problem to set it up in the same way in Juno). Naturally, we were curious to see how it performs. In this blog post we focus on the performance analysis of post-copy live migration in OpenStack Icehouse using QEMU/KVM with libvirt. Continue reading
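To give a flavour of what is being measured: post-copy migration runs in two phases – it starts like a normal pre-copy transfer and is then explicitly flipped into post-copy mode, after which the VM runs on the destination and pulls missing memory pages on demand. Below is a minimal, illustrative libvirt-python sketch of that flow using the post-copy API that later landed upstream in libvirt (1.3.3+); at the time of these experiments patched QEMU/libvirt builds were needed, and all host and domain names here are placeholders.

```python
import threading
import time
import libvirt

SRC_URI = "qemu+ssh://source-host/system"        # placeholder
DST_URI = "qemu+ssh://destination-host/system"   # placeholder

src = libvirt.open(SRC_URI)
dom = src.lookupByName("instance-00000001")      # placeholder domain

# VIR_MIGRATE_POSTCOPY only *allows* post-copy; the migration still
# begins as a regular pre-copy transfer.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER
         | libvirt.VIR_MIGRATE_POSTCOPY)

# Phase 1: start the (blocking) migration in a worker thread.
worker = threading.Thread(target=lambda: dom.migrateToURI3(DST_URI, {}, flags))
worker.start()

# Phase 2: once the transfer is under way, switch to post-copy mode.
time.sleep(5)  # crude; a real harness would watch migration events instead
dom.migrateStartPostCopy(0)
worker.join()
```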

Setting up post-copy live migration in OpenStack

We have shed blood, sweat and tears trying to get post-copy live migration working in OpenStack over the last month: this blog post explains all the necessary steps to make it work and will hopefully save future post-copy pioneers some pain. In our previous blog post we focused on setting it up in QEMU. This time we consider the bigger picture, spanning all the levels of OpenStack from system kernel requirements to setting the right flags in the Nova configuration file, for OpenStack environments running QEMU/KVM with libvirt. Continue reading
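As a taste of what the Nova level boils down to: the libvirt driver reads a comma-separated list of flag names from the live_migration_flag option in nova.conf and ORs the corresponding libvirt constants into the bitmask it passes to the migration call. A rough, illustrative Python sketch of that mapping – the exact flag list here is an assumption, with VIR_MIGRATE_POSTCOPY being the flag our setup adds:

```python
import libvirt

# Illustrative value of nova.conf's live_migration_flag option; the
# post-copy flag is the addition this post is about.
live_migration_flag = "VIR_MIGRATE_LIVE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_POSTCOPY"

# Nova's libvirt driver resolves each name to the libvirt constant and
# ORs them into a single bitmask.
flags = 0
for name in live_migration_flag.split(","):
    flags |= getattr(libvirt, name.strip())

print(hex(flags))  # the bitmask handed to libvirt's migrate call
```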

ICCLab vBrownBag Tech Talks @ OpenStack Summit

ICCLab had the privilege to present our latest research activities on “Rating, Charging & Billing” and “Performance analysis of live migrations in OpenStack” at the #vBrownBag Tech Talks held as part of the OpenStack Summit in Paris. Here we provide a short summary of each talk and include the captured video for your viewing pleasure!

Continue reading

OpenStack Summit – Deep Dive into Day 2

CERN OpenStack (super) User Story
CERN is looking for answers to the fundamental questions concerning the creation of the Universe and, true to its nature, it’s a big data challenge. With the historical run of the LHC in 2013, their archive now contains ~100 PB (with an additional 27 PB/year) spread across ~11,000 servers with ~75,000 disk drives and ~45,000 tapes, and with the reopening of the LHC they expect a significant increase of data in 2015. CERN recently opened a new data center in Budapest, connected to the Geneva headquarters by a T-Systems 100GbE line.
CERN currently runs four OpenStack Icehouse clouds and expects them to run 150,000 cores in total by Q1 2015. All of CERN’s non-specific code is upstream and available for anyone who would like to build on top of it in the future.
CERN puts great emphasis on collaboration. The Openlab project is a public-private partnership between CERN and major ICT companies (e.g. Rackspace) whose goal is to accelerate the development of cutting-edge cloud solutions.
OpenShift on OpenStack
RedHat and Cisco gave a demo on deploying OpenShift on OpenStack using Heat, Docker & Kubernetes. OpenShift is a PaaS offering from RedHat with both enterprise and open source versions. The rationale for deploying OpenShift on OpenStack is to maintain a high degree of flexibility and enable faster deployment of applications. In the demo, Heat was used for orchestration. Docker’s pull and push model is used to fetch a new image or save a modified version which can be pulled again later; alongside tagging of images, diff operations can also be performed. Docker containers are also used as daemons. However, Docker cannot see beyond a single host and doesn’t have the capacity to manage mass configuration and deployment. That’s where Kubernetes comes into the picture: its pods resemble Docker’s containers, and etcd is used to configure the master, which passes the configuration along to the slaves, thereby achieving mass configuration. The link to the presentation can be found here.
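For readers unfamiliar with the pod concept, here is a minimal, illustrative pod created with the modern Kubernetes Python client – not the tooling used in the 2014 demo – where the image and all names are placeholders:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (placeholder environment).
config.load_kube_config()

# A pod is the smallest deployable unit: one or more containers that
# share network and storage, scheduled together on a single host.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "demo"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="web",
            image="nginx:latest",  # pulled via Docker's pull mechanism
            ports=[client.V1ContainerPort(container_port=80)],
        )
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```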

The impact of ephemeral VM disk usage on the performance of Live Migration in OpenStack

In our previous work we presented the performance of live migration in OpenStack Icehouse across various VM flavors and memory loads, and examined how it performs in network- and CPU-loaded environments (see our previous posts: performance of live migration, performance of block live migration, and performance of both under varying CPU and network load). One factor not considered in our earlier work is the impact of the VM ephemeral disk size on the performance of live migration. That is the focus of this post. Continue reading
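For context on the experimental knob: in OpenStack the ephemeral disk size is a property of the flavor, so sweeping it means creating flavors that differ only in that one field. A minimal sketch with python-novaclient, where credentials, endpoint and sizes are placeholders:

```python
from novaclient import client

# Placeholder credentials/endpoint; the legacy ("2") constructor shown
# here matches the Icehouse-era python-novaclient.
nova = client.Client("2", "admin", "secret", "demo",
                     "http://controller:5000/v2.0")

# A custom flavor with a 40 GB ephemeral disk next to the 20 GB root
# disk -- the ephemeral size is the variable the experiments sweep.
flavor = nova.flavors.create(
    name="m1.small.eph40",
    ram=2048,       # MB
    vcpus=1,
    disk=20,        # root disk, GB
    ephemeral=40,   # ephemeral disk, GB
)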

Performance of Live Migration in OpenStack under CPU and network load

Previously, we analyzed the performance of virtual machine (VM) live migration in different scenarios under OpenStack Icehouse. Until now, all our experiments were performed on essentially unloaded servers – clearly, this means that the results are not so widely applicable. Here, we analyze how adding load to the physical hosts and the network impacts the behaviour of both block live migration (BLM) and live migration (LM). (Note that the main difference is that BLM migrates the VM disk via the network, while LM relies on shared storage between the source and destination hosts so the disk is not migrated at all.) Continue reading
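The BLM/LM distinction shows up directly in how a migration is triggered: at the API level the only difference is the block_migration flag. A minimal, illustrative python-novaclient sketch, where credentials, server and host names are placeholders (in practice you would of course run one variant at a time):

```python
from novaclient import client

nova = client.Client("2", "admin", "secret", "demo",
                     "http://controller:5000/v2.0")  # placeholder credentials
server = nova.servers.find(name="test-vm")           # placeholder VM

# Block live migration: the VM disk is copied over the network too.
server.live_migrate(host="compute-2", block_migration=True,
                    disk_over_commit=False)

# Plain live migration: the disk already sits on shared storage,
# so only memory and device state move.
server.live_migrate(host="compute-2", block_migration=False,
                    disk_over_commit=False)
```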

An analysis of the performance of live migration in OpenStack

We continue our recent work on analyzing the performance of live migration in OpenStack Icehouse. Our previous results focused on block live migration in OpenStack, without shared storage configured between computing nodes. In this post we focus on the performance of live migration with a shared file system configured, compare it with block live migration and try to determine which scenarios are more suitable for each approach. Continue reading

An analysis of the performance of block live migration in OpenStack

Since our servers have been set up for live migration with OpenStack Icehouse, we wondered how live migration would perform. We measured the duration of the migration process, the VM downtime and the amount of data transferred via Ethernet during a live migration. All tests were performed across 5 different VM flavors to examine the impact of the flavor. Another point we were curious about is how a higher memory load on the VM impacts migration performance. Here, we present the results of our experiments, which show how live migration works in these different scenarios.

Continue reading
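As an aside on methodology, downtime is commonly approximated by pinging the VM in a tight loop while the migration runs and taking the longest stretch of lost replies. The sketch below illustrates that idea (it is not necessarily the exact tooling behind our numbers; the IP and timings are placeholders):

```python
import subprocess
import time

def ping_once(ip, timeout_s=1):
    """Send a single ICMP echo request; True if a reply came back."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def measure_downtime(ip, duration_s=120, interval_s=0.1):
    """Ping in a loop for duration_s; the longest run of consecutive
    lost replies approximates the VM downtime during migration."""
    longest, gap_start = 0.0, None
    deadline = time.time() + duration_s
    while time.time() < deadline:
        if ping_once(ip):
            if gap_start is not None:
                longest = max(longest, time.time() - gap_start)
                gap_start = None
        elif gap_start is None:
            gap_start = time.time()
        time.sleep(interval_s)
    return longest

if __name__ == "__main__":
    print("approximate downtime: %.2f s" % measure_downtime("10.0.0.5"))
```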

Setting up Live Migration in OpenStack Icehouse [Juno]

[Update 8.12.2014] Since OpenStack’s Juno release hasn’t introduced any changes regarding live migration, Juno users should be able to follow this tutorial just as well as Icehouse users. If you experience any issues, let us know. The same setup can be used for newer versions of QEMU and libvirt as well; currently we are using QEMU 2.1.5 with libvirt 1.2.11.

The Green IT theme here in ICCLab is working on monitoring and reducing data center energy consumption by leveraging OpenStack’s live migration feature. We’ve already experimented a little with live migration in the Havana release (mostly with no luck), but since live migration is touted as one of the new stable features of the Icehouse release, we decided to investigate how it has evolved. This blog post, largely based on the official OpenStack documentation, provides a step-by-step walkthrough of how to set up and perform virtual machine live migration on servers running the OpenStack Icehouse release and the KVM/QEMU hypervisor with libvirt.

Virtual machine (VM) live migration is a process where a VM instance, comprising its state, memory and emulated devices, is moved from one hypervisor to another with ideally no downtime. It can come in handy in many situations, such as basic system maintenance, VM consolidation and more complex load management systems designed to reduce data center energy consumption. Continue reading
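Stripped of all the OpenStack plumbing, what Nova ultimately asks libvirt to do can be reduced to a single call. A minimal, illustrative libvirt-python sketch, where the URIs and domain name are placeholders:

```python
import libvirt

src = libvirt.open("qemu+ssh://source-host/system")       # placeholder
dst = libvirt.open("qemu+ssh://destination-host/system")  # placeholder

dom = src.lookupByName("instance-00000001")  # placeholder domain name

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied;
# the guest is only paused for the final, brief stop-and-copy phase.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```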

Ice, Ice, Baby…installing Icehouse with Mirantis OS 5.0

The release of Icehouse brought a few enhancements that were of particular interest for our work – notably the Ceilometer/Telemetry developments, both in terms of data models and performance, and the improved support for live VM migration. As we had a positive experience installing OpenStack using Mirantis OpenStack 4.x (based on Fuel 4.x), we thought it would be worth having a go at upgrading to Icehouse with Mirantis OpenStack 5.0 (of which Fuel 5.0 is a key component) on a small set of servers. Here’s a short note on how it worked out.

Continue reading