Author: cima

Observations from 11th NGSDP Experts Talk

On 22nd April 2016, the 11th Experts Talk on Next Generation Service Delivery Platforms (NGSDP) was held at the Telekom Innovation Laboratories in Berlin. The purpose of the event is to bring together thought leaders in the area of Next Generation Services to discuss the state of the art in the field. Continue reading

StickTrack – ICCLab hackathon project

[Note: this project took place as part of the ICCLab Hackathon – more general information on the ICCLab Hackathon is here].

Shared inventories, such as office fridges containing items with expiration dates, have always been a problem to manage. Our very own fridge is no exception. At the moment, about 35 of us (ICCLab & DataLab folks) share the same fridge and go through the everyday hassle of remembering which food items we have in the fridge and which ones need to be consumed or thrown away because of their expiration dates. After a terrible period of “fridge-chaos” (November 2015), Annette came up with a practical solution to the problem: hand-written labels on each food item in the fridge. One person (thanks, Denis) was assigned solely to maintaining the fridge’s healthy ecosystem and checking for any growth of life! In other words, Denis was responsible for continuously checking the expiration dates of food items and informing their owners to take immediate action. Since then we have been thinking of ways to automate this 1845-era technology and came up with StickTrack – a QR-code based inventory tracking solution that notifies our small fridge community about what’s going on inside the fridge. Lidia, Martin, Oleksii, Andy, Piyush, Amrita and I (also known as team “Un Palo”) decided to make the future happen during our internal 3-day hackathon. Continue reading
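As a rough illustration of the idea (not the actual StickTrack code), a QR label could simply encode an item’s owner and expiration date, and a small script could scan the inventory and flag anything about to expire. The qrcode library, the JSON payload format and the field names below are assumptions made for this sketch.

```python
# Minimal sketch of the StickTrack idea (hypothetical, not the hackathon code):
# encode item metadata into a QR label and flag items close to expiration.
import json
from datetime import date, datetime, timedelta

import qrcode  # pip install qrcode[pil] -- assumed label generator


def make_label(owner, item, expires_on, path):
    """Render a QR label encoding the item's owner, name and expiry date."""
    payload = json.dumps({"owner": owner, "item": item, "expires": expires_on})
    qrcode.make(payload).save(path)


def expiring_soon(scanned_payloads, days=2):
    """Return the scanned items that expire within `days` days."""
    soon = []
    for raw in scanned_payloads:
        entry = json.loads(raw)
        expires = datetime.strptime(entry["expires"], "%Y-%m-%d").date()
        if expires <= date.today() + timedelta(days=days):
            soon.append(entry)
    return soon


if __name__ == "__main__":
    make_label("Denis", "yoghurt", "2016-05-02", "yoghurt_label.png")
    # Payloads would normally come from a QR scanner app; hard-coded here.
    print(expiring_soon(['{"owner": "Denis", "item": "yoghurt", "expires": "2016-05-02"}']))
```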

Orchestrating IMS – Project Clearwater on CloudStack using Heat and Hurtle

Project Clearwater is an open source implementation of the IP Multimedia Subsystem (IMS) developed for scalable deployment in the cloud to provide voice, video and messaging services. Work has been done before on orchestrating Clearwater in OpenStack using Cloudify. We, in cooperation with our partner Citrix, present the orchestration of this system on Apache CloudStack using OpenStack Heat with our recent plugin. Continue reading
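For context, once CloudStack resources are available to Heat (see the plugin post below), deploying a Clearwater topology boils down to creating a stack from a template. A minimal sketch with python-heatclient is shown here; the endpoint, token, template file name and parameters are placeholders, not the actual deployment artifacts.

```python
# Sketch: creating a Heat stack from a (hypothetical) Clearwater template.
# Endpoint, token, template path and parameter names are illustrative only.
from heatclient import client as heat_client

heat = heat_client.Client('1',
                          endpoint='http://heat.example.com:8004/v1/TENANT_ID',
                          token='KEYSTONE_TOKEN')

with open('clearwater.yaml') as f:   # hypothetical HOT template that would use
    template = f.read()              # the CloudStack resource types of the plugin

heat.stacks.create(stack_name='clearwater',
                   template=template,
                   parameters={'zone': 'zone-1'})  # placeholder parameter

for stack in heat.stacks.list():
    print(stack.stack_name, stack.stack_status)
```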

OpenStack Heat plugin for Apache CloudStack

This blog post presents a plugin for OpenStack Heat which adds support for Apache CloudStack resources and thus enables template-based orchestration on CloudStack using Heat. As the plugin extends Heat’s standard resource type list, it can also be used within our Hurtle orchestrator to provide your application as a service, or by any other application built on top of Heat. This work follows from our earlier work in which we developed a Heat plugin for SDC. Continue reading
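Heat resource plugins are Python classes registered under a resource type name; the sketch below shows the general shape of such a plugin, not the actual CloudStack plugin. The resource type name, the properties and the CloudStack calls are placeholders.

```python
# General shape of a Heat resource plugin (illustrative, not the actual
# CloudStack plugin): a Resource subclass plus a resource_mapping() hook.
from heat.engine import properties, resource


class CloudStackInstance(resource.Resource):
    """Hypothetical Heat resource wrapping a CloudStack virtual machine."""

    PROPERTIES = (SERVICE_OFFERING, TEMPLATE, ZONE) = (
        'service_offering_id', 'template_id', 'zone_id')

    properties_schema = {
        SERVICE_OFFERING: properties.Schema(properties.Schema.STRING, required=True),
        TEMPLATE: properties.Schema(properties.Schema.STRING, required=True),
        ZONE: properties.Schema(properties.Schema.STRING, required=True),
    }

    def handle_create(self):
        # A real plugin would call the CloudStack API (deployVirtualMachine)
        # here with the properties above and remember the returned VM id.
        vm_id = self._cloudstack_deploy()
        self.resource_id_set(vm_id)
        return vm_id

    def check_create_complete(self, vm_id):
        # Heat polls this until it returns True, i.e. until the asynchronous
        # CloudStack deploy job has finished.
        return self._cloudstack_job_done(vm_id)

    def handle_delete(self):
        if self.resource_id is not None:
            self._cloudstack_destroy(self.resource_id)

    # Placeholder CloudStack calls -- stand-ins for a real CloudStack API client.
    def _cloudstack_deploy(self):
        raise NotImplementedError

    def _cloudstack_job_done(self, vm_id):
        raise NotImplementedError

    def _cloudstack_destroy(self, vm_id):
        raise NotImplementedError


def resource_mapping():
    # Registers the template resource type name (the name here is illustrative).
    return {'CloudStack::Compute::Instance': CloudStackInstance}
```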

Introduction to Apache Mesos

Service scheduling and task placement within large-scale clusters is receiving a lot of interest in the cloud community at present. Moreover, service scheduling is one of the keystones of our recently kicked-off ACeN project, and we finally got a chance to experiment with the technology that is currently the frontrunner in this area – Apache Mesos. As Mesos provides much more control over service placement than the currently available built-in IaaS schedulers, it elegantly addresses many problems in data centers such as task data locality, efficient resource utilization and accommodation of load variation. This blog post describes the Mesos architecture and its basic workflow, and explains why we think it’s a big deal in the cloud context as well.

Continue reading
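At the heart of that workflow is Mesos’ two-level scheduling: the master sends resource offers to frameworks, and each framework’s scheduler decides what, if anything, to launch on them. The minimal framework below, written against the legacy mesos.interface Python bindings of that era, merely logs and declines offers; the master URL and framework name are placeholders.

```python
# Minimal Mesos framework sketch (assumes the legacy mesos.interface /
# mesos.native Python bindings): it receives resource offers and declines them,
# which is enough to observe the offer-based workflow.
from mesos.interface import Scheduler, mesos_pb2
from mesos.native import MesosSchedulerDriver


class OfferLogger(Scheduler):
    def registered(self, driver, framework_id, master_info):
        print("Registered with framework id %s" % framework_id.value)

    def resourceOffers(self, driver, offers):
        # A real framework would pick offers and call driver.launchTasks() here.
        for offer in offers:
            resources = {r.name: r.scalar.value for r in offer.resources
                         if r.type == mesos_pb2.Value.SCALAR}
            print("Offer from %s: %s" % (offer.hostname, resources))
            driver.declineOffer(offer.id)


if __name__ == "__main__":
    framework = mesos_pb2.FrameworkInfo()
    framework.user = ""                      # let Mesos fill in the current user
    framework.name = "offer-logger"          # placeholder framework name
    driver = MesosSchedulerDriver(OfferLogger(), framework,
                                  "zk://localhost:2181/mesos")  # placeholder master
    driver.run()
```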

Experimental evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces

Up to now, we have published several blog posts focusing on live migration performance in our experimental OpenStack deployment – a performance analysis of post-copy live migration in OpenStack and an analysis of the performance of live migration in OpenStack. While analysing the behaviour of the different live migration algorithms (see our previous blog posts on pre-copy and post-copy (hybrid) live migration performance), we observed that both algorithms can easily saturate our 1Gb/s infrastructure – and that is not fast enough, not for us! Fortunately, our friends Robayet Nasim and Prof. Andreas Kassler from Karlstad University, Sweden also like their live migrations as fast and reliable as possible, so they kindly offered their 10Gb/s infrastructure for further performance analysis. Since this topic is very much in line with the objectives of the COST ACROSS action, in which both we (ICCLab!) and Karlstad are participants, this analysis was carried out as a 2-week short term scientific mission (STSM) within the action.
This blog post presents a short wrap-up of the results obtained, focusing on the evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces and comparing them with the performance of the 1Gb/s setup. The full STSM report can be found here. Continue reading

Tunneled Hybrid Live Migration

In our previous blog posts we mostly focused on virtual machine live migration performance, comparing the pre-copy, post-copy and hybrid approaches in an OpenStack context, rather than exploring other live migration features. Libvirt together with the QEMU hypervisor provides many migration configuration options. One of these options is the possibility to use tunneled live migration. Recently we found that the current libvirt tunneling implementation does not support post-copy migration. Consequently, in order to make the post-copy patch more production-ready, we decided to contribute to the community and add support for tunneled post-copy live migration to libvirt ourselves. This blog post describes the whole story of immersing ourselves in the open source community and hacking on an established open source project, since we believe this experience can be generalized. Continue reading
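To make the feature combination concrete, the sketch below triggers a tunneled peer-to-peer live migration through the libvirt Python bindings, assuming a libvirt build that already carries post-copy support (the corresponding upstream flag is VIR_MIGRATE_POSTCOPY). Host names and the domain name are placeholders.

```python
# Sketch: tunneled live migration via libvirt-python with post-copy allowed.
# Requires a libvirt/QEMU stack with post-copy support; hosts and the domain
# name are placeholders.
import libvirt

src = libvirt.open('qemu:///system')
dst = libvirt.open('qemu+ssh://destination-host/system')   # placeholder destination
dom = src.lookupByName('migrating-vm')                      # placeholder domain

flags = (libvirt.VIR_MIGRATE_LIVE |
         libvirt.VIR_MIGRATE_PEER2PEER |
         libvirt.VIR_MIGRATE_TUNNELLED |   # carry migration data over the libvirtd connection
         libvirt.VIR_MIGRATE_POSTCOPY)     # allow switching the migration to post-copy

# Starts as a normal pre-copy migration; while it is in flight, the switch-over
# to post-copy is triggered (e.g. from another thread) with
# dom.migrateStartPostCopy().
dom.migrate(dst, flags, None, None, 0)
```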

Performance analysis of “post-copy” live migration in Openstack

Previously we described how to set up post-copy live migration in OpenStack Icehouse (and it should not be a problem to set it up in the same way in Juno). Naturally, we were curious to see how it performs. In this blog post we focus on a performance analysis of post-copy live migration in OpenStack Icehouse using QEMU/KVM with libvirt. Continue reading
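One simple way to get a first number for total migration time (not necessarily the methodology used in the analysis itself) is to trigger the migration through the Nova API and poll until the instance has landed on the target host. The sketch below uses python-novaclient; credentials, host names and the instance name are placeholders.

```python
# Sketch: trigger a live migration via python-novaclient and time how long the
# instance takes to land on the target host. Credentials and names are placeholders.
import time
from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'PASSWORD', 'admin',
                          'http://controller:5000/v2.0')    # placeholder credentials

server = nova.servers.find(name='migrating-vm')              # placeholder instance
target = 'compute-2'                                         # placeholder target host

start = time.time()
server.live_migrate(host=target, block_migration=False, disk_over_commit=False)

while True:
    s = nova.servers.get(server.id)
    if getattr(s, 'OS-EXT-SRV-ATTR:host') == target and s.status == 'ACTIVE':
        break
    time.sleep(1)

print('total migration time: %.1f s' % (time.time() - start))
```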

Setting up post-copy live migration in OpenStack

We have shed blood, sweat and tears trying to get post-copy live migration working in OpenStack over the last month: this blog post explains all the necessary steps to make it work and will hopefully save future post-copy pioneers some pain. In our previous blog post we focused on setting it up in QEMU. This time we consider the bigger picture, spanning all the levels of OpenStack from system kernel requirements to setting the right flags in the Nova configuration file, in OpenStack environments running QEMU/KVM with libvirt. Continue reading
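As a taste of the Nova side, the relevant knob in that era was the live_migration_flag option in the [libvirt] section of nova.conf on the compute nodes; adding the post-copy flag there (on a libvirt/QEMU stack that actually supports it) is what ultimately lets Nova-initiated migrations be switched to post-copy. The exact flag set below is only an illustration and may differ from the one described in the full post.

```ini
# nova.conf on the compute nodes -- illustrative flag set only
[libvirt]
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_POSTCOPY
```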

Post-copy live migration in QEMU

Hurray! We have finally deployed QEMU 2.1.5 with post-copy live migration support on our servers! But before we get to that, a little bit of context… in our previous blog posts we focused on the performance analysis of pre-copy live migration in OpenStack. So far all of our experiments were done using QEMU version 1.2 with KVM acceleration. As we were keen to do some experimentation with post-copy live migration, we had to upgrade to the very new QEMU 2.1.5, which provides post-copy live migration support in one of its branches. (More generally, there have been significant enhancements in QEMU since version 1.2 – from November 2012 – and hence we expected better performance and reliability in pre-copy as well.) This blog post focuses on our first hands-on experience with post-copy live migration in QEMU.

Continue reading
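For a flavour of what driving this by hand looks like, the sketch below talks QMP to a QEMU instance started with a QMP unix socket, enables the post-copy capability, starts the migration and then triggers the switch-over. The capability and command names (postcopy-ram, migrate-start-postcopy) are the ones that later landed upstream; the experimental 2.1.5 branch may have used different (x-prefixed) names, so treat this purely as an illustration.

```python
# Sketch: enabling post-copy and switching over via QMP. Socket path, migration
# URI and command/capability names are illustrative; the experimental QEMU
# branch may use different (x-prefixed) names.
import json
import socket
import time


def qmp_command(sock, execute, arguments=None):
    cmd = {'execute': execute}
    if arguments:
        cmd['arguments'] = arguments
    sock.sendall(json.dumps(cmd).encode() + b'\n')
    return json.loads(sock.recv(65536).decode())


sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect('/tmp/qmp.sock')                 # placeholder QMP socket of the source VM
sock.recv(65536)                              # read the QMP greeting
qmp_command(sock, 'qmp_capabilities')         # complete the QMP handshake

# Advertise the post-copy capability and start a normal (pre-copy) migration ...
qmp_command(sock, 'migrate-set-capabilities',
            {'capabilities': [{'capability': 'postcopy-ram', 'state': True}]})
qmp_command(sock, 'migrate', {'uri': 'tcp:destination-host:4444'})  # placeholder target

time.sleep(5)                                 # let a first pre-copy pass run
# ... then switch the running migration over to post-copy.
qmp_command(sock, 'migrate-start-postcopy')
print(qmp_command(sock, 'query-migrate'))
```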
