Attending the Swiss Informatics Society Green IT Special Interest Group

The Green IT Special Interest Group (SIG) of the Swiss Informatics Society met yesterday (29/10/14) in Zurich. ZKB kindly hosted the event. A full meeting report will probably appear on the group’s website, but here we just capture some of our reflections on the group’s work.

This was our second time attending the group’s meetings. They attract a very interesting cross-section of people: some focused on making IT systems themselves more energy efficient, and others who want to use IT systems to make other verticals more energy efficient.

The group is led by the very active and engaging Klaus Meyer who does a fantastic job of defining the strategy and direction of the group, representing the group to interested parties, running the group meetings and generally banging the Green IT drum.

The meeting is attended by a diverse mix of participants: data centre operators interested in increasing energy efficiency in their facilities – among them representatives of the Swiss financial and insurance sectors – academics who approach energy efficiency from different perspectives, and consultants and small companies active in the space. All in all, the group brings together a healthy mix of perspectives, which leads to lively discussions.

At this week’s meeting, the host ZKB gave a presentation on the importance of energy efficiency in their IT systems and described how they achieved very significant savings in their operations through advanced data centre design, largely focused on cooling and airflow. This was followed by a very interesting presentation by the team from Born Green Technologies on a system they are developing which helps organizations understand the energy consumption of their IT systems, mostly focused on the equipment on people’s desks – phones, computers, monitors etc. They described a case study with a mid-size client in which they obtained 25% savings on the energy bill.

The group is receiving increasing interest – there is a so-called Antenna group being formed in La Suisse Romande – and we’re sure it will go from strength to strength in the coming years. From our point of view, we’re very happy to be associated with it and will continue to contribute as it grows.

The Cloud for testing environments

In our last ICCLab newsletter, in the cloud economics tutorial, we described how cloud infrastructures can be used to off-load variable and unpredictable resource needs. This is one of the basic principles of the public cloud business. The InIT ICCLab cloud-economics lecture provides extensive case studies and lab exercises on these topics.

1. Use Cases

The editorial of the same newsletter reported another use case: deploying environments for measurements and tests on the public cloud. This represents another good opportunity to utilize cloud-based infrastructures.


GreenPages introduces this concept as an enterprise use case, but it can be extended to other actors with similar needs – for example, the requirement to simulate production conditions for testing without affecting live deployments. With cloud services, suitable environments can be provisioned for application development teams without touching production, and later decommissioned, with charge-back reports generated for the respective cost centers. The cloud can address complex business needs with efficient, replicable and cost-effective solutions. With traditional hardware infrastructure, setting up a dedicated development environment can be expensive and time-consuming. Unlike physical test labs, testing in the cloud gives architects access to test environments on demand, without resource constraints and without capital expenditure.

2. Automation for operating cost savings

Compared to traditional server-based test environments, the cloud reduces IT operating costs through automation and orchestration. In addition to these savings, the organization can redirect the key staff previously needed for manual configuration to more mission-critical and value-added tasks, increasing margins overall. Cloud test environments allow teams to test against live environments rather than just modelling tools. The scenarios prepared for tests are closer to the final production configuration, increasing productivity and lowering risk in the IT environment.

3. What is the best strategy for test deployment in the cloud?

As test configurations grow in complexity to support fast delivery of innovative applications to the marketplace, it is worth considering how to reduce the time needed to plan, install and validate test environments. One key aspect is that the cloud enables provisioning of test infrastructure on demand, maximizing asset utilization. Feasibility studies are required to find the scenarios in which moving testing to the cloud can benefit the organization, and a cost analysis should be made to determine the correct mix of private and public cloud.
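As a toy illustration of such a cost analysis, the sketch below compares a dedicated in-house test server against on-demand cloud instances that exist only while tests run. All prices are made-up assumptions, not vendor quotes:

```python
# Toy private-vs-public cost comparison for a test environment.
# All numbers are illustrative assumptions, not real prices.
dedicated_server_per_month = 400.0   # amortized hardware + operations
cloud_rate_per_hour = 0.50           # on-demand instance price

def cloud_cost(test_hours_per_month: float) -> float:
    """Pay only for the hours the test environment actually exists."""
    return test_hours_per_month * cloud_rate_per_hour

break_even_hours = dedicated_server_per_month / cloud_rate_per_hour
print(break_even_hours)   # 800.0 -> cloud is cheaper below ~800 h/month
print(cloud_cost(120))    # 60.0 for a typical 120 h test month
```

Below the break-even utilization, on-demand provisioning wins; above it, a dedicated (private) environment may be the better part of the mix.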


The steps to follow to move application testing to the cloud more effectively are:

  • Understand business needs and the benefits of the cloud

Define the business and technical objectives of moving a particular testing project to the cloud, to gain more from your cloud investment

  • Formulate the testing strategy

The test strategy should clearly state what is to be achieved by moving testing to the cloud: cost savings, easy access to infrastructure, short cycle times, etc. The economics need to be analysed for each type of cloud test, along with the risks and the duration (and hence cost) of the tests.

  • Plan your infrastructure

Define the infrastructure requirements for building the test environment (private and/or public cloud). In the case of public cloud, the providers’ offers and prices should be an input: costs, terms and conditions, and exit or migration options to another service provider.

  • Execute the tests

The applications are tested according to the defined test strategy. Optimal utilization of the test infrastructure has to be ensured to achieve the cost benefits.

  • Monitor and analyze test results

Monitor test results in real time to understand and evaluate capacity and performance issues. Monitoring should also cover the financial performance of the cloud services. The test results can be mined in the cloud, and their analysis can take advantage of data science and big data technologies – another opportunity in itself.
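The workflow behind these steps – provision on demand, execute, meter utilization, decommission with a charge-back report – can be sketched in a few lines of Python. Everything here (the class, rates and report format) is hypothetical and stands in for real orchestration and billing APIs:

```python
class CloudTestEnvironment:
    """Hypothetical on-demand test environment with charge-back accounting."""

    def __init__(self, cost_center: str, rate_per_hour: float):
        self.cost_center = cost_center
        self.rate_per_hour = rate_per_hour
        self.hours_used = 0.0

    def provision(self) -> None:
        # In reality: ask the cloud orchestrator for the test VMs.
        print(f"provisioning test VMs for {self.cost_center}")

    def run_tests(self, hours: float) -> None:
        self.hours_used += hours   # metered utilization drives the charge-back

    def decommission(self) -> dict:
        """Tear down and emit the charge-back report for the cost center."""
        return {"cost_center": self.cost_center,
                "hours": self.hours_used,
                "charge": self.hours_used * self.rate_per_hour}

env = CloudTestEnvironment("dev-team-A", rate_per_hour=0.50)
env.provision()
env.run_tests(hours=6.0)
report = env.decommission()
print(report)   # {'cost_center': 'dev-team-A', 'hours': 6.0, 'charge': 3.0}
```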

4. Our experience with testing on the cloud

The ICCLab is investing heavily in cloud infrastructure and currently runs two OpenStack-based installations. These host test environments that will be used for internal projects and for cooperation projects in the FI-PPP and H2020 programmes.

The advantage of being able to use a cloud environment for testing is clear in our everyday activities. A typical concrete use case is setting up backend services running on a number of virtual machines, which can easily be (re-)created and destroyed in a very short time without affecting any other running activity.

These testing backends represent a very convenient and reliable point of presence for the applications that need them; at the same time, the cloud is flexible enough that reorganizing or radically changing the testing environment requires very little effort.

Some frequent use cases include:

  • Setting up cloud environments to support applications running locally during the development cycle. Using the cloud approach instead of having local testing environments ensures a higher degree of consistency and reliability.
  • Running automated tests against cloud backends.
  • Support demonstrations. This is a particularly useful scenario, as the testing environment running on the cloud can be easily used to showcase demos of our applications.

Another factor to consider is that a service, or the applications using it, can easily be moved from testing to pre-production. One of the internal projects we are currently developing requires a Swift backend; in the longer term, little to no change will be required if we want to distribute our application publicly and still have it running as we expect.

Beyond testing the applications we develop ourselves, we often use our cloud to set up temporary services (e.g., open source frameworks) for evaluation or analysis purposes. This kind of testing benefits greatly from the “on-demand, self-service” nature of cloud computing!

by Antonio Cimmino, Vincenzo Pii 


Setup a Kubernetes Cluster on OpenStack with Heat

In this post we take a look at Kubernetes and help you set up a Kubernetes cluster on your existing OpenStack cloud using its orchestration service, Heat. This Kubernetes cluster should only be used as a proof of concept.

Technology involved:

The Heat template used in this post is available on GitHub.

What is Kubernetes?

Kubernetes allows the management of Docker containers at scale. Its core concepts are covered in this presentation, held at the recent OpenStack & Docker user group meetup.

A complete overview of Kubernetes can be found in the Kubernetes repo.


The provisioned cluster consists of 5 VMs. The first one, discovery, is a dedicated etcd host; its static IP address makes etcd discovery easy.

A Kubernetes master host is set up with the Kubernetes components apiserver, scheduler, kube-register, controller-manager and proxy. This machine also gets a floating IP assigned and acts as an access point to your Kubernetes cluster.

Three Kubernetes minion hosts are set up with the Kubernetes components kubelet and proxy.
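As a rough illustration of what the Heat template contains (this is not the actual template from the repo; the image name, flavor and commands are placeholders), a single minion boils down to an OS::Nova::Server resource whose user_data starts the kubelet and proxy:

```yaml
heat_template_version: 2013-05-23

parameters:
  image:
    type: string
    default: fedora-atomic   # placeholder image name
  flavor:
    type: string
    default: m1.small

resources:
  kube_minion_1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      user_data_format: RAW
      user_data: |
        #cloud-config
        runcmd:
          - systemctl enable --now kubelet kube-proxy
```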


Follow the instructions on the GitHub repo to get your Kubernetes cluster up and running.


Two examples are provided in the repo.

8th Swiss Openstack meetup


Last week, on 16 Oct 2014, we saw great participation in the OpenStack User Group meeting @ICCLab Winterthur, which we co-located with the Docker CH meetup. Around 60 participants from both user groups attended.

For this event, we organised the agenda to offer a good mix of presentations from big players and from developers. Goals: analysis of OpenStack and Docker solutions, deployments and container orchestration.

Final Agenda  start: 18.00

Snacks and drinks were kindly offered by ZHAW and Mirantis.

We had some interesting technical discussions and Q&A with some speakers during the evening apero, as usual.



Numerical Dosimetry in the cloud

What is it all about?

We’re using a bunch of VMs to do numerical dosimetry and are very satisfied with the service and performance we get. Here I try to give some background on our work.

Imagine yourself sitting in the dentist’s chair for an x-ray image of your teeth. How much of the radiation will miss the x-ray film in your mouth and instead wander through your body? That’s one type of question we try to answer with computer models – or numerical dosimetry, as we call it.

The interactions between ionizing radiation – e.g. x-rays – and atoms are well known. However, there is a great deal of randomness, so-called stochastic behavior. Let’s go back to the dentist’s chair and follow one single photon (that’s the particle x-rays are composed of). This sounds a bit like ray tracing, but is way more noisy, as you’ll see.

The image below shows a voxel phantom (built of Lego bricks made of bone, fat, muscle etc.) during a radiography of the left breast.


Tracing a photon

The photon is just about to leave the x-ray tube. We take a known distribution of photon energies, throw dice and pick one energy at random. Then we decide – again by throwing dice – how long the photon will fly until it comes close to an atom. How exactly will it hit the atom? Which of the many processes (e.g. Compton scattering) will take place? How much energy will be lost, and in what direction will the photon leave the atom? The answer – you may have already guessed it – comes from rolling the dice. We repeat the process until the photon has lost all its energy or leaves our model world.

During its journey the photon has created many secondary particles (e.g. electrons kicked out of an atomic orbit). We follow each of them, and their children, again. Finally, all particles have come to rest and we know in detail what happened to that single photon and to the matter it crossed. This process takes some 100 microseconds on an average cloud CPU.
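A drastically simplified, one-dimensional version of this tracing loop can be written in a few lines of Python. The attenuation coefficient, the fixed 50% energy loss per interaction and the cut-off energy are made-up stand-ins for the physics tables a real code uses:

```python
import random

# Toy 1-D photon transport: sample free path lengths from an
# exponential distribution (the "throwing dice" in the text) until the
# photon either loses its energy or leaves the model world.
def trace_photon(energy: float, world_depth: float, mu: float,
                 rng: random.Random) -> float:
    """Return the energy deposited inside the world by one photon."""
    deposited, x = 0.0, 0.0
    while energy > 1e-3:
        x += rng.expovariate(mu)   # dice roll: distance to the next atom
        if x > world_depth:        # photon leaves the model world
            break
        loss = 0.5 * energy        # a dice roll would pick the process;
        deposited += loss          # here we always deposit half the energy
        energy -= loss
    return deposited

rng = random.Random(1)
doses = [trace_photon(1.0, world_depth=5.0, mu=1.0, rng=rng)
         for _ in range(10_000)]
print(sum(doses) / len(doses))   # average dose deposited per photon
```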

Monte Carlo (MC)

This method of problem solving is called Monte Carlo, after the roulette tables. You apply MC whenever there are too many parameters to solve a problem deterministically. One well-known application is the so-called raindrop Pi: by counting the fraction of random points that fall within a circle, you can approximate the number Pi (3.141...).
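The raindrop-Pi example is easy to reproduce, assuming a fixed seed for repeatability:

```python
import random

def raindrop_pi(n_drops: int, seed: int = 42) -> float:
    """Estimate Pi by counting random 'raindrops' that land inside
    the unit quarter circle inscribed in the unit square."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_drops)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    # area ratio: quarter circle / unit square = pi/4
    return 4.0 * hits / n_drops

print(raindrop_pi(1_000_000))   # close to 3.141...
```

The estimate fluctuates around Pi with an error shrinking like 1/sqrt(n), the same statistics that govern the photon counts below.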

Back to the dentist: unfortunately, with our single photon we do not yet see any energy deposit in your thyroid gland (located at the front of your neck). This first photon passed by pure chance without any interaction. So we just start another one – 5’000 per second, 18 million per hour, etc. – until we have collected enough dose in your neck. Only a tiny fraction q of the N initial photons ends up in our target volume, and the energy deposit shows fluctuations that typically decrease proportionally to 1/sqrt(qN). So we need some 1E9 initial photons to have 1E5 in the target volume and a relative error smaller than 1%. This would take 2 CPU days.
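These numbers can be checked quickly (q, N and the photon rate are taken from the text; q = 1E-4 is inferred from 1E5 target hits out of 1E9 photons):

```python
import math

q = 1e-4    # fraction of photons reaching the target (assumption: 1e5/1e9)
N = 1e9     # initial photons
in_target = q * N
rel_error = 1.0 / math.sqrt(in_target)   # ~ 1/sqrt(qN)
print(in_target)    # 100000.0 photons in the target volume
print(rel_error)    # ~0.0032, i.e. well below the 1 % goal

rate = 5_000                 # photons per second on one CPU
print(N / rate / 86_400)     # ~2.3 days of single-CPU time
```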

MC and the cloud

This type of MC problem is CPU bound and trivial to parallelize, since the photons are independent of each other (remember that a drop of water contains 1E23 molecules; our 1E9 photons will not disturb that). So with M CPUs my waiting time is reduced by a factor of M. In the above example, with 50 CPUs I have a result after 1 hour instead of 2 days.

On the one hand, this is quantitative progress. But on the other hand – and more important for my work – is the progress in quality: during one day I can play with 10 different scenarios, concentrate on problem solving, and not waste time unwinding the stack in my head after a week. The cloud helps to improve the quality of our work.

Practical considerations

The code we use is Geant4, a free C++ library to propagate particles through matter. Code development is done locally (e.g. Ubuntu in a VirtualBox VM) and then uploaded with rsync to the master node.

Our CPUs are distributed over several virtual machines deployed in the ICCLab’s OpenStack cloud. From the master we distribute code and collect results via rsync; job deployment and status checking are done through small bash scripts. The final analysis is then done locally with Matlab.

Code deployment and result collection is done within 30 seconds, which is negligible compared to run times of hours. So even on the job scale our speedup is M.
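A quick sanity check of the speedup figures, using the numbers from the text (2 CPU days of work, 50 CPUs, 30 seconds of rsync overhead per job):

```python
# With fixed per-job overhead for code deployment and result collection,
# the effective speedup on M CPUs is T / (T/M + overhead).
T = 2 * 24 * 3600          # ~2 CPU days of work, in seconds
M = 50                     # cloud CPUs
overhead = 30              # seconds of rsync deployment/collection
effective_speedup = T / (T / M + overhead)
print(T / M / 3600)        # 0.96 h wall time: "1 hour instead of 2 days"
print(effective_speedup)   # ~49.6, close to the ideal M = 50
```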

icclab@ Nagios World Conference 2014


ICCLab Cloud HA initiative Leader Konstantin Benz explains the OpenStack Nagios integration to the interested audience.

The ICCLab participated in the Nagios World Conference 2014, which took place Oct 13th-16th, 2014 in St. Paul, MN, USA. The ICCLab’s Cloud High Availability initiative leader Konstantin Benz presented an approach to using Nagios Core to monitor the utilization of OpenStack resources. The key point he made was that Nagios has to be reconfigured elastically in order to monitor virtual machines in an OpenStack environment. Depending on implementation requirements, it can be useful to employ configuration management tools like Puppet or Chef to automatically reconfigure the Nagios server as soon as new VMs are commissioned or decommissioned by cloud users. Another approach is to exploit OpenStack’s Ceilometer component, though an integration of Nagios with Ceilometer can lead to data duplication, which is problematic for some systems, said Benz. Besides the Nagios-Ceilometer plugin, Benz showed how elastic Nagios reconfiguration can work with Python Fabric and the Cuisine library; this approach seems to be a lightweight solution for monitoring VM utilization in OpenStack with Nagios. Benz also discussed a similar approach chosen in the XIFI project: the eXtensible Infrastructures for Future Internet cloud project uses Nagios as its main tool for monitoring OpenStack instances and the resources OpenStack provides.
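The elastic reconfiguration idea can be sketched in plain Python: regenerate the Nagios host definitions from the current VM list (hard-coded here; in a real deployment it would come from the Nova API) and then reload Nagios. The names, paths and host template are illustrative, not taken from the talk:

```python
def nagios_host_block(name: str, address: str) -> str:
    """Render one Nagios 'define host' block for a VM."""
    return (
        "define host {\n"
        "    use          generic-host\n"
        f"    host_name    {name}\n"
        f"    address      {address}\n"
        "}\n"
    )

def render_config(vms: dict) -> str:
    """Regenerate the whole VM host config from the current VM list."""
    return "\n".join(nagios_host_block(n, a) for n, a in sorted(vms.items()))

# In a real setup this dict would be filled from the OpenStack (Nova) API.
vms = {"web-01": "10.0.0.11", "db-01": "10.0.0.12"}
config = render_config(vms)
print(config)
# The generated text would then be written to a Nagios config file
# (e.g. under /etc/nagios/conf.d/) followed by a Nagios reload.
```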


Nagios Founder Ethan Galstad presents Nagios Log Server to the audience.

A highlight of the Nagios conference was a demo of Nagios Log Server, which was announced by Nagios founder Ethan Galstad. Nagios Log Server allows scalable and fast querying of log files, fully replacing “ELK” stack (Elasticsearch, Logstash, Kibana) solutions. Nagios Log Server is available under a perpetual licence that costs $995 – a very modest price compared to commercial solutions. In contrast to ELK stack solutions, Nagios Log Server offers user authentication to protect sensitive data in log files from being viewed by unauthorized visitors. Another advantage is the customizable visual dashboards that display log file findings; visualization makes reporting incidents to higher management much easier and allows for better monitoring.

The impact of ephemeral VM disk usage on the performance of Live Migration in Openstack

In our previous work we presented the performance of live migration in OpenStack Icehouse using various types of VM flavors and memory loads, and examined how it performs in network- and CPU-loaded environments (see our previous posts on the performance of live migration, the performance of block live migration, and the performance of both under varying CPU and network load). One factor not considered in our earlier work is the impact of the VM ephemeral disk size on the performance of live migration. That is the focus of this post. Continue reading

Valon Mamudi

Hello World!

Valon Mamudi started his apprenticeship in 2008 at Klein Computer System AG. After his four-year apprenticeship and two years working as a System Engineer at the same company, he decided to change roles and take on new challenges; consequently, he started as an ICT Service Engineer at ZHAW. He is now responsible for the entire infrastructure of the InIT.


Contact: mamu[at]

8th Swiss OpenStack and Docker User Group meeting – announcement


OpenStack User Group – Meeting, 16 Oct. at ICCLab Winterthur

Co-located with docker CH meeting

Goals: analysis of OpenStack solutions, deployments and container solutions.


ZHAW Zurich University of Applied Sciences
Technikumstrasse 9, 8401 Winterthur
Date: 16.10.2014 – 18:00 – 21:00

Agenda  start: 18.00 –  ROOM TL203  (Chemistry Building)

(order of speakers may change)

– Intro & Welcome 5 mins (Florian & Antonio)
– Peter Mumenthaler – Puzzle ITC – “Docker, blessing or curse?” (15m)
– Marco Kueding and Rolf Schaerer (Cisco CH) – “Intercloud and Cisco OpenStack strategy” (35m)
– Michael Erne, ZHAW ICCLab – “Manage Docker at scale with Kubernetes” (15m)

Drink break

– Jesper Kuhl, Nuage Networks & Alcatel Lucent  “VSP – Virtualized Services Platform” (25m)
– Srikanta Patanjali, ZHAW ICCLab – “Updates on CYCLOPS – A Charging platform for OpenStack Clouds” (20m)
– Alexander Gabert, Cynthia, “Network Virtualization” (20m)

– Common Wrap up and apero

Looking forward to seeing you all!
Snacks and drinks kindly offered by ZHAW and Mirantis

Comparison of Ryu and OpenDaylight Northbound APIs

For our SDK4SDN work we made a comparison between two SDN controllers: Ryu and OpenDaylight. We focused on the northbound APIs of the controllers and compared the capabilities and ease of use of their respective REST APIs.

Both controllers support REST, based on a mix of HTTP, JSON and XML. In Ryu, a WSGI web server is used to create the REST APIs, which link with other systems and browsers. In OpenDaylight, the Jersey library provides the REST APIs with both JSON and XML interfaces; Jersey also provides its own API to simplify RESTful services, extending the JAX-RS toolkit on which the northbound API is based.
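To give a flavour of what driving such a REST API looks like, here is a minimal sketch of a flow-entry request in the style of Ryu’s ofctl_rest application (POST to /stats/flowentry/add). The URL, port and payload fields are assumptions based on Ryu’s conventions and should be checked against your controller version; the request is only constructed here, not sent:

```python
import json

# Default Ryu REST endpoint (assumption: controller on localhost:8080).
RYU_URL = "http://127.0.0.1:8080/stats/flowentry/add"

flow = {
    "dpid": 1,                        # datapath (switch) id
    "priority": 100,
    "match": {"in_port": 1},          # match packets arriving on port 1
    "actions": [{"type": "OUTPUT", "port": 2}],  # forward them to port 2
}

body = json.dumps(flow)
print(body)
# Sending it would look like:
#   req = urllib.request.Request(RYU_URL, body.encode(),
#                                {"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```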

Continue reading