[This post was originally published on the GEYSER blog by our own Seàn Murphy. ICCLab is a partner in GEYSER and is responsible for developing workload migration mechanisms and other activities.]
GEYSER focuses on making Data Centres more energy efficient in the context of varying energy availability. One of the tools used in this context is a mechanism to consolidate IT workload within the Data Centre. The GEYSER project chose the OpenStack cloud computing framework as the context for such load consolidation and, in the earlier stages of the project, developed a load consolidation solution which was demonstrated on a small local cluster.
During project execution, activities within the OpenStack community evolved, creating an opportunity for GEYSER. More specifically, the Watcher group was formed within the OpenStack community to focus on making OpenStack more energy efficient. Interestingly, one of the main focal points of the Watcher group was also to leverage load consolidation mechanisms to achieve energy savings.
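To illustrate the idea behind load consolidation (this is a toy first-fit-decreasing sketch, not the algorithm used by GEYSER or Watcher): VM loads are packed onto as few hosts as possible, so that the remaining hosts can be powered down or put into a low-energy state.

```python
# Toy load consolidation: pack VM loads onto as few hosts as possible
# (first-fit decreasing). Illustrative only -- not GEYSER's or Watcher's
# actual consolidation strategy.

def consolidate(vm_loads, host_capacity):
    """Return a list of hosts, each a list of VM loads that fit its capacity."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # open a new host for this VM
    return hosts

# Five VMs fit onto two hosts instead of five:
print(consolidate([0.5, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0))
# -> [[0.7, 0.2, 0.1], [0.5, 0.4]]
```

Real consolidation strategies additionally account for migration cost, multiple resource dimensions (CPU, RAM, I/O) and anti-affinity constraints, but the packing intuition is the same.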
Cloud platforms allow development teams to bring applications to production very quickly.
In Cloud Foundry, a simple ‘cf push’ can be used to deploy your application and bind it to the required services. This works incredibly well for small applications. But as the trend in Cloud Native Applications moves towards microservice architectures, which can easily grow to a large number of decoupled components, it becomes hard to keep an overview of all the applications and their dependencies. Maintaining the deployment scripts and configuration of the applications can also get cumbersome and expensive, and deployments often become slow and unreliable.
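For a single application this declarative style works well. A minimal sketch of a Cloud Foundry `manifest.yml` that ‘cf push’ reads (the app and service names here are made up for illustration):

```yaml
---
applications:
- name: catalog-service     # hypothetical app name
  memory: 256M
  instances: 2
  services:
  - catalog-db              # hypothetical service instance, created beforehand
```

The pain starts when dozens of such manifests, their service bindings and their deployment order have to be kept consistent by hand.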
When dorma+kaba was developing exivo, their new trusted, on-demand access control solution for small enterprises, they were facing exactly these challenges: they had to run and maintain more than 70 apps and 60 services on the Swisscom Application Cloud.
After several months of development, last week finally saw the first beta release of the distributed computing orchestration framework DISCO.
What is DISCO anyway?
Have you ever needed a computing cluster for Big Data to be ready in a matter of seconds, with a huge number of computers at its disposal? If so, then DISCO is for you! DISCO (for DIStributed COmputing) is an abstraction layer over OpenStack's orchestration component, Heat (or any other framework which can deploy a Heat orchestration template). Based on the orchestration framework Hurtle developed at our lab, it supervises the whole lifecycle of a distributed computing cluster, from design to disposal.
How does DISCO work?
As already mentioned, DISCO is a middleman between OpenStack and the end user. It not only takes over the troublesome work of designing a whole (virtual) computing cluster, but also deploys a distributed computing architecture of choice onto that cluster, automatically.
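To give an idea of what sits underneath, here is a deliberately minimal Heat orchestration template of the kind a tool like DISCO could hand to Heat (resource names, image and flavor are invented for illustration; a real cluster template would contain many such servers plus networking and software configuration):

```yaml
heat_template_version: 2015-10-15

description: Illustrative single-node template (not DISCO's actual output)

resources:
  worker:
    type: OS::Nova::Server
    properties:
      name: disco-worker-1   # hypothetical instance name
      image: ubuntu-14.04    # hypothetical image available in the cloud
      flavor: m1.small
```

DISCO's value is precisely that the user never has to write or even see such templates.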
In FIWARELab, we recently upgraded from OpenStack Icehouse to a Kilo High Availability (HA) deployment. Our approach involved having two simultaneously active OpenStack deployments, so VMs and volumes could be migrated from one deployment to the other with minimal downtime. More specifically, VMs and volumes were snapshotted and transferred to the Kilo deployment as images, where they could be recreated by their users. During this process, we found that VMs originally created from CentOS images did not boot properly – their network interface did not come up correctly and the VM was unable to fetch user-metadata.
The root of the issue lies in a standard configuration of CentOS: its device manager (udev) saves a mapping between MAC address and network interface for security reasons and ensures that the network interface only comes up on that specific MAC address. This mapping is stored in a file which is created when a VM boots for the first time. The file is located at /etc/udev/rules.d/70-persistent-ipoib.rules.
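One way to deal with this is to delete the cached mapping before snapshotting, so that the recreated VM regenerates it with its new MAC address on first boot. A minimal sketch (the helper name is ours; on a real image you would point it at the VM's root filesystem, e.g. from inside the VM before shutdown or via a mounted snapshot):

```python
# Sketch: remove udev's cached MAC-to-interface mapping below `root`,
# so a cloned VM regenerates it for its new MAC address on first boot.
import glob
import os

def clear_persistent_net_rules(root="/"):
    """Delete cached udev persistent-interface rule files; return their paths."""
    pattern = os.path.join(root, "etc/udev/rules.d/70-persistent-*.rules")
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed
```

The glob also catches sibling rule files written by the same udev mechanism; adjust the pattern if only a single file should be cleared.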
Cloud services are meant to be elastically scalable and robust against all kinds of failures. The core services are very mature nowadays, but the tools which glue them together are often in need of quality improvements. Two common risks in networked environments are (1) unavailability and (2) slowness of services. The first risk is easier to detect but more severe in its effects. Furthermore, there is a dependency between the two, as timeouts in many layers of the software cause unavailability failures upon strong slowdown. Timeouts should be avoided but are often part of protocols, libraries, frameworks and stacks in almost arbitrary combinations, so that in practice failures happen more often than necessary. This post shows how research initiatives in the Service Prototyping Lab work on improving the situation to make access to cloud services more robust and easier to handle for application developers.
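To give a flavour of what client-side hardening can look like (a generic sketch, not one of the lab's tools): a small retry wrapper with backoff turns transient slowness back into successful calls, while a genuine unavailability failure is still surfaced to the caller.

```python
# Sketch: retry a call that may time out, with exponential backoff.
# Hypothetical helper for illustration, not a Service Prototyping Lab tool.
import time

def call_with_retries(fn, attempts=3, backoff=0.01):
    """Call fn(), retrying on TimeoutError; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # genuine unavailability: let the caller see it
            time.sleep(backoff * (2 ** attempt))

# Simulated flaky service: times out twice, then answers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("service too slow")
    return "ok"

print(call_with_retries(flaky))  # -> ok
```

Production-grade variants would add jitter, an overall deadline budget and a circuit breaker, which is exactly the kind of glue code that today every application developer reinvents.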
The Service Engineering (SE, blog.zhaw.ch/icclab) group at the Zurich University of Applied Sciences (ZHAW) / Institute of Applied Information Technology (InIT) in Switzerland is seeking applications for a full-time position at its Winterthur facility.