Supporting Container based Application Deployment to Heterogeneous Hardware using Rancher

In our previous blog post we explained how networking works in Rancher in a Cattle environment. There we also mentioned that we have been working on enabling Rancher to operate on heterogeneous compute infrastructures – for example, a mixed environment comprising ARM based edge devices connected to VMs running in the cloud. In this blog post we go into more detail on how we built the rancher-agent service for aarch64 ARM based devices.

Rancher Labs had already done some work on supporting multi-arch hosts – most of it on enabling rancher-agent to work on ARM based devices – but as the Rancher platform evolved this was deprioritized. Back then, most of the rancher-agent scheduling and networking services running on the host were consolidated into a single container (agent-instance) and this was ported to ARM based devices as described in this blog post. From rancher-agent version v1.1.0, these microservices were split into separate containers, giving the user the option to select which scheduling or networking solution to use. Once this (significant) change to rancher-agent was made, Rancher Labs stopped advancing support for ARM devices. Continue reading

Monitoring an Openstack deployment with Prometheus and Grafana

Following on from our previous blog post, we are still looking at tools for collecting metrics from an Openstack deployment in order to understand its resource utilization. Although Monasca has a comprehensive set of metrics and alarm definitions, the complex installation process combined with a lack of documentation makes it a frustrating experience to get it up and running. Further, with its many moving parts, it was difficult to configure Monasca to obtain the analysis we wanted from the raw data, viz. how many of our servers are overloaded over different timescales in different respects (CPU, memory, disk IO, network IO). For these reasons we decided to try Prometheus with Grafana, which turned out to be much easier to install and configure (taking less than an hour to set up!). This blog post covers the installation and configuration of Prometheus and Grafana in Docker containers and how to install and configure Canonical’s Prometheus Openstack exporter to collect a small set of metrics related to an Openstack deployment.
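The wiring between the pieces is a single scrape job in Prometheus pointing at the exporter. A minimal sketch of the relevant `prometheus.yml` fragment might look as follows – the hostname and the exporter port (9183, the default for Canonical’s prometheus-openstack-exporter at the time of writing) are assumptions to adjust for your deployment:

```yaml
# prometheus.yml -- minimal sketch; 'exporter-host' and port 9183 are
# illustrative and must match where the Openstack exporter actually runs.
global:
  scrape_interval: 60s

scrape_configs:
  - job_name: 'openstack'
    static_configs:
      - targets: ['exporter-host:9183']
```

Grafana then simply needs Prometheus added as a data source to start building dashboards on these metrics.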

Continue reading

Installing Monasca – a happy ending after much sadness and woe

In one of our projects we are making contributions to an Openstack project called Watcher, which focuses on optimizing resource utilization of a cloud according to a given strategy. As part of this work it is important to understand the resource utilization of the cloud beforehand in order to make a meaningful contribution. This requires collecting metrics from the system and processing them to understand how the system is performing. The Ceilometer project was our default choice for collecting metrics in an Openstack deployment but as the work has evolved we are also exploring alternatives – specifically Monasca. In this blog post I will cover my personal experience installing Monasca (which was more challenging than expected) and how we hacked the monasca/demo docker image to connect it to our Openstack deployment. Continue reading

Openstack Summit Barcelona 2016 – Day 3

The third day of the summit had a different feel from the previous couple of days – there was no keynote and there were noticeably fewer people around: there is a strong sense that the show is over and now it’s necessary to do some real work. Hence, there is more time and space allocated to the project teams to enable them to move their work forward.

Continue reading

Trust delegation in Openstack using Keystone trusts

In one of our blog posts we presented a basic tool which extends the Openstack Nova client and supports executing API calls at some point in the future. Much has evolved since then: the tool is not just a wrapper around Openstack clients anymore; instead we rebuilt it in the context of the Openstack Mistral project, which provides very nice workflow-as-a-service capabilities – this will be elaborated a bit more in a future blog post. During this process we came across a very interesting feature in Keystone which we were not aware of – Trusts. Trusts is a mechanism in Keystone which enables delegation of roles and even impersonation of users from a trustor to a trustee; it has many uses but is particularly useful in an Openstack administration context. In this blog post we will cover basic command-line instructions to create and use trusts.
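To give a flavour of what this looks like, here is a minimal sketch using the `openstack` CLI – the project, role and user names are illustrative, and the trust ID placeholder must be filled in from the output of the first command:

```shell
# As the trustor: delegate the 'member' role on project 'demo' to the trustee.
openstack trust create --project demo --role member trustor-user trustee-user

# As the trustee: request a token scoped to the trust (ID from the step above).
openstack --os-trust-id <trust-id> token issue
```

With the trust-scoped token, the trustee can then perform operations in the trustor’s project with exactly the delegated roles.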

Continue reading

Testing PyMongo applications with MockupDB

In one of our projects, we needed to test some mongo based backend functionality: we wrote a small application comprising a mongo backend and a Python app which communicated with the backend via pymongo. We like the flexibility of mongo in a rapid prototyping context and did not want to go with a full fledged ORM model for this app. Here we describe how we used MockupDB to perform some unit testing on this app. Continue reading

Employing Openstack Watcher in GEYSER to make Openstack more Energy Efficient

[This post was originally published on the GEYSER blog by our own Seàn Murphy. ICCLab is a partner in GEYSER and is responsible for developing workload migration mechanisms and other activities.]

GEYSER focuses on making Data Centres more energy efficient in the context of varying availability of energy. One of the tools used in this context is a mechanism to effect load consolidation on IT workload in the Data Centres. The GEYSER project has chosen to focus on the Openstack cloud computing framework as the context to perform such load consolidation and in the earlier stages of the project developed a load consolidation solution which was demonstrated on a small cluster locally.

During project execution, activities evolved within the Openstack community resulting in an opportunity for GEYSER. More specifically, the Watcher group was formed within the Openstack community to focus on making Openstack more energy efficient. Interestingly, one of the main focal points of the Watcher group was also to leverage load consolidation mechanisms to effect energy savings. Continue reading

A Tool for Understanding OpenStack Cloud Performance using Stacktach and the OpenStack Notification System

In one of our projects, FICORE (the continuation of FIWARE), we need to offer an Openstack-based service. One aspect of service operations is to understand the performance of the system, and one particular aspect of this is to understand how long basic operations take; it is interesting to see how this evolves over time as, for example, a system may get more and more loaded. To address this, we first looked at using an approach based on log files but it was not workable as the information regarding an operation is spread across multiple hosts and services. An alternative approach is to use the Openstack notification system, where many key events occurring within the system are published – this is a single point for all the information we need. We then used Stacktach to consume, filter and store this data and built a web application on top of it. In this blog post we give a brief overview of the Openstack notification system, the Stacktach filtering tool and the basic web tool we developed.
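The kind of per-operation timing described above boils down to pairing `.start` and `.end` notification events by request ID. A toy sketch of that calculation is below – the event shape mimics common Openstack notification fields (`event_type`, `timestamp`, `_context_request_id`) but is an assumption for illustration, not Stacktach’s API:

```python
from datetime import datetime

# Pair *.start / *.end notification events by request id and compute
# how long each operation took, in seconds.
def operation_durations(events):
    starts = {}
    durations = {}
    for e in events:
        ts = datetime.strptime(e['timestamp'], '%Y-%m-%d %H:%M:%S')
        req = e['_context_request_id']
        if e['event_type'].endswith('.start'):
            starts[req] = ts
        elif e['event_type'].endswith('.end') and req in starts:
            durations[req] = (ts - starts[req]).total_seconds()
    return durations

events = [
    {'event_type': 'compute.instance.create.start',
     'timestamp': '2016-11-01 10:00:00', '_context_request_id': 'req-1'},
    {'event_type': 'compute.instance.create.end',
     'timestamp': '2016-11-01 10:00:42', '_context_request_id': 'req-1'},
]
print(operation_durations(events))  # {'req-1': 42.0}
```

Aggregating these durations over time is what makes it possible to see how operation latency evolves as the system gets more loaded.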
Continue reading

Putting your OpenStack Horizon dashboard into a Docker Container

In one of our projects, GEYSER, we were looking into a way of packaging customized code for a pilot test. Generally, we developed µservices which communicate with key Openstack components, and we specifically modified the Openstack Horizon code by adding a new dashboard. These µservices, as well as Openstack Horizon, are quite decoupled from the core Openstack system, meaning that communication is done mostly through external API calls rather than a message bus. A number of packaging options were considered: basic packaging of the Python code is relatively straightforward but does not offer the flexibility we require, specifically around rollback. Other solutions include virtual environments or virtual machines but, ultimately, we decided to use Docker containers, as they are all the rage these days. This blog post describes step-by-step how to containerize Horizon in Docker, noting any particular issues we observed in the process. Continue reading
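As a rough idea of the shape of such a container, a minimal Dockerfile sketch is shown below – the package name and config path match Ubuntu’s `openstack-dashboard` packaging at the time of writing, but verify them against your distro, and the `local_settings.py` being copied in is assumed to be your customized Horizon configuration:

```dockerfile
# Minimal sketch of a Horizon container based on Ubuntu packaging.
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y openstack-dashboard
# Point Horizon at your Keystone endpoint: bake in your customized
# local_settings.py (paths per Ubuntu's openstack-dashboard package).
COPY local_settings.py /etc/openstack-dashboard/local_settings.py
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

Custom dashboard code would be added with further `COPY` steps, and rolling back a pilot then reduces to running the previous image tag.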