Author: Bruno Grazioli (page 2 of 3)

Putting your OpenStack Horizon dashboard into a Docker Container

In one of our projects, GEYSER, we were looking into a way of packaging customized code for a pilot test. Broadly, we developed µservices which communicate with key Openstack components, and we specifically modified the Openstack Horizon code by adding a new dashboard. These µservices, as well as Openstack Horizon, are quite decoupled from the core Openstack system, meaning that communication is done mostly through external API calls rather than a message bus. A number of packaging options were considered: basic packaging of Python code is relatively straightforward but does not offer the flexibility we require, specifically around rollback. Other solutions include virtual environments or virtual machines, but ultimately we decided to use Docker containers, as they are all the rage these days. This blog post describes step by step how to containerize Horizon in Docker, noting any particular issues we observed in the process.

Getting Networking up and running in Openstack Cells – the Neutron way

In our previous blog post we described our experience enabling floating IPs in an Openstack Cells deployment using nova-network, through modifications to the nova Python libraries. That solution was not robust enough, and hence we had a go at installing Neutron networking, although there is very little documentation specifically addressing Neutron and Cells. Neutron's configuration offers better support and integration with the Cells architecture than we expected; unlike nova-network, operations such as floating IP association succeed without any modifications to the source code. Here, we present an overview of the Neutron networking architecture in Cells as well as the main takeaways we learnt from installing it in our (small) Cells deployment.
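To give a flavour of this, associating a floating IP in Neutron amounts to a couple of API calls; a minimal sketch using python-neutronclient, where the credentials, the external network ID and the instance's port ID are all placeholders:

```python
from neutronclient.v2_0 import client

# Placeholder credentials -- substitute values from your deployment.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Allocate a floating IP from the external network (placeholder ID).
fip = neutron.create_floatingip(
    {'floatingip': {'floating_network_id': 'EXT_NET_ID'}})

# Associate it with the Neutron port of a running instance.
neutron.update_floatingip(fip['floatingip']['id'],
                          {'floatingip': {'port_id': 'INSTANCE_PORT_ID'}})
```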

Openstack Cells and nova-network: Enabling floating ip association

In our previous blog post we presented an overview of Nova Cells, describing its architecture and how a basic configuration can be set up. After some further investigation it is clear why this is still considered experimental and unstable: some basic operations are not yet supported, e.g. floating IP association, and there are inconsistencies in the management of security groups between API and Compute Cells. Here, we focused on using only the key projects in OpenStack, i.e. Nova, Glance and Keystone, and avoided adding extra complexity to the system; for this reason legacy networking (nova-network) was chosen instead of Neutron – Neutron is generally more complex and we had seen problems reported between Neutron and Cells. In this blog post we describe our experience enabling floating IPs in an Openstack Cells architecture using nova-network, which required making some small modifications to the nova Python libraries.
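For context, this is the operation that does not work out of the box in Cells. A minimal sketch of what it looks like against a standard nova-network deployment, using a Juno-era python-novaclient (credentials and the instance name are placeholders):

```python
from novaclient.v1_1 import client

# Placeholder credentials -- substitute values from your deployment.
nova = client.Client('admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')

# Look up a running instance by name (placeholder).
server = nova.servers.find(name='my-instance')

# Allocate a floating IP from the default pool and attach it to the VM.
fip = nova.floating_ips.create()
server.add_floating_ip(fip.ip)
```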


Initial Experience with Openstack Nova Cells

In the GEYSER project, we are examining suitable Openstack architectures for our pilot deployments. In an earlier blog post we described different ways to architect an Openstack deployment, mostly focusing on AZs (Availability Zones) and Cells (those were the only options available back in 2013). Much has changed since then and new concepts have been added, such as regions and host aggregates. Even though Cells have been available since Grizzly, they are still considered experimental due to their immaturity and instability. In this blog post we describe our experience enabling Cells in an experimental Openstack deployment.

Why Cells?

The documentation says that “Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments”. Although we don’t have a large deployment, this is pretty much in line with the requirements for our pilot – a distributed system with a single public API exposed. Of the other architectural approaches currently available, the one which comes closest to this design is regions, but even that is not desirable as it exposes a public API for each region.

Extending the Openstack Dashboard to support Delay Tolerant Workload

[This post was originally published on the GEYSER blog. ICCLab is a partner in GEYSER and is responsible for developing workload migration mechanisms and other activities.]

Scheduling workload in the cloud is an important capability which can be used to realize energy savings, and it is the focus of some of our activities within the GEYSER project. The most prominent open source cloud stack – Openstack – provides little support for more flexible scheduling of workload, particularly pertaining to delay-tolerant work. The existing Openstack scheduler, within the nova component, launches every request in a sequential fashion – first-come-first-served – and consequently does not offer the required flexibility. Of course there is more intelligence in the scheduler – certain hosts can be given higher priority with weightings, or filters can be used to keep work off particular hosts – but it does not give the freedom to choose the time at which a VM/workload should be started. So, we developed a basic µservice which enables work to be scheduled for specific future points in time; the system is well integrated with the Openstack dashboard and we describe it here.
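Conceptually, the µservice holds a launch request until its requested start time and then hands it to Nova. The following is a very reduced sketch of that idea using python-novaclient – not the actual service code, and all credentials and IDs are placeholders:

```python
import time
from datetime import datetime, timedelta

from novaclient.v1_1 import client


def delayed_boot(nova, name, image_id, flavor_id, start_at):
    """Wait until start_at (a UTC datetime), then ask Nova to boot the VM."""
    delay = (start_at - datetime.utcnow()).total_seconds()
    if delay > 0:
        time.sleep(delay)
    return nova.servers.create(name, image_id, flavor_id)


# Placeholder credentials and IDs -- substitute your own.
nova = client.Client('admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')
delayed_boot(nova, 'deferred-vm', 'IMAGE_ID', 'FLAVOR_ID',
             datetime.utcnow() + timedelta(hours=2))
```

The real service keeps such requests in a queue rather than a blocking sleep, but the essence is the same: decouple the time a request is accepted from the time the VM is started.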


Managing ceilometer data in openstack

Ceilometer can collect a large amount of data, particularly in a system with a large number of servers and high activity: in such a scenario, the number of meters and samples can be large, which affects Ceilometer performance and gives rise to quite large databases. In our particular case we are studying energy consumption in servers and how resource utilization (mainly CPU) may relate to overall energy consumption. The energy data is collected through Kwapi and stored in Ceilometer every 10 seconds (yes, this is probably too fine-grained!). The database grew so quickly that it filled up the root disk partition on the controller, causing significant problems for the system. In this blog post, we describe the approach we now use for managing Ceilometer data, which ensures that the resources consumed by Ceilometer remain under control.
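For reference, Ceilometer itself offers a time_to_live option in the [database] section of ceilometer.conf (enforced by the ceilometer-expirer tool), which is the supported way of bounding the data. Purely as an illustration of the effect, here is a rough sketch that trims old samples directly, assuming a MongoDB backend where samples live in the meter collection of the ceilometer database:

```python
from datetime import datetime, timedelta

from pymongo import MongoClient

# Assumes a MongoDB backend; Ceilometer stores samples in the 'meter'
# collection of the 'ceilometer' database (placeholder connection URL).
db = MongoClient('mongodb://controller:27017')['ceilometer']

# Keep one week of samples and drop everything older.
cutoff = datetime.utcnow() - timedelta(days=7)
db.meter.delete_many({'timestamp': {'$lt': cutoff}})
```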

Using Ceilometer Data to Determine which VMs ran on a given Physical host in Openstack

In our previous blog post we showed a web application to monitor and understand energy consumption in an Openstack cluster; the main goal of this tool is to understand how energy consumption relates to activity on the cloud resources. In general, this can be quite complex as there are many resources to take into account and Ceilometer doesn’t report all the information required at this time. In this blog post we describe one task we needed to perform in this work: retrieving from Ceilometer the set of VMs running on a given physical server, together with their CPU utilization over some period of time. We can envisage other contexts in which it might be useful to do this, so we present the solution here. (Note that it is of course possible to obtain the set of VMs currently running on a given physical host using Nova, but this does not offer the historical perspective.)
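A condensed sketch of the kind of query involved, using python-ceilometerclient; note that the metadata field carrying the host name (metadata.instance_host below) is an assumption that may vary between releases, and the credentials, host name and timestamps are placeholders:

```python
from ceilometerclient import client

# Placeholder credentials -- substitute values from your deployment.
cclient = client.get_client(2,
                            os_username='admin', os_password='secret',
                            os_tenant_name='admin',
                            os_auth_url='http://controller:5000/v2.0')

# Fetch cpu_util samples recorded for a given compute node over a window.
# 'metadata.instance_host' is the field we relied on; it may differ
# between releases.
query = [
    {'field': 'metadata.instance_host', 'op': 'eq', 'value': 'compute-01'},
    {'field': 'timestamp', 'op': 'gt', 'value': '2015-03-01T00:00:00'},
    {'field': 'timestamp', 'op': 'lt', 'value': '2015-03-02T00:00:00'},
]
samples = cclient.samples.list(meter_name='cpu_util', q=query)

# Each distinct resource_id is a VM that ran on the host in that window.
vms = set(s.resource_id for s in samples)
```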


A Web Application to Monitor and Understand Energy Consumption in an Openstack Cloud

In one of our projects we need to understand the energy consumption of our servers. Our initial work in this direction involved collecting energy consumption data using Kwapi and storing it in Ceilometer for further study. The data stored in Ceilometer is valuable; however, it is insufficient to really understand energy consumption in detail. Consequently, we are developing a web application which gives much greater insight into energy consumption across our cloud resources. This is very much a work in progress, so this post just highlights a few points relating to the application, together with a video showing its current state.

The tool was developed to be fully integrated with Openstack. Users log in with their Openstack credentials (using Keystone authentication) and are redirected to the overview page, where they can see the total energy consumed by the VMs in their projects over the previous month as well as some general information regarding their virtual machines; a line chart displays how the energy consumed varies over time.
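The login step itself is plain Keystone; a minimal sketch of validating a user's credentials and obtaining a token (the endpoint and credentials below are placeholders):

```python
from keystoneclient.v2_0 import client as keystone_client


def authenticate(username, password, tenant_name):
    """Validate Openstack credentials and return a scoped token."""
    keystone = keystone_client.Client(
        username=username, password=password, tenant_name=tenant_name,
        auth_url='http://controller:5000/v2.0')  # placeholder endpoint
    return keystone.auth_token


token = authenticate('demo', 'secret', 'demo')
```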


Understanding the relationship between ceilometer processor utilisation and system energy consumption for a basic scenario in Openstack

In one of our earlier blog posts, we described some tests we performed to determine how server power consumption increases with compute load; this post is something of a variation on that one, but here we put the focus on work taking place within VMs rather than within the host OS. The point is to understand how VM load and energy consumption correlate. Here we document the results obtained.

As with our previous work, we focused on compute-bound loads – in this test we increased the compute load on the servers by performing π calculations inside the VMs. We used homogeneous VMs – all of the same flavor, with the following configuration: 2 GB RAM, 20 GB local disk and 1 VCPU.
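For reference, the load inside each VM was generated by a CPU-bound π calculation; a simple stand-in – a Leibniz-series sketch, not necessarily our exact benchmark – looks like this:

```python
def approximate_pi(iterations):
    """CPU-bound Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    pi = 0.0
    sign = 1.0
    for k in range(iterations):
        pi += sign * 4.0 / (2 * k + 1)
        sign = -sign
    return pi


if __name__ == '__main__':
    # A large iteration count keeps the VCPU saturated for the test window.
    print(approximate_pi(200000000))
```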


Collecting energy consumption data using Kwapi in Openstack

In some of our projects we need to understand the energy consumption of our servers in an Openstack cluster. The first step in this process was to collect energy consumption data from our IBM servers; we stored this in Ceilometer for further study. In this blog post we will cover how we do this.

First, Kwapi 101.

Kwapi is part of the Openstack ecosystem (perhaps a little peripheral) which is focused on collecting energy data. It integrates pretty well with Ceilometer, which enables the energy data to be stored there.

Kwapi is architected in such a way that individual drivers listen to specific wattmeters – a wattmeter can be a physical energy meter with a wifi interface, a device connected via IPMI, or any other kind of device that measures energy consumption. The drivers then pass the information on to plugins: the API plugin, the forwarder, and the RRD plugin, the last of which provides a web interface with power consumption graphs.
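To make the data flow concrete – this is not Kwapi's actual driver API, just a hedged sketch of what the forwarder ultimately achieves – a power reading could be pushed into Ceilometer as a sample using python-ceilometerclient, where the credentials are placeholders and read_wattmeter is a hypothetical stand-in for a real driver:

```python
from ceilometerclient import client

# Placeholder credentials -- substitute values from your deployment.
cclient = client.get_client(2,
                            os_username='admin', os_password='secret',
                            os_tenant_name='admin',
                            os_auth_url='http://controller:5000/v2.0')


def read_wattmeter():
    """Hypothetical stand-in for a driver polling a physical wattmeter."""
    return 230.0


# Store the reading as a gauge sample attached to the physical server.
cclient.samples.create(counter_name='power',
                       counter_type='gauge',
                       counter_unit='W',
                       counter_volume=read_wattmeter(),
                       resource_id='server-01')
```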

