As the trend continues to move towards Serverless Computing, Edge Computing and Functions as a Service (FaaS), the need for a storage system that can adapt to these architectures grows ever greater. In a scenario where smart cars have to make split-second decisions, there is no time for the car to ask a data center what to do. Such scenarios are a driver for new storage solutions in more distributed architectures. In our work, we have been considering a scenario in which a distributed storage solution exposes different local endpoints to applications distributed over a mix of cloud and local resources; such applications can give the storage infrastructure an indicator of the nature of the data, which can then be used to determine where it should be stored. For example, data could be considered either latency-sensitive (in which case the storage system should try to store it as locally as possible) or loss-sensitive (in which case the storage system should ensure it is on reliable storage). Continue reading
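A placement policy of this kind could be sketched as follows. The names (`DataNature`, `choose_backend`) and the two-way classification are illustrative assumptions for this post's scenario, not part of any concrete system:

```python
from enum import Enum

class DataNature(Enum):
    """Hint an application attaches to its data."""
    LATENCY_SENSITIVE = "latency-sensitive"
    LOSS_SENSITIVE = "loss-sensitive"

def choose_backend(nature, local_backends, reliable_backends):
    """Pick a storage backend from the data's nature: latency-sensitive
    data goes to the nearest local endpoint, loss-sensitive data to
    replicated, reliable storage."""
    if nature is DataNature.LATENCY_SENSITIVE and local_backends:
        return local_backends[0]      # closest endpoint first
    return reliable_backends[0]       # fall back to reliable storage

# Example: an edge application tags sensor data as latency-sensitive.
backend = choose_backend(DataNature.LATENCY_SENSITIVE,
                         ["edge-node-1"], ["cloud-ceph"])
```

In a real system the hint would travel as metadata alongside the write request, and the backend lists would come from the storage infrastructure's view of its endpoints.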
After too many hours of trial and error, searching for the right way to write and integrate your own backend in Cinder, here are all the necessary steps and instructions. So if you are looking for a guide on how to integrate your own Cinder driver, look no further. Continue reading
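To give a feel for what such a driver looks like: a Cinder backend driver is a Python class implementing a handful of volume lifecycle methods. The sketch below shows the shape only; in a real deployment you subclass `cinder.volume.driver.VolumeDriver`, while here a stub base class and an in-memory "array" stand in so the example is self-contained:

```python
class VolumeDriver:
    """Stub standing in for cinder.volume.driver.VolumeDriver."""
    def __init__(self, *args, **kwargs):
        pass

class MyBackendDriver(VolumeDriver):
    VERSION = "1.0"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._volumes = {}            # stand-in for the real backend array

    def create_volume(self, volume):
        # Cinder hands the driver a volume object; allocate it on the backend
        # and return model updates (e.g. where the volume lives).
        self._volumes[volume["id"]] = volume["size"]
        return {"provider_location": "mybackend/%s" % volume["id"]}

    def delete_volume(self, volume):
        self._volumes.pop(volume["id"], None)

    def get_volume_stats(self, refresh=False):
        # The Cinder scheduler uses these stats to decide placement.
        return {"volume_backend_name": "mybackend",
                "driver_version": self.VERSION,
                "total_capacity_gb": 100,
                "free_capacity_gb": 100 - sum(self._volumes.values())}

driver = MyBackendDriver()
driver.create_volume({"id": "vol-1", "size": 10})
```

The full guide additionally covers wiring the driver into `cinder.conf` (via `volume_driver` and `enabled_backends`) and restarting the volume service.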
On the 21st of March we held the 15th OpenStack meetup. As ever, the talks were interesting, relevant and entertaining. It was kindly sponsored by Rackspace and held at their offices in Zürich. Many thanks to them and to previous sponsors!
At this meetup there were two talks and an interactive and impromptu panel discussion on the recent operators' meetup in Milan.
The first talk was by Giuseppe Paterno, who shared eBay's experience with the workloads running there on top of OpenStack.
Next up was Geoff Higginbottom from Rackspace who showed how to use Nagios and StackStorm to automate the recovery of OpenStack services. This was interesting from the lab’s perspective as much of what Geoff talked about was related to our Cloud Incident Management initiative. You can see almost the same talk that Geoff gave at the OpenStack Nordic Days.
The two presentations were followed by the panel discussion, involving those who attended, including our own Seán Murphy, and moderated by Andy Edmonds. Finally, as is now almost a tradition, we had a very nice apero!
Looking forward to the next and 16th OpenStack meetup!
At ICCLab, we have recently updated the OpenStack OVA onboarding tool to include an export functionality that can help operators migrate and checkpoint individual VMs. Furthermore, researchers can now export VMs to their local environments, even use them offline, and at any time bring them back to the cloud using the same tool.
The OpenStack OVA onboarding tool automatically transforms selected virtual machines into downloadable VMDK images. Virtual machines and their metadata are fetched from OpenStack's Nova service and packed as an OVA file. The tool offers GUI integration with OpenStack's Horizon dashboard, but can also be deployed separately.
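The packaging step itself is simple in outline: an OVA is just a tar archive whose first member is the OVF descriptor, followed by the disk images. A minimal sketch of that step (the function name and the in-memory inputs are illustrative; the real tool also fetches metadata from Nova and converts image formats first):

```python
import io
import tarfile

def pack_ova(ova_path, ovf_name, ovf_bytes, disks):
    """Pack an OVF descriptor plus disk images into an OVA archive.
    An OVA is a plain tar file whose first member must be the .ovf
    descriptor; 'disks' maps file names (e.g. *.vmdk) to their bytes."""
    with tarfile.open(ova_path, "w") as tar:
        members = [(ovf_name, ovf_bytes)] + list(disks.items())
        for name, data in members:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# Toy example: a stub descriptor and a tiny "disk".
pack_ova("vm.ova", "vm.ovf", b"<Envelope/>", {"disk1.vmdk": b"\x00" * 16})
```

The resulting file can be opened by any tool that understands OVA, since nothing beyond plain tar semantics is involved.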
Following our previous blog post, we are still looking at tools for collecting metrics from an OpenStack deployment in order to understand its resource utilization. Although Monasca has a comprehensive set of metrics and alarm definitions, the complex installation process combined with a lack of documentation makes it a frustrating experience to get it up and running. Moreover, with its many moving parts, it was difficult to configure it to obtain the analysis we wanted from the raw data, viz. how many of our servers are overloaded over different timescales and in different respects (CPU, memory, disk I/O, network I/O). For these reasons we decided to try Prometheus with Grafana, which turned out to be much easier to install and configure (taking less than an hour to set up!). This blog post covers the installation and configuration of Prometheus and Grafana in a Docker container, and how to install and configure Canonical's Prometheus OpenStack exporter to collect a small set of metrics related to an OpenStack deployment.
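The glue between the two pieces is a single scrape job in Prometheus's configuration. A minimal fragment, assuming the exporter runs on a host reachable as `exporter-host` and listens on its usual default port 9183 (adjust both to your deployment):

```yaml
# prometheus.yml -- point Prometheus at the OpenStack exporter.
scrape_configs:
  - job_name: 'openstack'
    scrape_interval: 60s
    static_configs:
      - targets: ['exporter-host:9183']
```

With this in place, the exporter's metrics appear in Prometheus and can be graphed from Grafana by adding Prometheus as a data source.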
In one of our projects we are making contributions to an OpenStack project called Watcher, which focuses on optimizing the resource utilization of a cloud according to a given strategy. As part of this work it is important to understand the resource utilization of the cloud beforehand in order to make a meaningful contribution. This requires collecting metrics from the system and processing them to understand how the system is performing. The Ceilometer project was our default choice for collecting metrics in an OpenStack deployment, but as the work has evolved we are also exploring alternatives – specifically Monasca. In this blog post I will cover my personal experience installing Monasca (which was more challenging than expected) and how we hacked the monasca/demo Docker image to connect it to our OpenStack deployment. Continue reading
ICCLab is announcing the integration of the OpenStack OVA onboarding tool into OpenStack's Horizon dashboard. To deploy an OVA file to OpenStack, the tool extracts all images from the file, performs the necessary file-format transformations automatically, uploads the images to the cluster as Glance images, and creates a Heat stack out of them. As we mentioned a couple of weeks ago, uploading your local VMs into OpenStack was never easier.
If you ever thought of uploading your local VMs to OpenStack, perhaps you have come across OpenStack’s support for importing single virtual disk images. However, this cannot be used to deploy complicated VM setups, including network configurations and multiple VMs connected to each other.
We at ICCLab have therefore decided to develop a tool that will allow anyone to upload their VM setups from their local environments directly to OpenStack. We call it OpenStack VM onboarding tool and it’s available as open source.
VM onboarding tool features:
- Easy to run – the tool comprises a simple frontend, a backend and OpenStack client libraries to access the OpenStack APIs. All these components can be easily run with one command.
- Easy to import – to import an OVA file the user needs to provide only basic OpenStack credentials (username, password, tenant, region, Keystone URL) and the OVA file.
- Full infrastructure import – the tool imports virtual machines, external networks, internal network connections and security groups.
In one of our blog posts we presented a basic tool which extends the OpenStack Nova client and supports executing API calls at some point in the future. Much has evolved since then: the tool is no longer just a wrapper around OpenStack clients; we rebuilt it in the context of the OpenStack Mistral project, which provides very nice workflow-as-a-service capabilities – this will be elaborated a bit more in a future blog post. During this process we came across a very interesting feature in Keystone which we were not aware of – Trusts. A trust is a Keystone mechanism which enables delegation of roles, and even impersonation of users, from a trustor to a trustee; it has many uses but is particularly useful in an OpenStack administration context. In this blog post we will cover basic command-line instructions to create and use trusts.
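In outline, the flow looks like the following illustrative transcript. User, project and role names are placeholders, and the exact options can vary by release (check `openstack trust create --help` for yours):

```shell
# The trustor delegates the 'member' role on project 'demo' to the trustee.
# Add --impersonate if the trustee should act as the trustor rather than
# merely hold the delegated role.
openstack trust create --project demo --role member \
    trustor_user trustee_user

# The trustee then consumes the trust by requesting a trust-scoped token,
# e.g. by setting OS_TRUST_ID (with project scoping unset) before a call:
export OS_TRUST_ID=<trust_id_from_previous_output>
openstack token issue
```

The token issued this way carries the delegated role on the trustor's project, which is exactly what a service like Mistral needs to act on a user's behalf later.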
As announced in our last blog post, the official release of Cyclops 2.0 is finally out and adds new features.
The collector that is being released today is the Ceilometer Usage Collector. This collector enables Cyclops 2.0 to provide full rating, charging and billing support to an OpenStack deployment using the data provided by Ceilometer.
In addition to the announced features, our team has pushed forward with the development of the new Usage Collectors. The Usage Collectors are the entry point of data for the framework itself. They are isolated microservices that gather data from a specific provider and distribute it via RabbitMQ to the UDR microservice.
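The shape of such a collector can be sketched in a few lines. The field names of the usage record and the routing key are illustrative, and a stub stands in for the RabbitMQ channel (in practice you would publish via a client such as pika) so the sketch runs without a broker:

```python
import json
import time

class RabbitMQStub:
    """Stand-in for a RabbitMQ channel; only publish() is simulated."""
    def __init__(self):
        self.published = []

    def publish(self, routing_key, body):
        self.published.append((routing_key, body))

def collect_and_dispatch(samples, channel):
    """Gather raw usage samples from a provider (here, pre-fetched
    Ceilometer-style dicts) and forward them as usage records to the
    UDR microservice's queue."""
    for s in samples:
        record = {"metric": s["counter_name"],
                  "account": s["project_id"],
                  "usage": s["counter_volume"],
                  "time": s.get("timestamp", int(time.time()))}
        channel.publish("udr", json.dumps(record))

channel = RabbitMQStub()
collect_and_dispatch(
    [{"counter_name": "cpu_util", "project_id": "p1",
      "counter_volume": 42.0, "timestamp": 1}],
    channel)
```

Because each collector only talks to its provider on one side and to the queue on the other, new providers can be added as independent microservices without touching the rest of the framework.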