The third day of the summit had a different feel from the previous two days – there was no keynote and there were noticeably fewer people around: there is a strong sense that the show is over and it is time to do some real work. Hence, more time and space is allocated to the project teams to enable them to move their work forward.
ICCLab is announcing the integration of its OpenStack OVA onboarding tool into OpenStack’s Horizon dashboard. To deploy an OVA file to OpenStack, the tool extracts all images from the file, performs the necessary file-format conversions automatically, uploads the resulting Glance images to the OpenStack cluster and creates a Heat stack out of them. As we mentioned a couple of weeks ago, uploading your local VMs into OpenStack has never been easier.
If you have ever thought of uploading your local VMs to OpenStack, perhaps you have come across OpenStack’s support for importing single virtual disk images. However, this cannot be used to deploy more complicated VM setups, such as network configurations and multiple VMs connected to each other.
We at ICCLab have therefore decided to develop a tool that allows anyone to upload their VM setups from their local environments directly to OpenStack. We call it the OpenStack VM onboarding tool and it is available as open source.
VM onboarding tool features:
- Easy to run – the tool comprises a simple frontend, a backend and the OpenStack client libraries used to access the OpenStack APIs. All these components can be run with one command.
- Easy to import – to import an OVA file the user only needs to provide basic OpenStack credentials (username, password, tenant, region, Keystone URL) and the OVA file.
- Full infrastructure import – the tool imports virtual machines, external networks, internal network connections and security groups.
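The import workflow above hinges on one detail: an OVA file is simply a tar archive bundling an OVF descriptor with one or more virtual disk images. A minimal sketch of the first step – unpacking the archive and locating the descriptor and disks – might look as follows (function and file names are illustrative, not the tool’s actual code):

```python
import os
import tarfile
import tempfile

def extract_ova(ova_path, dest_dir):
    """Unpack an OVA (a plain tar archive) and return the path of the
    OVF descriptor plus the paths of the virtual disk images inside."""
    with tarfile.open(ova_path) as archive:
        archive.extractall(dest_dir)
    ovf, disks = None, []
    for name in os.listdir(dest_dir):
        if name.endswith(".ovf"):
            ovf = os.path.join(dest_dir, name)
        elif name.endswith((".vmdk", ".vhd", ".qcow2")):
            disks.append(os.path.join(dest_dir, name))
    return ovf, disks

# Build a miniature fake OVA to demonstrate the round trip.
workdir = tempfile.mkdtemp()
for fname in ("demo.ovf", "disk1.vmdk"):
    with open(os.path.join(workdir, fname), "w") as f:
        f.write("placeholder")
ova = os.path.join(workdir, "demo.ova")
with tarfile.open(ova, "w") as archive:
    for fname in ("demo.ovf", "disk1.vmdk"):
        archive.add(os.path.join(workdir, fname), arcname=fname)

out = tempfile.mkdtemp()
ovf, disks = extract_ova(ova, out)
print(os.path.basename(ovf))                  # demo.ovf
print([os.path.basename(d) for d in disks])   # ['disk1.vmdk']
```

In the real tool, each extracted disk would then be converted to an OpenStack-friendly format (e.g. with `qemu-img convert -O qcow2`), uploaded to Glance, and referenced from a generated Heat template.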
In one of our blog posts we presented a basic tool which extends the OpenStack Nova client and supports scheduling API calls for execution at some point in the future. Much has evolved since then: the tool is no longer just a wrapper around OpenStack clients; we rebuilt it on top of the OpenStack Mistral project, which provides very nice workflow-as-a-service capabilities – this will be elaborated a bit more in a future blog post. During this process we came across a very interesting Keystone feature which we were not aware of – Trusts. A trust is a mechanism in Keystone which enables delegation of roles, and even impersonation of users, from a trustor to a trustee; it has many uses but is particularly useful in an OpenStack administration context. In this blog post we will cover basic command line instructions to create and use trusts.
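Under the hood, creating a trust boils down to a single call against Keystone’s `OS-TRUST` API (`POST /v3/OS-TRUST/trusts`). As a rough sketch of what the command line hands to Keystone, here is a function that assembles that request body – the IDs and role name are of course made-up placeholders:

```python
import json

def build_trust_request(trustor_id, trustee_id, project_id,
                        role_names, impersonation=True, expires_at=None):
    """Build the JSON body for POST /v3/OS-TRUST/trusts, the Keystone
    call that delegates the trustor's roles to the trustee."""
    return {
        "trust": {
            "trustor_user_id": trustor_id,
            "trustee_user_id": trustee_id,
            "project_id": project_id,
            "roles": [{"name": name} for name in role_names],
            "impersonation": impersonation,  # trustee may act *as* the trustor
            "expires_at": expires_at,        # None means the trust never expires
        }
    }

body = build_trust_request("alice-id", "bob-id", "demo-project-id", ["member"])
print(json.dumps(body, indent=2))
```

Once the trust exists, the trustee can request a trust-scoped token and act with exactly the delegated roles – no more, no less.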
As announced in our last blog post, the official release of Cyclops 2.0 is finally out and adds new features.
The collector that is being released today is the Ceilometer Usage Collector. This collector enables Cyclops 2.0 to provide full rating, charging and billing support to an OpenStack deployment using the data provided by Ceilometer.
In addition to the announced features, our team has pushed forward with the development of the new Usage Collectors. The Usage Collectors are the entry point of data into the framework itself. They are isolated microservices that gather data from a specific provider and distribute it via RabbitMQ to the UDR microservice.
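To illustrate the collector-to-UDR flow, here is a toy sketch of how a collector might shape a Ceilometer measurement into a usage record before publishing it; the field names are illustrative only and not the official Cyclops message schema:

```python
import json
import time

def make_usage_record(account, resource, usage, unit, timestamp=None):
    """Shape a usage measurement the way a usage collector might publish
    it to the UDR microservice (field names are illustrative, not the
    official Cyclops schema)."""
    return {
        "account": account,
        "resource": resource,
        "usage": usage,
        "unit": unit,
        "time": timestamp if timestamp is not None else int(time.time()),
    }

record = make_usage_record("tenant-42", "cpu_util", 0.35, "percent",
                           timestamp=1468800000)
payload = json.dumps(record)  # this string would be published to a RabbitMQ queue
print(payload)
```

Because each collector is an isolated microservice, adding support for a new provider means writing only this gathering-and-shaping step; the UDR service downstream stays unchanged.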
Last evening we organised the 13th OpenStack user group meetup at the ZHAW premises in Winterthur. The meetup was also a celebratory event to mark the 6th birthday of OpenStack, and the OpenStack Foundation supported the event by sponsoring it. I thank the OpenStack Foundation on behalf of the Swiss OpenStack community.
Our flagship open-source framework for cloud billing – Cyclops – has matured to version 2.0 today. Over the past several months, the Cyclops team at ICCLab has gathered community feedback and worked systematically on updating the framework’s core architecture to make the whole billing workflow for cloud services clean and seamless.
The core components are in principle still the same as in our previous releases – the udr, rc and billing microservices – but they have been rewritten from scratch with a main focus on modularity, extensibility and elasticity. The framework is highly configurable and can be deployed to match the unique billing use cases of any organization.
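Conceptually, the rc (rating and charging) step sits between udr and billing: it turns usage records into charge records by applying a rate to each resource. A deliberately simplified sketch, assuming a static rate table (the real Cyclops rc service is far more flexible, supporting configurable and dynamic rating):

```python
def charge(usage_records, rates):
    """Toy rating step: convert usage records into charge records using a
    static per-resource rate table."""
    charges = []
    for rec in usage_records:
        rate = rates[rec["resource"]]  # unit price for this resource type
        charges.append({
            "account": rec["account"],
            "resource": rec["resource"],
            "charge": rec["usage"] * rate,
        })
    return charges

usage = [
    {"account": "tenant-42", "resource": "vm_hours", "usage": 10},
    {"account": "tenant-42", "resource": "gb_stored", "usage": 50},
]
rates = {"vm_hours": 0.5, "gb_stored": 0.02}
print(charge(usage, rates))
# [{'account': 'tenant-42', 'resource': 'vm_hours', 'charge': 5.0},
#  {'account': 'tenant-42', 'resource': 'gb_stored', 'charge': 1.0}]
```

The billing microservice would then aggregate such charge records per account and billing period into an invoice.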
After 7 months, Service Engineering and SWITCH are back with the regular Swiss SDN workshops, this time held on the 16th of June at the ZHAW premises in Winterthur. For the 6th time, the Software Defined Networking (SDN) community from Switzerland and abroad, represented by both industry and academia, embarked on a joint SDN-NFV full-day journey to discuss SDN, present best practices and prototypes, and share know-how and demonstrations of their recent research activities. As a novelty this time, the SDN workshop/meetup was collocated with the Open Cloud Day, allowing for broader attendance from participants in both events. Regular attendees and new fellows could be spotted on site engaged in interesting discussions. You can find the thorough report of the event here from our collaborator SWITCH, and below I leave you the complete list of the SDN track. The complete presentations repository can be found here. Enjoy and see you at the next events!
GEYSER focuses on making Data Centres more energy efficient in the context of varying energy availability. One of the tools used in this context is a mechanism to effect load consolidation of the IT workload in the Data Centres. The GEYSER project has chosen the OpenStack cloud computing framework as the context in which to perform such load consolidation, and in the earlier stages of the project developed a load consolidation solution which was demonstrated locally on a small cluster.
During project execution, activities evolved within the OpenStack community, resulting in an opportunity for GEYSER. More specifically, the Watcher group was formed within the OpenStack community to focus on making OpenStack more energy efficient. Interestingly, one of the main focal points of the Watcher group was also to leverage load consolidation mechanisms to achieve energy savings.
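The core idea behind load consolidation can be sketched as a bin-packing problem: pack VM loads onto as few hosts as possible so that idle hosts can be powered down. A minimal greedy first-fit-decreasing sketch (VM names, loads and capacities are hypothetical, and real consolidation also weighs memory, affinity and migration cost):

```python
def consolidate(vms, host_capacity):
    """Greedy first-fit-decreasing placement: pack VM loads onto as few
    hosts as possible so idle hosts can be powered down."""
    hosts = []       # each entry is the remaining capacity of one active host
    placement = {}   # vm name -> index of the host it was placed on
    for vm, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:       # fits on an already-active host
                hosts[i] -= load
                placement[vm] = i
                break
        else:                      # no active host has room: power one on
            hosts.append(host_capacity - load)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

vms = {"vm1": 40, "vm2": 30, "vm3": 20, "vm4": 10}
placement, active_hosts = consolidate(vms, host_capacity=50)
print(active_hosts)  # 2 hosts suffice: (40+10) and (30+20)
```

The difference between the naive placement (one VM per host) and the consolidated one is exactly the number of hosts that can be suspended to save energy.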
In FIWARELab, we recently upgraded from OpenStack Icehouse to a Kilo High Availability (HA) deployment. Our approach involved running two active OpenStack deployments simultaneously so that VMs and volumes could be migrated from one deployment to the other with minimal downtime. More specifically, VMs and volumes were snapshotted and transferred to the Kilo deployment as images, where they could be recreated by their users. During this process, we found that VMs originally created from CentOS images did not boot properly – their network interface did not come up correctly and the VM was unable to fetch user metadata.
The root of the issue lies in a standard configuration of CentOS: its device manager (udev) saves a mapping between MAC address and network interface for security reasons and ensures that the network interface only comes up with that specific MAC address. This mapping is stored in a file which is created when a VM boots for the first time; the file is located in /etc/udev/rules.d/70-persistent-ipoib.rules.
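Since a recreated VM gets a fresh MAC address, the stale rules file pins the interface name to a MAC that no longer exists. The essence of the fix is to drop any rule that hard-codes a MAC binding before (or after) snapshotting. As a hedged sketch of what that filtering amounts to – the sample rule below is fabricated for illustration:

```python
import re

def strip_mac_bindings(rules_text):
    """Drop udev rules that pin a network interface name to a specific MAC
    address, so a cloned/snapshotted VM brings its NIC up on a new MAC."""
    kept = []
    for line in rules_text.splitlines():
        if re.search(r'ATTR\{address\}==', line):
            continue  # this rule hard-codes the old MAC; discard it
        kept.append(line)
    return "\n".join(kept)

sample = (
    '# Net device rules\n'
    'SUBSYSTEM=="net", ACTION=="add", '
    'ATTR{address}=="fa:16:3e:aa:bb:cc", NAME="eth0"\n'
)
print(strip_mac_bindings(sample))
```

In practice the simplest remedy is often to just delete the rules file inside the guest before taking the snapshot, so that udev regenerates it on the first boot in the new deployment.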