Cyclops 2.0 supports Ceilometer Data Collection!

As announced in our last blog post, the official release of Cyclops 2.0 is finally out and keeps adding new features.

The collector that is being released today is the Ceilometer Usage Collector. This collector enables Cyclops 2.0 to provide full rating, charging and billing support to an OpenStack deployment using the data provided by Ceilometer.

In addition to the announced features, our team has pushed forward with the development of the new Usage Collectors. The Usage Collectors are the entry point of data into the framework itself. They are isolated microservices that gather data from a specific provider and distribute it via RabbitMQ to the UDR microservice.

Continue reading

Open source cloud billing engine Cyclops @ OpenStack CEE Day 2016

The Cyclops team at the SE research group at InIT is pleased to announce its upcoming talk at OpenStack CEE Day 2016. Cyclops is undergoing development at breakneck speed and many improvements have been incorporated into the framework in the last 6 months. Interested? Then hear us out in Budapest on June 6, 2016. Continue reading

VM Reliability Tester for OpenStack


“Measure and benchmark reliability of your OpenStack virtual machines.”

“VM Reliability Tester” is a software tool that tests the performance and reliability of virtual machines hosted in an OpenStack cloud platform. It evaluates the failure rate of VMs by performing a stress test on them. VM Reliability Tester installs OpenStack virtual machines, uploads a test program to them, runs this test program remotely and then captures program execution times to determine the reliability of the virtual machines. If a test program run takes significantly longer to complete than expected, this is counted as a VM failure. Such deviations in execution time are an important benchmark for testing the performance and reliability of your OpenStack environment.

Why VM Reliability Tester?

Cloud computing (or, to be more precise, virtualization) provides virtual resources instead of physical ones. The performance of virtual resources is hidden from the user, because virtual resources abstract from the physical hardware layer. As a system administrator you still might want to know how your virtual machines react under heavy load, and you want true performance measurements instead of promises by your cloud vendor. Therefore it can be an advantage to test the behaviour of the virtual machines you have created in your OpenStack cloud and to measure their performance before building a production infrastructure and deploying production applications on them. VM Reliability Tester delivers estimates of how your VM performs when it is running applications. With the data produced by VM Reliability Tester you will be able to:

  • Check if your VM is performing well enough to serve your performance requirements.
  • Benchmark VM images in terms of application performance.
  • Benchmark OpenStack platforms from different vendors.
  • Acquire data that helps you to shape SLAs and underpinning contracts.

How does it work?

VM Reliability Tester uses a “master” VM which serves to create test VMs and upload test programs to them. The master VM first configures the test VMs and then runs the uploaded test programs. Test program runs are repeated in a (configurable) batch of several program runs. The test program executes the configured number of times on the test VMs and logs the execution time of each run. After a batch of test program runs has finished, the master VM captures the logged execution times and calculates the mean and standard deviation of execution times in the batch. If a test program run took longer than the batch mean plus 3 standard deviations, it is counted as a failure and logged by the master VM in a file called “f_rates.csv”.
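To make the failure criterion concrete, here is a minimal sketch of how a batch of logged execution times could be screened against the mean-plus-three-standard-deviations threshold. It is an illustration only, not the tool's actual code, and the example durations are made up:

```python
import statistics

def find_failures(execution_times):
    """Flag runs whose execution time exceeds mean + 3 standard deviations.

    `execution_times` is a list of durations (in seconds) logged for one
    batch of test program runs. Returns the indices of the runs counted as
    failures. Illustrative only; the real tool reads these values from the
    logs collected on the master VM.
    """
    mean = statistics.mean(execution_times)
    stdev = statistics.stdev(execution_times)
    threshold = mean + 3 * stdev
    return [i for i, t in enumerate(execution_times) if t > threshold]

# Example: one run in the batch is much slower than the rest.
batch = [1.02, 0.98, 1.05, 1.01, 4.70, 0.99]
print(find_failures(batch))  # -> [4]
```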

Based on the number of batches and test program runs per batch, as well as the number of failures, VM Reliability Tester computes a failure rate sample. This sample is then used to predict failure rate estimates for production VM infrastructures.
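A similarly hedged sketch of how such a failure rate sample could be derived from the per-batch failure counts (again our own illustration with made-up numbers, not the tool's exact implementation):

```python
def failure_rate_sample(failures_per_batch, runs_per_batch):
    """Compute one failure-rate observation per batch.

    `failures_per_batch` holds the number of flagged runs in each batch and
    `runs_per_batch` is the configured number of test program runs per batch.
    The resulting sample can then be fed into a distribution fit to estimate
    failure rates of production VMs.
    """
    return [f / runs_per_batch for f in failures_per_batch]

sample = failure_rate_sample([0, 1, 0, 2], runs_per_batch=50)
print(sample)  # -> [0.0, 0.02, 0.0, 0.04]
```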

Setup and Installation

Prerequisites for installation of VM Reliability Tester are:

  • You must have valid OpenStack authentication credentials and provide them in the setup file “openrc.py”.
  • You have to provide a private/public keypair for authentication with the VMs that you own. The local paths to your public and private key files must be added to a “config.ini” and a “remote_config.ini” file.
  • You must have a PC or laptop with Python and the required Python libraries installed.

Installation of the tool is done easily by cloning the Github repository and changing the contents of the files openrc.py, config.ini and remote_config.ini. Once you have cloned the VM Reliability Tester repository and made the configuration file changes, you only need to run vm-reliability-tester.py. The script will create several CSV files that contain the failure rates of the VMs and the possible distributions of the failure rate.

Github page

VM Reliability Tester is available on the following GitHub page:

https://github.com/icclab/vm-reliability-tester

Quota Management in OpenStack

[Note: This blog post was originally published on the XiFi blog here.]

One of the main jobs performed by the Infrastructures in XiFi is to manage quotas: the available resources are not infinite, and consequently resource management is necessary. In OpenStack this is done through quotas. Here we discuss how we work with them in OpenStack.
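Quotas can also be handled programmatically. Below is a rough, hedged sketch using python-novaclient; the credentials, endpoint and project ID are placeholders, and the exact constructor arguments vary between novaclient releases, so treat it as a starting point rather than a recipe:

```python
from novaclient import client

# Placeholders: in practice these values come from your openrc environment.
# The constructor signature differs slightly between novaclient releases.
nova = client.Client("2", "admin", "secret", "demo",
                     "http://controller:5000/v2.0")

tenant_id = "0123456789abcdef0123456789abcdef"  # hypothetical project ID

# Inspect the current compute quota set for a project.
quotas = nova.quotas.get(tenant_id)
print(quotas.instances, quotas.cores, quotas.ram)

# Raise the limits for that project (admin credentials required).
nova.quotas.update(tenant_id, instances=20, cores=40, ram=51200)
```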

Continue reading

How to install a multi-region DevStack, Part 1

Introduction
I introduced Disaster Recovery (DR) services in the tutorial in the last ICCLab newsletter, which also gave an overview of possible OpenStack configurations. Several configuration options could be considered. In particular, when a stakeholder has both the role of cloud provider and of DR service provider, a suitably safe configuration consists of distributing the infrastructure across different geographic locations. OpenStack gives the possibility to organise the controllers into different Regions which share the same Keystone. Here you will find an overall specification using Heat; I will simulate the same configurations using DevStack on VirtualBox environments. One of the aims of this blog post is to support students who are using the Juno DevStack.

There will be a second part of this tutorial to show a possible implementation of the DR services lifecycle between two regions.

Continue reading

Reliability Analysis of OpenStack VMs using Python, fabric and R – Part 1: Reliability Concepts

How reliable are your OpenStack VMs? How many outages should you expect during 8 months of operation? Do your VMs crash regularly, randomly, or do VM outages increase over time? These questions can only be answered if we perform a reliability analysis of the virtual machines that we manage. In this small guide we show you how to check the reliability of VMs in your OpenStack environment. In part 1 of this 4 part series we explain the basic concepts of reliability engineering.

The vast field of reliability engineering has been used widely in various engineering disciplines such as aircraft design, civil engineering, electricity management and product management. Though reliability engineering has proven to help in successfully building high-quality engineering products, it has almost never been used in cloud computing so far. There might be some distrust among programmers towards these scientifically proven reliability analysis methods, since they involve math and statistical exploration. But with a little introduction this is not a problem we should worry about.

Reliability engineering simply deals with analyzing and measuring the outage behavior of engineered systems, trying out and testing system improvements that make the system more reliable, implementing those improvements and validating whether they have reduced the occurrence of outages or not. The first step is the analysis of outage behavior. How can outages be analyzed?
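As a small taste of what such an analysis involves, the sketch below (our own illustration with made-up outage data, not part of the series' tooling) derives two basic reliability figures, the empirical failure rate and the mean time between failures (MTBF), from a list of recorded outage times:

```python
# Hours of operation at which VM outages were observed (illustrative data).
outage_times = [120.0, 410.5, 980.0, 1702.3, 2400.8]
observation_period = 8 * 30 * 24  # roughly 8 months of operation, in hours

failures = len(outage_times)
failure_rate = failures / observation_period  # failures per hour of operation
mtbf = observation_period / failures          # mean time between failures

print(f"failure rate: {failure_rate:.5f} per hour")
print(f"MTBF: {mtbf:.1f} hours")

# A constant failure rate corresponds to exponentially distributed times
# between failures; checking whether the observed inter-failure times fit
# that assumption is what the later parts of this series look at.
```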

Continue reading

OpenStack Summit – Deep Dive into Day 2

CERN OpenStack (super) User Story
CERN is looking for answers to the fundamental questions concerning the creation of the Universe and, true to its nature, it is a big data challenge. With the historic run of the LHC in 2013, their archive now contains ~100 PB (growing by an additional 27 PB/year) across ~11,000 servers with ~75,000 disk drives and ~45,000 tapes, and with the reopening of the LHC they expect a significant increase of data in 2015. CERN recently opened a new data center in Budapest, connected to the Geneva headquarters by a T-Systems 100 GbE line.
CERN currently runs four OpenStack Icehouse clouds and expects these to run a total of 150,000 cores by Q1 2015. All of CERN's non-specific code is upstream and available for anyone who would like to build on top of it in the future.
CERN puts great emphasis on collaboration. The Openlab project is a public-private partnership between CERN and major ICT companies (e.g. Rackspace) whose goal is to accelerate the development of cutting-edge cloud solutions.
OpenShift on OpenStack
RedHat and Cisco gave a demo on deploying OpenShift on OpenStack using Heat, Docker & Kubernetes. OpenShift is a PaaS offering from RedHat with both enterprise and open source versions. The idea behind deploying OpenShift on OpenStack is to maintain a high degree of flexibility and enable faster deployment of applications. In the demo, Heat was used for orchestration. Docker's pull and push methodology is used to get a new image or to save a modified version that can be pulled later on; along with tagging of images, a diff operation can also be performed. Docker containers are also used as daemons. However, Docker cannot see beyond a single host and does not have the capacity to manage mass configuration and deployment. That is where Kubernetes comes into the picture. Here Pods correspond to Docker containers, and etcd is used to configure the master, which passes the configuration along to the slaves, thereby achieving mass configuration. The link to the presentation can be found here.

OpenStack Summit – Deep Dive into Day 1

Towards a Self-Driving Infrastructure
With the increasing popularity of OpenStack, it is imperative to have an easy process to deploy and maintain it. In this regard, researchers from Tsinghua University together with Huawei have developed a “Deployment as a Service” tool called Compass.
The talk described a use case of deploying about 200 VMs for a Big Data application, with the main phases being installation and operation of the whole setup. The fundamental problem of operational knowledge not being transferred due to personnel changes was mentioned as a main roadblock in projects like these, and this is tackled by the Compass project, where the configuration steps have been reduced to a bare minimum. The researchers from Huawei mentioned that the project was designed with extensibility & automation as main features and is also independent of OpenStack. The code has been open-sourced and is in a stable state.
Experience with OpenStack in Telco Infrastructure Transformation
With the widespread adoption of NFV among telcos, day 1 had an interesting panel discussion between cloud vendors and telco players, which included Verizon, AT&T, Vodafone, Huawei & Mirantis. From a cloud vendor perspective, Mirantis was of the opinion that a virtual infrastructure manager as defined by NFV enhances the agility of a carrier, and in a quick follow-up survey of the audience, many attendees were aware of NFV and its advantages. Telecom companies were more concerned with service assurance and the impact of API changes caused by frequent updates of open source projects like NFV. However, both parties agreed that the open source model is the right way forward for standardization, especially in the telecom domain.
Load Balancing as a Service v2.0 – Juno and Beyond 
The LBaaS extension allows tenants to load-balance their VMs' traffic. For the Juno release, cloud providers such as Rackspace, HP, etc. have partnered with the community and load balancer vendors to redefine the Load Balancing as a Service APIs (API v2.0) to address tenant needs. Load Balancing as a Service also enables adjusting application resources to changing demands by scaling the application resources in and out.

MobileCloud Networking Live @ Globecomm

As part of the on-going work in MobileCloud Networking, the project will demonstrate its outputs at this year's Globecomm industry-track demonstrations. Globecomm is being held this year in Austin, Texas.

The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show new innovative revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software and cloud infrastructure, providing a relevant view on how the new virtualised environment will be implemented.

To take advantage of the technologies offered by cloud computing, today's communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarization of network components. Within the MobileCloud Networking project this new paradigm has been developed, and early results are already available to be exploited by the community. In particular, this demonstration aims at deploying a mobile core network on a cloud infrastructure and showing the automated, elastic and flexible mechanisms that are offered by such technologies for typical networking services. This demonstration aims at showing how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging key technologies of OpenStack and OpenShift.


The scenario will be as follows:

  1. A tenant (Enterprise End User (EEU), in MCN terminology) – which may be an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager, the service front-end where tenants can request the automated creation of a service instance via API or user interface (a hedged sketch of such an API request follows the scenario list). In particular, the deployment of such a core network will be on top of a cloud hosted in Europe. At the end of the provisioning procedures, the mobile core network endpoints will be communicated to the EEU.
  2. The EEU will have the possibility to access the web frontend of the Home Subscriber Server (HSS) and provision new subscribers. That subscriber information will also be used to configure the client device (in our case a laptop).
  3. The client device will send the attachment requests to the mobile core network and establish a connectivity service. Since at the moment of the demonstration the clients will be located in the USA, there will be a VPN connection to the eNodeB emulator through which the attachment request will be sent. At the end of the attachment procedure all the data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
  4. The clients attached to the network will establish a call making use of the IP Multimedia Subsystem provided by the MVNO. During the call, the MVNO administrator can open the Monitoring as a Service tool provided by the MCN platform and check the current status of the services. For this, two IMS clients will be installed on the demonstration device.
  5. At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources which are no longer necessary. After this operation the MVNO will receive a bill indicating the cost of running such a virtualized core network.
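To give a flavour of step 1 of the scenario, the sketch below issues an OCCI-style provisioning request to the MCN Service Manager using Python's requests library. The endpoint URL, category scheme and token are hypothetical placeholders; the real values are defined by the project's Service Manager implementation:

```python
import requests

# Hypothetical Service Manager endpoint and OCCI category; the actual values
# are defined by the MCN deployment being demonstrated.
SERVICE_MANAGER = "http://service-manager.example.org:8888/epc/"

headers = {
    "Content-Type": "text/occi",
    "X-Auth-Token": "<keystone-token>",  # placeholder credential
    "Category": 'epc; scheme="http://schemas.mobile-cloud-networking.eu/occi/sm#"; class="kind"',
}

# Request the automated creation of a mobile core network service instance.
response = requests.post(SERVICE_MANAGER, headers=headers)
response.raise_for_status()

# The Service Manager would answer with the location of the new service
# instance, which the tenant (EEU) can then poll for its endpoints.
print(response.headers.get("Location"))
```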

It specifically includes:

  • An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and of a virtual telecom platform. The service is delivered from the radio head all the way through the core network to the delivery of IMS services. The orchestration framework is built on an open source framework available under the Apache 2.0 license, to which the ICCLab actively develops and contributes.
  • Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TM Forum) and cloud computing standards (OCCI).
  • A basic monitoring system for providing momentary capacity and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
  • An accounting-billing system for providing cost and billing functions back to the tenant or the provisioned service instance. This will be part of the orchestrated composition.
  • A set of virtualised network functions:
      • A realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore,
      • A realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit,
      • An LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation.
  • Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.