Month: November 2014 (page 2 of 2)

Dynamic Rating, charging & billing for cloud – a micro service perspective

Rating, Charging & Billing (RCB) has been an ongoing research initiative at the ICCLab. As part of this work, a proof of concept (PoC) for OpenStack has been developed and released under the Apache License v2.0. The PoC is a standalone application that collects resource usage data from Ceilometer for a given time period, defines a pricing function per user, determines the price for the resources each user consumed, and finally generates a bill as a PDF. This is demonstrated in a video.
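As a rough illustration of the rating step described above, the following Python sketch shows what a per-user pricing function applied to a usage record could look like. The metric names, rates and record fields are invented for illustration and do not correspond to the PoC's actual data model.

    # Illustrative only: metric names, rates and record fields are assumptions,
    # not the data model used by the PoC.
    PRICING = {
        "alice": {"cpu_hours": 0.04, "gb_storage_hours": 0.002},  # per-user rate card
        "bob":   {"cpu_hours": 0.05, "gb_storage_hours": 0.001},
    }

    def rate_usage(record):
        """Turn one usage record into a charge for the billing step."""
        rates = PRICING.get(record["user"], {})
        cost = sum(rates.get(metric, 0.0) * value
                   for metric, value in record["usage"].items())
        return {"user": record["user"], "period": record["period"],
                "charge": round(cost, 4)}

    usage = {"user": "alice", "period": "2014-11",
             "usage": {"cpu_hours": 120, "gb_storage_hours": 3000}}
    print(rate_usage(usage))  # {'user': 'alice', 'period': '2014-11', 'charge': 10.8}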

Even though the architecture used in the PoC was able to demonstrate all the basic RCB functionality, it was neither extensible nor suitable for deployment in a distributed environment. It was also observed that the data produced at the different stages of the RCB process has the potential to serve as a platform on which other applications could be built. To address these drawbacks, an attempt has been made to redesign the architecture from a modular, micro-service perspective.

Architecture

CYCLOPS Architecture

The new architecture has been split into multiple micro services, consisting of a User Data Records (UDR) Generator μ-service, a Rating & Charging μ-service, a Billing μ-service, a message broker μ-service and an authentication service.

Continue reading

Swiss FIWARE Acceleration Conference – 5 December 2014


Register free HERE

1st Swiss FI-WARE Acceleration Conference – 5 December 2014
Zurich University of Applied Sciences ICCLab
Technikumstrasse 9, 8400 Winterthur, CH – Room TL201 (chemistry building)

FIWARE, in collaboration with the FI-PPP projects, is hosting a series of events in several cities, offering an excellent opportunity to receive training and coaching on FIWARE enablers and the open calls.

The event offers Swiss small enterprises and web entrepreneurs the opportunity to present project ideas to the professional A16 accelerators, who will guide you through the difficult time of developing your application and building your business with their expertise.

Get funding of up to 150,000 Euro (100% of your costs)!

AGENDA
09:00-10:00 Registration and Welcome
10:00 Overview of the Future Internet PPP – European Commission, Ragnar Bergström
10:30 FIWARE project and ecosystem – ZHAW ICCLab, Thomas Michael Bohnert
11:00 Swiss industry representative: Equinix
11:30 A16 Speedup! Europe Open Calls – Olaf-Gerd Gemein
12:00 Break
12:15 A16 SOUL-FI Open Calls, Nuno Varandas

12:45 Networking Lunch – A16 face to face meetings

13:30 Guide for the applicants
14:00 Wrap-up and Closing

REGISTER FREE HERE

Sponsors: Equinix, FI-PPP CONCORD

Use pacemaker and corosync on Illumos (OmniOS) to run a HA active/passive cluster

In the Linux world, a popular approach to building highly available clusters is with a set of software tools that includes pacemaker (as the resource manager) and corosync (as the group communication system), plus the libraries on which they depend and some configuration utilities.

On Illumos (and in our particular case, OmniOS), the ihac project is abandoned and I couldn’t find any other mature, open source, platform-specific clustering framework. Porting pacemaker to OmniOS is an option, and this post is about our experience with this task.

The objective of this post is to describe how to get an active/passive pacemaker cluster running on OmniOS and to test it with a Dummy resource agent. The use case (or test case) is not relevant here; what should be achieved in a correctly configured cluster is that, if the node running the Dummy resource (the active node) fails, that resource fails over and is started on the other node (high availability).

I will assume you start from a fresh installation of OmniOS 151012 with a working network configuration (and SSH, for your comfort!). Check the general administration guide if needed.

This is what we will cover:

  • Configuring the machines
  • Patching and compiling the tools
  • Running pacemaker and corosync from SMF
  • Running an active/passive cluster with two nodes to manage the Dummy resource
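The cluster itself is configured with the pacemaker command-line tools, which the full post covers. As a small, hedged illustration of the fail-over test described above, the Python sketch below polls pacemaker's one-shot status output (crm_mon -1) to see on which node the Dummy resource is currently started; the resource name p_dummy and the output parsing are assumptions.

    import subprocess
    import time

    RESOURCE = "p_dummy"  # assumed name of the Dummy resource primitive

    def active_node():
        """Return the node on which the resource is started, or None."""
        status = subprocess.check_output(["crm_mon", "-1"]).decode()
        for line in status.splitlines():
            # crm_mon -1 prints lines like: "p_dummy (ocf::pacemaker:Dummy): Started node1"
            if RESOURCE in line and "Started" in line:
                return line.split()[-1]
        return None

    before = active_node()
    print("Dummy resource running on:", before)
    time.sleep(60)  # meanwhile, power off or fence the active node
    after = active_node()
    print("Dummy resource now on:", after)
    print("Fail-over OK" if after and after != before else "No fail-over observed")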

Continue reading

OpenStack Summit – Deep Dive into Day 2

CERN OpenStack (Super) User Story
CERN is looking for answers to fundamental questions about the creation of the Universe and, true to its nature, this is a big data challenge. Following the historic LHC run that ended in 2013, their archive now contains ~100 PB (with an additional 27 PB/year) on ~11,000 servers with ~75,000 disk drives and ~45,000 tapes, and with the restart of the LHC they expect a significant increase of data in 2015. CERN recently opened a new data centre in Budapest, connected to the Geneva headquarters by a T-Systems 100 GbE line.
CERN currently runs four OpenStack Icehouse clouds and expects them to run 150,000 cores in total by Q1 2015. All of CERN’s non-specific code is upstream and available for anyone who would like to build on top of it in the future.
CERN puts great emphasis on collaboration. The OpenLab project is a public-private partnership between CERN and major ICT companies (e.g. Rackspace) whose goal is to accelerate the development of cutting-edge cloud solutions.
OpenShift on OpenStack
RedHat and Cisco gave a demo on deploying OpenShift on OpenStack using Heat, Docker & Kubernetes. OpenShift is a PaaS offering from RedHat with both enterprise and open source versions. The rationale for deploying OpenShift on OpenStack is to maintain a high degree of flexibility and enable faster deployment of applications. In the demo, Heat was used for orchestration. Docker’s pull and push methodology is used for fetching a new image or saving a modified version that can be pulled later on; images can be tagged and a diff operation is also available. Docker containers are also used as daemons. However, Docker cannot see beyond a single host and does not have the capacity to manage mass configuration and deployment. That is where Kubernetes comes into the picture: pods correspond to Docker containers, and etcd is used to configure the master, which passes the configuration along to the slaves, thereby achieving mass configuration. The link to the presentation can be found here.
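As a rough sketch of the pull/tag/push workflow mentioned above (not the code used in the demo), the Docker SDK for Python can drive the same operations; the image and registry names below are placeholders.

    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # Pull a base image from a registry (placeholder image name)
    image = client.images.pull("registry.example.org/myapp", tag="v1")

    # Tag the (possibly modified) image so it can be pushed and pulled later
    image.tag("registry.example.org/myapp", tag="v2")

    # Push the new tag back to the registry
    client.images.push("registry.example.org/myapp", tag="v2")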

OpenStack Summit – Deep Dive into Day 1

Towards a Self-Driving Infrastructure
With the increasing popularity of OpenStack, it’s imperative to have an easy process to deploy and maintain it. To this end, researchers from Tsinghua University, together with Huawei, have developed a “Deployment as a Service” called Compass.
The talk described a use case of deploying about 200 VMs for a big data application, with the main phases being the installation and operation of the whole setup. The fundamental problem of operational knowledge not being transferred due to personnel changes was mentioned as a main roadblock in projects like these; this is tackled by the Compass project, where the configuration steps have been reduced to a bare minimum. The researchers from Huawei mentioned that the project was designed with extensibility & automation as main features and is also independent of OpenStack. The code has been open-sourced and is in a stable state.
Experience with OpenStack in Telco Infrastructure Transformation
With the widespread adoption of NFV among telcos, day 1 had an interesting panel discussion between cloud vendors and telco players, including Verizon, AT&T, Vodafone, Huawei & Mirantis. From a cloud vendor perspective, Mirantis was of the opinion that a virtual infrastructure manager for NFV enhances the agility of a carrier, and in a quick follow-up survey of the audience, many attendees were aware of NFV and its advantages. The telecom companies were more concerned with service assurance and the impact of API changes caused by the frequent updates of the open source projects used for NFV. However, both parties agreed that the open source model is the right way forward for standardization, especially in the telecom domain.
Load Balancing as a Service v2.0 – Juno and Beyond 
The LBaaS extension allows tenants to load-balance the traffic of their VMs. For the Juno release, cloud providers such as Rackspace, HP, etc. have partnered with the community and load-balancer vendors to redefine the Load Balancing as a Service APIs (API v2.0) to address tenant needs. Load Balancing as a Service also enables adjusting application resources to changing demands by scaling the application resources in and out.

MobileCloud Networking Live @ Globecomm

As part of the ongoing work in MobileCloud Networking, the project will demonstrate its outputs at this year’s Globecomm industry-track demonstrations. Globecomm is being held this year in Austin, Texas.

The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show new innovative revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV-compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software on a cloud infrastructure, providing a relevant view of how the new virtualised environment will be implemented.

To take advantage of the technologies offered by cloud computing, today’s communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarization of network components. Within the MobileCloud Networking project this new paradigm has been developed, and early results are already available to be exploited by the community. In particular, this demonstration aims at deploying a Mobile Core Network on a cloud infrastructure and showing the automated, elastic and flexible mechanisms that are offered by such technologies for typical networking services. The demonstration shows how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging key technologies of OpenStack and OpenShift.


The scenario will be as follows:

  1. A tenant (Enterprise End User (EEU), in MCN terminology) – possibly an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager, the service front-end where tenants can request the automated creation of a service instance via API or user interface (a hypothetical API request is sketched after this list). In particular, the deployment of this core network will be on top of a cloud hosted in Europe. At the end of the provisioning procedure, the mobile core network endpoints will be communicated to the EEU.
  2. The EEU will have the possibility to access the web frontend of the Home Subscriber Server (HSS) and provision new subscribers. The subscriber information will also be used to configure the client device (in our case a laptop).
  3. The client device will send the attachment requests to the mobile core network and establish a connectivity service. Since at the time of the demonstration the clients will be located in the USA, there will be a VPN connection to the eNodeB emulator through which the attachment request is sent. At the end of the attachment procedure all data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
  4. The clients attached to the network will establish a call making use of the IP Multimedia Subsystem provided by the MVNO. During the call the MVNO administrator can open the Monitoring as a Service tool provided by the MCN platform and check the current status of the services. For this, two IMS clients will be installed on the demonstration device.
  5. At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources which are no longer necessary. After this operation the MVNO will receive a bill indicating the costs of running such a virtualized core network.
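Since the stack adopts OCCI (noted below), step 1 could, purely for illustration, look roughly like the following HTTP request. The endpoint URL, the category term/scheme and the token are hypothetical placeholders, not the actual MCN API.

    import requests

    SM_ENDPOINT = "http://sm.example.org:8888/epc/"  # hypothetical Service Manager URL

    headers = {
        # OCCI text rendering: the kind of service instance to create (scheme is a placeholder)
        "Category": 'epc; scheme="http://schemas.example.org/occi/sm#"; class="kind"',
        "Content-Type": "text/occi",
        "X-Auth-Token": "<keystone-token>",  # assumption: authentication via a Keystone token
    }

    # Ask the Service Manager to create a new mobile core network service instance
    response = requests.post(SM_ENDPOINT, headers=headers)
    response.raise_for_status()
    print("Service instance created at:", response.headers.get("Location"))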

The demonstration specifically includes:

  • An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and of a virtual telecom platform. The service is delivered from the radio head all the way through the core network to the delivery of IMS services. The orchestration framework is developed on an open source framework available under the Apache 2.0 license, to which the ICCLab actively develops and contributes.
  • Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TMForum) and cloud computing standards (OCCI).
  • A basic monitoring system for providing momentary capacity and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
  • An accounting-billing system for providing cost and billing functions back to the tenant or the provisioned service instance. This will be part of the orchestrated composition.
  • A set of virtualised network functions:
    • A realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore
    • A realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit
    • An LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation
  • Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.

OpenStack Summit 2014 Paris – First Impressions

The OpenStack Summit is a five-day conference for developers, users, and administrators of OpenStack cloud software. This time it takes place in the Palais des Congrès in Paris, from November 2 to November 7. The ICCLab is attending the OpenStack Summit for the very first time.

Continue reading

Nagios OpenStack Installer – Automated monitoring of your OpenStack VMs

There are many tools available to monitor the operation of the OpenStack infrastructure itself, but as an OpenStack user you might not be interested in monitoring OpenStack; your primary interest is the operation of the VMs that are hosted on OpenStack. Nagios OpenStack Installer is a tool for exactly that purpose: it sets up a Nagios VM inside the OpenStack environment and configures it to monitor all VMs that you own.

Nagios OpenStack Installer configures your OpenStack monitoring environment remotely from your desktop PC or laptop. In order to use Nagios OpenStack Installer you need to fulfil the following prerequisites.

  • You must have an SSH key for securely accessing the Nagios VM and the VMs you own, and you must know the SSH credentials to access the VMs.
  • You must know your OpenStack user account (name and id), your OpenStack password, the OpenStack Keystone authentication URL and the OpenStack tenant (“project”) (name and id) you work with.
  • You must be able to create a VM that serves as Nagios VM and you must own a publicly available IP (“floating IP”) to make the Nagios dashboard accessible to the outside world.
  • Nagios OpenStack Installer is a Python tool and requires some Python packages. Make sure to install Python 2.7 on your desktop. Additionally you need the following packages:
    • pip: The package manager to install Python packages from the PyPI repository (Windows users should refer to the pip developer’s “get pip” manual to install pip, Cygwin users are recommended to follow these guidelines in atbrox blog).
    • fabric: This package is used to access the OpenStack VMs via SSH and remotely execute tasks on them (see the short sketch after this list).
    • python-keystoneclient: To access the OpenStack Keystone API and authenticate to your OpenStack environment.
    • python-novaclient: To manage VMs which are hosted on OpenStack.
    • cuisine: This is a configuration management tool and lightweight alternative to configuration managers like Puppet or Chef. cuisine is required to manage the packages and configuration files on the Nagios VM and the monitored VMs.
    • pickle: pickle is an object serialization tool that can store objects and their current state in a file dump. Object serialization is used to pass along the list of VMs which should be monitored.
    • We recommend using pip to install the required packages, since pip automatically installs package dependencies.
  • You must have Git downloaded and installed.
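As a minimal illustration of how fabric is typically used for such remote tasks (not the installer's own code), the sketch below runs a command on two VMs over SSH; the host addresses, user name and key path are placeholders.

    from fabric.api import env, run, execute  # fabric 1.x API

    env.user = "ubuntu"                  # placeholder: SSH user of your VMs
    env.key_filename = "~/.ssh/id_rsa"   # placeholder: your SSH key

    def check_nrpe():
        # Check on the remote VM whether the NRPE agent is running
        run("pgrep nrpe || echo 'NRPE not running'")

    if __name__ == "__main__":
        execute(check_nrpe, hosts=["192.0.2.10", "192.0.2.11"])  # placeholder IPs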

After having installed the prerequisites on your local PC or laptop, you can use Nagios OpenStack Installer by performing the following steps.

  1. Create a new directory and clone the Nagios OpenStack Installer Github repository into it: git clone https://github.com/icclab/kobe6661-nagios-openstack-installer.git
  2. Edit the credentials in install_autoconfig.py, remote.py, remote_server_config.py and vm_list_extractor.py to match your OpenStack and SSH credentials.
  3. Run remote_server_config.py from the Python console. This installs and configures the Nagios server on your Nagios VM. After installation you should be able to access the Nagios dashboard by pointing your web browser to “http://<your_nagios_public_ip>/nagios” and providing your Nagios login credentials.
  4. Run vm_list_extractor.py from the Python console. This will extract the list of VMs on OpenStack that should be monitored and save the list as a pickle file dump on your computer (a simplified sketch of this step is shown after the list).
  5. Run install_autoconfig.py from Python console. This will upload the Python scripts required to automatically update the Nagios configuration in case of changes in the OpenStack VM environment (nagios_config_updater.py, config_transporter.py, config_generator.py, vm_list_extractor.py). Additionally it will run these Python scripts on the Nagios VM to let Nagios capture the VMs which should be monitored, install and run the required Nagios and NRPE plugins on these VMs and reconfigure and restart Nagios server to monitor these VMs remotely.
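To illustrate step 4, here is a minimal sketch, not the actual vm_list_extractor.py, of how the VM list could be retrieved with python-novaclient and stored with pickle; the credentials and the output file name are placeholders.

    import pickle
    from novaclient import client as nova_client

    # Placeholders: replace with your own OpenStack credentials
    nova = nova_client.Client("2", "myuser", "mypassword", "mytenant",
                              "http://keystone.example.org:5000/v2.0")

    # Collect name and network addresses of every VM in the tenant
    vms = [{"name": server.name, "addresses": server.addresses}
           for server in nova.servers.list()]

    # Dump the list so the other installer scripts can load it later
    with open("vm_list.pkl", "wb") as f:
        pickle.dump(vms, f)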

Now the Nagios environment is installed and you should be able to monitor your VMs. Nagios OpenStack Installer is available in ICCLab’s Github repository. Feel free to try it out and give feedback for future improvements.
