Register free HERE
1st SWISS FI-WARE acceleration Conference 5-Dec 2014
Zurich University of Applied Sciences ICCLab
Technikumstrasse 9, 8400 – Winterthur CH – Room: TL201 (chemistry building)
FIWARE, in collaboration with the FI-PPP projects, is hosting a series of events in several cities, bringing an excellent opportunity to receive training and coaching on FIWARE enablers and open calls.
The event offers Swiss small enterprises and web entrepreneurs the opportunity to present project ideas to the professional A16 accelerators.
They will guide you through the difficult time of developing your application and building your business with their expertise.
Get funding of up to 150,000 Euro (100% of your costs)!
09:00-10:00 Registration and Welcome
10:00 Overview of the Future Internet PPP – European Commission, Ragnar Bergström
10:30 FIWARE project and ecosystem – ZHAW ICCLab, Thomas Michael Bohnert
11:00 Swiss industry representative: Equinix
11:30 A16 Speedup! Europe Open Calls – Olaf-Gerd Gemein
12:15 A16 SOUL-FI Open Calls, Nuno Varandas
12:45 Networking Lunch – A16 face to face meetings
13:30 Guide for the applicants
14:00 Wrap-up and Closing
In the Linux world, a popular approach to build highly available clusters is with a set of software tools that include pacemaker (as resource manager) and corosync (as the group communication system), plus other libraries on which they depend and some configuration utilities.
On Illumos (and in our particular case, OmniOS), the ihac project is abandoned and I couldn’t find any other platform-specific, mature, open source clustering framework. Porting pacemaker to OmniOS is an option, and this post is about our experience with this task.
The objective of the post is to describe how to get an active/passive pacemaker cluster running on OmniOS and to test it with a Dummy resource agent. The use case (or test case) is not relevant, but what should be achieved in a correctly configured cluster is that, if the node of the cluster running the Dummy resource (active node) fails, then that resource should fail-over and be started on the other node (high availability).
I will assume you are starting from a fresh installation of OmniOS 151012 with a working network configuration (and SSH, for your comfort!). Check the general administration guide, if needed.
This is what we will cover:
- Configuring the machines
- Patching and compiling the tools
- Running pacemaker and corosync from SMF
- Running an active/passive cluster with two nodes to manage the Dummy resource
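To give an idea of what the configuration step amounts to, a minimal two-node corosync setup looks roughly like the sketch below. This is an illustrative corosync.conf fragment, not the exact file used in our setup; the cluster name and node addresses are placeholders you would adapt to your network.

```
totem {
    version: 2
    cluster_name: omnios-ha        # placeholder cluster name
    transport: udpu                # unicast UDP, avoids multicast requirements
}

nodelist {
    node {
        ring0_addr: 192.168.1.101  # placeholder address of node 1
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.102  # placeholder address of node 2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1                    # two-node cluster: relax quorum rules
}
```

If the crm shell builds on OmniOS as well, the Dummy resource would then be added with something like `crm configure primitive p_dummy ocf:pacemaker:Dummy op monitor interval=30s`.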
Towards a Self-Driving Infrastructure
Experience with OpenStack in Telco Infrastructure Transformation
With the widespread adoption of NFV among the telcos, day 1 had an interesting panel discussion between the cloud vendors and the telco players, which included Verizon, Vodafone, Huawei & Mirantis. From a cloud vendor perspective, Mirantis argued that the NFV virtual infrastructure manager enhances the agility of a carrier, and a quick follow-up survey of the audience showed that many attendees were aware of NFV and its advantages. Telecom companies were more concerned with service assurance and the impact of API changes that result from frequent updates of open source projects. However, both parties agreed that the open source model is the right way forward for standardization, especially in the telecom domain.
Load Balancing as a Service v2.0 – Juno and Beyond
The LBaaS extension allows tenants to load-balance traffic to their VMs. For the Juno release, cloud providers such as Rackspace have partnered with the community and load balancer vendors to redefine the Load Balancing as a Service APIs (API v2.0) to address tenant needs. Load Balancing as a Service also enables applications to adapt their resources to changing demand by scaling application resources in and out.
As part of the ongoing work in MobileCloud Networking, the project will demonstrate its outputs at this year’s Globecom industry-track demonstrations. Globecom is being held this year in Austin, Texas.
The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show new innovative revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software and cloud infrastructure, providing a relevant view of how the new virtualised environment will be implemented.
To take advantage of the technologies offered by cloud computing, today’s communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarization of network components. Within the Mobile Cloud Networking project this new paradigm has been developed, and early results are already available to be exploited by the community. In particular, this demonstration deploys a Mobile Core Network on a cloud infrastructure and shows the automated, elastic and flexible mechanisms that such technologies offer for typical networking services. It aims at showing how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging key technologies of OpenStack and OpenShift.
The scenario will be as follows:
- A tenant (Enterprise End User (EEU), in MCN terminology) – it may be an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager – the service front-end where tenants can request the automated creation of a service instance via API or user interface. In particular, the deployment of this core network will be on top of a cloud hosted in Europe. At the end of the provisioning procedures, the mobile core network endpoints will be communicated to the EEU.
- The EEU will have the possibility to access the Web frontend of the Home Subscriber Server (HSS) and provision new subscribers. That subscriber information will also be used to configure the client device (in our case a laptop).
- The client device will send attach requests to the mobile core network and establish a connectivity service. Since the clients will be located in the USA at the time of the demonstration, the attach request will be sent over a VPN connection to the eNodeB emulator. At the end of the attachment procedure, all data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
- The clients attached to the network will establish a call using the IP Multimedia Subsystem provided by the MVNO. During the call, the MVNO administrator can open the Monitoring as a Service tool provided by the MCN platform and check the current status of the services. For this purpose, two IMS clients will be installed on the demonstration device.
- At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources that are no longer necessary. After this operation, the MVNO will receive a bill indicating the costs of running the virtualized core network.
It specifically includes:
- An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and of a virtual telecom platform. The service is delivered from the radiohead all the way through the core network to the delivery of IMS services. The orchestration framework is built on an open source framework available under the Apache 2.0 license, to which the ICCLab actively develops and contributes.
- Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TMForum) and cloud computing standards (OCCI).
- A basic monitoring system for providing momentary capacity and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
- An accounting-billing system for providing cost and billing functions back to the tenant or the provisioned service instance. This will be part of the orchestrated composition.
- A set of virtualised network functions:
- A realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore
- A realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit,
- An LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation
- Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.
There are many tools available for monitoring the operation of an OpenStack infrastructure, but as an OpenStack user you might not be interested in monitoring OpenStack itself. Your primary interest should be the operation of the VMs that are hosted on OpenStack. Nagios OpenStack Installer is a tool for exactly that purpose: it uses a Nagios VM inside the OpenStack environment and configures it to monitor all the VMs that you own.
Nagios OpenStack Installer configures your OpenStack monitoring environment remotely from your desktop PC or laptop. In order to use Nagios OpenStack Installer you need to fulfil the following prerequisites.
- You must have an SSH Key for securely accessing the Nagios VM and the VMs you own and you must know the SSH credentials to access the VMs.
- You must know your OpenStack user account (name and id), your OpenStack password, the OpenStack Keystone authentication URL and the OpenStack tenant (“project”) (name and id) you work with.
- You must be able to create a VM that serves as Nagios VM and you must own a publicly available IP (“floating IP”) to make the Nagios dashboard accessible to the outside world.
- Nagios OpenStack Installer is a Python tool and requires some Python packages. Make sure to install Python 2.7 on your desktop. Additionally you need the following packages:
- pip: The package manager to install Python packages from the PyPI repository (Windows users should refer to the pip developer’s “get pip” manual to install pip, Cygwin users are recommended to follow these guidelines in atbrox blog).
- fabric: This package is used to access OpenStack VMs via SSH and remotely execute tasks on the VMs.
- python-keystoneclient: To access the OpenStack Keystone API and authenticate to your OpenStack environment.
- python-novaclient: To manage VMs which are hosted on OpenStack.
- cuisine: This is a configuration management tool and lightweight alternative to configuration managers like Puppet or Chef. cuisine is required to manage the packages and configuration files on the Nagios VM and the monitored VMs.
- pickle: pickle is an object serialization tool (part of the Python standard library) that can store objects and their current state in a file dump. Object serialization is used to get the list of VMs which should be monitored.
- We recommend using pip to install the required packages, since pip automatically installs package dependencies.
- You must have Git downloaded and installed.
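The pickle-based handoff mentioned in the prerequisites can be sketched as follows. This is a minimal illustration, not the project’s actual code: the VM names are placeholders, and the commented-out novaclient call only indicates how such a list would be obtained from OpenStack (credential variable names are assumptions).

```python
import pickle

# In the real tool, the VM list comes from OpenStack via python-novaclient, e.g.:
#   from novaclient.v1_1 import client
#   nova = client.Client(USERNAME, PASSWORD, TENANT_NAME, AUTH_URL)
#   vms = [server.name for server in nova.servers.list()]
vms = ["web-01", "db-01"]  # placeholder VM names for illustration

# Dump the list of monitored VMs to a pickle file...
with open("vm_list.pkl", "wb") as f:
    pickle.dump(vms, f)

# ...and load it back, as the scripts on the Nagios VM would.
with open("vm_list.pkl", "rb") as f:
    monitored = pickle.load(f)

print(monitored)  # → ['web-01', 'db-01']
```

The pickle dump simply decouples VM discovery (run against the OpenStack API) from Nagios configuration (run later, possibly on another host).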
After installing the prerequisites on your local PC or laptop, you can use Nagios OpenStack Installer by performing the following steps.
- Create a new directory and clone the Nagios OpenStack Installer Github repository in it.
git clone https://github.com/icclab/kobe6661-nagios-openstack-installer.git
- Edit the credentials in install_autoconfig.py, remote.py, remote_server_config.py and vm_list_extractor.py to match your OpenStack and SSH credentials.
- Run remote_server_config.py from the Python console. This installs and configures the Nagios server on your Nagios VM. After installation you should be able to access the Nagios dashboard by pointing your web browser to “http://<your_nagios_public_ip>/nagios” and providing your Nagios login credentials.
- Run vm_list_extractor.py from Python console. This will extract the list of VMs on OpenStack that should be monitored and save the list as pickle file dump on your computer.
- Run install_autoconfig.py from Python console. This will upload the Python scripts required to automatically update the Nagios configuration in case of changes in the OpenStack VM environment (nagios_config_updater.py, config_transporter.py, config_generator.py, vm_list_extractor.py). Additionally it will run these Python scripts on the Nagios VM to let Nagios capture the VMs which should be monitored, install and run the required Nagios and NRPE plugins on these VMs and reconfigure and restart Nagios server to monitor these VMs remotely.
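For context on what the automation in the last step sets up, the NRPE configuration managed on each monitored VM boils down to entries like the following. This is an illustrative nrpe.cfg fragment; the Nagios VM address is a placeholder, and plugin paths and thresholds vary by platform.

```
# Allow the Nagios VM to query this host (placeholder IP)
allowed_hosts=192.168.1.10

# Standard checks exposed to the Nagios server via NRPE
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
```

The Nagios server then polls these commands remotely through the NRPE daemon on each VM.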
Now the Nagios environment is installed and you should be able to monitor your VMs. Nagios OpenStack Installer is available on ICCLab’s Github repository. Feel free to try it out and give feedback about future improvements.