Migrate OpenShift applications with os2os

One of the desirable properties which users expect in a modern cloud-hosted application is portability. Users want to migrate portable applications between private and public clouds or between different cloud regions. With container images as portable application implementations and increasingly sophisticated container runtimes, this should be an easy task. But when a containerised application starts to become more complex, a container platform or an orchestration tool needs to be deployed. This adds platform-specific blueprints which, together with the persistent data, make migrating the application tough. The application can then no longer be moved easily between clouds, or even between orchestration tools or container platforms, and loses the desired portability. With the idea in mind that the next generation of Cloud-Native Applications must be deployable to different cloud providers as the requirements change, we are proud to announce the first proof of concept release of os2os, a tool to migrate cloud-native applications between OpenShift installations. While our research on application migration is not limited to this single container platform, we see it as one of the more popular and technically interesting ones.
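To give a feeling for what such a migration involves, here is a minimal manual sketch using plain oc commands, the kind of workflow os2os aims to automate. The cluster URLs, project name and token are placeholders.

# On the source cluster: export the application objects of a project
$ oc login https://source-openshift.example.com --token=<token>
$ oc project myapp
$ oc export all -o yaml > myapp-objects.yaml

# On the target cluster: re-create the exported objects
$ oc login https://target-openshift.example.com --token=<token>
$ oc new-project myapp
$ oc create -f myapp-objects.yaml

# Note: persistent volume data is not covered by this sketch and has to be
# migrated separately, which is exactly the hard part mentioned above.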

Continue reading

OpenShift 3.6 on OpenStack – developer cluster setup

The ECRP Project uses Kubernetes/OpenShift as the base for its Cloud-Robotics PaaS. Apart from running robotic applications distributed across robots and clouds, we wanted to assess whether the latency to the closest public data centre (Frankfurt for both AWS and GKE) would be low enough to run common SLAM and navigation apps. The short answer is YES, although our work there continues.
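As a rough first indication of whether a region is usable, a simple round-trip measurement from the robot's network to the cluster endpoint already tells a lot; a minimal sketch, with the endpoint hostname as a placeholder:

# ICMP round-trip time to the cluster endpoint
$ ping -c 20 openshift-cluster.example.com

# TCP-level check against an actual service port (e.g. a ROS master on 11311)
$ time nc -zv openshift-cluster.example.com 11311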

Thanks to the work of Seán, Bruno, and Remo, the ICCLab has a brand new OpenStack cluster. The Cloud-Robotics crew decided to take it for a spin and to use some of the research grant money earmarked for public clouds for other activities as well (e.g., FaaS / serverless computing).

Continue reading

OpenShift custom router with TCP/SNI support

In the context of the ECRP Project, we need to orchestrate intercommunicating components and services running on robots and in the cloud. The communication between these components relies on several protocols, including L7 protocols as well as L4 protocols such as TCP and UDP.

One of the solutions we are testing as the base technology for the ECRP cloud platform is OpenShift. As a proof of concept, we wanted to test TCP connectivity to components deployed in our OpenShift 1.3 cluster. We chose to run two RabbitMQ instances and make them accessible from the Internet to act as TCP endpoints for incoming robot connections.

The concept of a “route” in OpenShift exists to enable connections from outside the cluster to services and containers. Unfortunately, the default router component in OpenShift only supports HTTP/HTTPS traffic and hence cannot natively support our intended use case. However, OpenShift routing can be extended with so-called “custom routers”.

This blog post will lead you through the process of creating and deploying a custom router supporting TCP traffic and SNI routing in OpenShift.
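To give a rough idea of the pieces involved before diving into the details (the image name, hostnames and service names below are placeholders, not the exact commands of the walkthrough): the custom router is deployed from its own image, and the RabbitMQ instances are then made reachable through routes, with SNI deciding which backend a TLS connection is sent to.

# Deploy a custom router from a purpose-built image
$ oadm router tcp-sni-router --images='myregistry/tcp-sni-router:latest' \
    --service-account=router

# Expose the two RabbitMQ services; the SNI hostname in the TLS handshake
# determines which backend receives the connection
$ oc create route passthrough rabbit1 --service=rabbitmq-1 --hostname=rabbit1.apps.example.com
$ oc create route passthrough rabbit2 --service=rabbitmq-2 --hostname=rabbit2.apps.example.com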

Continue reading

Running Google Cloud Functions in OpenShift

Have you created highly popular and frequently used JavaScript (Node.js) functions for execution in Google Cloud Functions? If so, the economics of FaaS may turn against you due to per-invocation pricing. You might want more options, both for testing the same function locally and for deploying it into an environment with fixed monthly pricing. This blog post explains step by step how to migrate functions from FaaS environments into a fixed per-month pricing container environment. The running example will be Node.js functions in Google Cloud Functions, although the procedure applies similarly to other combinations.
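As a rough sketch of the target environment (the repository URL, project and route names are placeholders): a Node.js function wrapped in a small HTTP server can be built and deployed with OpenShift's source-to-image workflow and then invoked over plain HTTP, much like the original HTTP trigger.

# Build and deploy the wrapped function from a Git repository
$ oc new-app nodejs~https://github.com/example/my-cloud-function.git --name=my-function

# Make it reachable from outside the cluster
$ oc expose service my-function

# Invoke it, mimicking the Cloud Functions HTTP trigger
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"message": "hello"}' \
    http://my-function-myproject.apps.example.com/helloWorld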

Continue reading

Snafu – The Swiss Army Knife of Serverless Computing

The Service Prototyping Lab at Zurich University of Applied Sciences is committed to advancing the state of technology for bringing applications to the cloud, for the benefit of society at large in general and of the local industry in particular. This obliges us to closely monitor industrial trends along with academic advances. A hot topic currently found in both is the higher-PaaS-level service class of FaaS, or Function-as-a-Service, which coincides with the marketing term Serverless Computing. We have already contributed analytical work on finding the limits and possibilities of today’s FaaS systems (preprint), and engineering work on translating legacy monolithic code into fine-grained functions (preprint). It was only a matter of time until the limits in both commercially operated FaaS services and open-source FaaS prototypes became too severe for our work. Hence, after a careful analysis of what is available, we decided to come up with an alternative FaaS host process design. The design led to an architecture, and the architecture eventually to an implementation called Snafu. This post presents Snafu and positions it as a Swiss Army Knife for situations in which functions should be prototyped, tested or hosted.

Continue reading

The intricacies of running containers on OpenShift

In the context of the ECRP Project, which is part of our cloud robotics initiative, we are aiming to build a PaaS solution for robotic applications.

The “Robot Operating System” (ROS) is widely used on several robotics platforms, and also runs on the turtlebot robots in our lab. One of the ideas behind cloud robotics is to enable ROS components (so-called ROS nodes) to run distributed across the cloud infrastructure and the robot itself, so we can shift certain parts of the robotics application to the cloud. As a logical first step we tried to run existing ROS nodes, such as a ROS master, in containers on Kubernetes; then we tried to use a proper Platform as a Service (PaaS) solution, in our case Red Hat OpenShift.

OpenShift offers a full PaaS experience: you can build and run code from source or run pre-built containers directly. All of these features can be managed via an intuitive web interface.

However, OpenShift imposes tight security restrictions on the containers it runs (a short sketch of how to cope with them follows the list below).
These are:

  • Preventing processes in containers from running as root
  • Using a random user ID for running containers (support for arbitrary user IDs)
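A minimal sketch of how these restrictions are typically handled, assuming a project named myproject: either prepare the image so that it runs under an arbitrary non-root user ID, or, where the relaxed security is acceptable, allow a service account to use the anyuid security context constraint.

# Option 1: make the image work with an arbitrary user ID.
# In the Dockerfile, give the root group the same permissions as the owner:
#   RUN chgrp -R 0 /app && chmod -R g=u /app
#   USER 1001

# Option 2: allow the default service account of a project to run
# containers with any user ID (including root)
$ oc adm policy add-scc-to-user anyuid -z default -n myproject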

Continue reading

MCN and ICCLab Demo at EUCNC

As part of our ongoing work in MobileCloud Networking, the project demonstrated at this year’s EUCNC, held in a very warm (>35°C!!!) Paris, France.

The MCN demonstration was built on top of a standard cloud infrastructure, leveraging the key technologies OpenStack and OpenShift, and used open-source outputs of MCN, namely hurtle, the cloud orchestration framework of the ICCLab which is used throughout MCN to enable service delivery. Also demonstrated was the use of the ICCLab’s billing solution, Cyclops, which is orchestrated using hurtle. All of this delivers an NFV-compatible, on-demand, composed service instance.

The MobileCloud Networking (MCN) approach and architecture were demonstrated, aiming to show innovative new revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. Of particular note and focus, the work highlighted results of cloudifying the Radio Access Network (RAN) and delivering this capability as an on-demand service.

Supporting this focus was the composition of an end-to-end service (RAN, EPC, IMS, DNS, Monitoring & Billing) instance via the MCN dashboard. This demo service is standards compliant and features interoperable implementations of ETSI NFV, OCCI and 3GPP software.

 

OpenStack Summit – Deep Dive into Day 2

CERN Openstack (super) User Story
CERN is looking for answers to fundamental questions concerning the creation of the Universe and, true to its nature, it’s a big data challenge. With the historic run of the LHC in 2013, their archive now contains ~100 PB (with an additional 27 PB/year) on ~11,000 servers with ~75,000 disk drives and ~45,000 tapes, and with the reopening of the LHC they expect a significant increase in data in 2015. CERN recently opened a new data centre in Budapest, connected to the Geneva headquarters by a T-Systems 100 GbE line.
CERN currently runs four OpenStack Icehouse clouds and expects these to run 150,000 cores in total by Q1 2015. All of CERN’s non-specific code is upstream and available for anyone who would like to build on top of it in the future.
CERN puts great emphasis on collaboration. The Openlab project is a public-private partnership between CERN and major ICT companies (e.g. Rackspace) whose goal is to accelerate the development of cutting-edge cloud solutions.
OpenShift on OpenStack
RedHat and Cisco gave a demo on deploying OpenShift on OpenStack using Heat, Docker & Kubernetes. OpenShift is a PaaS offering from RedHat with both enterprise and open source versions. The rationale for deploying OpenShift on OpenStack is to maintain a high degree of flexibility and enable faster deployment of applications. In the demo, Heat was used for orchestration. Docker’s pull and push methodology is used for fetching a new image or saving a modified version which can be pulled later on. Along with tagging images, diff operations can also be performed. Docker containers are also used as daemons. However, Docker cannot see beyond a single host and does not have the capacity to manage mass configuration and deployment. That’s where Kubernetes comes into the picture. Here Pods resemble Docker’s containers, and the etcd functionality is used to configure the master, which passes the configuration along to the slaves, thereby achieving mass configuration. The link to the presentation can be found here.
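As a rough sketch of what such a Heat-driven deployment looks like from the operator’s side (the template, environment and parameter names are placeholders, not taken from the linked presentation):

# Launch an OpenShift-on-OpenStack stack from a Heat template
$ heat stack-create openshift -f openshift.yaml -e environment.yaml \
    -P key_name=mykey -P external_network=public

# Follow the orchestration progress
$ heat stack-list
$ heat resource-list openshift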

MobileCloud Networking Live @ Globecomm

As part of the ongoing work in MobileCloud Networking, the project will demonstrate its outputs at this year’s Globecomm industry-track demonstrations. Globecomm is being held this year in Austin, Texas.

The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show innovative new revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV-compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software and cloud infrastructure, providing a relevant view on how the new virtualised environment will be implemented.

To take advantage of the technologies offered by cloud computing, today’s communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarization of network components. Within the MobileCloud Networking project this new paradigm has been developed, and early results are already available to be exploited by the community. In particular, this demonstration deploys a mobile core network on a cloud infrastructure and shows the automated, elastic and flexible mechanisms that such technologies offer for typical networking services. It aims to show how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging the key technologies OpenStack and OpenShift.


The scenario will be as follows:

  1. A tenant (Enterprise End User (EEU), in MCN terminology) – which may be an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager, the service front-end where tenants can request the automated creation of a service instance via API or user interface. In particular, this core network will be deployed on top of a cloud hosted in Europe. At the end of the provisioning procedures, the mobile core network endpoints will be communicated to the EEU.
  2. The EEU will have the possibility to access the Web frontend of the Home Subscriber Server (HSS) and provision new subscribers. The subscriber information will also be used to configure the client device (in our case a laptop).
  3. The client device will send the attachment requests to the mobile core network and establish a connectivity service. Since at the moment of the demonstration the clients will be located in the USA, there will be a VPN connection to the eNodeB emulator through which the attachment request will be sent. At the end of the attachment procedure all the data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
  4. The clients attached to the network will establish a call making use of the IP Multimedia Subsystem provided by the MVNO. During the call the MVNO administrator can open the Monitoring-as-a-Service tool provided by the MCN platform and check the current status of the services. For this, two IMS clients will be installed on the demonstration device.
  5. At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources which are no longer necessary. After this operation the MVNO will receive a bill indicating the costs of running such a virtualised core network.

It specifically includes:

  • An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and of a virtual telecom platform. The service is delivered from the radio head all the way through the core network to the delivery of IMS services. The orchestration framework is built on an open-source framework available under the Apache 2.0 license, which the ICCLab actively develops and contributes to.
  • Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TMForum) and cloud computing standards (OCCI).
  • A basic monitoring system for providing momentary capacity and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
  • An accounting-billing system for providing cost and billing functions back to the tenant or the provisioned service instance. This will be part of the orchestrated composition.
  • A set of virtualised network functions:
      • A realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore
      • A realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit
      • An LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation
  • Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.

Getting Started with OpenShift and OpenStack

In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and, of course, automation. So that developers can get working quickly with their own local infrastructure, we’ve spent time setting up an automated workflow using Vagrant and Puppet to set up both OpenStack and OpenShift. If you want to experiment with both OpenStack and OpenShift locally, simply clone this project:

$ git clone https://github.com/dizz/os-ops.git

Once it has been cloned you’ll need to initialise the submodules:

$ git submodule init
$ git submodule update

After that you can begin the setup of OpenStack and OpenShift. You’ll need an installation of VirtualBox and Vagrant.

OpenStack

  • run in controller/worker mode:
      $ vagrant up os_ctl
      $ vagrant up os_cmp
    

There are some gotchas, so look at the OpenStack-specific known issues in the README. Otherwise, open your web browser at http://10.10.10.51.

OpenShift

You’ve two OpenShift options:

  • run all-in-one:
      $ cd os-ops
      $ vagrant up ops_aio
    
  • run in controller/worker mode:
      $ cd os-ops
      $ vagrant up ops_ctl
      $ vagrant up ops_node
    

Once done, open your web browser at https://10.10.10.53/console/applications. There’s more info in the README.

In the next post we’ll look at getting OpenShift running on OpenStack quickly, using two approaches: directly with Puppet and using Heat orchestration.