Author: Andy Edmonds (page 2 of 7)

MobileCloud Networking Live @ Globecomm

As part of the ongoing work in MobileCloud Networking, the project will demonstrate its outputs at this year’s Globecomm industry-track demonstrations. Globecomm is being held this year in Austin, Texas.

The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show new, innovative revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV-compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software on cloud infrastructure, providing a relevant view of how the new virtualised environment will be implemented.

To take advantage of the technologies offered by cloud computing, today’s communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarisation of network components. Within the Mobile Cloud Networking project this new paradigm has been developed, and early results are already available to the community. In particular, this demonstration deploys a Mobile Core Network on a cloud infrastructure and shows the automated, elastic and flexible mechanisms that such technologies offer for typical networking services. It aims to show how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging key technologies of OpenStack and OpenShift.


The scenario is as follows:

  1. A tenant (Enterprise End User (EEU), in MCN terminology) – perhaps an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager, the service front-end where tenants can request the automated creation of a service instance via API or user interface. In this case the core network will be deployed on top of a cloud hosted in Europe. At the end of the provisioning procedure, the mobile core network endpoints will be communicated to the EEU.
  2. The EEU will have the possibility to access the Web front-end of the Home Subscriber Server (HSS) and provision new subscribers. That subscriber information will also be used to configure the client device (in our case a laptop).
  3. The client device will send attachment requests to the mobile core network and establish a connectivity service. Since the clients will be located in the USA at the time of the demonstration, the attachment request will be sent over a VPN connection to the eNodeB emulator. At the end of the attachment procedure all data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
  4. The clients attached to the network will establish a call using the IP Multimedia Subsystem provided by the MVNO. During the call the MVNO administrator can open the Monitoring-as-a-Service tool provided by the MCN platform and check the current state of the services. For this, two IMS clients will be installed on the demonstration device.
  5. At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources that are no longer necessary. After this operation the MVNO will receive a bill indicating the costs of running such a virtualised core network.
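Step 1 of the scenario is driven through the Service Manager’s northbound interface. Purely as an illustrative sketch – the host, category scheme and header values below are assumptions, not the project’s actual API – an OCCI-style provisioning request could look like:

```shell
# Hypothetical OCCI request asking an MCN Service Manager to create an
# EPC service instance (endpoint and category scheme are assumptions).
curl -X POST "http://servicemanager.example.com/epc/" \
     -H "Content-Type: text/occi" \
     -H "Category: epc; scheme=\"http://schemas.mobile-cloud-networking.eu/occi/sm#\"; class=\"kind\"" \
     -H "X-Auth-Token: $KEYSTONE_TOKEN" \
     -H "X-Tenant-Name: mvno-tenant"
```

The response would carry the location of the new service instance, from which the tenant can later retrieve the provisioned endpoints.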

It specifically includes:

  • An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and a virtual telecom platform. The service is delivered from the radio head all the way through the core network to the delivery of IMS services. The orchestration framework is built on an open source framework available under the Apache 2.0 license, to which the ICCLab actively contributes.
  • Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TMForum) and cloud computing standards (OCCI).
  • A basic monitoring system for providing momentary capacity and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
  • An accounting-billing system for providing cost and billing functions back to the tenant or the provisioned service instance. This will be part of the orchestrated composition.
  • A set of virtualised network functions:
      • A realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore
      • A realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit
      • An LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation
  • Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.

OpenShift on OpenStack: Round One

In our last article on OpenStack and OpenShift we showed how you could quickly set up these two large systems on your local machine. Now what if you wanted to use OpenStack to provide OpenShift capabilities to a set of users?

Well, why not use the cloud?! So the idea in this post is to use OpenStack as a provider of computation and storage (essentially a VM), quickly provision the VM and then, through a short set of commands, have the VM ready to serve an OpenShift service. This is something that is done in the Mobile Cloud Networking project, where we use OpenShift as a means to provide a service orchestration capability for all types of services, including ones like the EPC (Evolved Packet Core) and IMS (IP Multimedia Subsystem).

So let’s get started!

First we need a VM and so you’ll need a friendly IaaS provider. Luckily, the ICCLab operates OpenStack and so for this exercise we’ll create a VM…

$ source $my_openstack_credentials
$ nova boot --flavor m1.large --image centos-6.5 --key-name my_key ops_aio
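The VM will also need a floating IP to be reachable from outside. A sketch using the legacy nova client (the pool name "public" is an assumption for your installation):

```shell
# Allocate a floating IP from the pool (pool name is an assumption) and
# keep hold of the address – it is the $FL_IP referred to later on.
FL_IP=$(nova floating-ip-create public | awk '/[0-9]+\.[0-9]+/ {print $2}')
# Attach it to the VM created above.
nova add-floating-ip ops_aio "$FL_IP"
```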

Once done you should assign a floating IP to this VM. Now that the VM is accessible, we’ll set things up for our puppet run. There are some pre-steps (all of which could be puppetised) that need to be done once you have SSH’ed into the newly created host:

$ sudo -i

$ hostname ops.cloudcomplab.ch

$ rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 

$ echo "[openshift-origin-deps]
name=OpenShift Origin Dependencies - EL6
baseurl=http://mirror.openshift.com/pub/origin-server/release/3/rhel-6/dependencies/x86_64/
gpgcheck=0" > /etc/yum.repos.d/openshift-origin-deps.repo

$ yum -y update --exclude kernel*
$ yum -y install libcgroup dbus puppet wget

$ sed -i 's/enforcing/permissive/g' /etc/selinux/config
$ setenforce 0

$ wget https://dl.dropboxusercontent.com/u/165239/mcn_cc.tar.gz
$ tar xvf mcn_cc.tar.gz
$ rm -f mcn_cc.tar.gz

$ puppet apply --verbose --modulepath '/root/mcn_cc/modules' --manifestdir /root/mcn_cc/manifests --detailed-exitcodes /root/mcn_cc/manifests/site.pp

Once the run has completed you will be able to access the OpenShift service at $FL_IP. At this point you would be able to create various runtime containers (e.g. Python, Ruby), but for this to work completely you will need some control over DNS settings. Access details can be found in the puppet manifest. To create new users run this:

oo-register-user -l admin -p adminpass --username me --userpass mypass
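If you need several demo users, a small loop over the same command works (the admin credentials match those above; the user names are only examples):

```shell
# Register three example users against the local OpenShift broker.
for u in alice bob carol; do
    oo-register-user -l admin -p adminpass --username "$u" --userpass "changeme-$u"
done
```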

The final step that you might want to do is to set up a wildcard DNS entry. You only need to do this if you want to serve users from a domain name (e.g. acme.com) that you have control over. The entry for bind would look like this:

 *.openshift      IN      A       1.23.33.46

With this in place you will be able to access your applications under that domain, for example http://mywebapp-theuser.openshift.acme.com
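The resulting application URLs follow a simple app-user naming pattern under the wildcard subdomain. A tiny helper (illustrative only; OpenShift generates these names itself) makes the scheme explicit:

```shell
# Build the public URL for an app, given the wildcard entry *.openshift.<domain>.
# This helper is illustrative only; OpenShift itself generates these names.
app_url() {
    local app="$1" user="$2" domain="$3"
    echo "http://${app}-${user}.openshift.${domain}"
}

app_url mywebapp theuser acme.com   # → http://mywebapp-theuser.openshift.acme.com
```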

Next up we’ll look at modifying this setup so a number of VMs are used to provide parts of the overall OpenShift system.

International Workshop on Cloud Automation, Intelligent Management and Scalability

The First International Workshop on Cloud Automation, Intelligent Management and Scalability (CAIMS 2014) is to be co-located with the 7th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2014) held in London, UK, December 8th – 11th 2014.

Objectives: The emergence of infrastructure as a service (IaaS) cloud computing environments which allow the dynamic scaling of resources provides us with new ways of harnessing computing resources. As both the number and types of IaaS offerings increase it is becoming increasingly difficult to select the right type, to use them efficiently and to ensure that the computing resources fit the demands of large-scale applications with dynamic levels of load. This workshop intends to bring interested researchers from around the world to explore the challenges and opportunities that exist in deploying intelligent systems to manage and dynamically auto-scale and adjust infrastructures to meet the needs of modern applications.

Scope: Topics of interest include but are not limited to:

  • Intelligent management and auto-scaling of cloud applications
  • Intelligent systems for the provisioning and monitoring of cloud resources
  • Green computing via the intelligent management of virtual machine numbers and types
  • The prediction and management of demand in cloud environments
  • Mechanisms for pooling spare virtual machines and sharing them between providers
  • Budget-aware cloud computing
  • Economic models for selecting the best cloud resource type for a given application
  • Dynamic pricing mechanisms for dynamic cloud resources
  • Automation of cloud management tasks
  • Cloud-based intelligent system applications.

Paper Submissions Due: 1st August, 2014

Submit a paper now

Getting Started with OpenShift and OpenStack

In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and of course Automation. So that developers can get working fast with their own local infrastructure, we’ve spent time setting up an automated workflow, using Vagrant and puppet to set up both OpenStack and OpenShift. If you want to experiment with both OpenStack and OpenShift locally, simply clone this project:

$ git clone https://github.com/dizz/os-ops.git

Once it has been cloned you’ll need to initialise the submodules:

$ git submodule init
$ git submodule update
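Incidentally, the two submodule commands can be combined; git supports doing both in one step:

```shell
# Initialise and fetch all registered submodules in a single command.
git submodule update --init
```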

After that you can begin the setup of OpenStack and OpenShift. You’ll need an installation of VirtualBox and Vagrant.

OpenStack

  • run in controller/worker mode:
      $ vagrant up os_ctl
      $ vagrant up os_cmp
    

There are some gotchas, so look at the known issues in the README, specific to OpenStack. Otherwise, open your web browser at: http://10.10.10.51.

OpenShift

You’ve two OpenShift options:

  • run all-in-one:
      $ cd os-ops
      $ vagrant up ops_aio
    
  • run in controller/worker mode:
      $ cd os-ops
      $ vagrant up ops_ctl
      $ vagrant up ops_node
    

Once done, open your web browser at: https://10.10.10.53/console/applications. There’s more info in the README.

In the next post we’ll look at getting OpenShift running on OpenStack, quickly, using two approaches: directly with puppet, and using Heat orchestration.

FluidCloud presented at USENIX

The work on cloud service relocation that is being investigated by the ICCLab was presented at USENIX HotCloud13. In FluidCloud we ask the key question of

How to intrinsically enable and fully automate relocation of service instances between clouds?

and present an architecture to realise service relocation. Below you can have a look at the presentation (PDF here) and the paper itself (and eventually a video of the talk) is available at the HotCloud13 proceedings’ site.

Solidna

Solidna is a project that is funded by the Commission for Technology and Innovation. Solidna will develop a core strategic cloud-based storage product and service area for a major Infrastructure as a Service provider (CloudSigma). The three key innovations that will be developed in Solidna are:

1. Upgraded Compute Storage Performance: this will focus on stability and dependability, guaranteeing a minimum performance level for critical systems. On many IaaS platforms today customers directly affect each other’s performance, and this is the most critical problem for a public cloud provider to solve. By having a virtual drive in a public cloud and storing that drive across hundreds of physical drives, that performance limitation can be reduced significantly.

Solidna will develop the means to deliver a cloud storage solution with a high level of stability and dependability and to guarantee a minimum performance level for critical systems. This innovation will be delivered through the following technical innovations of:

  • Mechanisms to guarantee a minimum expected performance
  • Reliable clients that ensure the data is read/written consistently
  • Definition of specific performance critical system metrics and reporting of those metrics
  • Optimisation of the system based on system metrics (e.g. variable block sizing based on data stored)
  • Data segmentation optimisations including block-size optimisation and distributed striping
  • On-demand performance guarantees that can grow as requested by the user

2. Advanced Storage Management Functionality will be another focus in Solidna. This will enable a number of abilities, including the creation of live snapshots, the backup of virtual drives and the geo-replication of a drive to one or more additional locations. The project will deliver the same rich feature-set as a high-end commercial SAN product, but using standard low-cost, commodity hardware and a new, upgraded software storage system. These new features will form the basis of new revenue streams. Key features include:

  • The ability to create live snapshots and backups of virtual drives. This allows data from drives to be backed up and kept as separate copies, and is important for data resilience and security reasons.
  • The capability to geo-replicate a drive to one or more additional locations.

This innovation will be delivered through the following technical innovations:

  • System agents to watch for and discover failed or potentially failing system nodes
  • Mechanisms and algorithms for deregistration, recreation and associated redistribution and rebalancing of the storage nodes
  • Active reliability automated testing of the cloud storage service
  • Logically centralised control centre for the entire system
  • Storage system with the ability to rebalance the storage nodes
  • Expansion of the ICCLab framework to accommodate the DFS
  • Functionality of policy-defined geo-replication
  • Functionality of volume migration

3. Object-based Storage Environment: massive-capacity cloud storage and multi-modal API access to reliable storage. The scalability of the storage offered to customers is currently limited by the maximum drive size of 2 TB per drive. Although a server can mount multiple drives to form a larger storage volume, the practical maximum size per server can be estimated at around 20-30 TB, and even with multiple drives this becomes difficult to manage. As well as the usual API interface allowing access to virtual drives, the proposed work aims to expose directories of files in the object storage as network mount points to the compute cloud. In effect this gives customers two access points to their storage, based on usage needs: a network drive API and an object storage API interface. This innovation will be delivered through the following technical innovations:

  • Accessing stored data using POSIX and HTTP from within the VM with implementation of file system drivers and HTTP API
  • Review of existing storage APIs and recommendation
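To make the two access modes concrete, here is a purely illustrative sketch; the host name, bucket and mount paths are assumptions, not part of the Solidna design:

```shell
# Two hypothetical access points to the same data (all names are illustrative).
# 1) Object storage API: upload a file over HTTP.
curl -X PUT --data-binary @report.pdf \
     https://storage.example.com/v1/mybucket/report.pdf
# 2) Network drive: the same bucket exposed as a mount point inside a VM.
mount -t nfs storage.example.com:/mybucket /mnt/mybucket
```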

ICCLab Presents OCCI @ Future Internet Assembly

The ICCLab presented on the latest developments (PDF) in the Open Cloud Computing Interface at the Future Internet Assembly in Dublin. The session was organised by Cloud4SOA and the main theme was MultiCloud. In this regard, OCCI figures in many projects in this area, including EGI FedCloud, CompatibleOne and BonFIRE. The presentation also covered some future work to be carried out in Mobile Cloud Networking, which took the audience’s interest.

ICCLab @ Swiss Academic Cloud Computing Experience

We presented at the Swiss Academic Cloud Computing Experience conference. Below are the slides as presented (or you can grab the PDF here).

3rd Swiss OpenStack User Group Meetup


Following on from our 2nd meeting, the Swiss OpenStack user group met on 24th of April at the University of Bern. It was an excellent event with many attention-grabbing presentations! A big thanks goes out to the sponsors:


Once we kicked off, there were five presentations: three that were more detailed, and two that were more lightning talks in nature. The presentations, in their running order, were:

Upcoming

There are other upcoming Swiss events that will include much talk of OpenStack. Of note are:

Also, the Swiss Informatics Society has started a cloud computing special interest group, where all folk active in cloud are welcome to join. More details can be found at their site.

Swiss OpenStack User Group channels


ICCLab at The Second National Conference on Cloud Computing and Commerce

The ICCLab presented at Ireland’s second national conference on cloud computing and commerce (NC4). Below is the presentation (PDF here) given on providing a guide on how to assess the openness of cloud standards.

