ESOCC 2017 – Oslo

The 6th European Conference on Service-Oriented and Cloud Computing (ESOCC) took place on 27–29 September in Oslo, Norway. It is one of the traditional community-run conferences in Europe, with a cloud and community history dating back to 2012 and a (web) service history of about a decade before that. As in previous years, it featured the co-located CloudWays event: the 3rd International Workshop on Cloud Adoption and Migration, which focuses on cloud applications more than on infrastructure and platforms. The topic is thus of high interest for the Service Prototyping Lab and especially for its Cloud-Native Applications (CNA) research initiative, in which we partner with Swiss SMEs to explore new cloud-native designs and architectures for elastically scalable, resilient, price-efficient and portable services. Our participation was therefore centered around the presentation of research results from one of these partnerships.

Continue reading

ROSCon 2017 – Vancouver

For the third time in a row we attended ROSCon, held this year in beautiful Vancouver.
Our goal, besides seeing the newest trends in the ROS and robotics universe first-hand and finding new robotic hardware directly from manufacturers, was to support our partners from Rapyuta Robotics (RR) in presenting and demoing the first preview of their upcoming Cloud Robotics Platform.

Continue reading

OpenShift custom router with TCP/SNI support

In the context of the ECRP Project, we need to orchestrate intercommunicating components and services running on robots and in the cloud. The communication of these components relies on several protocols, including L7 protocols as well as L4 protocols such as TCP and UDP.

One of the solutions we are testing as the base technology for the ECRP cloud platform is OpenShift. As a proof of concept, we wanted to test TCP connectivity to components deployed in our OpenShift 1.3 cluster. We chose to run two RabbitMQ instances and make them accessible from the Internet to act as TCP endpoints for incoming robot connections.

The concept of a “route” in OpenShift exists to enable connections from outside the cluster to services and containers. Unfortunately, the default router component in OpenShift only supports HTTP/HTTPS traffic and hence cannot natively support our intended use case. However, OpenShift routing can be extended with so-called “custom routers”.

This blog post will lead you through the process of creating and deploying a custom router supporting TCP traffic and SNI routing in OpenShift.
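
Before diving in, it helps to see what SNI routing means at the byte level: an SNI-aware TCP router never terminates TLS, it merely peeks at the client's initial ClientHello, reads the server_name extension, and forwards the raw connection to the matching backend. The Python sketch below illustrates only this parsing idea; it skips all validation, assumes the whole ClientHello arrives in the first TCP segment, and uses made-up backend names and addresses. A real custom router would typically express the same logic as HAProxy configuration rather than code.

```python
# Illustration only: how an SNI router picks a backend without terminating
# TLS. No validation, and the whole ClientHello is assumed to arrive in the
# first TCP segment; backend names and addresses are made up.
import struct

def extract_sni(data: bytes):
    """Return the SNI host name from a raw TLS ClientHello, or None."""
    if len(data) < 5 or data[0] != 0x16:                  # 0x16 = handshake record
        return None
    pos = 5 + 4                                           # record + handshake headers
    pos += 2 + 32                                         # client version + random
    pos += 1 + data[pos]                                  # session id
    pos += 2 + struct.unpack("!H", data[pos:pos + 2])[0]  # cipher suites
    pos += 1 + data[pos]                                  # compression methods
    end = pos + 2 + struct.unpack("!H", data[pos:pos + 2])[0]
    pos += 2
    while pos + 4 <= end:                                 # walk the extension list
        ext_type, ext_len = struct.unpack("!HH", data[pos:pos + 4])
        pos += 4
        if ext_type == 0:                                 # 0 = server_name extension
            name_len = struct.unpack("!H", data[pos + 3:pos + 5])[0]
            return data[pos + 5:pos + 5 + name_len].decode()
        pos += ext_len
    return None

# The router then forwards the raw bytes to the matching backend:
BACKENDS = {
    "rabbitmq-1.apps.example.org": ("10.0.0.11", 5671),
    "rabbitmq-2.apps.example.org": ("10.0.0.12", 5671),
}
```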

Continue reading

New Release of DISCO – easier than ever, more powerful than before

Almost one year ago, the first version of DISCO was publicly released. Since then, a major refactoring of DISCO has taken place, and we are proud to announce a fresh version with even better usability and a user-friendly dashboard. But first of all: how can DISCO help you? And what is new after the refactoring? We would like to show you how DISCO can make your life as a Big Data analyst much easier. A short wrap-up follows before the new features are explained more closely.

How can DISCO help me?

DISCO is a framework for the automatic deployment of distributed computing clusters. But not just that: DISCO even provisions the distributed computing software itself. You can lean back and let DISCO handle this tedious task so that you can focus entirely on the Big Data analysis.

The new DISCO framework – even more versatile

What is new in the new DISCO edition? In short: almost everything! Here is a list of the major new features:

  • Dashboard to hide the command line
  • Easy setup for front end and back end
  • Many more distributed computing frameworks
  • Hassle-free extensibility with new components
  • Automatic dependency handling for components
  • More intuitive commands over a CRUD interface (though still no update functionality)

The Dashboard – a face for DISCO

A new dashboard hides the entire background complexity from the end user. Now everything, from planning through deployment to deletion, can be done via an intuitive web interface. The dashboard will also provide you with real-time information about the status of the frameworks installed on your computing cluster.

Easy setup

Installing DISCO has never been as easy as it is now! The back end needs only three settings to be entered, two of which are not even external settings. And the dashboard? It even comes with its own installation script, so the most difficult part is cloning the GitHub repository.

New Distributed Computing frameworks

The first version of DISCO could only provision Hadoop. The new release supports more, most importantly another major distributed computing framework; the project wiki lists all currently supported frameworks.

Extensibility made easy

Is there a framework that you would like to provision which is not implemented in DISCO yet? That is not a problem anymore! The new system is very easy to extend with new components: just write the new component (for instance by copying and modifying an existing one) and drop its directory structure next to the other components. No installation is needed; the new component can be deployed immediately. DISCO also has built-in functionality that will greatly enhance your provisioning experience: everything is done in parallel on the entire cluster! Just take a look at the wiki for further reference.
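
As an illustration of what such drop-in extensibility can look like, here is a generic Python sketch of directory-based component discovery. It is not DISCO's actual loading code; the components/ directory layout and the component.py entry point are assumptions made for the example.

```python
# Generic sketch of drop-in component discovery -- not DISCO's actual
# code. Assumption: each component lives in components/<name>/component.py.
import importlib.util
from pathlib import Path

COMPONENT_DIR = Path("components")  # hypothetical location

def discover_components():
    """Load every component module found under COMPONENT_DIR."""
    components = {}
    for entry in COMPONENT_DIR.glob("*/component.py"):
        name = entry.parent.name
        spec = importlib.util.spec_from_file_location(name, entry)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)   # run the component's module code
        components[name] = module
    return components

if __name__ == "__main__":
    for name in discover_components():
        print(f"found component: {name}")
```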

Dependency handling automated

When it comes to dependencies among the frameworks, things can easily get complicated – unless you are using DISCO. DISCO automatically installs each required component for a smooth provisioning process. You don’t have to bother yourself with questions about which additional components to install: you just select the ones you need access to and DISCO will take care of the rest.
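
Conceptually, this kind of automatic dependency handling amounts to resolving a dependency graph into a valid install order. A minimal sketch of the idea, with made-up component names and edges rather than DISCO's real ones:

```python
# Illustration only: resolving framework dependencies into an install
# order. The component names and edges below are made up for the example.
DEPENDENCIES = {
    "spark": ["java", "hadoop"],   # hypothetical: Spark needs Java and HDFS
    "hadoop": ["java"],
    "java": [],
}

def resolve(selected, deps=DEPENDENCIES):
    """Return an install order covering `selected` plus everything it needs."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps.get(name, ()):   # depth-first: dependencies first
            visit(dep)
        order.append(name)

    for name in selected:
        visit(name)
    return order

print(resolve(["spark"]))  # -> ['java', 'hadoop', 'spark']
```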

Future work

DISCO has made a huge leap forward over the last year. Still, there are visions of what can be done to improve or extend it even beyond its current state. In the future, DISCO will not only provision distributed computing clusters but will find out on its own what the end user needs for the task at hand. There will be a recommendation engine that proposes the best-fitting distributed computing frameworks based on a completed questionnaire. Of course, as the world of distributed computing frameworks is always evolving, more components are going to be included along the way. Still, this doesn’t mean that DISCO will get more complicated – on the contrary: the dashboard will make the choice of frameworks and settings easier than ever. We already have many ideas for providing an even more fulfilling user experience. Just wait and see the new additions! Don’t forget to check back regularly or to sign up for our mailing list for news. And if there is something that we have missed (or something that you especially like), please contact us – we will happily help you!

The DISCO 2.0 release can be downloaded from our Git repository at https://github.com/icclab/disco, and extensive documentation is available in the GitHub wiki at https://github.com/icclab/disco/wiki. We wish you happy testing!

Cloud Services: An Academic Perspective

An academic entity – more concretely, a research laboratory – resembles a stateful function: It receives input and generates output based on both the input and on previously generated knowledge and results. The input is typically a mix of fancy ideas, industry necessities, as well as funding and equipment. The output encompasses publications, software and other re-usable artefacts.

In the Service Prototyping Lab, we rely on access to well-maintained cloud environments as one form of input to come up with and test relevant concepts and methods on how to bring applications and services online on top of programmable platforms and infrastructure (i.e., PaaS and IaaS). This Samichlaus post reports on our findings after having used several such environments in parallel over several months.

Continue reading

Cyclops 2.0 is here!

Cyclops, our flagship open-source framework for cloud billing, has matured to version 2.0 today. Over the past several months, the Cyclops team at the ICCLab has gathered community feedback and worked systematically on updating the framework's core architecture to make the whole billing workflow for cloud services clean and seamless.

The core components are in principle still the same as in our previous releases – the udr, rc and billing micro-services – but they have been rewritten from scratch with a main focus on modularity, extensibility and elasticity. The framework is highly configurable and can be deployed to match the unique billing use cases of any organization.

RCB Cyclops architecture 2.0
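
To make the roles of the three micro-services concrete, here is a deliberately simplified sketch of the flow from usage records (udr) via rating and charging (rc) to a bill. The record fields and the flat rate table are invented for illustration; the real micro-services exchange far richer, persisted records.

```python
# Deliberately simplified udr -> rc -> billing flow. Field names and the
# flat rate table are invented; real Cyclops records are far richer.
usage_records = [                                  # collected by the udr service
    {"resource": "vm.small", "usage": 720, "unit": "h"},
    {"resource": "storage", "usage": 50, "unit": "GB-month"},
]

RATES = {"vm.small": 0.05, "storage": 0.02}        # price per unit (made up)

def rate(records):
    """The rc step: turn usage records into charge records."""
    return [dict(r, charge=r["usage"] * RATES[r["resource"]]) for r in records]

def bill(charge_records):
    """The billing step: aggregate charge records into an invoice total."""
    return sum(r["charge"] for r in charge_records)

print(bill(rate(usage_records)))                   # -> 37.0
```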

Continue reading

Wanted: Senior Researcher / Researcher for Cloud Robotics

The Service Engineering (SE, blog.zhaw.ch/icclab) group at the Zurich University of Applied Sciences (ZHAW) / Institute of Applied Information Technology (InIT) in Switzerland is seeking applications for a full-time position at its Winterthur facility.

The successful candidate will work in the Service Prototyping Lab (SPLab) and will contribute to the research initiative on cloud robotics; see https://blog.zhaw.ch/icclab/category/research-approach/themes/cloud-robotics for details.

Continue reading

Experimental evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces

Up to now, we have published several blog posts focusing on live migration performance in our experimental OpenStack deployment: a performance analysis of post-copy live migration in OpenStack and an analysis of the performance of live migration in OpenStack. While analyzing live migration behaviour under different algorithms (see our previous blog posts on pre-copy and post-copy (hybrid) live migration performance), we observed that both algorithms can easily saturate our 1Gb/s infrastructure; that is not fast enough, not for us! Fortunately, our friends Robayet Nasim and Prof. Andreas Kassler from Karlstad University, Sweden also like their live migrations as fast and reliable as possible, so they kindly offered their 10Gb/s infrastructure for further performance analysis. Since this topic is very much in line with the objectives of the COST ACROSS action, in which both we (ICCLab!) and Karlstad participate, the analysis was carried out as a two-week short-term scientific mission (STSM) within this action.
This blog post presents a short wrap-up of the results obtained, focusing on the evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces and comparing it with the performance of the 1Gb/s setup. The full STSM report can be found here. Continue reading

GPU support in the cloud

It is well recognized that GPUs can greatly outperform standard CPUs for certain types of work – typically work that can be decomposed into many basic computations that can be run in parallel; matrix operations are the classical example. However, GPUs have evolved primarily in the context of the quite independent video subsystem, and even there the key driver has been support for advanced graphics and gaming. Consequently, they have not been architected to support diverse applications within the cloud. In this blog post we comment on the state of the art regarding GPU support in the cloud.
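
As a small illustration of the parallelism argument, the sketch below (in Python, using PyTorch if it is installed) times the same matrix multiplication on the CPU and, when available, on a CUDA GPU. Absolute numbers depend heavily on hardware and warm-up effects, so treat it as a demonstration of the principle rather than a benchmark.

```python
# Demonstration of the parallelism argument, not a benchmark. Requires
# PyTorch; uses the GPU only if a CUDA device is actually present.
import time
import torch

n = 2048
a, b = torch.rand(n, n), torch.rand(n, n)

def timed_matmul(device):
    x, y = a.to(device), b.to(device)
    if device == "cuda":
        torch.cuda.synchronize()           # make sure the copies have finished
    start = time.perf_counter()
    x @ y                                  # one large, highly parallel operation
    if device == "cuda":
        torch.cuda.synchronize()           # wait for the asynchronous kernel
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    timed_matmul("cuda")                   # warm-up (context creation, etc.)
    print(f"GPU: {timed_matmul('cuda'):.3f}s")
```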

Continue reading

Cloud Orchestration: Hurtle Released

We are proud to announce that the ICCLab has released Hurtle!

Hurtle is a result of the ICCLab’s Cloud Orchestration Initiative.

What is hurtle?

With Hurtle, you can automate the life-cycle management of any number of service instances in the cloud, from the deployment of resources all the way to the configuration and runtime management (e.g., scaling) of each instance. Our motivation is that software vendors often face questions such as “How can I easily provision and manage new instances of the service I offer for each new customer?” This is what Hurtle aims to solve.

In short, Hurtle lets you:

offer your software as a service, i.e. “hurtle it!”

In Hurtle terms, a service represents an abstract functionality that, in order to be performed, requires a set of resources (such as virtual machines or storage volumes) and an orchestrator, which describes what has to be done at each step of the service lifecycle.
A “service instance” is the concrete instantiation of a service’s functionality, with its associated set of concrete resources and service endpoints.

On top of this, Hurtle has been designed since its inception to support service composition, so that you can design complex services by (recursively!) composing simple ones.
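
To make these terms concrete, here is a hypothetical sketch of the service/service-instance split, including recursive composition. It is not Hurtle's real API: the class names, fields and placeholder endpoint are invented for the example.

```python
# Hypothetical sketch, not Hurtle's real API: a Service bundles a resource
# template with (recursively composed) sub-services, and each deploy()
# yields a concrete ServiceInstance with its own resources and endpoint.
class Service:
    def __init__(self, name, resource_template, subservices=()):
        self.name = name
        self.resource_template = resource_template   # e.g. VMs, storage volumes
        self.subservices = subservices               # composition, possibly recursive

    def deploy(self):
        children = [s.deploy() for s in self.subservices]   # compose first
        resources = dict(self.resource_template)            # stand-in for provisioning
        return ServiceInstance(self, resources, children)

class ServiceInstance:
    def __init__(self, service, resources, children):
        self.service, self.resources, self.children = service, resources, children
        self.endpoint = f"https://{service.name}.example.org"  # placeholder

db = Service("db", {"vm": "m1.small", "volume": "10GB"})
web = Service("web", {"vm": "m1.medium"}, subservices=[db])
instance = web.deploy()
print(instance.endpoint, [c.endpoint for c in instance.children])
```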

Hurtle’s functionality revolves around the idea of services as distributed systems composed of multiple sub-applications, so services can be designed with the cloud in mind, building on the cloud-native application research of the ICCLab.

What does it mean to offer software as a service?

A bit of history first. Traditionally, software was run locally; then it was centralised and shared through intranets. All of this was still on company-specific infrastructure. This made hosting, provisioning and managing such software difficult, and the full-time job of many IT engineers and system administrators.

This quickly brought about the argument that IT in an SME or an enterprise was a cost centre that should be minimised, and led to the outsourcing of such tasks to third parties.

Today, with the ever-growing acceptance and use of cloud computing, the cost equation is further reduced; more interestingly, cloud computing reverses the trend of outsourcing operations to third parties if you consider the devops movement.

In this new world, organisations that create software neither want nor need third parties to manage their software deployments. Much of the tooling needed is developed in-house. Those that lack it, yet still want to follow a devops approach, have quite an amount of work ahead of them.

It is in this scenario that Hurtle can help!

What can hurtle do?

What will hurtle do?

  • More examples, including the cloud-native Zurmo implementation from the ICCLab
  • Enhanced workload placement (dynamic and policy-based)
  • Support for containers deployed via a Docker registry
  • Runtime updates to service and resource topologies
  • CI and CD support
    • Safe, monitored dynamic service updates
  • TOSCA support
  • Support for VMware and CloudStack
  • A user interface to visualise resource and service relationships
  • Additional external service endpoint protocol support

Want to know more?

Check out: hurtle.it