KubeCon’18 – Cloud, containers, edge, nets, robots, and philosophy of science

KubeCon / CloudNativeCon Europe 2018 took place at the shiny Bella Center of Copenhagen on May 2 – 4, 2018.
Here at ICCLab/SPLab we use Kubernetes and CNCF technologies extensively, both in teaching and research, but we had one extra reason for being there this year: our friends and colleagues from Rapyuta Robotics (RR) were scheduled to give a talk on Cloud Robotics PaaS.

Bella Center – Copenhagen

Continue reading

UCC 2017 Coverage – Day 1

Our own researchers Piyush and Josef are in Austin, the capital of the Lone Star State of Texas, to attend this year's edition of the IEEE/ACM International Conference on Utility and Cloud Computing, which takes place in conjunction with the International Conference on Big Data Computing, Applications and Technologies. Recent research results from the ICCLab and SPLab have been accepted as several peer-reviewed workshop papers and a tutorial, presented on the first day, and as a work-in-progress poster, which will be presented in the coming days.

In this series of blog posts, starting with this one, we will share our views on and analysis of the results presented at this event by cloud researchers from around the world.

Continue reading

Enhancing OpenStack Swift to support edge computing context

As the trend continues to move towards Serverless Computing, Edge Computing and Functions as a Service (FaaS), the need for a storage system that can adapt to these architectures grows ever bigger. In a scenario where smart cars have to make decisions on a whim, there is no time for the car to ask a data center what to do. Such scenarios are a driver for new storage solutions with more distributed architectures. In our work, we have been considering a scenario in which a distributed storage solution exposes different local endpoints to applications distributed over a mix of cloud and local resources; such applications can give the storage infrastructure an indicator of the nature of the data, which can then be used to determine where it should be stored. For example, data could be considered either latency-sensitive (in which case the storage system should try to store it as locally as possible) or loss-sensitive (in which case the storage system should ensure it ends up on reliable storage).

Continue reading
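To make the idea of such a hint more concrete, here is a minimal sketch using python-swiftclient: the application tags each object with custom metadata that a placement-aware storage layer could later act on. The endpoint, credentials and the metadata key `X-Object-Meta-Data-Nature` are assumptions for illustration, not an existing Swift feature.

```python
# Illustrative sketch only: the metadata key and the placement semantics are
# assumptions for this example, not part of stock OpenStack Swift.
import swiftclient

# Connect to the *local* Swift proxy endpoint exposed at the edge site
# (URL and credentials are placeholders).
conn = swiftclient.Connection(
    authurl="https://edge-site.example.org:5000/v3",
    user="robot-app",
    key="secret",
    auth_version="3",
    os_options={"project_name": "edge-demo"},
)

def store(container, name, data, nature):
    """Upload an object and tag it with a hint about its nature.

    nature: 'latency-sensitive' -> keep the object as local as possible
            'loss-sensitive'    -> prefer reliable, replicated back-end storage
    """
    conn.put_object(
        container,
        name,
        contents=data,
        headers={"X-Object-Meta-Data-Nature": nature},  # custom metadata hint
    )

# Sensor readings a nearby consumer needs with minimal delay:
store("telemetry", "lidar-frame-0001", b"...", "latency-sensitive")
# Trip logs that must not be lost, but may travel to a data center:
store("logs", "trip-2017-10-03.json", b"...", "loss-sensitive")
```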

ESOCC 2017 – Oslo

The 27th, 28th and 29th of September were dedicated to the 6th European Conference on Service-Oriented and Cloud Computing (ESOCC) in Oslo, Norway. It is one of the traditional community-run conferences in Europe, with a cloud and community history dating back to 2012 and a (web) service history of about a decade before that. As in previous years, it featured the co-located CloudWays event: the 3rd International Workshop on Cloud Adoption and Migration, which focuses on cloud applications more than on infrastructure and platforms. The topic is thus of high interest for the Service Prototyping Lab and especially for its Cloud-Native Applications (CNA) research initiative, in which we partner with Swiss SMEs to explore new cloud-native designs and architectures for elastically scalable, resilient, price-efficient and portable services. Our participation was therefore centered around the presentation of research results from one of these partnerships.

Continue reading

ROSCon 2017 – Vancouver

For the third time in a row we attended ROSCon, this year held in beautiful Vancouver.
Besides seeing the newest trends in the ROS and robotics universe first hand and finding new robotic hardware directly from manufacturers, our goal was to support our partners from Rapyuta Robotics (RR) in presenting and demonstrating the first preview of their upcoming Cloud Robotics Platform.

Continue reading

OpenShift custom router with TCP/SNI support

In the context of the ECRP Project, we need to orchestrate intercommunicating components and services running on robots and in the cloud. The communication of these components relies on several protocols, including L7 as well as L4 protocols such as TCP and UDP.

One of the solutions we are testing as the base technology for the ECRP cloud platform is OpenShift. As a proof of concept, we wanted to test TCP connectivity to components deployed in our OpenShift 1.3 cluster. We chose to run two RabbitMQ instances and make them accessible from the Internet to act as TCP endpoints for incoming robot connections.

The concept of a “route” in OpenShift exists to enable connections from outside the cluster to services and containers. Unfortunately, the default router component in OpenShift only supports HTTP/HTTPS traffic and hence cannot natively support our intended use case. However, OpenShift routing can be extended with so-called “custom routers”.

This blog post will lead you through the process of creating and deploying a custom router supporting TCP traffic and SNI routing in OpenShift.
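To illustrate why SNI matters here: with SNI-based routing, the router dispatches TLS connections purely on the server name the client sends during the handshake, so a robot-side AMQP client has to speak TLS and set that name. Below is a minimal sketch using the pika client; the hostname, port and credentials are placeholders for whatever your custom router exposes, not values from our cluster.

```python
# Sketch of a robot-side client connecting to RabbitMQ through an SNI-aware
# TCP router. Hostname, port and credentials are placeholders.
import ssl
import pika

context = ssl.create_default_context()
# The SNI server name is what the custom router uses to pick the right
# RabbitMQ instance behind it.
ssl_options = pika.SSLOptions(context, server_hostname="rabbitmq-1.apps.example.com")

params = pika.ConnectionParameters(
    host="rabbitmq-1.apps.example.com",  # resolves to the router's public address
    port=443,                            # TLS port exposed by the router
    ssl_options=ssl_options,
    credentials=pika.PlainCredentials("robot", "secret"),
)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="robot-commands", durable=True)
channel.basic_publish(exchange="", routing_key="robot-commands", body=b"ping")
connection.close()
```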

Continue reading

New Release of DISCO – easier than ever, more powerful than before

Almost one year ago, the first version of DISCO was publicly released. Since then, a major refactoring of DISCO has taken place and we are proud to announce a fresh version with even better usability and a user-friendly dashboard. But first of all, how can DISCO help you? And what is new after the refactoring? We would like to show you how DISCO can make your life as a Big Data analyst much easier. A short wrap-up is presented before the new features are explained in more detail.

How can DISCO help me?

DISCO is a framework for the automatic deployment of distributed computing clusters. But not just that: DISCO even provisions the distributed computing software. You can lean back and let DISCO do the tedious work so that you can focus entirely on the Big Data analysis part.

The new DISCO framework – even more versatile

What is new in the new DISCO edition? In short: almost everything! Here is a list of the major new features:

  • a dashboard that hides the command line
  • easy setup for front end and backend
  • many more Distributed Computing frameworks
  • hassle-free extensibility with new components
  • automatic dependency handling for components
  • more intuitive commands via a CRUD interface (though still no update functionality)

The Dashboard – a face for DISCO

A new dashboard hides the entire background complexity from the end user. Now everything, from planning through deployment to deletion, can be done via an intuitive web interface. The dashboard also provides you with real-time information about the status of the installed frameworks on your computing cluster.

Easy setup

Installing DISCO has never been as easy as it is now! The backend only needs 3 settings to be entered, two of which are not even external settings. And the dashboard? The dashboard even comes with its own installation script – so the most difficult part is cloning the GitHub repository.

New Distributed Computing frameworks

The first version of DISCO could only provision Hadoop. The new release supports more, most importantly another major Distributed Computing framework. Here is a list of all frameworks supported right now:

Extensibility made easy

Is there a framework that you would like to provision, but which is not implemented in DISCO yet? This is not a problem anymore! The new system is very easy to extend with new components. You can simply write the new component (for instance by copying and modifying an existing one) and drop its directory structure in alongside the other components. No installation is needed; the new component can be deployed immediately. DISCO has built-in functionality which will greatly enhance your provisioning experience – everything is done in parallel on the entire cluster! Just take a look at the Wiki for further reference.

Dependency handling automated

When it comes to dependencies among the frameworks, things can get complicated easily. Unless you are using DISCO. DISCO automatically installs each required component for a smooth provisioning process. You don’t have to bother yourself with questions about which additional components to install. You just select the ones you need access to and DISCO will take care of the rest.
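As a generic illustration of what automatic dependency handling boils down to (not DISCO's actual code: the component names and dependency graph below are invented), the components the user selects are expanded with their transitive dependencies and then topologically sorted, so that each component is provisioned only after everything it requires.

```python
# Generic illustration of dependency resolution for provisioning; component
# names and the dependency graph are invented and do not reflect DISCO's
# internal implementation.
from graphlib import TopologicalSorter  # Python 3.9+

# "component": {components it requires}
dependencies = {
    "hadoop-datanode": {"hadoop-namenode", "jdk"},
    "hadoop-namenode": {"jdk"},
    "spark-worker": {"spark-master", "jdk"},
    "spark-master": {"jdk"},
    "jdk": set(),
}

def provisioning_order(selected):
    """Return all components needed for `selected`, in install order."""
    needed = set()
    stack = list(selected)
    while stack:                      # pull in transitive dependencies
        comp = stack.pop()
        if comp not in needed:
            needed.add(comp)
            stack.extend(dependencies.get(comp, ()))
    graph = {c: dependencies.get(c, set()) & needed for c in needed}
    return list(TopologicalSorter(graph).static_order())

# The user only picks what they want access to ...
print(provisioning_order({"spark-worker", "hadoop-datanode"}))
# ... and gets, e.g., ['jdk', 'spark-master', 'hadoop-namenode',
#                      'spark-worker', 'hadoop-datanode']
```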

Future work

DISCO has made a huge leap forward over the last year. Still, there are some visions of what can be done to improve or extend it even beyond its current state. In the future, DISCO will not only provision distributed computing clusters but will also find out on its own what end users need for their current task. There will be a recommendation engine which proposes the best-fitting distributed computing frameworks based on a completed questionnaire. Of course, as the world of distributed computing frameworks is always evolving, more components are going to be included along the way. Still, this doesn't mean that DISCO will get more complicated – on the contrary: the Dashboard will make the choice of frameworks and settings easier than ever. We already have many ideas for providing an even better user experience. Just wait and see the new additions! Don't forget to check back regularly or to sign up for our mailing list for news! And if there is something that we have missed (or something that you especially like), please contact us – we will happily help you!

The DISCO 2.0 release can be downloaded from our Git repository at https://github.com/icclab/disco and extensive documentation is available in the GitHub wiki at https://github.com/icclab/disco/wiki. We wish you happy testing!

Cloud Services: An Academic Perspective

An academic entity – more concretely, a research laboratory – resembles a stateful function: It receives input and generates output based on both the input and on previously generated knowledge and results. The input is typically a mix of fancy ideas, industry necessities, as well as funding and equipment. The output encompasses publications, software and other re-usable artefacts.

In the Service Prototyping Lab, we rely on access to well-maintained cloud environments as one form of input to come up with and test relevant concepts and methods on how to bring applications and services online on top of programmable platforms and infrastructure (i.e., PaaS and IaaS). This Samichlaus post reports on our findings after having used several such environments in parallel over several months.

Continue reading

Cyclops 2.0 is here!

Our flagship open-source framework for cloud billing – Cyclops – has matured to version 2.0 today. Over the past several months, the Cyclops team at the ICCLab has gathered community feedback and systematically reworked the framework's core architecture to make the whole billing workflow for cloud services clean and seamless.

The core components are in principle still the same as in our previous releases – the udr, rc and billing micro-services – but they have been rewritten from scratch with a main focus on modularity, extensibility, and elasticity. The framework is highly configurable and can be deployed according to the unique billing needs of any organization.
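As a purely conceptual sketch of how these three micro-services fit together (this is not Cyclops code; all names, fields and rates below are invented for illustration), one can think of a pipeline: udr collects usage data records, rc rates them into charge records via pricing rules, and billing aggregates charge records into per-account totals.

```python
# Conceptual illustration of a udr -> rc -> billing pipeline; all names and
# rates are invented and are not part of Cyclops' API.
from dataclasses import dataclass

@dataclass
class UsageRecord:            # what the udr service collects
    account: str
    metric: str               # e.g. "vm.hours", "storage.gb-hours"
    amount: float

@dataclass
class ChargeRecord:           # what the rc (rating & charging) step produces
    account: str
    metric: str
    cost: float

RATES = {"vm.hours": 0.05, "storage.gb-hours": 0.0002}   # example pricing rules

def rate(records):
    """rc step: apply pricing rules to usage records."""
    return [ChargeRecord(r.account, r.metric, r.amount * RATES[r.metric])
            for r in records]

def bill(charges):
    """billing step: aggregate charge records into per-account totals."""
    totals = {}
    for c in charges:
        totals[c.account] = totals.get(c.account, 0.0) + c.cost
    return totals

usage = [UsageRecord("acme", "vm.hours", 720),
         UsageRecord("acme", "storage.gb-hours", 5000)]
print(bill(rate(usage)))     # {'acme': 37.0}
```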

RCB Cyclops architecture 2.0

Continue reading

Wanted: Senior Researcher / Researcher for Cloud Robotics

The Service Engineering (SE, blog.zhaw.ch/icclab) group at the Zurich University of Applied Sciences (ZHAW) / Institute of Applied Information Technology (InIT) in Switzerland is seeking applications for a full-time position at its Winterthur facility.

The successful candidate will work in the Service Prototyping Lab (SPLab) and will contribute to the research initiative on cloud robotics, see https://blog.zhaw.ch/icclab/category/research-approach/themes/cloud-robotics

Continue reading