Month: March 2017

ElasTest KickOff Meeting

The most limiting factor in software development today is validation, which typically requires very costly and complex testing processes. The ElasTest project aims to develop an elastic platform for testing large, complex distributed software systems, together with a novel test orchestration theory and toolbox that enable complex test suites to be created as compositions of simple testing units.

The ElasTest kickoff meeting took place in January in Madrid. The ElasTest consortium comprises 10 partners, including IBM, Atos and the Technical University of Berlin, and is coordinated by Universidad Rey Juan Carlos.

Consortium:

  • Universidad Rey Juan Carlos (URJC)
  • Fraunhofer Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. (Fraunhofer)
  • Technische Universitaet Berlin (TUB)
  • Consiglio Nazionale Delle Ricerche (CNR)
  • Fundación Imdea Software (IMDEA)
  • Atos Spain S.A. (ATOS)
  • Zürcher Hochschule Für Angewandte Wissenschaften (ZHAW)
  • Tikal Technologies S.L. (NAEVATEC)
  • IBM Ireland Limited (IBM IRE)
  • Production Trade And Support of Machinable Products of Software and Informatics – Relational Technology A.E. (RELATIONAL)

For more information on the ElasTest project visit our ElasTest section!

ElasTest on the EU Portal

ElasTest

An elastic platform for testing complex distributed large software systems.

Summary:

The most limiting factor in software development today is validation, which typically requires very costly and complex testing processes. ElasTest aims to significantly improve the efficiency and effectiveness of the testing process. The project will build an elastic platform that eases the testing phase for large, distributed software systems, in order to reduce the time-to-market of software projects and to increase their quality, quality of service (QoS) and quality of experience (QoE). ElasTest (Project ID: 731535) will also develop a novel test orchestration theory and toolbox enabling the creation of complex test suites as the composition of simple testing units (T-Jobs).

ElasTest started in January 2017. Its consortium comprises 10 partners, including IBM, Atos and the Technical University of Berlin, and is coordinated by Universidad Rey Juan Carlos (URJC).

In this project ZHAW is going to be working on the development of the ElasTest Platform Manager (EPM) and the ElasTest Test Orchestration and Recommendation Manager (TORM).

The EPM is the interface between the ElasTest testing components (e.g. the TORM, Test Support Services) and the cloud infrastructure where ElasTest is deployed. It must abstract the underlying cloud services so that ElasTest remains fully agnostic to them, and it must expose this abstraction to its users via software development kits and REST APIs. The objective of the EPM work is to implement such a Platform Manager, enabling ElasTest to be deployed on a target cloud infrastructure (e.g. OpenStack, CloudStack, AWS).
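
To make the idea of such an abstraction more tangible, here is a minimal sketch of how a testing component might talk to a REST-style platform manager. The endpoint path, port and payload fields are assumptions for illustration only, not the actual EPM API:

```python
# Hypothetical sketch only: the endpoint path, port and payload fields are
# assumptions for illustration, not the actual EPM API.
import json
import urllib.request

EPM_URL = "http://localhost:8180/v1"  # assumed address of a platform-manager service


def deploy_package(package_descriptor):
    """Ask the platform manager to deploy a package on whichever cloud it wraps."""
    request = urllib.request.Request(
        EPM_URL + "/packages",
        data=json.dumps(package_descriptor).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    # The caller never needs to know whether this ends up on OpenStack,
    # CloudStack or AWS: that is the point of the abstraction.
    print(deploy_package({"name": "my-sut", "resources": [{"image": "nginx"}]}))
```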

The TORM is the brain of ElasTest and the main entry point for developers. Building it requires identifying, specifying and implementing a number of interfaces through which the TORM exposes its capabilities to testers. These interfaces include the following:

  • SuT (Software under Test) specification. Developers need to be able to specify their SuT so that the TORM can execute the tests on it (a hypothetical example follows this list).
  • Engine Hosting. The TORM enables engines to be plugged as modules.
  • Development APIs and interfaces. As the main entry point for testers, the TORM needs to expose appropriate interfaces and tools that enable developers to consume the different capabilities of the platform.
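
To make the first item more concrete, a SuT specification can be thought of as a structured description of how to deploy the system and where to reach it, which a T-Job then references. The sketch below is purely illustrative; the field names are invented and do not reflect the actual TORM format:

```python
# Purely illustrative data structures: the field names are invented and do not
# reflect the actual TORM specification format.
sut_spec = {
    "name": "webshop",
    "deployment": {
        "image": "registry.example.org/webshop:1.2",  # assumed image location
        "instances": 3,
    },
    "endpoints": {"http": "http://webshop.test:8080"},
}

# A T-Job would then reference the SuT by name and carry the actual test logic,
# here represented as a container image plus the environment it needs.
tjob = {
    "sut": sut_spec["name"],
    "test_image": "registry.example.org/webshop-tests:1.2",
    "environment": {"TARGET_URL": sut_spec["endpoints"]["http"]},
}

print(tjob)
```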

Coordinator: Universidad Rey Juan Carlos (URJC)

Consortium:

  • Universidad Rey Juan Carlos (URJC)
  • Fraunhofer Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. (Fraunhofer)
  • Technische Universitaet Berlin (TUB)
  • Consiglio Nazionale Delle Ricerche (CNR)
  • Fundación Imdea Software (IMDEA)
  • Atos Spain S.A. (ATOS)
  • Zürcher Hochschule Für Angewandte Wissenschaften (ZHAW)
  • Tikal Technologies S.L. (NAEVATEC)
  • IBM Ireland Limited (IBM IRE)
  • Production Trade And Support of Machinable Products of Software and Informatics – Relational Technology A.E. (RELATIONAL)

Snafu – The Swiss Army Knife of Serverless Computing

by Josef Spillner

The Service Prototyping Lab at Zurich University of Applied Sciences is committed to advancing the state of technology for bringing applications to the cloud, for the benefit of society at large in general and of the local industry in particular. This obliges us to closely monitor industrial trends along with academic advances. A hot topic currently found in both is the higher-PaaS-level service class of FaaS, or Function-as-a-Service, which coincides with the marketing term Serverless Computing. We have already contributed analytical work on finding the limits and possibilities of today’s FaaS systems (preprint), and engineering work on translating legacy monolithic code into fine-grained functions (preprint). It was only a matter of time until the limits of both commercially operated FaaS services and open-source FaaS prototypes became too severe for our work. Hence, after a careful analysis of what is available, we decided to come up with an alternative FaaS host process design. The design led to an architecture, and the architecture eventually to an implementation called Snafu. This post presents Snafu and positions it as a Swiss Army Knife for situations in which functions should be prototyped, tested or hosted.
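
For readers new to the FaaS model, the unit of deployment is a single function with a conventional entry point that the host process imports and invokes on demand. The snippet below is a minimal sketch in the widely used Lambda-style event/context convention; how exactly a given host (Snafu included) wraps and invokes such a function is host-specific:

```python
# Minimal function in the common Lambda-style convention (event + context).
# Whether a particular FaaS host accepts exactly this signature is host-specific;
# the point is only to illustrate the granularity of a "function" in FaaS.
def lambda_handler(event, context):
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": "Hello, {}!".format(name)}


if __name__ == "__main__":
    # Local invocation for quick prototyping, without any FaaS host involved.
    print(lambda_handler({"name": "Snafu"}, None))
```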

Continue reading

T-Nova final year demo and project review outcome

The ICCLab T-Nova members are back from the 3rd and final project review meeting, which was held on March 16th in Athens, Greece. A couple of days before that, we were involved in the preparation and setup of the final demos at the N.C.S.R. Demokritos premises.

Several demonstrations were shown to the reviewers during the demo session to highlight the achievements of the last year of the project. Moreover, a poster session was held in parallel to stress the contributions in terms of conference and journal papers published within the project’s scope (Figure 1).

Continue reading

NetSys’17 conference report

by Josef Spillner

NetSys is a biennial conference covering networked and distributed systems. This year, NetSys’17 took place in Göttingen, Germany. We attended and report here on some of the research and technology trends, obviously with a focus on our own research directions. Apologies in advance for not giving a full account of the whole conference, as our presence was limited by lecturing duties.

Continue reading

New Release of DISCO – easier than ever, more powerful than before

Almost one year ago, the first version of DISCO was publicly released. Since then, a major refactoring of DISCO has taken place, and we are proud to announce a fresh version with even better usability and a user-friendly dashboard. But first of all, how can DISCO help you? And what is new after the refactoring? We would like to show you how DISCO can make your life as a Big Data analyst much easier. A short wrap-up is given before the new features are explained in more detail.

How can DISCO help me?

DISCO is a framework for the automatic deployment of distributed computing clusters. But not just that: DISCO also provisions the distributed computing software itself. You can lean back and let DISCO handle this tedious task so that you can focus entirely on the Big Data analysis part.

The new DISCO framework – even more versatile

What is new in the new DISCO edition? In short: almost everything! Here is a list of the major new features:

  • Dashboard to hide the command line
  • easy setup for front-end and backend
  • many more Distributed Computing Frameworks
  • hassle-free extensibility with new components
  • automatic dependency handling for components
  • more intuitive commands over CRUD interface (though still no update functionality)

The Dashboard – a face for DISCO

The new dashboard hides the entire background complexity from the end user. Everything from planning over deployment to deletion can now be done via an intuitive web interface. The dashboard also provides real-time information about the status of the frameworks installed on your computing cluster.

Easy setup

Installing DISCO has never been easier! The backend only needs three settings to be entered, two of which are not even external settings. And the dashboard? It even comes with its own installation script – so the most difficult part is cloning the GitHub repository.

New Distributed Computing frameworks

The first version of DISCO could only provision Hadoop. The new release supports more frameworks, most importantly another major distributed computing framework. Here is a list of all currently supported frameworks:

Extensibility made easy

Is there a framework that you would like to provision but that is not yet implemented in DISCO? That is no longer a problem! The new system is very easy to extend with new components. You can simply write a new component (for instance by copying and modifying an existing one) and drop its directory in next to the other components. No installation is needed; the new component can be deployed immediately. DISCO also has built-in functionality that greatly speeds up provisioning – everything is done in parallel across the entire cluster. Just take a look at the wiki for further reference.
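
As a rough illustration of what such a pluggable component might look like, here is a hypothetical skeleton; the class layout, method names and dependency list are invented for this sketch and are not the actual DISCO interface (see the wiki for that):

```python
# Hypothetical component skeleton: class and method names are invented for
# illustration; the real component interface is documented in the DISCO wiki.
class ExampleComponent:
    """Provisions an imaginary framework on an already deployed cluster."""

    # Components this one relies on; dependency handling of the kind described
    # in the next section would ensure these are provisioned first.
    dependencies = ["java", "hadoop"]

    def install(self, hosts):
        # In the real system this would run on all hosts in parallel.
        for host in hosts:
            print("installing example framework on {}".format(host))

    def configure(self, hosts, master):
        for host in hosts:
            print("pointing {} at master node {}".format(host, master))


if __name__ == "__main__":
    component = ExampleComponent()
    component.install(["node-1", "node-2"])
    component.configure(["node-1", "node-2"], master="node-1")
```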

Dependency handling automated

When it comes to dependencies among the frameworks, things can easily get complicated – unless you are using DISCO. DISCO automatically installs each required component for a smooth provisioning process. You don’t have to bother with questions about which additional components to install: you just select the ones you need, and DISCO takes care of the rest.

Future work

DISCO has taken a huge leap forward over the last year. Still, there are ideas for improving and extending it beyond its current state. In the future, DISCO will not only provision distributed computing clusters but also work out on its own what the end user needs for the task at hand: a recommendation engine will propose the best-fitting distributed computing frameworks based on a completed questionnaire. Of course, as the world of distributed computing frameworks keeps evolving, more components will be included along the way. This doesn’t mean that DISCO will get more complicated – on the contrary, the dashboard will make the choice of frameworks and settings easier than ever. We already have many ideas for providing an even better user experience. Just wait and see the new additions! Don’t forget to check back regularly or to sign up for our mailing list for news, and if there is something we have missed (or something you particularly like), please contact us – we will happily help you!

The DISCO 2.0 release can be downloaded from our git repository at https://github.com/icclab/disco, and extensive documentation is available in the GitHub wiki at https://github.com/icclab/disco/wiki. We wish you happy testing!

Rancher – initial experience report

In the context of the FINEXT project, we have been reviewing Rancher as a tool to support easy deployment of FIWARE components. (Our colleagues in the project have more experience with this tool – we’re still climbing the learning curve). Here are a few observations relating to Rancher.

The primary problem that Rancher solves is management of potentially disparate (sets of) IaaS resources to provide support for deploying containerized applications. Another important aspect of the Rancher vision is the application catalog – a well defined set of containerized applications that can be deployed to a container platform.

This is, of course, a very noisy area with much technology competition: the Rancher team developed their own orchestration framework – Cattle – but it was clear some time ago that there would be many different orchestration frameworks, and they intelligently decided to integrate with other platforms that were gaining traction. Specifically, they provide support for Kubernetes, Swarm and Mesos.

While playing with Rancher to understand how it works, we looked at how it supports three use cases:

  • Deployment of applications on IaaS with Cattle based container management
  • Deployment of applications on IaaS with Kubernetes container management
  • Deployment of applications on IaaS with Swarm container management

Although the Mesos case is also interesting (and we like Mesos!), we decided not to consider it as Mesos does not currently have as much momentum as the other technologies.

The basic Rancher approach

Before discussing our initial observations, it is appropriate to give some details on key concepts in Rancher.

Rancher supports so-called Environments, which are defined by a specific orchestration framework (e.g. Cattle, Kubernetes or Swarm) and comprise a set of Hosts. Applications can be deployed to Environments from the Application Catalog; obviously, Applications need to be defined in a manner that is compatible with the Environment’s orchestration mechanism – for Cattle and Swarm, applications can be defined in docker-compose format, while Kubernetes Environments require a different, pod-based format.

Hosts are typically VMs that run on cloud platforms. Although it is possible to configure these manually, the intended use case is that they are created by docker-machine. docker-machine contains drivers for many different hosting providers, and Rancher leverages these to enable Hosts to be provisioned on a wide range of providers. Rancher provisioning is quite complex, but generally it comprises deploying the rancher-agent, which enables Rancher to monitor and control the Host, and deploying an overlay network, which enables the Host to communicate with the other Hosts in the Environment.

The workflow, then, is one in which rancher-server is deployed first (typically in a VM). In rancher-server, an Environment is created, Hosts are added to the Environment, and Applications can then be deployed from the catalog. Note that rancher-server can – and typically should – manage multiple different Environments. Rancher provides good support for monitoring the state of the system: for example, it is straightforward to see all containers running on the Hosts, their logs, and whether they are in an error state.
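
As an illustration of how this monitoring can also be consumed programmatically, the sketch below queries a rancher-server for its Hosts over the REST API; the API root, version and field names are assumptions that may differ between Rancher releases, so check the API browser of your own rancher-server:

```python
# Sketch of querying rancher-server for the Hosts it manages via its REST API.
# The API root, version and field names vary between Rancher releases; treat
# them as assumptions and check the API browser of your own rancher-server.
import base64
import json
import urllib.request

RANCHER_URL = "http://rancher.example.org:8080/v1"  # assumed API root
ACCESS_KEY, SECRET_KEY = "accessKey", "secretKey"   # API key pair created in the UI


def list_hosts():
    request = urllib.request.Request(RANCHER_URL + "/hosts")
    # Rancher API keys are presented as HTTP basic-auth credentials.
    token = base64.b64encode("{}:{}".format(ACCESS_KEY, SECRET_KEY).encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("data", [])


if __name__ == "__main__":
    for host in list_hosts():
        print(host.get("hostname"), host.get("state"))
```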

Standard Cattle Management

Standard Cattle management is the most developed Rancher capability. In this mode, Cattle – running in rancher-server – is responsible for orchestrating the application on the Hosts. rancher-agent runs on each of the Hosts in privileged mode and hence has the power to create and destroy containers on each Host. rancher-server communicates with rancher-agent over a websocket connection to obtain the state of the Host.

The Application Catalog for this mode is well developed. Rancher comes with a default Application Catalog and supports importing applications into the catalog. Further, it supports the use of private Docker registries, as it is clear that many applications will not be public. In a Cattle Environment, the Application Catalog is prominent (a menu item at the top of the screen). Applications comprise three files:

  • docker-compose.yml: contains the standard information for launching and managing a multi-container service
  • rancher-compose.yml: contains the descriptor for the application in the service catalog and the parameters it requires, as well as information pertaining to deploying the application over multiple VMs, scaling, health checks, etc.
  • Answers.txt: contains the default values of the parameters required to launch the application

We did not experiment much with Cattle orchestration, but documentation indicates that it is a sensible orchestration framework which deploys applications in a balanced manner across the Hosts in the Environment.

Rancher with Kubernetes

Rancher also supports Environments based on Kubernetes. As such, Rancher supports rapid and easy deployment of a Kubernetes cluster across disparate hosts: the Environment is created in Rancher and Hosts are added via docker-machine.

As with the Cattle deployment, rancher-agent is deployed on all nodes in the cluster: this enables Rancher to have full visibility of each of the nodes in the Environment – what containers are running etc. It is also used in the process of deploying the Kubernetes environment.

It took us some time to understand the role of the Application Catalog in the Kubernetes context. Rancher has some support for a Kubernetes Application Catalog, and this catalog differs from the one available for standard Cattle Environments – applications are described in terms of pods – but we found that deploying these applications did not work.

The Kubernetes cluster itself was deployed successfully and was usable. Rancher offers a web-based CLI through which applications can be deployed to the cluster (with both kubectl and helm); applications can, of course, also be deployed outside of Rancher’s Kubernetes interface, with Rancher making the Kubernetes credentials available for use with kubectl.

Rancher with Swarm

Rancher support for Swarm is similar to that of Kubernetes in the sense that the primary focus is on managing the Hosts in the Swarm Environment. Rancher provides support for bringing up a Swarm and enabling it to be controlled via the standard Swarm toolset.

It is worth noting that we did have some confusion working with Applications in Swarm. The Swarm deployment mode has the capability to deploy applications from a catalog, although it is not so prominent in the interface. It took us some time to realize that this was not the intended deployment mode – this mechanism uses Cattle for the application deployment rather than Swarm. This was non-obvious – the applications were docker-compose applications and, as such, we assumed that they could be deployed via Swarm. Deploying the applications appeared to work in the sense that they were visible in Rancher, but on closer inspection we found irregularities. Specifically, the application was not deployed in ‘managed’ mode, even though this was stipulated in the Application Catalog; also, docker service ls did not show the application.

The limitations noted above most probably arise because Swarm support is still experimental and will be resolved as the solution matures.

Another noteworthy point relating to Swarm usage is that Rancher provides a very useful interface to both the containers and nodes within the Swarm: this can be used to understand current state and perform troubleshooting. Unlike the Kubernetes environment, the Swarm environment has no such standard tool and Rancher provides significant value add here.

Final Comments

Rancher is a useful, interesting and evolving platform. It focuses primarily on the important problem of bridging between the classical VM/IaaS world and the newer container ecosystems; another important aspect of the Rancher vision is application management. As the world of container ecosystems evolves rapidly – with some technologies themselves offering key parts of Rancher’s vision – it will be challenging for Rancher to span all aspects of application management, from VM management to container management to application deployment and management. However, the technology has gained some momentum and solves a real problem, and hence it is likely to be around for a while.

(Thanks to Bruno and Martin for reviewing this!)