Tag: containers

Migrate OpenShift applications with os2os

by Josef Spillner

One of the desirable properties which users expect in a modern cloud-hosted application is portability. Users want to migrate portable applications between private and public clouds or between different cloud regions. With container images as portable application implementations and increasingly sophisticated container runtimes, this should be an easy task. But as a containerised application becomes more complex, a container platform or an orchestration tool needs to be deployed. This adds platform-specific blueprints which, together with the persistent data, make the migration of the application tough. As a result, the application can no longer be moved easily between clouds, orchestration tools or container platforms, and loses the desirable portability property. With the idea in mind that the next generation of Cloud-Native Applications must be deployable to different cloud providers as the requirements change, we are proud to announce the first proof-of-concept release of os2os, a tool to migrate cloud-native applications between OpenShift installations. While our research on application migration is not limited to this single container platform, we see it as one of the more popular and technically interesting ones.
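
To give a feel for the problem, here is a rough, manual sketch of the kind of steps such a migration involves when done by hand with the standard oc client (the cluster URLs and the project name are placeholders, and the contents of persistent volumes would still need to be copied separately); os2os aims to automate and refine steps of this kind, and its actual commands are documented in its repository:

    # Log in to the source OpenShift cluster and export the application's
    # API objects (project name "myapp" and the URLs are placeholders)
    oc login https://source.openshift.example.com
    oc project myapp
    oc get dc,svc,route,pvc -o yaml > myapp-objects.yaml

    # Log in to the target cluster and recreate the objects there;
    # persistent volume data is not transferred by this step
    oc login https://target.openshift.example.com
    oc new-project myapp
    oc apply -f myapp-objects.yaml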

Continue reading

Cloud-Native Microservices Reference Architecture

by Josef Spillner

How are cloud-native applications engineered? Despite the increasing popularity of the topic, there are surprisingly few reference applications available. In the previous blog post we described a first version of a prototypical document management application composed of containers, called ARKIS Microservices. We elaborated on the challenges involved in designing and developing a cloud-native application. In addition, we showed some details about the architecture and functionality of version 2.5 of this generic reference application.

In this blog post, we now dive deeper into the architecture of the latest version 3.3, paying attention to each component. The document management software is a cloud-native application based on a microservices architecture. The software permits multiple tenants to manage their documents (create, read, update and delete documents, and search for patterns in them). It manages the different tenants and offers different isolation models for storing a tenant's documents. Furthermore, the services are discoverable through declarative service descriptions, and their use is billed according to a pay-per-use scheme.

Continue reading

Rancher – initial experience report

In the context of the FINEXT project, we have been reviewing Rancher as a tool to support easy deployment of FIWARE components. (Our colleagues in the project have more experience with this tool – we’re still climbing the learning curve). Here are a few observations relating to Rancher.

The primary problem that Rancher solves is management of potentially disparate (sets of) IaaS resources to provide support for deploying containerized applications. Another important aspect of the Rancher vision is the application catalog – a well defined set of containerized applications that can be deployed to a container platform.

This is, of course, a very noisy area with much technology competition: the Rancher team developed their own orchestration framework – Cattle – but it became clear some time ago that there would be many different orchestration frameworks, and they intelligently decided to integrate with other platforms which were gaining traction. Specifically, they provide support for Kubernetes, Swarm and Mesos.

While playing with Rancher to understand how it works, we looked at how it supports three use cases:

  • Deployment of applications on IaaS with Cattle based container management
  • Deployment of applications on IaaS with Kubernetes container management
  • Deployment of applications on IaaS with Swarm container management

Although the Mesos case is also interesting (and we like Mesos!), we decided not to consider it as Mesos does not currently have as much momentum as the other technologies.

The basic Rancher approach

Before discussing our initial observations, it is appropriate to give some details on key concepts in Rancher.

Rancher supports so-called Environments, which are defined by a specific orchestration framework (e.g. Cattle, Kubernetes, Swarm) and comprise a set of Hosts. Applications can be deployed to Environments from the Application Catalog; obviously, Applications need to be defined in a manner that is compatible with the Environment’s orchestration mechanisms – for Cattle and Swarm, applications can be defined using the docker-compose format, while Kubernetes Environments require a different, pod-based format.

Hosts are typically VMs that run in cloud platforms – although it is possible to configure these manually, the intended use case is that they are created by docker-machine. docker-machine contains drivers for many different hosting providers, and Rancher leverages these to enable Hosts to be provisioned on a wide range of providers. Rancher provisioning is quite complex, but generally it comprises deploying rancher-agent, which enables Rancher to monitor and control the Host, and deploying an overlay network which enables the Host to communicate with the other Hosts in the Environment.
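
For illustration, provisioning a Host by hand with docker-machine might look like the minimal sketch below; in practice Rancher drives docker-machine itself through its UI and API, and cloud drivers such as amazonec2 or openstack require provider-specific credentials and flags:

    # Create a local test Host (the virtualbox driver needs no credentials);
    # for cloud providers, a corresponding driver and its flags are used instead
    docker-machine create --driver virtualbox rancher-host-1

    # Point the local Docker client at the new Host and verify it is reachable
    eval $(docker-machine env rancher-host-1)
    docker info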

The workflow, then, is one in which rancher-server is first deployed (typically in a VM). In rancher-server an Environment is created, Hosts are added to the Environment, and then Applications can be deployed from the catalog. Note that rancher-server can – and typically should – manage multiple different Environments. Rancher provides good support for monitoring the state of the system: for example, it is straightforward to see all containers running on the Hosts, their logs, and whether they are in an error state.
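
As a minimal sketch of the first step of this workflow (the image version tag is omitted here and would normally be pinned):

    # Start rancher-server on a management VM; the UI then becomes
    # reachable on http://<server-ip>:8080
    docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

    # In the UI: create an Environment, add Hosts by running the generated
    # rancher-agent registration command on each Host, then deploy
    # Applications from the catalog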

Standard Cattle Management

Standard Cattle Management is the most developed Rancher capability. In this mode, Cattle – running in rancher-server – is responsible for orchestrating the application on the Hosts. rancher-agent runs on each of the Hosts in privileged mode and hence has the power to create and destroy containers on each Host. rancher-server communicates with rancher-agent over a WebSocket connection to obtain the state of the Host.

The Application Catalog for this mode is well developed. Rancher comes with a default Application Catalog and supports importing applications into the catalog. Further, it supports the use of private Docker registries, as it is clear that many applications would not be public. In the Cattle Environment, the Application Catalog is prominent (a menu item at the top of the screen). Applications comprise three files (a minimal sketch follows the list):

  • docker-compose.yml: contains the standard information for launching and managing a multi-container service
  • rancher-compose.yml: contains the descriptor for the application in the service catalog, the parameters it requires, as well as information pertaining to deploying the application over multiple VMs, scaling, health checks, etc.
  • Answers.txt: contains the default values of the parameters required to launch the application
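
As a minimal, hypothetical sketch of such a catalog entry (the service, field names and values below are placeholders, following the Rancher 1.x catalog conventions as we understand them):

    # docker-compose.yml: a single placeholder service; ${public_port}
    # is filled in from the catalog question defined below
    cat > docker-compose.yml <<'EOF'
    version: '2'
    services:
      web:
        image: nginx:stable
        ports:
          - "${public_port}:80"
    EOF

    # rancher-compose.yml: catalog metadata and the question that
    # produces the public_port parameter
    cat > rancher-compose.yml <<'EOF'
    .catalog:
      name: demo-web
      version: "0.1"
      description: Minimal demo catalog entry
      questions:
        - variable: public_port
          label: Public port
          type: int
          default: 8081
    EOF

    # Answers.txt: default values for the parameters
    cat > Answers.txt <<'EOF'
    public_port=8081
    EOF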

We did not experiment much with Cattle orchestration, but the documentation indicates that it is a sensible orchestration framework which deploys applications in a balanced manner across the Hosts in the Environment.

Rancher with Kubernetes

Rancher also supports Environments based on Kubernetes. As such, Rancher supports rapid and easy deployment of a Kubernetes cluster across disparate hosts: the Environment is created in Rancher and Hosts are added via docker-machine.

As with the Cattle deployment, rancher-agent is deployed on all nodes in the cluster: this enables Rancher to have full visibility of each of the nodes in the Environment – what containers are running etc. It is also used in the process of deploying the Kubernetes environment.

It took us some time to understand the role of the Application Catalog in the Kubernetes context. Although Rancher has some support for a Kubernetes Application Catalog – which differs from that available for standard Cattle Environments, since applications are described in terms of pods – we found that deploying these applications did not work.

The Kubernetes cluster was deployed successfully and was usable. Rancher offers a web-based CLI through which applications can be deployed to the cluster (with both kubectl and helm); applications can of course also be deployed outside of this interface, since Rancher makes the Kubernetes credentials available for use with a locally installed kubectl.
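
For illustration, once the credentials are configured, interacting with the Rancher-provisioned cluster follows the usual Kubernetes workflow; the deployment name and image below are placeholders:

    # Verify that the Hosts added through Rancher have joined the cluster
    kubectl get nodes

    # Deploy and expose a simple test application
    kubectl run hello-web --image=nginx:stable --port=80
    kubectl expose deployment hello-web --type=NodePort --port=80

    # Inspect the resulting pods and the service endpoint
    kubectl get pods,services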

Rancher with Swarm

Rancher support for Swarm is similar to that of Kubernetes in the sense that the primary focus is on managing the Hosts in the Swarm Environment. Rancher provides support for bringing up a Swarm and enabling it to be controlled via the standard Swarm toolset.

It is worth noting that we did have some confusion working with Applications in Swarm. The Swarm deployment mode has the capability to deploy applications from a catalog, although it is not so prominent in the interface. It took us some time to realize that this was not the intended deployment mode – this mechanism uses Cattle for the application deployment rather than Swarm. This was non-obvious – the applications were docker-compose applications and, as such, we assumed that they could be deployed via Swarm. Deploying the applications appeared to work in the sense that they were visible in Rancher, but on closer inspection we found irregularities. Specifically, the application was not deployed in ‘managed’ mode, even though this was stipulated in the Application Catalog; also, docker service ls did not show the application.
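
To check whether a workload is actually orchestrated by Swarm itself rather than by Cattle, the standard Swarm tooling on a manager node can be compared with what Rancher shows; a natively deployed stack would appear as follows (the stack name and compose file are placeholders):

    # Deploy a compose file natively as a Swarm stack (run on a manager node)
    docker stack deploy -c docker-compose.yml demo-stack

    # Services managed by Swarm itself are listed here; the catalog
    # applications deployed via Cattle did not appear in this output
    docker service ls
    docker stack ps demo-stack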

The limitations noted above most probably arise because Swarm support is still experimental and will be resolved as the solution matures.

Another noteworthy point relating to Swarm usage is that Rancher provides a very useful interface to both the containers and nodes within the Swarm: this can be used to understand the current state and perform troubleshooting. Unlike the Kubernetes environment, the Swarm environment offers no such standard tooling of its own, and Rancher provides significant added value here.

Final Comments

Rancher is a useful and interesting evolving platform. It focuses primarily on the important problem of bridging between the classical VM/IaaS world and the newer container ecosystems; another important aspect of the Rancher vision is application management. As the world of container ecosystems is evolving rapidly – with some technologies offering key parts of Rancher’s vision themselves – it will be challenging for Rancher to span everything from VM management to container management to application deployment and management. Nevertheless, the technology has gained some momentum and solves a real problem, and hence it is likely to be around for a while.

(Thanks to Bruno and Martin for reviewing this!)

Cloud-Native Document Management

by Josef Spillner

Document management is an established software-powered business domain. As with most software applications, an ongoing trend is to move the functionality of document management systems (DMS) and related systems (Enterprise Content Management – ECM, Content Services – CS) into well-defined services, primarily in cloud environments, resulting in cloud-native document management systems/services (CNDMS).

In the context of the research initiative on cloud-native applications and the ARKIS project within the Service Prototyping Lab, we have been looking deeper into the issues surrounding cloud-native document management and have built a prototype implementation to test-drive ideas and new concepts. This post introduces the software and the challenges already solved with it. Continue reading

Container management with Kubernetes: Practical Example

by Josef Spillner

Following the series that we started with the Vamp blog post, we proceed to take a look at one more container management tool, including running a simple practical example while paying attention to the main advantages and limitations. This series happens in the context of the work on cloud-native applications in the Service Prototyping Lab, exploring how easily developers can decompose their applications and fit them into the emerging platforms.

On this occasion, we inspect Kubernetes, one of the most popular open-source container orchestration tools for production environments. Kubernetes builds upon 15 years of experience of running production workloads at Google. Moreover, the Kubernetes community appears to be the biggest among all the open-source container management communities. Kubernetes provides a Slack channel with more than 8000 users, who share ideas and often are Kubernetes engineers themselves. Also, one can find community support on Stack Overflow using the tag kubernetes. In the GitHub repository, we can see more than 970 contributors, 1500 watchers, 18500 stars and 6000 forks. In the community it is popular to abbreviate the system as K8s.

Continue reading

Container Management with Vamp: Practical Example

by Josef Spillner

In the world of containerized architectures, there are various new container deployment and orchestration tools which help turn monolithic applications into running composite microservices. Some of them are intended to be used in a development environment, like Docker Compose, or in a production environment, like Kubernetes, Docker Swarm or Marathon. We can also observe some tools executing atop other container schedulers, like Rancher or Vamp. In this blog post, we take a look at the latter, while at the same time we continue to inspect the alternatives in order to eventually compare all solutions.

Continue reading

Benchmarking cloud-native database systems

by Josef Spillner

For reliable, user-controlled and trustworthy file storage in the cloud, free software prototypes like NubiSave have become great tools to investigate and lower the barrier towards acceptable migration paths. For structured data storage and processing, several approaches to database-as-a-service (DBaaS) have been proposed by researchers and developers, but a clear recommendation on how to best manage rows or records of data in the cloud from a practical angle is still absent. Part of the question is due to the different pricing structures and availability guarantees of the providers, which are not trivial to compare. Often, running the database system as a set of replicated or sharded containers that are part of the application appears to be a valid alternative to binding to an existing commercial DBaaS, if done correctly. After all, cloud providers would offer the same technical guarantees for any of their services. An analysis of which configuration works better and is less expensive is thus needed.

Continue reading

Process Management in Docker Containers

In the context of the Cloud Native Applications (CNA) Seed Project we are working on migrating an open-source CRM application to the cloud. After enabling horizontal scalability of the original application and moving it onto our OpenStack cloud with the help of CoreOS (etcd, fleet) and Docker, we’ve now just finished adding the monitoring / logging / log-collection functionality – a blog post describing this process in detail will follow – which is needed for the next part of the project: enabling automatic scaling. As part of this process we’ve learnt some lessons concerning process management in Docker containers which we’d like to share in this post.
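
As a generic illustration of why process management inside containers matters (the container name below is a placeholder, and ps must be available inside the image), whichever process runs as PID 1 is the one that receives the signals Docker sends at shutdown:

    # Inspect the process tree inside a running container; PID 1 is the
    # process that receives SIGTERM when the container is stopped
    docker exec crm-app ps -o pid,ppid,args

    # docker stop sends SIGTERM to PID 1 and, if the container does not exit
    # within the grace period (10 seconds by default), follows up with SIGKILL
    docker stop crm-app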

Continue reading