Tag: docker (page 1 of 2)

Supporting Container-based Application Deployment to Heterogeneous Hardware using Rancher and Swarm

Rancher is a container management platform focused on delivering containers on any infrastructure. It supports multiple environments, each of which can use one of the several container orchestrators available (at the time of this writing) – Cattle, Kubernetes, Swarm and Mesos. In our previous blog post we showed how we built rancher-agent containers for arm64 systems in a Cattle environment. We have made a few improvements since then, mostly on porting a Swarm environment to arm64, and these are documented in this blog post. Continue reading

An overview of networking in Rancher using Cattle

As noted elsewhere, we’re looking at Rancher in the context of one of our projects. We’ve been doing some work on enabling it to work over heterogeneous compute infrastructures – one of which could be an ARM-based edge device and another a standard x86_64 cloud execution environment. Some of our colleagues were asking how the networking works – we had not looked into this in much detail, so we decided to find out. It turns out it’s pretty complex.

Continue reading

Snafu – The Swiss Army Knife of Serverless Computing

by Josef Spillner

The Service Prototyping Lab at Zurich University of Applied Sciences is committed to advancing the state of technology for bringing applications to the cloud, for the benefit of society at large in general and of the local industry in particular. This obliges us to closely monitor industrial trends along with academic advances. A hot topic currently found in both is the higher-PaaS-level service class of FaaS, or Function-as-a-Service, which coincides with the marketing term Serverless Computing. We have already contributed analytical work on finding the limits and possibilities of today’s FaaS systems (preprint), and engineering work on translating legacy monolithic code into fine-grained functions (preprint). It was only a matter of time until the limits of both commercially operated FaaS services and open-source FaaS prototypes became too severe for our work. Hence, after a careful analysis of what is available, we decided to come up with an alternative FaaS host process design. The design led to an architecture, and the architecture eventually to an implementation called Snafu. This post presents Snafu and positions it as a Swiss Army Knife for situations in which functions should be prototyped, tested or hosted.

Continue reading

Rancher – initial experience report

In the context of the FINEXT project, we have been reviewing Rancher as a tool to support easy deployment of FIWARE components. (Our colleagues in the project have more experience with this tool – we’re still climbing the learning curve). Here are a few observations relating to Rancher.

The primary problem that Rancher solves is management of potentially disparate (sets of) IaaS resources to provide support for deploying containerized applications. Another important aspect of the Rancher vision is the application catalog – a well defined set of containerized applications that can be deployed to a container platform.

This is, of course, a very noisy area with much technology competition: the Rancher team developed their own orchestration framework – Cattle – but it was clear some time ago that there would be many different orchestration frameworks, and they intelligently decided to integrate with other platforms that were gaining traction. Specifically, they provide support for Kubernetes, Swarm and Mesos.

While playing with Rancher to understand how it works, we looked at how it supports three use cases:

  • Deployment of applications on IaaS with Cattle based container management
  • Deployment of applications on IaaS with Kubernetes container management
  • Deployment of applications on IaaS with Swarm container management

Although the Mesos case is also interesting (and we like Mesos!), we decided not to consider it as Mesos does not currently have as much momentum as the other technologies.

The basic Rancher approach

Before discussing our initial observations, it is appropriate to give some details on key concepts in Rancher.

Rancher supports so-called Environments, which are defined by a specific orchestration framework (e.g. Cattle, Kubernetes, Swarm) and comprise a set of Hosts. Applications can be deployed to Environments from the Application Catalog; obviously, Applications need to be defined in a manner that is compatible with the Environment’s orchestration mechanisms – for Cattle and Swarm, applications can be defined in docker-compose format; Kubernetes environments require a different, pod-based format.

Hosts are typically VMs that run in cloud platforms – although it is possible to configure these manually, the intended use case is that they are created by docker-machine. docker-machine contains drivers for many different hosting providers, and Rancher leverages these to enable Hosts to be provisioned on a wide range of providers. Rancher provisioning is quite complex, but generally it comprises deploying rancher-agent, which enables Rancher to monitor and control the Host, and deploying an overlay network which enables the Host to network with the other Hosts in the Environment.
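For illustration, this is roughly the docker-machine invocation that Rancher issues under the hood when a Host is added through the UI – the driver, region and instance type here are just example values:

```
# Provision a VM on EC2 and install Docker on it; once registered with
# Rancher, it becomes a Host in the Environment.
docker-machine create --driver amazonec2 \
  --amazonec2-region eu-west-1 \
  --amazonec2-instance-type t2.medium \
  rancher-host-1
```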

The workflow, then, is one in which rancher-server is deployed first (typically in a VM). In rancher-server an Environment is created, Hosts are added to the Environment, and then Applications can be deployed from the catalog. Note that rancher-server can – and typically should – manage multiple different Environments. Rancher provides good support for monitoring the state of the system: for example, it is straightforward to see all containers running on the Hosts, their logs and whether they are in an error state.
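For reference, bringing up rancher-server in the typical single-VM setup is a one-liner (this is the documented Rancher v1.x command; the UI then becomes available on port 8080):

```
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
```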

Standard Cattle Management

Standard Cattle Management is the most developed Rancher capability. In this mode, Cattle – running in rancher-server – is responsible for orchestrating the application on the Hosts. rancher-agent runs on each of the Hosts in privileged mode and hence has the power to create and destroy containers on each Host. rancher-server communicates with rancher-agent over a WebSocket connection to obtain the state of the Host.
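The registration command that the Rancher UI generates for adding a Host looks roughly like the following – the version tag, server address and token are placeholders, as the exact command is produced per Environment:

```
# Privileged mode plus the Docker socket mount is what gives the agent
# the power to create and destroy containers on the Host.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> http://<rancher-server>:8080/v1/scripts/<registration-token>
```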

The Application Catalog for this mode is well developed. Rancher comes with a default Application Catalog and supports importing applications into the catalog. Further, it supports the use of private Docker registries, as it is clear that many applications will not be public. In the Cattle Environment, the Application Catalog is clearly visible (a menu item at the top of the screen). Applications comprise three files (a minimal sketch of the first two follows the list):

  • docker-compose.yml: contains the standard information for launching and managing a multi-container service
  • rancher-compose.yml: contains the descriptor for the application in the service catalog – what parameters it requires – as well as information pertaining to deploying the application over multiple VMs, scaling, health checks etc.
  • Answers.txt: contains the default values of the parameters required to launch the application
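As a concrete illustration, here is a minimal sketch of the first two files following Rancher v1.x catalog conventions – the service name, image and question are invented for the example:

```
# docker-compose.yml: a single nginx service whose published port is a
# catalog parameter (Rancher substitutes ${HOST_PORT} at deploy time).
cat > docker-compose.yml <<'EOF'
web:
  image: nginx:alpine
  ports:
    - "${HOST_PORT}:80"
EOF

# rancher-compose.yml: catalog metadata, the question that produces the
# parameter, plus scale and a simple HTTP health check.
cat > rancher-compose.yml <<'EOF'
.catalog:
  name: hello-web
  version: "0.1.0"
  description: Minimal example web service
  questions:
    - variable: HOST_PORT
      label: Host port to publish
      type: int
      default: 80
web:
  scale: 2
  health_check:
    port: 80
    interval: 2000
    request_line: GET / HTTP/1.0
EOF
```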

We did not experiment much with Cattle orchestration, but the documentation indicates that it is a sensible orchestration framework which deploys applications in a balanced manner across the Hosts in the Environment.

Rancher with Kubernetes

Rancher also supports Environments based on Kubernetes. As such, Rancher supports rapid and easy deployment of a Kubernetes cluster across disparate hosts: the Environment is created in Rancher and Hosts are added via docker-machine.

As with the Cattle deployment, rancher-agent is deployed on all nodes in the cluster: this gives Rancher full visibility of each of the nodes in the Environment – which containers are running, etc. It is also used in the process of deploying the Kubernetes environment.

It took us some time to understand the role of the Application Catalog in the Kubernetes context. Although Rancher has some support for a Kubernetes Application Catalog – which differs from that available for standard Cattle Environments, with applications described in terms of pods – we found that deploying these applications did not work.

The Kubernetes cluster was deployed successfully and was usable. Rancher offers a web-based CLI through which applications can be deployed to the cluster (both with kubectl and helm); applications can, of course, also be deployed outside of Rancher’s web interface, with Rancher making the Kubernetes credentials available for use by a local kubectl.
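With the kubeconfig that Rancher generates copied to the local machine, the cluster can be driven as usual – the resource names below are only illustrative:

```
# Verify the Rancher-built cluster and deploy a test application.
kubectl get nodes
kubectl run nginx --image=nginx:alpine
kubectl expose deployment nginx --port=80
kubectl get pods,services
```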

Rancher with Swarm

Rancher support for Swarm is similar to that of Kubernetes in the sense that the primary focus is on managing the Hosts in the Swarm Environment. Rancher provides support for bringing up a Swarm and enabling it to be controlled via the standard Swarm toolset.

It is worth noting that we did have some confusion working with Applications in Swarm. The Swarm deployment mode has the capability to deploy applications from a catalog, although it is not so prominent in the interface. It took us some time to realize that this was not the intended deployment mode – this mechanism uses Cattle for the application deployment rather than Swarm. This was non-obvious – the applications were docker-compose applications and, as such, we assumed that they could be deployed via Swarm. Deploying the applications appeared to work in the sense that they were visible in Rancher, but on closer inspection we found irregularities. Specifically, the application was not deployed in ‘managed’ mode, even though this was stipulated in the Application Catalog; also, docker service ls did not show the application.
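For what it’s worth, this is how the discrepancy shows up in practice – an application deployed through Swarm itself appears in the standard toolset, while the Cattle-deployed catalog applications did not (stack and service names are illustrative):

```
# Deploy via Swarm on a manager node (requires a v3-format compose file).
docker stack deploy -c docker-compose.yml mystack
# Services deployed through Cattle are absent from this listing.
docker service ls
docker service ps mystack_web
```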

The limitations noted above most probably arise because Swarm support is still experimental and will be resolved as the solution matures.

Another noteworthy point relating to Swarm usage is that Rancher provides a very useful interface to both the containers and nodes within the Swarm: this can be used to understand current state and perform troubleshooting. Unlike the Kubernetes environment, the Swarm environment has no such standard tool and Rancher provides significant value add here.

Final Comments

Rancher is a useful and interesting evolving platform. It focuses primarily on the important problem of bridging from the classical VM/IaaS world to the newer container ecosystems; another important aspect of the Rancher vision is application management. As the world of container ecosystems is evolving rapidly – with some technologies offering key parts of Rancher’s vision – it will be challenging for Rancher to span all aspects of application management, from VM management to container management to application deployment and management. However, the technology has gained some momentum and solves a real problem, and hence it is likely to be around for a while.

(Thanks to Bruno and Martin for reviewing this!)

The intricacies of running containers on OpenShift

by Josef Spillner

In the context of the ECRP Project, which is part of our cloud robotics initiative, we are aiming to build a PaaS solution for robotic applications.

The “Robot Operating System” (ROS) is widely used on several robotics platforms, and also runs on the turtlebot robots in our lab. One of the ideas behind cloud robotics is to enable ROS components (so-called ROS nodes) to run distributed across the cloud infrastructure and the robot itself, so that we can shift certain parts of the robotics application to the cloud. As a logical first step we tried to run existing ROS nodes, such as a ROS master, in containers on Kubernetes; then we tried a proper Platform as a Service (PaaS) solution, in our case Red Hat OpenShift.

OpenShift offers a full PaaS experience: you can build and run code from source, or run pre-built containers directly. All of these features can be managed via an intuitive web interface.

However, OpenShift imposes tight security restrictions on the containers it runs. These are (a common Dockerfile adaptation is sketched after the list):

  • Processes in containers are prevented from running as root
  • Containers are run with a random user ID (see “Support Arbitrary User IDs”)
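A widely used pattern for coping with the arbitrary-UID model is to give the root group (GID 0, to which every OpenShift-assigned UID belongs) the same file permissions as the image’s owner. A sketch, with the base image and paths chosen purely for illustration:

```
cat > Dockerfile <<'EOF'
FROM ros:kinetic-ros-core
ENV HOME=/opt/app
WORKDIR /opt/app
# Grant GID 0 the same rights as the owner, since OpenShift runs the
# container under a random UID that is a member of the root group.
RUN mkdir -p /opt/app && chgrp -R 0 /opt/app && chmod -R g=u /opt/app
# Any non-root UID; the actual runtime UID is injected by OpenShift.
USER 1001
CMD ["roscore"]
EOF
```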

Continue reading

Rapid API generation with Ramses

by Josef Spillner

Rapid service prototyping, cloud application prototyping and API prototyping are closely related techniques which share a common goal: to get a first working prototype designed, implemented and placed online quickly, with little effort and little headache over tooling concerns. The approaches in this area are still emerging and thus often ad-hoc or even immature. Several prototyping frameworks do nevertheless show a potential to become part of serious engineering workflows. In this post, the Ramses framework will be presented and evaluated regarding this goal.

Continue reading

Monitoring an Openstack deployment with Prometheus and Grafana

Following our previous blog post, we are still looking at tools for collecting metrics from an Openstack deployment in order to understand its resource utilization. Although Monasca has a comprehensive set of metrics and alarm definitions, the complex installation process combined with a lack of documentation makes it a frustrating experience to get it up and running. Further, with its many moving parts, it was difficult to configure it to obtain the analysis we wanted from the raw data, viz. how many of our servers are overloaded over different timescales in different respects (CPU, memory, disk I/O, network I/O). For these reasons we decided to try Prometheus with Grafana, which turned out to be much easier to install and configure (taking less than an hour to set up!). This blog post covers the installation and configuration of Prometheus and Grafana in Docker containers, and how to install and configure Canonical’s Prometheus Openstack exporter to collect a small set of metrics related to an Openstack deployment.
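The shape of the setup is roughly the following – the ports are the images’ defaults, while the exporter endpoint is an assumption that depends on how the exporter is configured:

```
# Minimal Prometheus scrape config pointing at the Openstack exporter.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: openstack
    scrape_interval: 60s
    static_configs:
      - targets: ['exporter-host:9183']
EOF

# Prometheus (UI on 9090) and Grafana (UI on 3000) as containers.
docker run -d --name prometheus -p 9090:9090 \
  -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
docker run -d --name grafana -p 3000:3000 grafana/grafana
```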

Continue reading

Installing Monasca – a happy ending after much sadness and woe

In one of our projects we are making contributions to an Openstack project called Watcher, which focuses on optimizing the resource utilization of a cloud according to a given strategy. As part of this work it is important to understand the resource utilization of the cloud beforehand in order to make a meaningful contribution. This requires collecting metrics from the system and processing them to understand how the system is performing. The Ceilometer project was our default choice for collecting metrics in an Openstack deployment, but as the work has evolved we are also exploring alternatives – specifically Monasca. In this blog post I will cover my personal experience installing Monasca (which was more challenging than expected) and how we hacked the monasca/demo docker image to connect it to our Openstack deployment. Continue reading

7th Docker Switzerland User Group Meetup highlights


Last evening we organised the 7th Docker user group meetup in Bern. The kind folks from Die Mobiliar hosted and sponsored the event. Participation was incredible, with about 160 people joining.

Three talks were held at the meetup: Die Mobiliar and SBB shared their experiences of using Docker in production, and the third presentation was from Swisscom, who talked about using Docker in Cloud Foundry. The speaker took the opportunity to announce Docker support for the Swisscom Application Cloud. The slides are linked inline under the respective names. Continue reading

Challenges with running ROS on Kubernetes

The goal of the Cloud Robotics initiative of the SPLab is to ease the integration of Cloud Computing and Robotics workloads. One of the first things we need to sort out is how to leverage different networking models available on the cloud to support these mixed workloads.

In this blog post we’ll look at one handy little trick for running ROS nodes as pods (and services) in any Kubernetes cluster so that they can transparently communicate using a ROS topic.
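As a taste of the general shape – names and image are illustrative, and the full trick is in the post itself – a ROS master can be run behind a Kubernetes Service so that other nodes reach it by a stable DNS name:

```
cat > ros-master.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ros-master
  labels:
    app: ros-master
spec:
  containers:
    - name: master
      image: ros:kinetic-ros-core
      args: ["roscore"]          # keep the image entrypoint, which sources the ROS setup
      ports:
        - containerPort: 11311   # default ROS master port
---
apiVersion: v1
kind: Service
metadata:
  name: ros-master
spec:
  selector:
    app: ros-master
  ports:
    - port: 11311
EOF
kubectl apply -f ros-master.yml
# Other ROS pods can then set ROS_MASTER_URI=http://ros-master:11311
```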

Continue reading
