Tag: kubernetes

Storage & Data Analytics – Swiss 2018

On the 24th of May we attended the “Storage & Data Analytics – Swiss 2018” day, held at the Seedamm Plaza in Pfäffikon SZ.
Our interest and expertise at the ICCLab in innovative solutions in the area of Cloud Storage motivated us to join the event, with the aim of exchanging expertise with colleagues from both the industrial and the academic realms.

Welcome and introduction to the day

The program for the event offered a well-balanced mix of keynote speeches from top experts in the field of storage and data analytics, presentations from specialists and companies active in this continuously evolving market, workshops, round-tables and live demos on specific aspects of interest, and ample opportunity for networking and knowledge exchange with the participants.
Besides the keynotes, the program was organized into four sessions running in parallel. The large number of people attending the sessions and the stands run by the event’s industrial partners testified to the high interest in the topics in focus. Five major areas of interest were covered: Data Management, Data Analytics, Cloud Storage, Technology and Security. The complete program is available at https://www.storage-day.ch/

Harald Seipp (IBM) presents Storage in Container-based Cloud Infrastructure

The research and development interests at the ICCLab naturally drew us towards the presentations in the areas of Cloud Storage and Technology. The first keynote of the day, by Prof. Brinkmann from the University of Mainz, guided us through a classification of storage with a view on the future of storage. In the subsequent presentation by IBM, storage in container-based cloud infrastructures was discussed, underlining the importance of persistent storage and multi-cloud environments. Of particular interest to us was the presentation given by the company SUSE: Software Defined Storage was discussed as the de facto standard for storage in the Cloud, and the importance of open-source-based solutions was highlighted when they presented their Enterprise Storage solution based on OpenStack and Ceph. A further interesting analysis of Cloud Storage was later presented by the company Nutanix, which introduced their full-stack solution for storage in the Cloud.

As icing on the cake, the day was concluded by an insightful keynote from Moshe Rappaport, Executive Technologist at IBM Research, who guided the audience into the future, shedding light on the new disruptive technologies ahead of us. The future of storage was also predicted: it is rapidly evolving towards high-density data storage applications that require innovative research and development solutions.

Moshe Rappaport’s insightful keynote on the future of Business and IT from an IBM research perspective

In conclusion, our participation in the “Storage & Data Analytics – Swiss 2018” was well worth the time investment. The event clearly fulfilled our expectations as an important source of inspiration for our research activities and as an opportunity for networking with experts in the field. We are already looking forward to the next event of this kind!

KubeCon’18 – Cloud, containers, edge, nets, robots, and philosophy of science

KubeCon / CloudNativeCon Europe 2018 took place at the shiny Bella Center of Copenhagen on May 2 – 4, 2018.
Here at ICCLab/SPLab we make extensive use of Kubernetes / CNCF technologies in both teaching and research, but we had one extra reason for being there this year: our friends and colleagues from Rapyuta Robotics (RR) were scheduled to give a talk on Cloud Robotics PaaS.

Bella Center – Copenhagen

Continue reading

Rightsizing Kubernetes applications

by Josef Spillner

Applications are increasingly delivered for cloud deployment as a set of composite artefacts such as containers. The composition descriptions vary widely: there are Docker Compose files, Vamp blueprints, Kubernetes descriptors, OpenShift service instance templates, and more. Ideally, taking these compositions and deploying them somewhere would always work. In practice, it is more complex than that. Commercial production environments are often constrained depending on the chosen pricing plan. Many applications would still run, but due to over-estimated deployment information they do not “fit” into the target environment. In this blog post, we look at how to “right-size” an application deployed into such a constrained Kubernetes instance, and furthermore propose a tool to automate this process.
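To make the problem concrete, consider a hypothetical Kubernetes descriptor whose resource figures were estimated generously; in a namespace with a tight quota, the deployment is rejected even though the workload would run comfortably with a fraction of the request (all names and numbers below are invented for illustration):

```yaml
# Hypothetical descriptor: the requested resources (1 CPU, 1 GiB per
# replica) may exceed the quota of a constrained namespace, even
# though the container runs fine with far less.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:
            cpu: "1"      # rightsizing might lower this to 100m
            memory: 1Gi   # ... and this to 128Mi
          limits:
            cpu: "2"
            memory: 2Gi
```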

Continue reading

Rancher – initial experience report

In the context of the FINEXT project, we have been reviewing Rancher as a tool to support easy deployment of FIWARE components. (Our colleagues in the project have more experience with this tool – we’re still climbing the learning curve). Here are a few observations relating to Rancher.

The primary problem that Rancher solves is the management of potentially disparate (sets of) IaaS resources to provide support for deploying containerized applications. Another important aspect of the Rancher vision is the application catalog – a well-defined set of containerized applications that can be deployed to a container platform.

This is, of course, a very noisy area with much technology competition: the Rancher team developed their own orchestration framework – Cattle – but it was clear some time ago that there would be many different orchestration frameworks, and they intelligently decided to integrate with other platforms that were gaining traction. Specifically, they provide support for Kubernetes, Swarm and Mesos.

While playing with Rancher to understand how it works, we looked at how it supports three use cases:

  • Deployment of applications on IaaS with Cattle based container management
  • Deployment of applications on IaaS with Kubernetes container management
  • Deployment of applications on IaaS with Swarm container management

Although the Mesos case is also interesting (and we like Mesos!), we decided not to consider it as Mesos does not currently have as much momentum as the other technologies.

The basic Rancher approach

Before discussing our initial observations, it is appropriate to give some details on key concepts in Rancher.

Rancher supports so-called Environments, which are defined by a specific orchestration framework (e.g. Cattle, Kubernetes, Swarm) and comprise a set of Hosts. Applications can be deployed to Environments from the Application Catalog; obviously, Applications need to be defined in a manner that is compatible with the Environment’s orchestration mechanisms – for Cattle and Swarm, applications can be defined using the docker-compose format, while Kubernetes environments require a different, pod-based format.
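For illustration, here is a minimal application definition in docker-compose format, of the kind a Cattle or Swarm Environment accepts (service names and images are hypothetical placeholders):

```yaml
# Minimal, hypothetical multi-container application in
# docker-compose format, deployable to a Cattle or Swarm Environment.
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
```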

Hosts are typically VMs that run in cloud platforms – although it is possible to configure these manually, the intended use case is that they are created by docker-machine. docker-machine contains drivers for many different hosting providers, and Rancher leverages these to enable Hosts to be provisioned on a wide range of providers. Rancher provisioning is quite complex, but generally it comprises deploying rancher-agent, which enables Rancher to monitor and control the host, and deploying an overlay network, which enables the Host to network with the other Hosts in the Environment.

The workflow then is one in which rancher-server is deployed first (typically in a VM). In rancher-server an Environment is created, Hosts are added to the Environment, and then Applications can be deployed from the catalog. Note that rancher-server can – and typically should – manage multiple different Environments. Rancher provides good support for monitoring the state of the system: for example, it is straightforward to see all containers running on the Hosts, their logs, and whether they are in an error state.
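As a concrete starting point, rancher-server itself ships as a container; a minimal sketch for bringing it up on a management VM with docker-compose might look like this (image and port per the Rancher 1.x line we were using; everything else is an assumption):

```yaml
# Minimal sketch: run rancher-server on a management VM.
# The UI/API then becomes reachable on http://<vm-ip>:8080.
version: '2'
services:
  rancher-server:
    image: rancher/server
    ports:
      - "8080:8080"
    restart: unless-stopped
```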

Standard Cattle Management

Standard Cattle Management is the most developed Rancher capability. In this mode, Cattle – running in rancher-server – is responsible for orchestrating the application on the Hosts. rancher-agent runs on each of the Hosts in privileged mode and hence has the power to create and destroy containers on each Host. rancher-server communicates with rancher-agent over a WebSocket connection to obtain the state of the host.

The Application Catalog for this mode is well developed. Rancher comes with a default Application Catalog and supports importing applications into the catalog. Further, it supports the use of private Docker registries, as it is clear that many applications would not be public. In the Cattle Environment, the Application Catalog is evident (a menu item at the top of the screen). Applications comprise three files (see the sketch after this list):

  • docker-compose.yml: contains the standard information for launching and managing a multi-container service
  • rancher-compose.yml: contains the descriptor for the application in the service catalog and the parameters it requires, as well as information pertaining to deploying the application over multiple VMs, scaling, health checks, etc.
  • answers.txt: contains the default values of the parameters required to launch the application
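To show the shape of the catalog metadata, here is a trimmed, hypothetical rancher-compose.yml (the service name, question and health-check values are all invented for illustration):

```yaml
# Hypothetical rancher-compose.yml: catalog metadata plus
# per-service scale and health-check settings.
.catalog:
  name: demo-app
  version: "1.0.0"
  description: Example catalog entry
  questions:
    - variable: WEB_PORT
      label: Public web port
      type: int
      default: 80
web:
  scale: 2
  health_check:
    port: 80
    interval: 2000
    request_line: GET / HTTP/1.0
    healthy_threshold: 2
    unhealthy_threshold: 3
```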

We did not experiment much with Cattle orchestration, but the documentation indicates that it is a sensible orchestration framework which deploys applications in a balanced manner across the Hosts in the Environment.

Rancher with Kubernetes

Rancher also supports Environments based on Kubernetes. As such, Rancher supports rapid and easy deployment of a Kubernetes cluster across disparate hosts: the Environment is created in Rancher and Hosts are added via docker-machine.

As with the Cattle deployment, rancher-agent is deployed on all nodes in the cluster: this gives Rancher full visibility of each node in the Environment – which containers are running, etc. It is also used in the process of deploying the Kubernetes environment.

It took us some time to understand the role of the Application Catalog in the Kubernetes context. Rancher has some support for a Kubernetes Application Catalog, which differs from that available for standard Cattle Environments – applications are described in terms of pods – but we found that deploying these applications did not work.

The Kubernetes cluster was deployed successfully and was usable. Rancher offers a web-based CLI through which applications can be deployed to the cluster (with both kubectl and helm); applications can, of course, also be deployed from outside the Rancher interface: Rancher makes the Kubernetes credentials available so that they can be used with a local kubectl.
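Those credentials take the form of an ordinary kubeconfig; a trimmed, hypothetical example of what one points kubectl at (the server URL and token are placeholders, not actual Rancher output):

```yaml
# Trimmed, hypothetical kubeconfig for external kubectl access
# to a Rancher-managed Kubernetes Environment.
apiVersion: v1
kind: Config
clusters:
- name: rancher-k8s
  cluster:
    server: https://rancher.example.com/r/projects/1a5/kubernetes
users:
- name: rancher-user
  user:
    token: REPLACE_WITH_TOKEN
contexts:
- name: rancher-k8s
  context:
    cluster: rancher-k8s
    user: rancher-user
current-context: rancher-k8s
```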

Rancher with Swarm

Rancher support for Swarm is similar to that of Kubernetes in the sense that the primary focus is on managing the Hosts in the Swarm Environment. Rancher provides support for bringing up a Swarm and enabling it to be controlled via the standard Swarm toolset.

It is worth noting that we did have some confusion working with Applications in Swarm. The Swarm deployment mode has the capability to deploy applications from a catalog, although this is not so prominent in the interface. It took us some time to realize that this was not the intended deployment mode – the mechanism uses Cattle for the application deployment rather than Swarm. This was non-obvious: the applications were docker-compose applications and, as such, we assumed that they could be deployed via Swarm. Deploying the applications appeared to work, in the sense that they were visible in Rancher, but on closer inspection we found irregularities. Specifically, the application was not deployed in ‘managed’ mode, even though this was stipulated in the Application Catalog; also, docker service ls did not show the application.

The limitations noted above most probably arise because Swarm support is still experimental and will be resolved as the solution matures.

Another noteworthy point relating to Swarm usage is that Rancher provides a very useful interface to both the containers and nodes within the Swarm: this can be used to understand current state and perform troubleshooting. Unlike the Kubernetes environment, the Swarm environment has no such standard tool and Rancher provides significant value add here.

Final Comments

Rancher is a useful and interesting evolving platform. It focuses primarily on the important problem of bridging from the classical VM/IaaS world to the newer container ecosystems; another important aspect of the Rancher vision is application management. As the world of container ecosystems is evolving rapidly – with some technologies themselves offering key parts of Rancher’s vision – it will be challenging for Rancher to span all aspects of application management, from VM management to container management to application deployment and management. However, the technology has gained some momentum and solves a real problem, and hence it is likely to be around for a while.

(Thanks to Bruno and Martin for reviewing this!)

Challenges with running ROS on Kubernetes

The goal of the Cloud Robotics initiative of the SPLab is to ease the integration of Cloud Computing and Robotics workloads. One of the first things we need to sort out is how to leverage different networking models available on the cloud to support these mixed workloads.

In this blog post we’ll look at one handy little trick to have ROS nodes run as pods (and services) in any Kubernetes cluster so that they can transparently communicate using a ROS topic.
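As a flavour of what the full post covers, here is a minimal, hypothetical descriptor for the ROS master: a pod running roscore behind a Service, so that other ROS nodes can point ROS_MASTER_URI at a stable DNS name (the image tag and names are assumptions, not the exact setup from the post):

```yaml
# Hypothetical sketch: roscore as a pod, fronted by a Service.
# Other nodes would set ROS_MASTER_URI=http://ros-master:11311.
apiVersion: v1
kind: Pod
metadata:
  name: ros-master
  labels:
    app: ros-master
spec:
  containers:
  - name: roscore
    image: ros:kinetic-ros-core
    # args (not command) so the image entrypoint still sources
    # the ROS environment before exec'ing roscore.
    args: ["roscore"]
    ports:
    - containerPort: 11311
---
apiVersion: v1
kind: Service
metadata:
  name: ros-master
spec:
  selector:
    app: ros-master
  ports:
  - port: 11311
    targetPort: 11311
```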

Continue reading

OpenStack Summit – Deep Dive into Day 2

CERN OpenStack (super) User Story
CERN is looking for answers to the fundamental questions concerning the creation of the Universe and, true to its nature, this is a big data challenge. With the historic run of the LHC in 2013, their archive now contains ~100PB (with an additional ~27PB/year) on ~11,000 servers with ~75,000 disk drives and ~45,000 tapes; with the reopening of the LHC, they expect a significant increase of data in 2015. CERN recently opened a new data center in Budapest, connected to the Geneva headquarters by a T-Systems 100GbE line.
CERN currently runs four OpenStack Icehouse clouds and expects these to run 150,000 cores in total by Q1 2015. All of the non-CERN-specific code is upstream and available for anyone who would like to build on top of it in the future.
CERN puts great emphasis on collaboration. The Openlab project is a public-private partnership between CERN and major ICT companies (e.g. Rackspace) whose goal is to accelerate the development of cutting-edge cloud solutions.
OpenShift on OpenStack
RedHat and Cisco gave a demo on deploying OpenShift on OpenStack using Heat, Docker & Kubernetes. OpenShift is a PaaS offering from RedHat with both enterprise and open source versions. The rationale for deploying OpenShift on OpenStack is to maintain a high degree of flexibility and to enable faster deployment of applications. In the demo, Heat was used for orchestration. Docker’s pull and push methodology is used for getting a new image or saving a modified version that can be pulled later on; along with tagging of images, diff operations can also be performed, and Docker containers are also used as daemons. However, Docker cannot see beyond a single host and does not have the capacity to manage mass configuration and deployment. That’s where Kubernetes comes into the picture: Pods group Docker containers, and etcd is used to configure the master, which passes the configuration along to the slaves, thereby achieving mass configuration. The link to the presentation can be found here.
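To make the Pod concept concrete, here is a minimal, hypothetical descriptor grouping two co-scheduled containers (the images are placeholders chosen for illustration, not from the demo):

```yaml
# Hypothetical pod: two containers scheduled together on one host,
# sharing a network namespace – the unit Kubernetes manages.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
```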

Setup a Kubernetes Cluster on OpenStack with Heat

In this post we take a look at Kubernetes and help you set up a Kubernetes Cluster on your existing OpenStack Cloud using its Orchestration Service, Heat. This Kubernetes Cluster should only be used as a Proof of Concept.

Technology involved:

  • Kubernetes: https://github.com/GoogleCloudPlatform/kubernetes
  • CoreOS: https://coreos.com/
  • etcd: https://github.com/coreos/etcd
  • fleet: https://github.com/coreos/fleet
  • flannel: https://github.com/coreos/flannel
  • kube-register: https://github.com/kelseyhightower/kube-register

The Heat template used in this post is available on GitHub.

What is Kubernetes?

Kubernetes allows the management of Docker containers at scale. Its core concepts are covered in this presentation, held at the recent OpenStack & Docker user group meetup.

A complete overview of Kubernetes can be found in the Kubernetes repo.

Architecture

The provisioned cluster consists of 5 VMs. The first one, discovery, is a dedicated etcd host; this allows easy etcd discovery thanks to a static IP address.

A Kubernetes Master host is set up with the Kubernetes components apiserver, scheduler, kube-register, controller-manager as well as proxy. This machine also gets a floating IP assigned and acts as an access point to your Kubernetes cluster.

Three Kubernetes Minion hosts are set up with the Kubernetes components kubelet and proxy.
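To give an idea of the template’s shape, here is a heavily trimmed, hypothetical Heat sketch of the master VM (the parameter names, flavor and cloud-config payload are invented; the actual template in the repo wires up all of the components listed above):

```yaml
heat_template_version: 2013-05-23

description: Trimmed, hypothetical sketch of the Kubernetes Master VM

parameters:
  image:
    type: string
    default: coreos
  key_name:
    type: string

resources:
  kube_master:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
      key_name: { get_param: key_name }
      user_data_format: RAW
      user_data: |
        #cloud-config
        coreos:
          units:
            - name: kube-apiserver.service
              command: start

  # Floating IP so the master can act as the cluster's access point.
  master_floating_ip:
    type: OS::Nova::FloatingIP
    properties:
      pool: public

  master_floating_ip_assoc:
    type: OS::Nova::FloatingIPAssociation
    properties:
      floating_ip: { get_resource: master_floating_ip }
      server_id: { get_resource: kube_master }
```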

HowTo

Follow the instructions on the Github repo to get your Kubernetes cluster up and running:

https://github.com/icclab/kubernetes-on-openstack-demo 

Examples

Two examples are provided in the repo: