Supporting Container-based Application Deployment to Heterogeneous Hardware using Rancher and Swarm

Rancher is a container management platform focused on delivering containers on any infrastructure. It supports multiple environments, each of which can use one of the container orchestrators available (at the time of this writing): Cattle, Kubernetes, Swarm and Mesos. In our previous blog post we showed how we built rancher-agent containers for arm64 systems in a Cattle environment. We have made a few improvements since then, mostly around porting a Swarm environment to arm64, and these are documented in this blog post. Continue reading

The 1st International Workshop on Heterogeneous Distributed Cloud Computing

As we look to the future of cloud computing, there are good reasons to think that the cloud of the future will differ significantly from that which we know today. Although nobody knows exactly how it will evolve, it is likely we will see significant changes in two important dimensions – heterogeneity and decentralization. Let’s consider each of these in turn.

The earliest cloud systems were characterized by homogeneity to the point that they were considered analogous to commodities. As these systems have evolved, however, they have had to cater increasingly for the general complexity of IT systems, and more and more options have become available. For example, AWS currently provides 56 different instance types. Storage has also become differentiated, both in terms of the underlying physical media – primarily spinning disks and SSDs at present, to be augmented in future by newer technologies such as Intel Optane, which sits somewhere between memory and classical secondary storage – and in terms of storage types, with object storage clearly in the ascendency, block storage established for some time, and a continuing need for longer-term archival solutions. Further, there is increasing heterogeneity in the basic compute units used in Data Centres: GPUs cater for many large and complex workloads, ARM processors are increasingly seen as credible within the Data Centre, customized ASICs such as the TPU are on offer, and important innovation is coming from the open source hardware movement – specifically the open source ISA of RISC-V.

As well as increased heterogeneity, there are good reasons to believe that the highly centralized systems that characterized the first wave of cloud computing will give way to much more decentralized systems, in which the large data centres are augmented by smaller-scale resources. Hybrid cloud is one aspect of this trend which is well established and poised for rapid growth. One particularly interesting example which fits clearly in the hybrid cloud arena is Microsoft's Azure Stack, which is intended to enable Azure to operate within the enterprise DC as well as inside Microsoft's large DCs: while this can have benefits for the enterprise, from the cloud operator's perspective it is a way of realizing a much more decentralized cloud. The telecoms sector is also investigating more decentralized approaches with initiatives such as Central Office Re-architected as a Datacenter (CORD).

The combination of these two fundamental trends in the evolution of cloud computing will give rise to many new problems that are interesting from both an industry and an academic perspective. For this reason, we decided to organize a workshop focused on these issues, co-located with the Utility and Cloud Computing Conference 2017: the 1st International Workshop on Heterogeneous Distributed Cloud Computing, which will take place in December 2017.

We’re looking forward to an exciting, interactive workshop with interesting contributions covering diverse topics: if these are topics that interest you, we invite you to make a submission to the workshop before the deadline of July 30. Just click here to submit.


Cloud-Native Microservices Reference Architecture

How are cloud-native applications engineered? Despite the increasing popularity of the topic, there are surprisingly few reference applications available. In our previous blog post we described a first version of a prototypical document management application, called ARKIS Microservices, consisting of composed containers. We elaborated on the challenges involved in designing and developing a cloud-native application. In addition, we showed some details of the architecture and functionality of version 2.5 of this generic reference application.

In this blog post, we dive deeper into the architecture of the latest version, 3.3, paying attention to each component. The document management software is a cloud-native application based on a microservices architecture. It permits multiple tenants to manage their documents (create, read, update, delete and search for patterns in documents). It manages the different tenants and offers different isolation models for storing a tenant's documents. Furthermore, the services are discoverable through declarative service descriptions, and their use is billed according to a pay-per-use scheme.
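To make the tenant-isolation idea concrete, here is a minimal Python sketch of a multi-tenant document store offering two isolation models: a shared store where entries are tagged by tenant, and a dedicated store per tenant. All names (`DocumentService`, `IsolationModel`) and the specific models are hypothetical illustrations, not taken from the ARKIS implementation:

```python
from enum import Enum

class IsolationModel(Enum):
    SHARED = "shared"        # all tenants share one store; entries tagged by tenant id
    DEDICATED = "dedicated"  # each tenant gets its own store

class DocumentService:
    """Toy multi-tenant document store illustrating per-tenant isolation."""

    def __init__(self):
        self._shared = {}      # (tenant_id, doc_id) -> text
        self._dedicated = {}   # tenant_id -> {doc_id -> text}
        self._models = {}      # tenant_id -> IsolationModel

    def register_tenant(self, tenant_id, model=IsolationModel.SHARED):
        self._models[tenant_id] = model
        if model is IsolationModel.DEDICATED:
            self._dedicated[tenant_id] = {}

    def _store(self, tenant_id):
        # Return a tenant-scoped view of the documents, whatever the model.
        if self._models[tenant_id] is IsolationModel.DEDICATED:
            return self._dedicated[tenant_id]
        return {d: t for (ten, d), t in self._shared.items() if ten == tenant_id}

    def create(self, tenant_id, doc_id, text):
        if self._models[tenant_id] is IsolationModel.DEDICATED:
            self._dedicated[tenant_id][doc_id] = text
        else:
            self._shared[(tenant_id, doc_id)] = text

    def read(self, tenant_id, doc_id):
        return self._store(tenant_id)[doc_id]

    def search(self, tenant_id, pattern):
        # Only this tenant's documents are searched, regardless of model.
        return [d for d, t in self._store(tenant_id).items() if pattern in t]
```

Whichever model a tenant chooses, the service interface is identical; only where the documents physically live changes, which is the essential trade-off between cost (shared) and isolation strength (dedicated).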

Continue reading