Martin Blöchlinger is a researcher at the ICCLab.
After an IT apprenticeship and an additional year of programming experience, he decided to study at the ZHAW. In summer 2014 he graduated (Bachelor of Science ZFH in Informatics) and a few weeks later started working at the InIT in the focus area ‘Distributed Software Systems’. He is currently working on a project in the ‘Cloud-Native Applications’ initiative.
Arcus is an internally funded project which focuses on correlating energy consumption with cloud usage information, enabling a cloud provider to understand in detail how its energy is consumed. As energy accounts for an increasing share of a cloud provider’s operating costs, this issue is growing in importance.
The work focuses on correlating cloud usage information obtained from OpenStack (primarily via Ceilometer) with energy consumption information obtained from the devices, using a mix of internal readings and wireless metering infrastructure. It involves determining which users of the cloud stack are consuming energy at any point in time, by fine-grained monitoring of the energy consumption in the system coupled with information on how the systems are being used.
The output of the project will be a tool which enables an OpenStack provider to see this relationship: a public cloud provider could use it to understand how their tariffing structure relates to their costs, and a private cloud operator could identify which internal applications or departments are most responsible for energy consumption.
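As an illustration of the kind of correlation involved, the sketch below apportions a host’s measured energy among its VMs in proportion to consumed CPU time and rolls the result up to tenants. The plain-dict data layout is a stand-in for what Ceilometer actually delivers, and a fair-share split by CPU time is just one possible attribution policy, not the project’s actual algorithm.

```python
from collections import defaultdict

def apportion_energy(host_energy_wh, vm_cpu_seconds):
    """Split one host's measured energy among its VMs in proportion
    to the CPU time each VM consumed during the metering interval.

    host_energy_wh -- energy reading for the host over the interval (Wh)
    vm_cpu_seconds -- mapping {vm_id: cpu_seconds} from usage samples
    """
    total = sum(vm_cpu_seconds.values())
    if total == 0:
        return {}
    return {vm: host_energy_wh * cpu / total
            for vm, cpu in vm_cpu_seconds.items()}

def energy_per_tenant(per_vm_energy, vm_owner):
    """Aggregate per-VM energy shares up to the owning tenant/department."""
    per_tenant = defaultdict(float)
    for vm, wh in per_vm_energy.items():
        per_tenant[vm_owner[vm]] += wh
    return dict(per_tenant)
```

With per-VM shares in hand, the tenant-level view is what a provider would compare against its tariffing structure.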
The project started in January 2014.
The primary focus of the Energy Theme is on reducing the energy consumption of cloud computing resources. As compute nodes consume most of the energy in cloud computing systems, work to date has focused on reducing the energy consumed by compute loads, particularly within the OpenStack context. Moreover, as servers become increasingly instrumented, it is clear that there is potential in understanding energy consumption at a finer granularity; ultimately this can lead to energy efficiencies and cost savings.
In the current work, the primary mechanism to achieve energy efficiencies is load consolidation combined with power control of servers. This could be augmented with managing server CPU power states, but it remains to be seen if this will lead to significant power savings. Another tool to achieve energy efficiencies is to add elastic load when the resources are underutilized – this does not reduce the overall energy consumption per se, but rather enables providers to get more bang for their energy buck.
The current architecture of the Cloud Energy Efficiency Subsystem is shown above with the components performing the following functions:
- an energy monitoring component: this obtains information on the energy consumption of the entire system – it may also make some kind of abstraction rather than working with highly granular data for each node;
- a load characterization component: this component uses primarily ceilometer data to understand what is going on in the cloud – it makes an abstraction of the usage of the system over different timescales and particularly determines which level of burstiness exists in the load patterns;
- a load consolidation mechanism: this takes the information on the system state and identifies where load consolidation can be performed – it then issues a set of live migration instructions to the cloud to perform the consolidation. In general, it is necessary to add filters to support different hypervisors, bare metal servers, etc., which adds complexity;
- physical server manager: this will turn off servers and turn them on as necessary – this will take input from the load characterization component to determine how much spare capacity to keep in the system to deal with variations in demand.
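A minimal sketch of the consolidation decision these components cooperate on, under simplifying assumptions (a static snapshot of per-VM CPU load, no hypervisor filters, no memory or affinity constraints): hosts below a low-utilization watermark are drained onto hosts with headroom, and hosts that end up empty become candidates for power-off. This illustrates the idea only; it is not the subsystem’s actual algorithm.

```python
def plan_consolidation(hosts, low=0.2, high=0.8):
    """Plan live migrations off underutilized hosts so they can be powered down.

    hosts -- {host_id: {vm_id: cpu_fraction}} snapshot of placement and load
             (mutated in place as VMs are reassigned)
    Returns (migrations, power_off), where migrations is a list of
    (vm_id, source_host, target_host) tuples.
    """
    util = {h: sum(vms.values()) for h, vms in hosts.items()}
    # hosts below the low watermark are candidates to be drained, emptiest first
    donors = sorted((h for h in hosts if util[h] < low), key=lambda h: util[h])
    keepers = [h for h in hosts if h not in donors]
    migrations, power_off = [], []
    for src in donors:
        for vm, load in list(hosts[src].items()):
            # first keeper with enough headroom below the high watermark
            target = next((t for t in keepers if util[t] + load <= high), None)
            if target is None:
                break
            migrations.append((vm, src, target))
            util[target] += load
            util[src] -= load
            del hosts[src][vm]
        if not hosts[src]:
            power_off.append(src)  # fully drained: hand to the server manager
    return migrations, power_off
```

The load characterization component would, in a fuller version, lower or raise the watermarks depending on how bursty the observed load patterns are.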
The specific interactions between these components are evolving, as this is a work in progress.
At present, the theme comprises two initiatives.
Since Amazon started offering cloud services (AWS) in 2006, cloud computing in all its forms has become ever more popular and has steadily matured. A great deal of experience has been collected, and today a large number of companies run their applications in the cloud, either for themselves or to offer services to their customers. The basic characteristics of this paradigm¹ offer capabilities and possibilities to software applications that were unthinkable before, and they are the reason why cloud computing was able to establish itself the way it did.
What is a Cloud-Native Application?
In a nutshell, a cloud-native application (CNA) is a distributed application that runs on a cloud infrastructure (at either the infrastructure or the platform level) and is at its core scalable and resilient, as well as adapted to its dynamic and volatile environment. These core requirements are derived from the essential characteristics that every cloud infrastructure must by definition possess, and from user expectations. It is of course possible to run an application in the cloud that does not meet all of these criteria; in that case it would be described as a cloud-aware or cloud-ready application rather than a cloud-native one. Through careful cloud-native application design based on composed stateful and stateless microservices, the hosting characteristics can be exploited so that scalability and elasticity do not translate into significantly higher costs.
- Provide architecture and design guidelines for cloud-native applications, based on lessons learned from existing applications and on established best practices (Cloud-Application Architecture Patterns).
- Evaluate microservice technology mappings, related to container compositions as well as other forms of microservice implementations.
- Provide recommendations for the operation of cloud-native applications (Continuous Delivery, Scaling, Monitoring, Incident Management, …)
- Provide economic guidelines on how to operate cloud-native applications (feasibility, service model (mix), microservice stacks, containers, …)
- Investigate, develop and establish a set of open source technologies, tools and services to build, operate and leverage state-of-the-art cloud-native applications.
- Support SMEs to build their own cloud-native solutions or reengineer and migrate existing applications to the cloud.
- Ensure that all new applications developed within the SPLab and the ICCLab are cloud-native.
Relevance to current and future markets
– Business impact
- Using cloud infrastructures (IaaS/PaaS) it is possible to prototype and test new business ideas quickly and without spending a lot of money up-front.
- An application running on a cloud infrastructure – if designed in a cloud-native way – only ever uses as many resources as it needs. This avoids under- or over-provisioning of resources and ensures cost savings.
- Developing software with services offered by cloud infrastructure and platform providers enables even a small team to create highly scalable applications serving a large number of customers.
- Developing cloud-native applications with a microservice architecture style allows for shorter development-cycles which reduces the time to adapt to customer feedback, new customer requirements and changes in the market.
– Correlation to industry forecasts
- Cloud-native applications are tightly bound to cloud computing, specifically to IaaS and PaaS, since these technologies are used to develop and host such applications. Wherever these technologies stand in the Gartner Hype Cycle, cloud-native applications can be thought of as being at the same stage.
- The Cloud Native Computing Foundation (CNCF.io) and other industry groups have been formed to shape the evolution of technologies that are container-packaged, dynamically scheduled and microservices-oriented.
- Container composition languages and tools are on the rise. A careful evaluation and assessment of the technologies, lock-in risks and opportunities is required. The CNA initiative brings sufficient academic rigor to afford long-term perspectives on these trends.
Relevant Standards and Articles
- The NIST Definition of Cloud Computing
- The Twelve-Factor App
- Microservices (Martin Fowler / James Lewis)
Cloud-native applications are typically designed as distributed applications with a shared-nothing architecture, composed of autonomous and stateless services that scale horizontally and communicate asynchronously via message queues. The focus lies on the scalability and resilience of the application. The current state of the art for designing such applications is described by the term Microservices: it is by no means the only way to architect cloud-native applications, but it is the prevailing one.
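The queue-based, stateless style can be illustrated with a toy worker pool. Here Python’s in-process queue.Queue stands in for a real message broker (e.g. RabbitMQ), and the word-count task is purely illustrative; the point is that a worker keeps no state between messages, so “scaling out” is simply starting more identical replicas on the same queue.

```python
import queue
import threading

def worker(tasks, results):
    """Stateless worker: all state travels in the message itself, so any
    number of identical replicas can consume from the same queue."""
    while True:
        msg = tasks.get()
        if msg is None:          # shutdown sentinel
            tasks.task_done()
            break
        doc_id, text = msg
        results.put((doc_id, len(text.split())))  # illustrative work: word count
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
replicas = [threading.Thread(target=worker, args=(tasks, results))
            for _ in range(3)]   # scale out by adding replicas
for t in replicas:
    t.start()
for i, text in enumerate(["a b c", "d e", "f"]):
    tasks.put((i, text))
for _ in replicas:               # one sentinel per replica
    tasks.put(None)
tasks.join()
for t in replicas:
    t.join()
```

In a real deployment the broker also buffers bursts and redelivers messages from crashed workers, which is where the resilience of this style comes from.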
Generic CNA Architecture
The following architecture was initially analysed, refined and realised by the SPLab CNA initiative team with a business application (Zurmo CRM), based on the CoreOS/fleet stack as well as on Kubernetes.
More recent work includes a cloud-native document management architecture with stateful and stateless microservices, implemented as composed containers with Docker Compose, Vamp and Kubernetes.
Articles and Publications
G. Toffetti, S. Brunner, M. Blöchlinger, J. Spillner, T. M. Bohnert, "Self-managing cloud-native applications: design, implementation and experience", FGCS special issue on Cloud Incident Management, 2016.
S. Brunner, M. Blöchlinger, G. Toffetti, J. Spillner, T. M. Bohnert, "Experimental Evaluation of the Cloud-Native Application Design", 4th International Workshop on Clouds and (eScience) Application Management (CloudAM), Limassol, Cyprus, December 2015. (slides; author version; IEEE Xplore/ACM DL: to appear)
Note: Latest posts are at the bottom.
- Blog: Cloud-Native Applications – Seed Project Kickoff
- Blog: CNA Seed Project: Evaluation of Application to Migrate
- Blog: CNA Seed Project: Migration Process Part 1
- Blog: Process Management in Docker Containers
- Blog: MySQL Galera cluster with Fleet on CoreOS
- Blog: CNA seed project: CoreOS/Fleet implementation wrap-up
- Blog: Benchmarking cloud-native database systems
- Blog: Container Management with Vamp: Practical Example
- Blog: Container management with Kubernetes: Practical Example
- Blog: Cloud-Native Document Management
- Slides: Cloud-Native Application Design, Presented at 10th KuVS Expert-Talks, March 16th 2015, Fraunhofer Institute Berlin
- Slides: Migrating an Application into the Cloud with Docker and CoreOS, Presented at 3rd Docker Swiss User Group Meetup, Zurich, Switzerland, March 24th 2015
- Slides: Experimental Evaluation of the Cloud-Native Application Design, Presented at the 4th International Workshop on Clouds and (eScience) Application Management (CloudAM), Limassol, Cyprus, December 7th 2015.
- Josef Spillner, Cloud Applications: Less Guessing, more Planning and Knowing (slides), University of Coimbra, May 2016.
Open Source Software
- CNDBbench: Cloud-native database benchmark
- CNDBresults: Reproducible experimental results when using CNDBbench
- ARKIS Microservices: Cloud-native document management
- KubeGUI: Early user interface for Kubernetes
- Zurmo CNA-seed-project implementation: https://github.com/icclab/cna-seed-project/
Josef Spillner: josef.spillner(at)zhaw.ch
1. On-Demand Self-Service, Broad Network Access, Resource Pooling, Rapid Elasticity and Measured Service, as defined in the NIST Definition of Cloud Computing.