
Cloud Services: An Academic Perspective

by Josef Spillner

An academic entity – more concretely, a research laboratory – resembles a stateful function: it receives input and generates output based both on that input and on previously generated knowledge and results. The input is typically a mix of fancy ideas, industry necessities, funding and equipment. The output encompasses publications, software and other re-usable artefacts.

In the Service Prototyping Lab, we rely on access to well-maintained cloud environments as one form of input to come up with and test relevant concepts and methods on how to bring applications and services online on top of programmable platforms and infrastructure (i.e., PaaS and IaaS). This Samichlaus post reports on our findings after having used several such environments in parallel over several months.

Continue reading

Cyclops 2.0 is here!

Our flagship open-source framework for cloud billing, Cyclops, has matured to version 2.0 today. Over the past several months, the Cyclops team at the ICCLab has gathered community feedback and worked systematically on updating the framework's core architecture to make the whole workflow of billing cloud services clean and seamless.

The core components are in principle still the same as in our previous releases: the udr, rc and billing micro-services. However, they have been rewritten from scratch with a focus on modularity, extensibility and elasticity. The framework is highly configurable and can be deployed according to the specific billing needs of any organization.
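
To illustrate the general rating-charging-billing flow that the udr, rc and billing micro-services implement, here is a conceptual sketch in Python; the class and function names are illustrative assumptions, not the Cyclops API.

```python
from dataclasses import dataclass

# Conceptual sketch of the rating-charging-billing flow; names below are
# illustrative, not part of the Cyclops code base.

@dataclass
class UsageRecord:              # roughly what the udr micro-service collects
    account: str
    resource: str               # e.g. "vm.small", "storage.gb"
    usage: float                # consumed units (hours, GB, ...)

RATES = {"vm.small": 0.05, "storage.gb": 0.002}   # assumed price per unit

def rate(records):
    """Turn usage records into charge records (the rc step)."""
    return [(r.account, r.resource, r.usage * RATES[r.resource]) for r in records]

def bill(charges):
    """Aggregate charge records into one bill per account (the billing step)."""
    totals = {}
    for account, _resource, cost in charges:
        totals[account] = totals.get(account, 0.0) + cost
    return totals

records = [UsageRecord("acme", "vm.small", 720.0),
           UsageRecord("acme", "storage.gb", 50.0)]
print(bill(rate(records)))      # {'acme': 36.1}
```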

RCB Cyclops architecture 2.0

Continue reading

Wanted: Senior Researcher / Researcher for Cloud Robotics

The Service Engineering (SE, blog.zhaw.ch/icclab) group at the Zurich University of Applied Sciences (ZHAW) / Institute of Applied Information Technology (InIT) in Switzerland is seeking applications for a full-time position at its Winterthur facility.

The successful candidate will work in the Service Prototyping Lab (SPLab) and will contribute to the research initiative on cloud robotics; see https://blog.zhaw.ch/icclab/category/research-approach/themes/cloud-robotics
Continue reading

Experimental evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces

Up to now, we have published several blog posts on live migration performance in our experimental OpenStack deployment – a performance analysis of post-copy live migration in OpenStack and an analysis of the performance of live migration in OpenStack. While analyzing live migration behaviour with different algorithms (see our previous blog posts on pre-copy and post-copy (hybrid) live migration performance), we observed that both algorithms can easily saturate our 1Gb/s infrastructure, and that is not fast enough, not for us! Fortunately, our friends Robayet Nasim and Prof. Andreas Kassler from Karlstad University, Sweden, also like their live migrations as fast and reliable as possible, so they kindly offered their 10Gb/s infrastructure for further performance analysis. Since this topic is very much in line with the objectives of the COST ACROSS action, in which both we (ICCLab!) and Karlstad are participants, this analysis was carried out during a 2-week short term scientific mission (STSM) within the action.
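
To see why the 1Gb/s link becomes the bottleneck, a rough back-of-the-envelope estimate helps; the VM memory size and protocol efficiency below are assumed values, not our measurements.

```python
# Rough estimate of the time needed to push a VM's memory over the network
# during live migration; the numbers are illustrative assumptions.
def transfer_time_s(memory_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to transfer `memory_gb` of RAM over a `link_gbps` link,
    assuming a given protocol efficiency and ignoring page dirtying."""
    bits = memory_gb * 8 * 1e9
    return bits / (link_gbps * 1e9 * efficiency)

for link in (1, 10):
    print(f"8 GB VM over {link} Gb/s: ~{transfer_time_s(8, link):.0f} s")
# ~71 s at 1 Gb/s versus ~7 s at 10 Gb/s
```
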
This blog post presents a short wrap-up of the obtained results, focusing on the evaluation of post-copy live migration in OpenStack using 10Gb/s interfaces and comparing it with the performance of the 1Gb/s setup. The full STSM report can be found here. Continue reading

GPU support in the cloud

It is well recognized that GPUs can greatly outperform standard CPUs for certain types of work – typically work that can be decomposed into many basic computations executed in parallel; matrix operations are the classical example. However, GPUs have evolved primarily in the context of the quite independent video subsystem, and even there the key driver has been support for advanced graphics and gaming. Consequently, they have not been architected to support diverse applications within the cloud. In this blog post we comment on the state of the art regarding GPU support in the cloud.
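
As a minimal illustration of the kind of workload that benefits, the sketch below times a dense matrix multiplication on the CPU and on a GPU; it assumes a CUDA-capable GPU and the CuPy library, and the observed speed-up will vary with hardware.

```python
import time
import numpy as np
import cupy as cp        # assumes a CUDA-capable GPU and CuPy installed

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

t0 = time.time()
np.matmul(a, b)                         # dense matrix multiply on the CPU
cpu_s = time.time() - t0

a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
t0 = time.time()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()       # wait for the asynchronous GPU kernel
gpu_s = time.time() - t0

print(f"CPU: {cpu_s:.2f} s, GPU: {gpu_s:.2f} s")
```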

Continue reading

Cloud Orchestration: Hurtle Released

We are proud to announce that the ICCLab has released Hurtle!

Hurtle logo

Hurtle is a result of the ICCLab’s Cloud Orchestration Initiative.

What is hurtle?

With Hurtle, you automate the life-cycle management of any number of service instances in the cloud, from the deployment of resources all the way to the configuration and runtime management (e.g., scaling) of each instance. Our motivation is that software vendors often face questions such as “How can I easily provision and manage new instances of the service I offer for each new customer?” This is the problem Hurtle aims to solve.

In short, Hurtle lets you:

offer your software as a service i.e. “hurtle it!”

In Hurtle terms, a service represents an abstract functionality that, in order to be performed, requires a set of resources, such as virtual machines or storage volumes, and an orchestrator which describes what has to be done at each step of a service lifecycle.
A “service instance” is the concrete instantiation of a service functionality with its associated set of concrete resources and service endpoints.
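
Conceptually, the orchestrator walks a service instance through its lifecycle phases. The sketch below illustrates this idea in Python; the class, phase and resource names are assumptions for illustration, not Hurtle's actual interface.

```python
# Conceptual sketch of an orchestrator driving a service instance through its
# lifecycle; names are illustrative assumptions, not Hurtle's real API.
class HelloWorldOrchestrator:
    def design(self):
        """Describe the resources the service needs (e.g. VMs, volumes)."""
        self.topology = {"web_vm": "m1.small", "data_volume": "10GB"}

    def deploy(self):
        """Request the resources in the topology from the cloud provider."""
        print(f"deploying resources: {self.topology}")

    def provision(self):
        """Configure the deployed resources so the service can run."""
        print("installing and configuring the application")

    def manage(self):
        """Runtime management of the instance, e.g. scaling on load."""
        print("monitoring load and scaling if required")

    def dispose(self):
        """Release all resources when the service instance is retired."""
        print("tearing down resources")

# One service instance corresponds to one run through the lifecycle
# with a concrete set of resources and endpoints.
instance = HelloWorldOrchestrator()
for phase in (instance.design, instance.deploy,
              instance.provision, instance.manage, instance.dispose):
    phase()
```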

On top of this, Hurtle has been designed since its inception to support service composition, so that you can design complex services by (recursively!) composing simple ones.

Hurtle’s functionality revolves around the idea of services as distributed systems composed of multiple sub-applications, so the offered services can also be designed with the cloud in mind from the start, building on the cloud-native application research of the ICCLab.

What does it mean to offer software as a service?

A bit of history first. Traditionally, software was run locally; it was then centralised and shared through intranets. All of this still ran on company-specific infrastructure, which made hosting, provisioning and managing such software difficult and the full-time job of many IT engineers and system administrators.

This quickly brought about the argument that IT in an SME or an enterprise was a cost centre to be minimised, and it led to the outsourcing of such tasks to third parties.

Today, with the ever-growing acceptance and use of cloud computing, the cost equation is reduced even further. More interestingly, cloud computing reverses the trend of outsourcing operations to third parties if you consider the DevOps movement.

In this new world, organisations that create software neither want nor need third parties to manage their software deployments. Much of the necessary tooling is developed in-house. If it is not, yet they still want to follow a DevOps approach, they have quite an amount of work ahead of them.

It is in this scenario that hurtle can help!

What can hurtle do?

What will hurtle do?

  • More examples, including the cloud-native Zurmo implementation from the ICCLab
  • Enhanced, dynamic policy-based workload placement
  • Support for containers deployed from a Docker registry
  • Runtime updates to service and resource topologies
  • CI and CD support
    • safe, monitored, dynamic service updates
  • TOSCA support
  • Support for VMware and CloudStack
  • A user interface to visualise resource and service relationships
  • Additional external service endpoint protocol support

Want to know more?

Check out: hurtle.it

Software release: “powdernote – Notes. Simplified.”

Today the ICCLab introduces a new way to interact with your notes.

http://icclab.github.io/powdernote/

What is powdernote?

Powdernote is a cloud-based note-taking application for the terminal.

Why powdernote?

Because we never leave the terminal, not even to read our notes or create new ones.

This tool is for busy engineers, developers, power users, devops, …

Our goal is an uncluttered, distraction-free application that solves the simple task of taking notes and makes them available everywhere (storage is in the cloud)!

Main functions

  • Create, edit, delete, print, tag notes
  • Search content
  • Browse versions of notes
  • Export/import to/from powdernote files

Cloud High Availability: how to select the right technologies

There are many different technologies which can increase the availability of a cloud infrastructure. In our newest Techscouting paper we evaluate several HA technologies in order to define an HA architecture for an OpenStack deployment which is part of the XiFi project. HA technologies can be grouped into the following classes:

  • Resource monitors that check if IT-services are alive and (sometimes automatically) recover them in case of failure.
  • Load balancers that direct end user requests to those resources that are still alive and show reasonable performance.
  • Distributed disks and file systems that increase redundancy of data and help to prevent data loss in case of failure.
  • Distributed databases which help to prevent loss of database records.

Every OpenStack component exists to deliver a service to an end user. The availability of a cloud instance therefore depends on the availability of the delivered services as perceived by end users. If we want to use an HA technology to increase the availability of OpenStack, we have to analyze how end user services depend on IT and infrastructure components. Therefore we created a dependability model of the provided IT services and the business services consumed by end users.

dependencies

As availability always depends on the requirements defined by end users, we surveyed several OpenStack end users on the importance of each business service. End users tended to rate “Infrastructure Management” and “Security Management” as the most important services, so we had to ensure that these services reach high availability levels.
By linking the importance of a service to the IT components that provide it, we can assign a target availability level to each component. Furthermore, we can compare several HA architectures with each other and check the availability levels they can achieve. We built several fault tree diagrams that represent the link between component failures and service outages:

Fault tree diagrams linking component failures to service outages

A simulation of service outages with given failure rates as input revealed that adding HA technologies to OpenStack can add up to 7-8 percentage points to the average availability level of the provided services.
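
The arithmetic behind such a fault tree evaluation is simple; the sketch below uses invented component availabilities (not values from our survey or simulation) to show how redundancy changes the availability of a service that needs both an API node and a database.

```python
# Availability arithmetic behind a simple fault tree; the component
# availabilities are invented for illustration only.

def series(*avail):
    """All components are required: an outage occurs if any one fails,
    so the availabilities multiply."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Redundant replicas: an outage occurs only if all replicas fail."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

api, db = 0.995, 0.99                                    # single-node availabilities (assumed)
no_ha = series(api, db)                                  # ~0.985
with_ha = series(parallel(api, api), parallel(db, db))   # ~0.9999
print(f"without HA: {no_ha:.4%}, with HA: {with_ha:.4%}")

# "Three nines" (99.9%) corresponds to roughly 8.8 hours of downtime per year:
print(f"{(1 - 0.999) * 365 * 24:.1f} h/year")
```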

We tested several technologies that belong to one of the HA technology classes. Our evaluation covered the chances and risks associated with implementing each technology as well as its technological maturity, and we assigned each technology a chance, a risk and a maturity score.

HA technology assessment results

The result of our evaluation is that we prefer keepalived, HAProxy, Ceph/RADOS and MySQL Galera as HA technologies to improve the availability of our OpenStack installation. These technologies are all open source. They were preferred because their performance is not significantly lower than that of commercial products, while they are available for free and commercial products are not. The final HA architecture is able to increase the availability levels of all OpenStack services up to three nines – which is a very high availability level in cloud computing.

It is clear that another organization might come to different conclusions when a concrete HA technology has to be selected. However, the evaluation methodology used in our paper shows how to make better-reasoned technology choices by linking end user requirements to system architecture characteristics and by rating several architectural alternatives by the availability levels they can realistically achieve.

Cloud High Availability

Overview

Cloud computing means:

  • On-demand self service
  • Virtualization
  • Elastic resource provisioning

A cloud computing service is comparable to public utility services like gas, telephone or water supply.

The economic value of a cloud computing service is determined by its reliability, availability and maintainability (RAM) characteristics.

Availability impacts the value of cloud computing as perceived by end users. High availability systems increase the guaranteed availability of a cloud computing service and therefore its economic value.

Objectives

The Cloud HA initiative has the following objectives:

  • To provide a service to analyze problems related to the reliability and availability of cloud computing systems
  • To provide systems and services that increase reliability and availability of cloud computing systems

Research Challenges

The following challenges currently exist:

  • Measuring and analyzing availability: how can we experimentally determine the reliability of cloud computing systems (VMs, storage, etc.)? Designing adequate reliability measurement experiments is difficult, since we often have to rely on simulating outages.

  • Adapting reliability engineering methods to cloud computing: many reliability analysis and engineering techniques exist (Fault Tree Analysis, FME(C)A, HAZOP, Markov chains). How can we apply them to cloud computing?

  • Analytics and monitoring systems: building systems that automatically monitor the reliability of cloud resources and analyze problems.

  • Failure recovery and intelligent event management systems: building systems that intelligently detect and react to failures.

Currently there is almost no data available on reliability of different virtualization technologies like OpenStack or Docker.

Cloud vendors and manufacturers simply claim that their systems operate reliably without providing data to prove those claims. Think of an engineering company such as ABB or Siemens: would they still be on the market if they were not able to tell their customers the exact hazard rates and MTBFs of their products? The IT industry is lagging behind other engineering industries here. IT reliability engineering could be an interesting discipline that adds value to IT products and services.

Relevance to current and future markets

Business impact

Existing High Availability solutions:

  • Pacemaker: resource monitor that automatically detects failures and recovers failed components. Highly configurable, but also heavyweight. System administrators notoriously complain about its bad configuration interface. A bad configuration can make the system 7-8 times slower than a good configuration.

  • Keepalived: lightweight resource monitor. Unclear if this tool is well supported by its community.

  • IBM Tivoli: extremely heavyweight resource monitor and configuration management tool.

  • HAProxy: light load balancer. Great for web applications, but only applicable to HTTP-based services.

  • DRBD: disk replication technology. Fast and lightweight. Suitable for small disk networks.

  • Ceph: distributed storage and file system. Highly decentralized, with great scalability.

  • GlusterFS: distributed storage and file system. Better scalability, but occasional problems with partition tolerance.

  • Galera: MySQL cluster. True multimaster solution.

  • MySQL NDB Cluster: maps MySQL to a simple key-value store. Requires adapting applications to the database interface.

  • Nagios: great monitoring system. Extensible, with many plugins available.

  • Elasticsearch, Logstash, Kibana (ELK): log file monitoring system.

There are many HA systems available on the market, but almost no tools that analyze the reliability of OpenStack and allow for automated, intelligent recovery from failures.
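
As a minimal illustration of what the resource monitor class of tools (e.g. Pacemaker or Keepalived) does in essence, the sketch below probes a service endpoint and triggers a restart on failure; the URL and restart command are placeholders, not part of any of the listed products.

```python
import subprocess
import time
import urllib.request

# Minimal sketch of a resource monitor: probe a service endpoint and try to
# restart it on failure. The URL and restart command are placeholders.
SERVICE_URL = "http://controller:8774/"                      # assumed API endpoint
RESTART_CMD = ["systemctl", "restart", "openstack-nova-api"]

def is_alive(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except Exception:
        return False

while True:
    if not is_alive(SERVICE_URL):
        print("service down, attempting recovery")
        subprocess.run(RESTART_CMD, check=False)
    time.sleep(10)                                           # probe interval
```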

Results

Presentation

HA_initiative_factsheet

Contact

Konstantin Benz
Obere Kirchgasse 2
CH-8400 Winterthur
Mail: benn__(at)__zhaw.ch

Introduction to SmartDataCenter

Joyent recently open-sourced the IaaS platform SmartDataCenter and the object storage system Manta, the software behind their own service offerings. So, what’s all the buzz about? Why should you be excited? Why is it even worth talking (or, in this case, writing) about SDC when we have OpenStack? In this blog post I will cover some of the fundamentals of SDC and why it’s worth a second look.

Continue reading
