Month: September 2012

European Commission Cloud Announcements

While the [ICCLab presented](http://ec.europa.eu/information_society/events/cf/ictpd12/document.cfm?doc_id=23258) at the [ICT Proposer’s Day in Warsaw](http://ec.europa.eu/information_society/events/ictproposersday/2012/index_en.htm), a very interesting announcement was made in relation to Europe’s strategy on Cloud Computing.

On Thursday, the Vice President of the European Commission, [Neelie Kroes](http://en.wikipedia.org/wiki/Neelie_Kroes), announced [further details](http://europa.eu/rapid/pressReleasesAction.do?reference=IP/12/1025&format=HTML&aged=0&language=EN&guiLanguage=en) on the European Cloud Partnership.

From the ICCLab’s perspective this is a very exciting announcement, as it underlines some of the key research themes that we investigate here, namely [dependability and interoperability](http://www.cloudcomp.ch/research/foundation/themes/). Also encouraging is [the reuse](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/swd_com_cloud.pdf) of much of the good work carried out in the area of standardisation by [the SIENA initiative](http://www.sienainitiative.eu), as quoted in the “[Staff Working Paper](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/swd_com_cloud.pdf)”.

The announcement on Thursday set out the arguments for why Europe should engage more with the cloud. For many in the ICT domain these are well known; what is more interesting in this announcement and the accompanying report is the set of three key actions ([from the accompanying ECP document](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/com_cloud.pdf)):

1. Cutting through the Jungle of Standards
– Promote trusted and reliable cloud offerings by tasking ETSI to coordinate with stakeholders in a transparent and open way to identify by 2013 a detailed map of the necessary standards (inter alia for security, interoperability, data portability and reversibility).
– Enhance trust in cloud computing services by recognising at EU-level technical specifications in the field of information and communication technologies for the protection of personal information in accordance with the new Regulation on European Standardisation.
– Work with the support of ENISA and other relevant bodies to assist the development of EU-wide voluntary certification schemes in the area of cloud computing (including as regards data protection) and establish a list of such schemes by 2014.
– Address the environmental challenges of increased cloud use by agreeing, with industry, harmonised metrics for the energy consumption, water consumption and carbon emissions of cloud services by 2014.
2. Safe and Fair Contract Terms and Conditions
– Develop with stakeholders model terms for cloud computing service level agreements for contracts between cloud providers and professional cloud users, taking into account the developing EU acquis in this field.
– In line with the Communication on a Common European Sales Law, propose to consumers and small firms European model contract terms and conditions for those issues that fall within the Common European Sales Law proposal. The aim is to standardise key contract terms and conditions, providing best-practice contract terms for cloud services on aspects related to the supply of “digital content”.
– Task an expert group set up for this purpose and including industry to identify before the end of 2013 safe and fair contract terms and conditions for consumers and small firms, and on the basis of a similar optional instrument approach, for those cloud-related issues that lie beyond the Common European Sales Law.
– Facilitate Europe’s participation in the global growth of cloud computing by: reviewing standard contractual clauses applicable to transfer of personal data to third countries and adapting them, as needed, to cloud services; and by calling upon national data protection authorities to approve Binding Corporate Rules for cloud providers.
– Work with industry to agree a code of conduct for cloud computing providers to support a uniform application of data protection rules which may be submitted to the Article 29 Working Party for endorsement in order to ensure legal certainty and coherence between the code of conduct and EU law.

3. Establishing a European Cloud Partnership to drive innovation and growth from the public sector.
– Identify public sector cloud requirements; develop specifications for IT procurement and procure reference implementations to demonstrate conformance and performance.
– Advance towards joint procurement of cloud computing services by public bodies based on the emerging common user requirements.
– Set up and execute other actions requiring coordination with stakeholders as described in this document.

This announcement was coupled with the news that the EU Commission expects its cloud strategy to [add €160B to EU GDP by 2020](http://techcrunch.com/2012/09/27/europe-shoots-for-the-clouds-ec-lays-out-new-cloud-strategy-to-add-e160b-to-eu-gdp-by-2020/).

# What is the ECP?
The ECP is a coming together of public authorities and industry, both Cloud buyers and suppliers. It consists of 3 main phases:

1. Common requirements for cloud technology procurement. Typical examples here include standards and security.
2. The delivery of proof-of-concepts for the common requirements
3. Creation of reference implementations

It was originally outlined [in a speech](http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH/12/38&format=HTML&aged=0&language=EN&guiLanguage=en) by Neelie Kroes in late January.

ICCLab Infrastructure Relocation

by Josef Spillner


The relocation of the ICCLab hardware and the integration of 9 additional nodes is now complete. The whole move was done within one day thanks to the support of Pietro, Philipp and Michael – thanks guys! Our lab now runs 15 compute nodes, 1 controller node and 1 NAS. We will segment this infrastructure into a development environment of 10 nodes, where we can develop and test our work on OpenStack, and a production environment of 5 nodes for production purposes. As the next step we will redeploy OpenStack by means of the automation tools Puppet and Foreman, as presented at the EGI Technical Forum. Let’s see how fast we can deploy 15 nodes from scratch! We’ll be studying, timing and evaluating it!
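
Since we plan to time the redeployment, a crude harness along the following lines could record per-node provisioning times. This is a minimal sketch under stated assumptions: the hostnames are hypothetical, and it presumes passwordless SSH and nodes already enrolled with the Puppet master so that a single agent run converges each one.

```python
import subprocess
import time

# Hypothetical node names; our real inventory differs.
NODES = [f"compute-{i:02d}.icclab.local" for i in range(1, 16)]

def provision(node: str) -> float:
    """Trigger one Puppet agent run on a node and return its duration in seconds."""
    start = time.monotonic()
    subprocess.run(
        ["ssh", node, "sudo puppet agent --test"],
        check=False,  # puppet exits non-zero when it applied changes
    )
    return time.monotonic() - start

if __name__ == "__main__":
    timings = {node: provision(node) for node in NODES}
    for node, seconds in sorted(timings.items(), key=lambda kv: kv[1]):
        print(f"{node}: {seconds:7.1f}s")
    print(f"total wall clock (sequential): {sum(timings.values()):.1f}s")
```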

Ewald Mund

Ewald Maria Mund is a senior lecturer at the Zurich University of Applied Sciences.

After completing a computer science degree (Dipl.-Inform.) at the University of Karlsruhe in 1977, he joined Hewlett-Packard Germany as a software engineer and was soon promoted to a senior software engineer position (1977-1981).

From 1982 to 1984 he was with Cap Gemini Switzerland as Ingénieur en Chef, until he decided to start up his own consultancy and software development venture.

Since 2001 he has been with ZHAW, teaching a broad range of topics in the area of software development and databases, in which he pays special attention to practical relevance and solid technical depth.

After three decades of software and systems design, development, and operations, Ewald possesses a comprehensive background, as only true industry veterans do.

His areas of deepest expertise are software development (JDBC/JPA, Hibernate, EJB 3.0 and Java EE, JSF 2, Groovy) and relational and NoSQL databases.

His research contributions to the ICCLab are in the areas of Big Data, NoSQL, Hadoop applications, and scalable software development.

Ewald’s private website can be found here.

Lucas Graf

Lucas Graf is pursuing his Bachelor of Science degree at ZHAW. He is in his sixth semester.

In 2009 he finished his apprenticeship as a computer specialist, before beginning his studies in Information Technology at ZHAW.

He contributes as a research assistant to the cloud computing research of the ICCLab, in particular investigating and developing a monitoring system for the ICCLab cloud infrastructure.

1st Swiss OpenStack User Group Event

The ICCLab, along with the ZH-Geeks community, will be hosting the very first Swiss OpenStack User Group. The event will take place on the 15th of November, from 18:00 onwards. Amongst other presentations still to be announced, Tim Bell will give the keynote, and he will also detail how OpenStack is used at CERN. For updates stay tuned to the @OpenStackCH Twitter account, the ZH-Geeks meetup site or the OpenStack CH LinkedIn group. If you have an idea for a talk, don’t hesitate to let us know about it through Twitter, Meetup or LinkedIn!

EU Report: “Advances in Clouds: Report from the Cloud Computing Expert Working Group”

# Introduction
This is a brief summary of the [EU Report: “Advances in Clouds: Report from the CLOUD Computing Expert Working Group”](http://cordis.europa.eu/fp7/ict/ssai/docs/future-cc-2may-finalreport-experts.pdf). In this report a set of appointed cloud experts have studied the current cloud computing landscape and have come out with a set of recommendations for advancing the future cloud. They note a large number of challenges present in cloud computing today which, where tackled, provide an opportunity to European innovators. Quoting the report: *”Many long-known ICT challenges continue and may be enhanced in a CLOUD environment. These include large data transmission due to inadequate bandwidth; proprietarily of services and programming interfaces causing lock-in; severe problems with trust, security and privacy (which has legal as well as technical aspects); varying capabilities in elasticity and scaling; lack of interoperation interfaces between CLOUD (resources and services) offerings and between CLOUDs and other infrastructures and many more.”*

They see that performance aspects of the cloud are as pressing as ever and require tackling. *”What is more, spawning (scaling) of objects – no matter whether for the purpose of horizontal or vertical scale – is thereby still slow in modern CLOUD environments and therefore also suboptimal, as it has to take a degree of lag (and hence variance) into account.”*

As ever, the topics of **SLAs and QoS**, which provide aspects of **dependability and transparency** to clients, arise: *”lacking quality of service control on network level, limitations of storage, consistency management.”* The worry here is: *”If the QoS is only observable per resource instance, instead of per user, some users will not get the quality they subscribed to.”*
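
A toy example makes the per-resource versus per-user distinction concrete. All the numbers, tenants and the 100 ms target below are invented for illustration:

```python
# Average QoS per resource instance can look fine while an individual
# tenant still misses the level they subscribed to.
SLA_LATENCY_MS = 100

# (tenant, observed latency in ms) samples from one shared resource
samples = [
    ("alice", 40), ("alice", 50), ("alice", 60),
    ("bob", 160), ("bob", 150),   # bob is consistently above target
]

per_resource = sum(ms for _, ms in samples) / len(samples)
print(f"per-resource average: {per_resource:.0f} ms -> "
      f"{'OK' if per_resource <= SLA_LATENCY_MS else 'violated'}")

for tenant in sorted({t for t, _ in samples}):
    vals = [ms for t, ms in samples if t == tenant]
    avg = sum(vals) / len(vals)
    print(f"per-user ({tenant}): {avg:.0f} ms -> "
          f"{'OK' if avg <= SLA_LATENCY_MS else 'violated'}")
```

Here the per-resource average (92 ms) meets the target while one tenant sees 155 ms; only per-user observation exposes the violation.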

They say that **interoperability and portability** are still challenges, noting that “In general there is a lack of support for porting applications (source code) with respect to all aspects involved in the process” and that, due to demand for cloud services, “the need for resources will exceed the availability of individual providers”; however, “current federation and interoperability support is still too weak to realise this”.

More related to **business models**, there is “generally insufficient experience and expertise about the relationship between pricing, effort and benefit: most users cannot assess the impact of moving to the CLOUD”.

Many of the topics highlighted in this report are themes being pursued here at the **ICCLab**, especially in the areas of performance, workload management, dependability and interoperability.

# Identified Essential Research Issues
From the report the following key research issues and challenges were noted.

– **Business and cost models**
– Accounting, billing, auditing: pricing models and appropriate dynamic systems are required, including monitoring of resources and charging for them with associated audit functions. This should ideally be supported by integrated quota management for both provider and user, to help keep within budget limits;
– Monitoring: common monitoring standards and methods are required to allow user choice over offerings and to match user expectations in billing. There are issues in managing multi-tenancy accounting, real-time monitoring and the need for feedback on expectations depending on resource usage and costs (a minimal metering sketch follows this list);
– Expertise: The lack of expertise requires research to develop best practice. This includes user choices and their effect on costs and other parameters and the impact of CLOUDs on an ICT budget and user experience. Use cases could be a useful tool.
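
To make the multi-tenancy accounting point concrete, here is a minimal metering sketch; the tenants, price and quota figures are invented, and a real system would meter far more resource types and feed alerts back to the user:

```python
from collections import defaultdict

PRICE_PER_CPU_HOUR = 0.05                      # EUR, invented rate
QUOTAS = {"tenant-a": 10.0, "tenant-b": 2.0}   # EUR budget limits, invented

usage_events = [                               # (tenant, cpu hours) from monitoring
    ("tenant-a", 40), ("tenant-b", 50), ("tenant-a", 25),
]

# Aggregate usage into per-tenant bills.
bills = defaultdict(float)
for tenant, cpu_hours in usage_events:
    bills[tenant] += cpu_hours * PRICE_PER_CPU_HOUR

# Integrated quota check: flag tenants exceeding their budget limit.
for tenant, amount in sorted(bills.items()):
    quota = QUOTAS[tenant]
    status = "within quota" if amount <= quota else "OVER QUOTA - alert user"
    print(f"{tenant}: {amount:.2f} EUR of {quota:.2f} EUR budget ({status})")
```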

– **Data management and handling**
– Handling of big data across large scales;
– Dealing with real-time requirements – particularly streamed multimedia;
– Distribution of a huge amount of data from sensors to CLOUD centres;
– Relationship to code – there is a case for complete independence, and for mobile code: move the code to the (bulky) data;
– Types of storage & types of data – there is a need for appropriate storage for the access pattern (and digital preservation) pattern required. Different kinds of data may optimally utilise different kinds of storage technology. Issues of security and privacy are also factors.
– Data structuring & integrity – the problem is to have the representation of the real world encoded appropriately inside the computer – and to validate the stored representation against the real world. This takes time (constraint handling) and requires elastic scalable solutions for distributed transactions across multiple nodes;
– Scalability & elasticity are needed in all aspects of data handling to deal with ‘bursty’ data, highly variable demand for access for control and analysis and for simulation work including comparing analytical and simulated representations;

– **Resource awareness/Management**

– Generic ways to define characteristics: there is a need for an architecture of metadata – a common framework (with internal standards) – to describe all the components of a system from end-user to CLOUD centre;
– Way to exploit these characteristics (programmatically, resource management level): the way in which software (dominantly middleware but also, for example, user interface management) interacts with and utilises the metadata is the key to elasticity, interoperation, federation and other aspects;
– Relates to programmability & resource management: there are issues with the systems development environment such that the software generated has appropriate interfaces to the metadata;
– Depending on the usage, “resources” may incorporate other services;
– Virtualisation – by metadata descriptions utilised by middleware:
– Of all types of devices
– Of network
– Of distributed infrastructures
– Of distributed data / files / storage
– Deal with scale and heterogeneity: the metadata has to have rich enough semantics;
– Multidimensional, dynamic and large scale scheduling respecting timing and QoS;
– Efficient scale up & down: this requires dynamic rescheduling based on predicted demand (see the sketch after this list);
– Allow portable programmability: this is critical to move the software to the appropriate resource;
– Exploit specifics on all levels: high performance and high throughput applications tend to have specific requirements which must be captured by the metadata;
– Energy efficient management of resources: in the ‘green environment’ the cost of energy is not only financial and so good management practices – another factor in the scheduling and optimisation of resources – have to be factored in;
– Resource consumption management: clearly managing the resources used contributes to the expected cost savings in an elastic CLOUD environment;
– Advanced reservation: this is important for time- or business-critical tasks and a mechanism is required;
– Fault tolerance, resilience, adaptability: it is of key importance to maintain the SLA/QoS.
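
As an illustration of rescheduling based on predicted demand, the sketch below scales a worker pool against a moving-average forecast. The capacity, headroom and demand figures are invented for the example:

```python
import math

WORKER_CAPACITY = 100.0   # requests/s one worker can serve (invented)
HEADROOM = 1.2            # keep 20% spare capacity
WINDOW = 3                # moving-average window

def target_workers(history):
    """Workers needed to serve the moving-average demand forecast."""
    recent = history[-WINDOW:]
    predicted = sum(recent) / len(recent) * HEADROOM
    return max(1, math.ceil(predicted / WORKER_CAPACITY))

demand_trace = [120, 180, 260, 390, 520, 310, 150]  # requests/s, invented
history, workers = [], 1
for observed in demand_trace:
    history.append(observed)
    new = target_workers(history)
    action = ("scale up" if new > workers
              else "scale down" if new < workers else "hold")
    print(f"demand={observed:3d} req/s -> {new} workers ({action})")
    workers = new
```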

– **Multi-tenancy impact**
– Isolate performance, isolate network slices: this is needed to manage resources and security;
– No appropriate programming mechanism: this requires research and development to find an appropriate systems development method, probably utilising service-oriented techniques;
– Co-design of management and programming model: since the execution of the computation requires management of the resources co-design is an important aspect requiring the programmer to have extensive knowledge of the tools available in the environment;

– **Programmability**

– Restructure algorithms / identify kernels: in order to place them in the new systems development context – this is re-use of old algorithms in a new context;
– Design models (reusability, code portability, etc.): to provide a systematic basis for the above;
– Control scaling behaviour (incl. scale down, restrict behaviour etc.): this requires to be incorporated in the parameters of the metadata associated with the code;
– Understand and deal with the interdependency of (different) applications with the management of large-scale environments;
– Different levels of scale: this is important depending on the application requirements and the characteristics of different scales need to be recorded in the metadata;
– Integrate monitoring information: dynamic re-orchestration and execution time changes to maintain SLA/QoS require the monitoring information to be available to the environment of the executing application;
– Multi-tenancy: as discussed above this raises particular aspects related to systems development and programmability;
– Ease of use: the virtualised experience of the end-user depends on the degree with which the non-functional aspects of the executing application are hidden and managed autonomically;
– Placement optimisation algorithms for energy efficiency, load balancing, high availability and QoS: this is the key aspect of scheduling resources for particular executing applications to optimise resource usage within the constraints of SLA and QoS;
– Elasticity, horizontal & vertical: as discussed before this feature is essential to allow optimised resource usage maintaining SLA/QoS;
– Relationship between code and data: the greater the separation of code and data (with the relationships encoded in metadata) the better the optimisation opportunities. Includes aspects of external data representation;
– Consider a wide range of device types and according properties, including energy efficiency etc.; but also wide range of users & use cases (see also business models): this concerns the optimal use of device types for particular applications;
– Personalisation vs. general programming: as programming moves from a ’cottage knitting’ industry to a managed engineering discipline the use of general code modules and their dynamic recomposition and parameterisation (by metadata) will increasingly become the standard practice. However this requires research in systems development methods including requirements capture and matching to available services.

– **Network Management**

– Guaranteeing bandwidth / latency performance, but also adjusting it on demand for individual tenants (elastic bandwidth / latency): this is a real issue for an increasing number of applications. It is necessary for the network to exhibit some elasticity to match that of the CLOUD centres. This may require network slices with adaptive QoS for virtualising the communication paths;
– Compensating for off-line time / maintain mobile connectivity (internationally): intermittent mobile connectivity threatens integrity in computer systems (and also allows for potential security breaches). This relates to better mechanisms for maintaining sessions / restarting sessions from a checkpoint;
– Isolating performance, connectivity etc.: there is a requirement for the path from end-user to CLOUD to be virtualised but maintaining the QoS and any SLA. This leads to intelligent diagnostics to discover any problems in connectivity or performance and measures to activate autonomic processes to restore elastically the required service.

– **Legislation and Policy**
– Privacy concerns: especially in international data transfers from user to CLOUD;
– Location awareness: required to certify conformity with legislation;
– Self-destructive data: if one-off processing is allowed;

– **Federation**
– Portability, orchestration, composition: this is a huge and important topic requiring research into semi-automated systems development methods allowing execution-time dynamic behaviour;
– Merged CLOUDs: virtualisation such that the end-user does not realise the application is running on multiple CLOUD providers’ offerings;
– Management: management of an application in a federated environment requires solutions from the topics listed above but with even higher complexity;
– Brokering algorithms: these are needed to find the best services given the user requirements and the resource provision (a toy matching sketch follows this list);
– Sharing of resources between CLOUD providers: this mechanism would allow CLOUD providers to take on user demands greater than their own capacity by expanding elastically (with appropriate agreements) to utilise the resources of other CLOUD suppliers;
– Networking in the deployment of services across multiple CLOUD providers: this relates to the above and also to the Networking topic earlier;
– SLA negotiation and management between CLOUD providers: this is complex with technical, economic and legal aspects;
– Support for context-aware services: is necessary for portability of (fragments of) an application across multiple CLOUD service providers;
– Common standards for interfaces and data formats: if this could be achieved then federated CLOUDs could become a reality;
– Federation of virtualized resources (this is not the same as federation of CLOUDs!) is required to allow selected resources from different CLOUD suppliers to be utilised for a particular application or application instance. It has implications for research in
– Gang-Scheduling
– End-to-End Virtualisation
– Scalable orchestration of virtualized resources and data: co-orchestration is highly complex and requires earlier research on dynamic re-orchestration/composition of services;
– CLOUD bursting, replication & scale of applications across CLOUDs: this relies on all of the above.
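
As a toy illustration of the brokering idea, the sketch below simply picks the cheapest offer that satisfies a requirement; real brokers would also weigh QoS, SLA terms and trust, and all providers and figures here are invented:

```python
from dataclasses import dataclass

@dataclass
class Offer:                  # one entry of an invented provider catalogue
    provider: str
    vcpus: int
    ram_gb: int
    price_per_hour: float     # EUR

catalogue = [
    Offer("cloud-a", 4, 8, 0.32),
    Offer("cloud-b", 4, 16, 0.41),
    Offer("cloud-c", 8, 16, 0.55),
]

def broker(need_vcpus, need_ram_gb):
    """Return the cheapest offer meeting the requirements, or None."""
    fitting = [o for o in catalogue
               if o.vcpus >= need_vcpus and o.ram_gb >= need_ram_gb]
    return min(fitting, key=lambda o: o.price_per_hour, default=None)

print(broker(need_vcpus=4, need_ram_gb=12))
# -> Offer(provider='cloud-b', vcpus=4, ram_gb=16, price_per_hour=0.41)
```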

– **Security**
– Process applications without disclosing information: homomorphic encryption offers some chance of preserving security (and privacy) here (see the toy example after this list);
– Static & dynamic compliance: this requires the requirements for compliance to be available as metadata to be monitored by the running application;
– Interoperability, respectively common standards for service level and security: this relates to standard interfaces since the need is to encode in metadata;
– Security policy management: policies change with the perceived threats and since the CLOUD environment is so dynamic policies will need to also be dynamic.
– Detection of faults and attacks: in order to secure the services, data and resources, threats need to be detected early (relates to reliability);
– Isolation of workloads: particular workloads of high security may require isolation and execution at specific locations with declared security policies that are appropriate;
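
To give a flavour of what homomorphic techniques promise, here is a toy Paillier-style demo in which the provider adds two encrypted values without ever seeing them. The tiny primes are purely illustrative; real schemes use large keys and are far less simple:

```python
import math, random

# Toy Paillier keypair with tiny primes - illustration only, not secure.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because the generator g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

a, b = 42, 17
ca, cb = encrypt(a), encrypt(b)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
print(decrypt(ca * cb % n2))      # -> 59, computed without seeing 42 or 17
```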

Mobile Cloud Computing Research Group

Mobility, ubiquity, and simply anywhere, anytime, anything combined with on-demand services and the powers of the Cloud.

Mobile and Cloud Computing seems to be a natural match.

But Mobile Cloud Computing lacks an accepted and technically sound definition. Instead, these terms are frequently used for marketing purposes, blurring (and devaluing) the actual potential of Mobile Cloud Computing.

It is thus important to establish a solid definition of Mobile Cloud Computing that is accepted by the community.

If you agree, then don’t hesitate to share your thoughts with us. Join our Mobile Cloud Computing Research Group and let us know!

 

Automating OCCI Installations

As part of the work here in the ICCLab, we are not only active in the [OCCI working group](http://www.occi-wg.org) and [contributing to its implementation on OpenStack](https://github.com/tmetsch/occi-os), but we also make available our work on automating the installation of OpenStack. We recently made a contribution to the [puppetlabs-nova project](https://github.com/puppetlabs/puppetlabs-nova). This [contribution allows](https://github.com/puppetlabs/puppetlabs-nova/pull/150) users of the nova module to specify which APIs to enable in nova, and enables the OCCI API where specified.

The contribution, [submitted as a pull request](https://github.com/puppetlabs/puppetlabs-nova/pull/150), can be used in the following fashion:

[gist id=3778884]

The `nova::api` class declared above enables all the usual OpenStack APIs as well as the OCCI interface. Where the OCCI API is enabled, Puppet will then look after installing the necessary components.
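
Once Puppet has converged, a quick smoke test is to hit the OCCI query interface and list the registered categories. A minimal sketch, assuming a hypothetical host, that the occi-os API listens on port 8787 (the default in our tests) and that a valid Keystone token is at hand:

```python
import http.client

HOST, PORT = "cloud.example.org", 8787        # hypothetical endpoint
TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"

conn = http.client.HTTPConnection(HOST, PORT)
# "/-/" is the query interface defined by the OCCI HTTP rendering.
conn.request("GET", "/-/", headers={"X-Auth-Token": TOKEN})
resp = conn.getresponse()
print(resp.status, resp.reason)
print(resp.read().decode())   # one Category per registered kind/mixin/action
```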

From Bare Metal to Cloud

This is the presentation that was presented at the [EGI Technical Forum 2012 in Prague](http://tf2012.egi.eu/).

If you like, [download the slides as pdf](http://blog.zhaw.ch/icclab/files/2012/09/From-Bare-Metal-to-Cloud.pdf).

There is also a YouTube video showing the various stages of bringing bare-metal machines to a state where they have OpenStack installed and operational.

For those in attendance, or those interested in how all of this is done, all information, HOWTOs, code and virtual machine images are available from this site.

The talk was excellently attended and there is great interest in using OpenStack within the EGI FedCloud environment, especially where the installation is automated, as with our work.

ICCLab EGI TF Audience

 

In Quest of the “Open Cloud” (updated)

A remarkable and wonderful feature of Zurich is its vivid computer science and technology community. Hardly a week passes without an interesting event around new innovations and technologies, like the Internet of Things, novel programming languages (Go, etc.), security, and of course cloud computing technologies (e.g. MongoDB) [footnote1]. Particularly interesting – from our / ICCLab perspective – is the ZhGeeks (@zhgeeks) community, run by one of our fellow technology and cloud evangelists, Muharem. Last week’s ZhGeeks meeting was about the Open Cloud, and no less prominent a figure than Samj was on hand to update us on the Open Cloud Initiative. A truly inspiring talk (download slides).

Notwithstanding Sam’s comprehensive and sound introduction to the world of “Cloud Openness” (from an OCI perspective), I can’t help but ask myself – hellya, what is this Open Cloud thing?

So let’s see what the universal oracle has to tell us [footnote2]. And here we go: “Open Clouds” all over; what a surprise. So again, what is it then?

Indeed, this is an interesting question if you bet on cloud computing, whether as adopter, contributor, or simply user. As Sam summed it up, “Open Cloud is about technology openness such that the market can drive its evolution”, reflecting the OCI definition and principles:

“Open Cloud must meet the following requirements:

  • Open Formats: All user data and metadata must be represented in Open Standard formats.
  • Open Interfaces: All functionality must be exposed by way of Open Standard interfaces.”

Having sacrificed several hours of sleep studying the OpenStack Foundation bylaws in order to understand the implications for OpenStack’s future (you may call it “openness”) – see my earlier blog post Open- or NotSo-OpenStack? – I was wondering whether OCI’s definition also poses open governance requirements on OSS cloud computing projects, like OpenStack, CloudStack, Eucalyptus and the like, but this is seemingly not the case.

Red Hat’s Open Cloud definition goes further, not only requesting open source, standards, and interfaces, but also incorporating “technical governance” into “The Open Cloud: Red Hat’s Perspective”. It says: “An Open Cloud …

Has a viable, independent community. Open source isn’t just about the code, its license, and how it can be used and extended. At least as important is the community associated with the code and how it’s governed. Realizing the collaborative potential of open source and the innovation it can deliver to everyone means having the structures and organization in place to tap it fully.”

Another popular reference is the Open Cloud Manifesto, which defines a set of open cloud principles similar to OCI’s, but focuses further on features of an open cloud and the defining community, in particular by embracing the facts that cloud computing is a community effort, that different stakeholder groups exist, and that these ought to collaborate to avoid fragmentation.

“6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.”
Rackspace, notably one of the founding members of OpenStack, has an opinion too; “Open Cloud Computing: History of Open Source Coding and the Open Cloud” – in this post linked exclusively to Open Source Software.
Alex Williams from TechCrunch “spent some time with the technologists at CloudOpen – the Linux Foundation’s first cloud-only event” and summarizes in his article: “To sum it up: if VMworld is about the data center then CloudOpen is about the software”. He continues: “Over the past few days, I’ve tried to crystallize the conversation to some extent. Here is my take:
  • An open cloud has open APIs.
  • An open cloud has a developer community that collaborates on developing the cloud infrastructure or platform environment.
  • An open cloud has people who have deep experience in running open source projects.
  • An open cloud gives users the rights to move data as wished.
  • An open cloud is federated — you can run your apps and data across multiple cloud environments.
  • An open cloud does not require an IT administrator to provision and manage.
  • An open cloud does not require new hardware.
  • An open cloud is not a rat’s nest of licenses.
  • An open cloud is not a proprietary, new age mainframe.
  • An open cloud is not washed with marketing.
  • An open cloud can be forked.
  • An open cloud has full view into the infrastructure environment.
  • An open cloud is not hosted, legacy software.”
David Linthicum seems to share the motivations which led me to this blog post. His InfoWorld article “The ‘open cloud’ is getting awfully confusing” does not go into detail but nicely summarizes the status quo:
“If you’re looking at adopting an “open cloud” technology, you have complex work ahead. Assessing their value is complicated by the fact that many of the vendors are less than two years old and have a minimal install base that can provide insight into fit, issues, and value.

As with any significant IT endeavor, you need to do your homework, understand your requirements, and make sure to test this stuff well before you use it in your enterprise. At some point, the “open cloud” market will normalize, and when that happens, you hope your seat will still be available in the ensuing game of musical chairs.”

GigaOm picks up this thread, citing Alex and David, in “Prediction: More Cloud Confusion Ahead“.

Alex Williams from TechCrunch cites the CloudOpen conference as the trigger; a summary of the discussions, “10 Insights from Linux Leaders in the Open Cloud”, is reproduced in the annex below.

So are we any smarter after all this? Only a little, I fear. There are common themes – open source, standards, interfaces and royalty-freeness from the technology angle on the one hand, and freedom of choice and cross-cloud portability on the other. And perhaps the community should indeed take the Open Cloud Manifesto principle on “open cloud community collaboration” to heart and drive consolidation, instead of contributing to one of those fundamental innovation hindrances they try to avoid, namely fragmentation. The CloudOpen conference was one great step in this direction, and the Google Hangout on Open Clouds (on YouTube) by Ben Kepes (Ben Kepes on G+) will hopefully be another way to continue this important discussion.

ANNEX

Open Cloud Initiative: Open Cloud Principles (OCP)

Interoperability (the ability to exchange and use information) between cloud computing products and services is required for unfettered competition between vendors and unrestricted choice for users.

Users must be able to come (no barriers to entry) and go (no barriers to exit) regardless of who they are (no discrimination) and what systems they use (technological neutrality).

Supporting vendors must therefore cooperate on standards, implementing those that exist (where applicable) and collaborating via an open process to develop those that don’t, with a view to competing fairly on quality.
Open Cloud must meet the following requirements:
  • Open Formats: All user data and metadata must be represented in Open Standard formats.
  • Open Interfaces: All functionality must be exposed by way of Open Standard interfaces.

Open Standards must meet the following requirements:

  • Copyrights: The standard must be documented in all its details, published and both accessible and [re]usable free of charge.
  • Patents: Any patents possibly present on [parts of] the standard must be irrevocably made available on a royalty-free basis.
  • Trademarks: Any trademarks possibly present on identifier(s) must be used for non-discriminatory enforcement of compliance only.
  • Implementations: There must be multiple full, faithful, independent and interoperable implementations (for both client and server where applicable) and at least one such implementation must be licensed in its entirety under an Open Source Initiative (OSI) approved license or placed into the public domain.

Red Hat: An open cloud has the following characteristics:

  • Is open source. This allows adopters to control their particular implementation and doesn’t restrict them to the technology and business roadmap of a specific vendor. It lets them build and manage clouds that put them in control of their own destiny and provides them with visibility into the technology on which they’re basing their business. It provides them with the flexibility to run the workloads of their choice, including proprietary ones, in their cloud. Open source also lets them collaborate with other communities and companies to help drive innovation in the areas that are important to them.
  • Has a viable, independent community. Open source isn’t just about the code, its license, and how it can be used and extended. At least as important is the community associated with the code and how it’s governed. Realizing the collaborative potential of open source and the innovation it can deliver to everyone means having the structures and organization in place to tap it fully.
  • Is based on open standards, or protocols and formats that are moving toward standardization and that are independent of vendor and platform. Standardization in the sense of “official” cloud standards blessed by standards bodies is still in early days. That said, approaches to interoperability that aren’t under the control of individual vendors and that aren’t tied to specific platforms offer important flexibility. This allows the API specification to evolve beyond implementation constraints and creates the opportunity for communities and organizations to develop variants that meet their individual technical and commercial requirements.
  • Freedom to use IP.  Recent history has repeatedly shown that there are few guarantees that intellectual property (IP) assets will remain accessible to all from one day to the next.  To have confidence that you will continue to enjoy access to IP assets that you depend on under the terms that you depend on, permission needs to be given in ways that make that technology open and accessible to the user.  So-called “de facto standards,” which are often “standards” only insofar as they are promoted by a large vendor, often fail this test.
  • Is deployable on your choice of infrastructure. Hybrid cloud management should provide an additional layer of abstraction above virtualization, physical servers, storage, networking, and public cloud providers. This implies, or indeed requires, that cloud management be independent of virtualization and other foundational technologies. This is a fundamental reason that cloud is different from virtualization management and a fundamental enabler of hybrid clouds that span physical servers, multiple virtualization platforms, and a wide range of public cloud providers including top public clouds.
  • Is pluggable and extensible with an open API. This lets users add features, providers, and technologies from a variety of vendors or other sources. Critically, the API itself cannot be under the control of a specific vendor or tied to a specific implementation but must be under the auspices of a third-party organization that allows for contributions and extensions in an open and transparent manner. Deltacloud, an API that abstracts the differences between clouds, provides a good example. It is under the auspices of the Apache Software Foundation and is neither a Red Hat-controlled project nor tied to a particular implementation of cloud management.
  • Enables portability to other clouds. Implicit in a cloud approach that provides support for heterogeneous infrastructure is that investments made in developing for an open cloud must be portable to other such clouds. Portability takes a variety of forms including programming languages and frameworks, data, and the applications themselves. If you develop an application for one cloud, you shouldn’t need to rewrite it in a different language or use different APIs to move it somewhere else. Furthermore, a consistent runtime environment across clouds means that retesting and requalification isn’t needed every time you want to redeploy.

Open Cloud Manifesto : Open Cloud Principles

Rather, as cloud computing matures, there are several key principles that must be followed to ensure the cloud is open and delivers the choice, flexibility and agility organizations demand:
1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.
2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.
3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.
4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.
5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.
6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.
Leaders of the Open Cloud at CloudOpen

Richard Kaufmann, chief technologist, HP Cloud Services
July 3, 2012, “HP Public Cloud Aims to Boost OpenStack Customer Base.” 

There are two important APIs out there, one is Amazon’s and the other is OpenStack. And OpenStack has Amazon compatibility. HP will continue to support those Amazon compatibility layers; We’re not trying to lead on a position about what customers should do with APIs… I (personally) believe there should be a popular cloud API for IaaS and it should not be Amazon. It could be anything else but it can’t float from above. It has to be based on popular usage.

Lew Moorman, Rackspace
July 10, 2012, “Open Cloud Key to Modern Networking.”

Some people seem to think that APIs are the cloud and one thing that made the cloud so revolutionary is it’s programmatically accessible by API.  But (Amazon) S3 is a really complex distributed system. The issue with a model that says “clone Amazon” is that, unless you have the core technology underneath it, you can’t have a cloud…

OpenStack is really setting out to build an open alternative from end to end. They say we’re going to do networking, not just set out to copy Amazon. We need to really innovate and build a visionary system that can power the future of computing. Amazon, VMware and Microsoft don’t have all the answers.

Christopher Brown, CTO, Opscode
July 17, 2012, “Chef Offers a Recipe for the Open Source Cloud.” 

The open cloud lies both below and above the waterline of the API. At the beginning we all wanted to treat the cloud as an easier way to get the compute units that looked like the old thing we used to get buying a physical machine. But that’s not actually true. It’s not the same thing and it requires a different design underneath of a common substrate. If you look above the water line at the consumer, the way you build applications, the way they scale, etc., designing the cloud and for the cloud are different than what is now legacy.

Mark Hinkle, senior director of cloud computing community, Citrix
July 24, 2012, “Citrix’s Hinkle Proposes Linux Model for an Open Source Cloud.” 

It’s first and foremost that the orchestration platform is open source. The data you store within the cloud is open to you as the end user in a format you can manipulate easily and it’s easily transferable. The API is also open and clearly documented.

Ross Turk, vice president of community, InkTank
July 31, 2012, “An Open Source Storage Solution for the Enterprise.” 

It can mean a cloud stack that is built on open source like OpenStack or CloudStack and that reflects the economic and community advantages behind building something that’s akin to what Amazon has done, but built on commodity hardware. It’s an open alternative to AWS.

Another way to think of the open cloud doesn’t exclude AWS. It’s having cloud services with standardized APIs so applications written for one cloud can work on another cloud.

Imad Sousou, director of Intel’s Open Source Technology Center
Aug. 7, 2012, “Open Cloud Standards will Emerge With More Collaboration.”

The open cloud must meet these requirements:  Use Open Formats, where all user data and metadata must be represented in Open Standard formats. Use Open Interfaces, where functionality must be exposed through Open Standard interfaces.

In addition, in the open cloud, various open source technologies should be available to build the right solutions efficiently, and to drive innovation. These would include software stack and tools, such as the hypervisors or operating systems, middleware, such as databases and web servers, web content management systems, and development tools and languages. Such open source-based software solutions would reinforce interoperability of the open cloud.

Kyle MacDonald, vice president of cloud, Canonical
Aug. 14, 2012, “Canonical: Making the Open Cloud Seamless for Users.” 

True power comes when the users can move from one cloud service to another. That’s the state of nirvana. The cool word is ‘interoperability.’ …  It will be almost required that if you’re a cloud service you publish an API that’s clear. And eventually there will be a common API or it becomes so simple the minor differences won’t be a big deal to end users. And then partners who define the services can use those same open source technologies and provide a good service.

Alan Clark, director of industry initiatives, emerging standards and open source, SUSE
Aug. 21, 2012, “SUSE Aims for One-Click Enterprise Deployment on OpenStack.” 

Enterprise IT must deliver the most efficient, scalable and flexible services possible. The open cloud provides that through the ability to have a flexible infrastructure, quick and easy deployment, service management and complete life cycle management.

We’re working with partners — many are part of these open source projects – to build this together and that builds interoperability. It’s a collaboration of ideas as well as code. It accelerates bringing a solution to market that works across all the different partners.

Angel Diaz, vice president of software standards and cloud, IBM
Aug. 28, 2012, “3 Projects Creating User-Driven Standards for the Open Cloud.” 

Our clients who use technology have a heterogeneous environment. They need to take existing systems, extend them and deal with it and they don’t want to be locked into a single vendor solution. That is how (IBM) defines an open cloud: where end users want to have these points of interoperability.

Joe Brockmeier, open source cloud computing evangelist, Citrix
Sept. 6, 2012, “Defining the Open Cloud.” 

Some folks will argue that a cloud service or offering is open if it has open APIs and open standards.  For my money, the best definition of the open cloud came from Red Hat’s Scott Crenshaw: It enables portability across clouds; Has a pluggable, extensible, and open API; Lets you deploy to your choice of infrastructure; It’s unencumbered by patents and other IP restrictions; It’s based on open standards; It has a viable and independent community; It is open source.

Having open APIs is necessary, but it’s not enough. If you depend on one vendor to provide your cloud, and you can’t pull it in-house for any reason, it’s just not open.

FOOTNOTES

[footnote1] This is symptom and cause, concurrently, of Zurich’s rise as the European Silicon Valley, as some already claim, based on its flourishing high-tech start-up scene that enjoys a wide array of support and attention (e.g. the Zurich Start-Up Portal, Startwerk Portal, Zurich Start-up Weekend, Zurich Startups, Startup.ch).

[footnote2] Apologies to one of my previous employers. I really didn’t mean to mention the evil.

 
