Tag: interoperability

ICCLab presents Interoperability and APIs in OpenStack @ EGI Technical Forum, Cloud Interoperability Week

The ICCLab was invited to give a talk on Interoperability and APIs in OpenStack at the EGI Technical Forum, which was co-located with Cloud Interoperability Week. The workshop and hands-on tutorial sessions took place from September 18 to 20, 2013 in the beautiful city of Madrid.

The presentation sessions were followed by a panel discussion in which the speakers fielded several questions from the audience. There was substantial interest from the audience in the OCCI development roadmap, and questions were also raised on the suitability of one cloud standard over another.

In the tutorial sessions that followed the workshop, there were several projects that demonstrated their use of the OCCI standard. Notable among them were OpenStack, OpenNebula, and CompatibleOne.
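For readers unfamiliar with what an OCCI interaction looks like in practice, the sketch below builds the headers for a compute-creation request in the OCCI 1.1 `text/occi` HTTP rendering, the style of call the demonstrated frameworks expose. This is an illustrative sketch only: the helper function name is ours, and no real endpoint is contacted.

```python
def occi_create_compute_headers(cores: int, memory_gb: float) -> dict:
    """Build HTTP headers for a 'POST /compute/' call in the OCCI
    text/occi rendering (illustrative sketch, not a full client)."""
    return {
        "Content-Type": "text/occi",
        # The Category header names the 'compute' kind from the
        # OCCI infrastructure scheme.
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        # Resource attributes travel as X-OCCI-Attribute headers.
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}"),
    }

# Request a 2-core VM with 4 GB of memory.
headers = occi_create_compute_headers(cores=2, memory_gb=4.0)
print(headers["Category"])
```

Because the rendering is plain HTTP headers rather than a vendor-specific payload, the same request can in principle be sent to any OCCI-compliant endpoint, which is precisely the interoperability argument made in the tutorials.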

[slideshare id=26782772&doc=interoperabilityandapisinopenstack-131002072248-phpapp01]

Open Cloud Day 2013, hosted by Zurich University of Applied Sciences


The ICCLab here at ZHAW will be co-organising the Open Cloud Day 2013 on 11th June 2013 here in Winterthur. /ch/open understands the importance of Cloud Computing, as does the ICCLab. In the view of /ch/open, to get the full power of clouds, these clouds should be open according to the principles of the Open Cloud Initiative. The goal is to foster open clouds and the interoperability of clouds, especially taking into account the requirements of public administrations and of large as well as small and medium-sized businesses. At this conference concrete solutions and stacks will be discussed; at least one of the afternoon tracks will be explicitly technical. Another key focus area is the creation of simple-to-use, open-source GovClouds.

Conference link: http://www.ch-open.ch/events/aktuelle-events/open-cloud-day-2013/


ICCLab Presents OCCI @ Future Internet Assembly

The ICCLab presented the latest developments (PDF) in the Open Cloud Computing Interface at the Future Internet Assembly in Dublin. The session was organised by Cloud4SOA and the main theme was MultiCloud. In this regard, OCCI features in many projects striving towards this goal, including EGI FedCloud, CompatibleOne and BonFire. The presentation also outlined some future work that will be carried out in Mobile Cloud Networking, which caught the audience's interest.

ICCLab @ Swiss Academic Cloud Computing Experience

We presented at the Swiss Academic Cloud Computing Experience conference. Below are the slides as presented (or you can grab the PDF here).

ICCLab Invited to European Commission Cloud Expert Group

The ICCLab (Andy and Thomas) has been invited to participate in the next meeting of the Cloud Expert Group, which will take place on October 29-30, 2012 in Brussels.

The previous work of the Cloud Expert Group (“Advances in Clouds”) has clearly shown that Cloud Computing still requires research and development work in multiple domains (e.g. software & services, networks, security, complex systems, etc.).

The aim of the workshop is to refine the research topics identified in the above mentioned report, provide more details and develop a roadmap with priorities and actions.

To better shape the workshop discussions, position papers (no longer than 2 pages) are to be submitted on any of the following topics:

  • Data management, communications & networks
  • Resource description & usage, resource management
  • Programmability & usability
  • Federation, interoperability, portability
  • Security
  • Business and cost models, expertise & usability

Andy will provide a position paper on cloud standards, federation, and interoperability from the ICCLab’s perspective. The slides will be online after the event.

European Commission Cloud Announcements

While the [ICCLab presented](http://ec.europa.eu/information_society/events/cf/ictpd12/document.cfm?doc_id=23258) at the [ICT Proposer’s Day in Warsaw](http://ec.europa.eu/information_society/events/ictproposersday/2012/index_en.htm), a very interesting announcement was made in relation to Europe’s strategy on Cloud Computing.

On Thursday, the vice president of the European commission, [Neelie Kroes](http://en.wikipedia.org/wiki/Neelie_Kroes), announced [further details](http://europa.eu/rapid/pressReleasesAction.do?reference=IP/12/1025&format=HTML&aged=0&language=EN&guiLanguage=en) on the European Cloud Partnership.

From the ICCLab’s perspective this is a very exciting announcement, as it underlines some of the key research themes investigated here, namely [dependability and interoperability](http://www.cloudcomp.ch/research/foundation/themes/). Also encouraging is [the reuse](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/swd_com_cloud.pdf) of much of the good work carried out in the area of standardisation by [the SIENA initiative](http://www.sienainitiative.eu), as quoted in the “[Staff Working Paper](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/swd_com_cloud.pdf)”.

In Thursday’s announcement, arguments were given for why Europe should engage more with the cloud. For many in the ICT domain these are well known, but what is more interesting in this announcement and the accompanying report is the set of 3 key actions ([from the accompanying ECP document](http://ec.europa.eu/information_society/activities/cloudcomputing/docs/com/com_cloud.pdf)):

1. Cutting through the Jungle of Standards
– Promote trusted and reliable cloud offerings by tasking ETSI to coordinate with stakeholders in a transparent and open way to identify by 2013 a detailed map of the necessary standards (inter alia for security, interoperability, data portability and reversibility).
– Enhance trust in cloud computing services by recognising at EU-level technical specifications in the field of information and communication technologies for the protection of personal information in accordance with the new Regulation on European Standardisation.
– Work with the support of ENISA and other relevant bodies to assist the development of EU-wide voluntary certification schemes in the area of cloud computing (including as regards data protection) and establish a list of such schemes by 2014.
– Address the environmental challenges of increased cloud use by agreeing, with industry, harmonised metrics for the energy consumption, water consumption and carbon emissions of cloud services by 2014.
2. Safe and Fair Contract Terms and Conditions
– Develop with stakeholders model terms for cloud computing service level agreements for contracts between cloud providers and professional cloud users, taking into account the developing EU acquis in this field.
– In line with the Communication on a Common European Sales Law, propose to consumers and small firms European model contract terms and conditions for those issues that fall within the Common European Sales Law proposal. The aim is to standardise key contract terms and conditions, providing best-practice contract terms for cloud services on aspects related to the supply of “digital content”.
– Task an expert group set up for this purpose and including industry to identify before the end of 2013 safe and fair contract terms and conditions for consumers and small firms, and on the basis of a similar optional instrument approach, for those cloud-related issues that lie beyond the Common European Sales Law.
– Facilitate Europe’s participation in the global growth of cloud computing by: reviewing standard contractual clauses applicable to transfer of personal data to third countries and adapting them, as needed, to cloud services; and by calling upon national data protection authorities to approve Binding Corporate Rules for cloud providers.
– Work with industry to agree a code of conduct for cloud computing providers to support a uniform application of data protection rules which may be submitted to the Article 29 Working Party for endorsement in order to ensure legal certainty and coherence between the code of conduct and EU law.

3. Establishing a European Cloud Partnership to drive innovation and growth from the public sector.
– Identify public sector cloud requirements; develop specifications for IT procurement and procure reference implementations to demonstrate conformance and performance.
– Advance towards joint procurement of cloud computing services by public bodies based on the emerging common user requirements.
– Set up and execute other actions requiring coordination with stakeholders as described in this document.

This announcement was coupled with the news that the EU Commission expects its cloud strategy to add [160B EUR to the EU GDP by 2020](http://techcrunch.com/2012/09/27/europe-shoots-for-the-clouds-ec-lays-out-new-cloud-strategy-to-add-e160b-to-eu-gdp-by-2020/).

# What is the ECP?
The ECP is a coming together of public authorities and industry, both Cloud buyers and suppliers. It consists of 3 main phases:

1. Common requirements for cloud technology procurement. Typical examples here include standards and security.
2. The delivery of proofs-of-concept for the common requirements.
3. The creation of reference implementations.

It was originally outlined [in a speech](http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH/12/38&format=HTML&aged=0&language=EN&guiLanguage=en) by Neelie Kroes in late January.

EU Report: “Advances in Clouds: Report from the Cloud Computing Expert Working Group”

# Introduction
This is a brief summary of the [EU Report: “Advances in Clouds: Report from the CLOUD Computing Expert Working Group.”](http://cordis.europa.eu/fp7/ict/ssai/docs/future-cc-2may-finalreport-experts.pdf) In this report a set of appointed cloud experts have studied the current cloud computing landscape and have come out with a set of recommendations for advancing the future cloud. They note a large number of challenges present today in cloud computing which, where tackled, provide an opportunity for European innovators. Quoting the report: *”Many long-known ICT challenges continue and may be enhanced in a CLOUD environment. These include large data transmission due to inadequate bandwidth; proprietarily of services and programming interfaces causing lock-in; severe problems with trust, security and privacy (which has legal as well as technical aspects); varying capabilities in elasticity and scaling; lack of interoperation interfaces between CLOUD (resources and services) offerings and between CLOUDs and other infrastructures and many more.”*

They see that performance aspects in the cloud are as pressing as ever and require tackling. *”What is more, spawning (scaling) of objects – no matter whether for the purpose of horizontal or vertical scale – is thereby still slow in modern CLOUD environments and therefore also suboptimal, as it has to take a degree of lag (and hence variance) into account.”*

As ever, the topics of **SLAs and QoS**, which provide **dependability and transparency** to clients, arise: *”lacking quality of service control on network level, limitations of storage, consistency management.”* The worry here is: *”If the QoS is only observable per resource instance, instead of per user, some users will not get the quality they subscribed to.”*

They say that **interoperability and portability** are still challenges: “In general there is a lack of support for porting applications (source code) with respect to all aspects involved in the process”. Due to the demand for cloud services, “the need for resources will exceed the availability of individual providers”, yet “current federation and interoperability support is still too weak to realise this”.

More related to **business models**, “generally insufficient experience and expertise about the relationship between pricing, effort and benefit: most users cannot assess the impact of moving to the CLOUD”.

Many of the topics highlighted in this report are themes being pursued here at the **ICCLab**, especially in the areas of performance, workload management, dependability and interoperability.

# Identified Essential Research Issues
From the report the following key research issues and challenges were noted.

– **Business and cost models**
– Accounting, billing, auditing: pricing models and appropriate dynamic systems are required including monitoring of resources and charging for them with associated audit functions. This should ideally be supported by integrated quota management for both provider and user, to help keep within budget limits
– Monitoring: common monitoring standards and methods are required to allow user choice over offerings and to match user expectations in billing. There are issues in managing multi-tenancy accounting, real time monitoring and the need for feedback from expectations depending on resource usage and costs.
– Expertise: The lack of expertise requires research to develop best practice. This includes user choices and their effect on costs and other parameters and the impact of CLOUDs on an ICT budget and user experience. Use cases could be a useful tool.

– **Data management and handling**
– Handling of big data across large scales;
– Dealing with real-time requirements – particularly streamed multimedia;
– Distribution of a huge amount of data from sensors to CLOUD centres;
– Relationship to code – there is a case for complete independence, and for mobile code: moving the code to the (bulky) data;
– Types of storage & types of data – there is a need for appropriate storage for the access pattern (and digital preservation) pattern required. Different kinds of data may optimally utilise different kinds of storage technology. Issues of security and privacy are also factors.
– Data structuring & integrity – the problem is to have the representation of the real world encoded appropriately inside the computer – and to validate the stored representation against the real world. This takes time (constraint handling) and requires elastic scalable solutions for distributed transactions across multiple nodes;
– Scalability & elasticity are needed in all aspects of data handling to deal with ‘bursty’ data, highly variable demand for access for control and analysis and for simulation work including comparing analytical and simulated representations;

– **Resource awareness/Management**

– Generic ways to define characteristics: there is a need for an architecture of metadata to a common framework (with internal standards) to describe all the components of a system from end-user to CLOUD centre;
– Way to exploit these characteristics (programmatically, resource management level): the way in which software (dominantly middleware but also, for example, user interface management) interacts with and utilises the metadata is the key to elasticity, interoperation, federation and other aspects;
– Relates to programmability & resource management: there are issues with the systems development environment such that the software generated has appropriate interfaces to the metadata;
– Depending on the usage, “resources” may incorporate other services;
– Virtualisation – by metadata descriptions utilised by middleware –
– Of all types of devices
– Of network
– Of distributed infrastructures
– Of distributed data / files / storage
– Deal with scale and heterogeneity: the metadata has to have rich enough semantics;
– Multidimensional, dynamic and large scale scheduling respecting timing and QoS;
– Efficient scale up & down: this requires dynamic rescheduling based on predicted demand;
– Allow portable programmability: this is critical to move the software to the appropriate resource;
– Exploit specifics on all levels: high performance and high throughput applications tend to have specific requirements which must be captured by the metadata;
– Energy efficient management of resources: in the ‘green environment’ the cost of energy is not only financial and so good management practices – another factor in the scheduling and optimisation of resources – have to be factored in;
– Resource consumption management: clearly managing the resources used contributes to the expected cost savings in an elastic CLOUD environment;
– Advanced reservation: this is important for time- or business-critical tasks and a mechanism is required;
– Fault tolerance, resilience, adaptability: it is of key importance to maintain the SLA/QoS

– **Multi-tenancy impact**
– Isolate performance, isolate network slices: this is needed to manage resources and security;
– No appropriate programming mechanism: this requires research and development to find an appropriate systems development method, probably utilising service-oriented techniques;
– Co-design of management and programming model: since the execution of the computation requires management of the resources co-design is an important aspect requiring the programmer to have extensive knowledge of the tools available in the environment;

– **Programmability**

– Restructure algorithms / identify kernels: in order to place in the new systems development context – this is re-use of old algorithms in a new context;
– Design models (reusability, code portability, etc.): to provide a systematic basis for the above;
– Control scaling behaviour (incl. scale down, restrict behaviour etc.): this requires to be incorporated in the parameters of the metadata associated with the code;
– Understand and deal with the interdependency of (different) applications with the management of large scale environments;
– Different levels of scale: this is important depending on the application requirements and the characteristics of different scales need to be recorded in the metadata;
– Integrate monitoring information: dynamic re-orchestration and execution time changes to maintain SLA/QoS require the monitoring information to be available to the environment of the executing application;
– Multi-tenancy: as discussed above this raises particular aspects related to systems development and programmability;
– Ease of use: the virtualised experience of the end-user depends on the degree with which the non-functional aspects of the executing application are hidden and managed autonomically;
– Placement optimisation algorithms for energy efficiency, load balancing, high availability and QoS: this is the key aspect of scheduling resources for particular executing applications to optimise resource usage within the constraints of SLA and QoS;
– Elasticity, horizontal & vertical: as discussed before this feature is essential to allow optimised resource usage maintaining SLA/QoS;
– Relationship between code and data: the greater the separation of code and data (with the relationships encoded in metadata) the better the optimisation opportunities. Includes aspects of external data representation;
– Consider a wide range of device types and according properties, including energy efficiency etc.; but also wide range of users & use cases (see also business models): this concerns the optimal use of device types for particular applications;
– Personalisation vs. general programming: as programming moves from a ’cottage knitting’ industry to a managed engineering discipline the use of general code modules and their dynamic recomposition and parameterisation (by metadata) will increasingly become the standard practice. However this requires research in systems development methods including requirements capture and matching to available services.

– **Network Management**

– Guaranteeing bandwidth / latency performance, but also adjusting it on demand for individual tenants (elastic bandwidth / latency): this is a real issue for an increasing number of applications. It is necessary for the network to exhibit some elasticity to match that of the CLOUD centres. This may require network slices with adaptive QoS for virtualising the communication paths;
– Compensating for off-line time / maintain mobile connectivity (internationally): intermittent mobile connectivity threatens integrity in computer systems (and also allows for potential security breaches). This relates to better mechanisms for maintaining sessions / restarting sessions from a checkpoint;
– Isolating performance, connectivity etc.: there is a requirement for the path from end-user to CLOUD to be virtualised but maintaining the QoS and any SLA. This leads to intelligent diagnostics to discover any problems in connectivity or performance and measures to activate autonomic processes to restore elastically the required service.

– **Legalisation and Policy**
– Privacy concerns: especially in international data transfers from user to CLOUD;
– Location awareness: required to certify conformity with legislation;
– Self-destructive data: if one-off processing is allowed;

– **Federation**
– Portability, orchestration, composition: this is a huge and important topic requiring research into semi-automated systems development methods allowing execute time dynamic behaviour;
– Merged CLOUDs: virtualisation such that the end-user does not realise the application is running on multiple CLOUD providers’ offerings;
– Management: management of an application in a federated environment requires solutions from the topics listed above but with even higher complexity;
– Brokering algorithms: are needed to find the best services given the user requirements and the resource provision;
– Sharing of resources between CLOUD providers: this mechanism would allow CLOUD providers to take on user demands greater than their own capacity by expanding elastically (with appropriate agreements) to utilise the resources of other CLOUD suppliers;
– Networking in the deployment of services across multiple CLOUD providers: this relates to the above and also to the Networking topic earlier;
– SLA negotiation and management between CLOUD providers: this is complex with technical, economic and legal aspects;
– Support for context-aware services: is necessary for portability of (fragments of) an application across multiple CLOUD service providers;
– Common standards for interfaces and data formats: if this could be achieved then federated CLOUDs could become a reality;
– Federation of virtualized resources (this is not the same as federation of CLOUDs!) is required to allow selected resources from different CLOUD suppliers to be utilised for a particular application or application instance. It has implications for research in
– Gang-Scheduling
– End-to-End Virtualisation
– Scalable orchestration of virtualized resources and data: co-orchestration is highly complex and requires earlier research on dynamic re- orchestration/composition of services;
– CLOUD bursting, replication & scale of applications across CLOUDs: this relies on all of the above.

– **Security**
– Process applications without disclosing information: Homomorphic security: this offers some chance of preserving security (and privacy);
– Static & dynamic compliance: this requires the requirements for compliance to be available as metadata to be monitored by the running application;
– Interoperability, respectively common standards for service level and security: this relates to standard interfaces since the need is to encode in metadata;
– Security policy management: policies change with the perceived threats and since the CLOUD environment is so dynamic policies will need to also be dynamic.
– Detection of faults and attacks: in order to secure the services, data and resources, threats need to be detected early (relates to reliability);
– Isolation of workloads: particular workloads of high security may require isolation and execution at specific locations with declared security policies that are appropriate;