Tag: research (page 2 of 2)

ICCLab joins the new COST Action Autonomous Control for a Reliable Internet of Services (ACROSS) – IC1304

We are happy to announce that ICCLab was invited to join the COST Action Autonomous Control for a Reliable Internet of Services (ACROSS) – IC1304 as the Swiss representative on the Management Committee.

ICT COST Action IC1304 Autonomous Control for a Reliable Internet of Services (ACROSS)

Descriptions are provided by the Actions directly via e-COST.

Currently, we are witnessing a paradigm shift from the traditional information-oriented Internet into an Internet of Services (IoS). This transition opens up virtually unbounded possibilities for creating and deploying new services. Eventually, the ICT landscape will migrate into a global system where new services are essentially large-scale service chains, combining and integrating the functionality of (possibly huge) numbers of other services offered by third parties, including cloud services. At the same time, as our modern society is becoming more and more dependent on ICT, these developments raise the need for effective means to ensure quality and reliability of the services running in such a complex environment. Motivated by this, the aim of this Action is to create a European network of experts, from both academia and industry, aiming at the development of autonomous control methods and algorithms for a reliable and quality-aware IoS.


30th Birthday of the Swiss Informatics Society


The 30th birthday of the Swiss Informatics Society (SI), held on Tuesday, 25 June in Fribourg (CH), concluded successfully with more than 200 participants, who attended the thematic workshops in the morning, the inaugural meeting of the Swiss AIS Chapter, and the plenary in the afternoon.

We summarize below the main topics of the Cloud Computing workshop, moderated by ZHAW ICCLab, and the award ceremony.

Workshop: Cloud Computing in Switzerland

Cloud Computing is transforming the IT industry, and this concerns a high-tech country like Switzerland in particular. The resulting potential and risks need to be well understood in order to fully leverage the technical as well as economic advantages. This workshop provided an overview of current technological and economic trends, with a particular focus on Switzerland and its Federal Cloud Computing strategy.

8:45 – 9:00  Intro by Christof Marti (ZHAW)
Workshop introduction, goals and activities on Cloud Computing at ZHAW.

The Cloud Computing Special Interest Group (SIG), whose formation is coordinated by ZHAW ICCLab, was introduced together with its overall goals: to stimulate the knowledge, implementation and development of Cloud Computing in industry, research, SMEs and education. The kick-off meeting is foreseen for September (watch si-cc-sig or the LinkedIn group for more details). Further information was presented on the InIT Cloud Computing Lab (ICCLab), a research lab dedicated to Cloud Computing in the focus area of Service Engineering, encompassing important research themes and cloud initiatives such as automation, interoperability, dependability, SDN for clouds, monitoring, rating, charging, billing and Future Internet platforms.

9:00-09:20  Peter Kunszt  (SystemsX)
Cloud computing services for research – first steps and recommendations

The view of the scientific community on technological trends and the opportunities offered by Cloud Computing infrastructures. An interesting start to the workshop by the project leader of SyBIT (SystemsX.ch Biology IT), with an overview of possible cloud services for science and education, recommendations concerning commercial vs. self-made clouds, and possible pricing and billing models for science.

9:20-09:40 Markus Brunner (Swisscom)
Cloud/SDN in Service Provider Networks

Markus illustrated “why a new network architecture” with a feature comparison of aging (static) network technology and the current (dynamic) trend against global needs such as cost effectiveness, agility and service orientation. The proposal was to look at new infrastructures based on SDN (Software Defined Networking) and NFV (Network Function Virtualisation). NFV is concerned with porting network or telecommunications applications, which today typically run on dedicated and specialized hardware platforms, to virtualized cloud platforms. Some basic architectures and the interplay of NFV and SDN were discussed. The presentation concluded with an analysis of today's challenges for cloud technologies in communication-oriented applications: real-time behaviour, security, predictable performance, fault management in virtualized systems, and fixed/mobile differences.

9:40-10:00  Sergio Maffioletti (University of Zurich)
A roadmap for an Academic Cloud 

“The view of the scientific community on how cloud technology could be used as a foundation for building a national research support infrastructure.” An interesting and innovative presentation by Sergio, starting from a “why and what's wrong” analysis and moving through the initiatives in place (new platforms, cloud utilisation and long-term competitiveness objectives). The presentation also gave an overview of how this is implemented within the National Research Infrastructure program (the Swiss Academic Compute Cloud project) and of innovative management systems (a mechanism to collect community requirements and implement technical services and solutions). It concluded with objectives and targets such as interoperation, intra/inter-institutional access to infrastructure, cloud enablement, research clustering and national computational resources.

10:00-10:20 Michèal Higgins  (CloudSigma) – remote
CloudSigma and the Challenges of Big Science in the Cloud

Switzerland-based CloudSigma is a pure-cloud IaaS service provider, offering highly available, flexible, enterprise-class cloud servers in Europe and the U.S. It offers innovative services such as all-SSD storage, high-performance solutions and firewall/VPN services. Helping to build a federated cloud platform (Helix Nebula) that addresses the needs of big science, CloudSigma sees the biggest challenge, and the biggest value, in having huge data sets available close to the computing instances. In conclusion, CloudSigma offers the science community free storage of common big data sets close to their compute instances, reducing the cost and time needed to transfer the data.

10:20-10:40 Muharem Hrnjadovic (RackSpace)

An overview of key capabilities of cloud-based infrastructures like OpenStack, together with challenging scenarios, was presented during this session.

10:40-10:45 All
Q&A session

Swiss Informatics Competition 2013

Aside from the speakers and panel discussions, captivating student projects (Bachelor's and Master's in Computer Science) from universities and universities of applied sciences were presented to illustrate the diversity of computing technologies. Selected projects were also awarded by a team of experts. The details on the student projects are available here.

Some photos were taken at the cloud computing workshop, the plenary and the award ceremony.

Events: 30th birthday of the Swiss Informatics Society SI today at the HES-SO Fribourg.

As announced in previous posts, we report below the agenda of the Cloud Computing in Switzerland workshop, chaired by ICCLab from 8:45 to 10:45 this morning at the 30th birthday of the Swiss Informatics Society SI at the HES-SO Fribourg.

8:45 – 9:00  Intro by Christof Marti (ZHAW)

Workshop introduction, goals and activities on Cloud Computing at ZHAW.

9:00-09:20 Peter Kunszt (SystemsX)

The view of the scientific community on technological trends and the opportunities offered by Cloud Computing infrastructures.

“Cloud computing services for research – first steps and recommendations”

9:20-09:40 Markus Brunner (Swisscom)

The view of the operators on how cloud computing is transforming the ecosystem and related risks & challenges.

9:40-10:00  Sergio Maffioletti (University of Zurich) 

The view of the scientific community on how cloud technology could be used as a foundation for building a national research support infrastructure.

“Roadmap for an Open Cloud Academic Research Infrastructure”

10:00-10:20 Michèal Higgins (CloudSigma) – remote

The view of the industry on how cloud computing is transforming the ecosystem and related risks & challenges.

“CloudSigma and the Challenges of Big Science in the Cloud”

10:20-10:40 Muharem Hrnjadovic (RackSpace)

An overview of key capabilities of cloud based infrastructures like OpenStack and challenging scenarios.

10:40-10:45 All

Q&A session

 

EU Report: “Advances in Clouds: Report from the Cloud Computing Expert Working Group”

# Introduction
This is a brief summary of the [EU Report: ”Advances in Clouds: Report from the CLOUD Computing Expert Working Group”](http://cordis.europa.eu/fp7/ict/ssai/docs/future-cc-2may-finalreport-experts.pdf). In this report a set of appointed cloud experts have studied the current cloud computing landscape and have come up with a set of recommendations for advancing the future cloud. They note a large number of challenges present in cloud computing today which, where tackled, provide an opportunity for European innovators. Quoting the report: *”Many long-known ICT challenges continue and may be enhanced in a CLOUD environment. These include large data transmission due to inadequate bandwidth; proprietarily of services and programming interfaces causing lock-in; severe problems with trust, security and privacy (which has legal as well as technical aspects); varying capabilities in elasticity and scaling; lack of interoperation interfaces between CLOUD (resources and services) offerings and between CLOUDs and other infrastructures and many more.”*

They see that performance aspects of the cloud are as pressing as ever and require tackling. *”What is more, spawning (scaling) of objects – no matter whether for the purpose of horizontal or vertical scale – is thereby still slow in modern CLOUD environments and therefore also suboptimal, as it has to take a degree of lag (and hence variance) into account.”*

As ever, the topics of **SLAs and QoS**, which provide aspects of **dependability and transparency** to clients, arise: *”lacking quality of service control on network level, limitations of storage, consistency management.”* The worry here is that *”If the QoS is only observable per resource instance, instead of per user, some users will not get the quality they subscribed to.”*
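To make this concern concrete, here is a minimal sketch (not taken from the report; all names, values and thresholds are invented) of aggregating per-instance QoS samples into per-user views, so that SLA compliance is checked per subscriber rather than per resource instance:

```python
# Minimal sketch (hypothetical data): aggregate per-instance QoS samples into
# per-user views so SLA compliance is checked per subscriber, not per instance.
from collections import defaultdict
from statistics import mean

# (user, instance) -> observed request latencies in ms
samples = {
    ("alice", "vm-1"): [200, 210, 400],  # degraded instance
    ("alice", "vm-2"): [35, 38, 41],     # looks fine in isolation
    ("bob",   "vm-3"): [20, 22, 25],
}

# Per-user latency targets (ms) the customers subscribed to
sla_latency_ms = {"alice": 100, "bob": 50}

def per_user_qos(samples):
    """Merge per-instance samples into one series per user."""
    per_user = defaultdict(list)
    for (user, _instance), values in samples.items():
        per_user[user].extend(values)
    return per_user

for user, values in per_user_qos(samples).items():
    avg = mean(values)
    status = "within" if avg <= sla_latency_ms[user] else "violates"
    print(f"{user}: average latency {avg:.1f} ms -> {status} subscribed SLA")
```

Looking only at vm-2, alice's service appears healthy; only the per-user view reveals that she is not getting the quality she subscribed to.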

They say that **interoperability and portability** are still challenges: “In general there is a lack of support for porting applications (source code) with respect to all aspects involved in the process”. Due to the demand for cloud services, “the need for resources will exceed the availability of individual providers”, yet “current federation and interoperability support is still too weak to realise this”.

More related to **business models**, there is “generally insufficient experience and expertise about the relationship between pricing, effort and benefit: most users cannot assess the impact of moving to the CLOUD”.

Many of the topics highlighted in this report are themes being pursued here at the **ICCLab**, especially in the areas of performance, workload management, dependability and interoperability.

# Identified Essential Research Issues
From the report the following key research issues and challenges were noted.

– **Business and cost models**
– Accounting, billing, auditing: pricing models and appropriate dynamic systems are required, including monitoring of resources and charging for them with associated audit functions. This should ideally be supported by integrated quota management for both provider and user, to help keep within budget limits (see the sketch after this list).
– Monitoring: common monitoring standards and methods are required to allow user choice over offerings and to match user expectations in billing. There are issues in managing multi-tenancy accounting, real time monitoring and the need for feedback from expectations depending on resource usage and costs.
– Expertise: The lack of expertise requires research to develop best practice. This includes user choices and their effect on costs and other parameters and the impact of CLOUDs on an ICT budget and user experience. Use cases could be a useful tool.
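As one way to picture the integrated quota management mentioned above, the following sketch (hypothetical rates and names, not taken from the report) meters charges against a user-side budget limit:

```python
# Minimal sketch (hypothetical rates): meter resource usage, charge for it,
# and enforce a user-side budget quota before accepting a new charge.
from dataclasses import dataclass, field

PRICE_PER_CPU_HOUR = 0.05   # assumed flat rate, EUR
PRICE_PER_GB_MONTH = 0.02   # assumed flat rate, EUR

@dataclass
class Account:
    budget_limit: float                       # quota agreed with the user
    charges: list = field(default_factory=list)

    @property
    def spent(self) -> float:
        return sum(amount for _, amount in self.charges)

    def charge(self, description: str, amount: float) -> bool:
        """Record a charge unless it would exceed the budget quota."""
        if self.spent + amount > self.budget_limit:
            print(f"quota exceeded: rejecting '{description}' ({amount:.2f} EUR)")
            return False
        self.charges.append((description, amount))
        return True

acct = Account(budget_limit=10.0)
acct.charge("compute: 100 CPU hours", 100 * PRICE_PER_CPU_HOUR)
acct.charge("storage: 200 GB-month", 200 * PRICE_PER_GB_MONTH)
acct.charge("compute: 500 CPU hours", 500 * PRICE_PER_CPU_HOUR)  # breaks the quota
print(f"total billed: {acct.spent:.2f} EUR of {acct.budget_limit:.2f} EUR budget")
```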

– **Data management and handling**
– Handling of big data across large scales;
– Dealing with real-time requirements – particularly streamed multimedia;
– Distribution of a huge amount of data from sensors to CLOUD centres;
– Relationship to code – there is a case for complete independence and for mobile code, i.e. moving the code to the (bulky) data;
– Types of storage & types of data – there is a need for storage appropriate to the access (and digital preservation) pattern required. Different kinds of data may optimally utilise different kinds of storage technology (see the sketch after this list). Issues of security and privacy are also factors.
– Data structuring & integrity – the problem is to have the representation of the real world encoded appropriately inside the computer – and to validate the stored representation against the real world. This takes time (constraint handling) and requires elastic scalable solutions for distributed transactions across multiple nodes;
– Scalability & elasticity are needed in all aspects of data handling to deal with ‘bursty’ data, highly variable demand for access for control and analysis and for simulation work including comparing analytical and simulated representations;
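A tiny illustration of matching data to storage technology by access and preservation pattern; the tiers and thresholds below are purely hypothetical:

```python
# Minimal sketch (hypothetical tiers and thresholds): pick a storage
# technology from the expected access and preservation pattern.
def choose_storage(reads_per_day: int, latency_sensitive: bool,
                   retention_years: int) -> str:
    if latency_sensitive and reads_per_day > 1000:
        return "SSD block storage"          # hot, random-access data
    if reads_per_day > 10:
        return "replicated object storage"  # warm, shared data sets
    if retention_years >= 10:
        return "archival / tape tier"       # digital preservation
    return "standard object storage"

print(choose_storage(reads_per_day=50000, latency_sensitive=True, retention_years=1))
print(choose_storage(reads_per_day=0, latency_sensitive=False, retention_years=20))
```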

– **Resource awareness/Management**

– Generic ways to define characteristics: there is a need for a metadata architecture within a common framework (with internal standards) to describe all the components of a system from end-user to CLOUD centre (a small illustration follows this list);
– Way to exploit these characteristics (programmatically, resource management level): the way in which software (dominantly middleware but also, for example, user interface management) interacts with and utilises the metadata is the key to elasticity, interoperation, federation and other aspects;
– Relates to programmability & resource management: there are issues with the systems development environment such that the software generated has appropriate interfaces to the metadata;
– Depending on the usage, “resources” may incorporate other services;
– Virtualisation – by metadata descriptions utilised by middleware –
– Of all types of devices
– Of network
– Of distributed infrastructures
– Of distributed data / files / storage
– Deal with scale and heterogeneity: the metadata has to have rich enough semantics;
– Multidimensional, dynamic and large scale scheduling respecting timing and QoS;
– Efficient scale up & down: this requires dynamic rescheduling based on predicted demand;
– Allow portable programmability: this is critical to move the software to the appropriate resource;
– Exploit specifics on all levels: high performance and high throughput applications tend to have specific requirements which must be captured by the metadata;
– Energy efficient management of resources: in the ‘green environment’ the cost of energy is not only financial and so good management practices – another factor in the scheduling and optimisation of resources – have to be factored in;
– Resource consumption management: clearly managing the resources used contributes to the expected cost savings in an elastic CLOUD environment;
– Advanced reservation: this is important for time- or business-critical tasks and a mechanism is required;
– Fault tolerance, resilience, adaptability: it is of key importance to maintain the SLA/QoS
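As a small illustration of the first two points – a generic metadata description of resources and middleware that exploits it – the sketch below defines hypothetical resource characteristics and a naive matcher that prefers energy-efficient offers. A real scheduler would of course be multidimensional and dynamic, as the list above notes:

```python
# Minimal sketch (all fields hypothetical): generic resource metadata plus a
# naive matcher that exploits it for placement decisions.
from dataclasses import dataclass

@dataclass
class ResourceMeta:
    name: str
    cpus: int
    memory_gb: int
    network_gbps: float
    energy_class: str        # "A" (efficient) .. "C"

@dataclass
class Requirements:
    cpus: int
    memory_gb: int
    min_network_gbps: float = 0.0

def schedule(req: Requirements, offers: list[ResourceMeta]) -> ResourceMeta | None:
    """Pick the feasible offer with the best energy class, then the smallest one."""
    feasible = [r for r in offers
                if r.cpus >= req.cpus
                and r.memory_gb >= req.memory_gb
                and r.network_gbps >= req.min_network_gbps]
    return min(feasible, key=lambda r: (r.energy_class, r.cpus), default=None)

offers = [
    ResourceMeta("small-a", 4, 16, 1.0, "A"),
    ResourceMeta("big-b", 32, 128, 10.0, "B"),
]
print(schedule(Requirements(cpus=2, memory_gb=8), offers))
```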

– **Multi-tenancy impact**
– Isolate performance, isolate network slices: this is needed to manage resources and security;
– No appropriate programming mechanism: this requires research and development to find an appropriate systems development method, probably utilising service-oriented techniques;
– Co-design of management and programming model: since the execution of the computation requires management of the resources co-design is an important aspect requiring the programmer to have extensive knowledge of the tools available in the environment;

– **Programmability**

– Restructure algorithms / identify kernels: in order to place them in the new systems development context – this is re-use of old algorithms in a new context;
– Design models (reusability, code portability, etc.): to provide a systematic basis for the above;
– Control scaling behaviour (incl. scale down, restrict behaviour etc.): this requires to be incorporated in the parameters of the metadata associated with the code;
– Understand and deal with the interdependency of (different) applications with the management of large-scale environments;
– Different levels of scale: this is important depending on the application requirements and the characteristics of different scales need to be recorded in the metadata;
– Integrate monitoring information: dynamic re-orchestration and execution time changes to maintain SLA/QoS require the monitoring information to be available to the environment of the executing application;
– Multi-tenancy: as discussed above this raises particular aspects related to systems development and programmability;
– Ease of use: the virtualised experience of the end-user depends on the degree with which the non-functional aspects of the executing application are hidden and managed autonomically;
– Placement optimisation algorithms for energy efficiency, load balancing, high availability and QoS: this is the key aspect of scheduling resources for particular executing applications to optimise resource usage within the constraints of SLA and QoS (a small illustration follows this list);
– Elasticity, horizontal & vertical: as discussed before this feature is essential to allow optimised resource usage maintaining SLA/QoS;
– Relationship between code and data: the greater the separation of code and data (with the relationships encoded in metadata) the better the optimisation opportunities. Includes aspects of external data representation;
– Consider a wide range of device types and according properties, including energy efficiency etc.; but also wide range of users & use cases (see also business models): this concerns the optimal use of device types for particular applications;
– Personalisation vs. general programming: as programming moves from a ’cottage knitting’ industry to a managed engineering discipline the use of general code modules and their dynamic recomposition and parameterisation (by metadata) will increasingly become the standard practice. However this requires research in systems development methods including requirements capture and matching to available services.
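As one concrete, and deliberately naive, reading of the placement-optimisation point above, the following first-fit-decreasing sketch packs workloads onto as few hosts as possible, trading placement against energy use. Sizes and capacities are hypothetical, and a real optimiser would also weigh QoS, availability and load balancing:

```python
# Minimal sketch (hypothetical sizes): first-fit-decreasing placement that
# packs VM workloads onto as few hosts as possible (fewer active hosts ~ less energy).
def place(vm_sizes: list[int], host_capacity: int) -> list[list[int]]:
    hosts: list[list[int]] = []
    for size in sorted(vm_sizes, reverse=True):      # biggest VMs first
        for host in hosts:
            if sum(host) + size <= host_capacity:    # first host with room
                host.append(size)
                break
        else:
            hosts.append([size])                     # "power on" a new host
    return hosts

placement = place([8, 2, 4, 7, 1, 6], host_capacity=10)
print(f"{len(placement)} active hosts: {placement}")
```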

– **Network Management**

– Guaranteeing bandwidth / latency performance, but also adjusting it on demand for individual tenants (elastic bandwidth / latency): this is a real issue for an increasing number of applications. It is necessary for the network to exhibit some elasticity to match that of the CLOUD centres. This may require network slices with adaptive QoS for virtualising the communication paths (see the sketch after this list);
– Compensating for off-line time / maintain mobile connectivity (internationally): intermittent mobile connectivity threatens integrity in computer systems (and also allows for potential security breaches). This relates to better mechanisms for maintaining sessions / restarting sessions from a checkpoint;
– Isolating performance, connectivity etc.: there is a requirement for the path from end-user to CLOUD to be virtualised but maintaining the QoS and any SLA. This leads to intelligent diagnostics to discover any problems in connectivity or performance and measures to activate autonomic processes to restore elastically the required service.
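A minimal sketch of the “elastic bandwidth” idea above: a per-tenant token bucket whose rate can be adjusted at runtime. All numbers are hypothetical, and real network slicing involves far more than rate limiting:

```python
# Minimal sketch (hypothetical numbers): per-tenant token bucket whose refill
# rate can be changed on demand, approximating an elastic bandwidth share.
import time

class TokenBucket:
    def __init__(self, rate_mbit: float, burst_mbit: float):
        self.rate = rate_mbit          # refill rate (Mbit/s)
        self.capacity = burst_mbit     # maximum burst size (Mbit)
        self.tokens = burst_mbit
        self.last = time.monotonic()

    def set_rate(self, rate_mbit: float) -> None:
        """Elastic adjustment: change the tenant's bandwidth on demand."""
        self.rate = rate_mbit

    def allow(self, size_mbit: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_mbit:
            self.tokens -= size_mbit
            return True
        return False

tenant = TokenBucket(rate_mbit=100, burst_mbit=50)
print(tenant.allow(40))   # True: within the allowed burst
print(tenant.allow(40))   # False: bucket nearly empty until it refills
tenant.set_rate(1000)     # scale the tenant's share up
```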

– **Legalisation and Policy**
– Privacy concerns: especially in international data transfers from user to CLOUD;
– Location awareness: required to certify conformity with legislation;
– Self-destructive data: if one-off processing is allowed;

– **Federation**
– Portability, orchestration, composition: this is a huge and important topic requiring research into semi-automated systems development methods allowing execution-time dynamic behaviour;
– Merged CLOUDs: virtualisation such that the end-user does not realise the application is running on multiple CLOUD providers’ offerings;
– Management: management of an application in a federated environment requires solutions from the topics listed above but with even higher complexity;
– Brokering algorithms: these are needed to find the best services given the user requirements and the resource provision (see the sketch after this list);
– Sharing of resources between CLOUD providers: this mechanism would allow CLOUD providers to take on user demands greater than their own capacity by expanding elastically (with appropriate agreements) to utilise the resources of other CLOUD suppliers;
– Networking in the deployment of services across multiple CLOUD providers: this relates to the above and also to the Networking topic earlier;
– SLA negotiation and management between CLOUD providers: this is complex with technical, economic and legal aspects;
– Support for context-aware services: is necessary for portability of (fragments of) an application across multiple CLOUD service providers;
– Common standards for interfaces and data formats: if this could be achieved then federated CLOUDs could become a reality;
– Federation of virtualized resources (this is not the same as federation of CLOUDs!) is required to allow selected resources from different CLOUD suppliers to be utilised for a particular application or application instance. It has implications for research in:
– Gang-Scheduling
– End-to-End Virtualisation
– Scalable orchestration of virtualized resources and data: co-orchestration is highly complex and requires earlier research on dynamic re-orchestration/composition of services;
– CLOUD bursting, replication & scale of applications across CLOUDs: this relies on all of the above.
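As a toy illustration of the brokering algorithms mentioned above, the sketch below ranks hypothetical provider offers against user requirements on capacity, price and preferred region; the names, prices and ranking rule are all invented:

```python
# Minimal sketch (hypothetical offers and rule): a toy broker that returns
# feasible provider offers, cheapest first, preferring the requested region.
offers = [
    {"provider": "cloud-a", "region": "EU", "cpus": 64,  "price_per_hour": 3.0},
    {"provider": "cloud-b", "region": "US", "cpus": 128, "price_per_hour": 2.5},
    {"provider": "cloud-c", "region": "EU", "cpus": 32,  "price_per_hour": 1.5},
]

def broker(offers, required_cpus, preferred_region, max_price):
    """Filter out infeasible offers, then rank by region preference and price."""
    feasible = [o for o in offers
                if o["cpus"] >= required_cpus and o["price_per_hour"] <= max_price]
    return sorted(feasible,
                  key=lambda o: (o["region"] != preferred_region, o["price_per_hour"]))

for offer in broker(offers, required_cpus=48, preferred_region="EU", max_price=3.0):
    print(offer["provider"], offer["price_per_hour"])
```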

– **Security**
– Process applications without disclosing information (homomorphic encryption): this offers some chance of preserving security (and privacy);
– Static & dynamic compliance: this requires the requirements for compliance to be available as metadata to be monitored by the running application;
– Interoperability, respectively common standards for service level and security: this relates to standard interfaces since the need is to encode in metadata;
– Security policy management: policies change with the perceived threats and since the CLOUD environment is so dynamic policies will need to also be dynamic.
– Detection of faults and attacks: in order to secure the services, data and resources, threats need to be detected early (relates to reliability);
– Isolation of workloads: particular workloads of high security may require isolation and execution at specific locations with declared security policies that are appropriate;

ICCLab Research

The InIT Cloud Computing Lab adopts a comprehensive and holistic approach to science. The approach is based on three driving principles, namely **Scientific Foundation**, **Strategic Impact**, and **Knowledge Transfer**. The entire scientific work of the ICCLab is aligned and directed along these inter-linked dimensions.

[Read more about the ICCLab’s approach to research and education](http://www.cloudcomp.ch/research/).
