Month: January 2014

1st European Conference on the Future Internet – ECFI Brussels, 2-3 April 2014

Early bird registration deadline: 1 February 2014
The 1st European Conference on the Future Internet (ECFI) aims at bringing together key stakeholders to discuss how Europe can achieve global leadership in ICT by 2020 through innovative Internet technologies. In this context, industry and political stakeholders will discuss central socio-economic and technological topics of Future Internet infrastructures and services in Europe.
The event will present cutting-edge research results on the European Internet infrastructures and services of the future, which have been developed in the Future Internet Public-Private Partnership (FI-PPP). The FI-PPP is a European programme for Internet-enabled innovation aiming to accelerate the development and adoption of Future Internet technologies in Europe.
Target audience
ECFI is particularly targeting political stakeholders, including representatives of public bodies and private associations, as well as ICT industry stakeholders, including innovation managers and CIOs from large and medium-sized companies, who are involved in shaping the networks and services of the future.
Key benefits
Attending the event will give participants a number of key benefits, including:
The opportunity to discuss with peers from industry and the political domain how Europe can achieve global leadership in ICT by 2020
A clear understanding of how FI-PPP initiatives, projects and actions in FP7 and Horizon 2020 will achieve the goals of the Innovation Union
Insights on the latest developments in Future Internet technologies
Direct access to key players in European Future Internet initiatives who are the drivers of the Internet infrastructures and services of tomorrow
A unique opportunity to discuss major Internet-related topics with political and industry stakeholders
Europe’s competitive advantage in the Future Internet
ECFI will cover a wide range of Future Internet-related aspects of direct relevance for Europe’s competitive advantage in the short- to mid-term, which will be discussed in high-level plenary sessions and three parallel tracks:
The role of the Future Internet for innovation in Europe
Privacy and data protection
Crossing the chasm – moving European R&D to the market
A European roadmap for Future Internet infrastructures
Virtual power plants – Smart energy grids of the future
Future Internet – Enabling opportunities for vertical application sectors
Business models for the use of Future Internet technologies
Network infrastructures for Smart Cities
Future Internet at the crossroads of content, media, networks and creativity
Future Internet – Smart Products for a Smart Digital Europe
Speakers
A line-up of high-level speakers will share their insights at the event, including:
Neelie Kroes, Vice-President of the European Commission for the Digital Agenda
Mario Campolargo, Director ‘Net Futures’, European Commission
Malcolm Harbour, MEP, Chair of the EP Committee on the Internal Market and Consumer Protection
Luigi Gambardella, Chairman of ETNO
Patrice Chazerand, Director, DIGITALEUROPE
Event format
The event will consist of a conference and an exhibition.
At the plenary sessions of the conference, keynote speakers will share and discuss their insights on how the Future Internet can sustain Europe’s competitiveness by generating innovation across different sectors. The parallel conference sessions will explore a variety of Future Internet-related topics. The exhibition will showcase results from the different projects of the Future Internet PPP, giving participants the unique experience of getting hands-on with tomorrow’s technologies and talking directly with the people who are developing them.
Further information and registration
Further information and registration are available on the event website at http://www.ecfi.eu/Brussels2014
Early bird registration deadline: 1 February 2014
The number of participants is limited to 250.
Organiser
Future Internet Public-Private Partnership (FI-PPP)
represented by Mr Ilkka Lakaniemi, FI-PPP Programme Chairman
Contact
Website: www.fi-ppp.eu
FI-PPP on Twitter: @FI_PPP
Event hashtag: #ECFI1
Acknowledgement
The FI-PPP Programme receives funding from the European Commission under the Seventh Framework Programme (FP7). The European Commission has no responsibility for the contents of this publication.

Mirantis Fuel – OpenStack installation for Noddy

While we have lots of experience working with cloud automation tools for OpenStack, it has taken us a little while to get around to checking out Fuel from Mirantis. Here, we give a short summary of our initial impressions of this very interesting tool.


Martin Blöchlinger

Martin Blöchlinger is a researcher at the ICCLab.

After an IT apprenticeship and an additional year of programming experience, he decided to study at the ZHAW. In summer 2014 he graduated (Bachelor of Science ZFH in Informatics) and a few weeks later started work at the InIT in the focus area ‘Distributed Software Systems’. He is currently working on a project in the ‘Cloud-Native Applications’ initiative.

Arcus – Understanding energy consumption in the cloud

Arcus is an internally funded project which focuses on correlating energy consumption with cloud usage information, enabling a cloud provider to understand in detail how its energy is consumed. As energy accounts for an increasing share of a cloud provider’s operating costs, this issue is growing in importance.

The work focuses on correlating cloud usage information obtained from OpenStack (primarily via Ceilometer) with energy consumption information obtained from the devices, using a mix of internal readings and wireless metering infrastructure. It involves determining which users of the cloud stack are consuming energy at any point in time, by means of fine-grained monitoring of the energy consumption in the system coupled with information on how the systems are being used.
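As a simple illustration of the kind of correlation involved, the sketch below apportions a host’s measured energy to the VMs running on it, in proportion to the CPU time each VM consumed over the same interval. This is a minimal sketch, not the actual Arcus algorithm: the function name and the proportional model are assumptions, and the usage samples are assumed to have already been retrieved (e.g. from Ceilometer) along with energy readings from the metering infrastructure.

```python
from collections import defaultdict

def apportion_energy(cpu_samples, host_energy_wh):
    """Apportion each host's measured energy (Wh) to the VMs running on it,
    proportionally to the CPU time each VM consumed in the same interval.

    cpu_samples: list of (host, vm_id, cpu_seconds) tuples for one interval,
                 e.g. derived from Ceilometer 'cpu' meter deltas.
    host_energy_wh: dict mapping host -> energy consumed in that interval (Wh),
                    e.g. read from IPMI or wireless power meters.
    """
    cpu_per_host = defaultdict(float)
    for host, _vm, cpu in cpu_samples:
        cpu_per_host[host] += cpu

    vm_energy = defaultdict(float)
    for host, vm, cpu in cpu_samples:
        total = cpu_per_host[host]
        if total > 0:
            vm_energy[vm] += host_energy_wh.get(host, 0.0) * (cpu / total)
    return dict(vm_energy)

# Example: two VMs on one host that drew 120 Wh over the interval.
samples = [("compute-1", "vm-a", 300.0), ("compute-1", "vm-b", 100.0)]
print(apportion_energy(samples, {"compute-1": 120.0}))
# -> {'vm-a': 90.0, 'vm-b': 30.0}
```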

The output of the project will be a tool which enables an OpenStack provider to see this relationship: it could be used by a public cloud provider to understand how its tariffing structure relates to its costs, or by a private cloud operator to identify which internal applications or departments are most responsible for energy consumption.

The project started in January 2014.

Energy Efficiency and Cloud Computing – The Theme

The primary focus of the Energy Theme is on reducing the energy consumption of cloud computing resources. As compute nodes consume most of the energy in cloud computing systems, work to date has focused on reducing the energy consumed by compute loads, particularly within the OpenStack context. As servers become increasingly instrumented, however, there is clear potential in understanding energy consumption at finer granularity, which can ultimately lead to energy efficiencies and cost savings.

Architecture

In the current work, the primary mechanism to achieve energy efficiencies is load consolidation combined with power control of servers. This could be augmented with managing server CPU power states, but it remains to be seen if this will lead to significant power savings. Another tool to achieve energy efficiencies is to add elastic load when the resources are underutilized – this does not reduce the overall energy consumption per se, but rather enables providers to get more bang for their energy buck.

[Figure: energy-arch-v1 – current architecture of the Cloud Energy Efficiency Subsystem]

The current architecture of the Cloud Energy Efficiency Subsystem is shown above with the components performing the following functions:

  • an energy monitoring component: this obtains information on the energy consumption of the entire system – it may also aggregate this information rather than working with highly granular data for each node;
  • a load characterization component: this component primarily uses Ceilometer data to understand what is going on in the cloud – it builds an abstraction of the usage of the system over different timescales and, in particular, determines what level of burstiness exists in the load patterns;
  • a load consolidation mechanism: this takes the information on the system state and identifies where load consolidation can be performed – it then issues a set of live migration instructions to the cloud to perform the consolidation. In general, it is necessary to add filters to support different hypervisors, bare metal servers, etc., which makes it more complex;
  • a physical server manager: this turns servers off and on as necessary – it takes input from the load characterization component to determine how much spare capacity to keep in the system to deal with variations in demand.

The specific interactions between these components are still evolving, as this is a work in progress.
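To make the consolidation mechanism more concrete, the following is a minimal sketch of a single consolidation pass over per-host utilisation data. It is not the subsystem’s actual implementation: the helper callables (get_vms, live_migrate, power_off) are hypothetical stand-ins for the monitoring, OpenStack and power-control integrations, and the thresholds are arbitrary.

```python
def consolidate(hosts, get_vms, live_migrate, power_off,
                low_util=0.2, target_util=0.7):
    """One pass of a load consolidation loop (illustrative sketch).

    hosts:        dict mapping host name -> current CPU utilisation (0..1)
    get_vms:      callable(host) -> list of VM ids running on that host
    live_migrate: callable(vm, destination_host) -> None (e.g. wraps Nova)
    power_off:    callable(host) -> None (the physical server manager)
    """
    # Lightly loaded hosts are candidates to be emptied and switched off.
    sources = [h for h, u in hosts.items() if 0 < u <= low_util]
    # Hosts with headroom (and not themselves candidates) can receive VMs.
    targets = sorted((h for h, u in hosts.items()
                      if u < target_util and h not in sources),
                     key=lambda h: hosts[h])

    for src in sources:
        for vm in get_vms(src):
            if not targets:
                return  # no spare capacity left; keep remaining hosts running
            dst = targets[0]
            live_migrate(vm, dst)
            # Crude bookkeeping; a real implementation would re-read the
            # load characterization data after each migration.
            hosts[dst] += 0.05
            if hosts[dst] >= target_util:
                targets.pop(0)
        power_off(src)  # the host is now empty and can be switched off
```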

Initiatives

At present, the theme comprises two initiatives:

Related Projects

People

Cloud-Native Applications

This page is kept for archival purposes. Please navigate to our new site: blog.zhaw.ch/splab.

Overview

Since Amazon started offering cloud services (AWS) in 2006, cloud computing in all its forms has become ever more popular and has steadily matured. A lot of experience has been collected, and today a large number of companies run their applications in the cloud, either for themselves or to offer services to their customers. The basic characteristics of this paradigm [1] offer capabilities and possibilities to software applications that were unthinkable before, and they are the reason why cloud computing was able to establish itself the way it did.

What is a Cloud-Native Application?

In a nutshell, a cloud-native application (CNA) is a distributed application that runs on a cloud infrastructure (irrespective of infrastructure or platform level) and is at its core scalable and resilient as well as adapted to its dynamic and volatile environment. These core requirements are derived from the essential characteristics that every cloud infrastructure must by definition possess, and from user expectations. It is of course possible to run an application in the cloud that doesn’t meet all of these criteria; in that case it would be described as a cloud-aware or cloud-ready application rather than a cloud-native one. Through a careful cloud-native application design based on composed stateful and stateless microservices, the hosting characteristics can be exploited so that scalability and elasticity do not translate into significantly higher cost.
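As a minimal illustration of the stateless part of such a design, the sketch below shows a service that keeps no state of its own: all state lives in an external store, so any number of identical instances can run behind a load balancer, and individual instances can be killed and replaced without data loss. Flask, Redis and the endpoint names are assumptions chosen purely for illustration.

```python
# Minimal sketch of a stateless service: the process holds no state of its
# own, so instances can be added, removed or restarted freely.
import os
from flask import Flask, jsonify
import redis

app = Flask(__name__)
# All state is externalised to a backing service whose address comes from the
# environment, since the instance itself may appear and disappear at any time.
store = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

@app.route("/hits")
def hits():
    # The counter survives even if this instance is killed and replaced.
    return jsonify(count=int(store.incr("hits")))

@app.route("/health")
def health():
    # Lets the platform (e.g. Kubernetes) detect and replace failed instances.
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```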

Objectives

  • Provide architecture and design guidelines for cloud-native applications, based on lessons learned from existing applications and on established best practices (Cloud-Application Architecture Patterns).
  • Evaluate microservice technology mappings, related to container compositions, but also other forms of microservice implementations.
  • Provide recommendations for the operation of cloud-native applications (Continuous Delivery, Scaling, Monitoring, Incident Management, …)
  • Provide economic guidelines on how to operate cloud-native applications (feasibility, service model (mix), microservice stacks, containers, …)
  • Investigate, develop and establish a set of open source technologies, tools and services to build, operate and leverage state-of-the-art cloud-native applications.
  • Support SMEs in building their own cloud-native solutions or in re-engineering and migrating existing applications to the cloud.
  • Ensure that all new applications developed within the SPLab and the ICCLab are cloud-native.

Relevance to current and future markets

– Business impact

  • Using cloud infrastructures (IaaS/PaaS) it is possible to prototype and test new business ideas quickly and without spending a lot of money up-front.
  • An application running on a cloud infrastructure – if designed in a cloud-native way – only ever uses as many resources as it needs. This avoids under- or over-provisioning of resources and ensures cost savings.
  • Developing software with services offered by cloud infrastructure and platform providers enables even a small team to create highly scalable applications serving a large number of customers.
  • Developing cloud-native applications with a microservice architecture style allows for shorter development cycles, which reduces the time needed to adapt to customer feedback, new customer requirements and changes in the market.

– Correlation to industry forecasts

  • Cloud-native applications are tightly bound to cloud computing, specifically to IaaS and PaaS, since these technologies are used to develop and host applications which, in the best case, are cloud-native. Wherever these technologies stand in the Gartner Hype Cycle, cloud-native applications can be considered to be at the same stage.
  • The Cloud Native Computing Foundation (CNCF.io) and other industry groups have been formed to shape the evolution of technologies that are container-packaged, dynamically scheduled and microservices-oriented.
  • Container composition languages and tools are on the rise. A careful evaluation and assessment of technologies, lock-ins and opportunities is required. The CNA initiative brings sufficient academic rigor to afford long-term perspectives on these trends.

Relevant Standards and Articles

Architecture

Cloud-native applications are typically designed as distributed applications with a shared-nothing architecture, composed of autonomous, stateless services that scale horizontally and communicate asynchronously via message queues. The focus lies on the scalability and resilience of the application. The architectural style describing how to design such applications is known as microservices; while this is by no means the only way to architect cloud-native applications, it represents the current state of the art.
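The sketch below illustrates this asynchronous, shared-nothing style with a producer and a stateless worker communicating through a message queue; any number of worker instances can consume from the same queue, and failed workers can simply be replaced. RabbitMQ and the pika client are assumed here only as one possible example, and the queue name and handler are illustrative.

```python
# Sketch of asynchronous communication between microservices via a queue.
import pika

def publish(task: str) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="tasks", durable=True)  # queue survives broker restarts
    channel.basic_publish(exchange="", routing_key="tasks", body=task.encode())
    conn.close()

def work() -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="tasks", durable=True)

    def handle(ch, method, properties, body):
        print("processing", body.decode())
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

    # Each message is delivered to exactly one of the (possibly many) workers.
    channel.basic_consume(queue="tasks", on_message_callback=handle)
    channel.start_consuming()
```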

Generic CNA Architecture

The following architecture was initially analysed, refined and realised by the SPLab CNA initiative team with a business application (Zurmo CRM), based on the CoreOS/fleet stack as well as on Kubernetes.

More recent work includes a cloud-native document management architecture with stateful and stateless microservices implemented as composed containers with Docker Compose, Vamp and Kubernetes.

Articles and Publications

G. Toffetti, S. Brunner, M. Blöchlinger, J. Spillner, T. M. Bohnert: Self-managing cloud-native applications: design, implementation and experience. FGCS special issue on Cloud Incident Management, 2016.

S. Brunner, M. Blöchlinger, G. Toffetti, J. Spillner, T. M. Bohnert, “Experimental Evaluation of the Cloud-Native Application Design”, 4th International Workshop on Clouds and (eScience) Application Management (CloudAM), Limassol, Cyprus, December 2015. (slides; author version; IEEExplore/ACM DL: to appear)

Blog Posts

Note: Latest posts are at the bottom.

Presentations

Open Source Software

Contact

Josef Spillner: josef.spillner(at)zhaw.ch

Footnotes

1. On-Demand Self-Service, Broad Network Access, Resource Pooling, Rapid Elasticity and Measured Service, as defined in the NIST Definition of Cloud Computing.