Category: Projects

An overview of networking in Rancher using Cattle

As noted elsewhere, we’re looking at Rancher in the context of one of our projects. We’ve been doing some work on enabling it to work over heterogeneous compute infrastructures – one of which could be an ARM-based edge device and another a standard x86_64 cloud execution environment. Some of our colleagues were asking how the networking works – we had not looked into this in much detail, so we decided to find out. It turns out it’s pretty complex.

ElasTest

An elastic platform for testing large and complex distributed software systems.

Summary:

The most limiting factor in software development today is validation, which typically requires very costly and complex testing processes. ElasTest aims to significantly improve the efficiency and effectiveness of the testing process. The project will build an elastic platform that eases the testing phase for large and distributed software systems, in order to reduce the time-to-market of software projects and increase their quality, quality of service (QoS) and quality of experience (QoE). ElasTest (Project ID: 731535) will also develop a novel test orchestration theory and toolbox enabling the creation of complex test suites as the composition of simple testing units (T-Jobs).

ElasTest started in January 2017 and its consortium comprises 10 partners, including IBM, ATOS and the Technical University of Berlin, and is coordinated by Universidad Rey Juan Carlos.

In this project, ZHAW will work on the development of the ElasTest Platform Manager (EPM) and the ElasTest Test Orchestration and Recommendation Manager (TORM).

The EPM is the interface between the ElasTest testing components (e.g. the TORM, Test Support Services, etc.) and the cloud infrastructure where ElasTest is deployed. It must abstract the underlying cloud services so that ElasTest remains fully agnostic to them, and expose this abstraction to its users via software development kits and REST APIs. The objective of the EPM is to implement such a Platform Manager, enabling ElasTest to be deployed on a target cloud infrastructure (e.g. OpenStack, CloudStack, AWS, etc.).
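
To make the abstraction concrete, here is a minimal Python sketch of how a client might ask the EPM to allocate an execution environment without naming a specific cloud. The endpoint path, port and payload fields are our own invented placeholders for illustration, not the actual EPM API.

```python
# Minimal sketch: request an execution environment from the EPM without
# any cloud-specific details. URL, port and payload fields are hypothetical.
import requests

EPM_URL = "http://epm.example.org:8180"  # hypothetical EPM endpoint

def allocate_environment(name, cpu_cores, ram_mb):
    """Request an execution environment without naming a specific cloud."""
    payload = {
        "name": name,
        "resources": {"cpu": cpu_cores, "ram_mb": ram_mb},
        # No OpenStack/CloudStack/AWS specifics here: the EPM maps this
        # abstract request onto whatever infrastructure it manages.
    }
    resp = requests.post(f"{EPM_URL}/environments", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    env_id = allocate_environment("tjob-env-1", cpu_cores=2, ram_mb=4096)
    print("EPM allocated environment:", env_id)
```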

The TORM is the brain of ElasTest and the main entry point for developers. This requires identifying, specifying and implementing a number of interfaces through which the TORM exposes its capabilities to testers. These interfaces include the following:

  • SuT (Software under Test) specification. Developers need to be able to specify their SuT so that the TORM can execute tests against it (see the sketch after this list).
  • Engine Hosting. The TORM enables engines to be plugged as modules.
  • Development APIs and interfaces. The TORM is the main entry point for testers, which is why it needs to expose the appropriate interfaces and tools enabling developers to consume the different capabilities exposed by the platform.
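
To illustrate the composition idea behind T-Jobs, here is a small self-contained Python sketch in which simple testing units are chained into a suite and run against a declared SuT. All class and field names are invented for the example; they are not ElasTest’s actual interfaces.

```python
# Illustrative-only sketch of T-Job composition: simple testing units
# chained into a larger suite against a declared SuT.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SuT:
    """Software under Test, as a developer would declare it."""
    name: str
    endpoint: str

@dataclass
class TJob:
    """A simple testing unit: a named check run against the SuT."""
    name: str
    check: Callable[[SuT], bool]

@dataclass
class TestSuite:
    """A complex suite composed of simple T-Jobs."""
    sut: SuT
    tjobs: List[TJob] = field(default_factory=list)

    def run(self):
        for tjob in self.tjobs:
            ok = tjob.check(self.sut)
            print(f"{tjob.name} against {self.sut.name}: {'PASS' if ok else 'FAIL'}")

suite = TestSuite(
    sut=SuT(name="demo-service", endpoint="http://sut.example.org/api"),
    tjobs=[
        TJob("endpoint-declared", lambda s: s.endpoint.startswith("http")),
        TJob("name-nonempty", lambda s: bool(s.name)),
    ],
)
suite.run()
```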

Coordinator: Universidad Rey Juan Carlos (URJC)

Consortium:

  • Universidad Rey Juan Carlos (URJC)
  • Fraunhofer Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. (Fraunhofer)
  • Technische Universitaet Berlin (TUB)
  • Consiglio Nazionale Delle Ricerche (CNR)
  • Fundación Imdea Software (IMDEA)
  • Atos Spain S.A. (ATOS)
  • Zürcher Hochschule Für Angewandte Wissenschaften (ZHAW)
  • Tikal Technologies S.L. (NAEVATEC)
  • IBM Ireland Limited (IBM IRE)
  • Production Trade And Support of Machinable Products of Software and Informatics – Relational Technology A.E. (RELATIONAL)

Scale-UP

Title: SCALE-UP: Services for the Swiss Cloud for Academic and Learning Experts

Coordinator: SWITCH

Consortium: 

  • Zurich University of Applied Sciences, ZHAW
  • University of Applied Sciences and Arts, Northwestern Switzerland, FHNW
  • Fernfachhochschule Schweiz, FFHS
  • University of Berne, UNIBE
  • University of Basel, UNIBAS
  • Università della Svizzera Italiana, USI
  • École Polytechnique Fédérale de Lausanne, EPFL
  • University of St.Gallen, UNISG
  • FHS St.Gallen, FHSG

Funded by: CUS 2013-2016 P-2 from swissuniversities

SESAME – Small cEllS coordinAtion for Multi-tenancy and Edge services

SESAME targets innovations around three central elements in 5G: the placement of network intelligence and applications in the network edge through Network Functions Virtualisation (NFV) and Edge Cloud Computing; the substantial evolution of the Small Cell concept, already mainstream in 4G but expected to deliver its full potential in the challenging high dense 5G scenarios; and the consolidation of multi-tenancy in communications infrastructures, allowing several operators/service providers to engage in new sharing models of both access capacity and edge computing capabilities.

[Figure: SESAME high-level architecture]

SESAME proposes the Cloud-Enabled Small Cell (CESC) concept, a new multi-operator enabled Small Cell that integrates a virtualised execution platform (i.e., the Light DC) for deploying Virtual Network Functions (VNFs), supporting powerful self-x management and executing novel applications and services inside the access network infrastructure. The Light DC will feature low-power processors and hardware accelerators for time-critical operations and will form a highly manageable clustered edge computing infrastructure. This approach will allow new stakeholders to dynamically enter the value chain by acting as ‘host-neutral’ providers in high-traffic areas where densification of multiple networks is not practical. The optimal management of a CESC deployment is a key challenge of SESAME, for which new orchestration, NFV management, virtualisation of management views per tenant, self-x features and radio access management techniques will be developed.
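
As a toy illustration of the multi-tenancy aspect, the following Python sketch carves a single CESC’s capacity into per-tenant slices according to agreed shares. The numbers and the static proportional model are assumptions for illustration only; SESAME’s actual management is dynamic and far richer.

```python
# Toy sketch: one CESC's capacity shared among several operators
# according to agreed shares. All figures are invented.
CESC_CAPACITY = {"cpu_cores": 16, "access_mbps": 600}   # hypothetical cell
TENANT_SHARES = {"operator-a": 0.5, "operator-b": 0.3, "operator-c": 0.2}

def slice_capacity(capacity, shares):
    """Carve per-tenant slices out of the cell's total capacity."""
    return {
        tenant: {res: total * share for res, total in capacity.items()}
        for tenant, share in shares.items()
    }

for tenant, s in slice_capacity(CESC_CAPACITY, TENANT_SHARES).items():
    print(tenant, s)
```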

After designing, specifying and developing the architecture and all the involved CESC modules, SESAME will culminate in a prototype with all functionalities, proving the concept in relevant use cases. In addition, the CESC will be formulated consistently and synergistically with other 5G-PPP components through coordination with the corresponding projects.

Active ICCLab Research Initiatives

Given the topics that will be developed during the project execution, several ICCLab research initiatives will contribute to SESAME.

Project Facts

Horizon 2020 – Call: H2020-ICT-2014-2

Topic: ICT-14-2014

Type of action: RIA

Duration: 30 months

Start date: 1/7/2015

Project Title: SESAME: Small cEllS coordinAtion for Multi-tenancy and Edge services

FI-PPP XiFI (FI-Ops)

What is XIFI?
XIFI is a project of the European Public-Private-Partnership on Future Internet (FI-PPP) programme. In this context XIFI is the project responsible for the capacity building part of the programme.

XIFI will pave the way for the establishment of a common European market for large-scale trials for Future Internet and Smart Cities through the creation of a sustainable pan-European federation of Future Internet test infrastructures. The XIFI open federation will leverage existing public investments in advanced infrastructures and support advanced large-scale deployment of FI-PPP early trials across a multiplicity of heterogeneous environments and sector use cases that should be sustained beyond the FI-PPP programme.

For more details on what exactly the ICCLab contributes to this project, see: The FI-PPP ZuFi Node

Activities

  • Integrate infrastructure components with functional components that satisfy the interoperability requirements for the GEs of the FI-WARE core platform
  • Ensure that each infrastructure site is able to offer access to its services through open interfaces, as specified by the FI-PPP collaboration agreement terms and the new governance model agreed at the FI-PPP programme level
  • Support the infrastructure sites that exist in the early trial projects to adapt and upgrade their services and functionality
  • Support more of the existing infrastructures, identified by INFINITY, to adapt and upgrade their services and functionality
  • Leverage the experience and knowledge of federation of testbeds that has been gained by the FIRE initiative
  • Develop processes and mechanisms to validate that each site which joins the XIFI federation is able to provide the required services and thus support the early trials and phase III (expansion phase) of the programme (a minimal sketch of such a validation check follows this list)
  • Develop the necessary business incentives in order to lay the groundwork for a sustainable ecosystem beyond the horizon of the FI-PPP programme
  • Seek cooperation with the FI-PPP Programme Facilitation and Support project as well as the technology foundation, the usage areas and the early trials projects
  • Utilise, where appropriate, the infrastructure investments and project support provided by GÉANT and its connected NRENs and global partners who are involved in similar initiatives, particularly in North America (GENI) and Asia
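
As a minimal sketch of the validation idea mentioned above, the following Python snippet probes a fixed set of service endpoints on a candidate node and reports whether the baseline is met. The required-service list and URLs are hypothetical; the real criteria were defined by the project.

```python
# Sketch: check a candidate node exposes the services required to join
# the federation. Service names and URLs are invented placeholders.
import requests

REQUIRED_SERVICES = {
    "compute-api": "http://node.example.org:8774/",
    "identity-api": "http://node.example.org:5000/",
    "monitoring": "http://node.example.org:8080/health",
}

def validate_node():
    results = {}
    for service, url in REQUIRED_SERVICES.items():
        try:
            resp = requests.get(url, timeout=5)
            results[service] = resp.status_code < 500
        except requests.RequestException:
            results[service] = False
    return results

if __name__ == "__main__":
    results = validate_node()
    for service, ok in results.items():
        print(f"{service}: {'reachable' if ok else 'UNREACHABLE'}")
    print("node eligible:", all(results.values()))
```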

Main XIFI planned outcomes
Integration of selected infrastructures into a federated facility and its deployment, operation and support, to provide capacity to meet the needs of the FI-PPP phase II trials. Initially the federation of infrastructures will consist of five nodes located in five different European countries, enabled with the Technology Foundation services (FI-PPP project FI-WARE), to be ready before the start of FI-PPP phase III. This initial core backbone will be enlarged to 15 nodes during the second year with new local and regional infrastructures. The selection of appropriate infrastructures will be based on the work and the capacities repository (www.xipi.eu) of the Capacity Building support action (project INFINITY of FI-PPP phase I). Further relevant infrastructures originate in the new use case early trial projects, the FIRE facilities, Living Labs-related infrastructures, EIT ICT Labs-related infrastructures, and possibly others. This enlargement process will be key to establishing a marketplace for large-scale trial infrastructures.

Adaptation, upgrade and validation of selected infrastructures, through the creation of adaptation components that will enable infrastructure federation and monitoring and facilitate the deployment of FI-WARE GEs. The adaptation and update process will cover interoperability mechanisms at technical, operational, administrative and knowledge level, to be able to support the FI-WARE services with a guaranteed QoS.

A sustainable marketplace for infrastructures within the XIFI federation where they can be found, selected and used by the activities of the FI-PPP expansion (phase III) and in future initiatives beyond the FI-PPP programme. Special consideration will be given to Smart City initiatives, opening new business opportunities and providing sustainability beyond the XIFI project duration.

In addition, the following will also be achieved:

  • The ability to efficiently replicate deployment environments to extend and validate Use Case Trials and to support capacity sharing across Use Case Trials.
  • A pathway for innovators, involving and going beyond existing experimentations (e.g. FIRE and Living Labs), that enables large-scale trials to address business-related issues such as scalability and sustainability.
  • The provision of training, support and assistance, including integration guidelines and the promotion of best practice between large-scale trials and infrastructure nodes. These activities will facilitate the uptake and continued use of the FI-PPP results and will address infrastructure operators and other Future Internet stakeholders, including FI-PPP use case trials and Future Internet application developers.
  • The creation of business models for the sustainability of the XIFI federation, through engagement with stakeholders and elaboration of value propositions, which expand the federation and maximize the impact of the project.

XIFI will demonstrate and validate the capabilities of a unified market for Future Internet facilities, overcoming a number of limitations of the current set of Future Internet experimental infrastructures, namely fragmentation, interoperability and scalability. XIFI will achieve this vision by federating a multiplicity of heterogeneous environments – using the generic and specific enablers provided by FI-WARE and the FI-PPP use cases and early trials. XIFI will extend its effort to include the results of other Future Internet services and R&D work, such as the Future Internet Research and Experimentation (FIRE) Initiative.

To facilitate the establishment of an infrastructure market, the federation will be open to participation by any interested party fulfilling the technical and operational requirements that will be specified by XIFI. XIFI will define a number of incentives to attract infrastructures into the federation, through the creation of value propositions, including a service to validate compatibility with the FI-WARE GEs and the opportunity to participate in the new Future Internet infrastructures marketplace under non-discriminatory principles.

XIFI will be carried out by a wide European partnership including major telecom operators, service providers, innovative SMEs, research centres, Universities, consultants and the infrastructure operators of the five initial nodes. This mix of roles and competences is necessary to ensure the achievements of XIFI are viable and sustainable beyond the FI-PPP programme. All partners have significant experience in the Future Internet activities and in collaborative programmes.

COST Action IC1304

ICT COST Action IC1304 “Autonomous Control for a Reliable Internet of Services (ACROSS)”

Currently, we are witnessing a paradigm shift from the traditional information-oriented Internet into an Internet of Services (IoS). This transition opens up virtually unbounded possibilities for creating and deploying new services. Eventually, the ICT landscape will migrate into a global system where new services are essentially large-scale service chains, combining and integrating the functionality of (possibly huge) numbers of other services offered by third parties, including cloud services. At the same time, as our modern society is becoming more and more dependent on ICT, these developments raise the need for effective means to ensure the quality and reliability of the services running in such a complex environment. Motivated by this, the aim of this Action is to create a European network of experts, from both academia and industry, to develop autonomous control methods and algorithms for a reliable and quality-aware IoS.

Downloads
Action Fact Sheet

Memorandum of Understanding
Download MoU as PDF

Chairs of the Action:
Prof Rob VAN DER MEI (NL)
Prof J.L. VAN DEN BERG (NL)

Arcus – Understanding energy consumption in the cloud

Arcus is an internally funded project which focuses on correlating energy consumption with cloud usage information to enable a cloud provider to understand in detail how its energy is consumed. As energy accounts for an increasing share of a cloud provider’s operating costs, this issue is growing in importance.

The work focuses on correlating cloud usage information obtained from OpenStack (primarily via Ceilometer) with energy consumption information obtained from the devices, using a mix of internal readings and wireless metering infrastructure. It involves determining which users of the cloud stack are consuming energy at any point in time, by fine-grained monitoring of the energy consumption in the system coupled with information on how the systems are used.
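
As a simplified illustration of the correlation step, the following Python sketch attributes a host’s metered energy to tenants in proportion to their measured CPU utilisation over the same interval. The data shapes and the purely proportional model are simplifying assumptions for illustration, not the project’s actual method.

```python
# Sketch: attribute one host's metered energy to tenants in proportion
# to their CPU-utilisation samples over the same interval.
from collections import defaultdict

# (tenant, instance, cpu_util_percent) samples for one host and interval,
# as Ceilometer-style usage data might provide them
usage_samples = [
    ("tenant-a", "vm-1", 60.0),
    ("tenant-a", "vm-2", 10.0),
    ("tenant-b", "vm-3", 30.0),
]

host_energy_wh = 120.0  # metered energy the host consumed over the interval

def attribute_energy(samples, total_wh):
    total_util = sum(util for _, _, util in samples)
    per_tenant = defaultdict(float)
    for tenant, _, util in samples:
        per_tenant[tenant] += total_wh * (util / total_util)
    return dict(per_tenant)

print(attribute_energy(usage_samples, host_energy_wh))
# {'tenant-a': 84.0, 'tenant-b': 36.0}
```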

The output of the project will be a tool that enables an OpenStack provider to see this relationship: it could be used by a public cloud provider to understand how their tariffing structure relates to their costs, or by a private cloud operator to understand which internal applications or departments are most responsible for energy consumption.

The project started in January 2014.

ZuFi: The Zurich Future Internet node

Motivation

Besides the work in national projects, where we engage with and transfer our knowledge to local SMEs, a major focus has always been on international, or more precisely European, projects; we are currently involved in a number of FP7 Future Internet projects. Our goal is to push the progress of this so-called Future Internet (FI) even further by providing a platform, for ourselves and our communities, where newly developed FI services can be offered to third parties as well as the general public. To this end, we are building ZuFi – the Zurich Future Internet node.

Infrastructure

At the moment the ICCLab has set up cloud infrastructures at two locations: one at ZHAW in Winterthur and one at the Equinix data centre in Zurich (a big shout-out to Equinix for providing the space and power for this!). Both currently run OpenStack Grizzly. The setup in Zurich emerged out of our strategic partnership with Equinix and enables further research in areas like cloud federation and cloud interoperability. Its resources will be exclusively available for XIFI.

ZuFi – Winterthur

The following hardware is available in the Winterthur node.

Servers: 15 x Lynx CALLEO 1240

Total capacity:

  • CPU: 240 cores
  • RAM: 960 GB
  • HDD: 60 TB
  • 12 TB NFS shared disk space

Per-server capacity:

  • CPU: 2 x Intel Xeon E5620 (16 cores)
  • RAM: 64 GB
  • HDD: 4 TB
  • Network: 3 x Gbit Ethernet NICs

Per-core capacity:

  • RAM: 4 GB
  • HDD: 250 GB

Switch: HP E2910al-48G

The following software and virtualization support is installed.

Hypervisor: KVM
Cloud manager: OpenStack Grizzly
Base OS: Ubuntu 13.04

ZuFi – Equinix

The following hardware is available in the Equinix node.

Servers: 8 x Intel Xeon 5140, 2.3 GHz

Total capacity:

  • CPU: 64 cores
  • RAM: 256 GB
  • HDD: 10 TB SAN

Per-server capacity:

  • CPU: 2 x Intel Xeon 5140 (8 cores)
  • RAM: 32 GB
  • HDD: variable (attached via SAN controller)
  • Network: 4 x Gbit Ethernet NICs

Per-core capacity:

  • RAM: 4 GB
  • HDD: variable

Switch: Cisco Catalyst 3560G

The following software and virtualization support is installed.

Hypervisor: KVM
Cloud manager: OpenStack Grizzly
Base OS: CentOS 6.4

Future Plans

We plan to extend the current installation with several Generic Enablers (GEs) and thereby offer the Future Internet services developed in the FI-WARE project to our academic community as well as the general public. Part of this will also be the integration with local FI and smart city activities.

The InIT BladeCenter Lab

Complementing the ICCLab cloud computing lab, the InIT operates and runs two BladeCenter infrastructure environments based on IBM blade-server technology.

These environments have different purposes:

  • 1 productive environment for projects, teaching and training systems (running on VMware vSphere v5.0 and operated via a vCenter 5 management system)
  • 1 test and training environment for practical exercises (different hypervisor configurations, as required during lab exercises)

[Images: IBM BladeCenter chassis with HS22 server units; DS 3512 SAN storage unit]

The systems consist of IBM blade servers and corresponding BladeCenter chassis, and are connected to IBM SAN storage networks linked via fiber-optic switches to deliver maximum flexibility and performance for each environment. Both environments can be operated and run independently of each other, at different physical locations.

Hard-/Software Infrastructure and Virtualization Layers

The productive InIT BladeCenter environment consists of 7 IBM HS22 blade systems, each with 192 GB RAM and 2 Xeon processors, linked to an 8 Gbit fiber-optic storage switch.

They run VMware vSphere v5.0 hypervisor software for virtualization and are operated via vCenter v5 as the management platform.

The productive environment has 3 IBM DS 3512 SAN storage units available: two with 12 TB of storage space and one with 8.5 TB. All SAN units are configured as RAID 6 storage systems, with 4 GB local memory and a hot-spare disk each. The host systems also have an additional storage server available that contains over 200 different ISO software images (various operating systems and applications that can be attached during setup of a new virtual system, or for installation of additional software during operation). All computing units are connected via 2 x 1 Gbit network interfaces for operation of the virtual systems and 2 x 1 Gbit network links for backbone management and backup purposes.

[Image: InIT IBM BladeCenter infrastructure with HS22 servers and ISO storage server]

The InIT blade infrastructure currently hosts over 250 active virtual systems and approx. 8 different templates that are used in various teaching courses, as well as project and test environments for dedicated research projects. The templates allow the operators to deploy pre-configured Windows and Linux servers within minutes.

A dedicated backup server is available for raw backups of running virtual systems or for backups of individual virtual servers on user request. An agent-based version of the Acronis Backup software allows authorized users to run independent backups of full servers, databases or individual files without requiring access to the underlying VMware host systems.

Test-Lab Environment

The InIT blade test lab consists of 10 x IBM HS21XM and 2 x IBM HS21 blade servers, each equipped with 2 Intel Xeon processors and 32 GB RAM.
They share 2 IBM DS3400 SAN storage units, each with 36 x 250 GB storage volumes, organized in various LUNs for different exercises and training labs.

Connectivity and internet access are managed via a BladeCenter chassis and three 6-port 1 Gbit Ethernet switches, while the SANs are attached via 4 Gbit fiber-optic switches to allow maximum flexibility and configuration options for different lab tests.

[Images: HS21 lab blade servers; DS3400 lab SAN storage units]

T-NOVA

Overview: Network Functions as-a-Service over Virtualised Infrastructures

Network Functions Virtualisation (NFV) is an emerging concept. It refers to the migration of certain network functionalities, traditionally performed by hardware elements, to virtualized IT infrastructures, where they are deployed as software components. NFV leverages commodity servers and storage, including cloud platforms, to enable rapid deployment, reconfiguration and elastic scaling of network functionalities.

[Figure: Network Functions Virtualisation concept]

With the aim of promoting the NFV concept, T-NOVA introduces a novel enabling framework, allowing operators not only to deploy virtualized Network Functions (NFs) for their own needs, but also to offer them to their customers, as value-added services. Virtual network appliances (gateways, proxies, firewalls, transcoders, analyzers etc.) can be provided on-demand as-a-Service, eliminating the need to acquire, install and maintain specialized hardware at customers’ premises.

[Figure: High-level architecture of the T-NOVA platform]

T-NOVA will design and implement a management/orchestration platform for the automated provision, configuration, monitoring and optimization of Network Functions-as-a-Service (NFaaS) over virtualised Network/IT infrastructures.
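
As a deliberately simplified sketch of the lifecycle described here – provision, configure, monitor, optimise – the following Python snippet runs a toy reconcile loop that scales a virtual function in and out based on observed load. The catalogue, thresholds and telemetry are invented for the example; T-NOVA’s real orchestrator is far richer.

```python
# Toy NFaaS lifecycle: provision a network function, then monitor its
# load and elastically adjust the instance count. All values invented.
import random

CATALOGUE = {"virtual-firewall": {"cpu": 2, "ram_mb": 2048}}  # hypothetical

def provision(vnf_name):
    flavour = CATALOGUE[vnf_name]
    print(f"provisioning {vnf_name} with {flavour}")
    return {"name": vnf_name, "instances": 1}

def monitor_load(vnf):
    return random.uniform(0, 100)  # stand-in for real telemetry

def reconcile(vnf, scale_out_at=80.0, scale_in_at=20.0):
    load = monitor_load(vnf)
    if load > scale_out_at:
        vnf["instances"] += 1          # elastic scale-out
    elif load < scale_in_at and vnf["instances"] > 1:
        vnf["instances"] -= 1          # scale back in
    print(f"load={load:.0f}% -> {vnf['instances']} instance(s)")

vnf = provision("virtual-firewall")
for _ in range(5):
    reconcile(vnf)
```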

T-NOVA leverages and enhances cloud management architectures for the elastic provision and (re-) allocation of IT resources assigned to the hosting of Network Functions. It also exploits and extends Software Defined Networking platforms for efficient management of the network infrastructure.

Furthermore, in order to facilitate the involvement of diverse actors in the NFV scene and attract new market entrants, T-NOVA establishes an “NFV Marketplace”, where network services and functions by several developers can be published and brokered/traded. Via the Marketplace, customers can browse and select the services and virtual appliances which best match their needs, as well as negotiate the associated SLAs and be charged under various billing models.
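
To make the marketplace idea concrete, here is a small Python sketch in which a customer filters published offers by a latency SLA and compares billing models. Offers, SLA fields and prices are invented for illustration; they do not reflect the actual T-NOVA Marketplace data model.

```python
# Sketch: browse published VNF offers, keep those meeting an SLA, and
# compare billing models. All offers and fields are invented.
OFFERS = [
    {"vnf": "firewall", "provider": "dev-a", "latency_ms": 5,
     "billing": "pay-per-use", "price_per_hour": 0.12},
    {"vnf": "firewall", "provider": "dev-b", "latency_ms": 2,
     "billing": "flat-rate", "price_per_month": 40.0},
]

def match_offers(vnf, max_latency_ms):
    """Return offers for the requested function that satisfy the SLA."""
    return [o for o in OFFERS
            if o["vnf"] == vnf and o["latency_ms"] <= max_latency_ms]

for offer in match_offers("firewall", max_latency_ms=3):
    print(offer["provider"], offer["billing"])
```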

[Figure: T-NOVA Marketplace]

More info here.
