Diana Moise

Diana Moise is a researcher in the ZHAW InIT Cloud Computing Lab.

Diana received her PhD degree in Computer Science from École Normale Supérieure de Cachan, France. In the past, she worked as a research engineer at the INRIA Rennes – Bretagne Atlantique research center. The focus of her PhD was on optimizing MapReduce applications on large-scale distributed infrastructures, including storage optimization as well as application- and platform-aware optimizations. Her research interests include distributed computing, cloud computing, large-scale distributed data management, the MapReduce paradigm, and Hadoop.


KIARA InfiniBand Functionality Overview

This blog post gives an overview of the InfiniBand functionality offered by the transport stack of KIARA. KIARA is a new, advanced middleware and part of the FI-WARE project, which is in turn part of the very large European FI-PPP programme. Several team members of the ICCLab are currently working on the implementation of this middleware.


Floating IP management in OpenStack

OpenStack is generally well suited for typical use cases, and there is hardly any reason to tinker with the advanced options and features it offers. Normally you would plan your public IP address usage and management well in advance, but in an experimental lab like ours, things are often handled in an ad-hoc manner. Recently, we ran into a unique problem that took us some time to solve.

We manage a full 160.xxx.xxx.xxx/24 block of 256 public IP addresses. Due to an underestimated user demand forecast, we ended up with a floating-ip pool in our external cloud that was woefully inadequate. One solution was to remove the external network altogether and recreate it with a larger floating-ip pool. The challenge was – we had real users with experiments running on our cloud, and destroying the external network was not an option.

So here is what we did to add more floating IPs to the pool without stopping or restarting any of the neutron services –

  1. Log on to your OpenStack controller node
  2. Read the neutron configuration file (usually located at /etc/neutron/neutron.conf)
  3. Locate the connection string – this tells you where the neutron database is located (see the snippet below)
  4. Depending on the database type (mysql, sqlite), use the appropriate database manager (ours was using sqlite)
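
For instance, the connection string can be pulled out of the configuration file with a simple grep. The output below reflects our setup; the exact section the option lives in may vary between releases, so treat this as a sketch:

$ grep "^connection" /etc/neutron/neutron.conf
connection = sqlite:////var/lib/neutron/ovs.sqlite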

Next, I will show how to add more IPs to the floating pool using sqlite3; this can easily be adapted for mysql.

$ sqlite3 /var/lib/neutron/ovs.sqlite
SQLite version 3.7.9 2011-11-01 00:52:41
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables

The list of neutron tables dumped by the previous command will be similar to this –

agents ovs_tunnel_endpoints
allowedaddresspairs ovs_vlan_allocations
dnsnameservers portbindingports
externalnetworks ports
extradhcpopts quotas
floatingips routerl3agentbindings
ipallocationpools routerroutes
ipallocations routers
ipavailabilityranges securitygroupportbindings
networkdhcpagentbindings securitygrouprules
networks securitygroups
ovs_network_bindings subnetroutes
ovs_tunnel_allocations subnets

The tables that are of interest to us are –

  • ipallocationpools
  • ipavailabilityranges

Next, look at the schema of these tables; this will shed more light on what needs to be modified –

sqlite> .schema ipavailabilityranges
CREATE TABLE ipavailabilityranges (
allocation_pool_id VARCHAR(36) NOT NULL,
first_ip VARCHAR(64) NOT NULL,
last_ip VARCHAR(64) NOT NULL,
PRIMARY KEY (allocation_pool_id, first_ip, last_ip),
FOREIGN KEY(allocation_pool_id) REFERENCES ipallocationpools (id) ON DELETE CASCADE
);
sqlite> .schema ipallocationpools
CREATE TABLE ipallocationpools (
id VARCHAR(36) NOT NULL,
subnet_id VARCHAR(36),
first_ip VARCHAR(64) NOT NULL,
last_ip VARCHAR(64) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(subnet_id) REFERENCES subnets (id) ON DELETE CASCADE
);
sqlite>

Next, look at the contents of these tables; for brevity, only partial output is shown below. Also, I have masked some of the IP addresses with xxx – replace these with real values when using this guide.

sqlite> select * from ipallocationpools;
b5a7b8b4-ad10-4d92-b877-e406df8ceb91|f0034b20-3566-4f9f-a6d5-b725c02f98fc|10.10.10.2|10.10.10.254
7bca3261-e578-4cfa-bba1-51ba6eae7791|765adcdf-72a4-4e07-8860-f443c7b9098b|160.xxx.xxx.32|160.xxx.xxx.80
a9994f70-2b9a-45f3-b5db-31ccc6cb7e90|72250c58-5fda-4d1b-a847-b71b432ea218|10.10.1.2|10.10.1.254
23032620-731a-4092-9509-7591b53b5ddf|12849c1f-4456-4fc1-bea6-444cce4f1ac6|10.10.2.2|10.10.2.254
fcf22336-2bd6-4e1c-92cd-e33af0b23ad9|bcf1082d-50d5-4ebc-a311-7e0618096356|10.10.11.2|10.10.11.254
bc961a47-4902-4ca2-b4f4-c5fd581a364e|09b79d08-aa92-4b99-b1fd-61d5f31d3351|10.10.25.2|10.10.25.254
sqlite> select * from ipavailabilityranges;
b5a7b8b4-ad10-4d92-b877-e406df8ceb91|10.10.10.6|10.10.10.254
a9994f70-2b9a-45f3-b5db-31ccc6cb7e90|10.10.1.2|10.10.1.2
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.74|160.xxx.xxx.74
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.75|160.xxx.xxx.75

Looking at the above two outputs, it is immediately clear what needs to be done next in order to add more IPs to the floating-ip range.

  1. modify the floating-ip record in the ipallocationpools table, extending the first_ip and/or last_ip value(s)
  2. for each new IP address to be added to the pool, create an entry in the ipavailabilityranges table with first_ip equal to last_ip (both set to the actual IP address)

As an example, say I want to extend my pool from 160.xxx.xxx.80 to 160.xxx.xxx.82; this is what I would do –

sqlite> update ipallocationpools set last_ip='160.xxx.xxx.82' where first_ip='160.xxx.xxx.32';
sqlite> insert into ipavailabilityranges values ('7bca3261-e578-4cfa-bba1-51ba6eae7791', '160.xxx.xxx.81', '160.xxx.xxx.81');
sqlite> insert into ipavailabilityranges values ('7bca3261-e578-4cfa-bba1-51ba6eae7791', '160.xxx.xxx.82', '160.xxx.xxx.82');
sqlite> .exit

And that’s all – you now have 2 additional IPs available for use from your floating-ip pool, without having to restart any of the neutron services. Just make sure that the allocation_pool_id of each new ipavailabilityranges entry matches the id of the corresponding ipallocationpools record.
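
As a final sanity check (optional, but cheap), you can re-query the two tables and confirm the new rows are there. With the example pool above, the output should look along these lines:

$ sqlite3 /var/lib/neutron/ovs.sqlite
sqlite> select last_ip from ipallocationpools where id='7bca3261-e578-4cfa-bba1-51ba6eae7791';
160.xxx.xxx.82
sqlite> select * from ipavailabilityranges where allocation_pool_id='7bca3261-e578-4cfa-bba1-51ba6eae7791';
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.74|160.xxx.xxx.74
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.75|160.xxx.xxx.75
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.81|160.xxx.xxx.81
7bca3261-e578-4cfa-bba1-51ba6eae7791|160.xxx.xxx.82|160.xxx.xxx.82
sqlite> .exit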

Vincenzo Pii


Vincenzo joined the ICCLab in March 2014 where he is working as a researcher in the Cloud Storage initiative.

Vincenzo obtained his Master’s degree from the University of Pisa in March 2011 and gained three years of industry experience before joining the ICCLab. At Intecs (Pisa), he worked in the Telecommunications and Smart Systems research lab, participating in internal research activities, mainly related to M2M and IoT, and in FP7 research projects such as BETaaS. At TomTom (Eindhoven), Vincenzo worked as a software engineer on the development of in-dash infotainment systems for cars, extensively adopting Scrum/Agile methodologies.

His current research activities are aimed at developing cloud storage systems with advanced technical features that are suitable for application and adoption by industrial partners.

Empty parameter list in C function, do you write func(void) or func()?

While reviewing code for the KIARA project I came across a change set which read like this:

- void super_duper_func () {
+ void super_duper_func (void) {

I was puzzled – what’s the difference anyway, apart from making it explicitly clear that no parameters are expected? Well, I was wrong. The ISO 9899 standard (read: the C99 standard) states under paragraph ‘6.7.5.3 Function declarators (including prototypes)’ that

10 — The special case of an unnamed parameter of type void as the only item in the list
specifies that the function has no parameters.
14 — An identifier list declares only the identifiers of the parameters of the function. An empty
list in a function declarator that is part of a definition of that function specifies that the
function has no parameters. The empty list in a function declarator that is not part of a
definition of that function specifies that no information about the number or types of the
parameters is supplied.

Therefore, we can conclude that even though your code may compile and work correctly, it is not standard compliant, and you may even forgo compile-time error detection. Have a look at this snippet, which compiled flawlessly with clang 3.4:

#include <stdio.h>

void func();

int main() {
    func("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    return 0;
}

void func() {
    printf("in func()\n");
}

When you turn on all warnings in clang you do get a warning, but it is easily overlooked and not very obvious:

$ clang -std=c99 -Weverything -o empty_param_list empty_param_list.c
empty_param_list.c:10:6: warning: no previous prototype for function 'func' [-Wmissing-prototypes]
void func() {
     ^
empty_param_list.c:3:6: note: this declaration is not a prototype; add 'void' to make it a prototype for a zero-parameter function
void func();
     ^
          void
1 warning generated.

If you go through the code you will find a function prototype, and you may think that without a previous prototype, with the function defined after ‘main’, the compiler would fail anyway … Indeed, in that case, if you forgot the function prototype, the compiler would throw an error (conflicting types for ‘func’) even if you passed no arguments.
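
For completeness, here is the same snippet fixed according to the standard, with an explicit void in both the prototype and the definition. With this version, a call like func("...") no longer slips through – clang and gcc reject it outright:

#include <stdio.h>

/* Prototype with explicit void: the function takes zero parameters. */
void func(void);

int main(void) {
    func();    /* calling func("...") here would be a compile-time error */
    return 0;
}

void func(void) {
    printf("in func()\n");
}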

To sum it up:

  • Create function prototypes/declarations (they go before the first function definition in your source)
  • If you don’t need any parameters, explicitly write void in the parameter list (this helps the compiler catch mistakes)
  • Turn on all warnings with either ‘-Wall’ (gcc) or ‘-Weverything’ (clang), and don’t ignore those warnings!

Introduction to MuleSoft

As part of the Service Prototyping lab, the ICCLab teaches the Service Engineering course at ZHAW. Part of that course was a lecture on MuleSoft, a tool that facilitates data exchange between applications following the service-oriented architecture (SOA) methodology. It was developed to take the donkey-work out of the integration process, allowing developers to connect anything, anywhere. Because it “carries the heavy development load” of connecting systems, it is also sometimes referred to as a Swiss army knife. Mule differs from typical web application servers by specializing in integration between different applications, databases and cloud services, as opposed to integration with just the end users. Mule applications are also stateless and event-driven.

MuleSoft is an integration platform that consists of CloudHub and Mule ESB. CloudHub is an integration platform as a service (iPaaS) that connects SaaS and on-premise applications, allowing cloud-to-cloud integration as well as cloud-to-enterprise integration. Mule ESB, on the other hand, is a Java-based enterprise service bus for building and running integration applications and web services. It offers service mediation by separating business logic from protocols and message formats, along with message routing and data transformation. Most importantly, it provides service creation and service orchestration: the functionality of any endpoint can be exposed as a service, and existing services can be hosted in lightweight service containers.

To facilitate access to Mule ESB’s functionality, there is Mule Studio, an Eclipse-based integration development environment that can be used either as a visual, drag-and-drop editor or as a simple XML editor. Because Mule is also based on the concept of event-driven architecture, you can use Mule Studio to create an application that processes messages by forming a flow. A Mule flow is a sequence of message-processing events constructed by combining several building blocks, which are pre-packaged units of business logic. Each building block in the flow evaluates or processes the message until it has passed through all the building blocks. Mule receives the message through a request-response inbound endpoint, transforms the content into a new format and processes the business logic before returning a response via the message source.

A Mule flow typically consists of a message source, message processors and some global elements. The message source accepts a message from an external source, triggering the execution of the flow. The message processors transform, filter and enrich the message, while the global elements are reusable pieces of code that can be invoked by multiple elements in any flow within the application. The Mule message is the data that passes through the application via one or more flows. It consists of a message header, the metadata about the message, and the message payload, the actual data content being transported through the Mule application. Mule uses the Mule Expression Language (MEL) to facilitate working with the Mule message. In the final stages, a response is returned to the original sender, or the results of the processing are logged to a database or sent to a third party.
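
To make this concrete, here is a minimal sketch of such a flow as a Mule 3-style XML configuration. This is illustrative only – the flow name, port and path are invented and the schema locations are omitted for brevity – but it shows the pattern just described: a request-response inbound endpoint acting as the message source, followed by a simple message processor that sets the response payload:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">
    <flow name="echoFlow">
        <!-- message source: request-response HTTP inbound endpoint -->
        <http:inbound-endpoint exchange-pattern="request-response"
                               host="localhost" port="8081" path="echo"/>
        <!-- message processor: transform the message into the response -->
        <set-payload value="Hello from Mule"/>
    </flow>
</mule>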

For example, let’s say that a company has a shipping service and a billing service which now need to connect to an inventory system. Writing the code manually may work for the time being, but suppose we want to make a few changes in the future, or connect to a third-party SaaS app: we would then have to update every connection. Instead, Mule can transform the different data formats and act as a translator between them.

To sum up, what Mule basically does is enable integration between SaaS and on-premise applications, eliminating point-to-point connections and taking away the need to worry about different data formats.

FI-PPP XiFI (FI-Ops)

What is XIFI?
XIFI is a project of the European Public-Private Partnership on Future Internet (FI-PPP) programme. In this context, XIFI is the project responsible for the capacity-building part of the programme.

XIFI will pave the way for the establishment of a common European market for large-scale trials for Future Internet and Smart Cities through the creation of a sustainable pan-European federation of Future Internet test infrastructures. The XIFI open federation will leverage existing public investments in advanced infrastructures and support advanced large-scale deployment of FI-PPP early trials across a multiplicity of heterogeneous environments and sector use cases that should be sustained beyond the FI-PPP programme.

For more details on the ICCLab’s contribution to this project, see: The FI-PPP ZuFi Node

Activities
  • Integrate infrastructure components with functional components that satisfy the interoperability requirements for the GEs of the FI-WARE core platform
  • Ensure that each infrastructure site is able to offer access to its services through open interfaces as specified by the FI-PPP collaboration agreement terms and the new governance model agreed at the FI-PPP programme level
  • Support the infrastructure sites that exist in the early trial projects to adapt and upgrade their services and functionality
  • Support more of the existing infrastructures, identified by INFINITY, to adapt and upgrade their services and functionality
  • Leverage the experience and knowledge of federation of testbeds that has been gained by the FIRE initiative
  • Develop processes and mechanisms to validate that each site which joins the XIFI federation is able to provide the required services and thus is able to support the early trials and phase III (expansion phase) of the programme
  • Develop the necessary business incentives in order to lay the groundwork for a sustainable ecosystem beyond the horizon of the FI-PPP programme
  • Seek cooperation with the FI-PPP Programme Facilitation and Support project as well as the technology foundation, the usage areas and early trials projects
  • Utilise, where appropriate, the infrastructure investments and project support provided by GÉANT and its connected NRENs and global partners who are involved in similar initiatives, particularly in North America (GENI) and Asia

Main XIFI planned outcomes
Integration of selected infrastructures into a federated facility and its deployment, operation and support to provide capacity to meet the needs of the FI-PPP phase II trials. Initially, the federation of infrastructures will consist of five nodes located in five different European countries, enabled with the Technology Foundation services (FI-PPP project FI-WARE), to be ready before the start of FI-PPP phase III. This initial core backbone will be enlarged to 15 nodes during the second year with new local and regional infrastructures. The selection of appropriate infrastructures will be based on the work and the capacities repository (www.xipi.eu) of the Capacity Building support action (project INFINITY of FI-PPP phase I). Further relevant infrastructures originate in the new use case early trial projects, the FIRE facilities, Living Labs-related infrastructures, EIT ICT Labs-related infrastructures, and possibly others. This enlargement process will be the key to establishing a marketplace for large-scale trial infrastructures.

Adaptation, upgrade and validation of selected infrastructures, through the creation of adaptation components that will enable infrastructure federation and monitoring and facilitate the deployment of FI-WARE GEs. The adaptation and update process will cover interoperability mechanisms at technical, operational, administrative and knowledge level, to be able to support the FI-WARE services with a guaranteed QoS.

A sustainable marketplace for infrastructures within the XIFI federation where they can be found, selected and used by the activities of the FI-PPP expansion (phase III) and in future initiatives beyond the FI-PPP programme. Special consideration will be given to Smart City initiatives, opening new business opportunities and providing sustainability beyond the XIFI project duration.

In addition, the following will also be achieved:

  • The ability to efficiently replicate deployment environments to extend and validate Use Case Trials and to support capacity sharing across Use Case Trials.
  • A pathway for innovators, involving and going beyond existing experimentations (e.g. FIRE and Living Labs), that enables large-scale trials to address business-related issues such as scalability and sustainability.
  • The provision of training, support and assistance, including integration guidelines and the promotion of best practice between large-scale trials and infrastructure nodes. These activities will facilitate the uptake and continued use of the FI-PPP results. They will address infrastructure operators and other Future Internet stakeholders, including FI-PPP use case trials and Future Internet application developers.
  • The creation of business models for the sustainability of the XIFI federation, through engagement with stakeholders and elaboration of value propositions, which expand the federation and maximize the impact of the project.

XIFI will demonstrate and validate the capabilities of a unified market for Future Internet facilities, overcoming a number of limitations of the current set of Future Internet experimental infrastructures, namely fragmentation, interoperability and scalability. XIFI will achieve this vision by federating a multiplicity of heterogeneous environments – using the generic and specific enablers provided by FI-WARE and the FI-PPP use cases and early trials. XIFI will extend its effort to include the results of other Future Internet services and R&D work, such as the Future Internet Research and Experimentation (FIRE) initiative.

To facilitate the establishment of an infrastructure market, the federation will be open to any interested party that fulfils the technical and operational requirements specified by XIFI. XIFI will define a number of incentives to attract infrastructures to the federation, through the creation of value propositions, including a service to validate compatibility with the FI-WARE GEs and the opportunity to participate in the new Future Internet infrastructures marketplace under non-discriminatory principles.

XIFI will be carried out by a wide European partnership including major telecom operators, service providers, innovative SMEs, research centres, Universities, consultants and the infrastructure operators of the five initial nodes. This mix of roles and competences is necessary to ensure the achievements of XIFI are viable and sustainable beyond the FI-PPP programme. All partners have significant experience in the Future Internet activities and in collaborative programmes.

COST Action IC1304

ICT COST Action IC1304 “Autonomous Control for a Reliable Internet of Services (ACROSS)”

Currently, we are witnessing a paradigm shift from the traditional information-oriented Internet into an Internet of Services (IoS). This transition opens up virtually unbounded possibilities for creating and deploying new services. Eventually, the ICT landscape will migrate into a global system where new services are essentially large-scale service chains, combining and integrating the functionality of (possibly huge) numbers of other services offered by third parties, including cloud services. At the same time, as our modern society is becoming more and more dependent on ICT, these developments raise the need for effective means to ensure quality and reliability of the services running in such a complex environment. Motivated by this, the aim of this Action is to create a European network of experts, from both academia and industry, aiming at the development of autonomous control methods and algorithms for a reliable and quality-aware IoS.

Downloads

  • Action Fact Sheet
  • Memorandum of Understanding (MoU available as PDF)

Chairs of the Action:
Prof Rob VAN DER MEI (NL)
Prof J.L. VAN DEN BERG (NL)

3rd IEEE CloudNet conference 2014

IEEE CloudNet 2014

The third IEEE Cloud Networking conference (CloudNet 2014) will take place in Luxembourg from 8 to 10 October 2014.

The technical program will include special sessions whose objective is to complement the regular program with new and emerging themes such as Data Center Network Management, Reliability, Optimization, Distributed Data Center Architectures and Services, IaaS, PaaS, SaaS, Energy-Efficient Datacenters and Networks, Internet Routing of Cloud Data, Cloud Traffic Characterization and many others; the full list is available on the conference website.

This edition will feature a session on Mobile Cloud Networking proposed, and chaired, by the FP7 MCN project, which is technically coordinated by the ZHAW ICCLab.