Month: April 2014

Swiss Informatics SIG CC kick-off meeting

ZHAW ICCLab, which chairs the Swiss Informatics Society (SI) Cloud Computing Fachgruppe, is setting up this Special Interest Group (SIG) in Switzerland (SI-SIG-CC).

The Special Interest Group on Cloud Computing kick-off meeting will be held on:

21 May 2014, from 9:30 to 14:30, at
Zurich University of Applied Sciences (ZHAW)

Address: Technikumstrasse 9, 8400 Winterthur

ROOM: TE 419

The overall agenda is:

9:30     Intro and Welcome (Chairman)

10:00    Defining the scope of the SIG – Technical, Academic and Industrial Impacts

11:00    Coffee Break

11:15    Long-term objectives

12:15    2014 Objectives

12:45    Governance structure, tools and next meetings

13:30    Lunch

To register, please send an email to cimm@zhaw.ch

ICCLab News – April 2014, Issue n. 1


The ZHAW Service Engineering ICCLab periodically sends newsletters with short updates on the lab's latest activities and relevant events.

In this issue

  • Welcome message
  • FI-PPP at FIA 2014 in Athens
  • ICCLab node on XiPi portal of Future Internet infrastructures
  • Major event: 1st European Conference on the Future Internet in Brussels
  • Next events: Open Cloud Day 2014, ICCLab workshop
  • Relevant call for papers
  • Project Corner
  • Tutorial: Cloud Native Applications

Welcome

The InIT Cloud Computing Lab (ICCLab) and the Service Engineering focus area of ZHAW welcome you to this first issue of the newsletter. We will provide you with updates on relevant scientific and technological activities in Europe and Switzerland, with a focus on the main topics and initiatives carried out by our lab. The newsletter always includes progress on our projects and a technical tutorial at the end.

________________________________________

FI-PPP at FIA 2014 in Athens


Athens hosted the annual Future Internet Assembly (FIA) at the Megaron Athens International Conference Centre on 18-20 March. More than 400 European Internet scientific and economic actors (researchers, industrialists, SMEs, users, and service and content provider representatives) attended, delivered new ideas and shared views aimed at advancing the activities reshaping the Future Internet. The themes of this year's event centred on new Internet technologies based on network/cloud integration and virtualization, and on innovative software, services and cloud technologies that enable application innovation. The presence of the Future Internet Public-Private Partnership (FI-PPP) programme at FIA 2014 was assured by dedicated sessions and by FI-PPP project booths, where relevant projects such as XI-FI (ZHAW is a partner), FI-WARE (ZHAW), FI-CONTENT2, FINESCE, FI-STAR and FITMAN were represented.

___________________________________________

ICCLab node on the XiPi portal


Within the activities of the FI-PPP Infinity project, a dedicated portal was created last year to improve and facilitate the registration, discovery and uptake of Future Internet experimental infrastructures, and to make it easier for Future Internet developers and experimenters to find, through the XiPi portal, relevant infrastructures for their testing requirements. The datacentre of the ZHAW ICCLab, hosted at Equinix Zurich, has been part of the XI-FI federation since 2014, and all information on the ICCLab node can be found on the XiPi portal.

__________________________________________

Major Event:

1st European Conference on the Future Internet


On 2-3 April 2014, the Future Internet PPP organised the 1st European Conference on the Future Internet (ECFI) in Brussels. The event brought together key stakeholders to discuss how Europe can achieve global leadership in ICT by 2020 through innovative Internet technologies. ZHAW ICCLab was part of the event organisation as a partner in the CONCORD project.

_____________________________________________

Next events:

Open Cloud Day 2014


10 June 2014, 09:00 – 17:00, University of Bern, UniS

Schanzeneckstrasse 1, Bern, A003

Motivation and Goal

Cloud computing is becoming more and more important. In the view of /ch/open, clouds should be open according to the principles of the Open Cloud Initiative in order to deliver their full power. The goal is to foster open clouds and cloud interoperability, especially taking into account the requirements of public administrations and SMEs. The conference focuses on concrete stacks, with specific workshops held in the afternoon. Developments in government clouds are also discussed.

The conference builds on the success of two previous Open Cloud Days in 2012 and 2013.

Possible Program

The event is planned as a full-day event with a single track and 1-2 training sessions for specific cloud stacks. The themes will be: Clouds for Public Administration, Success Stories of implemented clouds, Details of Cloud Technology, Cloud interoperability, How to avoid/minimize lock-ins, and Platform as a Service.

Next Event:

Workshop on Scientific Computing in the ICCLab Cloud

The ICCLab is pleased to invite you to the upcoming Workshop on Scientific Computing in the ICCLab Cloud. This workshop will focus on how to leverage the ICCLab Cloud infrastructures for executing scientific applications in a distributed, high performance environment.

The workshop’s agenda will include several talks describing applications from different areas of science (physics, mathematics, machine learning, etc.), highlighting their requirements from the ICT perspective. The workshop will also include a comprehensive overview of Hadoop and a tutorial on how to deploy, configure and use a Hadoop cluster on the ICCLab Cloud through the Savanna OpenStack project.

The workshop date and the full program are to be announced.

_____________________________________________

Relevant Calls for papers

  • The IEEE Transactions on Cloud Computing (TCC) is seeking original and innovative research papers in all areas related to Cloud computing.
  • From 2014, TCC will publish 4 issues per year. For details of the submission process, please consult the relevant Web pages

___________________________________________

Project Corner

FP7 T-NOVA

After the kick-off meeting held in January 2014, the project is progressing with the definition of the use cases and scenarios as part of the WP2 activities. All these requirements and specifications will be used as input for the architecture of the project.


FP7 XI-FI

The project held its XIFI World Summit 2014 on 26-28 March in Zurich. One of the objectives of the meeting was to start the inclusion of new partners resulting from the open calls; ZHAW ICCLab is one of the new partners joining the consortium. The technical aspects needed to include new data centers in the XI-FI federation were introduced, as well as the project status and relevant deliverables.

FP7 CONCORD

The project is progressing fast with the re-organisation of the supporting actions and of the FI-PPP governance model, as a result of phase 3 of the programme, whose major objectives are the marketization of the solutions and the numerous open calls that will be issued by the new accelerator projects.


FI-WARE

News: 14/03/2014

  • From 27 January to 31 March, participants have submitted their ideas for the Smart Society Challenge or prototypes for FI-WARE Excellence. Selected teams for each challenge will receive a €2,800 prize and enter the final phase of the contest. During this phase, a jury composed of FI-WARE platform developers and other experts will advise candidates on how to improve their prototypes before they ultimately present their final versions.

geyser

ZHAW is actively engaged in the GEYSER project, which is focused on making urban Data Centres more energy efficient. One of the key aspects of the project is the relationship between the Smart City and the Data Centre: Data Centres can generate energy as well as be flexible with respect to energy consumption. For this reason they are particularly interesting players in an urban grid. The project had a meeting at ZHAW in February and is working through the requirements definition of the GEYSER system.

_________________________________________

Tutorial:

Cloud Native Applications

A cloud-native application is designed from the outset to take full advantage of cloud platforms. Cloud-fication also identifies the re-architecting that an existing application or service needs in order to follow cloud principles; the result of this process is a cloud-native target, and such targets have common behaviors and characteristics. The main properties of a cloud-native application are that it leverages cloud-platform services for reliable, scalable infrastructure and that it scales horizontally, adding resources as demand increases and releasing resources as demand decreases. In particular, it upgrades without downtime and scales automatically using proactive and reactive actions.

Key design principles include the use of non-blocking, asynchronous communication in a loosely coupled architecture, handling transient failures without degrading the user experience, and handling node failures without downtime. Since the underlying architecture is based on cloud resources, cloud-native applications optimize costs so that they run efficiently without wasting resources. They are also designed to use geographical distribution to minimize network latency, and to keep monitoring and application logging working even as nodes come and go.
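
To make the transient-failure principle concrete, here is a minimal shell sketch (with a hypothetical service URL) of a retry loop with exponential backoff, the kind of behavior a cloud-native client builds in instead of failing on the first error:

#!/bin/bash
# Retry a call to a (hypothetical) service endpoint, backing off exponentially.
# Transient failures (timeouts, 5xx responses) are retried a few times before giving up.
URL="http://example-service.local/api/status"   # hypothetical endpoint
MAX_RETRIES=5
DELAY=1
for attempt in $(seq 1 $MAX_RETRIES); do
    if curl --silent --fail --max-time 5 "$URL"; then
        echo "request succeeded on attempt $attempt"
        exit 0
    fi
    echo "attempt $attempt failed, retrying in ${DELAY}s..."
    sleep $DELAY
    DELAY=$((DELAY * 2))    # exponential backoff
done
echo "giving up after $MAX_RETRIES attempts"
exit 1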

Where did these characteristics come from?

There is evidence that companies with a large web presence have clouds with some similar capabilities.

As these characteristics show, an application does not need to support millions of users to benefit from cloud-native patterns. The architecture of the application makes a solution cloud-native, not the choice of platform. It is more cost-effective to architect new applications to be cloud-native from the start than to transform legacy applications. There is no need for every application to be cloud-native; this is a business choice driven by technical insight.

___________________________________________

Imprint

Editor: Antonio Cimmino, ICCLab ZHAW

ZHAW ICCLab Contributors: Thomas Michael Bohnert, Andy Edmonds, Piyush Harsh, Cristof Marti, Sandro Brunner, Philipp Aeschlimann, Mathias Hablutzel, Sean Murphy, Vincenzo Pii, Diana-Maria Moise, Florian Dudouet.

Please send an e-mail to cimm@zhaw.ch for commenting articles or suggesting topics for upcoming issues.


Acknowledgement

FI-PPP Programme receives funding from the European Commission under the Seventh Framework Programme (FP7). The European Commission has no responsibility for the contents of this publication.



Deploy Ceph and start using it: end to end tutorial – Installation (part 1/3)

Ceph is one of the most interesting distributed storage systems available, with very active development and a complete set of features that make it a valuable candidate for cloud storage services. This tutorial goes through the steps (and some related troubleshooting) required to set up a Ceph cluster and access it with a simple client using librados. Please refer to the Ceph documentation for detailed insights on Ceph components.

(Part 2/3 – Troubleshooting – Part 3/3 – librados client)

Assumptions

  • Ceph version: 0.79
  • Installation with ceph-deploy
  • Operating system for the Ceph nodes: Ubuntu 14.04

Cluster architecture

In a minimum Ceph deployment, a Ceph cluster includes one Ceph monitor (MON) and a number of Object Storage Devices (OSD).

Administrative and control operations are issued from an admin node, which does not necessarily have to be separate from the Ceph cluster (e.g., the monitor node can also act as the admin node). Metadata server nodes (MDS) are required only for the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use MDS).
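
For orientation, a minimal ceph.conf for such a deployment (one monitor and two OSD nodes, the layout used in the rest of this tutorial) typically looks roughly like the sketch below. ceph-deploy generates this file for you later on, so treat the values as placeholders rather than something to copy:

[global]
fsid = <cluster-uuid-generated-by-ceph-deploy>
mon_initial_members = mon0
mon_host = 192.168.58.2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx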

Preparing the storage

WARNING: preparing the storage for Ceph means to delete a disk’s partition table and lose all its data. Proceed only if you know exactly what you are doing!

Ceph will need some physical storage to be used as Object Storage Devices (OSD) and Journal. As the project documentation recommends, for better performance the Journal should be on a separate drive from the OSD. Ceph supports ext4, btrfs and xfs. I tried setting up clusters with both btrfs and xfs, but I could achieve stable results only with xfs, so I will refer to the latter.

  1. Prepare a GPT partition table (I have observed stability issues when using a dos partition)
    $ sudo parted /dev/sd<x>
    (parted) mklabel gpt
    (parted) mkpart primary xfs 0 100%
    (parted) quit

    If parted complains about alignment issues (“Warning: The resulting partition is not properly aligned for best performance”), check these two links to find a solution: 1 and 2.

  2. Format the disk with xfs (you might need to install xfs tools with sudo apt-get install xfsprogs)
    $ sudo mkfs.xfs /dev/sd<x>1
  3. Create a Journal partition (raw/unformatted)
    $ sudo parted /dev/sd<y>
    (parted) mklabel gpt
    (parted) mkpart primary 0 100%
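
Optionally, you can double-check the resulting partition layout before moving on (not part of the required steps, just a sanity check):

$ sudo parted /dev/sd<x> print
$ sudo parted /dev/sd<y> print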

 Install Ceph deploy

The ceph-deploy tool must only be installed on the admin node. Access to the other nodes for configuration purposes will be handled by ceph-deploy over SSH (with keys).

  1. Add the Ceph repository to your apt configuration, replacing {ceph-stable-release} with the Ceph release name that you want to install (e.g., emperor, firefly, …)
    $ echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  2. Install the trusted key with
    $ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
  3. If there is no repository for your Ubuntu version, you can try to select the newest one available by manually editing the file /etc/apt/sources.list.d/ceph.list and changing the Ubuntu codename (e.g., trusty -> raring)
    deb http://ceph.com/debian-emperor raring main
  4. Install ceph-deploy
    $ sudo apt-get update
    $ sudo apt-get install ceph-deploy

Setup the admin node

Each Ceph node will be set up with a user having passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access. With this configuration, ceph-deploy will be able to install and configure every node of the cluster.

NOTE: the hostnames (i.e., the output of hostname -s) must match the Ceph node names!

  1. [optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)
    $ sudo useradd -d /home/cluster-admin -m cluster-admin -s /bin/bash

    then set a password and switch to the new user

    $ sudo passwd cluster-admin
    $ su cluster-admin
  2. Install SSH server on all the cluster nodes (even if a cluster node is also an admin node)
    $ sudo apt-get install openssh-server
  3. Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions
    $ sudo useradd -d /home/ceph -m ceph -s /bin/bash
    $ sudo passwd ceph
    <Enter password>
    $ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    $ sudo chmod 0440 /etc/sudoers.d/ceph
  4. Edit the /etc/hosts file to add mappings to the cluster nodes. Example:
    $ cat /etc/hosts
    127.0.0.1       localhost
    192.168.58.2    mon0
    192.168.58.3    osd0
    192.168.58.4    osd1

    To enable DNS resolution with the hosts file, install dnsmasq

    $ sudo apt-get install dnsmasq
  5. Generate a public key for the admin user and install it on every Ceph node
    $ ssh-keygen
    $ ssh-copy-id ceph@mon0
    $ ssh-copy-id ceph@osd0
    $ ssh-copy-id ceph@osd1
  6. Setup an SSH access configuration by editing the .ssh/config file. Example:
    Host osd0
       Hostname osd0
       User ceph
    Host osd1
       Hostname osd1
       User ceph
    Host mon0
       Hostname mon0
       User ceph
  7. Before proceeding, check that ping and host commands work for each node
    $ ping mon0
    $ ping osd0
    ...
    $ host osd0
    $ host osd1

Setup the cluster

Administration of the cluster is done entirely from the admin node.

  1. Move to a dedicated directory to collect the files that ceph-deploy will generate. This will be the working directory for any further use of ceph-deploy
    $ mkdir ceph-cluster
    $ cd ceph-cluster
  2. Deploy the monitor node(s) – replace mon0 with the list of hostnames of the initial monitor nodes
    $ ceph-deploy new mon0
    [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new mon0
    [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
    [ceph_deploy.new][DEBUG ] Resolving host mon0
    [ceph_deploy.new][DEBUG ] Monitor mon0 at 192.168.58.2
    [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
    [ceph_deploy.new][DEBUG ] Monitor initial members are ['mon0']
    [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.58.2']
    [ceph_deploy.new][DEBUG ] Creating a random mon key...
    [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
    [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
  3. Add a public network entry in the ceph.conf file if you have separate public and cluster networks (check the network configuration reference)
    public network = {ip-address}/{netmask}
  4. Install ceph in all the nodes of the cluster. Use the --no-adjust-repos option if you are using different apt configurations for ceph. NOTE: you may need to confirm the authenticity of the hosts if you're accessing them over SSH for the first time!
    Example (replace mon0 osd0 osd1 with your node names):

    $ ceph-deploy install --no-adjust-repos mon0 osd0 osd1
  5. Create monitor and gather keys
    $ ceph-deploy mon create-initial
  6. The content of the working directory after this step should look like
    cadm@mon0:~/my-cluster$ ls
    ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring  release.asc

Prepare OSDs and OSD Daemons

When deploying OSDs, consider that a single node can run multiple OSD Daemons and that the journal partition should be on a separate drive from the OSD for better performance.

  1. List disks on a node (replace osd0 with the name of your storage node(s))
    $ ceph-deploy disk list osd0

    This command is also useful for diagnostics: when an OSD is correctly mounted on Ceph, you should see entries similar to this one in the output:

    [ceph-osd1][DEBUG ] /dev/sdb :
    [ceph-osd1][DEBUG ] /dev/sdb1 other, xfs, mounted on /var/lib/ceph/osd/ceph-0
  2. If you haven’t already prepared your storage, or if you want to reformat a partition, use the zap command (WARNING: this will erase the partition)
    $ ceph-deploy disk zap --fs-type xfs osd0:/dev/sd<x>1
  3. Prepare and activate the disks (ceph-deploy also has a create command that should combine these two operations, but for some reason it was not working for me). In this example, we are using /dev/sd<x>1 as OSD and /dev/sd<y>2 as journal on two different nodes, osd0 and osd1
    $ ceph-deploy osd prepare osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
    $ ceph-deploy osd activate osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2

Final steps

Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment.

  1. Copy keys and configuration files (replace mon0 osd0 osd1 with the names of your Ceph nodes)
    $ ceph-deploy admin mon0 osd0 osd1
  2. Ensure proper permissions for admin keyring
    $ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  3. Check the Ceph status and health
    $ ceph health
    $ ceph status

    If, at this point, the reported health of your cluster is HEALTH_OK, then most of the work is done. Otherwise, try to check the troubleshooting part of this tutorial.
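
Another quick check worth knowing about (not covered in the steps above) is the OSD tree, which shows whether each OSD is up and where it sits in the CRUSH hierarchy. Run it from a node that has the admin keyring:

$ ceph osd tree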

Revert installation

There are useful commands to purge the Ceph installation and configuration from every node so that one can start over again from a clean state.

This will remove Ceph configuration and keys

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

This will also remove Ceph packages

ceph-deploy purge {ceph-node} [{ceph-node}]

Before getting a healthy Ceph cluster, I had to purge and reinstall many times, cycling through the “Setup the cluster”, “Prepare OSDs and OSD Daemons” and “Final steps” parts, while removing every warning that ceph-deploy was reporting.
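
For the example node names used throughout this tutorial, a full reset therefore looks like this (it removes data, keys and packages, so only do it when you really want to start from scratch):

ceph-deploy purgedata mon0 osd0 osd1
ceph-deploy forgetkeys
ceph-deploy purge mon0 osd0 osd1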


Managing hosts in a running OpenStack environment

How does one remove a faulty/un/re-provisioned physical machine from the list of managed physical nodes in OpenStack nova? Recently we had to remove a compute node in our cluster for management reasons (read: it went dead on us). But nova perpetually maintains the host entry, hoping that at some point in time it will come back online and start reporting its willingness to host new jobs.

Normally, things will not break if you simply leave the dead node entry in place. But it will mess up the overall view of the cluster if you wish to do some capacity planning. The resources once reported by the dead node will continue to show up in the statistics, and things will look all "blue" when in fact they should be "red".

There is no straightforward command to fix this problem, so here is a quick and dirty fix.

  1. Log on as administrator on the controller node
  2. Locate the nova configuration file, typically found at /etc/nova/nova.conf
  3. Locate the "connection" parameter – this will tell you which database the nova service uses (see the example below)
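
For example, on a typical installation the connection string can be pulled out of nova.conf like this (the MySQL-style value shown is only an illustration):

# grep connection /etc/nova/nova.conf
connection = mysql://nova:<password>@localhost/nova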

Depending on whether the database endpoint is MySQL or SQLite, modify your queries accordingly. The ones shown next are for a MySQL endpoint.

# mysql -u root
mysql> use nova;
mysql> show tables;

The tables of interest to us are "compute_nodes" and "services". Next, find the "host" entry of the dead node in the "services" table.

mysql> select * from services;
+---------------------+---------------------+------------+----+-------------------+------------------+-------------+--------------+----------+---------+-----------------+
| created_at          | updated_at          | deleted_at | id | host              | binary           | topic       | report_count | disabled | deleted | disabled_reason |
+---------------------+---------------------+------------+----+-------------------+------------------+-------------+--------------+----------+---------+-----------------+
| 2013-11-15 14:25:48 | 2014-04-29 06:20:10 | NULL       |  1 | stable-controller | nova-consoleauth | consoleauth |      1421475 |        0 |       0 | NULL            |
| 2013-11-15 14:25:49 | 2014-04-29 06:20:05 | NULL       |  2 | stable-controller | nova-scheduler   | scheduler   |      1421421 |        0 |       0 | NULL            |
| 2013-11-15 14:25:49 | 2014-04-29 06:20:06 | NULL       |  3 | stable-controller | nova-conductor   | conductor   |      1422189 |        0 |       0 | NULL            |
| 2013-11-15 14:25:52 | 2014-04-29 06:20:05 | NULL       |  4 | stable-compute-1  | nova-compute     | compute     |      1393171 |        0 |       0 | NULL            |
| 2013-11-15 14:25:54 | 2014-04-29 06:20:06 | NULL       |  5 | stable-compute-2  | nova-compute     | compute     |      1393167 |        0 |       0 | NULL            |
| 2013-11-15 14:25:56 | 2014-04-29 06:20:05 | NULL       |  6 | stable-compute-4  | nova-compute     | compute     |      1392495 |        0 |       0 | NULL            |
| 2013-11-15 14:26:34 | 2013-11-15 15:06:09 | NULL       |  7 | 002590628c0c      | nova-compute     | compute     |          219 |        0 |       0 | NULL            |
| 2013-11-15 14:27:14 | 2014-04-29 06:20:10 | NULL       |  8 | stable-controller | nova-cert        | cert        |      1421467 |        0 |       0 | NULL            |
| 2013-11-15 15:48:53 | 2014-04-29 06:20:05 | NULL       |  9 | stable-compute-3  | nova-compute     | compute     |      1392736 |        0 |       0 | NULL            |
+---------------------+---------------------+------------+----+-------------------+------------------+-------------+--------------+----------+---------+-----------------+

The output for one of our test clouds is shown above; clearly the node we want to remove is "002590628c0c". Note down the corresponding id of the erring host entry. This "id" value will be used as "service_id" in the following queries. Modify the example with your own specific data. It is important that you first remove the corresponding entry from the "compute_nodes" table and then the one in the "services" table, otherwise the deletion will fail due to foreign key dependencies.

mysql> delete from compute_nodes where service_id=7;
mysql> delete from services where host='002590628c0c';

Change the values above to the corresponding values in your case. Voilà! The erring compute entries are gone from the dashboard view and also from the resource consumption metrics.
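
If you want to double-check that the rows are really gone before trusting the dashboard, re-run the selects with your own values; both should now return an empty set:

mysql> select * from compute_nodes where service_id=7;
Empty set (0.00 sec)
mysql> select * from services where host='002590628c0c';
Empty set (0.00 sec)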

Nagios / Ceilometer integration: new plugin available

The famous Nagios open source monitoring system has become a de facto standard in recent years. Unlike commercial monitoring solutions, Nagios does not come as a one-size-fits-all system with thousands of monitoring agents and functions. Nagios is rather a small, lightweight monitoring system reduced to the bare essentials of monitoring: an event management and notification engine. Nagios is very lightweight and flexible, but it must be extended in order to become a solution that is valuable for your organization. Plugins are therefore a very important part of setting up a Nagios environment. Though Nagios is extremely customizable, there are no plugins that capture OpenStack-specific metrics such as the number of floating IPs or the network packets entering a virtual machine (even if there are some Nagios plugins to check that OpenStack services are up and running).

Ceilometer is the OpenStack component that captures these metrics. It measures typical performance indices like CPU utilization, memory allocation and disk space used for all VM instances within OpenStack. When an OpenStack environment has to be metered and monitored, Ceilometer is the right tool for the job. Though Ceilometer is a quite powerful and flexible metering tool for OpenStack, it lacks the capabilities to visualize the collected data.

It can easily be seen that Nagios and Ceilometer are complementary products which can be used in an integrated solution. There are no Nagios plugins to integrate the Ceilometer API with the Nagios monitoring environment (though eNovance has developed plugins to check that OpenStack components are alive) and thereby allow Nagios to monitor not only the OpenStack components, but also all the hosted VMs and other services.

The ICCLab has developed a Nagios plugin which can be used to capture metrics through the Ceilometer API. The plugin is available for download on GitHub. The Ceilometer call plugin can be used to capture a Ceilometer metric and to define thresholds for the Nagios alerting system.

In order to use the plugin, simply copy it into your Nagios plugins folder (e.g. /usr/lib/nagios/plugins/) and define a Nagios command in your commands.cfg file (in /etc/nagios/objects/commands.cfg). Don't forget to make the plugin executable for the Nagios user (chmod u+x).
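
For example, assuming the plugin file is called ceilometer-call (the name used in the command definition below), the installation boils down to:

$ sudo cp ceilometer-call /usr/lib/nagios/plugins/
$ sudo chmod u+x /usr/lib/nagios/plugins/ceilometer-call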

A command to monitor the CPU utilization could look like this:

define command {
command_name    check_ceilometer-cpu-util
command_line    /usr/lib/nagios/plugins/ceilometer-call -s "cpu_util" -t 50.0 -T 80.0
}

Then you have to define a service that uses this command.

define service {
check_command           check_ceilometer-cpu-util
host_name               <monitored-host>    ; replace with the host this service is attached to
normal_check_interval   1
service_description     OpenStack instances CPU utilization
use                     generic-service
}
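
After adding the command and service definitions, it is worth validating the configuration and reloading Nagios. The exact paths and service name depend on your installation; on a setup that keeps its configuration under /etc/nagios/ this might look like:

$ sudo nagios -v /etc/nagios/nagios.cfg
$ sudo service nagios restart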

Now Nagios can employ Ceilometer API to monitor VMs inside OpenStack.

EUCNC 2014 workshop: Mobile Cloud Infrastructures and Services (MCIS)

Workshop Motivation and Background

This workshop addresses the three main topics that are significant for the realization of the Future Internet Architecture: Mobile Networking, Network Function Virtualization and Service Virtualization.

While mobile communication networks were established decades ago and are still continuously evolving, cloud computing and cloud services have become a hot topic in recent years and are expected to have a significant impact on novel applications as well as on ICT infrastructures. Cloud computing and mobile communication networks have been considered separately from each other in the past, yet there are various possible synergies between them. This trend supports the use of cloud computing infrastructures as processing platforms for the signal and protocol processing of mobile communication networks, in particular for current (4G) and future (5G) generation networks. It opens several opportunities to optimize the performance of cloud applications and services as observed by mobile users, whose devices are connected to the cloud via wireless access networks. The trend is also in line with the emerging ETSI activities on Network Functions Virtualisation (NFV). The “Mobile Cloud Infrastructures and Services” workshop focuses on the thematic area that the EU project MCN is concentrating on and addresses emerging technologies in cloud services and mobile communication infrastructures. Emphasis will be put on possible integration scenarios and synergies between them.

Workshop Structure
Based on the successful format of the FUNEMS 2013 “Mobile Cloud Networking and Edge ICT” workshop, we plan to have a good mix of invited keynote talks from key participants in the EU FP7 projects MCN, iJOIN, CONTENT and FLAMINGO, and peer-reviewed abstracts of the papers to be presented. Moreover, the panel organized in 2013 was highly appreciated by the participants and is therefore proposed to be part of the programme in 2014. The speakers of the workshop will form the panel, and during the panel session the presented papers will be used as the starting point for the discussions. The programme associated with this workshop is as follows:

Mobile Cloud Infrastructures and Services session (200 minutes + 30 minutes break), Chair Thomas Michael Bohnert (Zurich University of Applied Sciences)

  • Thomas Michael Bohnert (Zurich University of Applied Sciences), Welcome speech: EU FP7 Mobile Cloud Networking (MCN) (15 minutes)
  • Anna Tzanakaki (University of Bristol) (Invited paper), Title of invited speech: “EU FP7 CONTENT: Virtualizing converged network infrastructures in support of mobile cloud services” (15 minutes)
  • Peter Rost (NEC) (Invited paper), Title of invited speech: “EU FP7 iJOIN: Benefits and challenges of cloud technologies for 5G” (15 minutes)
  • Filip De Turck (University of Gent) (Invited paper), Title of invited speech: “EU FP7 FLAMINGO: Network monitoring in virtualized environments” (15 minutes)
  • Joao Soares (Portugal Telecom Inovacao), Andy Edmonds (Zurich University of Applied Sciences), Giada Landi (Nextworks), Luigi Grossi (Telecom Italia), Julius Mueller (Fraunhofer FOKUS), Frank Zdarsky (NEC Laboratories Europe), Title of presentation: “Cloud computing and SDN networking for end to end virtualization in cloud-based LTE systems” (20 minutes)
  • Desislava Dimitrova (University of Bern), Lucio S. Ferreira (INOV-INESC | IST), André Gomes (University of Bern | One Source, Consultoria Informática Lda.), Navid Nikaein (EURECOM), Alexander Georgiev (CloudSigma), Anna Pizzinat (Orange), Title of presentation: “Challenges ahead of RAN virtualization” (20 minutes)

Coffee Break (30 minutes)

  • Tarik Taleb (NEC Laboratories Europe), Marius Corici (Fraunhofer FOKUS), Carlos Parada (Portugal Telecom Inovacao), Almerima Jamakovic (University of Bern), Simone Ruffino (Telecom Italia), Georgios Karagiannis (University of Twente), Morteza Karimzadeh (University of Twente), Thomas Magedanz (Fraunhofer FOKUS), Title of presentation: “Virtualizing the LTE Evolved Packet Core (EPC)” (20 minutes)
  • André Gomes (University of Bern | One Source, Consultoria Informática Lda.), Santiago Ruiz (Soft Telecom), Giuseppe Carella (TU Berlin / Fraunhofer FOKUS), Paolo Comi (Italtel), Paolo Secondo Crosta (Italtel), Title of presentation: “Cloud-based Orchestration of Multimedia Services and Applications” (20 minutes)

Panel discussions (60 minutes)

Previous Editions

The previous edition of this workshop was entitled “Mobile Cloud Networking and Edge ICT Services” and was organized during FUNEMS 2013, http://www.futurenetworksummit.eu/2013/. The workshop lasted half a day and was organised in two sessions. The current edition focuses mainly on one of these sessions, “Mobile Cloud Infrastructures and Services”. The workshop was successful and attracted a relatively high number of attendees compared to other parallel workshops: 25-50 participants were in the room throughout the Mobile Cloud Networking and Edge ICT Services 2013 sessions.

Workshop Audience

The target audience is the telecommunication infrastructure and cloud computing research and industry communities, with an emphasis on researchers and organizations involved in European FP7 projects. The workshop organizers participate, among others, in the EU FP7 IP projects Mobile Cloud Networking (MCN), CONTENT, iJOIN and FLAMINGO, and in standardization bodies such as the Open Networking Foundation and ETSI NFV (Network Functions Virtualisation). It is therefore expected that a significant part of the audience and participants will come from the communities involved in these standardization bodies and from the EU FP7 projects that are and will be cooperating with the EU FP7 IP project “Mobile Cloud Networking” (MCN).

XiFi Developer Event – Berlin, May 15 2014

The XiFi project, which we have just recently joined, is starting the process of reaching out to the larger community – primarily developers – to let them know about all the cool capabilities that are offered by the Future Internet platform.


Getting Started with OpenShift and OpenStack

In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and, of course, automation. So that developers can get working quickly with their own local infrastructure, we've spent time setting up an automated workflow that uses Vagrant and Puppet to set up both OpenStack and OpenShift. If you want to experiment with both OpenStack and OpenShift locally, simply clone this project:

$ git clone https://github.com/dizz/os-ops.git

Once it has been cloned you’ll need to initialise the submodules:

$ git submodule init
$ git submodule update

After that you can begin the setup of OpenStack and OpenShift. You'll need an installation of VirtualBox and Vagrant.
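
On an Ubuntu workstation, for example, both can usually be installed from the standard repositories (the packaged versions may lag behind the official downloads from virtualbox.org and vagrantup.com):

$ sudo apt-get install virtualbox vagrant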

OpenStack

  • run in controller/worker mode:
      $ vagrant up os_ctl
      $ vagrant up os_cmp
    

There are some gotchas specific to OpenStack, so look at the known issues in the README. Otherwise, open your web browser at http://10.10.10.51.

OpenShift

You’ve two OpenShift options:

  • run all-in-one:
      $ cd os-ops
      $ vagrant up ops_aio
    
  • run in controller/worker mode:
      $ cd os-ops
      $ vagrant up ops_ctl
      $ vagrant up ops_node
    

Once done, open your web browser at https://10.10.10.53/console/applications. There's more info in the README.

In the next post we'll look at getting OpenShift running on OpenStack quickly, using two approaches: directly with Puppet, and using Heat orchestration.

Invitation to the Upcoming Workshop on Scientific Computing in the ICCLab Cloud

Workshop Date: Wednesday May 14th from 10:00 to 14:00, room ‘TV 401’

The ICCLab is pleased to invite you to the upcoming Workshop on Scientific Computing in the ICCLab Cloud. This workshop will focus on how to leverage the ICCLab Cloud infrastructures for executing scientific applications in a distributed, high performance environment.

The workshop’s agenda will include several talks describing applications from different areas of science (physics, mathematics, machine learning, etc.), highlighting their requirements from the ICT perspective. The workshop will also include a comprehensive overview of Hadoop and a tutorial on how to deploy, configure and use a Hadoop cluster on the ICCLab Cloud through the Savanna OpenStack project.

The full program is to be announced.

To register send an email to Diana Moise <mois@zhaw.ch>

We look forward to your attendance.

Access to our Cloud

Request access to our OpenStack testbeds.

The access request form asks for your full name, department, institutional email, the testbed you need access to (Internal Testbed (research) or External Testbed (stable)) and your intended usage. Requests are sent to harh@zhaw.ch.

Please Note
Make sure you provide a valid institutional email address above. If you provide an email ending in yahoo.com, gmail.com or another private domain, your request will not be processed. Please expect a few days of delay before our administrators create and enable a valid cloud account for you.
