Month: July 2012

FI-PPP FI-WARE (KIARA)

Today, it is fair to say that nearly every application depends on distributed and service-based computing of some sort. This is most apparent in the mobile and cloud computing areas, but the trend is quickly reaching essentially all areas of computing. Despite this natural and comprehensive demand, there is to date no established middleware that provides dependable high performance over a wide range of configurations and deployments and offers rich built-in QoS and security features, while at the same time facilitating the development of diverse applications across a wide range of heterogeneous devices, infrastructures, systems, and domains. This mismatch between supply and demand became apparent early on within FI-WARE (www.fi-ware), which is developing a large-scale, distributed, cross-technology Future Internet platform for a large set of Use Case projects in different application domains.

The goal of KIARA is to provide a “Middleware for efficient and QoS/Security-aware invocation of services and exchange of messages” for the FI-PPP program and beyond. KIARA builds on top of RTI-DDS, a well-established, proven, and high-performance product from RTI, and combines it with innovative research results to provide an advanced middleware layer that targets the specific requirements of the Future Internet.

KIARA improves on the state of the art in multiple ways:

  • KIARA provides radical improvements in performance and scalability not only for traditional Web services, but also for distributed applications in general – ranging from tiny devices in the Internet of Things to high-performance computing applications.
  • KIARA improves developer productivity and greatly simplifies application integration using a simple-to-use IDL for specifying the communication contract between peers as well as a novel API that allows applications to communicate in terms of their own data structures.
  • KIARA dynamically and transparently selects the optimal communication mechanisms, protocols, and data representations to be used between two peers, including the traditional SOAP/REST protocols but also optimized binary formats and mechanisms like pointer forwarding, shared memory, and the use of specialized network infrastructures. An embedded compiler generates, at run time, highly optimized code that transfers messages directly from application data structures to the network.
  • KIARA uses simple, high-level specifications of QoS and security requirements from the application for automatically selecting the best communication strategy, thus clearly separating the high-level concerns of the application/developer from the concrete and varying technical details, such as the available network and other capabilities and resources.
  • KIARA, for the first time, uses a “Secure By Design” approach for the communication architecture, thus aiming to eliminate network connections as the dominant source of security threats.

This combination of a well-proven, existing middleware product with unique features based on latest research results forms an ideal basis for efficient and QoS/security-aware communication within FI-WARE, the FI-PPP program, and for general distributed applications.

KIARA Architecture Sketch based on RTI-DDS

KIARA is a European research project funded by the European Commission. As a winner of the first FI-WARE Open Call for additional beneficiaries, it is an integral part of the FI-WARE project. It runs over a period of 20 months with a total investment of roughly 1.6 million euro.

The partners are:

– Zürcher Hochschule für Angewandte Wissenschaften (ZHAW) – Coordinator
– Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
– Universität des Saarlandes – Center for IT Security, Privacy and Accountability (USAAR-CISPA)
– Proyectos y Sistemas de Mantenimiento SL (EPROS)

Video: ICCLab Presenting on Open Standards, OpenStack @ /ch/open

The ICCLab team presented and gave a live demo of our [OpenStack cluster](http://www.cloudcomp.ch/2012/06/whats-powering-the-icclab/) at the /ch/open [Open Cloud Day](http://www.ch-open.ch/index.php?id=1034). It was an excellent day with many viewpoints, ranging from governmental perspectives all the way down to Infrastructure as a Service and automation.

The presentation given in this video and more details of the talk can be [found in this article](http://www.cloudcomp.ch/2012/06/icclab-presented-at-open-ch/).

Foreman, Puppet and OpenStack


So all our work in the previous articles has been leading up to this one. In this article we’ll describe how you can deploy a full multi-node [OpenStack](http://www.openstack.org) cluster beginning from bare metal using [Foreman](http://www.theforeman.org) and [puppet](http://www.puppetlabs.com). Before continuing we should note what exactly ‘bare metal’ is in this context: it refers to physical server hardware that has not yet been provisioned with an operating system. When provisioning this bare metal, it is assumed that the underlying network has already been set up (e.g. L2 configurations).

For the purposes of this article all setup will happen in a virtualised environment just as in the previous articles. It will also draw upon those previous articles.

The first requirement is to have a successfully running installation of Foreman. You should find all the information on how to do this in [the articles](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/) on [setting up Foreman](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-two/).

Once Foreman has been set up, the next thing you will need to do is deploy the OpenStack puppet modules as described in [the article on puppet and OpenStack](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/).

Note that you should only execute the steps up to and including `git clone https://github.com/puppetlabs/puppetlabs-openstack` in the “Select Puppet Modules” section. Once you have completed this, follow the procedure below.

The OpenStack modules require that `storeconfigs` is enabled. To do this you need to edit the foreman configuration, in our case the foreman-installer manifest located at `/etc/puppet/modules/common/foreman/params.pp`, and set `storeconfigs => true`. Once this is done you will need to run the foreman-installer again.
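
An illustrative sketch of that change (the exact declaration syntax in `params.pp` may differ, so treat this as an assumption):

```bash
# Sketch only: enable storeconfigs in the foreman-installer parameters.
# The exact variable syntax inside params.pp is assumed here.
sudo sed -i 's/storeconfigs *=> *false/storeconfigs => true/' \
    /etc/puppet/modules/common/foreman/params.pp
# Then re-run the foreman-installer as described in the earlier Foreman articles.
```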

For the purposes of this article we will create our own isolated puppet environment named `iaas`. To do this:

[gist id=3119051]
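
In essence this boils down to creating a module directory for the new environment (the path below assumes the usual `/etc/puppet/environments` convention):

```bash
# Create a dedicated module directory for the 'iaas' puppet environment.
sudo mkdir -p /etc/puppet/environments/iaas/modules
```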

Then configure puppet so that it knows of this environment by editing `/etc/puppet/puppet.conf` and inserting this definition:

[gist id=3119055]
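
A minimal sketch of such a definition, assuming the module path created above:

```bash
# Register the 'iaas' environment and point it at its own module path.
sudo tee -a /etc/puppet/puppet.conf <<'EOF'

[iaas]
modulepath = /etc/puppet/environments/iaas/modules
EOF
```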

Once you have defined the `iaas` environment there are two things to do:

1. Install all the necessary OpenStack puppet modules.

You will already have the puppet modules checked out if you followed the [puppet and OpenStack article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). Copy the OpenStack modules as follows:

[gist id=3119076]
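
As a rough sketch, assuming the repository was cloned into your home directory:

```bash
# Copy the cloned puppetlabs-openstack module into the iaas environment.
sudo cp -R ~/puppetlabs-openstack /etc/puppet/environments/iaas/modules/openstack
```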

Lastly in this step, you need to change a variable in the `Rakefile` located in the folder into which the OpenStack modules were cloned: change `default_modulepath` to the following:

[gist id=3119084]
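
In other words, point it at the `iaas` module path; a sketch (the surrounding Rakefile syntax may differ):

```bash
# Point the Rakefile's default module path at the iaas environment so that
# 'rake modules:clone' places the dependent modules there.
sed -i "s|^default_modulepath.*|default_modulepath = '/etc/puppet/environments/iaas/modules'|" Rakefile
```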

Now execute the rake command `rake modules:clone` as in the [puppet OpenStack article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). All other modules that OpenStack requires will now be placed in the `iaas` environment.

[gist id=3119059]

2. Import the new environment and all associated puppet classes into Foreman.

In the Foreman web interface, import the new `iaas` environment and all its associated modules. Do this by navigating to “Other → Puppet Classes” and clicking “Import new puppet classes and environments”. A screen will then show the new environment to be imported, along with its modules, and await your confirmation.

Now that these steps are complete, you will want to create some class definitions in a module that you can apply to new Foreman-managed hosts. Below is an example module structure, followed by the contents that define the OpenStack all-in-one, controller, and compute node roles.

***Module structure***:

[gist id=3119093]

***Module Content***:

* [`all_in_one.pp`](https://gist.github.com/3118975): This will set up an all-in-one installation of OpenStack.
* [`controller.pp`](https://gist.github.com/3118972): This will set up the controller on a specified host.
* [`compute.pp`](https://gist.github.com/3118973): This will set up the compute services on a specified host.
* [`params.pp`](https://gist.github.com/3118970): This holds some *basic* common configuration for `controller.pp` and `compute.pp`.

Note that as we’re setting this up in a virtualised environment, `libvirt_type` is set to `qemu`. If you were deploying to bare metal, this value would be `kvm` or another suitable hypervisor. Also note that this file looks almost identical to the `site.pp` file we previously used.

With this module created, you will need to import it just as you did when importing the `iaas` environment. Doing this will simply update the current environment.

Now that you have imported everything, you can provision a host with Foreman, selecting the `icclab::controller` resource class for the new host after ensuring the ‘Environment’ field is set to ‘iaas’. Once the new host boots over PXE, installs the selected OS (Ubuntu 12.04 in this case) and the post-install script executes, the OpenStack resource class will be applied to the host and you will have a brand new and shiny OpenStack controller node.

Adding compute nodes is almost exactly the same process, except that rather than selecting `icclab::controller` you select `icclab::compute`. Simple, eh?! You can add as many compute nodes to this arrangement as you have hardware for.

## Issues Encountered
OK, so not everything is quite that simple: there was one major issue encountered while doing this work.

Puppet’s support for environments has a number of associated issues. The one we hit manifests when a module is applied to a newly provisioned host, and is shown by the following error message:

[gist id=3119096]

What this error is telling us is that the puppet resource type cannot be found on the puppet module path. This is because it lives in a location that puppet is unaware of, namely the `iaas` environment. You will also encounter the same issue if you use the `production` or `development` environments that Foreman sets up. There are more details about this issue on the puppetlabs tracker [under bug #4409](http://projects.puppetlabs.com/issues/4409). Thankfully there is a workaround, pointed out in the thread, and it is the following:

[gist id=3119097]

Here we manually place the types and providers in a place where puppet will find them. Naturally this solution is not perfect; however, it can be further automated with a `cron` job or executed whenever there are updates to the affected modules.
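
An illustrative sketch of such automation, assuming the environment layout used in this article:

```bash
#!/bin/sh
# Sketch: copy plugin code (custom types/providers) from the iaas environment
# into the default module path, where puppet will find it.
for mod in /etc/puppet/environments/iaas/modules/*; do
    [ -d "$mod/lib" ] || continue
    mkdir -p "/etc/puppet/modules/$(basename "$mod")"
    cp -R "$mod/lib" "/etc/puppet/modules/$(basename "$mod")/"
done
```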

Talk about the FI-PPP at the 3rd European Summit on the Future Internet

The 3rd European Summit on the Future Internet was hosted by TIVIT in Helsinki. After attending the first event, where he presented SAP’s Future Internet vision, Thomas M. Bohnert was invited again, this time presenting the latest insights into the evolution, status, and near future of the FI-PPP from a program-level (CONCORD) perspective.

The talk was recorded and the video stream can be accessed here: T. M. Bohnert, “The FI-PPP after One Year: Lessons Learned, Challenges and Opportunities Ahead”, 3rd European Future Internet Summit, Helsinki, June 2012

ICCLab to Present “From Bare-Metal to Cloud” at EGI Technical Forum 2012

The ICCLab and GWDG will present on the topic of “From Bare-Metal to Cloud” at the EGI Technical Forum 2012 in Prague.

The ICCLab and GWDG had a shared, common problem, namely how to deploy infrastructural service technology (e.g. OpenStack, CloudStack, etc.) across a large number of servers with the least amount of user interaction (i.e. automated) during the deployment process. The solution to be presented allows for the easy deployment of operating systems onto bare metal (physical servers) and the deployment and management of specified software packages upon those provisioned bare-metal systems. To accomplish this, the combination of Foreman and Puppet was chosen. For the work, it was assumed that the network architecture, partitioning, etc. were already determined.

This presentation will detail the measures that have been taken to automate the provisioning of OpenStack clusters at the two research labs. The presentation will describe the technology stack, discuss the individual technologies used, and share the resulting experience. It will conclude with a demonstration of provisioning a multi-node, clustered OpenStack deployment upon virgin bare-metal servers.

Puppet and OpenStack: Part Two

In our [last article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/) we set up OpenStack on a single node using puppet. In this article we’ll look at setting up OpenStack in a clustered environment. For this we’ll use the following configuration of VMs:

* The hypervisor is [VirtualBox](http://www.virtualbox.org).

* 1 OpenStack **Controller** VM.

This will host all the main services of OpenStack and will not run any virtual machines. It will be running Ubuntu 12.04 and acting as the puppetmaster. It will require 3 adapters:

– `eth0`: Host-only. Assign this an IP that matches the VirtualBox virtual switch.
– `eth1`: Host-only. Leave this unassigned. OpenStack will look after this and place a Linux kernel bridge upon it, named `br100`.
– `eth2`: NAT. Setup this adapter as [shown in previous articles](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/).

* 2 OpenStack **Compute Node** VMs.

These will act only as nodes that provide virtual machines and storage volumes when requested by the OpenStack controller VM. They will be running Ubuntu 12.04 and acting as puppet agents (slaves). These VMs will require 2 adapters:

– `eth0`: Host-only. Assign this an IP that matches the VirtualBox virtual switch.
– `eth1`: Host-only. Leave this unassigned. OpenStack will look after this.

In both cases above, `eth0` and `eth1` can share the same or different networks. In our case we set them to different virtual networks (`vboxnet0` and `vboxnet1` respectively).
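
For reference, here is a sketch of how the controller VM's three adapters could be wired up with `VBoxManage` (the VM name `controller` is an assumption; run this while the VM is powered off):

```bash
# Attach the three adapters described above to the controller VM.
VBoxManage modifyvm controller --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm controller --nic2 hostonly --hostonlyadapter2 vboxnet1
VBoxManage modifyvm controller --nic3 nat
```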

## OpenStack Controller Setup

Once the controller VM is up and running and you have configured `eth2`, the next task at hand is to configure puppet. You can use the same puppet configuration as was shown in [the article on creating an OpenStack all-in-one installation](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). The hostname configuration is also the same; however, as there are two other nodes in this setup, you should place them in your `/etc/hosts` file (unless you’re using a setup with a DNS server). With your puppet and hostname resolution configured, install the puppet modules as shown in the previous article. Where things begin to differ is in the configuration of `/etc/puppet/manifests/site.pp`.

In the `site.pp` file we will be using two different `node` definitions: one for the controller node and a second for compute nodes. For the controller node we will explicitly set the node name to the fully qualified domain name of the controller VM. The definition is then:

[gist id=3029148]

In the second case we are going to set the node name to a regular expression, so that any host that is certified by the puppetmaster and whose hostname matches the expression can take part and provide virtual machine capabilities. The definition is then:

[gist id=3029150]
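
As a skeleton sketch of both definitions (class bodies elided; the actual class declarations come from the example `site.pp` shipped with the modules, and the compute hostname pattern is an assumption):

```bash
# Skeletons of the two node definitions for /etc/puppet/manifests/site.pp.
sudo tee -a /etc/puppet/manifests/site.pp <<'EOF'
node 'controller.cloudcomplab.ch' {
  # controller class declarations from the example site.pp go here
}

node /^compute\d+\.cloudcomplab\.ch$/ {
  # compute class declarations from the example site.pp go here
}
EOF
```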

With your nodes defined in `site.pp`, you will need to set some particular variables:

* Global:
  * set `$public_interface` to `'eth0'`
  * set `$private_interface` to `'eth1'`
* Controller-specific:
  * set `$controller_node_address` to the IP address you assigned to `eth0`; in our case it's `'192.168.56.2'`
  * set `floating_range` to `'192.168.56.128/25'`; this will give you enough floating IP addresses in this test setup
* Compute node-specific:
  * set `libvirt_type` to `'qemu'`

Now either ensure the puppet agent is running on the controller node or run the puppet agent in the foreground. Once the puppet agent on the controller node contacts the puppetmaster it will install all the necessary services for an OpenStack controller.

## OpenStack Compute Node Setup
There is little here for you to do other than ensure that the puppet agent process is configured properly and can contact the puppetmaster. When each compute node first contacts the puppetmaster, it will issue a new certificate request and wait until the puppetmaster signs the certificate. You can quickly sign these by issuing the command:

[gist id=3029151]
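
A typical invocation on the puppetmaster looks like this (a sketch; check the pending list before signing everything blindly):

```bash
# List outstanding certificate requests, then sign them all.
puppet cert list
puppet cert sign --all
```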

Again, as in the case of the all-in-one installation, the `nova-volumes` group is not set up, so [follow the previous steps to set it up](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). Once the puppet agent on each of the compute nodes contacts the puppetmaster, it will install all the necessary services for an OpenStack compute node.

## Validating the Install
Once all services on all VMs have been installed, you should be able to verify that all nodes are operational. To do so, execute the following:

[gist id=3029153]

You should see the following output:

[gist id=3029145]
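
One common way to run this check, assuming the standard nova admin tooling, is the following; operational nodes show a `:-)` in the state column:

```bash
# Load the admin credentials puppet created, then list the nova services.
source /root/openrc
nova-manage service list
```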

You may want to run some VMs at this stage, in which case you will need to import a VM image into glance. This is [detailed here](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/).

## Next up
We’ve now successfully installed a clustered OpenStack using puppet. In the next article we’ll look at expanding this to combine Foreman, puppet, and OpenStack so that we can go from bare metal to a multi-node, clustered OpenStack installation. Stay tuned!

***Note***: If you want copies of the VMs used in this article, ping [@dizz](http://www.twitter.com/dizz).

Puppet and OpenStack: Part One

In this guide we’ll explain how you can set up a simple OpenStack all-in-one installation using puppet. We’ll be using a virtual machine to simulate the hardware and the same network configuration as was used [in the article on Foreman](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/). We’ll also be using Ubuntu 12.04.

By default, nameserver and domain settings are automatically managed by `resolvconf` (this is due to the primary adapter being managed by DHCP). As puppet relies on the fully qualified host name of the nodes it’s installed on, you should, if not using other means, configure `resolvconf` so that it does not overwrite your domain and nameserver settings. To do this, edit `/etc/resolvconf/resolv.conf.d/head` and place the following content:

[gist id=3029170]
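
A sketch with example values (the domain follows this article's naming; substitute your own nameserver):

```bash
# Write static domain/nameserver settings that resolvconf will preserve.
sudo tee /etc/resolvconf/resolv.conf.d/head <<'EOF'
domain cloudcomplab.ch
search cloudcomplab.ch
nameserver 8.8.8.8
EOF
sudo resolvconf -u   # regenerate /etc/resolv.conf with these settings
```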

Of course, feel free to use whatever other values you like. Performing this configuration will ensure that resolvconf always generates settings with your values.

If you are not using your own managed DNS server, you should place IP-host aliases in your `/etc/hosts` file. Here are the relevant file entries used in this article:

[gist id=3029172]
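
In essence (the IP and hostname follow this article's setup; add one line per node):

```bash
# Make the controller's name resolvable without a DNS server.
echo '192.168.56.2  controller.cloudcomplab.ch  controller' | sudo tee -a /etc/hosts
```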

Once you have the virtual machine installed, it might help to take a snapshot of it at this stage so that you can roll back to a fresh state.

## Install Puppet

Execute the following:

[gist id=3029174]
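
On Ubuntu 12.04 this is essentially the following:

```bash
# Install both the puppet agent and the puppetmaster from the Ubuntu archive.
sudo apt-get update
sudo apt-get -y install puppet puppetmaster
```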

## Configure Puppet
In order to deploy OpenStack, we will be using puppet in both agent and master mode.

### Agent Configuration
Configure the puppet agent. Edit `/etc/puppet/puppet.conf` so that the `[agent]` section has the following values:

[gist id=3029176]
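
A sketch of the relevant values (the server name follows this article's naming; merge these into the existing `[agent]` section rather than duplicating it):

```
[agent]
# the puppetmaster to contact
server = controller.cloudcomplab.ch
# sync custom types/providers from the modules to the agent
pluginsync = true
```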

### Master Configuration
Configure the puppet master. Edit `/etc/puppet/puppet.conf` so that the `[master]` section has the following values:

[gist id=3029177]

***Note*** that this particular configuration will change when we integrate with Foreman in the article describing Foreman, puppet and OpenStack integration.

## Select Puppet Modules

We’ll use the [official Puppetlabs OpenStack modules](https://github.com/puppetlabs/puppetlabs-openstack). Install the prerequisites and checkout the OpenStack modules from Github:

[gist id=3029179]

Once done you’ll need to follow the setup instructions (they’re repeated here for completeness):

[gist id=3029180]
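
For reference, the documented steps amount to something like this (a sketch, assuming the Rakefile of that era):

```bash
# From inside the cloned repository, pull in all dependent puppet modules.
cd puppetlabs-openstack
rake modules:clone
```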

After executing these steps, the rake script will have placed the other required puppet modules in `/etc/puppet/modules/`.

## Assign the OpenStack Role
We now have to tell puppet that the current VM is to run as a complete OpenStack instance. To do this, copy the example `site.pp` file to `/etc/puppet/manifests/` and then edit it so that:

1. the node definition for the VM reads as `node /controller.cloudcomplab.ch/ {`
2. `libvirt_type` is set to `qemu`
3. if you want further logging information to help you, then set `$verbose` to `true`
4. you might want to specify a floating (i.e. static) IP range. For this setup you can add and set `floating_range` to `'192.168.56.128/25'`

Once done, wait! This setup will run perfectly fine until you attempt to invoke the services of `nova-volume`. `nova-volume` is not fully set up, as there is no LVM volume group (`nova-volumes`) present. To set this up manually, execute these steps as root:

1. `truncate -s 2052M /root/nova-vol-file`
2. Find the loop-back device associated with `nova-vol-file`:

`losetup -f --show /root/nova-vol-file`

In this setup the value is `/dev/loop1`.

3. Now, finally, create the LVM volume group:

`vgcreate nova-volumes /dev/loop1`

You may need to install the LVM tools: `apt-get -y install lvm2`

**Note** that this LVM mapping is reset on reboot.

Now that you have set up LVM and configured puppet to install OpenStack, do just that! Execute:

`puppet agent --no-daemonize --verbose`

You’ll see a lot of output as puppet installs OpenStack, but at the end of the process you will be able to access your OpenStack installation at `http://192.168.56.2`.

## Using OpenStack
Now that you’ve seen the shiny UI of OpenStack you will have noticed that there are no VM images to run. To get a VM image into OpenStack do the following:

1. Import the authentication credentials into your shell (puppet was kind enough to create these for you):

`source /root/openrc`

2. Download a VM image, CirrOS in this case:

`wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img`

3. Import the CirrOS image into glance:

`glance add name="CirrOS 0.3 VM Image" disk_format=qcow2 container_format=ovf < cirros-0.3.0-x86_64-disk.img`

4. Go back to your web user interface and you will see that there is now a VM image to instantiate and execute.

By the way, the puppetlabs OpenStack github repository [has some decent documentation](https://github.com/puppetlabs/puppetlabs-openstack).

## Next up
We've now successfully installed an "all-in-one" OpenStack using puppet. In the next article we'll look at expanding this to a multi-node scenario. Stay tuned!