Transducing service descriptions into SaaS prototypes

Service prototyping is still a young topic when it comes to cloud services, web services or other network services. Researchers are concerned with defining the topic more accurately and finding out which metrics matter, for instance time, quality or cost. New definitions, methods and tools will result from this process.

In a previous blog post, we discussed automating service and API prototyping tools on the scripting level, ensuring that all commands to install dependencies and to configure the software are executed properly, in order and without omission. The tool in focus was Ramses, which turns RAML web service descriptions into executable prototypes. This post takes the idea further to the SaaS and web application level: a convenient web application, accessible from any browser, should offer guided generation of a prototypical service based on just the service interface description, which specifies its resources, methods and data types.
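
As a toy illustration of the underlying idea (this is not Ramses code; the resource, port and data are invented), a small, RAML-like description of resources and methods can be mapped directly onto live HTTP endpoints:

    # Toy sketch: serve a minimal service description as live endpoints.
    # Not Ramses itself; the description, port and data are placeholders.
    import json
    from wsgiref.simple_server import make_server

    DESCRIPTION = {
        "/users": {"GET": lambda: [{"id": 1, "name": "alice"}]},
    }

    def app(environ, start_response):
        resource = DESCRIPTION.get(environ["PATH_INFO"], {})
        handler = resource.get(environ["REQUEST_METHOD"])
        if handler is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps(handler()).encode("utf-8")]

    make_server("", 8000, app).serve_forever()

Ramses does the real version of this, deriving a complete, executable prototype from the RAML description.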


OpenStack Heat plugin for Apache CloudStack

This blog post presents a plugin for OpenStack Heat which adds support for Apache CloudStack resources and thus enables template-based orchestration on CloudStack using Heat. As this plugin extends Heat's standard resource type list, it can also be used within our Hurtle orchestrator for providing your application as a service, or with any other application built on Heat. This work follows from our earlier work in which we developed a Heat plugin for SDC.

Nagios OpenStack Installer – Automated monitoring of your OpenStack VMs

There are many tools available which can be used to monitor operation of the OpenStack infrastructure, but as an OpenStack user you might not be interested in monitoring OpenStack itself. Your primary interest should be the operation of the VMs that are hosted on OpenStack. Nagios OpenStack Installer is a tool for exactly that purpose: it deploys a Nagios VM inside the OpenStack environment and configures it to monitor all VMs that you own.

Nagios OpenStack Installer configures your OpenStack monitoring environment remotely from your desktop PC or laptop. In order to use Nagios OpenStack Installer you need to fulfil the following prerequisites.

  • You must have an SSH key for securely accessing the Nagios VM and the VMs you own, and you must know the SSH credentials to access the VMs.
  • You must know your OpenStack user account (name and ID), your OpenStack password, the OpenStack Keystone authentication URL and the OpenStack tenant (“project”, name and ID) you work with.
  • You must be able to create a VM that serves as Nagios VM and you must own a publicly available IP (“floating IP”) to make the Nagios dashboard accessible to the outside world.
  • Nagios OpenStack Installer is a Python tool and requires some Python packages. Make sure to install Python 2.7 on your desktop. Additionally you need the following packages:
    • pip: The package manager to install Python packages from the PyPI repository (Windows users should refer to the pip developers' “get pip” manual to install pip; Cygwin users are recommended to follow these guidelines on the atbrox blog).
    • fabric: This package is used to access OpenStack VMs via SSH and remotely execute tasks on the VMs.
    • python-keystoneclient: To access the OpenStack Keystone API and authenticate to your OpenStack environment.
    • python-novaclient: To manage VMs which are hosted on OpenStack.
    • cuisine: A lightweight configuration management alternative to tools like Puppet or Chef. cuisine is required to manage the packages and configuration files on the Nagios VM and the monitored VMs (see the sketch after this list).
    • pickle: pickle is an object serialization module that can store objects and their current state in a file dump; it is part of Python's standard library, so it needs no separate installation. Object serialization is used to hand over the list of VMs which should be monitored.
    • We recommend using pip to install the required packages, since pip automatically installs package dependencies.
  • You must have Git downloaded and installed.
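
To illustrate how fabric and cuisine play together, here is a minimal sketch (not part of the installer itself; user, key path, package and host IP are placeholders): fabric opens the SSH connection and runs the task, while cuisine makes sure the required package is present on the remote VM.

    # Minimal fabric + cuisine sketch; all values are placeholders.
    from fabric.api import env, execute
    import cuisine

    env.user = "ubuntu"                 # SSH user on the monitored VM
    env.key_filename = "~/.ssh/id_rsa"  # SSH key from the prerequisites

    def ensure_nrpe():
        # Ensure the NRPE agent, which the Nagios server queries, is installed.
        cuisine.package_ensure("nagios-nrpe-server")

    execute(ensure_nrpe, hosts=["192.0.2.10"])  # placeholder VM IP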

After having installed the prerequisites on your local PC or laptop, you can use Nagios OpenStack Installer by performing the following steps.

  1. Create a new directory and clone the Nagios OpenStack Installer GitHub repository into it:
     git clone https://github.com/icclab/kobe6661-nagios-openstack-installer.git
  2. Edit the credentials in install_autoconfig.py, remote.py, remote_server_config.py and vm_list_extractor.py to match your OpenStack and SSH credentials.
  3. Run remote_server_config.py from a Python console. This installs and configures the Nagios server on your Nagios VM. After installation you should be able to access the Nagios dashboard by pointing your web browser to “http://<your_nagios_public_ip>/nagios” and providing your Nagios login credentials.
  4. Run vm_list_extractor.py from a Python console. This will extract the list of VMs on OpenStack that should be monitored and save the list as a pickle file dump on your computer (a sketch of this step follows below the list).
  5. Run install_autoconfig.py from a Python console. This will upload the Python scripts required to automatically update the Nagios configuration in case of changes in the OpenStack VM environment (nagios_config_updater.py, config_transporter.py, config_generator.py, vm_list_extractor.py). Additionally it will run these scripts on the Nagios VM so that Nagios captures the VMs which should be monitored, installs and runs the required Nagios and NRPE plugins on these VMs, and reconfigures and restarts the Nagios server to monitor these VMs remotely.
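
As a rough sketch of what the extraction step does (the actual logic lives in vm_list_extractor.py; credentials, URL and file name here are placeholders), python-novaclient lists the tenant's servers and pickle serializes the result:

    # Sketch of VM list extraction; credentials and paths are placeholders.
    import pickle
    from novaclient.v1_1 import client

    # Authenticate against Keystone with the account data from the prerequisites.
    nova = client.Client("myuser", "mypassword", "mytenant",
                         "http://keystone.example.com:5000/v2.0")

    # Collect the name and network addresses of every VM the tenant owns ...
    vms = [(server.name, server.networks) for server in nova.servers.list()]

    # ... and dump them so the scripts on the Nagios VM can pick them up.
    with open("vm_list.pkl", "wb") as f:
        pickle.dump(vms, f)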

Now the Nagios environment is installed and you should be able to monitor your VMs. Nagios OpenStack Installer is available in ICCLab's GitHub repository. Feel free to try it out and give us feedback about future improvements.

Getting Started with OpenShift and OpenStack

In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and of course automation. So that developers can get working fast with their own local infrastructure, we've spent time setting up an automated workflow, using Vagrant and puppet to set up both OpenStack and OpenShift. If you want to experiment with both OpenStack and OpenShift locally, simply clone this project:

$ git clone https://github.com/dizz/os-ops.git

Once it has been cloned you’ll need to initialise the submodules:

$ git submodule init
$ git submodule update

After that you can begin the setup of OpenStack and OpenShift. You'll need an installation of VirtualBox and Vagrant.

OpenStack

  • run in controller/worker mode:
      $ vagrant up os_ctl
      $ vagrant up os_cmp
    

There are some OpenStack-specific gotchas, so look at the known issues in the README. Otherwise, open your web browser at: http://10.10.10.51.

OpenShift

You’ve two OpenShift options:

  • run all-in-one:
      $ cd os-ops
      $ vagrant up ops_aio
    
  • run in controller/worker mode:
      $ cd os-ops
      $ vagrant up ops_ctl
      $ vagrant up ops_node
    

Once done, open your web browser at: https://10.10.10.53/console/applications. There's more info in the README.

In the next post we'll look at getting OpenShift running on OpenStack quickly, using two approaches: directly with puppet, and using Heat orchestration.

Automated Vagrant installation of MySQL HA using DRBD, Corosync and Pacemaker

Fig. 1: Redundant MySQL Server nodes using Pacemaker, Corosync and DRBD.

If automation is required, Vagrant and Puppet seem to be the most suitable tools to implement it. What about automatic installation of High Availability database servers? As part of our Cloud Dependability efforts, the ICCLab works on automatic installation of High Availability systems. One such HA system is a MySQL server combined with DRBD, Corosync and Pacemaker.

In this system the server logic of the MySQL Server runs locally on different virtual machine nodes, while all database files are stored on a clustered DRBD device which is replicated across all the nodes. Corosync provides the cluster communication layer on top of which Pacemaker manages the MySQL and DRBD resources. If one of the nodes fails, Pacemaker automatically restarts the MySQL server on another node and synchronizes the data on the DRBD device. This combined DRBD and Pacemaker approach is best practice in the IT industry.
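
From a client's point of view such a failover looks like a short outage. The sketch below illustrates this, assuming the cluster exposes one virtual service IP managed by Pacemaker (a common pattern, though not spelled out in this post); host, credentials and query are placeholders.

    # Client-side failover sketch; connection details are placeholders.
    import time
    import MySQLdb  # from the MySQL-python package

    def query_with_retry(sql, retries=5, delay=2):
        # If Pacemaker moves MySQL to the other node the connection drops;
        # reconnecting to the same virtual IP reaches the new active node.
        for _ in range(retries):
            try:
                conn = MySQLdb.connect(host="192.0.2.100", user="app",
                                       passwd="secret", db="test")
                try:
                    cur = conn.cursor()
                    cur.execute(sql)
                    return cur.fetchall()
                finally:
                    conn.close()
            except MySQLdb.OperationalError:
                time.sleep(delay)  # wait for the failover to complete
        raise RuntimeError("database unavailable after %d attempts" % retries)

    print(query_with_retry("SELECT 1"))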

At ICCLab we have developed an automatic installation script which creates two virtual machines and configures MySQL, DRBD, Corosync and Pacemaker on both machines. The automated installation script can be downloaded from GitHub.

Vagrant, Devstack and the ICCLab

What?

So what is vagrant? In the words of its creator it allows you to:

“Create and configure lightweight, reproducible, and portable development environments.”

Vagrant is a Ruby framework that automates a lot of the boring, painful setup a developer needs to do to work with services. In the case of the ICCLab those services are generally OpenStack services. We use Vagrant to create consistent, reproducible setups of our testbed on local development machines.

Why?

In the ICCLab we operate two testbeds: one is stable and runs an OpenStack environment that does not change often; the other is a research testbed used to investigate the latest features of OpenStack and to evaluate our own modifications or experiments on top of OpenStack (e.g. Hadoop, CloudFoundry etc.). Before code modifications are placed onto the research testbed they must first prove themselves: they must be shown to run locally on a laptop or desktop and to install and configure automatically. The great advantage here is that Vagrant supports the same configuration framework, puppet, as is used on the testbeds. Essentially, Vagrant allows us to model our infrastructure locally before deploying changes to metal.

How?

So the best way to get started with Vagrant is by example. In this one, we'll show you how to create a Vagrant project that sets up an OpenStack devstack environment.

Install it!

To install Vagrant, make sure you have VirtualBox already installed, then simply install Vagrant itself. On a Mac it's easiest to use the bundled installer, but otherwise just execute gem install vagrant. Once installed, execute vagrant help to see what you can do. You should see something like this:

[gist id=5309928]

The most common commands you'll use are up, halt, reload and ssh.

Play with it!

The example we will bring you through is setting up a devstack environment. To see all the code, check out the GitHub project here.

The first thing you need to do when creating a new vagrant project is to create a directory to host all your files. Once done you’ll need to execute:

[gist id=5310061]

Once done you should find a Vagrantfile created in your directory. This contains a basic template for your Vagrant project. For the purposes of this example we'll use the following content:

[gist id=5310052]

What is important to note in this is devstack_config.vm.box. This tells Vagrant what ‘box’ it will use. A box is simply a VM image with a particular initial configuration (see here for more details). Boxes can also be created with veewee, and you can install other boxes from vagrantbox.es.

The next most important piece is the devstack_config.vm.provision block. This details how your software will be installed. In this example we are using puppet (in local mode) to install devstack. In the code block we specify where to find additional modules and where to find the Vagrant-specific manifests. Most importantly, we note that the main “entry point” manifest is the one named by the devstack_puppet.manifest_file variable.

In our example, site.pp encodes the following steps to create our devstack VM:

  1. Install git
  2. Check out the devstack repository
  3. Customise the devstack installation by setting up the devstack localrc file
  4. Run devstack by executing stack.sh

You can see the contents of this manifest here.

If you've got this far then, with the Vagrant project cloned from GitHub, all you have to do to get your devstack VM up and running is:

vagrant up

Easy eh?

Wrap up

The latest Vagrant adds support for provisioning in the cloud (Amazon, OpenStack, Rackspace) and is also becoming independent of hypervisor choice, including (paid) support for VMware Fusion.

How to Test your OpenStack Deployment?

Like us in the ICCLab, you have likely spent lots of time researching the best means to deploy OpenStack and you’ve decided upon a particular method (at the ICCLab we use foreman and puppet). You’ve implemented OpenStack with your chosen deployment plan and technologies and you now have an operational OpenStack cluster. The question you now have to ask is:

“How do I test that all functionality is operating correctly?”

You could certainly take the time to write a suite of tests using the various OpenStack Python clients and maintain those. However, there is an OpenStack project already available that can save you a lot of time. OpenStack Tempest is a project and suite that comprises a set of integration tests. Tempest is used to validate the OpenStack code base through its integration with Jenkins (a continuous integration server). Tempest makes calls against OpenStack service API endpoints and uses the Python unittest2 and nose test frameworks at its core.
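
To give a flavour of that style, here is a small, hand-rolled smoke test in the same unittest2 spirit; this is not actual Tempest code, and the credentials and endpoint are placeholders.

    # Hand-rolled unittest2-style smoke test; not real Tempest code.
    import unittest2
    from novaclient.v1_1 import client

    class FlavorsSmokeTest(unittest2.TestCase):

        def setUp(self):
            # Placeholder credentials and Keystone endpoint.
            self.nova = client.Client("myuser", "mypassword", "mytenant",
                                      "http://192.0.2.1:5000/v2.0")

        def test_flavors_are_listed(self):
            # The compute API should report at least one flavor.
            self.assertTrue(len(self.nova.flavors.list()) > 0)

    if __name__ == "__main__":
        unittest2.main()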

If you wish to experiment with Tempest locally, try it out with devstack, which automatically configures Tempest for use with it. To ease things, simply use vagrant-devstack (README here) and do the following:

  1. Install VirtualBox
  2. Install vagrant
  3. git clone https://github.com/dizz/vagrant-devstack.git
  4. vagrant up
  5. vagrant ssh
  6. cd /opt/stack/tempest
  7. ./run_tests.sh

You will now see quite a number of tests being run against your devstack installation. It will take time! If you wish to integrate Tempest with your Jenkins CI server, see the information on devstack gate. There is also a Tempest Jenkins plugin. Finally, if you wish to run Tempest against a “real” installation of OpenStack you will need to edit the Tempest configuration file (etc/tempest.conf) and change the relevant information (more here).

2nd Swiss OpenStack Meetup


We (ICCLab and ZHGeeks) are pleased to announce the 2nd Swiss OpenStack Meetup. It will happen on the 19th of February in Zurich at ETH. If you're interested in attending then please register here.

If you are interested in giving a talk then do give a shout out at the meetup site or simply message @OpenStackCH on Twitter. Several talks are already planned.

Looking forward to seeing you all there!

Parallel OpenStack Multi Hosts Deployments with Foreman and Puppet

In our lab we need one environment running OpenStack Essex and another running OpenStack Folsom. Here's a guide on how we set up our infrastructure so we can support the two environments in parallel.

To install Essex using Puppet/Foreman please follow the guides:

  • [OpenStack Puppet Part1](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/),
  • [OpenStack Puppet Part2](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-two/),
  • [OpenStack Puppet/Foreman](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/)

This post only describes how to integrate OpenStack Folsom with Puppet/Foreman. It is assumed that Puppet and Foreman are already set up according to the articles mentioned above.

Two environments will be created: `stable` and `research`. The stable environment holds the puppet classes for Essex and the research environment the Folsom classes.
Create the following directories:

[gist id=4147331]

Add the research and stable module paths to /etc/puppet/puppet.conf:

[gist id=4147341]

Clone Folsom classes:
[gist id=4147352]

Add compute.pp, controller.pp, all-in-one.pp and params.pp:
[gist id=4292299]

While applying the controller.pp classes I encountered the following error:
[gist id=4147369]

This issue is described [here](https://github.com/puppetlabs/puppetlabs-horizon/pull/26).

To overcome this issue add `include apache` in:
[gist id=4147377]

According to a [previous article](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/) describing an issue with multiple environments, executing these steps is required:
[gist id=4147408]

After that, in Foreman you can create new hostgroups and import the newly added classes (More – Puppet Classes – Import from local smart proxy).
Define the stable and research environments and 3 hostgroups in the research environment: os-worker, os-controller, os-aio.

Next, assign the icclab::compute and icclab::params classes to the worker hostgroup, the icclab::controller and icclab::params classes to the controller hostgroup, and the icclab::aio and icclab::params classes to the aio hostgroup.

Since we are using Ubuntu 12.04, the Folsom repository has to be added to your installation. To do so, create a new provisioning template: copy the existing one and add lines 14-18.
Name: Preseed Default Finish (Research)
Kind: finish
[gist id=4292436]

Please also consider the interface settings in lines 1-7. Without these settings it was not possible to ping or SSH into VMs running on different physical nodes. This hint was found [here](http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/#network-configuration).


After that click on Association, select Ubuntu 12.04 and assign the research hostgroup and environment.

In our installation we got this error in the VM console log:

[gist id=4292393]

In our case it was due to iptables being wrongly configured by OpenStack.
Adding the parameters metadata_host and routing_source_ip to nova.conf on the nova-network nodes solved the issue. To make this permanent with puppet, add lines 4, 34 and 35 in `/etc/puppet/modules/research/nova/manifests/compute.pp`:

[gist id=4292497]

With these steps followed you should be able to provision your physical hosts across both puppet environments. In the next article we'll show how we've segmented our network and what the next steps are in progressing our network architecture.

ICCLab at SwiNG SDCD 2012

The ICCLab presented at SwiNG SDCD 2012 on how you can easily provision bare-metal physical servers. This presentation, “From Bare-Metal to Cloud”, was an updated version of the one given at the EGI Technical Forum in Prague. The slides can be viewed below or downloaded from here.

As well as presenting, the ICCLab was part of a discussion panel on the role of Cloud Computing in academic research. On the whole, it was a very interesting and rewarding event.