Getting Started with OpenShift and OpenStack

In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and, of course, automation. So that developers can get working quickly with their own local infrastructure, we’ve spent time setting up an automated workflow that uses Vagrant and Puppet to set up both OpenStack and OpenShift. If you want to experiment with both locally, simply clone this project:

$ git clone https://github.com/dizz/os-ops.git

Once it has been cloned you’ll need to initialise the submodules:

$ git submodule init
$ git submodule update

After that you can begin the setup of OpenStack and OpenShift. You’ll need installations of VirtualBox and Vagrant.

OpenStack

  • run in controller/worker mode:
      $ vagrant up os_ctl
      $ vagrant up os_cmp
    

There are some gotchas, so look at the OpenStack-specific known issues in the README. Otherwise, open your web browser at: http://10.10.10.51.

OpenShift

You have two OpenShift options:

  • run all-in-one:
      $ cd os-ops
      $ vagrant up ops_aio
    
  • run in controller/worker mode:
      $ cd os-ops
      $ vagrant up ops_ctl
      $ vagrant up ops_node
    

Once done, open your web browser at: https://10.10.10.53/console/applications. There is more info in the README.

In the next post we’ll look at getting OpenShift running on OpenStack quickly, using two approaches: directly with Puppet, and using Heat orchestration.

OpenStack Grizzly Multi-Node Installation with Stackforge Puppet-Modules on CentOS 6.4

This blog post describes the installation of OpenStack Grizzly with the help of the Stackforge Puppet modules on CentOS 6.4, using network namespaces. The setup consists of a controller/network node and a compute node; additional compute nodes can be added later as needed.

Automated Vagrant installation of MySQL HA using DRBD, Corosync and Pacemaker

Fig. 1: Redundant MySQL Server nodes using Pacemaker, Corosync and DRBD.

If automation is required, Vagrant and Puppet are the most suitable tools to implement it. What about the automatic installation of High Availability database servers? As part of our Cloud Dependability efforts, the ICCLab works on the automatic installation of High Availability systems. One such HA system is a MySQL server combined with DRBD, Corosync and Pacemaker.

In this system the server logic of the MySQL server runs locally on different virtual machine nodes, while all database files are stored on a DRBD device which is replicated across the nodes. The DRBD resource is managed by Pacemaker, which uses Corosync as its cluster messaging layer. If one of the nodes fails, Pacemaker automagically restarts the MySQL server on another node and the data is synchronized via the DRBD device. This combined DRBD and Pacemaker approach is best practice in the IT industry.

At ICCLab we have developed an automatic installation script which creates 2 virtual machines and configures MySQL, DRBD, Corosync and Pacemaker on both machines. The automated installation script can be downloaded from Github.
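
To give a feel for what the script sets up, here is a minimal sketch of a matching Pacemaker configuration in crm shell syntax. The resource names, device paths and timeouts are illustrative, not taken from our script:

    # Sketch: MySQL on a DRBD master/slave resource (crm shell syntax).
    primitive p_drbd ocf:linbit:drbd \
        params drbd_resource="mysql" op monitor interval="15s"
    ms ms_drbd p_drbd \
        meta master-max="1" clone-max="2" notify="true"
    primitive p_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4"
    primitive p_mysql ocf:heartbeat:mysql \
        op start timeout="120s" op stop timeout="120s" op monitor interval="20s"
    # MySQL must run where the filesystem is mounted: on the DRBD master.
    group g_mysql p_fs p_mysql
    colocation c_mysql_on_master inf: g_mysql ms_drbd:Master
    order o_drbd_first inf: ms_drbd:promote g_mysql:start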

DRBD-Test environment for Vagrant available

There is always room to test different HA technologies in a simulated VM environment. At ICCLab we have created such a DRBD test environment for PostgreSQL databases. This environment is now available on Github.

The test environment uses Vagrant to install the VMs, VirtualBox as the VM runtime environment and Puppet as the VM configurator. It includes a Vagrant installation script (usually called a “Vagrantfile”) which sets up two virtual machines running a clustered, highly available PostgreSQL database.

In order to use the environment, you have to download it and then run the Vagrant installation script. The script essentially does the following things (a sketch of such a Vagrantfile follows the list):

  • It creates two virtual machines with 1 GB RAM, one 80 GB hard drive and an extra 5 GB hard drive (which is used as the DRBD device).
  • It creates an SSH tunnel between the two VM nodes which is used for DRBD synchronization.
  • It installs, configures and runs the DRBD device on both machines.
  • It installs, configures and runs Corosync and Pacemaker on both machines.
  • It creates a distributed PostgreSQL database which runs on the DRBD device and which is managed by the Corosync/Pacemaker software.
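
For illustration, a Vagrantfile implementing these steps could look roughly as follows. The box name, IP addresses, disk names and controller name are assumptions; the actual script is in the repository:

    # Sketch of a two-node DRBD test setup (illustrative values throughout).
    Vagrant.configure("2") do |config|
      config.vm.box = "centos-6"   # assumption: any CentOS/Ubuntu box will do
      %w(node1 node2).each_with_index do |name, i|
        config.vm.define name do |node|
          node.vm.hostname = name
          node.vm.network :private_network, ip: "10.0.0.#{10 + i}"
          node.vm.provider :virtualbox do |vb|
            vb.customize ["modifyvm", :id, "--memory", "1024"]
            disk = "#{name}_drbd.vdi"
            # Attach an extra 5 GB disk to act as the DRBD backing device.
            vb.customize ["createhd", "--filename", disk, "--size", 5 * 1024] unless File.exist?(disk)
            vb.customize ["storageattach", :id, "--storagectl", "SATA Controller",
                          "--port", 1, "--device", 0, "--type", "hdd", "--medium", disk]
          end
          # Puppet then installs and configures DRBD, Corosync and Pacemaker.
          node.vm.provision :puppet
        end
      end
    end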

This environment can easily be installed and then used to test the DRBD technology. It can be downloaded from the following Github repository:

https://github.com/kobe6661/dependability_test_fw.git

Installation instructions can be found here.

How to Test your OpenStack Deployment?

Like us in the ICCLab, you have likely spent lots of time researching the best means to deploy OpenStack and you’ve decided upon a particular method (at the ICCLab we use Foreman and Puppet). You’ve implemented OpenStack with your chosen deployment plan and technologies and you now have an operational OpenStack cluster. The question you now have to ask is:

“How do I test that all functionality is operating correctly?”

You could certainly take the time to write a suite of tests using the various OpenStack Python clients and maintain those. However, there is an OpenStack project already available that can save you a lot of time: OpenStack Tempest, a suite of integration tests. Tempest is used to validate the OpenStack code base through its integration with Jenkins (a continuous integration server). Tempest makes calls against OpenStack service API endpoints and uses the Python unittest2 and nosetest frameworks at its core.

If you wish to experiment with Tempest locally, try it out with devstack, which automatically configures Tempest for use with it. To ease things, simply use vagrant-devstack (README here) and do the following:

  1. Install VirtualBox
  2. Install vagrant
  3. git clone https://github.com/dizz/vagrant-devstack.git
  4. vagrant up
  5. vagrant ssh
  6. cd /opt/stack/tempest
  7. ./run_tests.sh

You will now see quite a number of tests being run against your devstack installation. It will take time! If you wish to integrate Tempest with your Jenkins CI server, see the information on devstack gate. There is also a Tempest Jenkins plugin. Finally, if you wish to run Tempest against a “real” installation of OpenStack, you will need to edit the Tempest configuration file (etc/tempest.conf) and change the relevant information (more here).
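
For orientation, the part of etc/tempest.conf you will touch first is the identity section. Here is a sketch for a Grizzly-era deployment; all values are examples and option names may vary between releases:

    # Sketch of the identity settings in etc/tempest.conf (example values).
    [identity]
    uri = http://192.168.1.10:5000/v2.0/
    username = demo
    password = secret
    tenant_name = demo
    admin_username = admin
    admin_password = secret
    admin_tenant_name = admin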

Parallel OpenStack Multi Hosts Deployments with Foreman and Puppet

In our lab we need one environment running OpenStack Essex and another running OpenStack Folsom. Here’s a guide on how we set up our infrastructure to support the two environments in parallel.

To install Essex using Puppet/Foreman please follow the guides:

  • [OpenStack Puppet Part1](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/),
  • [OpenStack Puppet Part2](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-two/),
  • [OpenStack Puppet/Foreman](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/)

This post only describes how to integrate OpenStack Folsom with Puppet/Foreman. It is assumed that Puppet and Foreman are already set up according to the articles mentioned above.

Two environments will be created: `stable` and `research`. The stable environment holds the Puppet classes for Essex, the research environment those for Folsom.
Create the following directories:

[gist id=4147331]

Add the research and stable module paths to /etc/puppet/puppet.conf:

[gist id=4147341]
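
For reference, such per-environment module paths typically look like this (a sketch; the gist above contains our exact configuration and the directory names are examples):

    # Sketch of /etc/puppet/puppet.conf with one modulepath per environment.
    [stable]
    modulepath = /etc/puppet/modules/stable:/etc/puppet/modules/common

    [research]
    modulepath = /etc/puppet/modules/research:/etc/puppet/modules/common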

Clone Folsom classes:
[gist id=4147352]

Add compute.pp, controller.pp, all-in-one.pp and params.pp:
[gist id=4292299]

While applying the controller.pp classes I encountered the following error:
[gist id=4147369]

This issue is described [here](https://github.com/puppetlabs/puppetlabs-horizon/pull/26).

To overcome these issues add `include apache` in:
[gist id=4147377]
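
The workaround amounts to something like the following sketch; the horizon parameters are illustrative and the gist above shows the exact change:

    # Sketch: pull in the apache class alongside the horizon declaration.
    include apache

    class { 'horizon':
      secret_key => 'dummy_secret_key',   # placeholder value
    }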

According to a [previous article](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/) describing an issue with multiple environments, executing these steps is required:
[gist id=4147408]

After that, you can create new hostgroups in Foreman and import the newly added classes (More – Puppet Classes – Import from local smart proxy).
Define the stable and research environments and three hostgroups in the research environment: os-worker, os-controller, os-aio.

Next, assign the icclab::compute and icclab::params classes to the worker hostgroup, the icclab::controller and icclab::params classes to the controller hostgroup, and the icclab::aio and icclab::params classes to the aio hostgroup.

Since we are using Ubuntu 12.04, the Folsom repository has to be added to the installation. To do that, create a new provisioning template: copy the existing one and add lines 14-18.
Name: Preseed Default Finish (Research)
Kind: finish
[gist id=4292436]
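
The added repository lines boil down to enabling the Ubuntu Cloud Archive for Folsom (a sketch; the exact lines are in the gist above):

    # Sketch: enable the Ubuntu Cloud Archive for Folsom on 12.04 (precise).
    apt-get install -y ubuntu-cloud-keyring
    echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" \
        > /etc/apt/sources.list.d/cloud-archive.list
    apt-get update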

Please also consider the interface settings in lines 1-7. Without these settings it was not possible to ping or SSH to VMs running on different physical nodes. This hint was found [here](http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/#network-configuration).

After that click on Association, select Ubuntu 12.04 and assign the research hostgroup and environment.

In our installation we got this error in the VM console log:

[gist id=4292393]

In our case it was due to iptables rules wrongly configured by OpenStack.
Adding the parameters metadata_host and routing_source_ip to nova.conf on the nova-network nodes solved the issue. To make this permanent with Puppet, add lines 4, 34 and 35 in `/etc/puppet/modules/research/nova/manifests/compute.pp`:

[gist id=4292497]
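
The resulting nova.conf entries amount to something like this sketch, where the IP is illustrative and should be the address of the respective nova-network node:

    # Sketch of the added nova.conf settings on a nova-network node.
    metadata_host = 10.0.0.11
    routing_source_ip = 10.0.0.11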

With these steps followed you should be able to go about provisioning your physical hosts across both Puppet environments. In the next article we’ll show how we’ve segmented our network and what the next steps are in progressing our network architecture.

Automating OCCI Installations

As part of the work here in the ICCLab, we are not only active in the [OCCI working group](http://www.occi-wg.org) and [contributing to its implementation on OpenStack](https://github.com/tmetsch/occi-os), but we also make our work on automating the installation of OpenStack available. We recently made a contribution to the [puppetlabs-nova project](https://github.com/puppetlabs/puppetlabs-nova). This [contribution](https://github.com/puppetlabs/puppetlabs-nova/pull/150) allows users of the nova module to specify the APIs to enable in nova, and enables OCCI if specified.

The contribution, [submitted as a pull request](https://github.com/puppetlabs/puppetlabs-nova/pull/150), can be used in the following fashion:

[gist id=3778884]

The `nova::api` class declared above enables all the usual OpenStack APIs as well as the OCCI interface. Where the OCCI API is enabled, Puppet will then look after installing the necessary components.
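
For reference, such a declaration looks roughly like this (a sketch; the password is a placeholder and the exact API list is illustrative):

    # Sketch: declare nova::api with the OCCI API enabled via enabled_apis.
    class { 'nova::api':
      admin_password => 'secret',
      enabled_apis   => 'ec2,osapi_compute,metadata,occiapi',
    }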

From Bare Metal to Cloud

This is the presentation that was presented at the [EGI Technical Forum 2012 in Prague](http://tf2012.egi.eu/).

If you like, [download the slides as pdf](http://blog.zhaw.ch/icclab/files/2012/09/From-Bare-Metal-to-Cloud.pdf).

There is also a YouTube video showing the various stages of bringing bare metal machines to a state where OpenStack is installed and operational.

For those in attendance or those interested in how all of this is done, all information, HOWTOs, code and virtual machine images are available from this site.

The talk had an excellent attendance and there is great interest in using OpenStack within the EGI FedCloud environment, especially one where the installation is automated as with our work.

ICCLab EGI TF Audience

Installing Foreman 1.0.1

Just recently the Foreman project released its latest version, 1.0.1. If you followed our [previous guide to install 0.4.2](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/) then you should also follow this guide.

Installing & Configuring Foreman
You should set up your virtual machine exactly as we did in the previous guide, install Puppet and check out the foreman-installer modules from GitHub. There is a small number of issues with the installer but we’ll easily walk you through them!

To check things out quickly, you can [download a VM (OVA) that has Foreman 1.0.1](http://www.cloudcomp.ch/wp-content/uploads/2012/09/Foreman1.01.ova) preconfigured. The username/password is `root` and `root`. This also includes puppet modules to deploy OpenStack compute and controller nodes.

> Side note: the puppetlabs repository has changed. Make sure to:
> `wget http://apt.puppetlabs.com/puppetlabs-release-stable.deb`

Ensure that the foreman-proxy user is part of the bind group. If not, add the user:

[gist id=3667157]
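
In essence, the commands are along these lines (a sketch; the gist above has our exact invocation):

    # Sketch: add the foreman-proxy user to the bind group, then restart it.
    usermod -a -G bind foreman-proxy
    service foreman-proxy restart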

Configure your `foreman_proxy/manifests/params.pp` as before, ensuring to enable DHCP and DNS and, for each of those, setting the correct network settings (subnet etc.).

Configure your `foreman/manifests/params.pp` as before. For us, we disabled SSL. Very important here is that you set the `foreman_url` parameter to include the port number on which Foreman listens (port 3000).

[gist id=3667152]

If it is not set then the scripts that tie puppet and foreman together will not work. This is a [known and reported issue](http://theforeman.org/issues/1855), which will be resolved.

Currently there is a bug in `foreman_proxy/manifests/proxydhcp.pp`. For now you need to manually set the DNS `nameserver` and TFTP `nextserver` parameters. This [bug has been reported](https://groups.google.com/d/topic/foreman-users/t1m8JeWVd7U/discussion) and will be resolved soon.

Finally you need to apply [this patch](https://github.com/theforeman/smart-proxy/commit/a402c71290f2d8205e60b876f2a40dfa9fefacda). Puppet, in its most recent version, changed the return codes of operations related to `puppetca`. This causes blocking issues with provisioning and deleting hosts with Foreman. You can use this sed command if it suits you:

[gist id=3667151]

Once applied, you should restart the foreman-proxy service:

[gist id=3667147]

Note: if you start the foreman service and it halts with a stacktrace, you will have to reinitialise the database. This is a one-time operation.

[gist id=3667139]

Once these steps have been completed, you can then configure Foreman itself (setting the smart proxy, host groups etc.).

When configuring these various aspects you should update the ‘Ubuntu default’ disk partition table configuration. Use the following to ensure a completely automatic install:

[gist id=3667133]

One of the issues that we’re dealing with currently is that rather than the puppetmaster’s hostname being placed in the relevant configuration files (e.g. `puppet.conf`), an IP address is inserted. This will not work as it will fail with SSL issues. The current workaround is to create a ‘snippet’ in the Provisioning Templates section. With this snippet created, set the content of the config files in Provisioning Templates to the following (using `puppet.conf` as an example):

[gist id=3667127]
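
Such a template-plus-snippet arrangement looks roughly like the following sketch; the snippet name is hypothetical and the gist above has our actual template:

    <%# puppet.conf provisioning template (sketch): the snippet resolves the
        puppetmaster by hostname instead of the IP Foreman would insert. %>
    [main]
    server = <%= snippet "puppetmaster_fqdn" %>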