In our lab we need one environment running OpenStack Essex and another running OpenStack Folsom. Here’s a guide to how we set up our infrastructure to support the two environments in parallel.
To install Essex using Puppet/Foreman please follow the guides:
This article only describes how to integrate OpenStack Folsom with Puppet/Foreman. It is assumed that Puppet and Foreman are already set up according to the articles mentioned above.
Two environments will be created: `stable` and `research`. The stable environment holds the Puppet classes for Essex, and the research environment those for Folsom.
Create the following directories:
Add the research and stable module path to /etc/puppet/puppet.conf
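The two steps above can be sketched as follows. The module paths match the `/etc/puppet/modules/<environment>` layout used later in this article; the per-environment sections in `puppet.conf` are an assumption based on a standard Puppet 2.x setup:

```shell
# one module directory per environment (paths as referenced later in this article)
mkdir -p /etc/puppet/modules/stable /etc/puppet/modules/research

# per-environment modulepath entries in /etc/puppet/puppet.conf
# (section names follow Puppet's environment-section convention)
cat >> /etc/puppet/puppet.conf <<'EOF'
[stable]
modulepath = /etc/puppet/modules/stable
[research]
modulepath = /etc/puppet/modules/research
EOF
```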
Clone Folsom classes:
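The exact repositories we cloned are not listed here; as an illustrative sketch, assuming the upstream puppetlabs modules (the horizon module is referenced below) cloned into the research module path:

```shell
cd /etc/puppet/modules/research
# clone the puppetlabs OpenStack modules (illustrative selection; in your own
# setup, check out the branch or tag matching Folsom)
for m in nova glance keystone horizon cinder; do
    git clone "https://github.com/puppetlabs/puppetlabs-${m}.git" "${m}"
done
```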
Add compute.pp, controller.pp, all-in-one.pp and params.pp.
While applying the controller.pp classes I encountered the following error:
This issue is described [here](https://github.com/puppetlabs/puppetlabs-horizon/pull/26).
To overcome these issues add `include apache` in:
According to a [previous article](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/) describing an issue with multiple environments, executing these steps is required:
After that you can create new hostgroups in Foreman and import the newly added classes (More – Puppet Classes – Import from local smart proxy).
Define the stable and research environments and three hostgroups in the research environment: os-worker, os-controller, os-aio.
Next, assign the icclab::compute and icclab::params classes to the worker hostgroup, icclab::controller and icclab::params to the controller hostgroup, and icclab::aio and icclab::params to the aio hostgroup.
Since we are using Ubuntu 12.04, you must add the Folsom repository to your installation. To do that, create a new provisioning template: copy the existing one and add lines 14-18.
Name: Preseed Default Finish (Research)
Please also note the interface settings in lines 1-7. Without these settings it was not possible to ping or SSH into VMs running on different physical nodes. This hint was found [here](http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/#network-configuration)
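The template itself and its line numbers are not reproduced here. As a rough sketch of what the two additions amount to, assuming the Ubuntu Cloud Archive is used for Folsom on 12.04 (precise) and that eth1 is the VM network interface as in the earlier articles:

```shell
# lines 1-7 (sketch): bring the VM network interface up in promiscuous mode;
# without this, VMs on different physical nodes could not reach each other
ip link set dev eth1 up
ip link set dev eth1 promisc on

# lines 14-18 (sketch): add the Ubuntu Cloud Archive repository for Folsom
apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" \
    > /etc/apt/sources.list.d/folsom.list
apt-get update
```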
After that click on Association, select Ubuntu 12.04 and assign the research hostgroup and environment.
In our installation we got this error in the VM console log:
In our case it was due to iptables rules wrongly configured by OpenStack.
Adding the parameters metadata_host and routing_source_ip to nova.conf on the nova-network nodes solved the issue. To make this permanent with Puppet, add lines 4, 34 and 35 in `/etc/puppet/modules/research/nova/manifests/compute.pp`:
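The manifest itself is not shown here. As a hypothetical sketch of what those three lines amount to, using the `nova_config` resource type from the puppetlabs-nova module (the class parameter name and its default are assumptions):

```puppet
class icclab::compute (
  # line 4 (sketch): the metadata/controller host, passed in per deployment
  $metadata_host = '10.0.0.1',
) {
  # lines 34-35 (sketch): point VMs at the metadata service and NAT their
  # outbound traffic via this node's own address (facter's $ipaddress fact)
  nova_config { 'metadata_host':     value => $metadata_host }
  nova_config { 'routing_source_ip': value => $ipaddress }
}
```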
With these steps complete, you should be able to provision your physical hosts across both Puppet environments. In the next article we’ll show how we’ve segmented our network and what the next steps in evolving our network architecture will be.
The ICCLab presented at SwiNG SDCD 2012 on how you can easily provision bare-metal physical servers. This presentation, “From Bare-Metal to Cloud” was an updated version of the presentation that was made at the EGI Technical Forum in Prague. The slides can be viewed below or downloaded from here.
As well as presenting, the ICCLab took part in a discussion panel on the role of cloud computing in academic research. On the whole, it was a very interesting and rewarding event.
This is the presentation that was presented at the [EGI Technical Forum 2012 in Prague](http://tf2012.egi.eu/).
If you like, [download the slides as pdf](http://blog.zhaw.ch/icclab/files/2012/09/From-Bare-Metal-to-Cloud.pdf).
There is also a YouTube video showing the various stages of bringing bare-metal machines to a state where they have OpenStack installed and operational.
For those in attendance or those that are interested in how all of this is done, all information, HOWTOs, code, virtual machine images are available from this site.
The talk had an excellent attendance and there is great interest in using OpenStack within the EGI FedCloud environment, especially one where the installation is automated as with our work.
It doesn’t make sense to continually download the same operating system packages when you can cache them alongside your Foreman installation. Assuming you use a Debian-based OS, simply install apt-cacher or apt-cacher-ng to keep a cached copy of all packages used within your infrastructure. Our preference is apt-cacher-ng, but we’ll show you how to install both.
# Installing apt-cacher-ng
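Installation is a single package (assuming a Debian/Ubuntu host):

```shell
apt-get install -y apt-cacher-ng
```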
You shouldn’t have to adjust the configuration of apt-cacher-ng for the basic functionality it offers; if you do need to change anything, the settings live in `/etc/apt-cacher-ng/acng.conf`.
# Installing apt-cacher
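Again, a single package install (assuming a Debian/Ubuntu host):

```shell
apt-get install -y apt-cacher
```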
Then set the contents of `apt-cacher-conf` to:
***Note:*** you might want to change the hostname of the ubuntu mirror and interface (eth1 is specific to previous articles on Foreman)
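The original configuration listing is not reproduced here. As a sketch of the settings the note above refers to (the mirror hostname and interface are the values you would likely change; option names per the Debian apt-cacher package):

```
# /etc/apt-cacher/apt-cacher.conf (sketch)
# listen only on the internal provisioning interface (eth1, per earlier articles)
interface = eth1
# map /ubuntu on the cache to an upstream mirror; change to a mirror near you
path_map = ubuntu archive.ubuntu.com/ubuntu
```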
Reports are created every 24 hours, which can be accessed at `http://$FOREMAN:3241/report`. To force the creation run:
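The report generator ships with the apt-cacher package; forcing a run looks roughly like this (script path per the Debian package layout):

```shell
/usr/share/apt-cacher/apt-cacher-report.pl
```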
# Configuring Foreman
Once you have set up apt-cacher, you can then create a new Foreman “Installation Media Source”. Simply supply a sensible name and, importantly, set the URL of that installation media source to `http://$FOREMAN:3241/ubuntu`, where $FOREMAN is either the IP or FQDN of your Foreman host.
ICCLabs and GWDG will present on the topic of “From Bare-Metal to Cloud” at EGI Technical Forum 2012 in Prague.
The ICCLab and GWDG had a shared, common problem, namely how to deploy infrastructural service technology (e.g. OpenStack, CloudStack etc.) across a large number of servers with the least amount of user interaction (i.e. automated) during the deployment process. The solution to be presented allows for the easy deployment of operating systems on to bare-metal (physical) servers and the deployment and management of specified software packages upon those provisioned bare-metal systems. To accomplish this, the combination of Foreman and Puppet was chosen. For the work, it was assumed that the network architecture, partitioning etc. is already determined.
This presentation will detail what measures have been taken to automate the provisioning of OpenStack clusters at the two research labs. The presentation will describe the technology stack, discuss the individual technologies used and share the information with others. It will conclude with a demonstration of provisioning a multi-cluster OpenStack deployment upon virgin bare metal servers.