# Foreman, Puppet and OpenStack
So all our work in the previous articles has been leading up to this one. In this article we’ll describe how you can deploy a full multi-node [OpenStack](http://www.openstack.org) cluster, beginning from bare metal, using [Foreman](http://www.theforeman.org) and [puppet](http://www.puppetlabs.com). Before continuing we should note what exactly ‘bare metal’ means in this context: it refers to physical server hardware that has not yet been provisioned with an operating system. When provisioning this bare metal, it is assumed that the underlying network has already been set up (e.g. L2 configuration).
For the purposes of this article all setup will happen in a virtualised environment, just as in the previous articles, which we will draw upon throughout.
The first requirement is to have a successfully running installation of Foreman. You should find all the information on how to do this in [the articles](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/) on [setting up Foreman](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-two/).
Once Foreman has been setup the next thing that you will need to do is to deploy the OpenStack puppet modules as described in [the article on puppet and OpenStack](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/).
Note that you must only execute the steps up to and including `git clone https://github.com/puppetlabs/puppetlabs-openstack` in the “Select Puppet Modules” section. Once you have completed this, follow the procedure below.
The OpenStack modules require that `storeconfigs` be enabled. To do this you need to edit the Foreman configuration, in our case the foreman-installer manifest located at `/etc/puppet/modules/common/foreman/params.pp`, and set `storeconfigs => true`. Once this is done you will need to run the foreman-installer again.
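After the edit, the relevant part of `params.pp` ends up along these lines — treat this as a sketch, since the exact layout (plain variable assignment versus class parameter) varies between foreman-installer versions:

```puppet
# /etc/puppet/modules/common/foreman/params.pp (excerpt, layout may vary)
# Enable storeconfigs so the OpenStack modules' exported resources work
$storeconfigs = true
```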
For the purposes of this article we will create our own isolated puppet environment named `iaas`. To do this:
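A minimal way to create the environment is a directory tree under puppet’s configuration directory (the default `/etc/puppet` layout from the earlier articles is assumed; run as root on the Foreman host):

```shell
# Create a module directory for the new 'iaas' puppet environment
# (default /etc/puppet layout assumed; run as root on the Foreman host)
mkdir -p /etc/puppet/environments/iaas/modules
```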
Then configure puppet so that it knows of this environment by editing `/etc/puppet/puppet.conf` and inserting this definition:
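Assuming the environment’s modules live under `/etc/puppet/environments/iaas/modules` as above, the stanza would look like this:

```ini
# /etc/puppet/puppet.conf
[iaas]
modulepath = /etc/puppet/environments/iaas/modules
```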
Once you have defined the `iaas` environment there are two things to do:
1. Install all the necessary OpenStack puppet modules.
You will already have the puppet modules checked out if you have followed the [puppet and OpenStack article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). Copy the OpenStack modules as follows:
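A sketch of the copy, wrapped in a small helper so the paths are explicit. The clone location `~/puppetlabs-openstack` is an assumption — use wherever you ran `git clone` in the earlier article:

```shell
# install_openstack_module <clone_dir> <env_modules_dir>
# Copies the cloned puppetlabs-openstack repo into the environment's
# module directory under the module name 'openstack'.
install_openstack_module() {
  mkdir -p "$2"
  cp -R "$1" "$2/openstack"
}

# On our Foreman host (clone location assumed):
if [ -d ~/puppetlabs-openstack ]; then
  install_openstack_module ~/puppetlabs-openstack /etc/puppet/environments/iaas/modules
fi
```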
Lastly in this step, you need to change a variable in the `Rakefile` located in the folder where the OpenStack modules are cloned into. To do this change `default_modulepath` to the following:
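The edit amounts to pointing the rake module path at the new environment:

```ruby
# In the Rakefile of the puppetlabs-openstack clone:
# point the module path at the iaas environment
default_modulepath = '/etc/puppet/environments/iaas/modules'
```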
Now execute `rake modules:clone` as in the [puppet OpenStack article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). All other modules that OpenStack requires will then be placed in the `iaas` environment.
2. Import the new environment and all associated puppet classes into Foreman.
In the Foreman web interface import the new `iaas` environment and all its associated modules. Do this by navigating to “Other->Puppet Classes” and clicking “Import new puppet classes and environments”. A screen will then show the new environment to be imported, along with its modules, and await your confirmation.
Now that these steps are complete, you will want to create some class definitions in a module which you can apply to new Foreman-managed hosts. Below is an example structure of the module, followed by the contents that define the OpenStack all-in-one, controller and compute node roles.
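One possible layout, with the module (named `icclab` to match the class names used below) placed inside the `iaas` environment:

```
/etc/puppet/environments/iaas/modules/icclab/
└── manifests/
    ├── all_in_one.pp
    ├── compute.pp
    ├── controller.pp
    └── params.pp
```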
* [`all_in_one.pp`](https://gist.github.com/3118975): This will set up an all-in-one installation of OpenStack.
* [`controller.pp`](https://gist.github.com/3118972): This will set up the controller on a specified host.
* [`compute.pp`](https://gist.github.com/3118973): This will set up the compute services on a specified host.
* [`params.pp`](https://gist.github.com/3118970): This holds some *basic* common configuration for `controller.pp` and `compute.pp`.
Note that as we’re setting this up in a virtualised environment, `libvirt_type` is set to `qemu`. If you were deploying to bare metal, this value would be `kvm` or another suitable hypervisor. Also note that this file looks almost identical to the `site.pp` file we previously used.
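Illustratively, the hypervisor parameter is set along these lines — a sketch only; see the linked gists for the actual class declarations:

```puppet
# Sketch: hypervisor selection in compute.pp.
# 'qemu' for our virtualised test bed; 'kvm' on real hardware.
class { 'openstack::compute':
  libvirt_type => 'qemu',
  # ... remaining parameters omitted, see the linked gists ...
}
```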
With this module created, you will need to import it just as you did when importing the `iaas` environment. Doing this will simply update the current environment.
Now that you have imported everything, you can provision a host with Foreman, selecting the `icclab::controller` resource class for the new host after ensuring the ‘Environment’ field is set to ‘iaas’. Once the new host boots over PXE, installs the selected OS (Ubuntu 12.04 in this case) and the post-install script executes, the OpenStack resource class will be applied to the host and you will have a brand new and shiny OpenStack controller node.
Adding compute nodes is almost exactly the same process, except that rather than selecting `icclab::controller` you select `icclab::compute`. Simple, eh?! You can add as many compute nodes to this arrangement as you have hardware for.
# Issues Encountered
OK, so not everything is quite that simple. There was one major issue encountered while doing this work.
Puppet’s support for environments has a number of associated issues. One of these manifests when a module is applied to a newly provisioned host, producing the following error message:
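The failure looks something like this — the resource type named (here `keystone_config`) and the paths will depend on which module is being applied:

```
err: Could not retrieve catalog from remote server: Error 400 on SERVER:
Puppet::Parser::AST::Resource failed with error ArgumentError:
Invalid resource type keystone_config
```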
What this error is telling us is that the puppet resource type cannot be found on the puppet module path. This is because it lives in a location puppet is unaware of, namely the `iaas` environment. You will encounter the same issue if you use the `production` or `development` environments that Foreman sets up. There are more details about this issue on puppetlabs’ tracker [under bug #4409](http://projects.puppetlabs.com/issues/4409). Thankfully there is a workaround, pointed out in that thread, and it is the following:
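A sketch of the workaround, wrapped in a helper so the two paths are explicit (the function name is ours; the directory layout assumes the default puppet install):

```shell
# sync_plugins <env_modules_dir> <main_modules_dir>
# Copy each module's lib/ tree (custom types and providers) from the
# environment into the main module path, where puppet will find it.
sync_plugins() {
  for lib in "$1"/*/lib; do
    [ -d "$lib" ] || continue
    module=$(basename "$(dirname "$lib")")
    mkdir -p "$2/$module"
    cp -R "$lib" "$2/$module/"
  done
}

# On our Foreman host:
sync_plugins /etc/puppet/environments/iaas/modules /etc/puppet/modules
```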
Here we manually place the types and providers where puppet will find them. Naturally this solution is not perfect; however, it can be further automated with a `cron` job or executed whenever the affected modules are updated.
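For instance, a system crontab entry along these lines would re-run the copy nightly — the script path here is hypothetical, so put the copy commands wherever suits your setup:

```
# /etc/crontab -- re-sync types and providers at 04:00 every night
0 4 * * * root /usr/local/bin/sync-puppet-plugins
```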