Tag: deployment

Ice, Ice, Baby…installing Icehouse with Mirantis OS 5.0

The release of Icehouse brought a few enhancements that were of particular interest in our work – notably Ceilometer/Telemetry developments, both in terms of data models and performance, as well as improved support for live VM migration. As we had a positive experience with installing OpenStack using Mirantis OpenStack 4.x (based on Fuel 4.x), we thought it would be worth having a go at upgrading to Icehouse with Mirantis OpenStack 5.0 (of which Fuel 5.0 is a key component) on a small set of servers. Here’s a short note on how it worked out.


Mirantis Fuel – OpenStack installation for Noddy

While we have lots of experience working with cloud automation tools, for OpenStack in particular, it has taken us a little while to get around to checking out Fuel from Mirantis. Here, we give a short summary of our initial impressions of this very interesting tool.


Puppet and OpenStack: Part Two

In our [last article](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/) we set up OpenStack using puppet on one node. In this article we’ll look at setting up OpenStack in a clustered environment. For this we’ll use the following configuration of VMs:

* The hypervisor is [VirtualBox](http://www.virtualbox.org).

* 1 OpenStack **Controller** VM.

This VM will host all the main OpenStack services and will not run any virtual machines. It will run Ubuntu 12.04 and act as the puppetmaster. It requires 3 adapters:

– `eth0`: Host-only. Assign this an IP that matches the VirtualBox virtual switch.
– `eth1`: Host-only. Leave this unassigned. OpenStack will look after this interface and place a Linux kernel bridge on it, named `br100`.
– `eth2`: NAT. Set up this adapter as [shown in previous articles](http://www.cloudcomp.ch/2012/06/automating-the-icclab-part-one/).

* 2 OpenStack **Compute Node** VMs.

These VMs will only act as nodes that provide virtual machines and storage volumes when requested by the OpenStack controller VM. They will run Ubuntu 12.04 and act as puppet agents (slaves). Each VM requires 2 adapters:

– `eth0`: Host-only. Assign this an IP that matches the VirtualBox virtual switch.
– `eth1`: Host-only. Leave this unassigned. OpenStack will look after this.

In both cases above, `eth0` and `eth1` can be attached to the same virtual network or to different ones. In our case we set them to different virtual networks (`vboxnet0` and `vboxnet1` respectively).
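If you have not yet created these host-only networks, they can be set up from the host machine; as a rough sketch (the interface names and addresses follow the VirtualBox defaults we refer to above):

```bash
# each invocation creates the next host-only interface (vboxnet0, then vboxnet1)
VBoxManage hostonlyif create
VBoxManage hostonlyif create
# give vboxnet0 the address that the controller's eth0 network expects
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
```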

## OpenStack Controller Setup

Once the controller VM is up and running and you have configured `eth2`, the next task at hand is to configure puppet. You can use the same puppet configuration as was shown in [the article on creating an OpenStack all-in-one installation](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). The hostname configuration is also the same; however, as there are two other nodes to deal with in this setup, you should add their entries to your `/etc/hosts` file (unless you’re using a setup with a DNS server). With your puppet and hostname resolution configured, install the puppet modules as shown in the previous article. Where things begin to differ is in the configuration of `/etc/puppet/manifests/site.pp`.
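For illustration only (the hostnames and compute node addresses below are placeholders; use the FQDNs and `eth0` addresses of your own VMs), the extra `/etc/hosts` entries look like:

```
192.168.56.2   openstack-controller.example.lan   openstack-controller
192.168.56.3   compute-node-01.example.lan        compute-node-01
192.168.56.4   compute-node-02.example.lan        compute-node-02
```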

In the `site.pp` file we will be using 2 different `node` definitions: one for the controller node and a second for the compute nodes. For the controller node we explicitly set the node name to the fully qualified domain name of the controller VM. The definition is then:

[gist id=3029148]
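In case the gist does not render, a controller node definition of this shape looks roughly as follows. This is only a sketch: the FQDN is a placeholder, and we assume the `openstack::controller` class from the puppetlabs OpenStack modules used in the previous article, whose exact parameter list depends on the module version installed.

```puppet
# Placeholder FQDN - use the actual fully qualified domain name of your controller VM
node 'openstack-controller.example.lan' {
  # Parameters mirror the variables discussed further down
  class { 'openstack::controller':
    public_address    => $controller_node_address,
    public_interface  => $public_interface,
    private_interface => $private_interface,
    floating_range    => $floating_range,
  }
}
```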

In the second case we set the node name to a regular expression so that any host whose certificate has been signed by the puppetmaster and whose hostname matches the expression can take part in providing virtual machine capabilities. The definition is then:

[gist id=3029150]
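Again as a hedged sketch only (the hostname pattern is just an example, and we assume an `openstack::compute` class analogous to the controller class above):

```puppet
# Any agent whose certname matches this pattern is treated as a compute node,
# e.g. compute-node-01.example.lan, compute-node-02.example.lan (placeholder names)
node /^compute-node-\d+\.example\.lan$/ {
  class { 'openstack::compute':
    internal_address => $ipaddress_eth0,  # facter fact: the IP assigned to eth0
    libvirt_type     => $libvirt_type,
  }
}
```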

With your nodes defined in `site.pp` you will need to set some particular variables (a sketch of this block follows the list):

* Global:
  * set `$public_interface` to `'eth0'`
  * set `$private_interface` to `'eth1'`
* Controller-specific:
  * set `$controller_node_address` to the IP address that you have assigned to `eth0`. In our case it’s `'192.168.56.2'`
  * set `$floating_range` to `'192.168.56.128/25'`. This will give you enough floating IP addresses in this test setup.
* Compute node-specific:
  * set `$libvirt_type` to `'qemu'`
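Sketched out with the values above (and assuming plain top-scope variables in `site.pp`, as in the previous article), that block looks something like:

```puppet
# Global settings shared by both node definitions
$public_interface        = 'eth0'
$private_interface       = 'eth1'

# Controller-specific
$controller_node_address = '192.168.56.2'
$floating_range          = '192.168.56.128/25'

# Compute node-specific
$libvirt_type            = 'qemu'
```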

Now either ensure the puppet agent is running on the controller node or run the puppet agent in the foreground. Once the puppet agent on the controller node contacts the puppetmaster it will install all the necessary services for an OpenStack controller.
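To run the agent once in the foreground, something along these lines will do (the puppetmaster hostname is a placeholder for whatever you configured earlier):

```bash
sudo puppet agent --test --server puppet.example.lan
```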

## OpenStack Compute Node Setup
There is little here for you to do other than ensure that the puppet agent process is configured properly and can contact the puppetmaster. When each compute node first contacts the puppetmaster it issues a new certificate request and waits until the puppetmaster signs it. You can sign these quickly by issuing the command:

[gist id=3029151]
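If the gist does not display, the usual commands on the puppetmaster look like this (the certificate names depend on your compute node hostnames):

```bash
# list outstanding certificate requests
sudo puppet cert --list
# sign a single compute node, or everything that is currently waiting
sudo puppet cert --sign compute-node-01.example.lan
sudo puppet cert --sign --all
```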

Again as in the case of the all-in-one installation, the `nova-volumes` volume group is not set up, so [follow the previous steps to set it up](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/). Once the puppet agent on each of the compute nodes contacts the puppetmaster, it will install all the necessary services for an OpenStack compute node.
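For a test setup like this one, the volume group can simply be backed by a loopback file; a rough sketch (file path and size are just examples) is:

```bash
# create a 2GB backing file and attach it to a loop device
sudo dd if=/dev/zero of=/var/lib/nova-volumes.img bs=1M count=2048
sudo losetup /dev/loop0 /var/lib/nova-volumes.img
# create the volume group that nova-volume expects
sudo vgcreate nova-volumes /dev/loop0
```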

## Validating the Install
Once all services on all VMs have been installed you should be able to verify that all nodes are operational. To do this, execute the following:

[gist id=3029153]

You should see the following output:

[gist id=3029145]
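In case the gists above do not render, the check we rely on here is the standard `nova-manage` service listing:

```bash
sudo nova-manage service list
```

Each nova service should be listed as enabled with a smiley (`:-)`) in the State column; an `XXX` there means the service has not checked in recently.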

You may want to run some VMs at this stage, in which case you will need to import a VM image into glance. This is [detailed here](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/).
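As a reminder of what that involves (the CirrOS image URL is only an example, and the exact client syntax depends on the glance version the modules installed), it is roughly:

```bash
# assumes your OpenStack credentials are already sourced (e.g. from an openrc file)
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance add name="cirros-0.3.0" is_public=true \
  container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
```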

## Next up
We’ve now successfully installed a clustered OpenStack using puppet. In the next article we’ll look at expanding this to combine Foreman, puppet and OpenStack so that we can go from bare metal to a multi-node, clustered OpenStack installation. Stay tuned!

***Note***: If you want copies of the VMs used in this article ping [@dizz](http://www.twitter.com/dizz)

What’s Powering the ICCLab?

Here in the ICCLab, the framework of choice is OpenStack, which enjoys significant industry and academic support and is reaching good levels of maturity. The lab will support pre-production usage scenarios on top of OpenStack services as well as experimental research on OpenStack technology and extensions. Currently the actively deployed OpenStack services are the OpenStack compute service (including keystone, glance and nova), and Swift, an object storage service.

The lab is equipped with COTS computing units, each with 8×2.4 GHz cores, 64 GB RAM and 4×1 TB of local storage. To store templates and other data there is a 12 TB NFS/iSCSI storage system connected to the switch via a 10 Gbit Ethernet interface. The computing units are connected to a 1 Gbit network for data and another 1 Gbit network for control traffic.

At the heart of the ICCLab is the Management Server, which provides an easy way to stage different setups for different OpenStack instances (production, experimental, etc.). The Management Server provides DHCP, PXE and NFS services and some pre-configured processes which allow a bare-metal computing unit to be provisioned automatically, using Foreman, and then have preassigned roles installed, using a combination of Foreman and Puppet. This provides a great deal of flexibility and support for different usage scenarios.