Tag: icclab (page 1 of 2)

Some Cloud Robotics News

We haven’t updated the blog with robotics news in a while, but we actually have plenty to tell and much more to come soon.

First of all, the robotics team has gained new strength with the addition of Andy, Dimitris, Leo, Rod, and Thomas. Tobi and Lukasz have left us, but we continue to collaborate with them remotely.

On top of amazing people, the team has also grown in terms of robots. Since October we have finally started receiving our new robotic hardware and got to play with it.

Continue reading

How to Write a Cinder Driver

After too many hours of trial and error, and of searching for the right way to write and integrate your own backend in Cinder, here are all the necessary steps and instructions. So if you are looking for a guide on how to integrate your own Cinder driver, look no further. Continue reading

StickTrack – ICCLab hackathon project

[Note: this project took place as part of the ICCLab Hackathon – more general information on the ICCLab Hackathon is here].

Shared inventories, such as office fridges containing items with expiration dates, have always been a problem to manage. Our very own fridge is no exception. At the moment, about 35 of us (ICCLab & DataLab people) share the same fridge and go through the everyday hassle of remembering which food items we have in it and which ones need to be consumed or thrown away before their expiration date. After a terrible period of “fridge-chaos” (November 2015), Annette came up with a practical solution to the problem: handwritten labels on each food item in the fridge. One person (thanks, Denis!) was assigned solely to maintain the fridge’s healthy ecosystem and check for any growth of life, meaning that Denis was responsible for continuously checking the expiration dates of food items and informing their owners to take immediate action. Since then we have thought of ways to automate this 1845 technology, and came up with StickTrack – a QR-code-based inventory tracking solution that notifies our small fridge community about what’s going on inside the fridge. Lidia, Martin, Oleksii, Andy, Piyush, Amrita and I (also known as the “Un Palo” team) decided to make the future happen during our internal 3-day hackathon. Continue reading

Campus Party – The O2 London 3-6 September 2013

Campus Party is an annual week-long, 24-hours-a-day technology festival where thousands of “Campuseros” (hackers, developers, gamers and technophiles), equipped with laptops, camp on-site and immerse themselves in a truly unique environment.

It is the biggest electronic entertainment event in the world, uniting the brightest young minds in technology and science under the idea that “the Internet is not a network of computers, it’s a network of people.”

This year’s event is strongly supported by the EU Commission (see photos), with the participation of the FI-PPP FI-WARE project (ZHAW ICCLab is a partner), which will launch challenges to promote the exposure, dissemination, and possible take-up of the FI-WARE technologies. All the big names from the business and gaming sectors are present as well with their respective platforms.

FI-WARE will offer technology building blocks (Generic Enablers) providing certain functionalities that can be used by a large set of applications and developers at the Campus Party. Examples are cloud hosting, big data analysis, location software, Complex Event Processing, Publish/Subscribe Broker, Marketplace, IoT Things Management, Security Monitoring, Identity Management, etc. These will be made available via the Open Innovation Lab.
Yesterday the workshop on IoT technologies attracted many young developers to the FI-WARE stand to receive the IoT kit, which is necessary to develop IoT applications for the contest. Today ICCLAB will contribute to the FI-WARE workshop “Advancing Web User Interfaces” (16:00-18:00). Using FI-WARE Generic Enablers (GEs), you can create fun applications and participate in the challenges. The overview will teach how to perform advanced multimedia stream processing on GEs.
Everything here is very interesting, with many exhibitions on 3D scanners and printers, augmented reality, games and brain-controlled demos.

New ICCLab Testbed at Equinix Datacenter

The ICCLab now has a new testbed for its work and research in the cloud computing field at none other than the Zurich datacenter of Equinix – one of our collaboration partners and generous donor of the rack space.
Continue reading

ICCLab @ Swiss Academic Cloud Computing Experience

We presented at the Swiss Academic Cloud Computing Experience conference. Below are the slides as presented (or you can grab the PDF here).

ICCLab Present on Ceilometer at 2nd Swiss OpenStack User Group Meeting

On the 19th of February the 2nd Swiss OpenStack User Group Meeting took place. One of the presentations, on Ceilometer, was given by Toni and Lucas from the ICCLab. They talked about the history, the current and future features, the architecture and the requirements of Ceilometer, and explained how to use and extend it. You can take a look at the presentation here:

A video of the presentation is available here

ICCLab & Swiss Informatics Society – Cloud Computing Special Interest Group

We, the ICCLab, are proud to announce that the recent Presidential Conference of the Swiss Informatics Society accepted our proposal (slides) to set up a Special Interest Group on Cloud Computing.

The SIG is currently being formed. If you wish to participate in it and influence the future of Swiss cloud computing in this context, please don’t hesitate to contact us. Any active participation is more than welcome.

Feel free to join our LinkedIn Group.


Parallel OpenStack Multi Hosts Deployments with Foreman and Puppet

by Josef Spillner

In our lab we need one environment running OpenStack Essex and another running OpenStack Folsom. Here’s a guide on how we set up our infrastructure to support the two environments in parallel.

To install Essex using Puppet/Foreman please follow the guides:

  • [OpenStack Puppet Part1](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-one/),
  • [OpenStack Puppet Part2](http://www.cloudcomp.ch/2012/07/puppet-and-openstack-part-two/),
  • [OpenStack Puppet/Foreman](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/)

This article only describes how to integrate OpenStack Folsom with Puppet/Foreman. It is assumed that Puppet and Foreman are already set up according to the articles mentioned above.

Two environments will be created: `stable` and `research`. The stable environment holds the Puppet classes for Essex, and the research environment the Folsom classes.
Create the following directories:

[gist id=4147331]
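The gist above isn't rendered here; as a minimal sketch (the exact paths are assumptions based on the default Puppet layout, not taken from the gist), the two module trees could be created like this:

```shell
# Assumed layout: one module directory per Puppet environment under the
# default Puppet configuration directory. Override PUPPET_DIR if yours differs.
PUPPET_DIR="${PUPPET_DIR:-/etc/puppet}"
mkdir -p "$PUPPET_DIR/modules/stable"    # Essex classes go here
mkdir -p "$PUPPET_DIR/modules/research"  # Folsom classes go here
```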

Add the research and stable module path to /etc/puppet/puppet.conf

[gist id=4147341]
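The content of the gist isn't shown here, but the idea is a per-environment `modulepath`. A sketch of the `puppet.conf` additions, with assumed paths:

```ini
# Sketch of /etc/puppet/puppet.conf additions; the module paths are
# assumptions, not copied from the gist.
[stable]
modulepath = /etc/puppet/modules/stable:/etc/puppet/modules/common

[research]
modulepath = /etc/puppet/modules/research:/etc/puppet/modules/common
```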

Clone the Folsom classes:
[gist id=4147352]

Add compute.pp, controller.pp, all-in-one.pp and params.pp:
[gist id=4292299]

While applying the controller.pp classes I encountered the following error:
[gist id=4147369]

This issue is described [here](https://github.com/puppetlabs/puppetlabs-horizon/pull/26).

To overcome this issue, add `include apache` in:
[gist id=4147377]
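The referenced pull request boils down to declaring the apache class before horizon configures its vhost. A rough sketch of the workaround (the class body here is an assumption, modeled on the icclab:: modules mentioned below):

```puppet
# Workaround sketch: declare apache before horizon so the vhost
# configuration does not fail (see puppetlabs-horizon pull #26).
class icclab::controller {
  include apache
  class { 'horizon':
    secret_key => 'dummy_secret_key',
  }
}
```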

According to a [previous article](http://www.cloudcomp.ch/2012/07/foreman-puppet-and-openstack/) describing an issue with multiple environments, the following steps are required:
[gist id=4147408]

After that you can create new hostgroups in Foreman and import the newly added classes (More – Puppet Classes – Import from local smart proxy).
Define the stable and research environments, and 3 hostgroups in the research environment: os-worker, os-controller, os-aio.

Next, assign the icclab::compute and icclab::params classes to the worker hostgroup, the icclab::controller and icclab::params classes to the controller hostgroup, and icclab::aio and icclab::params to the aio hostgroup.

Since we are using Ubuntu 12.04, the Folsom repository has to be added to your installation. To do that, create a new provisioning template: copy the existing one and add lines 14-18.
Name: Preseed Default Finish (Research)
Kind: finish
[gist id=4292436]
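The exact template lines are in the gist; as a hedged sketch, Folsom packages for Ubuntu 12.04 (Precise) come from the Ubuntu Cloud Archive, so the added lines would look roughly like this (a sketch, not the gist's contents):

```shell
# Sketch of the extra finish-template lines: enable the Ubuntu Cloud
# Archive pocket that carries Folsom packages for Ubuntu 12.04 (Precise).
apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" \
    > /etc/apt/sources.list.d/folsom.list
apt-get update
```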

Please also note the interface settings in lines 1-7. Without these settings it was not possible to ping or ssh into VMs running on different physical nodes. This hint was found [here](http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/#network-configuration).


After that click on Association, select Ubuntu 12.04 and assign the research hostgroup and environment.

In our installation we got this error in the VM console log:

[gist id=4292393]

In our case it was due to iptables being wrongly configured by OpenStack.
Adding the parameters metadata_host and routing_source_ip to nova.conf on the nova-network nodes solved the issue. To make this permanent with Puppet, add lines 4, 34 and 35 in `/etc/puppet/modules/research/nova/manifests/compute.pp`:

[gist id=4292497]
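The gist carries the exact Puppet changes; the resulting nova.conf entries look roughly like this (the IP addresses are placeholders, not values from our setup):

```ini
# Sketch of nova.conf on each nova-network node; IPs are placeholders.
# metadata_host: where the nova metadata API actually runs, so the
# iptables DNAT rule for 169.254.169.254 points at the right node.
metadata_host = 192.168.0.10
# routing_source_ip: the address used to SNAT outbound VM traffic
# leaving this particular nova-network node.
routing_source_ip = 192.168.0.21
```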

With these steps in place you should be able to provision your physical hosts across both Puppet environments. In the next article we’ll show how we’ve segmented our network and what the next steps are in evolving our network architecture.

 

 

ICCLab Infrastructure Relocation

by Josef Spillner


The relocation of the ICCLab hardware and the integration of 9 additional nodes is now complete. The whole move was done within one day thanks to the support of Pietro, Philipp and Michael – thanks, guys! Our lab now runs 15 compute nodes, 1 controller node and 1 NAS. We will segment this infrastructure into a development environment of 10 nodes, where we can develop and test our work on OpenStack, and a production environment of 5 nodes for production purposes. As the next step we will redeploy OpenStack by means of the automation tools Puppet and Foreman, as presented at the EGI Technical Forum. Let’s see how fast we can deploy 15 nodes from scratch! We’ll be studying, timing and evaluating it!
