The Cloud Robotics initiative was recently kicked off at SPLab.
Invited by one of our collaboration partners, we attended ROSCon 2015 in Hamburg.
ROSCon is the ROS (Robot Operating System) conference, dedicated to all research, development, and practice around ROS. 2015 was the fourth edition of the event, and it was so crowded with ROS enthusiasts that it completely sold out.
ROS is the most prominent open source solution for robotic software, with a growing community and widespread industrial interest. Still, its integration with cloud computing is just in its infancy, and we wanted to learn more about the problems and needs of the people using it. There is interest in ROS from companies like Canonical, Bosch, BMW, Qualcomm, and Fetch Robotics, all of which were present at the conference.
For those of you who still don’t know ROS, we highly recommend taking a tour of the ROS wiki Tutorial. ROS is a very exciting technology consisting of a software framework and a set of tools for writing robotics software.
“ROS was built from the ground up to encourage collaborative robotics software development”
In our previous blog post we presented an overview of Nova Cells, describing its architecture and how a basic configuration can be set up. After some further investigation it is clear why this is still considered experimental and unstable; some basic operations are not supported yet, e.g. floating IP association, and there are inconsistencies in the management of security groups between the API and Compute Cells. Here, we focused on using only the key projects in OpenStack, i.e. nova, glance and keystone, and avoided adding extra complexity to the system; for this reason legacy networking (nova-network) was chosen instead of Neutron – Neutron is generally more complex and we had seen problems reported between Neutron and Cells. In this blog post we describe our experience enabling floating IPs in an OpenStack Cells architecture using nova-network, which required making some small modifications to the nova python libraries.
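For context, this is the standard nova-network floating IP workflow that the modifications aim to make work in a Cells deployment. The pool name, instance name and address below are purely illustrative placeholders:

```shell
# List the floating IP pools configured in nova-network
nova floating-ip-pool-list

# Allocate a floating IP from a pool ("public" is an illustrative pool name)
nova floating-ip-create public

# Associate the allocated address with a running instance
# ("my-instance" and 203.0.113.10 are placeholders)
nova add-floating-ip my-instance 203.0.113.10

# Check that the instance now shows the floating IP
nova list
```

In a stock Cells setup the association step is where things break, since the floating IP operations are not routed correctly between the API cell and the compute cells.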
In a previous series of blog posts (1, 2, 3), we discussed how to install Monasca to monitor OpenStack, how to create alarms based on specific events happening in the monitored system, and how to set up notifications when any of these alarms are triggered.
In the context of the Cloud Orchestration initiative and the Hurtle framework, we go further by using Monasca to detect events in orchestrated applications and perform callbacks to the orchestrator so that it can react to them. The motivation behind this is to provide Hurtle with processes able to perform continuous health management of any orchestrated application.
While the Monasca agent was initially designed to monitor the cloud itself, it is easy to install on any platform, making it simple to monitor the behaviour of deployed VMs. Continue reading
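To give a flavour of how such a callback can be wired up, here is a minimal sketch of the request bodies sent to the Monasca API to create a webhook notification and an alarm definition that triggers it. All names, the threshold expression and the callback URL are illustrative assumptions, not Hurtle's actual configuration; in practice the same payloads are submitted via python-monascaclient:

```python
def make_webhook_notification(name, url):
    """Body for POST /v2.0/notification-methods: a WEBHOOK notification
    that Monasca calls when an alarm changes state."""
    return {"name": name, "type": "WEBHOOK", "address": url}

def make_alarm_definition(name, expression, notification_ids):
    """Body for POST /v2.0/alarm-definitions: fire the given notifications
    on every state transition of the alarm."""
    return {
        "name": name,
        "expression": expression,
        "alarm_actions": notification_ids,
        "ok_actions": notification_ids,
        "undetermined_actions": notification_ids,
    }

# Hypothetical orchestrator callback endpoint
notification = make_webhook_notification(
    "orchestrator-callback",
    "http://orchestrator.example.com/health")

# Example alarm: average user CPU above 90% on the monitored VM
alarm = make_alarm_definition(
    "high-cpu",
    "avg(cpu.user_perc) > 90",
    ["<notification-id-returned-by-monasca>"])
```

The orchestrator then receives an HTTP POST from Monasca whenever the alarm transitions, and can decide how to react (e.g. restart or re-provision the affected service instance).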
Service scheduling and task placement within large-scale clusters is receiving a lot of interest in the cloud community at present. Moreover, service scheduling is one of the keystones of our recently kicked off ACeN project, and we finally got a chance to experiment with the technology that is currently a frontrunner in this area – Apache Mesos. As Mesos provides much more control over service placement than currently available built-in IaaS schedulers, it elegantly addresses many data-center problems such as task data locality, efficient resource utilization and accommodation of load variation. This blog post describes the Mesos architecture and its basic workflow, and explains why we think it’s a big deal in the cloud context as well.
Carlo Vallati was a visiting researcher during Aug/Sept 2015. (See here for a short note on his visit). In this post he outlines how cloud computing needs to evolve to meet future requirements.
Despite the increasing usage of cloud computing as an enabler for a wide range of applications, the next wave of technological evolution – the Internet of Things and Robotics – will require the extension of the classical centralized cloud computing architecture towards a more distributed one that includes computing and storage nodes installed close to users and physical systems. Edge computing will also require greater flexibility, necessary to handle the huge increase in the number of devices – a distributed architecture will guarantee scalability – and to deal with the privacy concerns arising among end users – edge computing will limit exposure of private data. Continue reading
We ICCLab folk are always interested in new ideas, particularly those that could have a profound impact on computing in general and cloud computing in particular. Consequently, we couldn’t miss out on the opportunity of attending ORConf – a conference loosely centred around open source silicon – which was free and (more or less) just down the road at CERN.
The conference itself was superb, comprising an excellent mix of hobbyists/open source advocates, industry folks and academics, with some people wearing more than one hat. There was also quite a diverse set of backgrounds, ranging from ASIC designers to FPGA folks to compiler designers to some simpler software types. The quality of attendees was impressive, with excellent people from high-profile organizations such as Intel, Google, Qualcomm, Nvidia, the University of Cambridge, EPFL, ETH and Berkeley (although many of the industry folk were not specifically representing their employers).
The lab has been fortunate to have a successful strategic relationship with IAESTE Switzerland. Every year we have been getting about two exchange student interns through IAESTE from around the world. The students have learnt and grown professionally within our team, and we learn a lot from them as well, boosting our rich international representation.
Our cooperation is recognised and rewarded by IAESTE in this year’s annual review magazine, which features an interview with our lab head, aka TMB. Have a look! Continue reading