The Green IT Special Interest Group (SIG) of the Swiss Informatics Society met yesterday (29/10/14) in Zurich. ZKB kindly hosted the event. A full meeting report will probably appear on the group’s website, but here we just capture some of our reflections on the group’s work.
This was our second time attending the group's meetings. It attracts a very interesting cross-section of people: some with an interest in making IT systems themselves more energy efficient, and others who want to use IT systems to make other verticals more energy efficient.
The group is led by the very active and engaging Klaus Meyer who does a fantastic job of defining the strategy and direction of the group, representing the group to interested parties, running the group meetings and generally banging the Green IT drum.
The meeting is attended by a diverse mix of people, including Data Centre practitioners interested in increasing energy efficiency in their facilities – among them representatives of the Swiss financial and insurance sectors. There are also academics who approach energy efficiency from different perspectives, as well as consultants and small companies active in the space. All in all, the group has a healthy mix of perspectives, which leads to interesting discussions.
At this week’s meeting the host ZKB gave a presentation on the importance of energy efficiency in their IT systems, describing how they have achieved very significant savings in their operations through advanced DC design, largely focused on cooling and airflow. This was followed by a very interesting presentation by the team from Born Green Technologies on a system they are working on which supports understanding of the energy consumption of the IT systems within an organization, mostly focused on the equipment on people’s desks – phones, computers, monitors, etc. They described a case study they performed with a mid-size client in which they were able to obtain 25% savings on the energy bill.
The group is receiving increasing interest – there is a so-called Antenna group being formed in La Suisse Romande – and we’re sure it will go from strength to strength in the coming years. From our point of view, we’re very happy to be associated with it and will continue to contribute as it grows.
In one of our earlier blog posts, we described some tests we performed to determine how server power consumption increases with compute load; this post is something of a variation on that work, but here we put the focus on work taking place within VMs rather than within the host OS. The point is to understand how VM load and energy consumption correlate. Here we document the results obtained.
As with our previous work, we focused on compute-bound loads – in this test we increased the compute load on the servers by performing π calculations inside the VMs. We used homogeneous VMs – all of the same flavor, with the following configuration: 2GB RAM, 20GB local disk and 1 VCPU.
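The kind of compute-bound load we ran inside each VM can be sketched as follows. This is an illustrative example, not the actual test harness: the function name and iteration count are our own, but any long-running π approximation (here the Leibniz series) keeps a single VCPU saturated in a controllable way without touching disk or network.

```python
# Hypothetical sketch of a compute-bound load generator run inside a VM:
# approximating pi with the Leibniz series burns CPU cycles in proportion
# to the iteration count, so the load duration is easy to control.
def approximate_pi(iterations: int) -> float:
    """Approximate pi using the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ..."""
    total = 0.0
    for k in range(iterations):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

if __name__ == "__main__":
    # Each extra order of magnitude of iterations adds CPU time (and hence
    # measurable power draw) without any I/O.
    print(approximate_pi(10_000_000))
```

Running several instances of such a script across a set of identical VMs makes it straightforward to step the aggregate compute load up and down while observing host power consumption.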
The primary focus of the Energy Theme is on reducing the energy consumption of cloud computing resources. As compute nodes consume most of the energy in cloud computing systems, work to date has been focused on reducing the energy consumed by compute loads, particularly within the OpenStack context. Moreover, as servers become increasingly instrumented, it is clear that there is potential in understanding energy consumption at finer granularity, which can ultimately lead to energy efficiencies and cost savings.
In the current work, the primary mechanism to achieve energy efficiencies is load consolidation combined with power control of servers. This could be augmented with managing server CPU power states, but it remains to be seen if this will lead to significant power savings. Another tool to achieve energy efficiencies is to add elastic load when the resources are underutilized – this does not reduce the overall energy consumption per se, but rather enables providers to get more bang for their energy buck.
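The elastic-load idea above can be illustrated with a small sketch. This is our own simplified illustration, not the project's code: the function name and the watermark thresholds are assumptions, but the logic captures the point that low-priority batch work should only be admitted when powered-on servers have idle headroom.

```python
# Illustrative sketch (not the subsystem's actual code) of the elastic-load
# decision: when average host utilization is low, admit low-priority batch
# work so that powered-on servers do useful computation for the same energy.
def should_admit_batch_work(cpu_utilizations, low_watermark=0.3):
    """Return True if average utilization leaves headroom for elastic load.

    cpu_utilizations: per-host CPU utilization fractions in [0, 1].
    low_watermark: illustrative threshold below which we add batch work.
    """
    avg = sum(cpu_utilizations) / len(cpu_utilizations)
    # Below the watermark there is spare capacity we have already paid
    # (in energy terms) to keep powered on; above it, keep the headroom
    # free for interactive demand.
    return avg < low_watermark

print(should_admit_batch_work([0.10, 0.20, 0.15]))  # → True
print(should_admit_batch_work([0.80, 0.90, 0.85]))  # → False
```

In practice the decision would also weigh the burstiness of the interactive load, which is exactly what the load characterization component described below is for.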
The current architecture of the Cloud Energy Efficiency Subsystem is shown above with the components performing the following functions:
- an energy monitoring component: this obtains information on the energy consumption of the entire system – it may also make some kind of abstraction rather than working with highly granular data for each node;
- a load characterization component: this component primarily uses Ceilometer data to understand what is going on in the cloud – it makes an abstraction of the usage of the system over different timescales and, in particular, determines what level of burstiness exists in the load patterns;
- a load consolidation mechanism: this will take the info on the system state and identify where load consolidation can be performed – it then issues a set of live migration instructions to the cloud to perform the consolidation. In general, it will be necessary to add filters to support different hypervisors, bare metal servers, etc., which makes it more complex;
- physical server manager: this will turn off servers and turn them on as necessary – this will take input from the load characterization component to determine how much spare capacity to keep in the system to deal with variations in demand.
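The interplay between the consolidation mechanism and the physical server manager can be sketched as a simple packing problem. This is a minimal illustration under the assumption that each VM's load can be summarized as a single CPU fraction; the function name and the first-fit-decreasing heuristic are our own choices, not the subsystem's actual algorithm.

```python
# Minimal sketch of the consolidation step: pack VM loads onto as few
# hosts as possible (first-fit decreasing); hosts left empty by the plan
# are candidates for the physical server manager to power off.
def consolidate(vm_loads, host_capacity=1.0):
    """Return a packing plan: a list of hosts, each a list of VM loads."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):  # place biggest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # fits on an already-open host
                break
        else:
            hosts.append([load])   # no host fits: open a new one
    return hosts

# e.g. six VMs currently spread over six hosts can be packed onto two:
plan = consolidate([0.5, 0.4, 0.3, 0.3, 0.2, 0.2])
print(len(plan))  # → 2
```

A real implementation would drive live migrations from such a plan and leave a configurable amount of spare capacity, informed by the burstiness estimate from the load characterization component, rather than packing hosts to the brim.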
The specific interactions between these components are evolving, as this is a work in progress.
At present, the theme comprises two initiatives. These are