Ceilometer can collect a large amount of data, particularly in a system with a large number of servers and high activity: in such a scenario, the number of meters and samples can grow large, which affects Ceilometer performance and gives rise to very large databases. In our particular case, we are studying energy consumption in servers and how resource utilization (mainly CPU) relates to overall energy consumption. The energy data is collected through Kwapi and stored in Ceilometer every 10 seconds (yes, this is probably too fine-grained!). We had problems where the database grew too quickly, filling up the root disk partition on the controller and causing significant problems for the system. In this blog post, we describe the approach we now use for managing Ceilometer data, which ensures that the resources consumed by Ceilometer remain under control.
In some cases, if you have done a standalone installation and haven't considered Ceilometer at the installation stage, the database may end up in the root partition: this is not ideal for quite obvious reasons, as filling up the root file system can render the system inoperable. For this reason, we advise that the Ceilometer db backend be put on a partition other than the root partition. For MongoDB, this is as simple as changing the dbpath parameter in the /etc/mongodb.conf config file and then restarting mongo.
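For illustration, assuming a dedicated partition mounted at /srv/mongodb (the mount point is just a placeholder), the change looks something like this:

# /etc/mongodb.conf -- point mongo's data directory at the dedicated partition
dbpath=/srv/mongodb

# stop mongo, move the existing data files over, then start it again
# (Ubuntu service names shown; these vary by distribution)
sudo service mongodb stop
sudo mv /var/lib/mongodb/* /srv/mongodb/
sudo service mongodb start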
Backing up MongoDB Data
Now, with the database up and running in a different partition, we can set up a cron job which backs up the Ceilometer data regularly. We wrote a simple Python script to dump selected contents of the MongoDB monthly and compress it; the data can then be archived somewhere else. In our case, we found that the dumped data compresses down to approximately 10% of the size of the dump.
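A crontab entry along the following lines triggers the backup at 02:00 on the first of each month; the script path and log file are placeholders for whatever you use:

# m h dom mon dow  command
0 2 1 * * /usr/local/bin/backup_ceilometer.py >> /var/log/ceilometer-backup.log 2>&1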
The simple Python script we wrote invokes mongodump via the Python subprocess library (so it only works with MongoDB databases right now). We used subprocess because Python MongoDB libraries such as pymongo are not really intended for large imports or exports; they are aimed at standard database interactions. We did, however, need pymongo to find out which collections exist in the db, and this information was then used as input to mongodump. The script then backs up the collections into a temporary directory and compresses them with zip.
The script requires pymongo and zip support, including ZIP64 support (needed for archives larger than 4GB). The tmp and zip directories can be changed in the script.
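A minimal sketch of the script is shown below; collection selection and error handling are stripped down, and the directory names are placeholders:

#!/usr/bin/env python
# Sketch: dump each collection in the ceilometer db with mongodump,
# then compress the whole dump directory into a single zip archive.
import os
import subprocess
import zipfile

from pymongo import MongoClient

TMP_DIR = '/tmp/ceilometer_dump'      # where mongodump writes its BSON output
ZIP_PATH = '/backup/ceilometer.zip'   # where the compressed archive ends up

client = MongoClient('localhost', 27017)
# collection_names() is the pymongo 2.x call (list_collection_names() in newer releases)
for coll in client['ceilometer'].collection_names():
    subprocess.check_call(['mongodump', '--db', 'ceilometer',
                           '--collection', coll, '--out', TMP_DIR])

# allowZip64=True is what needs ZIP64 support: it permits archives over 4GB
with zipfile.ZipFile(ZIP_PATH, 'w', zipfile.ZIP_DEFLATED, allowZip64=True) as zf:
    dump_dir = os.path.join(TMP_DIR, 'ceilometer')
    for fname in os.listdir(dump_dir):
        zf.write(os.path.join(dump_dir, fname), fname)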
Setting up ceilometer-expirer
Setting up an archiving system as above is not sufficient to ensure that the db size is controlled; it is also necessary to remove redundant data from the db. Fortunately, with mongo this is very straightforward: mongo has built-in support for data expiry and performs internal cleanup operations to remove expired data.
Ceilometer has support for data expiry: the time_to_live parameter in ceilometer.conf controls how long the data remains in the database. The time_to_live for samples defaults to -1 in Ceilometer, which means that the samples have no expiry date. The ceilometer-expirer component of Ceilometer manages data expiry, and its behaviour depends on which db backend is being used. In the case of mongo, if the time_to_live parameter is not -1, a new index with a timestamp field is added to the collections and mongo itself deletes the samples older than the configured TTL value. In the case of other dbs, such as MySQL, ceilometer-expirer proactively removes expired entries from the database.
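For example, to keep 32 days of samples (2764800 seconds, matching the value used further below), the configuration would look something like this:

# /etc/ceilometer/ceilometer.conf
[database]
# expire samples after 32 days (value in seconds); -1 disables expiry
time_to_live = 2764800

With an SQL backend, keep in mind that ceilometer-expirer has to be run periodically (e.g. from cron) for the expired rows to actually be deleted; with mongo, the TTL index does the work by itself.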
It is worth noting that, as well as the expired meter/sample data, ceilometer-expirer removes other entries in the db which are no longer linked to any other entries, e.g. information relating to an instance which was terminated some time ago.
In case you want to add expiry information to a particular collection directly within the mongo backend, you can do so with the following command in the mongodb client:
db.meter.ensureIndex( { "timestamp": 1 }, { expireAfterSeconds: 2764800 } )
However, using this approach is not advised – it is much more sensible to configure ceilometer appropriately to ensure data expires correctly.
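That said, whether the TTL index was created by Ceilometer or by hand, you can verify that it is in place from the mongo shell; the entry carrying an expireAfterSeconds field is the TTL index:

// list the indexes on the meter collection
db.meter.getIndexes()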
The above basic approach to ceilometer data management should ensure that ceilometer data does not grow uncontrollably and makes it highly unlikely that ceilometer data would render the system inoperable.
If you’re interested in this topic, you may be interested in some of our other blog posts:
- Collecting energy consumption data using Kwapi in Openstack
- A Web Application to Monitor and Understand Energy Consumption in an Openstack Cloud
- Understanding the relationship between ceilometer processor utilisation and system energy consumption for a basic scenario in Openstack
- Migration of Ceilometer Energy Consumption Data from Havana/MySQL to Icehouse/MongoDB