How to Write a Cinder Driver

After too many hours of trial and error, and of searching for the right way to properly write and integrate your own backend into Cinder, here are all the steps and instructions you need. If you are looking for a guide on how to integrate your own Cinder driver, look no further.

Why do we need a Cinder driver, and why are we using Cinder at all? We created Hera, a ZFS-based distributed storage system, for SESAME, a 5G project in which we are project partners. Since SESAME uses OpenStack, integrating Hera meant writing a Cinder driver for it.

First of all, the Hera storage system exposes a RESTful API, so all of the logic and functionality is already available; the driver simply acts as a proxy between Cinder and Hera. To find out which driver methods to implement, one does not have to look very far: there is a page in the OpenStack Cinder docs that explains which methods need to be implemented and what they do. For a basic Cinder driver skeleton, check out this repository: Cinder Driver Example.

We decided on a plain volume driver, but you may want to write a different kind of driver, in which case you need to inherit from another base class, e.g. a SAN volume driver (SanDriver) or an iSCSI volume driver (ISCSIDriver). We also kept looking at other drivers (mainly the LVM driver) for guidance during the implementation.
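
To give you an idea of the overall shape, here is a minimal sketch of such a volume driver. The class name FooDriver and the empty method bodies are placeholders of ours, and the method list is not exhaustive, so check the Cinder docs for the full set:

# a minimal sketch of a volume driver skeleton; FooDriver is a placeholder
# name and the list of methods below is not complete
from cinder.volume import driver


class FooDriver(driver.VolumeDriver):
    """Proxy driver that forwards Cinder calls to the foo storage backend."""

    VERSION = '1.0.0'

    def create_volume(self, volume):
        # translate the Cinder request into a call against the backend API
        pass

    def delete_volume(self, volume):
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def get_volume_stats(self, refresh=False):
        # reports backend capabilities to the scheduler (see below)
        pass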

These methods are necessary for a complete driver, but while implementing it we wanted to try out individual methods as soon as they were written. Once the mandatory methods were implemented and we attempted to exercise the driver, nothing happened! We quickly realised that the get_volume_stats method returns crucial information about the storage system to the Cinder scheduler. If no values are returned, the scheduler knows nothing about the driver, so for a quick test we hardcoded the following dict and the scheduler stopped complaining.

{
    'volume_backend_name': 'foo',
    'vendor_name': 'bar',
    'driver_version': '3.0.0',
    'storage_protocol': 'foobar',
    'total_capacity_gb': 42,
    'free_capacity_gb': 42
}
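
In the real driver the dict is of course not hardcoded. A sketch of how get_volume_stats can build it, assuming a hypothetical _get_capacity() helper that queries the backend for its total and free space in GB, looks like this:

# inside FooDriver; _get_capacity() is a hypothetical helper of ours
def get_volume_stats(self, refresh=False):
    if refresh or not self._stats:
        total_gb, free_gb = self._get_capacity()
        self._stats = {
            'volume_backend_name':
                self.configuration.safe_get('volume_backend_name') or 'foo',
            'vendor_name': 'bar',
            'driver_version': self.VERSION,
            'storage_protocol': 'foobar',
            'total_capacity_gb': total_gb,
            'free_capacity_gb': free_gb,
        }
    return self._stats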

In order to provide parameters to your driver, you can also add them in the following way, as part of the driver implementation. Here we add a REST endpoint as a configuration option to the volume_opts list.

from oslo_config import cfg

volume_opts = [
    cfg.StrOpt('foo_api_endpoint',
               default='http://0.0.0.0:12345',
               help='the API endpoint at which the foo storage system sits')
]

All of the options defined this way can be overridden in the /etc/cinder/cinder.conf file, under the configuration section of your own driver.
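
Inside the driver, the usual way to pick these options up (including any overrides from cinder.conf) is to append them to the driver's configuration object. A small sketch, reusing the FooDriver class from above:

# inside FooDriver
def __init__(self, *args, **kwargs):
    super(FooDriver, self).__init__(*args, **kwargs)
    # makes foo_api_endpoint (and any cinder.conf override) available
    # on self.configuration
    self.configuration.append_config_values(volume_opts)
    self.endpoint = self.configuration.foo_api_endpoint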

When you get to implementing the actual functionality of the driver, you will want to know what Cinder passes in. The volume dict parameter is of particular interest (see the sketch after the list below); it contains these values:

size
host
user_id
project_id
status
display_name
display_description
attach_status
availability_zone
and, if any of the following are set:
migration_status
consistencygroup_id
group_id
volume_type_id
replication_extended_status
replication_driver_data
previous_status
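
To show how these values end up being used, here is a sketch of a create_volume implementation that forwards the request to the backend's REST API; the requests call and the /volumes URL are assumptions about our foo backend, not part of Cinder itself:

import requests

# inside FooDriver
def create_volume(self, volume):
    payload = {
        'name': volume['display_name'],
        'size_gb': volume['size'],
        'project': volume['project_id'],
    }
    # self.endpoint comes from the foo_api_endpoint option shown earlier
    response = requests.post('%s/volumes' % self.endpoint, json=payload)
    response.raise_for_status()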

To test your methods quickly and easily, it is very important that the driver file lives in the directory where all the other Cinder drivers are installed, otherwise Cinder will, naturally, not find it. Where that is depends on how OpenStack was installed on your machine. With devstack the drivers live in /opt/stack/cinder/cinder/volume/drivers, with packstack in /usr/lib/python2.7/site-packages/cinder/volume/drivers.

There was one last headache to resolve before our Cinder driver was fully integrated. Once the driver is placed in the correct directory, the necessary options (shown below) have to be added to the /etc/cinder/cinder.conf file.

# first we need to enable the backend (lvm is already set by default)
enabled_backends = lvmdriver-1,foo

# then add these options to your driver configuration at the end of the file
[foo]
# this is super important!!!
volume_backend_name = foo
# the path to your driver class
volume_driver = cinder.volume.drivers.foo.FooDriver
# also set the options that you defined yourself (volume_opts)
foo_api_endpoint = http://127.0.0.1:12956

You must set volume_backend_name because it links Cinder to the correct backend; without it nothing will ever work (NOTHING!).

Finally, to be able to execute operations against the new backend, you must create a volume type for your Cinder driver and link it to the backend name:

cinder type-create foo
cinder type-key foo set volume_backend_name=foo

Now restart the Cinder services (c-vol, c-sch, c-api) and you should be able to use your own storage system through Cinder.
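
As a quick smoke test (the 1 GB size is arbitrary), create a volume with the new type and check that it shows up as available:

cinder create --volume-type foo 1
cinder list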

