OpenStack Archive – Service Engineering (ICCLab & SPLab)
A Blog of the ZHAW Zurich University of Applied Sciences

Experience using Kolla Ansible to upgrade Openstack from Ocata to Queens (27 Jul 2018)

We made a decision to use Kolla-Ansible for Openstack management approximately a year ago and we’ve just gone through the process of upgrading from Ocata to Pike to Queens. Here we provide a few notes on the experience.

By way of some context: ours is a moderately sized system with 3 storage nodes, 7 compute nodes and 3 controllers configured in HA. Our systems were running CentOS 7.5 with a 17.05.0-ce docker engine and we were using the centos-binary Kolla containers. Being an academic institution, usage of our system peaks during term time – performing the upgrade during the summer meant that system utilization was modest. As we are lucky enough to have tolerant users, we were not excessively concerned with ensuring minimal system downtime.

We had done some homework on test systems in different configurations and had gained some confidence with the Kolla-Ansible Ocata-Pike-Queens upgrade – we even managed to ‘upgrade’ from a set of centos containers to ubuntu containers without problems. We had also done an upgrade on a smaller, newer system which is in use and it went smoothly. However, we still had a little apprehension when performing the upgrade on the larger system.

In general, we found Kolla Ansible good and we were able to perform the upgrade without too much difficulty. However, it is not an entirely hands-off operation and it did require some intervention for which good knowledge of both Openstack and Kolla was necessary.

Our workflow was straightforward, comprising the following three stages:

  • generate the three configuration files: passwords.yml, globals.yml and multinode.ha
  • pull down all containers to the nodes using kolla-ansible pull
  • perform the upgrade using kolla-ansible upgrade

We generated the globals.yml and passwords.yml config files by copying the empty config files from the appropriate kolla-ansible git branch to our /etc/kolla directory, comparing them with the files used in the previous deploy and copying changes from the previous versions into the new config file. We used the approach described here to generate the correct passwords.yml file.
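In outline, that approach can be scripted roughly as follows – a minimal sketch assuming the kolla-genpwd and kolla-mergepwd helpers shipped with kolla-ansible (the file paths are illustrative):

# copy the empty password file from the new kolla-ansible branch
cp kolla-ansible/etc/kolla/passwords.yml /etc/kolla/passwords.yml.new
# generate fresh values for any newly added entries
kolla-genpwd -p /etc/kolla/passwords.yml.new
# merge the existing deployment's passwords over the generated ones
kolla-mergepwd --old /etc/kolla/passwords.yml.old \
    --new /etc/kolla/passwords.yml.new \
    --final /etc/kolla/passwords.yml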

Pulling appropriate containers to all nodes was straightforward:

/opt/kolla-ansible/tools/kolla-ansible \
    -i /etc/kolla/multinode.ha pull

It can take a bit of time, but it’s sensible as it does not have any impact on the operational system and reduces the amount of downtime when upgrading.

We were then ready to perform the deployment. Rather than run the system through the entire upgrade process, we chose a more conservative approach in which we upgraded a single service at a time: this was to maintain a little more control over the process and to enable us to check that each service was operating correctly after upgrade. We performed this using commands such as:

/opt/kolla-ansible/tools/kolla-ansible \
    -i /etc/kolla/multinode.ha --tags "haproxy" upgrade

We stepped through the services in the same order as listed in the main Kolla-Ansible playbook, deploying the services one by one.

The two services that we were most concerned about were, naturally, those pertaining to data storage: mariadb and ceph. We were quite confident that the other processes would not cause significant problems as they do not retain much important state.

Before we started…

We had some initial problems with the docker python libraries installed on our nodes: the variant of the docker python library available via the standard CentOS repos is too old, and we had to resort to pip to install a newer docker python library which works with newer versions of Kolla-Ansible.
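For the record, the workaround was along these lines – a sketch, as package names may differ on your system:

# remove the outdated distro package and install a recent client via pip
yum remove -y python-docker-py
pip install -U docker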

Ocata-Pike Upgrade

Deploying all the services for the Ocata-Pike upgrade was straightforward: we just ran through each of the services in turn and there were no specific issues. When performing some final testing, however, the compute nodes were unable to schedule new VMs as neutron was unable to attach a VIF to the OVS bridge. We had seen this issue before and we knew that putting the compute nodes through a boot cycle solves it – not a very clean approach, but it worked.

Pike-Queens Upgrade

The Pike-Queens upgrade was more complex and we encountered issues that we had not specifically seen documented anywhere. The issues were the following:

    • the mariadb upgrade failed – when the slave instances were restarted, they did not join the mariadb cluster and we ended up with a cluster with 0 nodes in the ‘JOINED’ state. The master node also ended up in an inoperable state.
      • We solved this using the well documented approach to bootstrapping a mariadb cluster – we have our own variant of it for the kolla mariadb containers, which is essentially a replica of the mariadb_recovery functionality provided by kolla
      • This involved a sync process which replicated all data from the bootstrap node to each of the slave nodes; in our case, this took 10 minutes
    • when the mariadb database synced and reached quorum, we noticed many errors associated with record field types in the logs – for this upgrade, it was necessary to perform a mysql_upgrade, which we had not seen documented anywhere (see the sketch after this list)
    • the ceph upgrade process was remarkably painless, especially given that this involved a transition from Ceph Jewel to Ceph Luminous. We did have the following small issues to deal with:
      • We had to modify the configuration of the ceph cluster using ceph osd require-osd-release luminous
      • We had one small issue that the cluster was in the HEALTH_WARN status as one application did not have an appropriate tag – this was easily fixed using ceph osd pool application enable {pool-name} {application-name}
      • for reasons that are not clear to us, Luminous considered the status of the cluster to be somewhat suboptimal and moved over 50% of the objects in the cluster; Jewel had given no indication that a large amount of the cluster data needed to be moved
    • Upgrading the object store rendered it unusable: in this upgrade, the user which authenticates against keystone with privilege to manage user data for the object store changed from admin to ceph_rgw. However, this user was not added to keystone and all requests to the object store failed. Adding this user to keystone and giving it appropriate access to the service project fixed the issue (see the sketch after this list).
      • This was due to a change that was introduced in the Ocata release after we had performed our deployment and it only became visible to us after we performed the upgrade.
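For reference, the mysql_upgrade step was along these lines (the container name and password placeholder are illustrative):

docker exec -it mariadb mysql_upgrade -u root -p<DB_PASSWORD>

The object store fix amounted to creating the missing user in keystone and granting it access to the service project – a sketch, assuming standard role and project names which may differ per deployment:

openstack user create --project service --password <RGW_PASSWORD> ceph_rgw
openstack role add --project service --user ceph_rgw admin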

Apart from those issues, everything worked fine; we did note that the nova database upgrade/migration in the Pike-Queens cycle took quite a long time (about 10 minutes) for our small cluster – for a very large configuration, it may be necessary to monitor this more closely.

Final remarks…

The Kolla-Ansible upgrade process worked well for our modest deployment and we are happy to recommend it as an Openstack management tool for environments of this scale with quite standard configurations. However, even with an advanced tool such as Kolla-Ansible, it is essential to have a good understanding of both Openstack and Kolla before depending on it in a production system.

Setting up container based Openstack with OVN networking (9 Jul 2018)

OVN is a relatively new networking technology which provides a powerful and flexible software implementation of standard networking functionalities such as switches, routers, firewalls, etc. Importantly, OVN is distributed in the sense that the aforementioned network entities can be realized over a distributed set of compute/networking resources. OVN is tightly coupled with OVS, essentially being a layer of abstraction which sits above a set of OVS switches and realizes the above networking components across these switches in a distributed manner.

A number of cloud computing platforms and more general compute resource management frameworks are working on OVN support, including oVirt, Openstack, Kubernetes and Openshift – progress on this front is quite advanced. Interestingly and importantly, one dimension of the OVN vision is that it can act as a common networking substrate which could facilitate integration of more than one of the above systems, although the realization of that vision remains future work.

In the context of our work on developing an edge computing testbed, we set up a modest Openstack cluster, to emulate functionality deployed within an Enterprise Data Centre with OVN providing network capabilities to the cluster. This blog post provides a brief overview of the system architecture and notes some issues we had getting it up and running.

As our system is not a production system, providing High Availability (HA) support was not one of the requirements; consequently, it was not necessary to consider HA OVN mode. As such, it was natural to host the OVN control services, including the Northbound and Southbound DBs and the Northbound daemon (ovn-northd) on the Openstack controller node. As this is the node through which external traffic goes, we also needed to run an external facing OVS on this node which required its own OVN controller and local OVS database. Further, as this OVS chassis is intended for external traffic, it needed to be configured with 'enable-chassis-as-gw'.

We configured our system to use DHCP provided by OVN; consequently, the neutron DHCP agent was no longer necessary and we removed this process from our controller node. Similarly, L3 routing was done within OVN, meaning that the neutron L3 agent was no longer necessary. Openstack metadata support is implemented differently when OVN is used: instead of having a single metadata process running on a controller serving all metadata requests, the metadata service is deployed on each node and the OVS switch on each node routes requests to 169.254.169.254 to the local metadata agent; this then queries the nova metadata service to obtain the metadata for the specific VM.

The services deployed on the controller and compute nodes are shown in Figure 1 below.

Figure 1: Neutron containers with and without OVN

We used Kolla to deploy the system. Kolla does not currently have full support for OVN; however specific Kolla containers for OVN have been created (e.g. kolla/ubuntu-binary-ovn-controller:queens, kolla/ubuntu-binary-neutron-server-ovn:queens). Hence, we used an approach which augments the standard Kolla-ansible deployment with manual configuration of the extra containers necessary to get the system running on OVN.

As always, many smaller issues were encountered while getting the system working – we will not detail all these issues here, but rather focus on the more substantive issues. We divide these into three specific categories: OVN parameters which need to be configured, configuration specifics for the Kolla OVN containers and finally a point which arose due to assumptions made within Kolla that do not necessarily hold for OVN.

To enable OVN, it was necessary to modify the configuration of the OVS switches operating on all the nodes; the existing OVS containers and OVSDB could be used for this – the OVS version shipped with Kolla/Queens is v2.9.0 – but it was necessary to modify some settings. First, it was necessary to configure system-ids for all of the OVS chassis – we chose to select fixed UUIDs a priori and use these for each deployment such that we had a more systematic process for setting up the system, but it is possible to use a randomly generated UUID.

docker exec -ti openvswitch_vswitchd ovs-vsctl set open_vswitch . external-ids:system-id="$SYSTEM_ID"

On the controller node, it was also necessary to set the following parameters:

docker exec -ti openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:$HOST_IP:6642" \
    external_ids:ovn-nb="tcp:$HOST_IP:6641" \
    external_ids:ovn-encap-ip=$HOST_IP \
    external_ids:ovn-encap-type="geneve" \
    external_ids:ovn-cms-options="enable-chassis-as-gw"

docker exec openvswitch_vswitchd ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-ex

On the compute nodes this was necessary:

docker exec -ti openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:$OVN_SB_HOST_IP:6642" \
    external_ids:ovn-nb="tcp:$OVN_NB_HOST_IP:6641" \
    external_ids:ovn-encap-ip=$HOST_IP \
    external_ids:ovn-encap-type="geneve"

Having changed the OVS configuration on all the nodes, it was then necessary to get the services operational on the nodes. There are two specific aspects to this: modifying the service configuration files as necessary and starting the new services in the correct way.

Not many changes to the service configurations were required. The primary changes related to ensuring that the OVN mechanism driver was used and letting neutron know how to communicate with OVN. We also used the geneve tunnelling protocol in our deployment and this required the following configuration settings:

  • For the neutron server OVN container
    • ml2_conf.ini (the first group of keys belongs in the [ml2] section):

              [ml2]
              mechanism_drivers = ovn
              type_drivers = local,flat,vlan,geneve
              tenant_network_types = geneve

              [ml2_type_geneve]
              vni_ranges = 1:65536
              max_header_size = 38

              [ovn]
              ovn_nb_connection = tcp:172.30.0.101:6641
              ovn_sb_connection = tcp:172.30.0.101:6642
              ovn_l3_scheduler = leastloaded
              ovn_metadata_enabled = true

    • neutron.conf:

              core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
              service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin

  • For the metadata agent container (running on the compute nodes) it was necessary to configure it to point at the nova metadata service with the appropriate shared key as well as how to communicate with the OVS instance running on each compute node:

            nova_metadata_host = 172.30.0.101
            metadata_proxy_shared_secret = <SECRET>
            bridge_mappings = physnet1:br-ex
            datapath_type = system
            ovsdb_connection = tcp:127.0.0.1:6640
            local_ip = 172.30.0.101

For the OVN specific containers – ovn-northd, ovn-sb and ovn-nb databases, it was necessary to ensure that they had the correct configuration at startup; specifically, that they knew how to communicate with the relevant dbs. Hence, start commands such as

/usr/sbin/ovsdb-server /var/lib/openvswitch/ovnnb.db \
    -vconsole:emer -vsyslog:err -vfile:info \
    --remote=punix:/run/openvswitch/ovnnb_db.sock \
    --remote=ptcp:$ovnnb_port:$ovsdb_ip \
    --unixctl=/run/openvswitch/ovnnb_db.ctl \
    --log-file=/var/log/kolla/openvswitch/ovsdb-server-nb.log

were necessary (for the ovn northbound database) and we had to modify the container start process accordingly.

It was also necessary to update the neutron database to support OVN specific versioning information: this was straightforward using the following command:

docker exec -ti neutron-server-ovn_neutron_server_ovn_1 neutron-db-manage upgrade heads

The last issue which we had to overcome was that Kolla and neutron OVN had slightly different views regarding the naming of the external bridges. Kolla-ansible configured a connection between the br-ex and br-int OVS bridges on the controller node with port names phy-br-ex and int-br-ex respectively. OVN also created ports with the same purpose but with different names, patch-provnet-<UUID>-to-br-int and patch-br-int-to-provnet-<UUID>; as these ports had the same purpose, our somewhat hacky solution was to manually remove the ports created in the first instance by Kolla-ansible.
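Concretely, the cleanup amounted to deleting the Kolla-created patch ports – a sketch, with the bridge and port names as they appeared on our controller:

docker exec -ti openvswitch_vswitchd ovs-vsctl del-port br-ex phy-br-ex
docker exec -ti openvswitch_vswitchd ovs-vsctl del-port br-int int-br-ex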

Having overcome all these steps, it was possible to launch a VM which had external network connectivity and to which a floating IP address could be assigned.

Clearly, this approach is not realistic for supporting a production environment, but it’s an appropriate level of hackery for a testbed.

Other noteworthy issues which arose during this work include the following:

  • Standard docker apparmor configuration in ubuntu is such that mount cannot be run inside containers, even if they have the appropriate privileges. This has to be disabled or else it is necessary to ensure that the containers do not use the default docker apparmor profile.
  • A specific issue with mounts inside a container which resulted in the mount table filling up with 65536 mounts and rendering the host quite unusable (thanks to Stefan for providing a bit more detail on this) – the workaround was to ensure that /run/netns was bind mounted into the container.
  • As we used geneve encapsulation, the geneve kernel module had to be loaded (see the sketch after this list)
  • Full datapath NAT support is only available for linux kernel 4.6 and up. We had to upgrade the 4.4 kernel which came with our standard ubuntu 16.04 environment.
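Loading the geneve module is a one-liner; the modules-load.d entry (a sketch, assuming a systemd-based host) makes it persist across reboots:

modprobe geneve
echo geneve > /etc/modules-load.d/geneve.conf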

This is certainly not a complete guide to how to get Openstack up and running with OVN, but may be useful to some folks who are toying with this. In future, we’re going to experiment with extending OVN to an edge networking context and will provide more details as this work evolves.

 

How to Write a Cinder Driver (12 Jun 2017)

After too many hours of trial and error and searching for the right solution on how to properly write and integrate your own backend in cinder, here are all the steps and instructions necessary. So if you are looking for a guide on how to integrate your own cinder driver, look no further.

Why do we need a Cinder driver, and why are we even using Cinder? We created Hera, a ZFS-based distributed storage system used in SESAME, a 5G project in which we are project partners. To integrate Hera into SESAME, which uses OpenStack, we had to create a Cinder driver.

First of all, we have the Hera storage system with a RESTful API; all the logic and functionality is already available. We position the driver as a proxy between Cinder and Hera. To implement the driver methods, one does not have to look very far: there is a page in the OpenStack Cinder docs that explains which methods need to be implemented and what they do. For a basic Cinder driver skeleton, check out this repository: Cinder Driver Example.

We opted for a normal volume driver, but you may decide to write another kind of driver, in which case you need to inherit from a different base driver, e.g. SanDriver for SAN volumes or ISCSIDriver for iSCSI volumes. We also regularly looked at other drivers (mainly the LVM driver) for guidance during the implementation.

All of these methods are necessary for a complete driver; while implementing them we wanted to test each method as soon as it was written. Once the mandatory methods were implemented and we attempted to execute the driver's code, nothing happened! We quickly realised that the get_volume_stats method returns crucial information about the storage system to the Cinder scheduler. The scheduler will not know anything about the driver if no values are returned, so for a quick test we hardcoded the following dict and the scheduler stopped complaining (a quick verification sketch follows the snippet).

{
    'volume_backend_name': 'foo',
    'vendor_name': 'bar',
    'driver_version': '3.0.0',
    'storage_protocol': 'foobar',
    'total_capacity_gb': 42,
    'free_capacity_gb': 42
}
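A quick way to verify that the scheduler now sees the backend and the stats it reports is via the admin CLI – a sketch, assuming admin credentials are set in the environment:

# show the pools and the stats reported by get_volume_stats
cinder get-pools --detail
# confirm that the cinder-volume service for the new backend is up
openstack volume service list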

In order to provide parameters to your driver, you can also add them in the following way as part of the driver implementation. Here, we add a REST endpoint as a configuration option to the volume_opts list.

volume_opts = [
    cfg.StrOpt('foo_api_endpoint',
               default='http://0.0.0.0:12345',
               help='the api endpoint at which the foo storage system sits')
]

All of the options that are defined can be overridden in the /etc/cinder/cinder.conf file under the configuration section of your own driver.

When you implement the functionality of the driver, you will want to know what values Cinder passes to it; the volume dict parameter is of particular interest, and it will have these values:

size
host
user_id
project_id
status
display_name
display_description
attach_status
availability_zone
// and if any of the following are set
migration_status
consistencygroup_id
group_id
volume_type_id
replication_extended_status
replication_driver_data
previous_status

To test your methods quickly and easily, it is very important that the driver is in the correct directory – the one in which all the Cinder drivers are installed – otherwise Cinder will, naturally, not find it. This differs depending on how OpenStack has been installed on your machine. With devstack the drivers are in /opt/stack/cinder/cinder/volume/drivers. With packstack they are in /usr/lib/python2.7/site-packages/cinder/volume/drivers.
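If in doubt, you can ask Python itself where the installed drivers live – a hypothetical one-liner:

python -c "import cinder.volume.drivers as d; print(d.__path__[0])"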

There was one last headache to be resolved to allow full integration of our cinder driver. With the driver placed in the correct directory, we proceed to add the necessary options (as shown below) to the /etc/cinder/cinder.conf file.

# first we need to enable the backend (lvm is already set by default)
enabled_backends = lvmdriver-1,foo
# then add these options to your driver configuration at the end of the file
[foo]
volume_backend_name = foo # this is super important!!!
volume_driver = cinder.volume.drivers.foo.FooDriver # path to your driver
# also add the options that you can set freely (volume_opts)
foo_api_endpoint = 'http://127.0.0.1:12956'

You must set the volume_backend_name because it links Cinder to the correct backend; without it nothing will ever work (NOTHING!).

Finally, when you want to execute operations on it, you must create the volume type for your Cinder driver:

cinder type-create foo
cinder type-key foo set volume_backend_name=foo

Now restart the Cinder services (c-vol, c-sch, c-api) and you should be able to use your own storage system through cinder.
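A quick end-to-end test could then look like this, using the foo volume type created above:

openstack volume create --type foo --size 1 test-foo
# the volume should reach the 'available' status if the driver works
openstack volume list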

15th OpenStack Meetup (7 Apr 2017)

On the 21st of March we held the 15th OpenStack meetup. As ever, the talks were interesting, relevant and entertaining. It was kindly sponsored by Rackspace and held at their offices in Zürich. Much thanks goes to them and to previous sponsors!

At this meetup there were 2 talks and an interactive and impromptu panel discussion on the recent operator’s meetup in Milan.

The first talk was by Giuseppe Paterno, who shared eBay's experience of the workloads running on OpenStack there.

Next up was Geoff Higginbottom from Rackspace who showed how to use Nagios and StackStorm to automate the recovery of OpenStack services. This was interesting from the lab’s perspective as much of what Geoff talked about was related to our Cloud Incident Management initiative. You can see almost the same talk that Geoff gave at the OpenStack Nordic Days.

The two presentations were followed by the panel discussion involving those who attended, including our own Seán Murphy, and moderated by Andy Edmonds. Finally, as is now almost a tradition, we had a very nice apero!

Looking forward to the next and 16th OpenStack meetup!

Openstack checkpointing is simplified (26 Jan 2017)

by Josef Spillner

At ICCLab, we have recently updated the Openstack OVA onboarding tool to include an exporting functionality that can help operators migrate and checkpoint individual VMs. Furthermore, researchers can now export VMs to their local environments, even use them offline, and at any time bring them back to the cloud using the same tool.

The OpenStack OVA onboarding tool automatically transforms selected virtual machines into downloadable VMDK images. Virtual machines and their metadata are fetched from OpenStack's Nova service and packed as an OVA file. The tool offers GUI integration with OpenStack's Horizon Dashboard, but can also be deployed separately.

The Openstack onboarding tool supports:

  • Virtual machine export
  • Security group export (port forwarding rules for NAT interface)
  • Network Export (all networks are exported as internal networks)

Limitations:

  • QCOW2 images have the same size as the instance's disk space, which means that downloading an image of some tens of GB will take some time.
  • The network configuration of a virtual machine in the openstack deployment should also be configured for a non-cloudinit setup, otherwise the exported image will not have proper networking within VirtualBox or a similar virtualization tool.
  • NAT port forwarding rules require the tenant to have a network called "private".

We’ve prepared a short video about this tool’s deployment and its installation steps.

As always, the code is open-source, so let us know what you think.


Monitoring an Openstack deployment with Prometheus and Grafana (24 Nov 2016)

Following our previous blog post, we are still looking at tools for collecting metrics from an Openstack deployment in order to understand its resource utilization. Although Monasca has a comprehensive set of metrics and alarm definitions, the complex installation process combined with a lack of documentation makes it a frustrating experience to get it up and running. Further, although it is complex, with many moving parts, it was difficult to configure it to obtain the analysis we wanted from the raw data, viz how many of our servers are overloaded over different timescales in different respects (cpu, memory, disk io, network io). For these reasons we decided to try Prometheus with Grafana which turned out to be much easier to install and configure (taking less than an hour to set up!). This blog post covers the installation process and configuration of Prometheus and Grafana in a Docker container and how to install and configure Canonical’s Prometheus Openstack exporter to collect a small set of metrics related to an Openstack deployment.

Note that minor changes to this HOWTO are required to install these services in a VM or on a host machine when using containers is not an option. As preparation, take note of your Openstack deployment's locations for Keystone and the Docker host. Remember that all downloads should be verified by signature comparison for production use.

Installing and configuring Prometheus

First of all pull the Ubuntu image into your docker machine. Let's call it docker-host.

Note that in this blog post we describe the Prometheus installation process step-by-step – we chose to install it from scratch to get a better understanding of the system, but using the pre-canned Docker Hub image is also possible.


docker pull ubuntu:14.04

Then create the docker container opening the port 9090 which will be used to get/push metrics into Prometheus.


docker run -it -p 9090:9090 --name prometheus ubuntu:14.04

Inside the container download the latest version of Prometheus and uncompress it (version 1.3.1 is used in this HOWTO; the download size is ca. 16 MB).


wget https://github.com/prometheus/prometheus/releases/download/v1.3.1/prometheus-1.3.1.linux-amd64.tar.gz
tar xvf prometheus-1.3.1.linux-amd64.tar.gz
cd prometheus-1.3.1.linux-amd64

Configure prometheus.yml adding the targets from which prometheus should scrape metrics. See the example below for the Openstack exporter (assuming it is installed in the same docker-host):


scrape_configs:
  - job_name: 'openstack-deployment-1'
    scrape_interval: 5m
    static_configs:
      - targets: ['docker-host:9183']
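Before starting the service it may be worth validating the configuration with promtool, which ships in the same tarball (in the 1.x series the subcommand is hyphenated, if we recall correctly):

./promtool check-config prometheus.yml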

Start the Prometheus service:


./prometheus -config.file=prometheus.yml

Similarly, install and configure the Prometheus Openstack exporter in another container. Note that this container needs to be set up manually as there are configuration files to be changed and Openstack libraries to be installed.


docker run -it -p 9183:9183 --name prometheus-openstack-exporter ubuntu:14.04
apt-get update && apt-get install -y python-neutronclient python-novaclient python-keystoneclient python-netaddr unzip wget python-pip python-dev python-yaml
pip install prometheus_client
wget https://github.com/CanonicalLtd/prometheus-openstack-exporter/archive/master.zip
unzip master.zip
cd prometheus-openstack-exporter-master/

Next, configure prometheus-openstack-exporter.yaml, create the /var/cache/prometheus-openstack-exporter/ directory and create a novarc file containing the credentials for the Nova user.


mkdir /var/cache/prometheus-openstack-exporter/
cat > novarc <<EOF
export OS_USERNAME=nova-username
export OS_PASSWORD=nova-password
export OS_AUTH_URL=http://keystone-url:5000/v2.0
export OS_REGION_NAME=RegionOne
export OS_TENANT_NAME=services
EOF
source novarc
./prometheus-openstack-exporter prometheus-openstack-exporter.yaml

Then you’ve got a fully functional Prometheus system with some Openstack metrics on it! Visit http://docker-host:9090 to graph and see which metrics are available.

Here is the list of the 18 metrics currently collected by Prometheus Openstack exporter:

neutron_public_ip_usage
neutron_net_size
hypervisor_memory_mbs_total
hypervisor_memory_mbs_used
hypervisor_running_vms
hypervisor_disk_gbs_total
hypervisor_disk_gbs_used
hypervisor_vcpus_total
hypervisor_vcpus_used
openstack_allocation_ratio
nova_instances
nova_resources_ram_mbs
nova_resources_disk_gbs
openstack_exporter_cache_age_seconds
swift_replication_duration_seconds
swift_disk_usage_bytes
swift_replication_stats
swift_quarantined_objects
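These metrics can also be queried over Prometheus's HTTP API, which is handy for scripting; for example:

curl 'http://docker-host:9090/api/v1/query?query=hypervisor_running_vms'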

Alternatively you could use Prometheus’s Node exporter for more detailed metrics on node usage – this needs to be installed in the controller/compute nodes and the prometheus.yml configuration file also needs to be changed. A docker container is also available at Docker Hub.

Although Prometheus provides some rudimentary graph support, combining it with a more powerful graphing solution makes it much easier to see what’s going on in your system. For this reason, we set up Grafana.

Installing Grafana

The latest version of Grafana (currently 4.0.0-beta2) has seen a lot of improvements in its user interface; it now also supports alerting and notifications for every panel available – refer to the documentation for more information. Its integration with Prometheus is very straightforward, as described below.

First of all, pull the grafana image into your docker-host and create the docker container opening the port 3000 used to access it.


docker pull grafana/grafana
docker run -d -p 3000:3000 grafana/grafana:4.0.0-beta2

Visit http://docker-host:3000 and use the credentials admin/admin to log into the dashboard. In the Data Sources tab, add a new data source of type Prometheus pointing at http://docker-host:9090.
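If you prefer to script this step, Grafana also exposes a REST API for managing data sources – a sketch using the default admin/admin credentials:

curl -u admin:admin -H 'Content-Type: application/json' \
    -X POST http://docker-host:3000/api/datasources \
    -d '{"name": "prometheus", "type": "prometheus", "url": "http://docker-host:9090", "access": "proxy"}'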


Create a new dashboard and add panels containing graphs using the Prometheus datasource.


Play around with the metrics available and create your own dashboard!


Conclusion

Although not many metrics are available yet to monitor an Openstack deployment, the combination of Prometheus and Grafana is quite powerful for visualising data; it was also much easier to set up than Monasca. Further, from a cursory glance, Prometheus seems to be more flexible than Monasca and for these reasons it appears more promising. That said, we are still looking into Prometheus and how it can be used to properly understand resource consumption in an Openstack context, but that will come in another blog post!

Installing Monasca – a happy ending after much sadness and woe (21 Nov 2016)

In one of our projects we are making contributions to an Openstack project called Watcher, which focuses on optimizing resource utilization of a cloud according to a given strategy. As part of this work it is important to understand the resource utilization of the cloud beforehand in order to make a meaningful contribution. This requires collecting metrics from the system and processing them to understand how the system is performing. The Ceilometer project was our default choice for collecting metrics in an Openstack deployment but as work has evolved we are also exploring alternatives – specifically Monasca. In this blog post I will cover my personal experience installing Monasca (which was more challenging than expected) and how we hacked the monasca/demo docker image to connect it to our Openstack deployment.

In our earlier series of blog posts we explained how to install, configure and test Monasca step-by-step using git, Maven and pip – a good option for learning the moving parts of the giant beast that is Monasca and how it is all linked together. This content was posted over a year ago and hence it is now a little bit stale; since that time work has been done on deployment using automation tools such as  Ansible or Puppet. Generally, when working on this, we encountered significant issues with lack of documentation, unspecified versioning issues, unclear dependencies etc.

Our first attempt to install Monasca was based on different Ansible repositories available on Github – pointed to by the official page of Monasca. Although most of the repositories are Ansible roles, some try to put everything in a single place. Interestingly, many of those assume that Openstack is deployed with Devstack (which was not true in our case – we had a small Packstack deployment) and this caused many steps to fail during the installation as files were not where they were expected to be (in devstack many files are in /opt/stack/). Even after solving these issues Monasca itself did not seem to work and we always ended up with a system where only a few monasca services were up and running. We're not certain where the problems were, but there were multiple issues with Kafka and in some cases it was not possible to get monasca-agent or monasca-notification working.

A tip if you want to go down this path: most of the repos use Ansible 1.9.2, which is not obvious. We had many issues running roles written for 1.9.2 with Ansible >= 2.0; they required modifications to the source code as the way loops are defined changed between versions.

After the frustrating experience with Ansible we decided to get back to basics: there is a functional out-of-the-box docker image on Docker Hub used for testing Monasca. It contains pretty much everything Monasca needs to run – even a local Keystone, Nova and Glance. We decided to modify the Monasca components in this image in order to connect it to our Openstack cluster. Note that only the Openstack credentials were modified inside the container – the InfluxDB, MySQL and other service credentials provided in the container were left unchanged. After relatively little tinkering, we managed to get this working as described below.

First of all, pull the image into your docker machine – let's call it docker-host. Note that docker-host must be able to access your Openstack deployment (via the public APIs) and vice versa.


docker pull monasca/demo

Next, create the docker container; it is necessary to modify its entrypoint so that it does not start setting up monasca automatically and we can change the Monasca configuration files first. Also, note that since it will not be running Keystone, we do not publish the Keystone port:


docker run -it -p {port_to_horizon}:80 -p 8080:8080 --name monasca --entrypoint=/bin/bash monasca/demo

Change the value of port_to_horizon in case port 80 is already used in docker-host.

Inside the docker container you will be in the '/setup' directory, where you will see the ansible files used to set up this container. The main file in this directory is called demo-start.sh, which starts all the services required by Monasca – but before running this script it is necessary to modify the following Monasca and Openstack configuration details.

In /etc/monasca/api-config.yml add your credentials in the middleware section (note that you will need to install a text editor in the container):


##  /etc/monasca/api-config.yml
# region usually defaults to RegionOne
region: "{SERVICE_REGION}"
...
serverVIP: "KEYSTONE_URL"
serverPort: KEYSTONE_PORT
defaultAuthorizedRoles: [user, domainuser, domainadmin, monasca-user, admin]
agentAuthorizedRoles: [monasca-agent]
adminAuthMethod: password
adminUser: "monasca"
adminPassword: "MONASCA_PASSWORD"
adminProjectName: "monasca"

Similarly in /usr/local/bin/monasca-reconfigure and /setup/alarms.yml:


##  /usr/local/bin/monasca-reconfigure
#!/bin/sh
'/opt/monasca/bin/monasca-setup' \
    -u 'monasca-agent' \
    -p 'MONASCA_AGENT_PASSWORD' \
    -s 'monitoring' \
    --keystone_url 'KEYSTONE_URL' \
    --project_name 'MONASCA_AGENT_PROJECT' \
    --monasca_url 'http://localhost:8080/v2.0' \
    --check_frequency '5' \
    --overwrite

## /setup/alarms.yml
- name: Setup default alarms
  hosts: localhost
  vars:
    keystone_url: KEYSTONE_URL
    keystone_user: monasca
    keystone_password: MONASCA_PASSWORD
    keystone_project: monasca
  roles:
    - {role: monasca-default-alarms, tags: [alarms]}

In /setup/demo-start.sh there is an error regarding the location of the monasca-notification init script: modify '/usr/local/bin/monasca-notification &' (line 44) to '/opt/monasca/bin/monasca-notification &'. Also remove the playbook which installs the Keystone service (line 11):


## /setup/demo-start.sh

# remove this line 
# ansible-playbook -i /setup/hosts /setup/keystone.yml -c local
            ...
/opt/monasca/bin/monasca-notification &

Finally, remove the services which are no longer necessary (e.g. the local Keystone) from start.sh, leaving only apache2, and change the variable OPENSTACK_HOST in /etc/openstack-dashboard/local_settings.py to the IP where Keystone is installed:


## /setup/start.sh
#!/bin/bash
/etc/init.d/apache2 start

## /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "KEYSTONE_IP"

Before running the main script to set everything up make sure you have created the Keystone users, projects and roles which are assumed in the way monasca is configured:


 keystone tenant-create --name monasca --description "Monasca tenant"
 keystone user-create --name monasca-agent --pass password --tenant [monasca-tenant-id]
 keystone user-create --name monasca --pass password --tenant [monasca-tenant-id]
 keystone role-create --name monasca-agent
 keystone role-create --name monasca-user
 keystone user-role-add --user [monasca-agent-id] --role [monasca-agent-role-id] --tenant [monasca-tenant-id]
 keystone user-role-add --user [monasca-id] --role [monasca-user-role-id] --tenant [monasca-tenant-id]
 keystone service-create --type monitoring --name monasca --description "Monasca monitoring service"
 keystone endpoint-create --service [service-id] --publicurl http://docker-host:8080/v2.0 --internalurl http://docker-host:8080/v2.0 --adminurl http://docker-host:8080/v2.0

Once all of this is done, you can run /setup/demo-start.sh to set everything up – it may take a couple of minutes. When it has finished, you can visit your monasca-dashboard page at http://docker-host:{port_to_horizon}. Log in as the monasca user and visit the Monitoring tab in the dashboard.

In this tab you will be able to create alarms definitions and notifications. You will also be able to see an overview of the current state of the system in the Grafana dashboard. To collect more data regarding your Openstack cluster you will need to install and configure monasca-agent service in each of the controller/compute nodes. It is possible to do it in a virtualenv so it does not modify any library in the system.


# on compute/controller nodes
virtualenv monasca_agent_env
source monasca_agent_env/bin/activate
pip install --upgrade monasca-agent

monasca-setup -u monasca-agent -p password --project_name monasca -s monitoring --keystone_url {KEYSTONE_URL} --monasca_url http://docker-host:8080/v2.0 --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite
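To confirm that measurements are arriving, you can query the Monasca API with the CLI from any machine which can reach it – a sketch, assuming python-monascaclient is installed and the monasca user's credentials are used (flag names may vary slightly between client versions):

pip install python-monascaclient
monasca --os-username monasca --os-password password \
    --os-project-name monasca --os-auth-url {KEYSTONE_URL} metric-list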

Then you've got a fully functional Monasca system which works with your Openstack cluster – a good setup for experimenting with Monasca to understand how powerful it really is. It is probably not the best option for a production deployment, however – it would probably make more sense to decouple the services and put the data in a persistent data store, but that's another day's work!

Troubleshooting

If you are getting a "requests.exceptions.ConnectionError: ('Connection aborted.', gaierror(-2, 'Name or service not known'))" error when running the demo-start.sh script, it means the container is not able to reach the endpoint defined in Openstack for the monitoring service. Make sure that the container is able to connect to the external interface of your Openstack cluster and that the Keystone endpoint for the monitoring service is configured correctly.

demo-start.sh is the main script in this container; in case you find any issues you can execute each of the commands in this script sequentially to understand more clearly what is happening.

"Exception in thread "main" java.lang.RuntimeException: Topology with name `thresh-cluster` already exists on cluster". This error often happens when the thresh-cluster service tries to configure itself a second time; you can ignore it.

Integration of Openstack OVA importing tool to Horizon (14 Oct 2016)

by Josef Spillner

ICCLab is announcing an integration of the Openstack OVA onboarding tool into OpenStack's Horizon dashboard. To deploy an OVA file to Openstack, all images are extracted from the file and uploaded to the Openstack cluster, all necessary file format transformations are performed automatically, glance images are created and the tool creates a heat stack out of them. As we mentioned a couple of weeks ago, uploading your local VMs into OpenStack was never easier.


Now we are making it even better by allowing everyone to import their OVA files using the Horizon dashboard. In order to keep the deployment straightforward on future releases of Horizon, the integration itself is implemented as a separate view.

Once you (or someone from operations) have deployed it, simply navigate to the Onboarding tab, where you will see a table view of all stacks created via the Onboarding tool. Due to this integration, you no longer need to provide credentials when onboarding VMs, as they are retrieved from your session automatically.

We’ve prepared a short video about this tool’s deployment and its installation steps.

As always, the code is open-source, so let us know what you think.


 

A new tool to import OVA Applications to Openstack (1 Sep 2016)

by Josef Spillner

If you ever thought of uploading your local VMs to OpenStack, perhaps you have come across OpenStack’s support for importing single virtual disk images. However, this cannot be used to deploy complicated VM setups, including network configurations and multiple VMs connected to each other.
We at ICCLab have therefore decided to develop a tool that will allow anyone to upload their VM setups from their local environments directly to OpenStack. We call it OpenStack VM onboarding tool and it’s available as open source.

VM onboarding tool features:

  • Easy to run – the tool comprises a simple frontend, a backend and Openstack client libraries to access the Openstack APIs. All these components can be easily run with one command.
  • Easy to Import – to import an OVA file the user needs to provide only the basic Openstack credentials (username, password, tenant, region, keystone URL) and an OVA file.
  • Full infrastructure import – the tool imports virtual machines, external networks, internal network connections and security groups.

You can check out a quick demo of VM onboarding functionality, workflow and interface.

The tool comprises a simple frontend, a backend and Openstack client libraries to access the Openstack APIs. The figure below shows a high-level architecture of the tool.


Once a user tries to "Log in", a new session is created through the "keystoneauth" library. This session is needed to access all openstack client services.

Importing an OVA file to Openstack consists of several actions:

  1. The chosen OVA file is uploaded to the backend server.
  2. The OVA file is unpacked as a plain .tar archive, from which the disk images (VMDK) and the OVF file are extracted.
  3. The OVF file parser extracts the relevant information.
  4. The images are uploaded to the glance endpoint.
  5. The extracted information is transformed into a heat template which is uploaded to the heat client.

In future, the VM onboarding tool will be integrated with the Openstack Horizon dashboard for a seamless user experience.

Trust delegation in Openstack using Keystone trusts (30 Aug 2016)

In one of our blog posts we presented a basic tool which extends the Openstack Nova client and supports executing API calls at some point in the future. Much has evolved since then: the tool is not just a wrapper around Openstack clients anymore; instead we rebuilt it in the context of the Openstack Mistral project, which provides very nice workflow-as-a-service capabilities – this will be elaborated a bit more in a future blog post. During this process we came across a very interesting feature in Keystone which we were not aware of – Trusts. Trusts is a mechanism in Keystone which enables delegation of roles and even impersonation of users from a trustor to a trustee; it has many uses but is particularly useful in an Openstack administration context. In this blog post we will cover basic command line instructions to create and use trusts.

Setting up

First of all: trusts are only available in Keystone v3, so v3 endpoints must be exposed for Keystone.

Check your openstack-client and keystone version. We had issues where Keystone Trusts were not working due to an outdated client version. Here we use openstack-client version 3.0.1 and keystone 10.0.0. Versions can be checked as follows:


:# openstack --version
:# keystone-manage --version

The openstack client can be updated using pip as follows:


:# sudo pip install --upgrade python-openstackclient

Example of users and projects

For all the commands shown in this blog post we will use the following users and projects so it is easier to understand what each command does. Each user has admin and member roles in their respective projects – the admin user only has access to the admin project and the alt_demo user only has access to the alt_demo project.

+----------------------------------+----------+
| User ID                          | Name     |
+----------------------------------+----------+
| 54e64304fec34a06b20893b35acbdbfa | alt_demo |
| a91584188d074a36886247eff94ee1de | admin    |
+----------------------------------+----------+


+----------------------------------+--------------------+
| Project ID                       | Name               |
+----------------------------------+--------------------+
| aec1f0c8e503439ebaf7a612dcb60d96 | admin              |
| ff777ec62c7842e280db0194a27bd3dc | alt_demo           |
+----------------------------------+--------------------+

Creating trusts

Creating a trust relationship between users is fairly straightforward. The basic premise is that one user can authorize another user to act on her behalf by creating an Openstack trust object. More concretely, the trustor creates a trust object which enables a trustee to act on her behalf: the trustee uses her own credentials and the trust_id and is then able to act on behalf of the trustor.

The basic syntax of the command is shown below; this command returns a trust_id which can be used to authenticate operations in any Openstack service.

Here, we assume that credentials are configured as environment variables as is standard practice, so we don’t have to provide them as parameters in the command itself.

openstack trust create --project <project-id> --role <role-id> --impersonate <trustor-user-id> <trustee-user-id>


# The alt_demo user grants the admin user the member role
# (ID 9fe2ff9ee4384b1894a90878d3e92bab) on the alt_demo project

openstack trust create --impersonate \
    --project ff777ec62c7842e280db0194a27bd3dc \
    --role 9fe2ff9ee4384b1894a90878d3e92bab \
    54e64304fec34a06b20893b35acbdbfa a91584188d074a36886247eff94ee1de

In this case a trustor, alt_demo, gives a specific role, member, in a project, alt_demo, to a trustee, admin. The trustee, admin, will then be able to perform any Openstack operations in that project based on the role provided.

Note that the trustor can only grant trusts for roles and projects for which it has authorization; e.g. if the trustor is only a member of a certain project it cannot delegate the admin role on that same project to another user using trusts.

The impersonate flag changes the user attribute in the token created using the trust_id: if the flag is present, the user will be the trustor itself; otherwise the user in the token is the trustee.
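Existing trusts can be inspected and revoked with the openstack client; for example:

openstack trust list
openstack trust show <trust-id>
# revoke the delegation when it is no longer needed
openstack trust delete <trust-id>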

Acting on the trustor’s behalf

In order to use the trust_id provided by the command above, we first need to remove any environment variables related to projects and domains. This information is already included in the token created from the trust_id (an exception is thrown if there is a conflict between the environment variables and the trust_id).

To check for environment variables and remove them use the commands below:


:# env | grep OS
OS_PROJECT_DOMAIN_ID=default
OS_REGION_NAME=RegionOne
OS_USER_DOMAIN_ID=default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=REDACTED
OS_AUTH_URL=http://127.0.0.1:5000/v3
OS_USERNAME=admin
OS_TENANT_NAME=admin

:# unset {VARIABLE_NAME}

Without any project or domain variables in our environment, we are now good to go using Openstack API calls!


openstack --os-trust-id 99905e50d80749b5a24292e830fff10c server create --flavor 1 --image <image-id> Server-Test

This command, executed with the trustee's credentials, creates a VM in the project specified by the trust_id, which is associated with the trustor: the trustor can log into the Openstack web interface and will see this VM as if she had created it herself.

In case you don’t use environment variables there are very few changes to make in the API call:


openstack --os-trust-id 99905e50d80749b5a24292e830fff10c --os-username admin --os-password REDACTED --os-auth-url http://127.0.0.1:5000/v3 --os-region-name RegionOne server create --flavor 1 --image <image-id> Server-Test

Adding these parameters after the openstack command has the same effect as using environment variables, allowing the admin user to create a VM in the alt_demo project.

Conclusion

In this blog post we gave an introduction to Keystone Trusts and an example of how they can be used. Note that the trust mechanism does not solve the general problem of Openstack admins being able to act on a user's behalf (e.g. creating VMs for the user based on snapshots etc) as the users still need to authorize them to do so, although there are known workarounds. However, it is a very useful capability which is used in more complex configurations such as those where Openstack services need to act on behalf of a user.
