In a previous blog post, we described our experience deploying OpenWhisk on Kubernetes on OpenStack. During subsequent testing, we observed that some OpenWhisk components – specifically, the controller and the invoker – would fail to restart after the machines running the Kubernetes nodes were rebooted for maintenance. The only way to recover was to redeploy OpenWhisk after each failure, which resulted in significant data loss and was clearly not an acceptable operational solution.
To make our OpenWhisk deployment fault-tolerant, so that it can survive failures and restarts without data loss, in this short blog post we describe how we configured persistent storage on our self-hosted Kubernetes cluster using an NFS server and the nfs-client-provisioner, an automatic provisioner for Kubernetes. Kubernetes supports other storage backends, but they are mostly geared towards public cloud deployments; we chose NFS because it supports dynamic provisioning, is based on a very mature technology, and was easy to configure.
We started by creating a new virtual machine within our OpenStack cluster with 50 GB of storage, and installed and configured an NFS server on it. We then made sure that the NFS server was reachable from all of our Kubernetes nodes.
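For reference, on an Ubuntu-based VM (an assumption on our part; the exact packages depend on the distribution, and the export directory below is a placeholder), the server-side setup amounts to something like:
$ sudo apt-get update
$ sudo apt-get install -y nfs-kernel-server
$ sudo mkdir -p /srv/nfs/kubedata
$ sudo chown nobody:nogroup /srv/nfs/kubedata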
The directories to export, and the hosts allowed to mount them, are declared in the /etc/exports file on the NFS server.
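An entry along the following lines (the export path and client subnet are placeholders rather than our actual values) gives the Kubernetes nodes read/write access to the share:
# placeholder path and subnet; the no_root_squash option is explained below
/srv/nfs/kubedata    10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)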
One issue we ran into was NFS file permissions: the OpenWhisk Helm chart has an initialisation phase that sets up file ownership for persistent data in the Redis pod, but our initial NFS configuration did not allow arbitrary file ownership to be set within the PVC because of the NFS permissions model. This caused the OpenWhisk deployment to fail, as the Redis pod assumed the file ownership had been configured in a particular way – an assumption that did not hold in our initial setup. This is a well-known limitation of NFS in certain configurations: by default the server squashes root access from clients. Setting the no_root_squash option on the NFS server allowed the pod to change file ownership as required, after which the pod initialised successfully.
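After editing /etc/exports, the updated export options can be applied without restarting the NFS server:
$ sudo exportfs -ra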
Once the NFS server was configured, we were ready to deploy the nfs-client-provisioner, which uses the NFS server to dynamically provision persistent volumes (PVs). A pod that requires persistent storage makes a persistent volume claim (PVC), which is essentially a storage request specifying the resources the pod needs. When a PVC is created, the Kubernetes cluster asks the dynamic provisioner – the nfs-client-provisioner in this case – to provision a persistent volume that satisfies the claim. More on PV provisioning here.
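As a sketch, a claim against the storage class that the provisioner serves (we name it nfs-client below) looks like the following; the claim name and requested size here are illustrative, not values from the OpenWhisk chart:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi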
We deployed the nfs-client-provisioner on our Kubernetes cluster using the Helm package manager by running the following command:
$ helm install stable/nfs-client-provisioner \
--set nfs.server=x.x.x.x \
--set nfs.path=/exported/path \
--set storageClass.name=nfs-client
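To sanity-check the deployment before moving on, we can confirm that the provisioner pod is running and that the nfs-client storage class exists (release and pod names will vary):
$ kubectl get pods
$ kubectl get storageclass nfs-client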
Once the deployment of the provisioner was completed, we were ready to deploy OpenWhisk with persistent storage. All we had to do was add the following to the bottom of the mycluster.yaml file:
k8s:
  persistence:
    hasDefaultStorageClass: false
    explicitStorageClass: "nfs-client"
We then followed the rest of the steps from the previous blog post to install OpenWhisk.
After deploying OpenWhisk, we checked the created PVs by running the following command:
$ kubectl -n openwhisk get persistentvolumes
The output listed the PVs that the nfs-client-provisioner had dynamically created for the OpenWhisk deployment.
And we checked the PVCs created by the OpenWhisk deployment by running:
$ kubectl -n openwhisk get persistentvolumeclaims
The output showed the claims made by the OpenWhisk components, each bound to a PV; everything was healthy and working as expected.
With these changes, our OpenWhisk deployment is more reliable: application data now persists across reboots of the Kubernetes nodes.
In our next post, we'll discuss how to deploy standard web applications that hook into MQTT data collection systems on OpenWhisk.