Diving into the Helm ecosystem: From charts to metrics

In recent months, we have studied Helm charts extensively, including setting up a continuous quality assessment, to learn more about this promising packaging format for Kubernetes applications. Apart from individual tweets and occasional talks, a coherent presentation of this ongoing work has been missing. Yet, given the growing installation base of Kubernetes stacks, the significance of this work appears to be on the rise. This blog post therefore summarises what we have achieved so far and what we still plan to do in the coming months.

From a macroscopic perspective, Helm charts are a tangible manifestation of deployable software artefacts which are used to build composite applications. They use templates and variable substitution to reference deployable containers which may act as preconfigured microservices within the composition. As such, any quality issue in the chart itself, or within its included dependencies, may negatively impact the entire application running atop Kubernetes. As a first direction, our research has concentrated on assessing the quality of Helm chart metadata, including the automated generation of human-understandable advice on how to fix detected issues.
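To make the notion of metadata quality more concrete, the following minimal Python sketch (not taken from HelmQA; the chosen fields and advice texts are merely illustrative) parses a Chart.yaml file with PyYAML and turns two simple findings into human-understandable advice:

```python
# Minimal sketch of a metadata quality check on a single chart,
# assuming PyYAML is installed; not HelmQA's actual implementation.
import yaml

def check_chart_metadata(chart_yaml_path):
    """Return a list of human-readable advice strings for one Chart.yaml."""
    with open(chart_yaml_path) as f:
        chart = yaml.safe_load(f) or {}

    advice = []
    # Descriptive fields that chart repositories typically expect to be filled.
    for field in ("description", "maintainers", "version"):
        if not chart.get(field):
            advice.append(f"Add a non-empty '{field}' field to Chart.yaml.")

    # Duplicate maintainer entries are a typical metadata smell.
    maintainers = chart.get("maintainers") or []
    names = [m.get("name") for m in maintainers if isinstance(m, dict)]
    if len(names) != len(set(names)):
        advice.append("Remove duplicate maintainer entries from Chart.yaml.")
    return advice

if __name__ == "__main__":
    for line in check_chart_metadata("mychart/Chart.yaml"):
        print(line)
```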

As a practical result, we have created HelmQA. Developers can use this software to check Helm charts in a local collection, on GitHub or on the KubeApps Hub repository, both in CI/CD pipelines and in batch processing. On top of this software, we have built our continuous assessment by simply retrieving Helm charts once per day, running all checks, calculating all metrics and finally generating and serving advice via a tiny web application. We have also worked on the documentation, including two instructive screencast videos, and will further simplify the use of HelmQA in development and DevOps environments.
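For illustration, such a daily batch run can be pictured roughly as follows; the repository index URL and the per-chart check are placeholders and do not reflect HelmQA's actual interface:

```python
# Rough sketch of a once-per-day batch assessment over a chart repository
# index; the index URL and check logic are placeholders, not HelmQA's API.
import urllib.request
import yaml

INDEX_URL = "https://example.org/charts/index.yaml"  # hypothetical repository index

def fetch_chart_index(url):
    """Download and parse a Helm repository index (index.yaml)."""
    with urllib.request.urlopen(url) as response:
        return yaml.safe_load(response.read())

def assess_all_charts(index):
    """Run a trivial per-chart check and collect per-chart advice."""
    report = {}
    for name, releases in (index.get("entries") or {}).items():
        if not releases:
            continue
        latest = releases[0]  # assume the first listed release is the newest
        advice = []
        if not latest.get("description"):
            advice.append("Chart has no description.")
        report[name] = {"version": latest.get("version"), "advice": advice}
    return report

if __name__ == "__main__":
    report = assess_all_charts(fetch_chart_index(INDEX_URL))
    flagged = {n: r for n, r in report.items() if r["advice"]}
    print(f"{len(flagged)} of {len(report)} charts received advice")
```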

More recently, we have finally been able to upload the preprint on our work entitled «Quality Assessment and Improvement of Helm Charts for Kubernetes-Based Cloud Applications». This empirical research was based on preregistered hypotheses. However, we did not manage to submit in time for the Prereg Challenge, which was unfortunately limited to just a few applicable journals, and therefore decided to take the preprint-first route, which may be complemented with external reviews if other researchers are interested.

A student group at the University of Zurich has taken this state of the art as a starting point and has implemented additional analysis and statistical methods over the published dataset. The results of this work will be summarised and partly merged into the HelmQA codebase in the immediate future.

Following the recommendations given in the preprint, some of the checks may go directly into the “helm lint” command, some are better suited as additional quality gates in repositories such as KubeApps Hub (see the sketch below), and some are exotic or research-oriented enough to still warrant a separate piece of software. Accordingly, we will engage more with the Helm open source community, with which we have so far only had brief, controlled contact as part of the scientific study, to evaluate if and where merging makes sense.
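As an illustration of such a quality gate (not part of HelmQA), a pipeline step could simply run the standard “helm lint” command over every chart in a repository checkout and fail on findings; the directory layout assumed here is hypothetical:

```python
# Illustrative pre-merge quality gate: run `helm lint` on every chart found
# under a charts/ directory and fail the pipeline on findings. Only the
# `helm lint` invocation itself is standard; the layout is an assumption.
import pathlib
import subprocess
import sys

def lint_charts(root="charts"):
    failures = []
    for chart_yaml in pathlib.Path(root).glob("*/Chart.yaml"):
        chart_dir = chart_yaml.parent
        result = subprocess.run(["helm", "lint", str(chart_dir)],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((chart_dir.name, result.stdout + result.stderr))
    return failures

if __name__ == "__main__":
    failed = lint_charts()
    for name, output in failed:
        print(f"--- {name} ---\n{output}")
    sys.exit(1 if failed else 0)
```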

Apart from the static analysis of the metadata and the current template rendering check, it is prudent to also consider basic execution checks which verify that the deployment and launch of containers actually work. Finally, we intend to run live checks of Helm charts and other artefact types on a globally distributed research infrastructure which still needs to be established. We already have concrete plans to realise this advancement and will report on it periodically in 2019.
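A basic execution check of this kind could, for instance, be built around a plain “helm install” into a throwaway namespace; the sketch below uses current Helm 3 command syntax, and the chart path, release name and namespace are purely illustrative:

```python
# Sketch of a basic execution check (Helm 3 syntax): install a chart into a
# throwaway namespace, let --wait confirm that the workloads become ready,
# then clean up again. Chart path, release name and namespace are examples.
import subprocess

def execution_check(chart_path, release="helmqa-check", namespace="helmqa-check"):
    """Return True if the chart installs and its workloads become ready."""
    install = subprocess.run(
        ["helm", "install", release, chart_path,
         "--namespace", namespace, "--create-namespace",
         "--wait", "--timeout", "5m"],
        capture_output=True, text=True)
    # Always attempt a cleanup, even if the install timed out half-way.
    subprocess.run(["helm", "uninstall", release, "--namespace", namespace],
                   capture_output=True, text=True)
    return install.returncode == 0

if __name__ == "__main__":
    ok = execution_check("./mychart")
    print("deployment and launch succeeded" if ok else "execution check failed")
```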

