Tag: docker

HDocker and Docker Image Analysis Accepted for DAIS 2021

We conducted joint work with Université de Neuchâtel on improving the handling of Docker container images in increasingly heterogeneous hardware environments. We propose to (1) incorporate finer-grained hardware dependency information in the image metadata, (2) leverage heuristic analysis techniques to populate such information at large scale (although of course preferring properly curated metadata), and (3) improve the tool support around container creation from images. The work has led to new tools like hdocker and to heuristic analysis rules. Furthermore, to underline the need for such a solution, we have been conducting long-term tracking, over fourteen and by now seventeen months, of selected subsets of registered Docker container images.
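
To give a flavour of such heuristic analysis, here is a minimal sketch in Python; the rule patterns and the inferred label names are illustrative assumptions, not the actual format used by hdocker.

```python
import re

# Illustrative heuristic rules: each maps a pattern found in a Dockerfile
# to an inferred hardware dependency. Patterns and labels are examples only.
HEURISTICS = [
    (re.compile(r"^FROM\s+nvidia/cuda", re.M), "gpu:nvidia-cuda"),
    (re.compile(r"apt-get install .*\bocl-icd\b"), "gpu:opencl"),
    (re.compile(r"--platform=linux/arm64"), "cpu:arm64"),
]

def infer_hardware_labels(dockerfile_text):
    """Return hardware dependency hints inferred from a Dockerfile."""
    return [label for pattern, label in HEURISTICS
            if pattern.search(dockerfile_text)]

if __name__ == "__main__":
    with open("Dockerfile") as f:
        print(infer_hardware_labels(f.read()))
```

Such hints could then be stored in the image metadata, for instance under a label key like hardware.dependencies (again, a hypothetical key), which tooling could check before pulling an image onto unsuitable hardware.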

This work has been accepted at the 21st International Conference on Distributed Applications and Interoperable Systems (DAIS 2021). Ahead of the event, we already provide the collected data and code. Have a look!

Leveraging Docker and reproducible workflows for portable research environments

Reproducibility is an important aspect of research. One particular concept of interest is the set of FAIR principles. FAIR stands for Findable, Accessible, Interoperable and Reusable, and defines a ‘best practices’ approach to research.

However, complying with such a concept presents its own unique challenges: Some research tools rely on complex infrastructure elements and software stacks that are hard to replicate. In such cases, the added workload acts as an inhibitor to ensuring reproducibility. The result is that many experiments, data pipelines and results are hard to replicate. According to our preliminary insights on how experiments are conducted, based on interviews within ZHAW, such complexity is indeed commonplace in our current research activities.
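
One direction we explore is wrapping experiments into containers so that the whole software stack travels with the code. A minimal sketch using the Docker SDK for Python follows; the image tag, paths and script name are placeholders, not a prescribed setup.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Running the analysis inside a version-pinned container makes the software
# stack reproducible for collaborators; pinning the image by digest instead
# of a tag would be stricter still. Image, paths and script are placeholders.
logs = client.containers.run(
    "python:3.9-slim",
    command=["python", "/work/analysis.py"],
    volumes={"/path/to/project": {"bind": "/work", "mode": "ro"}},
    remove=True,
)
print(logs.decode())
```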

Continue reading

Docker image checks: Quality, security, up-to-dateness, layers and inheritance

Docker images have become the valuta franca in the cloud and container platform world. Although on the path to vendor-neutral standardisation (e.g. with OCI support also being in Docker Hub for a year now), developers have for now settled on plain Docker as the de-facto standard due to the vast ecosystem of base images and dependency images which speed up the rapid prototyping of complex scalable applications. From a production-grade DevOps perspective, a key concern is then the assurance that the containers used are of high quality, not affected by security vulnerabilities, and still contain the latest available features. In this blog post, we present a novel approach to visualising the situation around a particular container image.
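
To give one concrete flavour, up-to-dateness can be estimated from registry metadata. Below is a hedged sketch querying the public Docker Hub API; the repository, tag and the 90-day freshness threshold are arbitrary assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone
import requests

def is_fresh(repo="library/nginx", tag="latest", max_age_days=90):
    """Consider an image tag stale if it was last pushed too long ago."""
    url = f"https://hub.docker.com/v2/repositories/{repo}/tags/{tag}"
    info = requests.get(url, timeout=10).json()
    pushed = datetime.fromisoformat(info["last_updated"].replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - pushed < timedelta(days=max_age_days)

print(is_fresh())
```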

Continue reading

Presenting the MAO Orchestrator

The MAO-MAO research collaboration aims to provide metrics, analytics and quality control for microservice artefacts of all kinds, including, but not limited to, Docker containers, Helm charts and AWS Lambda functions. As such, an integral part of prior research has been the various periodic data collection experiments, gathering metadata and conducting automatic code analysis.

However, the project’s ambition to collect data consistently, combined with the need for the collaborators to be able to use each other’s tools and access each other’s data, has created a need for a collaboration framework and distributed execution platform.

In response to this need, we present the first release of the MAO Orchestrator, a tool designed to run these experiments in a smart way and on a schedule, within a federated cluster across research sites. As a plus, nothing implementation-wise ties it to the existing assessment tools, so it is reusable for any use case that requires collaboratively running periodic experiments.
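
The core idea of periodically scheduled experiment runs can be sketched generically with the Python standard library; this is not the MAO Orchestrator’s actual API, and the experiment name and command are placeholders.

```python
import sched
import subprocess
import time

scheduler = sched.scheduler(time.time, time.sleep)
PERIOD = 24 * 3600  # run each experiment once a day (arbitrary choice)

def run_experiment(name, command):
    """Run one data collection experiment and reschedule the next run."""
    subprocess.run(command, check=False)
    scheduler.enter(PERIOD, 1, run_experiment, (name, command))

# Experiment name and command are placeholders for site-local tooling.
scheduler.enter(0, 1, run_experiment,
                ("helm-metadata", ["python", "collect_helm_metadata.py"]))
scheduler.run()
```

In the real system, the federated cluster additionally decides which site executes which run and where the collected data ends up; the sketch only captures the scheduling aspect.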

Continue reading

Introducing the Docker Compose Validator

When cloud application developers work with docker-compose to combine multiple microservices into a single manageable entity, they can make easy mistakes. To prevent these, they can rely on the internal validation logic, which however does not catch many of the typical issues. Therefore, researchers at the Service Prototyping Lab at Zurich University of Applied Sciences wrote a dedicated quality check and assessment tool with a wider range of checks, targeting developers but also students trying to learn the technology. The DCValidator tool is available as a web application (see demo instance) or command-line interface. This blog post describes how to check that docker-compose files are free of issues.
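
To illustrate the kind of checks that go beyond the built-in validation, here is a strongly simplified sketch; the three rules shown are examples in the spirit of the tool, not DCValidator’s actual rule set.

```python
import yaml  # pip install pyyaml

def lint_compose(path):
    """Report a few typical docker-compose issues beyond syntax errors."""
    warnings = []
    with open(path) as f:
        compose = yaml.safe_load(f)
    for name, svc in (compose.get("services") or {}).items():
        if "image" not in svc and "build" not in svc:
            warnings.append(f"{name}: neither 'image' nor 'build' given")
        if "image" in svc and ":" not in svc["image"]:
            warnings.append(f"{name}: unpinned image tag (implicit :latest)")
        if "links" in svc:
            warnings.append(f"{name}: 'links' is legacy; prefer networks")
    return warnings

for w in lint_compose("docker-compose.yml"):
    print("WARN", w)
```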

Docker Compose Validator web interface
Continue reading

Calculating net cost of AWS Lambda Functions

With the increased adoption of serverless computing, the need grows to optimise cloud functions, to make use of resources as efficiently as possible, and to lower overall costs. At the Service Prototyping Lab at Zurich University of Applied Sciences, we investigate how cloud application and platform providers can achieve a fairer billing model which comes closer to actual utility computing, where you pay only for what you really use. We demonstrate our recent findings with AWS Lambda function pricing.
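
The underlying billing arithmetic can be reproduced in a few lines; the prices below are AWS’s public list prices at the time of writing and may change, and the example workload is made up.

```python
PRICE_PER_REQUEST = 0.20 / 1_000_000  # USD per request (public list price)
PRICE_PER_GB_SECOND = 0.0000166667    # USD per GB-second (public list price)

def lambda_cost(invocations, duration_ms, memory_mb):
    """Monthly cost of one function, ignoring the free tier."""
    # Duration is billed in 100 ms increments, rounded up.
    billed_ms = -(-duration_ms // 100) * 100
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 1M invocations per month, 120 ms average runtime, 128 MB memory.
print(f"{lambda_cost(1_000_000, 120, 128):.2f} USD")
```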

Continue reading

Reflections on software artefact quality tutorial

Our work in the Service Prototyping Lab at Zurich University of Applied Sciences consists of applied research, prototype development and conveying knowledge to industry. In this context, we have worked hard over the past two years to gather educational and hands-on material, including our own contributions, for increasingly valuable tutorials. From single lectures to half-day and eventually full-day tutorials, we aim at both technology enthusiasts and experienced engineers who are open to new ideas and sometimes surprising facts. In this reflective blog post, we report on this week’s experience of giving the full-day tutorial on microservice artefact observation and quality assessment.

Continue reading

Bundling CNA Applications

For almost five years, we have been researching cloud-native applications. As part of an industry-wide push towards cloud-native computing, a lot of stacks and middleware components are proposed every day, but few tools and processes help improve the applications themselves, especially in terms of quality attributes such as discoverability, elasticity and resilience. With Helm charts, there is already a higher-level approach to packaging cloud applications for Kubernetes environments. Our work on static analysis of Helm charts, and on quality assessment beyond it, is documented and ongoing. In this post, we take a first look at CNAB, or Cloud Native Application Bundle, which describes itself as a secure and cloud-agnostic way to deliver applications.
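
At its core, a CNAB bundle is described by a bundle.json file. The following sketch generates a minimal descriptor; the field names follow the CNAB core specification, while all concrete values are illustrative placeholders.

```python
import json

# Minimal CNAB bundle descriptor. Field names follow the CNAB core spec;
# name, version and image values are illustrative placeholders.
bundle = {
    "schemaVersion": "v1.0.0",
    "name": "example-app",
    "version": "0.1.0",
    "description": "Illustrative CNAB bundle",
    "invocationImages": [
        {"imageType": "docker", "image": "example/invocation-image:0.1.0"},
    ],
}

with open("bundle.json", "w") as f:
    json.dump(bundle, f, indent=2)
```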

Continue reading

End-to-end testing of cloud-native applications

In our research group, we have for many years observed and systematically explored how cloud applications are being developed. In particular, we focus our investigations on cloud-native applications whose properties are largely determined by exploiting the capabilities of modern cloud platforms for both their development and operation. As we are involved in European research on testing cloud applications (ElasTest), our aim was to look at the current project results through cloud-native glasses. This blog post reports about end-to-end testing of composite containerised applications from this perspective.
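
In its simplest form, an end-to-end test of a composite containerised application brings the whole composition up, probes it from the outside and tears it down again. The sketch below shows this pattern independently of any particular testing framework; the compose file, port and health endpoint are placeholders.

```python
import subprocess
import time
import requests

# Bring up the composite application under test; assumes a compose file in
# the current directory and a service on port 8080 (both placeholders).
subprocess.run(["docker-compose", "up", "-d"], check=True)
try:
    status = None
    for _ in range(30):  # poll until the application answers
        try:
            status = requests.get("http://localhost:8080/health",
                                  timeout=2).status_code
            if status == 200:
                break
        except requests.ConnectionError:
            pass
        time.sleep(1)
    assert status == 200, "application did not become healthy in time"
finally:
    subprocess.run(["docker-compose", "down"], check=True)
```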

Continue reading