Research Directions for FaaS

by Josef Spillner

FaaS, or Function-as-a-Service, is a significant application engineering trend that the Service Tooling research initiative of the Service Prototyping Lab at Zurich University of Applied Sciences is following closely. Yesterday, the lab’s semi-regular open-for-all evening seminar series, Future Cloud Applications, centred on tools for managing the growing FaaS ecosystem. While the open source tools prototyped through the initiative are primarily of interest to Swiss software application engineers and cloud providers, the research challenges the work continuously uncovers are more fundamental and include some harder nuts to crack in dedicated research projects. This blog post reflects on what has been achieved already and what needs to be accomplished in the coming years.

[Photo: Second Future Cloud Applications event; courtesy of Manuel Perez Belmonte]

FaaS is hard to capture in numbers because few deployments are publicly reported. We know some of the limits, which depend on the provider and service model (e.g. temporal: 5 minutes maximum execution time; spatial: 1.5 GB maximum memory allocation; 100 maximum concurrent invocations), but we hardly know the possibilities and current usage patterns. Statistics of dubious quality and plenty of anecdotes circulate. This post therefore analyses qualitative characteristics of FaaS instead. It does not explain each point in technical detail, as we leave that to a future position paper on the topic, but rather presents a high-level overview.

What we know. Despite FaaS being a recent technology trend, or perhaps due to this fact in combination with time-to-market pressure, there are many competing designs. As a programming model, providers differ in function signatures (names, parameters, return values) and exports at the programming level, in function unit names, in the number of functions per unit, and in the packaging for deployment (plain files vs. archives). There is almost a 1:1 mapping between providers and tooling for deployment and management. There is no standard API for managing (deploying, listing, calling) functions through a control plane.
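As a concrete illustration of the diverging programming models, consider the Python handler conventions of two runtimes. The function bodies and payloads below are made up for illustration, but the signature conventions follow the respective documentation.

```python
# AWS Lambda (Python): the handler takes an event and a context object,
# is referenced as <module>.<function> in the deployment configuration,
# and is typically uploaded as a ZIP archive.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"message": "Hello, %s!" % name}

# Apache OpenWhisk (Python): the entry point is conventionally called main,
# receives a single dict of parameters and must return a dict; actions can
# be deployed as plain source files.
def main(args):
    name = args.get("name", "world")
    return {"message": "Hello, %s!" % name}
```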

Functions are sitting in silos. As other researchers have already noted, there is no public marketplace to browse, subscribe to or share them. This lowers the level of re-use and thus amplifies the unwanted reinvent-the-wheel effect, contradicting one of the promises of service-oriented architectures, particularly in view of the rise of microservice compositions. There is also no thorough analysis of whether existing service description languages (e.g. Swagger, RAML, API Blueprint) are suitable for describing single functions.
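To make that last question more tangible, one could treat each function as a one-operation API. Below is a minimal sketch of such a description in Swagger (OpenAPI 2.0) terms, expressed as a Python dict; the function name and parameters are hypothetical, and whether this level of description is adequate for single functions is exactly the open question.

```python
# Hypothetical Swagger (OpenAPI 2.0) description of a single function,
# treating the function as an API with exactly one POST operation.
single_function_description = {
    "swagger": "2.0",
    "info": {"title": "resize_image", "version": "0.1"},
    "paths": {
        "/resize_image": {
            "post": {
                "parameters": [{
                    "name": "body", "in": "body", "required": True,
                    "schema": {
                        "type": "object",
                        "properties": {
                            "url": {"type": "string"},
                            "width": {"type": "integer"},
                        },
                    },
                }],
                "responses": {"200": {"description": "URL of the resized image"}},
            }
        }
    },
}
```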

From an engineering perspective, analytical and development tools are rare. X-Ray and Step Functions are changing this for debugging and composition, respectively, but they are provider-specific and fill only part of the gap. Specific software design, engineering and testing methods for serverless applications are still missing.
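As an example of such provider-specific composition, a Step Functions workflow chains functions through an Amazon States Language document. The sketch below uses hypothetical function ARNs and shows only the basic Task/Next/End structure, without the error handling and branching a real workflow would need.

```python
import json

# Minimal Amazon States Language sketch chaining two hypothetical Lambda
# functions into a simple two-step pipeline.
state_machine = {
    "Comment": "Thumbnail pipeline (illustrative only)",
    "StartAt": "FetchImage",
    "States": {
        "FetchImage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:fetch_image",
            "Next": "ResizeImage",
        },
        "ResizeImage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:resize_image",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```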

What we contribute. Function development is accelerated with tools that automatically decompose legacy code into deployable function units. Our tools Podilizer and Lambada perform this task for Java and Python, respectively. While existing deployment tools are mostly limited to the provider that offers them, snafu-import aims to overcome this limitation with an m:1 and, furthermore, an m:1:n model for re-exporting functions into other function runtime targets. As existing runtimes are likewise limited and immature in terms of setup effort, extensibility, features and stability, Snafu follows a pragmatic design for self-hosting functions in containers for development, testing and multi-tenant operation. For providers who already want to offer FaaS pilot services, this bridges the period until open source runtimes can generally be recommended for production use. While there is no commercially operated marketplace for functions, we run a public mockup function marketplace, based on functions actually imported into Snafu, which could evolve into a functional prototype with subscription and sharing features.
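To convey the idea behind automated decomposition, the sketch below shows, purely conceptually and not as the actual output of Podilizer or Lambada, how a method buried in a legacy class could be turned into a self-contained, FaaS-deployable handler.

```python
# Legacy code: a method inside a monolithic application class.
class ReportGenerator:
    def word_count(self, text):
        return len(text.split())

# Decomposed unit (conceptual sketch): the method becomes a standalone,
# stateless handler with a FaaS-style signature, ready to be packaged
# and deployed individually.
def word_count_handler(event, context=None):
    text = event.get("text", "")
    return {"word_count": len(text.split())}
```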

What we don’t know (yet). Open research challenges include the following fifteen or so questions:

- Can we determine a priori the sweet spots for running code in a FaaS environment with economic benefits?
- Can we automatically profile and tune code so that execution hotspots can be offloaded into FaaS?
- Are there scenarios in which long-running functions on FaaS beat running the same code monolithically in a container or virtual machine?
- How to micro-bill functions?
- Which level of isolation is appropriate for functions?
- How to reduce the startup overhead?
- Could functions be transpiled or compiled into native code or unikernels for improved performance?
- Is automated decomposition really feasible, and if so, which steps are best handled with static and which with dynamic code analysis?
- How to cluster function deployments so that all calls are as local as possible while the amount of code duplication is minimised?
- What about methods or functions which cannot easily be transferred into the FaaS environment due to input/output or insufficient dependencies?
- How to handle state: through service-side storage, differential transfer or other means such as state heuristics?
- How scalable are commercial FaaS platforms really when it comes to deploying thousands of inter-dependent functions, also considering management services such as API gateways and storage?
- Is in-memory storage with function instance affinity beneficial?
- Given the proliferation of manycore-optimised operating systems and hypervisors, is scalable self-hosting of functions becoming more attractive compared to native FaaS scaling, and do we need new languages for writing the functions?
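For the first of these questions, the economic sweet spot, a back-of-the-envelope comparison already hints at the trade-off. The prices below are purely illustrative assumptions, not current list prices of any provider.

```python
# Illustrative break-even sketch: FaaS billed per request and per GB-second
# versus a flat-rate virtual machine. All prices are assumed for illustration.
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed, in USD
PRICE_PER_GB_SECOND = 0.00001667    # assumed, in USD
VM_PRICE_PER_MONTH = 15.0           # assumed small VM, in USD

def faas_monthly_cost(invocations, duration_s, memory_gb):
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

for invocations in (100_000, 1_000_000, 10_000_000, 100_000_000):
    cost = faas_monthly_cost(invocations, duration_s=0.2, memory_gb=0.5)
    cheaper = "FaaS" if cost < VM_PRICE_PER_MONTH else "VM"
    print("{:>11} calls/month: FaaS ~ ${:8.2f} -> {} cheaper".format(
        invocations, cost, cheaper))
```

Under these assumed numbers, FaaS wins at low and moderate call volumes and the flat-rate VM wins once sustained load keeps the machine busy, which is precisely the kind of boundary a profiling tool would need to determine automatically.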

Within the Service Tooling research initiative, answers to some of these questions will be sought in the coming months. Additional contributions will be delivered in the form of preprints, paper submissions, open source software tools and live services on our lab website.

Tags: faas, lambada, serverless, snafu, tooling
