Rapid API generation with Ramses

by Josef Spillner

Rapid service prototyping, cloud application prototyping and API prototyping are closely related techniques which share a common goal: to get a first working prototype designed, implemented and placed online quickly, with little effort and minimal headache over tooling concerns. The approaches in this area are still emerging and thus often ad-hoc or even immature. Several prototyping frameworks nevertheless show the potential to become part of serious engineering workflows. In this post, the Ramses framework will be presented and evaluated against this goal.

Ramses combines the Pyramid web framework for Python applications with RAML service descriptions, a database, a search engine and several additional modules to achieve a description-to-service transformation. The transformation considers the schemas linked to the description and thus allows for structured data manipulation according to the CRUD (Create, Read, Update, Delete) paradigm. Both relational databases (through SQLAlchemy) and document databases (currently MongoDB) are supported choices, whereas the search engine is fixed to ElasticSearch.

In a level model, the first level is application prototyping, where the application can manage its own data. The second level is to offer a (possibly remotely invocable) API. The third level is to lift the entire construct to the (micro-)service level, which calls for uniform descriptions and interfaces as well as discoverability. In this model, Ramses focuses on the second level with some aspects of the third one. Hence, the focus of this post is equally on API generation as part of a more holistic service prototyping concern.

The website of Ramses is lacking in several regards. The prominently shown example is only an excerpt of the actual effort needed to get a service going. The framework is open source, but the reference to the associated Git repository is somewhat hidden. And finally, the published instructions assume a prior installation and configuration of both a database server and an ElasticSearch instance. Hence, this post also aims to improve on this by offering more streamlined instructions suitable for auto-didactic learning and reproduction of the results. All instructions have been tested on a Xubuntu 16.04 virtual machine after a default installation on an 8 GB disk image.

First things first: installation of the prerequisites. The line shown below assumes a minimal environment with an embedded database. Alternatively, replace sqlite3 with mongodb, or with the two packages postgresql-server-dev-9.5 and postgresql-9.5 (the full PostgreSQL variant is spelled out after the minimal line below).

sudo apt-get install python-pip virtualenv git vim elasticsearch sqlite3 curl
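
For reference, the PostgreSQL variant of the same line would be:

sudo apt-get install python-pip virtualenv git vim elasticsearch postgresql-server-dev-9.5 postgresql-9.5 curl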

If you use PostgreSQL, do not forget to configure a proper database.

sudo su -c psql postgres
# CREATE USER ramses WITH PASSWORD 'ramses' SUPERUSER;
# CREATE DATABASE ramses OWNER ramses;
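
A quick way to verify the new role and database is to connect with them directly; the password prompt expects ramses as set above.

psql --host=localhost --username=ramses --password -c 'SELECT 1;' ramses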

Furthermore, ElasticSearch does not start by default on Ubuntu, and the stock instructions do not work out of the box due to a legacy SysV init vs. Systemd compatibility issue. Instead, the following commands work by allowing the daemon to start before enabling the Systemd unit.

sudo sed -i -e 's/^#START_DAEMON/START_DAEMON/g' /etc/default/elasticsearch
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl start elasticsearch.service
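
To double-check that the daemon has come up (it may take a few seconds after the start command), query its HTTP port directly; ElasticSearch answers with a short JSON banner.

curl http://localhost:9200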

Now, the prerequisites are set. Confirm with netstat -ltp that both ElasticSearch (on ports 9200 and 9300) and the database server (if not embedded, e.g. port 5432 for PostgreSQL) are running. Then, simply follow the instructions as found in the latest development branch in Git. They will lead to a working example project using Ramses.

mkdir my_project
cd my_project
virtualenv venv
source venv/bin/activate
pip install ramses
pcreate -s ramses_starter . # note trailing dot; choose SQLAlchemy or MongoDB when prompted
pserve local.ini

Before executing the last command, modify the configuration file local.ini according to your database choice. Use one of the two lines given below; note the (odd but necessary) fourth slash for SQLite. Also take a look at the service description files (api.raml, with the schema in items.json) to get a glimpse of what the service will offer.

sqlalchemy.url = postgresql://ramses:ramses@localhost:5432/ramses
sqlalchemy.url = sqlite:////tmp/test.sqlite

There should be no error messages during the execution of these commands. Notice how virtualenv is used to shield the system from the additional dependencies which will be installed. Overall, the folder will be about 50 MB in size. As the last command occupies your terminal, open a second session and again verify with netstat -ltp that Ramses is indeed running on the default port 6543. Furthermore, you can verify that the data structures have been created, for instance by running \d in PostgreSQL (connect with psql --host=localhost --username=ramses --password ramses) or .tables in SQLite (connect with sqlite3 /tmp/test.sqlite).
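
The same checks can also be scripted as one-off commands; the following assumes the credentials and paths used above (PostgreSQL will prompt for the password).

psql --host=localhost --username=ramses --password -c '\d' ramses
sqlite3 /tmp/test.sqlite .tables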

Now you can read and write data through the HTTP API.

curl http://localhost:6543/api/items
curl -H "Content-Type: application/json" -X POST -d '{"name": "Abu Simbel", "id": 19, "description": "Nubia"}' http://localhost:6543/api/items

The example API is ready and can now be customised. An additional step towards actual service prototyping would be the deployment of the whole project. The options are: (1) placing everything into a virtual machine or container, or (2) deploying it as a Python project on a PaaS with the database and search engine offered as complementary services. Unfortunately, (2) is not widely available. Hence, this post covers (1) using a container. One basic consideration is whether to continue using virtualenv, given that containers isolate anyway. The web is full of posts for and against it. Given that the environments produced by virtualenv are not portable, i.e. the generated scripts contain hardcoded paths (as the quick check below shows), it appears less troublesome to simply do away with them.
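
The hardcoded path is easy to spot in the generated activation script, which contains a line of the form VIRTUAL_ENV="/home/user/my_project/venv" (the exact path will differ on your machine):

grep VIRTUAL_ENV= venv/bin/activate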

The following Dockerfile launches both the generated API and ElasticSearch, which unfortunately is not an optional service for Ramses. Furthermore, given the issue with Systemd initialisation, a crude but simple workaround for launching both services is used.

FROM python

# ElasticSearch is mandatory for Ramses and therefore lives inside the container as well
RUN apt-get update
RUN apt-get -y install elasticsearch

# copy the generated project, but drop the non-portable virtualenv
ADD . /opt/ramses
RUN rm -rf /opt/ramses/venv

# install Ramses plus the project's own dependencies and the SQLAlchemy engine
RUN pip install ramses
RUN cd /opt/ramses && pip install -r requirements.txt && pip install nefertari_sqla

EXPOSE 6543

# launch ElasticSearch in the background, wait for it, then serve the API
CMD cd /opt/ramses && (su -s /bin/bash -c /usr/share/elasticsearch/bin/elasticsearch elasticsearch &); sleep 20 && pserve local.ini
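
As an aside, instead of removing the copied virtualenv inside the image, a .dockerignore file next to the Dockerfile could keep it out of the build context in the first place; a minimal sketch matching the venv directory created earlier:

venv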

Build the container with sudo docker build -t ramsesdocker . and run it with sudo docker run -ti -p6543:6543 ramsesdocker. Note that the container will execute fine locally but may not easily run on several container platforms due to security concerns. On OpenShift, it starts despite a security warning but fails because su expects a terminal. One workaround is to launch ElasticSearch without the use of su. This requires /usr/share/elasticsearch/data (a symlink to /var/lib/elasticsearch) to be world-writeable. Furthermore, the sleep 20 part is brittle and perhaps over- or underestimates the launch time of ElasticSearch; properly polling for port availability, as sketched below, would be more robust. Installing net-tools and running netstat -ltpn during the startup sequence helps with debugging. With these changes, the container has been verified to work successfully on OpenShift (e.g. at APPUiO).
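
As a sketch of such a polling approach, the fixed sleep 20 in the CMD line could be replaced by a small loop which waits until the ElasticSearch HTTP port answers; this assumes curl is available inside the image (it is included in the standard python base image).

CMD cd /opt/ramses && (su -s /bin/bash -c /usr/share/elasticsearch/bin/elasticsearch elasticsearch &); until curl -s http://localhost:9200 > /dev/null; do sleep 1; done; pserve local.ini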

Overall, getting a service up and running takes several hours with Ramses as it is documented, and presumably less than half an hour with the instructions provided here. We are eager to hear confirmations, rebuttals and other evaluations of this tool, as well as reports about related prototyping tools.

