Many large enterprise applications rely heavily on information stored in databases. Most of them are used every day by hundreds of users via a web browser, or run calculations in batch mode; both workloads are database intensive. As long as every resource is on premises, communication speed to those databases is rarely an issue, because data-center-internal network latency and bandwidth are good enough.

Modern cloud offerings give companies new flexibility in planning and scaling their data centers. Data and compute services no longer need to remain in the on-premises data center; they can be spread across an internal private cloud or externally among one or many public cloud providers. This concept challenges the traditional IT environment and provides an unprecedented flexibility that pushes traditional data and compute services to their limits.

The hybrid cloud concept combines internal resources with cloud-based resources that are used for bursting, i.e., when the internal resources are overloaded. We studied the feasibility of using public cloud resources in different use cases for large enterprises, especially in cases where the application computing core has to remain on premises (typically for security reasons) while the database is moved to the cloud.

Test setup from cloud provider to cloud provider

Use Case

One reason to move to a public cloud is to lower operational effort: the cloud provider operates the infrastructure, and the enterprise simply consumes it as a service, taking advantage of the cloud provider's economies of scale. This is mainly feasible for non-production applications, where performance and data security are not the main focus.

For short tests and new setups of small applications, the cloud can help keep CAPEX low, as no hardware needs to be acquired. This saves money when resources are only needed for short periods of time.

Benchmarking methodology

To perform benchmark tests on the different setups, a simple Java web application was created. It wraps the TPC-C implementation of OLTP-Benchmark together with Apache JMeter. Two kinds of tests were executed: the TPC-C standard industry benchmark for databases, and custom JMeter tests against a database structure taken from one of SwissRe's applications, running the queries that application issues most often. The TPC-C test simulates the OLTP workload of an artificial wholesale supplier company. It consists of five transaction types: New-Order, Payment, Order-Status, Delivery, and Stock-Level. The most important is New-Order, which enters a complete order in a single database transaction. The benchmark was designed to simulate the variable workload found in production OLTP environments.
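To illustrate how such a driver mixes the five transaction types, here is a minimal sketch in Java. The percentages follow the standard TPC-C transaction mix; the class and method names are illustrative only and are not taken from the actual test harness.

```java
import java.util.Random;

// Sketch of how a TPC-C driver picks the next transaction type.
// Weights follow the standard TPC-C mix; names are illustrative.
public class TpccMix {
    static final String[] TYPES = {
        "NewOrder", "Payment", "OrderStatus", "Delivery", "StockLevel"
    };
    // Transaction mix in per cent (sums to 100).
    static final int[] WEIGHTS = {45, 43, 4, 4, 4};

    // Pick one transaction type according to the weighted mix.
    static String nextTransaction(Random rng) {
        int roll = rng.nextInt(100);          // uniform in 0..99
        int cumulative = 0;
        for (int i = 0; i < TYPES.length; i++) {
            cumulative += WEIGHTS[i];
            if (roll < cumulative) {
                return TYPES[i];
            }
        }
        return TYPES[0];                      // unreachable: weights sum to 100
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int[] counts = new int[TYPES.length];
        for (int i = 0; i < 100_000; i++) {
            String t = nextTransaction(rng);
            for (int j = 0; j < TYPES.length; j++) {
                if (TYPES[j].equals(t)) counts[j]++;
            }
        }
        for (int i = 0; i < TYPES.length; i++) {
            System.out.printf("%s: %.1f%%%n", TYPES[i], counts[i] / 1000.0);
        }
    }
}
```

Over many iterations the observed shares converge to the configured mix, so the benchmark exercises the database with roughly the same transaction profile as a production order-entry system.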


Tests done in an enterprise environment showed that, when the computing remains local, internal database performance is still significantly better than that of databases at the best-known public cloud providers. Going through the internet to issue DB requests reduces the requests' throughput by multiple orders of magnitude and renders the use of cloud databases impractical when the compute resources are not located in the same cloud.
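A back-of-the-envelope calculation shows why the gap is this large. A synchronous database connection cannot complete more transactions per second than the network round-trip time allows. The latency values and the number of round trips per transaction below are illustrative assumptions, not measurements from the study:

```java
// Back-of-the-envelope estimate of how network round-trip time caps the
// throughput of a single synchronous database connection.
public class LatencyBound {
    // Upper bound on transactions per second for one connection that
    // waits for each round trip before issuing the next request.
    static double maxTps(double roundTripMillis, int roundTripsPerTx) {
        return 1000.0 / (roundTripMillis * roundTripsPerTx);
    }

    public static void main(String[] args) {
        // Assume a transaction needs ~10 SQL round trips (illustrative).
        int roundTrips = 10;
        double lan = maxTps(0.5, roundTrips);   // on-premises network, ~0.5 ms RTT
        double wan = maxTps(50.0, roundTrips);  // internet to a cloud provider, ~50 ms RTT
        System.out.printf("LAN bound: %.0f tps%n", lan);    // 200 tps
        System.out.printf("WAN bound: %.0f tps%n", wan);    // 2 tps
        System.out.printf("slowdown: %.0fx%n", lan / wan);  // 100x
    }
}
```

Even with these conservative assumptions, moving only the database across the internet costs two orders of magnitude in per-connection throughput, which matches the pattern observed in the benchmarks.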

TPC-C Throughput comparison between local Oracle instance and remote MySQL at cloud providers with the application core on premises

As most large enterprises already have a sophisticated database and application server infrastructure, changing to a public cloud offering can be challenging and expensive.

Using the existing hardware and adding an IaaS and possibly a PaaS layer would provide the flexibility of a cloud without a drastic performance impact. The existing database offering – often Oracle or DB2 – could be extended with a cheap MySQL alternative. In any case, a high-performance database offering close to the application drastically improves the overall performance of database-heavy applications.

Cloud possibilities for large enterprises

Some cloud providers offer a billing model where the customer only pays for the hours in which actual requests are made. This makes the public cloud attractive for applications that are not very database intensive and are used infrequently (for example, a ski-event registration application). With this model, no internal hardware is required and the cost of the application infrastructure can be kept low.

In a hybrid cloud setup, development instances could also easily be deployed to a public cloud, while the production instance runs in the private high-performance cloud.

For short bursts in the private cloud environment, it is currently not reasonable to add compute power from an external provider while the working data remains on premises: the performance penalty from latency and limited throughput is simply too large.