The 3.3.2 release of Cyclops, the rating-charging-billing solution for cloud software and platform providers, is a good opportunity for a deep dive into the new forecasting engine: how it works, why it works the way it does, and how to use it.
First, a bit of news: active maintenance and further updates to Cyclops can now be found in the repository https://github.com/serviceprototypinglab/cyclops. The primary new addition is the forecasting engine. It helps SaaS/PaaS/CaaS/…XaaS providers not only to charge customers for their services, but also to predict revenue flow as a basis for deciding about future investments.
Core Estimation & Forecasting Engine
The purpose of the Estimation Engine is to provide revenue prediction functionality and to let the user, in this case a manager or administrator, test potential changes to the pricing strategies they employ and see how those changes would affect future revenue generation.
Thus, the engine works in two stages. The first stage is the forecast generator, which is based on the principles of time-series forecasting and is integrated into Cyclops’ UDR microservice. Early versions of the engine relied on the ARIMA model for forecast generation, but problems in its Java library implementation led to a simpler forecasting method being used. At this stage, the forecast is generated in the following way:
- Existing usage data is collected from the usage table of the Cyclops UDR database
- Data is organized by user and usage type (e.g. CPU allocated or external IPs used)
- For each user and each type of usage, the average hourly usage is calculated and then extrapolated over the desired forecast duration. This may seem like a downgrade from the ARIMA model used in previous iterations, but test runs have shown the results to be within ±5% of what that model predicted, and, as stated before, the Java library implementing that model was problematic. A sketch of this calculation follows below.
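To make this concrete, here is a minimal sketch of that calculation in plain Java. The UsageRecord class and its fields are hypothetical stand-ins for the records in the UDR usage table, not the actual Cyclops code:

import java.util.List;

// Hypothetical stand-in for a record from the UDR usage table.
class UsageRecord {
    String account;  // e.g. "alice"
    String metric;   // e.g. "storage_allocated"
    double usage;    // usage measured over one hour
}

class ForecastSketch {
    // Average the historical hourly usage of one (account, metric) pair
    // and extrapolate it flat over the forecast horizon.
    static double forecastTotal(List<UsageRecord> history, int forecastDays) {
        if (history.isEmpty()) {
            return 0; // no history, nothing to extrapolate
        }
        double sum = 0;
        for (UsageRecord record : history) {
            sum += record.usage;
        }
        double averageHourly = sum / history.size();
        return averageHourly * 24 * forecastDays;
    }
}

For instance, a user averaging 2 GB of storage_allocated per hour is projected, over a 30-day forecast, at 2 × 24 × 30 = 1440 GB-hours, which the Coin rules then translate into revenue.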
The second element of the estimation engine is the ability to generate the revenue forecast with pricing rules different from the ones currently used in production. Special ‘testing’ rules can be added to the Coin engine; they rely on the forecast generator’s ability to ‘tag’ the generated records so that specific rules are triggered. Put simply, a rule can test records for a specific tag: when a forecast is requested with that tag, the generated records carry it and therefore trigger exactly these targeted rules.
Using the Engine
To demonstrate the features of the Estimation Engine, we’ll walk through some simple examples. This will be made even easier with the use of the graphical dashboard.
We will actually follow the stages of the engine backwards, starting by deploying our test rules. First, we navigate to the Rule Management page in the dashboard. Here, we can see all rules deployed to both instances of the Coin engine.
By going to the New Rule tab, we can add new rules to the engine. Let’s look at an example of a testing rule:
import ch.icclab.cyclops.facts.Usage;
import ch.icclab.cyclops.facts.Charge;
rule "Test 1 rule for storage_allocated (12:36:0.4 20/Aug/2019)"
salience 60 // checked before the production rules, which have salience 50
when
    // match only usage records carrying the 'test1' tag in the account field
    $usage: Usage(metric == "storage_allocated" && account.contains("test1"))
then
    // replace the usage fact with a priced charge in CHF
    Charge charge = new Charge($usage);
    charge.setCharge(0.00000000004 * $usage.getUsage());
    charge.setCurrency("CHF");
    retract($usage);
    insert(charge);
end
The important parts to note here are account.contains("test1"), and the fact that the rule has a salience of 60, whereas production rules have 50. The first means that this rule only fires if the account field of the usage record contains the ‘test1’ substring. When a forecast is requested with the ‘test1’ tag, all generated records contain this substring in their account field, so they trigger the rule whenever the other conditions are met. These tags can thus be used to group rules together into entire pricing models. The salience being higher than that of the production rules simply ensures that this rule is checked first, since a rule firing means the record is deleted from the queue.
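On the generator side, tagging amounts to embedding the tag in the account field of every synthetic record, so that the contains() check in the rule above matches. Here is a minimal sketch, reusing the hypothetical UsageRecord class from earlier; the actual field layout in Cyclops may differ:

class TaggingSketch {
    // Hypothetical sketch: tag a generated forecast record so that it only
    // triggers rules checking for this tag (e.g. "test1").
    static UsageRecord tagRecord(UsageRecord generated, String tag) {
        generated.account = generated.account + tag; // e.g. "alice" -> "alicetest1"
        return generated;
    }
}

Because the testing rule retracts the matched usage fact, the lower-salience production rules never see a tagged record, so it cannot be billed twice.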
After all the rules are deployed we can move to the forecasting page to generate some forecasts and compare our models. Let’s review the options given.
For the type of forecast, we can select:
- Single account: This will only generate a forecast for the user selected in the user field, which appears only when this option is chosen
- Global: This will average the usage of all accounts and generate a single forecast, organising records only by usage type. It is only useful for getting an overview of databases with many accounts.
- Pattern-based global: This will generate a forecast for each and every account in the database, using the full forecasting strategy of organising historical records by user and type and then forecasting each.
The target model field is used to input the rule tag to be used for the forecast.
Finally, the length of the forecast is the number of days into the future that will be estimated.
Generating the forecast will produce 3 cards:
- Revenue by account: This card breaks down the total forecast by user account
- Total forecast by model: This card displays the total forecasted revenue. It is useful for comparing multiple models when more than one forecast is generated
- Bill breakdown: This view breaks down each individual bill generated by the engine by usage type, to provide details on how each billed metric contributed to the total bill
It is worth noting that multiple forecasts can be generated in sequence, using different tags, to allow for direct comparison.
Finally, the Record Cleanup card can be used to clear the forecast. This will delete all records with a given tag, from all 3 Cyclops databases.
Here is a video of a forecast being generated from our Service Prototyping Research Videos channel on YouTube: