Category: Open Source (page 2 of 4)

N_O_conf: Nagios-based monitoring of OpenStack made easy


“The autoscaling cloud monitoring system that requires no manual reconfiguration”

“Nagios OS autoconfigurator” (N_O_conf) is a cloud monitoring system that automatically adapts its monitoring behavior to the current user-initiated VM infrastructure. N_O_conf works by installing a cloud environment change listener daemon which repeatedly polls the OpenStack API for changes in the VM infrastructure. As soon as the destruction of a VM is detected, it initiates a reconfiguration of the Nagios monitoring server. Nagios OS autoconfigurator can be installed on top of any OpenStack-based cloud environment without interfering with the cloud provider's infrastructure: because it runs inside virtual machines, cloud consumers can use it as their own monitoring system. The N_O_conf monitoring system monitors all VMs that are owned by the user who installed it.
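
The change listener is essentially a small polling loop around the compute API. Below is a minimal sketch of that idea in Python, not the actual N_O_conf code: the novaclient constructor arguments, the polling interval and the reconfigure_nagios() helper are assumptions for illustration.

import time
from novaclient import client  # python-novaclient

POLL_INTERVAL = 60  # seconds between polls (illustrative value)

def reconfigure_nagios(server_ids):
    """Hypothetical hook: regenerate the Nagios host definitions and reload."""
    print("Reconfiguring Nagios for servers: %s" % sorted(server_ids))

def listen_for_changes(nova):
    known = {server.id for server in nova.servers.list()}
    while True:
        time.sleep(POLL_INTERVAL)
        current = {server.id for server in nova.servers.list()}
        if current != known:  # a VM owned by this user was created or destroyed
            reconfigure_nagios(current)
            known = current

if __name__ == "__main__":
    # Constructor arguments depend on your python-novaclient version.
    nova = client.Client("2", "myuser", "mypassword", "myproject",
                         "http://keystone.example.com:5000/v2.0")
    listen_for_changes(nova)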


VM Reliability Tester for OpenStack


“Measure and benchmark reliability of your OpenStack virtual machines.”

“VM Reliability Tester” is a software tool that tests the performance and reliability of virtual machines hosted on an OpenStack cloud platform. It evaluates the failure rate of VMs by performing a stress test on them. VM Reliability Tester installs OpenStack virtual machines, uploads a test program to them, runs this test program remotely and then captures program execution times to determine the reliability of the virtual machines. If a test program run takes significantly longer than usual to complete, this is counted as a VM failure. Such deviations in execution time are an important benchmark for testing the performance and reliability of your OpenStack environment.

Why VM Reliability Tester?

Cloud computing (or, to be more precise, virtualization) provides virtual resources instead of physical ones. The performance of virtual resources is hidden from the user, because virtual resources abstract from the physical hardware layer. As a system administrator you still might want to know how your virtual machines react under heavy load, and you want real performance measurements instead of promises from your cloud vendor. It can therefore be an advantage to test the behavior of the virtual machines you have created in your OpenStack cloud and to measure their performance before building a production infrastructure and deploying production applications on them. VM Reliability Tester delivers estimates of how your VMs perform when running applications. With the data produced by VM Reliability Tester you will be able to:

  • Check whether your VMs perform well enough to meet your performance requirements.
  • Benchmark VM images in terms of application performance.
  • Benchmark OpenStack platforms from different vendors.
  • Acquire data that helps you to shape SLAs and underpinning contracts.

How does it work?

VM Reliability Tester uses a “master” VM which creates the test VMs and uploads test programs to them. The master VM first configures the test VMs and then runs the uploaded test programs. Test program runs are repeated in batches of a configurable size: the test program executes the configured number of times on the test VMs and logs the execution time of each run. After a batch of test program runs has finished, the master VM collects the logged execution times and calculates their mean and standard deviation for the batch. If a test program run took longer than the batch mean plus three standard deviations, it is counted as a failure and logged by the master VM in a file called “f_rates.csv”.
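
The failure criterion itself fits in a few lines. The following is an illustrative sketch, not the tool's actual source code; it simply assumes the execution times of one batch are available as a plain list of seconds.

import statistics

def count_failures(execution_times):
    """A run is a failure if it exceeds the batch mean by more than 3 standard deviations."""
    mean = statistics.mean(execution_times)
    stdev = statistics.stdev(execution_times)
    threshold = mean + 3 * stdev
    return sum(1 for t in execution_times if t > threshold)

def failure_rate(execution_times):
    """Failure rate of one batch: number of failures divided by the number of runs."""
    return count_failures(execution_times) / len(execution_times)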

Based on the number of batches and test program runs per batch, as well as the number of failures, VM Reliability Tester computes a failure rate sample. This sample is then used to predict failure rates for production VM infrastructures.

Setup and Installation

Prerequisites for installation of VM Reliability Tester are:

  • You must have valid OpenStack authentication credentials and provide them in the setup file “openrc.py”.
  • You have to provide a private/public keypair for authentication with the VMs that you own. The local paths to your public and private key files must be added to the “config.ini” and “remote_config.ini” files.
  • You must have a PC or laptop with Python and some additional Python libraries installed.

Installation of the tool is done simply by cloning the GitHub repository and adapting the contents of the files openrc.py, config.ini and remote_config.ini. Once you have cloned the VM Reliability Tester repository and made these configuration changes, you only need to run vm-reliability-tester.py. The script will create several CSV files that contain the failure rates of the VMs and the possible distributions of the failure rate.

GitHub page

VM Reliability Tester is available on the following GitHub page:

https://github.com/icclab/vm-reliability-tester

SmartDataCenter APIs – turning up the Heat

As mentioned in the first post about SmartDataCenter, it features various APIs. In this post we will have a look at them. Furthermore, I would like to present sdcadmin & sdc-heat, two small Python projects I have been working on. The former is a Python client library for SDC's admin APIs. The latter is an OpenStack Heat plugin that allows the provisioning of SmartMachines and KVM VMs on SDC.


Introduction to SmartDataCenter

Joyent recently open sourced the IaaS Platform SmartDataCenter and the Object Storage Manta, the software they use for their own service offerings. So, what’s all the buzz about? Why should you be excited? Why is it even worth talking (or in this case, writing) about SDC when we have OpenStack? In this blog post I will cover some of the fundamentals of SDC and why it’s worth a second look.


MobileCloud Networking Live @ Globecomm

As part of the on-going work in MobileCloud Networking, the project will demonstrate its outputs at this year's Globecomm industry-track demonstrations. Globecomm is being held this year in Austin, Texas.

The MobileCloud Networking (MCN) approach and architecture will be demonstrated, aiming to show new innovative revenue streams based on new service offerings and the optimisation of CAPEX/OPEX. MCN is based on a service-oriented architecture that delivers end-to-end, composed services using cloud computing and SDN technologies. This architecture is NFV-compatible but goes beyond NFV to bring new improvements. The demonstration includes real implementations of telco equipment as software on a cloud infrastructure, providing a relevant view of how the new virtualised environment will be implemented.

To take advantage of the technologies offered by cloud computing, today's communication networks have to be re-designed and adapted to the new paradigm, both by developing a comprehensive service enablement platform and through the appropriate softwarization of network components. Within the MobileCloud Networking project this new paradigm has been developed, and early results are already available to be exploited by the community. In particular, this demonstration aims at deploying a mobile core network on a cloud infrastructure and showing the automated, elastic and flexible mechanisms that such technologies offer for typical networking services. It shows how a mobile core network can be instantiated on demand on top of a standard cloud infrastructure, leveraging key technologies of OpenStack and OpenShift.


The scenario will be as follows:

  1. A tenant (Enterprise End User (EEU), in MCN terminology) – which may be an MVNO or an enterprise network – requests the instantiation of a mobile core network service instance via the dashboard of the MCN Service Manager, the service front-end where tenants can request the automated creation of a service instance via API or user interface. In particular, the deployment of this core network will be on top of a cloud hosted in Europe. At the end of the provisioning procedure, the mobile core network endpoints will be communicated to the EEU.
  2. The EEU will have the possibility to access the Web frontend of the Home Subscriber Server (HSS) and provision new subscribers. This subscriber information will also be used for configuring the client device (in our case a laptop).
  3. The client device will send attachment requests to the mobile core network and establish a connectivity service. Since the clients will be located in the USA at the time of the demonstration, the attachment request will be sent over a VPN connection to the eNodeB emulator. At the end of the attachment procedure all data traffic will be redirected to Europe. It will be possible to show that the public IPs assigned to the subscriber are part of the IP range of the European cloud testbed.
  4. The clients attached to the network will establish a call making use of the IP Multimedia Subsystem provided by the MVNO. During the call the MVNO administrator can open the Monitoring-as-a-Service tool provided by the MCN platform and check the current status of the services. For this, two IMS clients will be installed on the demonstration device.
  5. At the end of the demonstration it will be possible to show that the MVNO can dispose of the instantiated core network and release the resources which are no longer necessary. After this operation the MVNO will receive a bill indicating the costs of running such a virtualized core network.

It specifically includes:

  • An end-to-end Service Orchestrator, dynamically managing the deployment of a set of virtual networks and of a virtual telecom platform. The service is delivered from the radio head all the way through the core network to the delivery of IMS services. The orchestration framework is developed on an open source framework available under the Apache 2.0 license, to which the ICCLab actively contributes.
  • Interoperability is guaranteed throughout the stack through the adoption of telecommunication standards (3GPP, TM Forum) and cloud computing standards (OCCI).
  • A basic monitoring system providing momentary capacity figures and triggers for virtual network infrastructure adaptations. This will be part of the orchestrated composition.
  • An accounting and billing system providing cost and billing functions back to the tenant for the provisioned service instance. This will be part of the orchestrated composition.
  • A set of virtualised network functions:
    • a realistic implementation of a 3GPP IP Multimedia Subsystem (IMS) based on the open source OpenIMSCore,
    • a realistic implementation of a virtual 3GPP EPC based on the Fraunhofer FOKUS OpenEPC toolkit,
    • an LTE emulation based on the Fraunhofer FOKUS OpenEPC eNB implementation.
  • Demonstration of IMS call establishment across the provisioned on-demand virtualised network functions.

Nagios OpenStack Installer – Automated monitoring of your OpenStack VMs

There are many tools available which can be used to monitor the operation of the OpenStack infrastructure, but as an OpenStack user you might not be interested in monitoring OpenStack itself. Your primary interest is more likely the operation of the VMs that are hosted on OpenStack. Nagios OpenStack Installer is a tool for exactly that purpose: it deploys a Nagios VM inside the OpenStack environment and configures it to monitor all VMs that you own.

Nagios OpenStack Installer configures your OpenStack monitoring environment remotely from your desktop PC or laptop. In order to use Nagios OpenStack Installer you need to fulfil the following prerequisites:

  • You must have an SSH Key for securely accessing the Nagios VM and the VMs you own and you must know the SSH credentials to access the VMs.
  • You must know your OpenStack user account (name and id), your OpenStack password, the OpenStack Keystone authentication URL and the OpenStack tenant (“project”) (name and id) you work with.
  • You must be able to create a VM that serves as Nagios VM and you must own a publicly available IP (“floating IP”) to make the Nagios dashboard accessible to the outside world.
  • Nagios OpenStack Installer is a Python tool and requires some Python packages. Make sure to install Python 2.7 on your desktop. Additionally you need the following packages:
    • pip: the package manager to install Python packages from the PyPI repository (Windows users should refer to the pip developers' “get pip” manual to install pip; Cygwin users are recommended to follow the guidelines in the atbrox blog).
    • fabric: This package is used to access OpenStack VMs via SSH and remotely execute tasks on the VMs.
    • python-keystoneclient: To access the OpenStack Keystone API and authenticate to your OpenStack environment.
    • python-novaclient: To manage VMs which are hosted on OpenStack.
    • cuisine: a lightweight configuration management tool and an alternative to configuration managers like Puppet or Chef. cuisine is required to manage the packages and configuration files on the Nagios VM and the monitored VMs.
    • pickle: an object serialization module from the Python standard library that can store objects and their current state in a file dump. Object serialization is used to pass around the list of VMs which should be monitored.
    • We recommend using pip to install the required packages, since pip automatically installs package dependencies.
  • You must have Git downloaded and installed.

After having installed the prerequisites on your local PC or laptop, you can use Nagios OpenStack Installer by performing the following steps.

  1. Create a new directory and clone the Nagios OpenStack Installer GitHub repository into it: git clone https://github.com/icclab/kobe6661-nagios-openstack-installer.git
  2. Edit the credentials in install_autoconfig.py, remote.py, remote_server_config.py and vm_list_extractor.py to match your OpenStack and SSH credentials.
  3. Run remote_server_config.py from the Python console. This installs and configures the Nagios server on your Nagios VM. After installation you should be able to access the Nagios dashboard by pointing your web browser to “http://<your_nagios_public_ip>/nagios” and providing your Nagios login credentials.
  4. Run vm_list_extractor.py from the Python console. This will extract the list of VMs on OpenStack that should be monitored and save the list as a pickle file dump on your computer (see the sketch after this list for the idea behind this step).
  5. Run install_autoconfig.py from the Python console. This will upload the Python scripts required to automatically update the Nagios configuration when the OpenStack VM environment changes (nagios_config_updater.py, config_transporter.py, config_generator.py, vm_list_extractor.py). Additionally, it will run these Python scripts on the Nagios VM to let Nagios capture the VMs which should be monitored, install and run the required Nagios and NRPE plugins on these VMs, and reconfigure and restart the Nagios server to monitor these VMs remotely.
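
Conceptually, step 4 boils down to listing your servers through the compute API and serializing the result with pickle. The snippet below is only a sketch of that idea, not the actual vm_list_extractor.py: the novaclient constructor arguments and the dump file name are assumptions.

import pickle
from novaclient import client  # python-novaclient

def extract_vm_list(nova, dump_file="vm_list.pkl"):
    """Dump (name, networks) pairs of all VMs owned by the tenant to a pickle file."""
    vms = [(server.name, server.networks) for server in nova.servers.list()]
    with open(dump_file, "wb") as f:
        pickle.dump(vms, f)
    return vms

if __name__ == "__main__":
    # Constructor arguments depend on your python-novaclient version.
    nova = client.Client("2", "myuser", "mypassword", "mytenant",
                         "http://keystone.example.com:5000/v2.0")
    print(extract_vm_list(nova))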

Now the Nagios environment is installed and you should be able to monitor your VMs. Nagios OpenStack Installer is available in the ICCLab's GitHub repository. Feel free to try it out and give feedback about future improvements.

A Web Application to Monitor and Understand Energy Consumption in an OpenStack Cloud

In one of our projects we need to understand the energy consumption of our servers. Our initial work in this direction involved collecting energy consumption data using Kwapi and storing it in Ceilometer for further study. The data stored in Ceilometer is valuable; however, it is insufficient to really understand energy consumption in detail. Consequently, we are developing a web application which gives a much greater insight into energy consumption in our cloud resources. This is very much a work in progress, so this post just highlights a few points relating to the application as well as a video which shows the current version of the application.

The tool was developed to be fully integrated with OpenStack. Users log in with their OpenStack credentials (using Keystone authentication) and are redirected to the overview page, where they can see the total energy consumed by the VMs in their projects for the previous month as well as some general information regarding virtual machines; a line chart displays how the energy consumed varies over time.
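
To give an idea of the kind of query sitting behind such an overview page, here is a minimal sketch using python-ceilometerclient. It assumes the Kwapi measurements are exposed as a cumulative meter named “energy”; the meter name, the credentials and the aggregation are illustrative and not the application's actual code.

from ceilometerclient import client  # python-ceilometerclient

cclient = client.get_client(
    '2',
    os_username='myuser',
    os_password='mypassword',
    os_tenant_name='myproject',
    os_auth_url='http://keystone.example.com:5000/v2.0',
)

# For a cumulative meter, consumption over a window is the difference between
# the newest and oldest samples. A real application would also filter by
# resource_id and timestamp via the q parameter of samples.list().
samples = cclient.samples.list(meter_name='energy')
if samples:
    samples.sort(key=lambda s: s.timestamp)
    consumed = samples[-1].counter_volume - samples[0].counter_volume
    print("Energy consumed over the sampled window: %.2f %s"
          % (consumed, samples[-1].counter_unit))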


Profiling the Ceilometer API to Identify Performance Bottlenecks

We are using Ceilometer to collect energy data from our servers. As noted previously, we were having some performance issues and needed to investigate further. In this blog post we will cover our approach to profiling the Ceilometer API to determine where the problems arose.

Of course, the first step was to take a look at the log files (in /var/log/ceilometer-all.log); as there was nothing unusual in there, we decided to profile the code.
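
For readers who want to reproduce this kind of analysis, the standard library already provides the essentials. The snippet below is a generic illustration of the approach with cProfile; the profiled function is just a placeholder for the API code path under investigation.

import cProfile
import pstats

def handle_request():
    """Placeholder for the code path being investigated (e.g. an API handler)."""
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(20)  # top 20 calls by cumulative time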


Short video introduction to COSBench

COSBench is a tool developed by Intel for benchmarking cloud object storage services.

Here’s a brief video showing some functions of the web interface.

For more details, please refer to the COSBench user guide.

COSBench GitHub page

An overview of Load Balancing

With the advent of large scale architectures came the need to improve the distribution of requests in order to optimize the throughput of the system while keeping response times to a minimum. This is especially true for large web services. Load balancing is the ability to make many servers participate in the same service and perform the same tasks.

The goal of this post is to explain the different approaches to traditional load balancing and to list existing software. The last section covers the integration of these approaches in a cloud environment, as the large scale architectures described above may nowadays be entirely cloud-based. This post is not meant to be an exhaustive study of load balancing, as this is a mature topic with a lot of research and available products; rather, it is an introduction for someone who might need to use load balancing in a project and would like to know the basic types of load balancers as well as the most well-known products. To investigate further, a list of useful links is provided at the end of the post.

Load balancing is often confused with high availability: as the number of servers grows, the risk of a failure somewhere increases and must be addressed, and the ability to keep services unaffected during such failures is also part of a load balancer's job, redirecting requests to working resources.

The focus of this post will be on Load-Balancing HTTP applications, which is one of the most classic applications of load balancing.

Load balancing approaches

DNS-based

DNS load balancing is probably the easiest technique to implement. When a service is accessed through a hostname, a DNS server is tasked with translating that name into an IP address. Through this translation, the DNS server can select any node from the cluster it manages, based on its scheduling policy. It also provides a validity period (Time-To-Live) used to cache the translation; after the TTL expires, the next request is routed to the DNS server again. Round-robin is the simplest policy to implement: the addresses are returned by the server in rotating order.

Example of DNS load-balancing

host -t a www.google.com
www.google.com has address 173.194.40.52
www.google.com has address 173.194.40.49
www.google.com has address 173.194.40.48

Using a round-robin algorithm, each request is routed to one of these different IPs.
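
The same effect can be observed from the client side with a few lines of Python: the resolver returns several A records, and a naive client could rotate over them (which address actually gets used per request is ultimately up to the resolver and the client).

import itertools
import socket

hostname = "www.google.com"
_, _, addresses = socket.gethostbyname_ex(hostname)
print("A records:", addresses)

# Naive client-side round-robin over the returned addresses.
rotation = itertools.cycle(addresses)
for _ in range(5):
    print("next request goes to", next(rotation))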

Network-based

In this approach, the load balancing architecture consists of hardware or software equipment installed in a dedicated front-end server that works at the network packet level. This type of LB is also called a Layer 3/4 LB, distributing requests based upon data found in network and transport layer protocols such as IP, TCP or UDP. It acts on routing, using one of the following methods: Direct Routing (the LB routes the same service address through different local, physical servers on the same network segment), Tunneling (tunnels are established between the LB and the servers, so that they can be located on remote networks) or NAT (the user connects to a virtual destination address, which the load balancer translates into one of the servers' addresses).

Application-based

Application-level LBs, also called Layer 7 LBs, act as reverse proxies and distribute requests based upon data found in application layer protocols such as HTTP. They provide a first level of security by only forwarding what they understand. They can also be combined with the previous type of load balancer to ensure fine-grained request distribution.
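
To make the distinction with Layer 3/4 balancing concrete, here is a deliberately simplified sketch of a Layer 7 load balancer: an HTTP reverse proxy that looks at each request and forwards it to one of several backends in round-robin fashion. The backend addresses are placeholders and the code is an illustration, not production software (no health checks, no header rewriting).

import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = ["http://127.0.0.1:8081", "http://127.0.0.1:8082"]  # placeholder backends
backend_cycle = itertools.cycle(BACKENDS)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(backend_cycle)  # round-robin choice
        try:
            with urllib.request.urlopen(backend + self.path) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except Exception:
            self.send_error(502, "Bad gateway")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()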

Example of an architecture using both Layer 4 and Layer 7 LB.


Current offering

Historically, most of the offerings in the load balancing sector have come from major hardware network vendors such as F5 (with its BIG-IP products) and Juniper, but recently software load balancers have been increasingly used, especially in cloud environments where the network might be virtual. As the number of existing load balancers is huge, we chose to focus on a handful of them, especially those released under an open source license.

Layer-4 capable software LB

IP Virtual Server
IPVS is built into the Linux kernel and thus does not suffer from the context switching between user space and kernel space which introduces delays, especially under heavy traffic with many short-lived connections.

HAProxy
HAProxy is a hybrid load balancer capable of both Layer 4 (TCP) and Layer 7 (HTTP) load balancing. It implements an event-driven, single-process model which enables support for a very high number of simultaneous connections. The idea behind this choice, which dates back to the early versions of the tool, is that because of memory limits, system scheduler limits and lock contention, multi-process/multi-threaded models are not able to cope with thousands of simultaneous connections. Since version 1.5 it supports SSL connections.

Layer-7 capable software LB

nginx
Primarily built as a lightweight HTTP server, nginx also serves quite well as an HTTP(S) load balancer. Of the listed options, nginx provides the largest number of features, including many options for caching and file serving.

Apache
Through the mod_proxy_balancer module, available since Apache 2.1, Apache can be used as an HTTP load balancer, retrieving requested pages from two or more backend web servers and delivering them to users, while keeping track of sessions, which allows a single user to always deal with the same backend web server.

Pound
Sitting between nginx and HAProxy in scope, Pound is a lightweight HTTP-only load balancer. It offers many of the load balancing features of nginx without any of the web server capabilities and can thus be used behind any web server. This keeps Pound small and efficient.

Varnish
Although primarily used as a reverse proxy cache, Varnish also includes functionality to act as a load balancer. It does not offer a great deal of configuration, but, if already using Varnish for caching, it is possible to also make use of its load balancing abilities to simplify an architecture and avoid using too many different components.

Load-Balancing in the Cloud

Many of the Infrastructure-as-a-Service management suites provide their own component dedicated to load balancing, among them Apache CloudStack and OpenStack. This component is in fact a connector between the virtual instances and a real load balancer such as the ones described in the previous section; for instance, OpenStack Neutron load balancing works together with HAProxy. Cloud providers such as Amazon also provide their own LB services. The common point of all these LBs is that they work “as a service”: a tenant can dynamically add an LB to a set of virtual servers to optimize request routing.
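
As an illustration of this “as a service” model, the following sketch shows how a tenant could set up a simple HTTP load balancer through the Neutron LBaaS (v1) extension using python-neutronclient. All names, addresses and IDs are placeholders, and whether the extension is available at all depends on the deployment; treat this as an assumption-laden example rather than a recipe.

from neutronclient.v2_0 import client

SUBNET_ID = 'REPLACE-WITH-SUBNET-UUID'  # subnet hosting the backend servers

neutron = client.Client(username='myuser', password='mypassword',
                        tenant_name='myproject',
                        auth_url='http://keystone.example.com:5000/v2.0')

# Create a round-robin HTTP pool and add two backend web servers to it.
pool = neutron.create_pool({'pool': {
    'name': 'web-pool', 'protocol': 'HTTP',
    'lb_method': 'ROUND_ROBIN', 'subnet_id': SUBNET_ID}})['pool']

for address in ('10.0.0.11', '10.0.0.12'):  # placeholder backend addresses
    neutron.create_member({'member': {
        'pool_id': pool['id'], 'address': address, 'protocol_port': 80}})

# Expose the pool through a virtual IP that clients connect to.
vip = neutron.create_vip({'vip': {
    'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
    'pool_id': pool['id'], 'subnet_id': SUBNET_ID}})['vip']
print("Load balancer reachable at", vip['address'])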

Useful links

http://louwrentius.com/overview-of-open-source-load-balancers.html
http://1wt.eu/articles/2006_lb/
http://huanliu.wordpress.com/2010/06/02/how-to-choose-a-load-balancer-for-the-cloud/
http://kaivanov.blogspot.ch/2013/01/building-load-balancer-with-lvs-linux.html
