Month: September 2014 (page 1 of 2)

Performance of Live Migration in OpenStack under CPU and network load

Previously, we analyzed the performance of virtual machine (VM) live migration in different scenarios under OpenStack Icehouse. Until now, all our experiments were performed on essentially unloaded servers – which clearly limits how widely the results apply. Here, we analyze how adding load to the physical hosts and the network impacts the behaviour of both block live migration (BLM) and live migration (LM). (Note that the main difference is that BLM migrates the VM disk over the network, while LM relies on shared storage between the source and destination hosts, so the disk is not migrated at all.) Continue reading

An analysis of the performance of live migration in OpenStack

We continue our recent work on analyzing the performance of live migration in OpenStack Icehouse. Our previous results focused on block live migration in OpenStack, without shared storage configured between compute nodes. In this post we focus on the performance of live migration with a shared file system configured, compare it with block live migration and try to determine which scenarios are better suited to each approach. Continue reading

Announcing 2nd CloudFoundry UserGroup DACH Meetup

link to meetup

Join us at the second CloudFoundry User Group DACH Meetup on September 22nd in Zürich.

This time the focus is on CloudFoundry in general. Learn about the basic functionality in the CloudFoundry 101 session and pick up some lessons learned from Klimpr, a startup which switched its application from Heroku to CloudFoundry.

Date & Time
Monday, September 22nd 2014, 18:00 CEST

Place
ZHAW School of Engineering, Lagerstrasse 41, 8021 Zürich,
Room ZL O6.10 (6th floor)

Agenda
18:00 – 18:15 > Welcome
18:15 – 19:15 > CloudFoundry 101
19:15 – 19:30 > Lessons learned from Klimpr
19:30 – 21:00 > Q&A with drink and sandwiches/pizza

Please register here

How to set up a standalone Swift installation in a VM and test it

In this tutorial we will be speaking about a “Swiftbox”. This is nothing more than our term for an OpenStack installation that only needs and uses Swift and Keystone. The setup and use of this Swiftbox are explained in this article.

The reason someone might want a stripped-down OpenStack installation with only Swift and Keystone running is that it makes testing Swift services easy. A Swiftbox can be used to understand how object storage works. Having an independent object store is also a handy perk: various different projects can be tested or run against a single Swiftbox configured once for everything.

A use case for this standalone Swift installation is to provide an isolated, potentially local, environment for testing applications that need or want to use Swift as their object storage backend. This can prove useful for experimenting with the technology as well as for debugging or exercising existing applications.

Given the simplified nature of the Swift installation described here (everything runs inside a VM and not over a cluster of nodes!), this procedure should not be used to run Swift for anything other than a stub backend for a testing environment.

The main steps to set up a Swiftbox are:

  • Creating and configuring a VM (we will use Vagrant and run an Ubuntu Server 14.04 box)
  • Configuring DevStack (a rough configuration and test sketch follows this list)
  • Configuring a Keystone endpoint for Swift
  • Testing Swift
  • Some troubleshooting
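
As a rough sketch of the DevStack configuration and testing steps, an Icehouse-era localrc limited to Swift and Keystone might look like this (all values below are placeholders):

ENABLED_SERVICES=key,mysql,s-proxy,s-object,s-container,s-account
SWIFT_HASH=<random-string>
SWIFT_REPLICAS=1
ADMIN_PASSWORD=<password>

Once DevStack is up and the Keystone credentials are exported, Swift can be exercised with the standard client:

$ swift stat
$ swift upload test-container some-file.txt
$ swift list test-container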

Continue reading

Main features of Hypervisors reviewed

We prepared this blog post to help students understand the hypervisor support matrix published by OpenStack. The relevant information is spread across different sources, many of them belonging to the Red Hat Linux and OpenStack documentation. We have tried to provide a more general explanation where possible and to reference other relevant sources.

The command syntax for each feature is of course different for different cloud platforms. A complete treatment is not provided here yet; we will provide it in an update, along with covering any comments received. Where the syntax is well established, illustrative OpenStack (nova) command-line examples are sketched below.

Feature:
Launch (boot) – Command to launch an instance, specifying the server name, flavor ID (small, large, …) and image ID.
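For illustration, with the OpenStack nova command-line client of that era a launch might look roughly like this (flavor and image below are placeholders):

$ nova boot --flavor m1.small --image <image-id-or-name> my-instance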

Reboot – Soft- or hard-reboot a running instance. A soft-reboot attempts a graceful shut down and restart of the instance. A hard-reboot power cycles the instance. By default, when you reboot a server, it is a soft-reboot.
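With the nova client this is, roughly:

$ nova reboot my-instance
$ nova reboot --hard my-instance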

Terminate – When an instance is no longer needed, use the terminate or delete command to terminate it. You can use the instance name or the ID string.
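For example:

$ nova delete my-instance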

Resize – If the size of a virtual machine needs to be changed, such as adding more memory or cores, this can be done using the resize operation. Using resize, you select a new flavor for your virtual machine and instruct the cloud to adjust the configuration to match the new size. The operation reboots the virtual machine and takes several minutes of downtime. The network configuration is maintained, but connectivity is lost during the reboot, so this operation should be scheduled, as it will lead to application downtime.
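With the nova client the flow is roughly as follows (the new flavor name is a placeholder); the resize must be confirmed (or reverted) afterwards:

$ nova resize my-instance m1.large
$ nova resize-confirm my-instance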

Rescue – An instance’s filesystem could become corrupted with prolonged usage. Rescue mode provides a mechanism for access even when the VM’s image renders the instance inaccessible: it is possible to reboot a virtual machine in rescue mode, which launches a rescue VM that allows the user to fix their VM (by logging in with a newly set root password).
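For example:

$ nova rescue my-instance
$ nova unrescue my-instance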

Pause / Un-pause – This command stores the state of the VM in RAM. A paused instance continues to run in a frozen state.
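For example:

$ nova pause my-instance
$ nova unpause my-instance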

Suspend / Resume – Administrative users might want to suspend / resume an instance if it is infrequently used or to perform system maintenance. When you suspend an instance, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending an instance is similar to placing a device in hibernation; memory and vCPUs become available to create other instances.
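For example:

$ nova suspend my-instance
$ nova resume my-instance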

Inject Networking – Allows setting up a private network between two or more virtual machines. This network is not visible from the other virtual machines or from the physical network.

Inject File – A feature that allows files to be injected into the instance during boot. Normally the target is the root partition of the guest image. Sub-features enable further functionality to inspect arbitrarily complex guest images and find the root partition to inject into.
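With the nova client a file can be injected at boot time roughly like this (paths are placeholders):

$ nova boot --flavor m1.small --image <image> --file /root/motd=./motd my-instance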

Serial Console Output – It is possible to access the VM directly using the TTY serial console interface, in which case setting up bridged networking, SSH and similar is not necessary.
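The console output of an instance can be retrieved, for example, with:

$ nova console-log my-instance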

VNC Console – VNC (Virtual Network Computing) is remote-control software based on server agents installed on the hypervisor. This feature indicates VNC support for the hypervisor and its VMs.
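For example, a URL for the browser-based noVNC console can be requested with:

$ nova get-vnc-console my-instance novnc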

SPICE Console – Red Hat introduced the SPICE remote computing protocol, which is used for SPICE client-server communication. Other components developed include the QXL display device and driver, which handle interaction with virtualized desktop devices. The SPICE project covers both the virtualized devices and the front-end: the SPICE server must be enabled in QEMU, and a client is needed to view the guest.
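Similarly, a SPICE console URL can be requested with:

$ nova get-spice-console my-instance spice-html5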

RDP Console – Allows connecting to the hypervisor and VMs via a Remote Desktop Protocol based console.

Attach / Detach Volume – Allows adding / removing a volume in the volume pool, as well as attaching / detaching extra volumes to / from existing running VMs.
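For example (the volume ID and device name are placeholders):

$ nova volume-attach my-instance <volume-id> /dev/vdb
$ nova volume-detach my-instance <volume-id>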

Live Migration – Migration describes the process of moving a guest virtual machine from one host physical machine to another. This is possible because guest virtual machines are running in a virtualized environment instead of directly on the hardware. In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine.
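With the nova client, a live migration (and, with the extra flag, a block live migration) is triggered roughly as follows (the target host name is a placeholder):

$ nova live-migration my-instance <target-host>
$ nova live-migration --block-migrate my-instance <target-host>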

Snapshot – A snapshot creates a coherent copy of a number of block devices at a given time. A live snapshot is one taken while the virtual machine is running, which makes it ideal for live backups of guests without guest intervention.
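A snapshot of a running instance can be taken, for example, with:

$ nova image-create my-instance my-instance-snapshot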

iSCSI – iSCSI (Internet Small Computer System Interface) is an Internet Protocol (IP) based storage networking standard for linking data storage facilities. This feature of a hypervisor means that you can add iSCSI-based disks to the storage pool.

iSCSI CHAP – Challenge Handshake Authentication Protocol (CHAP) is a network login protocol that uses a challenge-response mechanism. You can use CHAP authentication to restrict iSCSI access to volumes and snapshots to hosts that supply the correct account name and password (or “secret”) combination. Using CHAP authentication can facilitate the management of access controls because it restricts access through account names and passwords, instead of IP addresses or iSCSI initiator names.

Fibre Channel – This feature indicates that the hypervisor supports optical fibre connectivity, in particular for Fibre Channel storage networks, which are cabled and configured with the appropriate Fibre Channel switches. This has implications for how the zones are configured. For example, KVM virtualization with VMControl supports only SAN storage over Fibre Channel. Typically, one of the fabric switches is configured with the zoning information. Additionally, VMControl requires that the Fibre Channel network has hard zoning enabled.

Set Admin Pass – This feature is the use of a guest agent to change the administrative (root) password on an instance.
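With the nova client of that era this might look like the following (the command prompts for the new password):

$ nova root-password my-instance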

Get Guest Info – Retrieves information about a guest machine of the hypervisor; this information can also be retrieved from within the VM. Hypervisors can handle several guest machines, each of which is a resource configuration assigned by the virtualisation environment.

Get Host Info – To get information about the node which is hosting the VMs.
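For illustration, guest and host information can be queried roughly as follows (the hypervisor name is a placeholder):

$ nova show my-instance
$ nova diagnostics my-instance
$ nova hypervisor-show <hypervisor-hostname>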

Glance Integration – Glance is the image storage system used to store VM images. This feature indicates that the hypervisor integrates with the Glance image service.
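For example, an image can be registered in Glance and listed roughly like this (the file name is a placeholder, using the Glance v1 client of that era):

$ glance image-create --name my-image --disk-format qcow2 --container-format bare --file my-image.qcow2
$ glance image-list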

Service Control – The hypervisor / compute layer is a collection of services that enable you to launch virtual machine instances. You can configure these services to run on separate nodes or on the same node. Most services run on the controller node, while the service that launches virtual machines runs on a dedicated compute node. This feature also allows installing and configuring these services on the controller node.
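For example, compute services can be listed and individually disabled or re-enabled (the host name is a placeholder):

$ nova service-list
$ nova service-disable <host> nova-compute
$ nova service-enable <host> nova-compute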

VLAN Networking – It indicates that it is possible to pass VLAN traffic from a virtual machine out to the wider network.

Flat Networking – Flat networking uses Ethernet adapters configured as bridges to allow network traffic to transit between all the various nodes. This setup can be done with a single adapter on the physical host, or with multiple ones. Unlike VLAN networking, this option does not require a switch that does VLAN tagging, and it is a common development or proof-of-concept setup. For example, when you choose flat networking, Nova does not manage networking at all. Instead, IP addresses are injected into the instance via the file system (or passed in via a guest agent). Metadata forwarding must be configured manually on the gateway if it is required within your network.
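As a rough illustration with legacy nova-network, the networking mode (flat or VLAN) is selected in nova.conf (the values below are examples only):

# nova.conf
network_manager = nova.network.manager.FlatDHCPManager   # flat (DHCP) networking
# network_manager = nova.network.manager.VlanManager     # VLAN networking
flat_network_bridge = br100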

Security Groups – This is a feature of the hypervisor (compute). The cloud Networking service offers a similar feature using a mechanism that is more flexible and powerful than the built-in security group capabilities. In that case the built-in feature should be disabled and all security group calls proxied to the Networking API. If you do not, security policies will conflict by being simultaneously applied by both services.
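For example, a security group with an SSH rule can be created with the nova client roughly like this:

$ nova secgroup-create my-secgroup "allow ssh"
$ nova secgroup-add-rule my-secgroup tcp 22 22 0.0.0.0/0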

Firewall Rules – Allows service providers to apply firewall rules at a level above security group rules.

Routing – The feature of the hypervisor to map internal addresses to external public addresses. The network part of the hypervisor essentially functions as an L2 switch with routing.

Config Drive / Auto configure disk – A config drive attaches configuration data to the instance at boot time; auto configure disk automatically resizes the root partition to match the size of the flavor’s root drive before booting.
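For example, an instance can be booted with a config drive attached:

$ nova boot --flavor m1.small --image <image> --config-drive true my-instance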

Evacuate – As a cloud administrator, you may get to the point where one of the compute nodes fails, for example due to a hardware malfunction. At that point you can use server evacuation to make the managed instances available again.
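For example, an instance from a failed host can be rebuilt on another host roughly as follows (the target host is a placeholder; the flag assumes shared storage):

$ nova evacuate --on-shared-storage my-instance <target-host>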

Volume swap – The hypervisor supports the definition of a swap volume (disk) to be utilised as additional virtual memory.

Volume rate limiting – Rate limiting (per day, hour, …) for volume access. It is used to enable rate limiting for all back-ends regardless of the built-in feature set of each back-end.


The SOUL-FI SME Event – FIWARE Acceleration

The SOUL-FI FIWARE Accelerator is one of 16 incubators that joined the FI-PPP with the beginning of the 3rd and most important phase of the Future Internet innovation program.

The mission of these 16 accelerators is to accelerate the uptake of the FIWARE technology foundation, which has been built in the previous two phases and has now reached a level of maturity that qualifies it for competing in the innovation wild. Consequently, the correct name of these 16 incubators is FIWARE Accelerators.

Continue reading

An analysis of the performance of block live migration in OpenStack

Since our servers have been set up for live migration with OpenStack Icehouse, we wondered how live migration would perform. We measured the duration of the migration process, the VM downtime and the amount of data transferred over the Ethernet during a live migration. All tests were performed across 5 different VM flavors to examine the impact of the flavor. Another point we were curious about is how a higher memory load in the VMs impacts migration performance. Here, we present the results of our experiments, which show how live migration works in these different scenarios.

Continue reading

XIFI end user survey

We are conducting research in order to find out which features of the XIFI platform are most important to end users. The results will be used in order to improve the platform. If you are an application developer interested in XIFI, please feel free to participate in the survey which can be found following this link:

https://www.surveymonkey.com/s/7GN2BGY

Windows image for OpenStack

In this article I will show how to install a Windows 7 64-bit image on OpenStack. For other versions of Windows, watch out for the corresponding notes in the article.

Prepare the Installation

Ubuntu, QEMU and KVM

Creating a Windows image from scratch is best done using a Linux distribution for the installation process. To create the image we will use QEMU and KVM, which together form a full virtualization solution for Linux. On Ubuntu you can install KVM using the following commands in the shell:

$ sudo apt-get update
$ sudo apt-get install qemu-system-x86
$ sudo apt-get install qemu-kvm
$ sudo apt-get install virt-manager
$ sudo apt-get install libvirt-bin libvirt-doc
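
As a rough sketch (file names, sizes and the virtio driver ISO are placeholders), the image disk can then be created and the Windows installer booted with KVM along these lines:

$ qemu-img create -f qcow2 windows7.qcow2 20G
$ sudo qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -cdrom windows7-install.iso \
    -drive file=virtio-win.iso,media=cdrom \
    -drive file=windows7.qcow2,if=virtio \
    -net nic,model=virtio -net user \
    -vnc :1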

If you are working on a Windows or Mac machine you cannot use a Linux VM to do the installation, because the VM will not be able to use the hardware virtualization extensions (Intel VT or AMD-V).

Continue reading

Distributed Computing in the Cloud

by Josef Spillner

Description

The widespread adoption and the development of cloud platforms have increased confidence in migrating key business applications to the cloud. New approaches to distributed computing and data analysis have also emerged in conjunction with the growth of cloud computing. Among them, MapReduce and its implementations are probably the most popular and commonly used for data processing on clouds.

Efficient support for distributed computing on cloud platforms means guaranteeing high speed and ultra-low latency to enable massive amounts of uninterrupted data ingestion and real-time analysis, as well as cost-efficiency-at-scale.

Problem Statement

Currently, there are few offerings of on-demand distributed computing tools. The main challenge, which applies not only to cloud environments, is to build a framework that handles both big data and fast data. This means that the framework must provide both batch and stream processing, while allowing clients to transparently define their computations and query the results in real time. Provisioning such a framework on cloud platforms requires rapid provisioning and maximal performance. Challenges also come from one of the cloud’s most appealing features: elasticity and auto-scaling. Distributed computing frameworks can greatly benefit from auto-scaling, but current solutions do not support it yet.

Articles and Info

Contact Point

Piyush Harsh

Balazs Meszaros
