Category: Open Source (page 3 of 4)

Nagios / Ceilometer integration: new plugin available

The well-known Nagios open source monitoring system has become a de facto standard in recent years. Unlike commercial monitoring solutions, Nagios does not come as a one-size-fits-all system with thousands of monitoring agents and functions. It is rather a small, lightweight system reduced to the bare essentials of monitoring: an event management and notification engine. Because it is so lean and flexible, Nagios must be extended in order to become a solution that is valuable for your organization, and plugins are the most important part of setting up a Nagios environment. Though Nagios is extremely customizable, there are no plugins that capture OpenStack-specific metrics like the number of floating IPs or the network packets entering a virtual machine (even if there are some Nagios plugins to check that OpenStack services are up and running).

Ceilometer is the OpenStack component that captures these metrics. It measures typical performance indices like CPU utilization, memory allocation and disk space used for all VM instances within OpenStack. When an OpenStack environment has to be metered and monitored, Ceilometer is the right tool for the job. But although Ceilometer is a powerful and flexible metering tool for OpenStack, it lacks the capability to visualize the collected data.

Nagios and Ceilometer are therefore complementary products which can be combined into an integrated solution. However, there are no Nagios plugins that integrate the Ceilometer API with the Nagios monitoring environment (though Enovance has developed plugins to check that OpenStack components are alive) and thereby allow Nagios to monitor not only the OpenStack components, but also all the hosted VMs and other services.

The ICCLab has developed a Nagios plugin which captures metrics through the Ceilometer API. The plugin is available for download on GitHub. The Ceilometer call plugin queries a Ceilometer meter and lets you define thresholds that feed into the Nagios alerting system.

In order to use the plugin, simply copy it into your Nagios plugins folder (e.g. /usr/lib/nagios/plugins/) and define a Nagios command in your commands.cfg file (in /etc/nagios/objects/commands.cfg). Don’t forget to make the plugin executable (chmod u+x) so that Nagios can run it.

A command to monitor the CPU utilization could look like this:

define command {
    command_name    check_ceilometer-cpu-util
    command_line    /usr/lib/nagios/plugins/ceilometer-call -s "cpu_util" -t 50.0 -T 80.0
}

Then you have to define a service that uses this command:

define service {
    check_command           check_ceilometer-cpu-util
    host_name
    normal_check_interval   1
    service_description     OpenStack instances CPU utilization
    use                     generic-service
}

Now Nagios can employ the Ceilometer API to monitor VMs inside OpenStack.
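To illustrate what such a plugin does under the hood, here is a minimal sketch (not the ICCLab plugin itself) that queries the documented Ceilometer v2 statistics resource with the Python requests library and maps the result to Nagios exit codes. The CEILOMETER_URL and OS_AUTH_TOKEN environment variables are assumptions made for this example.

#!/usr/bin/env python
# Minimal sketch of a Nagios-style check against the Ceilometer v2 API.
# Assumptions: OS_AUTH_TOKEN holds a valid Keystone token and
# CEILOMETER_URL points at the Ceilometer endpoint (e.g. http://host:8777).
import os
import sys
import requests

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_meter(meter, warn, crit):
    url = "%s/v2/meters/%s/statistics" % (os.environ["CEILOMETER_URL"], meter)
    headers = {"X-Auth-Token": os.environ["OS_AUTH_TOKEN"]}
    try:
        stats = requests.get(url, headers=headers).json()
    except Exception as exc:
        print("UNKNOWN - cannot query Ceilometer: %s" % exc)
        return UNKNOWN
    if not stats:
        print("UNKNOWN - no samples for meter %s" % meter)
        return UNKNOWN
    value = stats[0]["avg"]          # average over the default period
    if value >= crit:
        print("CRITICAL - %s = %.2f" % (meter, value))
        return CRITICAL
    if value >= warn:
        print("WARNING - %s = %.2f" % (meter, value))
        return WARNING
    print("OK - %s = %.2f" % (meter, value))
    return OK

if __name__ == "__main__":
    # mirrors the -s/-t/-T options used above: meter, warning and critical thresholds
    sys.exit(check_meter("cpu_util", 50.0, 80.0))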

SmartOS Series: Storage

In the third part of the SmartOS Series (check the first two parts: Intro and Virtualisation) we are going to take a deep dive into SmartOS storage: ZFS.

ZFS is a combination of file system and logical volume manager originally introduced in Solaris 10. ZFS includes many features that are key for a Cloud OS such as SmartOS:

  • Data integrity. ZFS is designed with a focus on data integrity, with particular emphasis on preventing silent data corruption. Data integrity in ZFS is achieved using a checksum or a hash throughout the whole file system tree. Checksums are stored in the pointer to a block rather than in the block itself; the checksum of that block pointer is stored in its own pointer, and so on all the way up to the root node. When a block is accessed, its checksum is calculated and compared to the one stored in the pointer to the block. If the checksums match, the data is passed to the process requesting it; if they do not match, ZFS retrieves one of the redundant copies and tries to heal the damaged block.
  • Snapshots. ZFS uses a copy-on-write transactional model in which, when new data is written, the block containing the old data is retained, making it possible to create a snapshot of a file system. Snapshots are both quick to create and space efficient, since the data is already stored and unmodified blocks are shared with the file system. ZFS also allows snapshots to be moved between pools, either on the same host or on a remote host: the zfs send command creates a stream representation of either the entire snapshot or just the blocks that have been modified, providing an efficient and fast way to back up snapshots.
  • Clones. Snapshots can be cloned, creating an identical writable copy. Clones share blocks and, as changes are made to any of the clones, new data blocks are created following the copy-on-write principle. Given that clones initially share all their blocks, cloning is instantaneous and doesn’t consume additional disk space. This means that SmartOS is able to quickly create new, almost identical virtual machines (see the sketch after this list).
  • Adaptive Replacement Cache (ARC). ZFS makes use of different levels of disk caches to improve file system and disk performance and to reduce latency. Data is cached in a hierarchical manner: RAM for frequently accessed data, SSD disks for less frequently accessed data. The first level of disk cache (RAM) is always used for caching and uses a variant of the ARC algorithm; it plays a role similar to a level 1 CPU cache. The second level (SSD disks) is optional and is split into two different caches, one for reads and one for writes. Populating the read cache, called L2ARC, can take some hours and, if the L2ARC SSD disk is lost, data will still be safely written to disk, i.e. no data will be lost. The write cache, called the log device, stores small synchronous writes and flushes them to disk periodically.
  • Fast file system creation. In ZFS, creating or modifying a filesystem within a storage pool is much easier than creating or modifying a volume in a traditional filesystem. This also means that creating a new virtual machine in SmartOS, i.e. adding a new customer for a cloud provider, is extremely fast.
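To make the snapshot and clone workflow concrete, the following sketch drives the standard zfs command-line tool from Python; the pool/dataset names and the snapshot label are placeholders chosen for illustration.

import subprocess

def zfs(*args):
    # Run a zfs subcommand and raise if it fails.
    subprocess.check_call(["zfs"] + list(args))

# Snapshot an existing dataset (copy-on-write, nearly instantaneous).
zfs("snapshot", "zones/myfs@backup-2013-08-01")

# Clone the snapshot into a new, writable dataset that initially
# shares all of its blocks with the original.
zfs("clone", "zones/myfs@backup-2013-08-01", "zones/myfs-clone")

# Stream the snapshot to a file (or pipe it over SSH to a remote pool).
with open("/var/tmp/myfs.zstream", "wb") as out:
    subprocess.check_call(["zfs", "send", "zones/myfs@backup-2013-08-01"],
                          stdout=out)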

Another very interesting feature added by Joyent to SmartOS with regard to storage is disk I/O throttling. In Solaris, a zone or an application could potentially monopolize access to the local storage, almost blocking other zones and applications from accessing it and causing performance degradation. With disk I/O throttling in place, all zones are guaranteed a fair amount of access to disk. If a zone requests access to disk beyond its limits, each I/O call gets delayed by up to 100 microseconds, leaving enough time for the system to process I/O requests from other zones. Disk I/O throttling only comes into effect when the system is under heavy load from multiple tenants; during quiet times tenants are able to enjoy faster I/O.

The main ZFS components in SmartOS are pools and datasets:

  • Pools are an aggregation of virtual devices (vdevs), which in turn can be constructed of different block devices, such as files, partitions or entire disks. Block devices within a vdev can be configured with different RAID levels, depending on redundancy needs. Pools can be composed of a mix of different types of block devices, e.g. partitions and disks, and their size can be flexibly extended simply by adding a new vdev to the pool.
  • Datasets are a tree of blocks within a pool, presented either as a filesystem, i.e. a file interface, or as a volume, i.e. a block interface. Datasets can be easily resized and volumes can be thinly provisioned. Both zones and KVM instances make use of ZFS datasets: zones use ZFS filesystems while KVM instances use ZFS volumes (see the sketch after this list).
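As a minimal illustration of pools and datasets, the sketch below creates a mirrored pool, a filesystem dataset and a thinly provisioned volume with the standard zpool/zfs tools; the disk names, dataset names and sizes are placeholders.

import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

# Create a mirrored pool out of two whole disks (placeholder device names).
run("zpool", "create", "tank", "mirror", "c0t0d0", "c0t1d0")

# A filesystem dataset: file interface, as used by SmartOS zones.
run("zfs", "create", "tank/webzone")
run("zfs", "set", "quota=10G", "tank/webzone")

# A volume: block interface, thinly provisioned (-s), as used by KVM guests.
run("zfs", "create", "-s", "-V", "20G", "tank/kvm-disk0")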

In the next part of the SmartOS Series we’ll explore DTrace, a performance analysis and troubleshooting tool.

SmartOS Series: Virtualisation

Last week we started a new blog post series on SmartOS. Today we continue the series and explore in detail the virtualisation aspects of SmartOS.

SmartOS offers two types of virtualisation: the Solaris-inherited container-based OS virtualisation, i.e. zones, and the hosted virtualisation ported to SmartOS by Joyent, KVM.

Containers are a combination of resource controls and Solaris zones, i.e. completely isolated virtual environments, and provide an efficient virtualisation solution with a complete and secure user-space environment on a single global kernel. SmartOS uses sparse zones, meaning that only a portion of the file system is replicated in each zone, while the rest of the file system and other resources, e.g. packages, are shared across all zones. This limits the duplication of resources, provides a very lightweight virtualisation layer and makes OS upgrading and patching very easy. Given that no hardware emulation is involved and that guest applications talk directly to the native kernel, container-based virtualisation delivers close-to-native performance.


SmartOS container-based virtualisation (Source: wiki.smartos.org)

SmartOS provides two resource control methods: the fair share scheduler and CPU capping. With the fair share scheduler, a system administrator can define a minimum guaranteed share of CPU for a zone; this guarantees that, when the system is busy, all zones will get their fair share of CPU. CPU capping sets an upper limit on the amount of CPU that a zone will get. Joyent also introduced a CPU bursting feature that lets system administrators define a base level of CPU usage and an upper limit, and also specify how long a zone is allowed to burst, making it possible for the zone to get more resources when required.
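As an illustration only, the sketch below applies CPU shares and a CPU cap to an existing zone through vmadm, whose update subcommand reads a JSON payload from stdin; cpu_shares and cpu_cap are property names documented for vmadm, while the zone UUID and the values are placeholders.

import json
import subprocess

# Placeholder zone UUID; cpu_shares and cpu_cap are documented vmadm properties.
zone_uuid = "00000000-0000-0000-0000-000000000000"
update = {
    "cpu_shares": 100,   # relative weight used by the fair share scheduler
    "cpu_cap": 200       # hard upper limit: 200 = two full CPUs
}

# vmadm update reads the JSON payload from stdin.
proc = subprocess.Popen(["vmadm", "update", zone_uuid], stdin=subprocess.PIPE)
proc.communicate(json.dumps(update).encode())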

SmartOS already offers a wide set of features, but to make it a true Cloud OS one important feature was missing: hosted virtualisation. Joyent bridged this gap by porting one of the best hosted virtualisation platforms, KVM, to SmartOS. KVM on SmartOS is only available on Intel processors with VT-x and EPT (Extended Page Tables) enabled and only supports x86 and x86-64 guests. Nonetheless, this still gives the capability to run unmodified Linux or Windows guests on top of SmartOS.

In hosted virtualisation, hardware is emulated and exposed to the virtual machines; in SmartOS, KVM doesn’t emulate hardware itself, but exposes an interface that is then used by QEMU (Quick Emulator). When the guest’s emulated architecture is the same as the host architecture, QEMU can make use of KVM acceleration to increase performance.


SmartOS KVM virtualisation (Source: wiki.smartos.org)

KVM virtual machines on SmartOS still run inside a zone, therefore combining the benefits of container-based virtualisation with the power of hosted virtualisation, with QEMU as the only process running in the zone.

In the next part of the SmartOS Series we will look into ZFS, SmartOS’s powerful storage component.

Distributed File System Series: GlusterFS Introduction

In the first part of the Distributed File Systems Series we gave an introduction to Ceph. Today we will explore another file system: GlusterFS.

GlusterFS is a clustered network filesystem that uses FUSE and aggregates different storage devices, called bricks in GlusterFS terminology, into a single storage pool. The storage in each brick is formatted using a local file system, e.g. XFS or ext4, and then exposed to GlusterFS for storing data. The user-space approach gives GlusterFS flexible release cycles and ease of use without taking a toll on file system performance.

Two of the main concepts of GlusterFS are volumes and translators.

A volume is a collection of one or more bricks. There are three different types of volumes in GlusterFS, which differ in how the volume stores the data in the bricks (a sketch of the corresponding gluster commands follows the list):

  • Distribute Volume. In this type of volume all the data is distributed throughout all the bricks. The data distribution is based on an algorithm that takes into account the size available in each brick. This is the default volume type.
  • Replicate Volume. In a replicate volume the data is duplicated, hence the name, across every brick in the volume. The number of bricks must be a multiple of the replica count.
  • Stripe Volume. In a stripe volume the data is striped into units of a given size among the bricks. The default unit size is 128KB and the number of bricks should be a multiple of the stripe count.
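As a rough illustration of the three volume types, the sketch below drives the gluster command line from Python; the server names and brick paths are placeholders.

import subprocess

def gluster(*args):
    subprocess.check_call(["gluster", "volume"] + list(args))

# Distribute volume (default): files are spread across all bricks.
gluster("create", "dist-vol",
        "server1:/export/brick1", "server2:/export/brick1")

# Replicate volume: every file is stored on each of the two bricks.
gluster("create", "repl-vol", "replica", "2",
        "server1:/export/brick2", "server2:/export/brick2")

# Stripe volume: each file is cut into fixed-size units spread over the bricks.
gluster("create", "stripe-vol", "stripe", "2",
        "server1:/export/brick3", "server2:/export/brick3")

gluster("start", "dist-vol")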

A translator is a GlusterFS component that has a very specific function; some of the main functionalities of GlusterFS are implemented as translators, e.g. I/O scheduling, striping and replication, load balancing, failover of volumes and disk caching. A translator connects to one or more volumes, performs its specific function and exposes a volume connection; translators can therefore be hooked together to provide a file system tailored to certain needs. The whole set of translators linked together is called a graph.

GlusterFS Translators graph (Source: gluster.org)

GlusterFS offers a fairly long list of translators for different needs:

  • Storage Translators. These translators define the behaviour of the back-end storage for GlusterFS. A storage translator is typically the first translator in a chain.
    • POSIX: tells GlusterFS to use a normal POSIX file system as the backend, e.g. ext4.
    • BDB: tells GlusterFS to use Berkeley DB as the backend storage mechanism. This translator uses key-value pairs to store data and POSIX directories to store directories.
  • Clustering Translators. These translators allow GlusterFS to use multiple servers to create a cluster and define its basic behaviour.
    • Unify: with this translator all the subvolumes from the storage servers will appear as a single volume. An important feature of this translator is that a given file can exist on only one of the subvolumes in the cluster. The translator uses a scheduler to determine where a file resides, e.g. Adaptive Least Usage, Round Robin, random.
    • Distribute: this translator aggregates storage from several storage servers.
    • Replicate: this translator replicates files and directories across the subvolumes. If there are two subvolumes then a copy of each file will be on each subvolume.
    • Stripe: with this translator the content of a file is distributed across subvolumes.
  • Performance Translators. These translators are used to improve the performance of GlusterFS.
    • Read ahead: this translator pre-fetches data before it’s required, usually data that appears next in the file.
    • Write behind: this allows the write operation to return even if the operation has not completed.
    • Booster: using this translator, applications are allowed to skip FUSE and access GlusterFS directly.

More translators can be written according to specific requirements.

For the next part of the Distributed File Systems Series we will be looking at XtreemFS.

Hardware Extension for Ceilometer

Ceilometer Introduction

Ceilometer is a monitoring tool for OpenStack cloud environments. In the next OpenStack release, called Havana, it will be included as a core component, and it is also available for the OpenStack releases Folsom and Grizzly. Currently Ceilometer only offers data about the OpenStack core components and the virtual machines of the cloud. For this reason the ICCLab decided to extend Ceilometer so that it can collect data from hardware devices as well.

Ceilometer Extension – Concept

The collection of data from the hardware devices should be independent and expandable. Therefore, a new Ceilometer agent with a modular structure is needed. We call this agent the Hardware Agent. The conceptual architecture is shown in the picture below.


Conceptual Architecture Ceilometer Hardware Extension

As part of Ceilometer, the Hardware Agent is installed on nearly every physical server. It should be able to poll data from various sources like IPMI, SMART or SNMP through Inspectors on different devices, which also allows it to get data from devices like switches or routers.
It should be possible to deactivate each of these sources globally or per host, and which data is extracted from each source should likewise be configurable globally or per host. The structured data is held in Pollsters.
The Hardware Agent sends the collected data to the Ceilometer event bus, which is in general a RabbitMQ message queue. The central Ceilometer Collector takes the messages and stores them in a database. Through the Ceilometer API the data can then be read by other systems, such as a billing and rating system or a graphical front end.
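To give an idea of what an SNMP Inspector has to do, the sketch below (illustrative only, not the Hardware Agent code) reads the 1-minute load average and the inbound octet counter of the first network interface from a device, using the pysnmp one-liner API and the standard UCD-SNMP and IF-MIB OIDs; the host, community string and port are placeholders.

from pysnmp.entity.rfc3413.oneliner import cmdgen

# Standard OIDs: UCD-SNMP 1-minute load average and IF-MIB ifInOctets.1
OID_LOAD_1MIN = "1.3.6.1.4.1.2021.10.1.3.1"
OID_IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"

def snmp_get(host, community, oid, port=161):
    generator = cmdgen.CommandGenerator()
    error_indication, error_status, _, var_binds = generator.getCmd(
        cmdgen.CommunityData("hw-agent", community),
        cmdgen.UdpTransportTarget((host, port)),
        oid)
    if error_indication or error_status:
        raise RuntimeError("SNMP query failed: %s"
                           % (error_indication or error_status))
    return var_binds[0][1]

print("load(1min):", snmp_get("10.0.0.1", "public", OID_LOAD_1MIN))
print("ifInOctets:", snmp_get("10.0.0.1", "public", OID_IF_IN_OCTETS))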

Ceilometer Extension – Configuration Example

The configuration of the Hardware Agent allows the administrator to deactivate Pollsters and Inspectors globally or per device/host. The host settings only take effect when no global settings are set.
To access the sources it might be necessary to set additional information like a username or password. This configuration can also be set globally or per host. If there is no host configuration, the Hardware Agent takes the global configuration; if neither is set, the Inspector uses default values to access the sources.


Configuration Sequence of the Hardware Agent

These configurations can be set in /etc/ceilometer/hardware-agent.conf in this way:

disabled_hardware_pollsters = network
disabled_hardware_inspectors = ipmi
hardware_inspector_configurations = {"snmp":
    {"securityName": "public", "port": 161}}

hardware_hosts = {"10.0.0.1":
    {"disabled_pollsters": ["cpu"],
     "disabled_inspectors": ["smart"],
     "inspector_configurations":
         {"snmp": {"port": 163}}},
    "10.0.0.2": {…}}

Ceilometer Extension – Status and Prospect

With the implemented extension it is possible to get CPU data (1, 5 and 15 minute load in %), network data (incoming/outgoing traffic in bytes, number of errors), storage data (used/total space in bytes) and memory data (used/total space in bytes) from any device over SNMP. With new Inspectors it is conceivable to get more data from new sources like IPMI or SMART.
The base of the Ceilometer extension is currently being reviewed by other Ceilometer developers (Review). If the review succeeds, the extension will be included in the Havana release of OpenStack in October 2013.

SmartOS Series: A SmartOS Primer

Some time back we introduced a piece of work we have in progress: OpenStack on SmartOS. Today we start a new blog post series to dig into SmartOS and its features, beginning with a quick introduction to get everyone started with this platform.

SmartOS is an open source live operating system mainly dedicated to offering a virtualisation platform. It is based on illumos, which in turn is derived from OpenSolaris, and thus inherits many Solaris features, such as zones, ZFS and DTrace. Joyent, the company behind SmartOS, further enhanced the illumos platform by adding a port of KVM and features like I/O throttling. The core features of SmartOS will be the topic of the next posts in this series. Thanks to these features, SmartOS is a perfect candidate for a true Cloud OS.

The following presentation will walk you through the basic tasks to set up, configure and administer SmartOS:

In the next posts we will cover SmartOS virtualisation (zones and KVM), SmartOS storage (ZFS), SmartOS networking (Crossbow) and SmartOS observability (DTrace).

Distributed File Systems Series: Ceph Introduction

With this post we are starting a new series on Distributed File Systems, beginning with an introduction to a file system that is enjoying a good amount of success: Ceph.

Ceph is a distributed, parallel, fault-tolerant file system that can offer object, block and file storage from a single cluster. Ceph’s objective is to provide an open source storage platform with no single point of failure that is highly available and highly scalable.

A Ceph Cluster has three main components:

  • OSDs. Ceph Object Storage Devices (OSDs) are the core of a Ceph cluster and are in charge of storing data, handling data replication and recovery, and rebalancing data. A Ceph Cluster requires at least two OSDs. OSDs also check other OSDs for a heartbeat and provide this information to Ceph Monitors.
  • Monitors: A Ceph Monitor keeps the state of the Ceph Cluster using maps, e.g. the monitor map, the OSD map and the CRUSH map. Ceph also maintains a history, called an epoch, of each state change in the Ceph Cluster components.
  • MDSs: A Ceph MetaData Server (MDS) stores metadata for the Ceph FileSystem client. Thanks to Ceph MDSs, POSIX file system users are able to execute basic commands such as ls and find without overloading the OSDs. Ceph MDSs can provide both metadata high availability, i.e. multiple MDS instances with at least one in standby, and scalability, i.e. multiple MDS instances, all active and managing different directory subtrees.

Ceph Architecture (Source: docs.openstack.org)

One of the key features of Ceph is the way data is managed. Ceph clients and OSDs compute data locations using a pseudo-random algorithm called Controlled Replication Under Scalable Hashing (CRUSH). The CRUSH algorithm distributes the work amongst clients and OSDs, which frees them from depending on a central lookup table to retrieve location information and allows for a high degree of scaling. CRUSH also uses intelligent data replication to guarantee resiliency.
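The toy function below is not CRUSH itself, but it illustrates the underlying idea: any client can compute, from the object name and a shared cluster map alone, which OSDs should hold an object, so no central lookup table is needed.

import hashlib

# A toy cluster map shared by all clients and OSDs (not a real CRUSH map).
OSDS = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
REPLICAS = 3

def place(object_name, osds=OSDS, replicas=REPLICAS):
    # Deterministically pick `replicas` distinct OSDs for an object
    # by ranking OSDs on a hash of (object name + OSD id).
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha1((object_name + osd).encode()).hexdigest())
    return ranked[:replicas]

# Any client computes the same placement without asking a central server.
print(place("volume-42/chunk-0007"))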

Ceph allows clients to access data through different interfaces (see the sketch after this list):

  • Object Storage: The RADOS Gateway (RGW), the Ceph Object Storage component, provides RESTful APIs compatible with Amazon S3 and OpenStack Swift. It sits on top of the Ceph Storage Cluster and has its own user database, authentication, and access control. The RADOS Gateway uses a unified namespace, which means that you can write data using one API, e.g. the Amazon S3-compatible API, and read it with another, e.g. the OpenStack Swift-compatible API. Ceph Object Storage doesn’t make use of the Ceph MetaData Servers.

Ceph Clients (Source: ceph.com)

  • Block Devices: The RADOS Block Device (RBD), the Ceph Block Device component, provides resizable, thin-provisioned block devices. The block devices are striped across multiple OSDs in the Ceph cluster for high performance. The Ceph Block Device component also provides image snapshotting and snapshot layering, i.e. cloning of images. Ceph RBD supports QEMU/KVM hypervisors and can easily be integrated with OpenStack and CloudStack (or any other cloud stack that uses libvirt).
  • Filesystem: CephFS, the Ceph Filesystem component, provides a POSIX-compliant filesystem layered on top of the Ceph Storage Cluster, meaning that files get mapped to objects in the Ceph cluster. Ceph clients can mount the Ceph Filesystem either as a kernel object or as a Filesystem in User Space (FUSE). CephFS separates the metadata from the data, storing the metadata in the MDSs and the file data in one or more OSDs in the Ceph cluster. Thanks to this separation the Ceph Filesystem can provide high performance without stressing the Ceph Storage Cluster.
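All of these interfaces ultimately talk to the RADOS object store; the following minimal sketch uses the librados Python bindings to write and read a single object, assuming a readable /etc/ceph/ceph.conf and an existing pool named "data" (both placeholders).

import rados

# Connect using the local cluster configuration and default keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Work inside an existing pool (placeholder name).
ioctx = cluster.open_ioctx("data")
ioctx.write_full("greeting", b"hello ceph")   # store one object
print(ioctx.read("greeting"))                 # read it back

ioctx.close()
cluster.shutdown()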

Our next topic in the Distributed File Systems Series will be an introduction to GlusterFS.

Automated OpenStack High Availability installation now available

The ICCLab developed a new High Availability solution for OpenStack which relies on DRBD and Pacemaker. OpenStack services are installed on top of a redundant 2-node MySQL database. The 2-node MySQL database stores its data tables on a DRBD device which is replicated between the 2 nodes. OpenStack can be reached via a virtual IP address, which makes the user feel that he is dealing with only one OpenStack node. All OpenStack services are monitored by Pacemaker: when a service fails, Pacemaker restarts it on either node.


Fig. 1: Architecture of OpenStack HA.

The 2-node OpenStack solution can be installed automatically using Vagrant and Puppet. The automated OpenStack HA installation is available in a GitHub repository.

Automated Vagrant installation of MySQL HA using DRBD, Corosync and Pacemaker


Fig. 1: Redundant MySQL Server nodes using Pacemaker, Corosync and DRBD.

If automation is required, Vagrant and Puppet seem to be the most suitable tools to implement it. But what about the automatic installation of high availability database servers? As part of our Cloud Dependability efforts, the ICCLab works on the automatic installation of high availability systems. One such HA system is a MySQL server combined with DRBD, Corosync and Pacemaker.

In this system the server logic of the MySQL server runs locally on different virtual machine nodes, while all database files are stored on a clustered DRBD device which is replicated across the nodes. The DRBD resource is managed by Pacemaker, with Corosync acting as the cluster messaging layer. If one of the nodes fails, Pacemaker automagically restarts the MySQL server on another node and synchronizes the data on the DRBD device. This combined DRBD and Pacemaker approach is best practice in the IT industry.

At the ICCLab we have developed an automatic installation script which creates 2 virtual machines and configures MySQL, DRBD, Corosync and Pacemaker on both machines. The automated installation script can be downloaded from GitHub.

Open Cloud Day 2013


Another successful Open Cloud Day was held on 11 June 2013 in Winterthur, in cooperation with /ch/open and the ICCLab.

Over one hundred participants from academia, operators, industry, SMEs and open source organisations attended the plenary, the two parallel sessions and the panel discussion. Relevant presentations on the OpenStack and CloudStack architectures were given by Muharem Hrnjadovic and Sebastien Goasguen, respectively, and were complemented by talks from companies and providers that base their business on these and other open platforms, such as CloudSigma, Anolim, EveryWare, Citrix and Cisco. Specific topics and architectures like Cloud/SDN, PaaS (Christof Marti), elasticity and automation were presented with in-depth discussions.

The plenary session gave the opportunity to introduce strategies for cloud infrastructures, in particular for the Swiss Government and the Swiss Academic Open Cloud. In the first session Willy Müller (ISB) gave an update on the state of the cloud strategy of the Swiss Federal Government. This raised a discussion on the importance of security and privacy aspects for data stored in the cloud and the need for a governmental cloud strategy. Next, Dr. Jens Piesbergen (Netcetera) presented the Swiss Governance Cloud (SGC) offering of Netcetera and partners and the challenges of bringing eGovernment applications to the cloud. The third presentation, “Open source goes Cloud”, introduced “Die Deutsche Wolke”, an initiative to build a federal cloud infrastructure in cooperation with several OSB Alliance members (100% of sites in Germany and under German jurisdiction). The plenary also covered the positions of industry players like Cisco (server architectures, virtualisation, automation and new configurations of data centres), Red Hat (OpenShift as a PaaS infrastructure to meet the IT challenge of accelerating, automating and standardising developer workflows) and CloudSigma, which detailed the challenges in delivering reliable and scalable performance to customers and the role of SDN in meeting SLA requirements.

The afternoon was dedicated to two parallel tracks with specific presentations, mainly by actors involved in the cloud business. Below are summaries of some of the presentations.

Track 1 – Open stacks and independence

  • EveryWare AG – “Cloud provider independence with Chef“: The features and capabilities of the automation platform Chef can be used to achieve independence when moving from one cloud operator to another. Key aspects of adapting configuration management were also presented.
  • Rackspace – “Elasticity in the OpenStack cloud“: After introducing a graph describing elasticity and its benefits in the cloud, this interesting presentation focused on the automation of scaling and the triggers used to activate scaling actions within the framework of the OpenStack architecture. Elasticity using OpenStack Heat was shown, with a note that the autoscaling functionality had just been contributed by Rackspace through the Otter project.
  • ICCLab, ZHAW – “Provide your own PaaS environment (on OpenStack)“: The goal of PaaS is to provide developers with an easy-to-use, flexible, scalable and stable environment for their applications and to let the provider operate the underlying infrastructure and services. Providing your own PaaS system puts you in charge of this demanding task. The presentation gave a brief overview of how complex and challenging these systems are, using the example of CloudFoundry and BOSH.
  • Anolim – “Hands-on experience with CloudStack“: This presentation provided a good B2B view of CloudStack from the perspective of a provider, in particular regarding the aspects of NaaS, multi-tenancy, a self-service portal and the use of Amazon APIs.
  • Citrix, Apache CloudStack – “The Apache Cloud Ecosystem“: CloudStack together with other Apache projects helps to build a truly open source cloud infrastructure. The use of API wrappers allows multiple cloud providers to connect to data storage infrastructures. The presentation was useful for understanding how Apache projects can be used to deploy public or private clouds.

Track 2 – Management and security for open clouds

  • Stepping stone – “Automation in the Cloud – Using Open Source Software“: The stoneycloud service was presented as the foundation of their public cloud offering, used to thinly provision new servers for customers, with Zabbix for monitoring and configuration management based on Puppet. The entire process was nicely shown with an example.
  • Bundesamt für Landestopografie Swisstopo – “Load testing in the Cloud“: Mapping applications and those using satellite data are extremely bandwidth-consuming. The presentation introduced the Federal Spatial Geodata Infrastructure (FSDI), launched in January 2013, and detailed how the load impact on the cloud service was tested, essential work used to prepare the large official go-live of their service.
  • Grid Computing Competence Centre, University of Zurich – “Roadmap for a Swiss Academic Open Cloud Infrastructure“: The SwiNG roadmap was presented, promoting the adoption of an open cloud infrastructure to support scientific computing research on a Swiss-wide academic scale.
  • Swisscom – “Cloud/SDN in Service Provider Networks“: The main focus was the paradigm called Network Function Virtualisation (NFV) and its relation to the cloud, software defined networking and open innovation.

The conference concluded with a panel discussion moderated by Pietro Brossi (ZHAW), involving Willy Müller, Sergio Maffioletti, Muharem Hrnjadovic and Sebastien Goasguen, around questions like “Where is cloud computing today in Switzerland?” and “Where is it heading in the next 2 years?” All in all, it was an excellent day, with great talks and organisation.

