Paraguayan Perspective on Cloud Applications

The Itaipu Technology Park (PTI) in Paraguay, founded in 2009, pursues scientific and technological development that contributes to regional development. Several of its centres, with around 90 engineers and researchers in total, focus on ICT integration and the challenges connected with it, including diverse plans to use cloud applications. In this context, the Service Prototyping Lab (SPLab) of Zurich University of Applied Sciences in Switzerland is conducting a two-week guest lecturing and research exchange, presenting its research initiatives and results on the PTI premises close to Ciudad del Este.

Continue reading

Reflections on Teaching Internet Service Prototyping

Our lab (Service Prototyping Lab) offers a unique elective module of the same name at the bachelor's level in computer science at the School of Engineering at Zurich University of Applied Sciences (ZHAW): Internet Service Prototyping. In the recently concluded semester, the module ran for the first time. Several students were brave enough to vote this combined Monday-morning lecture and lab into their curriculum; it takes place in the 5th semester for full-time students and slightly later for part-time students of computer science. This post reflects on the educational motivation, the design of the course, the didactic and technological concepts, and some expected and unexpected results in the wake of rapid, month-by-month technology changes.

Continue reading

Global ICT Module: Swiss Students in China

Supported by Huawei's Seeds for the Future programme, 16 students from Swiss universities of applied sciences are spending several education and project days at production and research facilities in China, covering the broad topics of telecommunications equipment, enterprise computing solutions and mobile handhelds. Among the participants are two students of Computer Science and Business Information Technology, respectively, at Zurich University of Applied Sciences (ZHAW). Their study programmes are complemented well by the technical and business contents of this on-site module.

Continue reading

Cloud Computing Summer School 2016 – highlights

The Service Engineering group concluded the Cloud Computing Summer School 2016 last Friday. The Summer School is a yearly activity at ZHAW, organised in the first two weeks of July in collaboration with Grand Valley State University (GVSU), USA.

Students of both universities attend the lectures in Winterthur. Swiss students are given the option to attend two complementary weeks in the USA right after the Summer School in Winterthur. The programme changed slightly this year with the introduction of guest lecturers. In previous years all lectures were given by SE group members; this year we invited well-known experts in the field of Cloud Computing from Switzerland and abroad to talk about current technologies as well as current practices in their organisations. This mix of academic and applied modules was very well received by the students.


Continue reading

Summer School 2015


ICCLab organised the Summer School again this year. This was the third year since the programme was incorporated into the ZHAW international exchange programme. We had 16 students in total this year: 6 from the United States of America (GVSU, Michigan) and 10 from Switzerland (ZHAW).

The Summer School is a four-week programme overall, of which two weeks were spent in Winterthur with Cloud Computing and Computer Systems lectures and labs every day. Two more weeks of education are currently being spent at our partner university, Grand Valley State University, in Allendale, MI, USA.

The lectures and labs were held by our own team members from the lab, with each expert teaching their own topic. This provided a good opportunity for aspiring young researchers to gain some formal teaching experience. Some regular lecturers and professors (who carry this responsibility during the formal semester of the University) could take a back seat and supervise the course, while others transferred their long-term expertise to the next generation of ICT engineers.

Continue reading

How to install a multi-region DevStack, Part 1

Introduction
I introduced Disaster Recovery (DR) services in the tutorial of the last ICCLab newsletter, which also gave an overview of possible OpenStack configurations. Several configuration options could be considered. In particular, when a stakeholder has both the role of cloud provider and DR service provider, a suitably safe configuration consists in distributing the infrastructure across different geographic locations. OpenStack makes it possible to organise the controllers in different regions which share the same Keystone. You can find an overall specification using Heat here; in this post I will simulate the same configuration using DevStack in VirtualBox environments. One aim of this blog post is to support students who are using a Juno DevStack.
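To verify later that both regions are indeed served by the one shared Keystone, here is a minimal sketch using python-keystoneclient; the host address and credentials are assumptions for a VirtualBox-based setup and need to be adapted:

    from keystoneclient.v3 import client

    # Hypothetical address of the shared Keystone on the first VirtualBox VM
    keystone = client.Client(username='admin', password='secret',
                             project_name='admin',
                             user_domain_name='Default',
                             project_domain_name='Default',
                             auth_url='http://192.168.56.10:5000/v3')

    # Both regions should be listed by the single shared identity service
    for region in keystone.regions.list():
        print(region.id)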

A second part of this tutorial will show a possible implementation of the DR service lifecycle between two regions.

Continue reading

The Cloud for testing environments

In our last ICCLab newsletter, in the cloud economics tutorial, we introduced how cloud infrastructures can be used to off-load variable and unpredictable resource needs. This is one of the basic principles of the public cloud business. The InIT ICCLab cloud-economics lecture provides extensive use case studies and lab exercises on these topics.

1. Use Cases

The editorial of the same newsletter reported another use case: the deployment of environments for measurements and performance tests on the public cloud. This represents another good opportunity to utilise cloud-based infrastructures.


GreenPages introduces this concept as an enterprise case, but it can be extended to other actors with similar needs, for example the requirement to simulate production conditions for testing without affecting live deployments. With cloud services, suitable environments can be provisioned for application development teams without affecting production environments, and then decommissioned, with charge-back reports for the respective cost centres. The cloud can address complex business needs with efficient, replicable and cost-effective solutions. With traditional hardware infrastructure, setting up a dedicated development environment can be expensive and time-consuming. Unlike physical test labs, testing in the cloud gives architects access to test environments on demand, without resource constraints and without capital expenditure.

2. Automation for saving operating costs

Compared to traditional server-based test environments, the cloud reduces IT operating costs through automation and orchestration features. In addition to these savings, the organisation can redirect key staff from manual configuration activities to more mission-critical and value-added tasks, increasing margins overall. Cloud test environments allow teams to test against live environments, not just modelling tools. The scenarios prepared for tests are closer to the final production configuration, which increases productivity and lowers risk in the IT environment.

3. What is the best strategy for test deployment in the cloud?

Test configurations tend to grow in complexity as innovative applications are delivered quickly to the marketplace, so it is worth looking at how to reduce the time to plan, install and validate test environments. One key aspect is that the cloud enables provisioning of test infrastructures on demand, maximising the utilisation of the assets. Feasibility studies are required to find the scenarios in which moving testing to the cloud benefits the organisation. A cost analysis should be made for private and public cloud utilisation, with the correct mix of the two.


The steps to follow in order to move applications to the cloud effectively are:

  • Understand the business needs and the benefits of the cloud

Define the business and technical objectives of moving a particular testing project to the cloud, to gain more from your cloud investment.

  • Formulate the testing strategy

The test strategy should clearly state what is to be achieved by moving testing to the cloud: cost savings, easy access to infrastructure, short cycle times, and so on. The economics need to be analysed for each defined type of cloud test, together with the risks and the duration of the tests (costs).

  • Plan your infrastructure

Define the infrastructure requirements necessary for building a test environment (private and public cloud). In the case of a public cloud, the service provider's offers and prices should be an input (costs, terms and conditions, exit or movement to another service provider).

  • Execute the tests

The applications are tested according to the defined test strategy. Optimal utilisation of the test infrastructure has to be defined to achieve cost benefits.

  • Monitor and analyze test results

Test results are monitored in real time to understand and evaluate capacity and performance issues. The monitoring should also consider the financial performance of the cloud services. The test results can furthermore be mined in the cloud, and their analytics can take advantage of data science and big-data technologies, which represents another opportunity.

4. Our experience with testing on the cloud

ICCLab is investing heavily in infrastructure dedicated to the cloud, and we currently have two OpenStack-based installations. These include test environments that are used for internal projects and for cooperation projects in the FI-PPP and H2020 programmes.

The advantage of being able to use a cloud environment for testing is clear in our everyday activities. A typical concrete use case is setting up backend services running on a number of virtual machines that can easily be (re-)created and destroyed in a very short time, without affecting any other running activity.

These testing backends represent a very convenient and reliable point of presence for the applications that need them; at the same time, the flexibility of the cloud is such that reorganising or radically changing the testing environment requires very little effort.
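As a minimal sketch of this pattern, assuming a Juno-era python-novaclient and hypothetical credentials, image and flavor names, a disposable multi-VM test backend can be created and destroyed in a few lines:

    from novaclient.client import Client

    # Hypothetical credentials and endpoint; adjust to your deployment
    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')

    image = nova.images.find(name='cirros-0.3.4-x86_64')  # any test image
    flavor = nova.flavors.find(name='m1.small')

    # Create a small disposable backend of three VMs ...
    backend = [nova.servers.create(name='test-backend-%d' % i,
                                   image=image, flavor=flavor)
               for i in range(3)]

    # ... run the tests against it, then tear everything down again
    for server in backend:
        server.delete()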

Some frequent use cases include:

  • Setting up cloud environments to support applications running locally during the development cycle. Using the cloud instead of local testing environments ensures a higher degree of consistency and reliability.
  • Running automated tests against cloud backends (a sketch follows this list).
  • Supporting demonstrations. This is a particularly useful scenario, as the testing environment running on the cloud can easily be used to showcase demos of our applications.
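For the automated tests, here is a hedged sketch of provisioning a cloud backend once per test session, assuming pytest (3.x or later) and the same hypothetical credentials, image and flavor names as in the previous snippet:

    import time

    import pytest
    from novaclient.client import Client

    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')  # hypothetical

    @pytest.fixture(scope='session')
    def backend_ip():
        """Boot a throwaway backend VM for the test session, destroy it afterwards."""
        server = nova.servers.create(name='ci-backend',
                                     image=nova.images.find(name='cirros-0.3.4-x86_64'),
                                     flavor=nova.flavors.find(name='m1.small'))
        while nova.servers.get(server.id).status != 'ACTIVE':
            time.sleep(2)
        server = nova.servers.get(server.id)
        yield list(server.networks.values())[0][0]  # first IP on the first network
        server.delete()

    def test_backend_reachable(backend_ip):
        assert backend_ip  # replace with real application-level checks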

Another factor to consider is that a service, or the applications using it, can easily be moved from the testing to the pre-production phase. One of the internal projects we are currently developing requires a Swift backend and, in a longer time frame, no or only small changes will be required if we want to distribute our application publicly and still have it running as we expect.

From a different perspective than that of testing the applications we develop, we often use our cloud to set up temporary services (e.g., open source frameworks) for evaluation or analysis purposes. This kind of testing takes great advantage of the “on-demand, self-service” character of cloud computing!

by Antonio Cimmino, Vincenzo Pii 


A Web Application to Monitor and Understand Energy Consumption in an OpenStack Cloud

In one of our projects we need to understand the energy consumption of our servers. Our initial work in this direction involved collecting energy consumption data using Kwapi and storing it in Ceilometer for further study. The data stored in Ceilometer is valuable; however, it is insufficient to really understand energy consumption in detail. Consequently, we are developing a web application which gives much greater insight into the energy consumption of our cloud resources. This is very much a work in progress, so this post just highlights a few points relating to the application, as well as a video which shows its current version.

The tool was developed to be fully integrated with OpenStack. Users log in with their OpenStack credentials (using Keystone authentication) and are redirected to the overview page, where they can see the total energy consumed by the VMs in their projects over the previous month, as well as some general information regarding the virtual machines; a line chart displays how the energy consumed varies over time.
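For context, the samples behind such charts can also be queried directly from Ceilometer. Here is a minimal sketch with python-ceilometerclient; the credentials are placeholders, and the meter names ('energy' in kWh, 'power' in W) are those conventionally published by Kwapi:

    from ceilometerclient import client

    # Hypothetical credentials for the OpenStack deployment
    ceilometer = client.get_client('2',
                                   os_username='admin', os_password='secret',
                                   os_tenant_name='admin',
                                   os_auth_url='http://controller:5000/v2.0')

    # List recent samples of the Kwapi-fed energy meter
    for sample in ceilometer.samples.list(meter_name='energy', limit=10):
        print(sample.timestamp, sample.resource_id, sample.counter_volume)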

Continue reading

Main features of Hypervisors reviewed

We prepared this blog post to help students understand the hypervisor support matrix published by OpenStack. This information is spread across different sources, many of which relate to Red Hat Linux and OpenStack. We have tried to provide a more general explanation where possible, together with reference links to other relevant sources.

The command syntax for these features naturally differs across cloud platforms. This information is not provided here yet; we will add it in an update, along with covering any comments received.

Features:

Launch (boot) – Command to launch an instance, specifying the server name, flavor ID (small, large, …) and image ID.

Reboot – Soft- or hard-reboot a running instance. A soft reboot attempts a graceful shutdown and restart of the instance; a hard reboot power-cycles the instance. By default, a server reboot is a soft reboot.

Terminate – When an instance is no longer needed, use the terminate or delete command to terminate it. You can use the instance name or the ID string.

Resize – If the size of a virtual machine needs to be changed, such as adding more memory or cores, this can be done using the resize operation. Using resize, you select a new flavor for your virtual machine and instruct the cloud to adjust the configuration to match the new size. The operation reboots the virtual machine and takes several minutes of downtime. The network configuration is maintained, but connectivity is lost during the reboot, so this operation should be scheduled, as it will lead to application downtime.
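Although the per-platform command syntax is deferred to the announced update, here is a hedged sketch of these four lifecycle operations through the Python API, assuming a Juno-era python-novaclient; credentials, image and flavor names are hypothetical:

    from novaclient.client import Client

    # Hypothetical credentials and endpoint
    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')

    # Launch: specify name, image and flavor
    server = nova.servers.create(name='demo-vm',
                                 image=nova.images.find(name='cirros-0.3.4-x86_64'),
                                 flavor=nova.flavors.find(name='m1.small'))

    server.reboot(reboot_type='SOFT')  # soft reboot (the default)

    # Resize to a bigger flavor; once the resize has finished, it must be confirmed
    server.resize(nova.flavors.find(name='m1.medium'))
    server.confirm_resize()

    server.delete()  # terminate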

Rescue – An instance's filesystem can become corrupted after prolonged usage. Rescue mode provides a mechanism for access even when the VM's image renders the instance inaccessible. It is possible to reboot a virtual machine in rescue mode: a rescue VM is launched that allows the user to fix their VM (by accessing it with a new root password).

Pause / Un-pause – This command stores the state of the VM in RAM. A paused instance continues to run in a frozen state.

Suspend / Resume – Administrative users might want to suspend / resume an instance if it is infrequently used or to perform system maintenance. When you suspend an instance, its VM state is stored on disk, all memory is written to disk, and the virtual machine is stopped. Suspending an instance is similar to placing a device in hibernation; memory and vCPUs become available to create other instances.

Inject Networking – Allows setting up a private network between two or more virtual machines. This network is visible neither to the other virtual machines nor to the physical network.

Inject File – A feature that allows files to be injected during boot. Normally the target is the root partition of the guest image. There are sub-features that enable further functionality to inspect arbitrarily complex guest images and find the root partition to inject into.

Serial Console Output – It is possible to access a VM directly through a TTY serial console interface, in which case setting up bridged networking, SSH and the like is not necessary.

VNC Console – VNC (Virtual Network Computing) is software for remote control, based on server agents installed on the hypervisor. This feature indicates VNC support for the hypervisor and its VMs.

SPICE Console – Red Hat introduced the SPICE remote computing protocol for SPICE client-server communication. Other components developed include the QXL display device and driver, a solution for interacting with virtualised desktop devices. The SPICE project deals with both the virtualised devices and the front end. SPICE support needs to be enabled in the qemu server, and a client is needed to view the guest.

RDP Console – Allows connecting to the hypervisor and VMs via a Remote Desktop Protocol based console.
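As a sketch of exercising two of these console features from python-novaclient (hypothetical credentials and server name; the VNC call only succeeds where the hypervisor supports VNC):

    from novaclient.client import Client

    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')  # hypothetical
    server = nova.servers.find(name='demo-vm')

    # Serial console log (last 20 lines) and a URL for a browser-based VNC session
    print(server.get_console_output(length=20))
    print(server.get_vnc_console('novnc')['console']['url'])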

Attach / Detach Volume – Allows adding / removing volumes in the volume pool. This feature also allows attaching / detaching extra volumes to and from existing running VMs.

Live Migration – Migration describes the process of moving a guest virtual machine from one host physical machine to another. This is possible because guest virtual machines are running in a virtualized environment instead of directly on the hardware. In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine.

Snapshot – A snapshot creates a coherent copy of a number of block devices at a given time. A live snapshot is one taken while the virtual machine is running, which is ideal for live backups of guests without guest intervention.
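A hedged sketch of the three operations just described (volume attach/detach, live migration and snapshotting), again assuming a Juno-era python-novaclient with hypothetical names and a placeholder volume UUID:

    from novaclient.client import Client

    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')  # hypothetical
    server = nova.servers.find(name='demo-vm')

    # Attach an existing Cinder volume (placeholder UUID) as /dev/vdb, then detach it
    attachment = nova.volumes.create_server_volume(server.id, 'VOLUME-UUID', '/dev/vdb')
    nova.volumes.delete_server_volume(server.id, attachment.id)

    # Live-migrate the running VM to another compute node ('compute-2' is hypothetical)
    server.live_migrate(host='compute-2', block_migration=False, disk_over_commit=False)

    # Take a (live) snapshot; the result is a new image in Glance
    print(server.create_image('demo-vm-snapshot'))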

iSCSI – The Internet Small Computer System Interface, an Internet Protocol (IP) based storage networking standard for linking data storage facilities. For a hypervisor, this feature means that iSCSI-based disks can be added to the storage pool.

iSCSI CHAP – Challenge Handshake Authentication Protocol (CHAP) is a network login protocol that uses a challenge-response mechanism. You can use CHAP authentication to restrict iSCSI access to volumes and snapshots to hosts that supply the correct account name and password (or “secret”) combination. Using CHAP authentication can facilitate the management of access controls because it restricts access through account names and passwords, instead of IP addresses or iSCSI initiator names.

Fibre Channel – This feature indicates that the hypervisor supports optical fibre connectivity, in particular Fibre Channel storage networks cabled and configured with the appropriate Fibre Channel switches. This has implications for how the zones are configured. For example, KVM virtualisation with VMControl supports only SAN storage over Fibre Channel. Typically, one of the fabric switches is configured with the zoning information. Additionally, VMControl requires that the Fibre Channel network has hard zoning enabled.

Set Admin Pass – The use of a guest agent to change the administrative (root) password on an instance.

Get Guest Info – Gets information about the guest machines of the hypervisor; this information can be retrieved from within the VM. Hypervisors can handle several guest machines, which are resource configurations assigned by the virtualisation environment.

Get Host Info – Gets information about the node which is hosting the VMs.

Glance Integration – Glance is the image storage system used to store VM images. This feature indicates that the hypervisor integrates with Glance's storage capabilities.

Service Control – The hypervisor / compute layer is a collection of services that enable you to launch virtual machine instances. You can configure these services to run on separate nodes or on the same node. Most services run on the controller node, while the service that launches virtual machines runs on a dedicated compute node. This feature also allows installing and configuring these services on the controller node.

VLAN Networking – Indicates that it is possible to pass VLAN traffic from a virtual machine out to the wider network.

Flat Networking – Flat networking uses Ethernet adapters configured as bridges to allow network traffic to transit between all the nodes. This setup can be done with a single adapter on the physical host, or with multiple ones. Unlike VLAN networking, this option does not require a switch that does VLAN tagging, and it is a common setup for development installations or proofs of concept. For example, when you choose flat networking, Nova does not manage networking at all; instead, IP addresses are injected into the instance via the file system (or passed in via a guest agent). Metadata forwarding must be configured manually on the gateway if it is required within your network.

Security Groups – This is a feature of the hypervisor (compute). The cloud Networking service offers similar functionality through a mechanism that is more flexible and powerful than the built-in security group capabilities. In that case, the built-in capabilities should be disabled and all security group calls proxied to the Networking API; if you do not, security policies will conflict by being applied simultaneously by both services.
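A short sketch of the nova-managed (pre-Neutron) security group API via python-novaclient, with hypothetical credentials, creating a group that allows inbound SSH:

    from novaclient.client import Client

    nova = Client('2', 'admin', 'secret', 'demo', 'http://controller:5000/v2.0')  # hypothetical

    # Create a group and allow inbound SSH from anywhere
    sg = nova.security_groups.create('test-env', 'rules for disposable test VMs')
    nova.security_group_rules.create(sg.id, ip_protocol='tcp',
                                     from_port=22, to_port=22, cidr='0.0.0.0/0')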

Firewall Rules – Allows service providers to apply firewall rules at a level above security group rules.

Routing – The ability of the hypervisor to map internal addresses to external public addresses. The network part of the hypervisor essentially functions as an L2 switch with routing.

Config Drive – Exposes instance configuration data to the guest through a special drive attached at boot. Auto Configure Disk – Automatically resizes the root partition to match the size of the flavor's root drive before booting.

Evacuate – As a cloud administrator, while managing your cloud you may reach the point where one of the compute nodes fails, for example due to a hardware malfunction. At that point you can use server evacuation to make the managed instances available again.

Volume swap – The hypervisor supports the definition of a swap volume (disk) to be used as additional virtual memory.

Volume rate limiting – Rate limiting (per day or hour) for volume access. It is used to enable rate limiting uniformly for all back ends, regardless of each back end's built-in feature set.


30th Birthday of the Swiss Informatics Society


The 30th birthday of the Swiss Informatics Society (SI), held on Tuesday, 25 June in Fribourg (CH), concluded successfully with more than 200 participants, who between them attended the thematic workshops in the morning, the inaugural meeting of the Swiss AIS Chapter, and the plenary in the afternoon.

Hereafter we report on relevant topics from the Cloud Computing workshop, moderated by ZHAW ICCLab, and on the award ceremony.

Workshop: Cloud Computing in Switzerland

Cloud Computing is transforming the IT industry, and this concerns a high-tech country like Switzerland in particular. The resulting potential and risks need to be well understood in order to fully leverage the technical as well as economic advantages. This workshop provided an overview of current technological and economic trends, with a particular focus on Switzerland and its Federal Cloud Computing strategy.

8:45 – 9:00  Intro by Christof Marti (ZHAW)
Workshop introduction, goals and activities on Cloud Computing at ZHAW.

The Cloud Computing Special Interest Group (SIG), whose formation is coordinated by ZHAW ICCLab, was introduced, with its overall goals being to stimulate the knowledge, implementation and development of Cloud Computing in industry, research, SMEs and education. The kick-off meeting is foreseen for September (watch si-cc-sig or the LinkedIn group for more details). Further information was presented on the InIT Cloud Computing Lab (ICCLab), a research lab dedicated to Cloud Computing in the focus area of Service Engineering, encompassing important research themes and cloud initiatives such as automation, interoperability, dependability, SDN for clouds, monitoring, rating, charging, billing and Future Internet platforms.

9:00-09:20  Peter Kunszt  (SystemsX)
Cloud computing services for research – first steps and recommendations

The view of the scientific community on technological trends and the opportunities offered by Cloud Computing infrastructures. An interesting start to the workshop by the project leader of SyBIT (SystemsX.ch Biology IT), with an overview of possible cloud services for science and education, recommendations concerning commercial vs. self-made clouds, and possible pricing and billing models for science.

9:20-09:40 Markus Brunner (Swisscom)
Cloud/SDN in Service Provider Networks

Markus illustrated “why a new network architecture” with a feature comparison of aging (static) network technology and the current (dynamic) trend against global needs such as cost effectiveness, agility and service orientation. The proposal was to look at new infrastructures based on SDN (Software Defined Networking) and NFV (Network Function Virtualisation). NFV is concerned with porting network or telecommunications applications, which today typically run on dedicated and specialised hardware platforms, to virtualised cloud platforms. Some basic architectures were discussed, as well as the interplay of NFV and SDN. The presentation concluded with an analysis of today's challenges for cloud technologies in communication-oriented applications: real-time behaviour, security, predictable performance, fault management in virtualised systems, and fixed / mobile differences.

9:40-10:00  Sergio Maffioletti (University of Zurich)
A roadmap for an Academic Cloud 

“The view of the scientific community on how cloud technology could be used as a foundation for building a national research support infrastructure.” An interesting and innovative presentation by Sergio, starting from a “why, and what's wrong” analysis and moving through the initiatives in place (new platforms, cloud utilisation and long-term competitiveness objectives). The presentation also gave an overview of how this is being implemented through the national research infrastructure programme (the Swiss Academic Compute Cloud project) and innovative management systems (a mechanism to collect community requirements and implement technical services and solutions). It concluded with objectives and targets such as inter-operation, intra-/inter-institutional access to infrastructure, cloud enablement, research clustering and national computational resources.

10:00-10:20 Michèal Higgins  (CloudSigma) – remote
CloudSigma and the Challenges of Big Science in the Cloud

Switzerland-based CloudSigma is a pure-cloud IaaS service provider, offering highly available, flexible, enterprise-class cloud servers in Europe and the U.S. It offers innovative services such as all-SSD storage, high-performance solutions and firewall/VPN services. Helping to build a federated cloud platform (Helix Nebula) that addresses the needs of big science, CloudSigma sees the biggest challenge and value in having huge data sets available close to the computing instances. In conclusion, CloudSigma offers the science community free storage of common big data sets close to their compute instances, reducing the cost and time of transferring the data.

10:20-10:40 Muharem Hrnjadovic (RackSpace)

This session presented an overview of the key capabilities of cloud-based infrastructures such as OpenStack, together with challenging scenarios.

10:40-10:45 All
Q&A session

Swiss Informatics Competition 2013

Alongside the speakers and panel discussions, captivating student projects (Bachelor's and Master's in Computer Science) from universities and specialised high schools were presented to illustrate the diversity of computing technologies. Selected projects were also given awards by a team of experts. The details on the student projects are available here.

[Photos from the cloud computing workshop, the plenary and the award ceremony]