Tag: networking

Programmatic identification of cloud providers

by Josef Spillner

There is an ongoing debate in the community about the level of awareness (and, related to this, influence) an application or SaaS instance should have concerning where and how it is hosted. The arguments range from “none at all”, spoken with a deploy-and-forget mindset, to “as much as possible”, spoken with a do-it-yourself attitude. In practice, some awareness and influence is certainly present, for instance in application-specific autoscaling, self-healing, and self-management in general.

One particular aspect of the discussion is whether an application should know in which cloud environment it is running. Even though the engineer may have targeted a specific stack with project conventions, there may be migrations between several instances of the same stack, e.g. different installations of OpenShift, Cloud Foundry or other stacks for running cloud applications, across regions or even across providers. Some time ago we looked into identifying the level of virtualisation in a nested-virtualisation context. Now we complement this vertical view with a horizontal one. This blog post does not argue for or against cloud provider identification; it merely describes a tool to gain this knowledge and exploit it in any possible way. The tool is called whatcloud.
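How can a tool like whatcloud find this out? Here is a minimal sketch of the general approach (an illustration, not whatcloud's actual implementation; the probed endpoints are the commonly documented ones, and the vendor-string heuristics are assumptions): query the well-known link-local metadata services, then fall back to the DMI vendor string exposed by the hypervisor.

```python
# Best-effort cloud provider detection: probe well-known metadata
# endpoints, then fall back to the DMI vendor string. Illustrative
# sketch only; the heuristics below are assumptions, not whatcloud's.
import urllib.request

def _http_probe(url, headers=None, timeout=1):
    try:
        req = urllib.request.Request(url, headers=headers or {})
        return urllib.request.urlopen(req, timeout=timeout).status == 200
    except Exception:
        return False

def detect_cloud():
    # EC2-style metadata service on the link-local address
    # (also answered by OpenStack's EC2-compatible metadata service)
    if _http_probe("http://169.254.169.254/latest/meta-data/"):
        return "ec2-compatible"
    # Google Compute Engine requires a marker header
    if _http_probe("http://metadata.google.internal/computeMetadata/v1/",
                   headers={"Metadata-Flavor": "Google"}):
        return "gce"
    # Fall back to the platform vendor exposed via DMI
    try:
        with open("/sys/class/dmi/id/sys_vendor") as f:
            vendor = f.read().strip().lower()
        for needle, name in (("microsoft", "azure/hyper-v"),
                             ("vmware", "vmware"),
                             ("openstack", "openstack")):
            if needle in vendor:
                return name
    except OSError:
        pass
    return "unknown"

if __name__ == "__main__":
    print(detect_cloud())
```

Each probe is cheap and fails fast, so an application can run the check once at startup and adapt its behaviour accordingly, e.g. pick a provider-specific autoscaling API.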


A Design Draft for Tenant Isolation without Tunneling in OpenStack

The Problem

Cloud networking is based on technologies and protocols that were not originally designed for it. This has led to unnecessary overhead and complexity in all phases of a cloud service. Tunneling protocols introduce inherent cascading and encapsulation, especially in multi-tenant systems. The problem is compounded by vendor-specific configuration requirements and heterogeneous architectures. This complexity leads to systems that are hard to reason about, error-prone and energy-inefficient, and it increases the difficulty of configuration and maintenance.
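To make the encapsulation overhead concrete: a typical tunneling protocol such as VXLAN wraps every tenant frame into outer Ethernet, IPv4, UDP and VXLAN headers. The arithmetic below uses the standard header sizes (assuming an outer IPv4 header without options and no outer VLAN tag):

```python
# Per-packet overhead of VXLAN encapsulation (bytes; standard header
# sizes, outer IPv4 without options, no outer VLAN tag assumed).
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN
print(f"{overhead} bytes added to every tenant frame")  # 50 bytes

# A full tenant frame at a 1500-byte MTU (1514 bytes including its
# own Ethernet header) grows to 1564 bytes on the wire:
inner = 1514
print(f"{overhead / (inner + overhead):.1%} of the wire is tunnel header")  # ~3.2%
```

Those 50 bytes per packet also force a choice between lowering the tenant MTU and enabling jumbo frames in the underlay, which is exactly the kind of configuration burden a tunnel-free design avoids.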

5th Swiss OpenStack User Group Meetup – at University of Zurich


This fifth edition of the OpenStack CH User Group meetup was dedicated to the networking aspects of OpenStack, but not exclusively.
More than 40 people attended the meeting on 24 October from 18:00 to 21:00 at the University of Zurich, Irchel campus.
Sergio Maffioletti (GC3 project director, University of Zurich) gave a short welcome before the presentations started.

The agenda included four talks of about 30 minutes each, in the following order:

OpenStack Networking Introduction by Yves Fauser, System Engineer VMware NSBU

The talk covered these topics: a refresher on traditional networking, the big picture of the OpenStack integrated projects, why OpenStack Networking is now called Neutron, networking before Neutron with Nova-Networking, the drawbacks of Nova-Networking that led to Neutron, OpenStack networking with Neutron, a Neutron overview, the available plugins, a Neutron demo, and “Neutron – State of the Nation”.

NFV and Swisscom’s OpenStack Architecture by Markus Brunner – Swisscom

Markus Brunner gave an introduction to Network Function Virtualization and to how Swisscom envisions that implementing it in the service chain could help overcome the dilemma of increasing traffic versus decreasing customer fees, by offering value-added virtual networking services (firewall, IP-TV, …). Another major aspect is minimizing the number of different hardware boxes by using virtualized components running on cloud infrastructure, thereby reducing vendor lock-in.

Mirantis Fuel and Monitoring and how it all powers the XIFI FI PPP Project by Federico Facca – Create-Net – Italy

Federico gave a presentation on the XIFI project, the XIFI architecture and the Infrastructure TOOLBOX, whose objective is to automate the installation of the host operating system, the hypervisor and the OpenStack software through the Preboot eXecution Environment (PXE). The TOOLBOX also defines and selects a deployment model among the available ones and discovers the servers on which to install the software.
The XIFI federation allows a “role” (controller, storage, compute, etc.) to be specified for each server and performs the setup and network configuration (VLANs etc.), supports registration of the infrastructure into the federation, and finally tests the deployment to verify that everything has been installed correctly.

Ceph Storage in OpenStack by Jens-Christian Fischer, SWITCH

The presentation gave interesting hints on Ceph’s design goals, Ceph storage options, the Ceph architecture, the CRUSH algorithm, the monitors (MON) and the metadata server (MDS). Jens-Christian then concluded with information about OpenStack at SWITCH and their test cluster.

 


SDN – OpenFlow Presentation to the IT-MAS students

Last Friday, Philipp from the ICCLab gave a presentation about SDN and OpenFlow to ZHAW master students. The big difference from our usual audience is that the average age of these students is higher and all of them have been working in IT for many years; most of them also hold leading positions in their daily work. The presentation was not very detailed and basically covered the two white papers from the ONF and openflowhub.org about SDN and OpenFlow. We also talked about the available OpenFlow controller products and why SDN in general is such an important topic for datacenter providers, ISPs and Carrier Ethernet operators.

The discussion after the presentation also contained some critical voices, which raised problems like:

  • OK, we are vendor-independent and have full control over the network, but this also means that we are responsible for it.
  • Is it not easier for SMEs to buy a ready-made network component from, e.g., Cisco instead of programming the logic themselves?
  • The centralized network controller looks like a single point of failure, and without the network, most business applications will not work.
  • Will the programmed network logic inside the controller not become a huge bunch of code that was previously small and distributed across every device?

Of course, we can answer these questions and address these problems within the SDN paradigm. But the conclusion for us is that we can only get these people on board if we not only talk about SDN concepts but also present demonstrators. What we need at this point are:

  • Concrete, working pieces of code and open network logic that is tested and maintained, e.g. spanning-tree modules (a minimal starting point is sketched after this list).
  • Testbeds and use cases for implementation, migration and operation.
  • Fully functional and easy-to-implement network controller modules.
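To make the first point concrete, here is a minimal controller module written for the Ryu framework (our choice purely for illustration; POX, Floodlight and others look similar, and this is a sketch rather than the tested, maintained logic called for above). It installs a table-miss rule and floods every unmatched packet, i.e. it turns the switch into a hub:

```python
# flood_switch.py -- a minimal Ryu controller module (OpenFlow 1.3).
# Installs a table-miss rule on switch connect and floods any packet
# the switch sends up. A teaching sketch, not production network logic.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FloodSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss entry (priority 0): send unmatched packets to us.
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(),
                                      instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Naive behaviour: flood out of every port except the ingress.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=msg.match['in_port'],
                                        actions=actions, data=data))
```

Started with `ryu-manager flood_switch.py` against an OpenFlow 1.3 switch (e.g. Open vSwitch in a Mininet topology), this is the event loop on top of which a learning switch or a spanning-tree module would add its state.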

People like these master students are needed on board because they are, or will be, the decision makers. It is also not enough to say: “Look, Google uses it in their wide area network.”

Integrating and migrating our existing network infrastructure is exactly what we are planning to do at the ICCLab. I hope that more people will share their knowledge and experience about successfully migrating their classical networks to SDN-based infrastructures.

An Introduction to Software-Defined Networking (SDN)

Software-Defined Networking (SDN) is an architecture for computer networking. The key concept of an SDN-based architecture is the separation of a control plane and a data plane. The control plane is a server or appliance that is responsible for the communication between the business applications and the data plane. The data plane is the network infrastructure, where we no longer differentiate between physical hardware and virtualized network devices. The control plane thus has to abstract the network for the administrator in both directions: towards the applications and towards the infrastructure.

Figure 1: SDN architecture (source: https://www.opennetworking.org/images/stories/downloads/white-papers/wp-sdn-newnorm.pdf)

Currently there exists one SDN specification, with related implementations, for the communication between the control plane and the data plane: it is called OpenFlow. OpenFlow specifies neither how the control plane is technically implemented nor how the network infrastructure is built; it is only responsible for the communication between them. The standardization of the elements in SDN is done by the Open Networking Foundation (ONF), a non-profit industry consortium that maintains the OpenFlow specification. This circumstance has led to the general opinion that OpenFlow is equivalent to SDN, although in fact there is no limitation on which technology can be used in an SDN-based infrastructure.
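Conceptually, OpenFlow reduces the data plane to a prioritized match/action table that the control plane populates. The toy lookup below (plain Python, not the OpenFlow wire format) illustrates the model, including the table-miss entry that hands unknown traffic to the controller:

```python
# Conceptual sketch of an OpenFlow-style flow table: the controller
# installs prioritized match/action entries, the switch evaluates
# them per packet. Not the wire format, just the model.
flow_table = [
    {"priority": 10, "match": {"eth_dst": "aa:bb:cc:dd:ee:ff"}, "actions": ["output:2"]},
    {"priority": 0,  "match": {},                               "actions": ["controller"]},  # table-miss
]

def lookup(packet):
    # Highest-priority matching entry wins; an empty match is a wildcard.
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["drop"]  # no entry at all

print(lookup({"eth_dst": "aa:bb:cc:dd:ee:ff", "in_port": 1}))  # ['output:2']
print(lookup({"eth_dst": "00:11:22:33:44:55", "in_port": 1}))  # ['controller']
```

Everything an SDN controller does ultimately boils down to installing, updating and removing such entries on the switches it manages.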
As the SDN architecture describes, the control plane is a single, abstracted entry point for network administrators, which has the following advantages:

  • Centralized control of different network infrastructure vendors
  • Reduced complexity of newly added business applications and/or network devices
  • Improved network reliability and security
  • More granular control of incoming and outgoing network traffic
  • A higher degree of automation with less complexity

All these points address problems that big datacentres currently face from the network infrastructure perspective. But what about really small networks, for example a home network? Does it make sense to separate the control and data planes from each other if you have only one router/modem with two computers connected to it? The answer is: think big. Why do we have to manage the router/modem in our home network ourselves? In the future this may be a task for the Internet Service Provider, who in some way already does this today. The benefit for the ISP and the end user is clear: fewer support tickets mean happier end users and less support effort for the ISP.

We at the ICCLab have realised that we have problems in our OpenStack cluster that can be solved easily with an SDN architecture for our internal network infrastructure. If you are interested in some of the experiences we have made with SDN, we will soon publish an article on how we set up our test environment.

[1] Software-Defined Networking: The New Norm for Networks

[2] OpenFlow White Paper