SmartOS Series: Virtualisation

Last week we started a new blog post series on SmartOS. Today we continue the series and explore the virtualisation aspects of SmartOS in detail.

SmartOS offers two types of virtualisation: container-based virtualisation inherited from Solaris, i.e. zones, and hosted virtualisation in the form of KVM, which Joyent ported to SmartOS.

Containers are a combination of resource controls and Solaris zones, i.e. completely isolated virtual environments that provide an efficient virtualisation solution and a complete, secure user-space environment on a single global kernel. SmartOS uses sparse zones, meaning that only a portion of the file system is replicated in the zone, while the rest of the file system and other resources, e.g. packages, are shared across all zones. This limits the duplication of resources, provides a very lightweight virtualisation layer and makes OS upgrading and patching very easy. Given that no hardware emulation is involved and that guest applications talk directly to the native kernel, container-based virtualisation delivers close-to-native performance.

SmartOS container-based virtualisation (Source: wiki.smartos.org)

SmartOS provides two resource control methods: the fair share scheduler and CPU capping. With the fair share scheduler a system administrator can define a minimum guaranteed share of CPU for a zone; this guarantees that, when the system is busy, all zones get their fair share of CPU. CPU capping sets an upper limit on the amount of CPU that a zone can get. Joyent also introduced a CPU bursting feature that lets system administrators define a base level of CPU usage and an upper limit, and also specify how long a zone is allowed to burst, making it possible for the zone to get more resources when required.
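
In practice these limits are set in the zone's vmadm payload when the zone is created. The following is only a minimal sketch: the property names follow the vmadm payload format, the image UUID is a placeholder and the values are purely illustrative (cpu_cap is expressed as a percentage of a single CPU, so 200 means up to two CPUs).

  {
    "brand": "joyent",
    "alias": "example-zone",
    "image_uuid": "<uuid-of-an-imported-image>",
    "max_physical_memory": 512,
    "quota": 10,
    "cpu_shares": 100,
    "cpu_cap": 200
  }

Saved as zone.json, such a payload can then be passed to vmadm create -f zone.json in the global zone.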

SmartOS already offers a wide set of features, but to make it a true Cloud OS an important feature was missing: hosted virtualisation. Joyent bridged this gap by porting one of the best hosted virtualisation platforms, KVM, to SmartOS. KVM on SmartOS is only available on Intel processors with VT-x and EPT (Extended Page Tables) enabled and only supports x86 and x86-64 guests. Nonetheless, this still gives the capability to run unmodified Linux or Windows guests on top of SmartOS.

In hosted virtualisation, hardware is emulated and exposed to the virtual machine; in SmartOS, KVM does not emulate hardware itself, but exposes an interface that is then used by QEMU (Quick Emulator). When the emulated guest architecture is the same as the host architecture, QEMU can make use of KVM acceleration to increase performance.
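
To give an idea of how such a guest is defined, here is a minimal sketch of a vmadm payload for a KVM virtual machine; as before, the image UUID is a placeholder and the values are purely illustrative.

  {
    "brand": "kvm",
    "alias": "example-linux-guest",
    "vcpus": 2,
    "ram": 1024,
    "disks": [
      {
        "boot": true,
        "model": "virtio",
        "image_uuid": "<uuid-of-an-imported-kvm-image>",
        "image_size": 10240
      }
    ],
    "nics": [
      {
        "nic_tag": "admin",
        "model": "virtio",
        "ip": "dhcp"
      }
    ]
  }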

SmartOS KVM virtualisation (Source: wiki.smartos.org)

KVM virtual machines on SmartOS still run inside a zone, thereby combining the benefits of container-based virtualisation with the power of hosted virtualisation; QEMU is the only process running in the zone.

In the next part of the SmartOS Series we will look into ZFS, SmartOS's powerful storage component.

OpenFlow – Setting up A Learning Switch

If you are also interested in SDN architecture and want to use the OpenFlow specification for it, a good starting point is the tutorial on openflow.org. OpenFlow has its roots at Stanford University and provides the communication between the data plane and the control plane in an SDN-based network architecture. In this article we will show you how to set up a system that implements a learning switch using the OpenFlow specification.

The first thing you need is an OpenFlow-ready switch or, if you don’t want to buy one, a virtual switch that implements OpenFlow will do just as well. For the second use case there is a tool called mininet, with which you can easily create an OpenFlow-based SDN network topology. In this article we will use mininet, which you will either need to install first or download as a virtual appliance from github. We recommend the virtual appliance over a native installation because it already ships with a lot of OpenFlow development tools.

After installing mininet, just run the following command, which creates a topology with 3 hosts connected to one switch and a remote controller.

[gist id=3834980]
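
If you prefer to drive mininet from Python instead of the mn command line, roughly the same topology can also be built with mininet's Python API. The following is only a sketch and assumes the controller will later listen on 127.0.0.1:6633.

  #!/usr/bin/env python
  # Sketch: 3 hosts on one switch, managed by a remote OpenFlow controller.
  from mininet.net import Mininet
  from mininet.topo import SingleSwitchTopo
  from mininet.node import RemoteController

  def run():
      topo = SingleSwitchTopo(k=3)  # one switch with three hosts
      net = Mininet(topo=topo,
                    controller=lambda name: RemoteController(name,
                                                             ip='127.0.0.1',
                                                             port=6633))
      net.start()
      net.pingAll()  # same effect as "pingall" in the mininet console
      net.stop()

  if __name__ == '__main__':
      run()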

As the first controller we use POX. POX is an easy-to-use control plane for getting quick first results. The POX controller grew out of the NOX project, which was the first controller for OpenFlow. In contrast to NOX, it offers better performance for components written in Python. Installing the controller is nothing more than cloning the git repository and firing up the controller. For installing the controller, running it and doing some first tests, have a look at the openflow.org tutorial.

When an OpenFlow-ready switch receives a packet and has no flow-table entry for that packet, the switch sends the packet to the controller. In OpenFlow, every switch (physical or virtual) has its own flow table. This table describes how incoming packets “flow” through the network. The switch in our topology currently has no flow-table entries. Thus, we create a very simple component class for the controller that implements the learning-switch logic; it has the following tasks to do if two hosts want to communicate (e.g. host-1 and host-2):

  1. A packet arrives at the switch and there is no flow-table entry that matches it -> the packet is sent to the controller.
  2. The controller now performs various checks on the packet:
    1. Do we know the MAC address and the switch port of the sender (host-1)? No, so we store this information in the controller, because we know on which port the packet was received and who the sender was.
    2. Do we know the MAC address and the switch port of the recipient (host-2)? No, so the controller floods the packet out of every port.

The recipient (host-2) will send an answer to the sender (host-1), but this time the controller already has some information:

  1. host-2 sends an answer back to host-1, but there is still no flow-table entry in our switch -> the packet is sent to the controller once more.
    1. Do we know the MAC address and the switch port of the sender (host-2)? No, so we store this information in the controller, because we know on which port the packet was received and who the sender was.
    2. Do we know the MAC address and the switch port of the recipient (host-1)? YES, we stored it before in the controller. The controller can deliver the packet to the port where the recipient (host-1) is connected.
  2. Now the controller knows the MAC addresses and the ports of both hosts. We can install a flow-table entry in the switch for these two hosts.

The two hosts can now send packets to each other without involving the controller, because our switch has learned the MAC addresses and the ports of the two hosts. As soon as the controller knows both hosts, it can set up a flow-table entry in the switch. Now let us have a look at the Python code for the component class in the POX controller.

The first thing we need in the controller is a component class of which an object is created for every connected switch. When a switch shows up at the controller, the constructor of this component instance is called and a connection object for that switch is passed to it.

Next we need a launch function outside our component class, but somewhere POX can find it. The name of the component is specified as a command line option when the POX controller starts. We simply put the launch function into the same module as our component class MyCtrl.

[gist id=3863852]
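
For readers who cannot open the gist, the skeleton of such a component could look roughly like this. It is only a sketch; the class name MyCtrl and the macStore attribute follow the naming used in this article.

  # Sketch of a POX component: one MyCtrl instance per connected switch.
  from pox.core import core

  log = core.getLogger()

  class MyCtrl(object):
      def __init__(self, connection):
          # Connection object for exactly one switch; used later to send
          # packet-outs and flow modifications back to that switch.
          self.connection = connection
          # Register this object so POX delivers events such as PacketIn
          # to the _handle_* methods defined on it.
          connection.addListeners(self)
          # MAC address -> switch port mappings learned so far.
          self.macStore = {}

  def launch():
      # Called by POX when the component is started from the command line.
      def start_switch(event):
          log.debug("Controlling %s" % (event.connection,))
          MyCtrl(event.connection)
      core.openflow.addListenerByName("ConnectionUp", start_switch)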

Inside the launch function an event listener is attached by name. The listener expects an event type, passed as a string, and a method or function which should be called when that event is raised. Our component class in turn implements a _handle_PacketIn function that listens for the PacketIn events. These events are fired from the switch to the controller.

[gist id=3864066]

Our component class only listens for the PacketIn event from the switch; there the packet is parsed and then passed to the logic of the learning switch. The function act_like_lswitch does all the things discussed above:

  • Manage our macStore
  • Install flow-table entries if necessary
  • Forward packets if necessary
  • Decide whether the switch must act as a hub

[gist id=3864113]

The two functions self.send_packet and self.act_like_hub will be discussed later. If we have the destination port and MAC address in our macStore list, the component class installs a flow-table entry in the switch. With the match structure provided by OpenFlow, the switch then decides which packets have to follow the installed flow-table entry. This is done with the ofp_match object of the ofp_flow_mod message. In this component class, the flow-table entry should match all packets that arrive at a specific port on the switch with the MAC address of the destination host. Then a new action is appended to the actions list of the flow modification, specifying the output port for packets that match. Appending the action does not yet affect the switch; we must send the flow modification to the switch by invoking self.connection.send(fm). Remember that the connection object was set in the constructor of our component class, and we can use it to send information such as flow-table modifications to the switch. The function self.send_packet encapsulates some error handling and the logic to send packets in a generic way to any network device in our infrastructure; it is a helper function. The function self.act_like_hub, on the other hand, encapsulates the logic for flooding packets to every port except the input port.
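
Putting the pieces together, the packet handling could look roughly as follows. This is only a sketch under the assumptions made in this article: the method and attribute names act_like_lswitch, send_packet, act_like_hub and macStore are taken from the text above, and together with the launch function from the earlier sketch this forms a complete component; the actual gist may differ in detail.

  # Sketch of the learning-switch logic in the MyCtrl component class.
  import pox.openflow.libopenflow_01 as of

  class MyCtrl(object):
      def __init__(self, connection):
          self.connection = connection
          connection.addListeners(self)
          self.macStore = {}  # MAC address -> switch port

      def _handle_PacketIn(self, event):
          # Fired whenever the switch sends a packet to the controller.
          packet = event.parsed
          if not packet.parsed:
              return
          self.act_like_lswitch(packet, event.ofp, event.port)

      def act_like_lswitch(self, packet, packet_in, in_port):
          # Learn the sender: remember on which port its MAC was seen.
          self.macStore[packet.src] = in_port

          if packet.dst in self.macStore:
              out_port = self.macStore[packet.dst]
              # Install a flow-table entry: packets arriving on in_port with
              # this destination MAC are sent straight out of out_port.
              fm = of.ofp_flow_mod()
              fm.match.in_port = in_port
              fm.match.dl_dst = packet.dst
              fm.actions.append(of.ofp_action_output(port=out_port))
              self.connection.send(fm)
              # Also deliver the packet that triggered this PacketIn.
              self.send_packet(packet_in, out_port)
          else:
              # Destination not learned yet: behave like a hub.
              self.act_like_hub(packet_in, in_port)

      def send_packet(self, packet_in, out_port):
          # Helper: ask the switch to emit the buffered packet on out_port.
          msg = of.ofp_packet_out()
          msg.data = packet_in
          msg.actions.append(of.ofp_action_output(port=out_port))
          self.connection.send(msg)

      def act_like_hub(self, packet_in, in_port):
          # Flood the packet to every port except the one it came in on.
          self.send_packet(packet_in, of.OFPP_FLOOD)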

That’s all. Fire up the virtual network infrastructure with the mininet command above and also start the controller with log level DEBUG as follows:

[gist id=3864371]

If we call pingall in the mininet console, we can see the following output in the console where our controller is running (the exact output depends on the mininet infrastructure):

[gist id=3864376]

By running the mininet command pingall a second time, you should see no further debug messages in the controller console. This is exactly the effect we want to achieve: the switch can now forward the packets by itself. With the dpctl tool you can check the flow table of the switch, which could look like the following:

[gist id=3864747]

This simple implementation should give you an idea of what further things can be done with OpenFlow. There are still a lot of unsolved problems in our component class, such as:

  • The macStore list must become multidimensional to store the MAC addresses of more than one switch
  • Automatically handling changes at the input ports of the switch
  • More meaningful error handling

Our next steps at the ICCLab will be to configure a physical switch for OpenFlow and to use this switch with a production controller to route the traffic in our lab. And of course, we will publish our experiences with OpenFlow right here on our website.

An Introduction to Software-Defined Networking (SDN)

Software-Defined Networking (SDN) is an architecture for computer networking. The key concept of an SDN-based architecture is the separation of a control plane and a data plane. The control plane is represented by a server or appliance that is responsible for the communication between the business applications and the data plane. The data plane is represented by the network infrastructure, where we no longer distinguish between physical hardware and virtualised network devices. Thus, the control plane has to abstract the network for an administrator on both sides, towards the applications and towards the infrastructure.

Figure 1: SDN architecture (source: https://www.opennetworking.org/images/stories/downloads/white-papers/wp-sdn-newnorm.pdf)

Currently there exists one SDN specification, with related implementations, for the communication between the control plane and the data plane: OpenFlow. OpenFlow specifies neither how the control plane is technically implemented nor how the network infrastructure is built; it is responsible for the communication between them. The standardisation of the elements in SDN is driven by the Open Networking Foundation (ONF), a non-profit industry consortium that also maintains OpenFlow. This circumstance has led to the general opinion that OpenFlow is equivalent to SDN, even though there is no limitation on which technology can be used in an SDN-based infrastructure.

As the SDN architecture describes, the control plane is a single, abstracted entry point for network administrators, which has the following advantages:

  • Centralised control of network infrastructure from different vendors
  • Reduced complexity when adding new business applications and/or network devices
  • Improved network reliability and security
  • More granular control of incoming and outgoing network traffic
  • A higher degree of automation with less complexity

All these points address the problems that big datacentres currently have from the network infrastructure perspective. But what about really small networks, for example a home network? Does it make sense to separate the control and the data plane from each other if you have only one router/modem with two computers connected to it? The answer is: think big. Why do we have to manage the router/modem in the home network ourselves? In the future this may be a task for the Internet Service Provider, who is in some ways already doing this today. The benefit for the ISP and the end user is clear: fewer support tickets mean happier end users and less support effort for the ISP.

We at the ICCLab have realised that we have problems in our OpenStack cluster that can easily be solved with an SDN architecture for our internal network infrastructure. If you are interested in our experiences with SDN, we will soon publish an article on how we set up our test environment.

[1] Software-Defined Networking: The New Norm for Networks

[2] OpenFlow White Paper