Month: October 2013

Tea Kolevska

Tea Kolevska is doing a six-month traineeship in the cloud computing lab as part of the IAESTE program for the exchange of students for technical experience. She is a bachelor's student at the Faculty of Electrical Engineering and Information Technologies in Skopje, Macedonia, majoring in Informatics and Computer Engineering. During her stay, she is working on the Rating, Charging and Billing project for clouds, developing the ICCLab's Cyclops solution.

First meeting of the SDN Group Switzerland at SWITCH

The first meeting of the SDN Group Switzerland took place at SWITCH, organized by Philipp Aeschlimann from the ICCLab and Kurt Baumann from SWITCH. There were close to 40 participants from universities around Switzerland, with the goal of bringing researchers and campus IT operators to one table to talk about SDN.


The talks covered a wide range of topics within the SDN research theme:

  • Privacy Proxy (ETHZ, Bernhard Ager)
  • Outsourcing the Routing Control Logic (ETHZ, Xenofontas Dimitropoulos)
  • Lab Experiences with OpenFlow (SDN) (ETHZ, Derk-Jan Valenkamp)
  • A framework inspired by chemical-reaction networks (UniBas, Manolis Siflakis)
  • Proposal for a new network abstraction that is fault tolerant and heavily automated (EPFL, Maciej Kuzniar)
  • SDNs and Cloud Computing for the Swiss academic community (SWITCH, Simon Leinen)
  • QoS with OpenFlow for OpenStack on the wire (ZHAW, Philipp Aeschlimann)
  • Software-Defined Service-Centric Networking (UniBE, Torsten Braun)
  • OFELIA Testbed for Experimentation with OpenFlow/SDN (ETHZ, Vasileios Kotronis)

The SDN Group Switzerland had a good start with these topics. The second meeting is planned at ZHAW in Winterthur, where we will keep the good mix of research and operational SDN tasks. A further goal is to also invite industrial partners. If you are interested in the SDN Group Switzerland, join the LinkedIn group or contact one of the two chairs of the group directly (Philipp Aeschlimann or Kurt Baumann). Meetings are planned every three months, and a blog with the topics will be created soon. From the ICCLab, Piyush Harsh, Thomas Michael Bohnert, Antonio Cimmino and Philipp Aeschlimann attended. If you are interested in the SDN research topics of the ICCLab, there are multiple options for you to get involved.

Create-Net Visits the ICCLab

ZHAW ICCLab – Winterthur 24/10/2013

ICCLab invited Federico Facca from Create-Net to give an overview of his lab's research activities and of relevant projects currently in execution.

Around two hours were dedicated to this event, during which both sides exchanged information about their activities and innovation interests.

The first part of Federico's presentation was dedicated to introducing how Create-Net is organised and its areas of competence:

CREATE-NET, headquartered in Trento, Italy, is an international research center which operates as a non-profit association. Its mission is built around a few key points: achieving research excellence in ICT, promoting technology and innovation transfer, and focusing on key application areas and services. One of its key goals is to provide significant benefit to the Autonomous Province of Trento. It is mainly structured around the following application domains: intelligent transportation and sustainable mobility, interactive and mobile social media, smart energy systems, and well-being and e-health.

After this introductory speech, most of the time was dedicated to the FI-PPP XI-FI project and the Mirantis FUEL tool, which is used to automate the introduction and configuration of new nodes in the XI-FI federated cloud. Mirantis opened its private library of configuration and deployment tools for OpenStack to the public. The library, called FUEL, has already been used in many OpenStack projects that the company completed for its business customers. ZHAW ICCLab is very interested in FUEL because it will be used in its datacenter.

The event concluded with a round table in which each ICCLab researcher introduced their research interests and projects.

ZHAW ICCLab and Create-Net contribute to the FI-PPP programme through their participation in FI-WARE and XI-FI respectively. In addition, both organisations participate in CONCORD, the support and coordination action of the FI-PPP.


Future Networks 12th FP7 Concertation – FI Cluster Meeting

Green and Energy-efficient Networking Workshop – 22 October 2013, Brussels

This workshop on “Green and Energy-efficient Networking” is jointly organized by the FP7 projects TREND and ECOnet, together with the GreenTouch Consortium.

It is organized in the context of the FI Cluster and is also open to the relevant experts and researchers of projects usually participating in the RAS and CaON Clusters.

The agenda includes presentations, panels and interactive (live) discussions among the participants.

The first session of the day is dedicated to "Energy Efficiency Modelling and Metrics", with presentations and interaction with participants. The topics discussed relate to core networks, wireless access networks and wired network devices, including green metering and wireless network modelling by GreenTouch.

The second session addresses "Project Perspectives and Research Challenges", where projects dealing with green technologies have the opportunity to briefly present their focus, key outcomes and next steps in green networking. Presentations are planned from UniverSelf, FLAMINGO, CONCERTO, MobileCloud, Content, GEYSER, eBalance, and others.

The workshop ends with a panel discussion and closing remarks by EC officers and Future Internet Cluster chairs.


 

ICCLab joins the new COST Action Autonomous Control for a Reliable Internet of Services (ACROSS) – IC1304

We are happy to announce that ICCLab was invited to join the COST Action Autonomous Control for a Reliable Internet of Services (ACROSS) – IC1304 as the Swiss representative in the Management Committee.

ICT COST Action IC1304 Autonomous Control for a Reliable Internet of Services (ACROSS)

Descriptions are provided by the Actions directly via e-COST.

Currently, we are witnessing a paradigm shift from the traditional information-oriented Internet into an Internet of Services (IoS). This transition opens up virtually unbounded possibilities for creating and deploying new services. Eventually, the ICT landscape will migrate into a global system where new services are essentially large-scale service chains, combining and integrating the functionality of (possibly huge) numbers of other services offered by third parties, including cloud services. At the same time, as our modern society is becoming more and more dependent on ICT, these developments raise the need for effective means to ensure quality and reliability of the services running in such a complex environment. Motivated by this, the aim of this Action is to create a European network of experts, from both academia and industry, aiming at the development of autonomous control methods and algorithms for a reliable and quality-aware IoS.

Information and Communication Technologies COST Action IC1304

Action Fact Sheet

Download AFS as .RTF

Memorandum of Understanding

Download MoU as PDF

Rating, Charging, and Billing for the Clouds

The cloud has revolutionized the way we think of computing. Everything is now on-demand, self-service, pay-as-you-go, and scalable. Although these are welcome features that tremendously reduce CapEx and OpEx for any business, the true potential of clouds with regard to novel rating, charging and billing models has yet to be realized. Infrastructure clouds are now being treated as a commodity, and much of the innovation is shifting towards platform and software services built on top of infrastructure clouds.

The telecom domain has always seen a lot of innovation in this regard: different pricing tiers, bundled services, numerous packages, and more. And think about it: in reality they offer essentially just one type of service, namely voice traffic. They have standardized their protocols, and they even have a standard to facilitate charging and billing, namely Diameter. Without downplaying the significance of this innovation, the standards are needed because consumers are mobile and roam from one business domain into another; unless their interfaces, user-equipment radios and accounting are streamlined (read: standardized), it would be almost impossible to support the demands of a modern telco consumer.

So the question to ask now is: is there a need to replicate what has been done in the telecom world for cloud services? I tend to lean towards a no. The needs of cloud consumers are not the same as those of telco consumers. By keeping things manageable and simple, cloud providers can keep costs low, which further reinforces the USP of clouds: simplicity and lower costs. Computations in the cloud are generally not mobile; there is typically no need for them to be. Thanks to the Internet, a computation can take place in any part of the world, and there is a complete delinking of the customer's actual location from where the services are offered, as long as certain broad-ranging SLAs are satisfied. Therefore there is no real need for cloud hardware systems to implement a complex plethora of standards, and the same could be argued for charging and billing strategies (there is a need for standards in the consumer-facing management interfaces to mitigate vendor lock-in, but I reserve that for another blog post on another day).

A unified and simple billing strategy would work wonders for consumers. But one should also be mindful of applying such a strategy willy-nilly: there is a need to justify the cost to the consumer as well as the cost to the provider, and hence a proper rating and charging engine is desired. This is where there is still a lot of room for innovation in the world of clouds. Modern infrastructure management stacks, including CloudStack and OpenStack, already include monitoring models and metering functions in their core offerings. Providers should use these to determine the real cost of operating their cloud infrastructure, and then tie this to the price offered to consumers.

A proper rating and charging engine would really help providers make sound judgements in this respect. There are already numerous open-source products on the market, including jBilling, openbillingsystem.com and opensourcebilling.org. However, the "open" part is often a severely crippled offering, either providing only simple billing interfaces or providing features without the ability to compute and process usage data records (UDRs). There is a real need for a true open-source platform that allows cloud services to accurately perform rating and charging for their customers, so that they can keep the billing model accurate and simple, but "no simpler". A minimal sketch of such a rating step is shown below.
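
To make the idea of processing UDRs more concrete, here is a minimal Python sketch of a rating step that turns raw usage records into per-tenant charges. The record structure, resource names and prices are invented for illustration; they are not taken from Cyclops, jBilling or any other product.

from collections import namedtuple

# Hypothetical usage data record (UDR): one metered resource for one tenant.
Udr = namedtuple("Udr", ["tenant", "resource", "quantity"])

# Illustrative rate card: price per unit of each metered resource.
RATE_CARD = {
    "vcpu_hours": 0.05,         # currency units per vCPU-hour
    "gb_storage_hours": 0.001,  # currency units per GB-hour
}

def rate(udrs):
    """Aggregate raw UDRs into a per-tenant charge using the rate card."""
    charges = {}
    for udr in udrs:
        unit_price = RATE_CARD.get(udr.resource, 0.0)  # unknown resources are not billed
        charges[udr.tenant] = charges.get(udr.tenant, 0.0) + udr.quantity * unit_price
    return charges

sample = [
    Udr("tenant-a", "vcpu_hours", 720.0),
    Udr("tenant-a", "gb_storage_hours", 50000.0),
    Udr("tenant-b", "vcpu_hours", 96.0),
]
print(rate(sample))  # {'tenant-a': 86.0, 'tenant-b': 4.8}

A billing component would then only need to format these charges into an invoice, keeping the consumer-facing part as simple as the strategy demands.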

If we now consider PaaS services, there is scope to do a lot more with a unified RCB engine. The set of relevant metrics depends on the platform services, and therefore the RCB engine must be adaptive for such systems. The built-in measurement systems have to be evaluated and proper metering mechanisms enabled or provided.

In summary, rating-charging-billing innovations are necessary for several reasons (money, money, and more money …), and the innovations in the world of the clouds are just starting.

How to model service quality in the cloud

Why is service quality important?

A cloud can be seen as a service which is provided by a cloud provider and consumed by an end user. The cloud provider's goal is to maximize profit by providing cloud services to end users. Usually there are no fixed prices for using cloud services: users pay a variable price that depends on their consumption of cloud services. Service quality is a constraint on the cloud provider's optimization goal of profit maximization: the cloud provider should deliver cloud services with sufficiently good performance, capacity, security and availability while maximizing profit. Since quality costs money, a low-quality cloud service seems preferable to a high-quality one because it costs less. So why should profit-oriented cloud providers bother with quality at all?

A new view of service quality

In the new view, we do not see service quality as a restriction on profit maximization. Rather, cloud service quality is an enabler of further service consumption and therefore a force that increases the cloud provider's profit. If we think of cloud computing as a low-quality service with low availability (many outages), running slowly and in an insecure environment, one can easily see that cloud consumers will stop using the cloud service as soon as there are alternatives. But there is another argument in favour of clouds with a high quality of service (QoS): if a cloud service performs well, it can be used more often and by more users at once. Therefore an operator of a quality cloud service can handle more user requests, and at lower cost, than a non-quality-oriented cloud provider.

What is quality in the cloud?

Quality can have different meanings: for us it must be measured in terms of availability, performance, capacity and security. For each of these four terms we have to define metrics that measure quality. The following metrics are used in service management practice:

  1. Availability: Availability can only be calculated indirectly by measuring downtime, because outages are directly observable while normal operation of a system is not. When an outage occurs, the downtime is reported as the time difference between discovery of the outage and restoration of the service. Availability is then the ratio of total operating time minus downtime to total operating time (a short calculation sketch follows this list). The availability of a system can be tested using the Dependability Modeling Framework, i. e. a series of simulated random outages which tell system operators how stable their system is.
  2. Performance: Performance is usually tested by measuring the time it takes to perform a set of sample queries in a computer program. Such a time measurement is called a benchmark. The performance of a cloud service can be measured by running multiple standard user queries and measuring their execution time.
  3. Capacity: By capacity we mean storage which is free for service consumption. Capacity on disks can be measured directly by checking how much storage is used and how much is free. If we want to know how much working memory must be available, the measurement becomes a little more complicated: we must measure memory consumption during certain operations. Usually this is done by profiling the system: as in benchmarking, we run a set of sample queries and measure how much memory is consumed. From this we calculate the memory which is necessary to operate the cloud service.
  4. Security: Security is the most abstract quality indicator, because it cannot be measured directly. A common practice is to create a vector of potential security threats, estimate the probability that each threat will lead to an attack, and estimate the potential damage in case of an attack. A threat can then be scored as the product of the attack probability and the potential damage. The goal should be to mitigate the biggest risks within a given budget. A risk is mitigated when there are countermeasures against identified security threats (risk avoidance), measures that minimize potential damage (damage minimization), transfer of security risks to other organizations (e. g. insurance), or (authorized) risk acceptance. Because nobody can know all potential threats in advance, there is always an unknown residual risk which cannot be avoided. Security management of a cloud service is good when the security threat vector is regularly updated and the worst risks are mitigated.
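
To make the availability and security metrics concrete, the following minimal Python sketch computes availability from recorded downtime and ranks threats by a simple risk score (attack probability times potential damage). All numbers and threat names are invented for illustration.

# Availability: (total operating time - downtime) / total operating time.
def availability(total_hours, downtime_hours):
    return (total_hours - downtime_hours) / float(total_hours)

# Security risk per threat: attack probability times potential damage.
def risk_score(probability, potential_damage):
    return probability * potential_damage

# Illustrative values: one month (720 h) of operation with 4.5 h of outages.
print("availability = %.3f" % availability(720.0, 4.5))   # 0.994

# Illustrative threat vector; mitigate the biggest risks first.
threats = {
    "unpatched_hypervisor": risk_score(0.10, 50000.0),
    "weak_api_passwords":   risk_score(0.30,  8000.0),
    "ddos_on_dashboard":    risk_score(0.05,  2000.0),
}
for name, score in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(name, score)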

The given metrics are a good starting point for modelling service quality. In optimization there are two types of models: descriptive models and optimization models.

A descriptive model of service quality in the cloud

Descriptive models describe how a process is performed and are used to explore how the process works. Usually descriptive models answer "What if?" questions. They consist of input variables, a function that transforms the input into output, and a set of (unchangeable) parameters that influence the transformation function. A descriptive model of cloud service quality would describe how a particular configuration of service components (service assets like hardware, software etc. and the management of these assets) delivers a particular set of outputs in terms of service quality metrics. If we can, for example, increase the availability of the cloud service by using a recovery tool like Pacemaker, a descriptive model is able to tell us how the quality of the cloud service changes.

Sets are all possible resources we can use in our model to produce an outcome. In OpenStack we use hardware, software and labour. Parameters are attributes of the set entities which are not variable, e. g. labour cost, the price of hardware assets etc. All other attributes are called variables: the goal of the modeler is to change these variables and see what comes out. The outcomes are called consequences.

A descriptive model of the OpenStack service could be described as follows:

  • Sets:
    • Technology used in the OpenStack environment
      • Hardware (e. g. physical servers, CPU, RAM, hard disks and storage, network devices, cables, routers)
      • Operating system (e. g. Ubuntu, openSUSE)
      • Services used in OpenStack (e. g. Keystone, Glance, Quantum, Nova, Cinder, Horizon, Heat, Ceilometer)
      • HA Tools (e. g. Pacemaker, Keepalived, HAProxy)
      • Monitoring tools
      • Benchmark tools
      • Profiling tools
      • Security Tools (e. g. ClamAV)
    • Management of the OpenStack environment
      • Interval of availability tests.
      • Interval of performance benchmark tests.
      • Interval of profiling and capacity tests.
      • Interval of security tests.
      • Interval of Risk Management assessments (reconsideration of threat vector).
  • Parameters:
    • Budget to run the OpenStack technology and service management actions
      • Hardware costs
      • Energy costs
      • Software costs (you don’t have to pay licence fees in the Open Source world, but you still have maintenance costs)
      • Labor cost to handle tests
      • Labor costs to install technologies
      • Labor costs to maintain technologies
    • Price of technology installation, maintenance and service management actions
      • Price of tangible assets (hardware) and intangible assets (software, energy consumption)
      • Salaries, wages
    • Quality improvement by operation of particular technology or by performing service management actions
      • Price of tangible assets (hardware) and intangible assets (software, energy consumption)
      • Salaries, wages
  • Variables:
    • Quantities of a particular technology which should be installed and maintained:
      • Hardware (e. g. quantity of physical servers, CPU speed, RAM size, hard disk and storage size, number of network devices, speed of cables, routers)
      • Operating system of each node (e. g. Ubuntu, openSUSE)
      • OpenStack services per node (e. g. Keystone, Glance, Quantum, Nova, Cinder, Horizon, Heat, Ceilometer)
      • HA Tools per node (e. g. Pacemaker, Keepalived, HAProxy)
      • Monitoring tools
      • Benchmark tools
      • Profiling tools
      • Security Tools (e. g. ClamAV)
  • Consequences:
    • Costs for installation and maintenance of the OpenStack environment:
      • Infrastructure costs
      • Labour costs
    • Quality of the OpenStack service in terms of:
      • Availability
      • Performance
      • Capacity
      • Security

In the following picture we show a generic descriptive model for optimization of quality of an IT service:

Fig. 1: Descriptive model of service quality of an IT service.

Such a descriptive model is useful for exploring the quality improvements delivered by different system architectures and service management operations. The input variables form a vector of systems and operations: hardware, network architecture, operating systems, OpenStack services, HA tools, benchmark tools, profiling monitors, security software and the service operations performed by system administrators. One can experiment with different systems and operations and then check the outcomes. The outcomes are the costs (as a product of prices and quantities) and the service quality, measured with the metrics we have defined. A minimal sketch of such a model is given below.
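
The following Python sketch illustrates the structure of such a descriptive model: a configuration (quantities of set items) goes in, and cost and quality consequences come out. All prices and per-item quality gains are invented placeholders; in a real model they would come from price lists and from measurements with the metrics defined above.

# Descriptive "what if?" model: configuration in, consequences (cost, quality) out.
# All parameter values below are illustrative assumptions, not measurements.

PRICES = {            # parameters: cost per unit of each set item
    "server": 3000.0, "pacemaker_node": 500.0, "admin_hours": 80.0,
}
QUALITY_GAIN = {      # parameters: assumed contribution of each item to the metrics
    "server":         {"performance": 0.05, "capacity": 0.10},
    "pacemaker_node": {"availability": 0.02},
    "admin_hours":    {"security": 0.01, "availability": 0.01},
}

def describe(config):
    """Return the consequences (cost, quality metrics) of one configuration."""
    cost = sum(PRICES[item] * qty for item, qty in config.items())
    quality = {"availability": 0.9, "performance": 0.5,
               "capacity": 0.5, "security": 0.5}           # assumed baseline
    for item, qty in config.items():
        for metric, gain in QUALITY_GAIN.get(item, {}).items():
            quality[metric] = min(1.0, quality[metric] + gain * qty)
    return cost, quality

# "What if we add two Pacemaker nodes and ten admin hours per month?"
print(describe({"server": 4, "pacemaker_node": 2, "admin_hours": 10}))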

Even though the descriptive model is quite useful, it is very hard to actually optimize service quality with it. Therefore the descriptive model has to be extended into an optimization model.

An optimization model of service quality in the cloud

Optimization models enhance descriptive models by adding constraints on the inputs and by defining an objective function. Optimization models answer "What's best?" questions. Like descriptive models, they consist of input variables, a function that transforms the input into output, and a set of (unchangeable) parameters that influence the transformation function. Additionally, they contain constraints that restrict the set of feasible inputs and an objective function which tells the model user what output should be achieved.

An optimization model of the OpenStack service could be described as follows:

  • Sets:
    • Technology used in the OpenStack environment
      • Hardware (e. g. physical servers, CPU, RAM, hard disks and storage, network devices, cables, routers)
      • Operating system (e. g. Ubuntu, openSUSE)
      • Services used in OpenStack (e. g. Keystone, Glance, Quantum, Nova, Cinder, Horizon, Heat, Ceilometer)
      • HA Tools (e. g. Pacemaker, Keepalived, HAProxy)
      • Monitoring tools
      • Benchmark tools
      • Profiling tools
      • Security Tools (e. g. ClamAV)
    • Management of the OpenStack environment
      • Interval of availability tests.
      • Interval of performance benchmark tests.
      • Interval of profiling and capacity tests.
      • Interval of security tests.
      • Interval of Risk Management assessments (reconsideration of threat vector).
  • Parameters:
    • Budget to run the OpenStack technology and service management actions
      • Hardware costs
      • Energy costs
      • Software costs (you don’t have to pay licence fees in the Open Source world, but you still have maintenance costs)
      • Labor cost to handle tests
      • Labor costs to install technologies
      • Labor costs to maintain technologies
    • Price of technology installation, maintenance and service management actions
      • Price of tangible assets (hardware) and intangible assets (software, energy consumption)
      • Salaries, wages
    • Quality improvement by operation of particular technology or by performing service management actions
      • Price of tangible assets (hardware) and intangible assets (software, energy consumption)
      • Salaries, wages
  • Variables:
    • Quantities of a particular technology which should be installed and maintained:
      • Hardware (e. g. quantity of physical servers, CPU speed, RAM size, hard disk and storage size, number of network devices, speed of cables, routers)
      • Operating system of each node (e. g. Ubuntu, openSUSE)
      • OpenStack services per node (e. g. Keystone, Glance, Quantum, Nova, Cinder, Horizon, Heat, Ceilometer)
      • HA Tools per node (e. g. Pacemaker, Keepalived, HAProxy)
      • Monitoring tools
      • Benchmark tools
      • Profiling tools
      • Security Tools (e. g. ClamAV)
  • Constraints:
    • Budget limitation for installation and maintenance of the OpenStack environment:
      • Infrastructure costs
      • Labour costs
    • Technological constraints:
      • Incompatible technologies
      • Limited knowledge of system administrators
  • Objective Function:
    • Maximization of service quality in terms of:
      • Availability
      • Performance
      • Capacity
      • Security

The following picture shows a generic optimization model for an IT service:

Fig. 2: Service quality optimization model for an IT service.

With such an optimization model at hand, we are able to optimize the service quality of an OpenStack environment. What we need are clearly defined values for the sets, parameters, constraints and the objective function. We must be able to create a formal notation for all model elements.

What further investigations are required?

The formal model can be created once we know all the information required to assign concrete values to all model elements. This information is:

  • List of all set items (OpenStack system environment plus regular maintenance operations): First we must know all possible values for the systems and operations used in our OpenStack environment. We must know which hardware, OS and software we can use to operate OpenStack, and which actions (maintenance) must be performed regularly in order to keep OpenStack up and running.
  • List of all parameters (costs of OpenStack system environment elements, labour costs for maintenance operations, and quality improvement per set item): In a second step we must obtain all prices for our set items. This means we must know how much it costs to install a particular piece of hardware, OS or software, and how much the maintenance operations cost in terms of salaries. Additionally we must know the quality improvement delivered per set item: this can be determined by testing the environment with and without the item (additional system or service operation) and using our quality metrics.
  • List of constraints (budget limit and technical constraints): In a third step we must get to know the constraints, i. e. budget limits and technical constraints. A technical constraint could be a restriction such as being able to use only one profiling tool.
  • Required outcomes (targeted quality metric maximization): Once we know the sets, parameters and constraints, we must define how quality is measured in a function. Again we can use our quality metrics for that.
  • Computation of optimal variable values (which items should be bought): Once we know all model elements, we can compute the optimal variables. Since we will not get a strict mathematical formula for the objective function, and since we may also work with incomplete information, it is obvious that we should use a metaheuristic (e. g. evolutionary algorithms) to optimize service quality; a minimal sketch of such an approach follows this list.
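
As a hint of what such a metaheuristic could look like, the following Python sketch performs simple stochastic hill climbing (a stand-in for a full evolutionary algorithm) over configurations: it keeps the best configuration whose cost stays within the budget and whose quality score is highest. Prices, quality gains and the budget are invented placeholders.

import random

random.seed(42)

PRICES = {"server": 3000.0, "pacemaker_node": 500.0, "admin_hours": 80.0}  # assumed costs
GAINS  = {"server": 0.04, "pacemaker_node": 0.02, "admin_hours": 0.01}     # assumed quality gain per unit
BUDGET = 20000.0

def cost(config):
    return sum(PRICES[item] * qty for item, qty in config.items())

def quality(config):
    # Equally weighted stand-in for availability, performance, capacity and security.
    return sum(GAINS[item] * qty for item, qty in config.items())

def random_neighbour(config):
    """Mutate one quantity up or down by one unit (never below zero)."""
    item = random.choice(list(config))
    new = dict(config)
    new[item] = max(0, new[item] + random.choice([-1, 1]))
    return new

# Stochastic hill climbing under the budget constraint.
best = {"server": 1, "pacemaker_node": 0, "admin_hours": 0}
for _ in range(5000):
    candidate = random_neighbour(best)
    if cost(candidate) <= BUDGET and quality(candidate) > quality(best):
        best = candidate

print(best, cost(best), round(quality(best), 3))

With a real objective function (our quality metrics) and real prices, the same loop, or a more capable evolutionary algorithm, would search for the configuration that maximizes service quality within the budget.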

We have seen that creating a model for service quality optimization in the cloud requires a lot of investigation. Some details about it will be revealed in further articles.

 

IEEE GLOBECOM 2013 – 9th International Workshop on Broadband Wireless Access

The 9th International Workshop on Broadband Wireless Access (BWA) will be held in conjunction with IEEE GLOBECOM 2013 on December 9th, 2013, in Atlanta, USA.

This edition will provide a continuation of the successful BWA workshop series. This year, 31 technical papers will be presented, covering a wide range of topics in the field of broadband wireless access research, complemented by keynote speakers and panelists.

The broadband wireless access topics emphasized in this workshop are novel physical layer techniques, novel MAC design for broadband wireless access, further evolution of multi-antenna and cooperative communications, management of dense, heterogeneous networks, novel forms of spectrum access and usage, and context-aware communications.

GENERAL CHAIRS: Dr. Patrick Marsch, Nokia Solutions and Networks, Poland; Dr. Andreas Maeder, NEC Laboratories Europe, Germany

TPC CHAIRS: Dr. Arun Ghosh, AT&T Labs, USA; Prof. Giridhar K, IIT Madras, India; Dr. Peter Fertl, BMW Group Research & Techn., Germany

STEERING COMMITTEE: Prof. Thomas M. Bohnert, Zurich Univ. of Appl. Sciences, Switzerland; Dr. Dirk Staehle, DOCOMO Communications Laboratories Europe, Germany; Dr. Gabor Fodor, Ericsson Research, Sweden

Registration information is available on the GLOBECOM 2013 website. Workshop flyer: BWA_Flyer.

OpenStack Development Process

by Josef Spillner

Preface

OpenStack is a cloud computing project to provide an infrastructure as a service (IaaS). It is free open source software released under the terms of the Apache License. The project is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community. More than 200 companies joined the project.
As you can imagine, this is a large project with hundreds of developers and hundreds of thousands of lines of code. This post explains the development process in OpenStack and how to push code into OpenStack.

Start OpenStack development

1. Signing up for accounts:

The first thing you should do is sign up for a Launchpad account. Launchpad is a web application and website that allows users to develop and maintain software, particularly open-source software. Launchpad is developed and maintained by Canonical Ltd. The OpenStack project uses Launchpad for mailing lists, blueprints, groups and bug tracking. Each OpenStack project has a Launchpad project. You can create an account here.

Next, you should sign up for Gerrit. Gerrit is a free, web-based team code review tool. Software developers in a team can review each other's source code modifications in a web browser and approve or reject those changes. It integrates closely with Git, a distributed version control system. To interact with Gerrit you need to set up an SSH key, because all Gerrit commands use the SSH protocol on port 29418. A user can access Gerrit's Git repositories with the SSH or HTTP protocols. The user must be registered in Gerrit and have uploaded a public SSH key before any command-line commands can be used.

2. Communication tools: 

You should be on OpenStack's mailing list and also on your OpenStack project's mailing list. This is necessary in order to take part in discussions about code development, project design, etc. You can subscribe to the mailing lists by following these instructions.

For quick answers you can also use the IRC channels, for example for questions about how to work with particular methods or why tests fail. Every week you should take part in the IRC meeting of your project, where you can discuss release details, your own bugs, etc. Information about the IRC channels can be found here.

3. Setting up development environment:

You can develop on any system you want, but the most widely used and most comfortable for this purpose is Ubuntu. The first thing you need is Git. Git is a distributed revision control and source code management (SCM) system with an emphasis on speed. You need Git to pull code from and push code to Gerrit.

The next step is DevStack. DevStack’s mission is to provide and maintain tools used for the installation of the central OpenStack services from source (git repository master, or specific branches) suitable for development and operational testing. It also demonstrates and documents examples of configuring and running services as well as command line client usage.

git clone git://github.com/openstack-dev/devstack.git
cd devstack; ./stack.sh

Now you can clone an OpenStack project, for example Ceilometer:

git clone https://git.openstack.org/openstack/ceilometer

This post is not about the development of Ceilometer itself, so let's skip that part and imagine that you already have complete code.
Next you need to install pip. Pip is a tool for installing and managing Python packages.

sudo apt-get install python-pip

You mostly need pip to install tox.

sudo pip install tox

OpenStack has a lot of projects. For each project, the OpenStack Jenkins needs to be able to perform a lot of tasks. If each project has a slightly different way to accomplish those tasks, it makes the management of a consistent testing infrastructure very difficult to deal with. Additionally, because of the high volume of development changes and testing, the testing infrastructure has to be able to pre-cache artifacts that are normally fetched over the internet. To that end, each project should support a consistent interface for driving tests and other necessary tasks.

  • tox -epy26 – Unit tests for python2.6
  • tox -epy27 – Unit tests for python2.7
  • tox -epep8 –  pep8 checks

If all tests pass, you can push the code to Gerrit.

4. Publishing code:

Simply running git review should be sufficient to push your changes to Gerrit, assuming your repository is set up as described above. You don't need to read the rest of this section unless you want to use an alternate workflow.

If you want to push your changes without using git-review, you can push changes to gerrit like you would any other git repository, using the following syntax (assuming “gerrit” is configured as a remote repository):

git push gerrit HEAD:refs/for/$BRANCH[/$TOPIC]

Where $BRANCH is the name of the Gerrit branch to push to (usually “master”), and you may optionally specify a Gerrit topic by appending it after a slash character.

When you commit changes, the Git commit message should start with a short summary of 50 characters or less on a single line. The following paragraph(s) should explain the change in more detail.

If your change addresses a blueprint or a bug, be sure to mention it in the commit message using the following syntax:

Implements: blueprint BLUEPRINT
Closes-Bug: ####### (Partial-Bug or Related-Bug are options)

For example:
Adds keystone support

...Long multiline description of the change...

Implements: blueprint authentication
Closes-Bug: #123456
Change-Id: I4946a16d27f712ae2adf8441ce78e6c0bb0bb657

5. Code review:

Automatic testing occurs and the results are displayed. Reviewers comment in the comment box or in the code itself.

If someone leaves an in-line comment, you can see it in the expanded "Patch Set". The "Comments" column shows how many comments there are in each file. If you click a file name that has comments, a new page shows a diff with the reviewer's name and comments. Click "Reply" and write your response; it is saved as a draft if you click "Save". Then go back to the page that shows the list of patch sets, click "Review", and then click "Publish comments".

If your code is not ready for review, click "Work in Progress" to indicate that reviewers do not need to review it for now. Note that the button is not visible until you log in to the site.

 

Oleksii Serhiienko

This page is kept for archiving. Please navigate to our new site: blog.zhaw.ch/splab.

Oleksii Serhiienko is a part-time master's student and researcher at the ICCLab, working on the Rating-Charging-Billing initiative and its open-source solution, Cyclops.

Oleksii graduated from the Kiev Polytechnic University, majoring in computer engineering. He had already been part of the ICCLab community before: in 2014 he stayed in the laboratory as an exchange student through the IAESTE program. During that year he worked on OpenStack, in particular on the Ceilometer project, and his code was added to the "Icehouse" OpenStack release. After the internship he returned to Ukraine and continued working with OpenStack technologies and Python programming.

Currently, Oleksii is working on the SafeSwiss cloud project, in particular on the development of the prediction engine.

 

 

 
