Tag: Ceph

Testing Alluxio for Memory Speed Computation on Ceph Objects

In a previous blog post, we showed how “bringing the code to the data” can greatly improve computation performance through the active storage (also known as computational storage) concept. In our journey of investigating how to best make computation and storage ecosystems interact, in this blog post we analyze a somewhat opposite approach: “bringing the data close to the code”. What the two approaches have in common is the possibility of exploiting data locality, moving away in both cases from the complete disaggregation of computation and storage.

The approach in focus for this blog post is at the basis of the Alluxio project, which, in short, is a memory-speed distributed storage system. Alluxio enables data analytics workloads to access various storage systems and accelerates data-intensive applications. It manages data in memory and optionally on secondary storage tiers, such as cheaper SSDs and HDDs, for additional capacity. It achieves high read and write throughput by unifying data access to multiple underlying storage systems, which reduces data duplication among computation workloads. Alluxio lies between computation frameworks or jobs, such as Apache Spark, Apache MapReduce, or Apache Flink, and various kinds of storage systems, such as Amazon S3, OpenStack Swift, GlusterFS, HDFS or Ceph. Data is available locally for repeated accesses by all users of the compute cluster, regardless of the compute engine used, which avoids redundant copies of data in memory and drives down capacity requirements and thereby costs.

For more details on the components, the architecture and other features, please visit the Alluxio homepage. In the rest of the blog post we will present our experience in integrating Alluxio with our Ceph cluster and use a Spark application to demonstrate the obtained performance improvement (the reference analysis and testing we aimed to reproduce can be found here).

The framework used for testing

Fig. 1: Alluxio testing set-up.

Experimenting on Ceph Object Classes for Active Storage

What is active storage about?

In most distributed storage systems, the data nodes are decoupled from compute nodes. Disaggregation of storage from the compute servers is motivated by an improved efficiency of storage utilization and a better, mutually independent scalability of computation and storage.

While the above consideration is indisputable, several situations exist where moving computation close to the data brings important benefits. In particular, whenever the stored data is to be processed for analytics purposes, all the data needs to be moved from the storage to the compute cluster (consuming network bandwidth). After some analytics on the data, in most cases the results need to go back to the storage. Another important observation is that large amounts of resources (CPU and memory) are available in the storage infrastructure, and they usually remain underutilized. Active storage is a research area that studies the effects of moving computation close to data and analyzes the fields of application where data locality actually introduces benefits. In short, active storage allows computation tasks to run where the data is, leveraging the storage nodes’ underutilized resources and reducing data movement between storage and compute clusters.

There are many active storage frameworks in the research community. One example of active storage is the OpenStack Storlets framework, developed by IBM and integrated within OpenStack Swift deployments. IOStack is a European-funded project that builds on this concept for object storage. Another example is ZeroVM, which allows developers to push their application to their data instead of having to pull their data to their application.

So, what about Ceph?


Deploy Ceph and start using it: end to end tutorial – simple librados client (part 3/3)

(Part 1/3 – Installation – Part 2/3 – Troubleshooting)

This part of the tutorial describes how to set up a simple Ceph client using librados (for C++).

The only information that the client requires for cephx authentication is:

  • Endpoint of the monitor node
  • Keyring containing the pre-shared secret (we will use the admin keyring)

Install librados APIs

On Ubuntu, the library is available in the repositories

$ sudo apt-get install librados-dev

Create a client configuration file

This is the file from which librados will read the client configuration.

The content of the file is structured according to this template:

[global]
mon host = <IP address of one of the monitors>
keyring = <path/to/client.admin.keyring>

for example:

[global]
mon host = 192.168.252.10:6789
keyring = ./ceph.client.admin.keyring

The public endpoint of the monitor node can be retrieved with

$ ceph mon stat

The keyring file can be copied from the admin node; no changes to the file are needed. The same information contained in the file can be retrieved with the following command, which will also list the client capabilities:

$ ceph auth get client.admin
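
For reference, assuming the keyring is still in ceph-deploy’s working directory on the admin node (the hostname, user and path below follow the installation part of this tutorial and may differ in your setup), it can be copied next to the client with something like:

$ scp cluster-admin@mon0:~/ceph-cluster/ceph.client.admin.keyring .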

Connect to the cluster

The simple client below will perform the following operations:

  • Read the configuration file (ceph.conf) from the local directory
  • Get a handle to the cluster and an IO context on the “data” pool
  • Create a new object
  • Set an xattr
  • Read the object and xattr back
  • Print the list of pools
  • Print the list of objects in the “data” pool
  • Cleanup
#include <rados/librados.hpp>
#include <iostream>
#include <list>
#include <string>

int main(int argc, const char **argv)
{
  int ret = 0;
  /*
   * Error checks are omitted to keep the example readable.
   * After each Ceph operation:
   * if (ret < 0) error_condition
   * else success
   */

  // Get a cluster handle and connect to the cluster
  std::string cluster_name("ceph");
  std::string user_name("client.admin");
  librados::Rados cluster;
  cluster.init2(user_name.c_str(), cluster_name.c_str(), 0);
  cluster.conf_read_file("ceph.conf");
  cluster.connect();

  // Create an IO context on the "data" pool
  librados::IoCtx io_ctx;
  std::string pool_name("data");
  cluster.ioctx_create(pool_name.c_str(), io_ctx);

  // Write an object synchronously
  librados::bufferlist bl;
  std::string objectId("hw");
  std::string objectContent("Hello World!");
  bl.append(objectContent);
  io_ctx.write(objectId, bl, objectContent.size(), 0);

  // Add an xattr to the object
  librados::bufferlist lang_bl;
  lang_bl.append("en_US");
  io_ctx.setxattr(objectId, "lang", lang_bl);

  // Read the object back asynchronously
  librados::bufferlist read_buf;
  int read_len = 4194304;
  // Create the I/O completion
  librados::AioCompletion *read_completion = librados::Rados::aio_create_completion();
  // Send the read request
  io_ctx.aio_read(objectId, read_completion, &read_buf, read_len, 0);
  // Wait for the request to complete, and print the content
  read_completion->wait_for_complete();
  read_completion->get_return_value();
  std::cout << "Object name: " << objectId << "\n"
            << "Content: " << read_buf.c_str() << std::endl;

  // Read the xattr back
  librados::bufferlist lang_res;
  io_ctx.getxattr(objectId, "lang", lang_res);
  std::cout << "Object xattr: " << lang_res.c_str() << std::endl;

  // Print the list of pools
  std::list<std::string> pools;
  cluster.pool_list(pools);
  std::cout << "List of pools from this cluster handle" << std::endl;
  for (auto pool_id : pools) {
    std::cout << "\t" << pool_id << std::endl;
  }

  // Print the list of objects in the pool
  librados::ObjectIterator oit = io_ctx.objects_begin();
  librados::ObjectIterator oet = io_ctx.objects_end();
  std::cout << "List of objects from this pool" << std::endl;
  for (; oit != oet; oit++) {
    std::cout << "\t" << oit->first << std::endl;
  }

  // Remove the xattr
  io_ctx.rmxattr(objectId, "lang");
  // Remove the object
  io_ctx.remove(objectId);

  // Cleanup
  io_ctx.close();
  cluster.shutdown();
  return 0;
}

Find the pastebin here.

This example can be compiled and executed with

$ g++ -std=c++11 client.cpp -lrados -o cephclient
$ ./cephclient

Operate with cluster data from the command line

To quickly verify if an object was written or to remove it, use the following commands (e.g., from the monitor node).

  • List objects in pool data

    $ rados -p data ls
  • Check the location of an object in pool data

    $ ceph osd map data <object name>
  • Remove object from pool data

    $ rados rm <object name> --pool=data
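
A quick way to create a test object is to upload a local file with rados; the object and file names below are only examples:

  • Write a local file as an object into pool data

    $ rados -p data put test-object ./test-file.txt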

Deploy Ceph and start using it: end to end tutorial – Troubleshooting (part 2/3)

(Part 1/3 – Installation – Part 3/3 – librados client)

It is quite common that, after the initial installation, the Ceph cluster reports health warnings. Before using the cluster for storage (e.g., allowing clients to access it), a HEALTH_OK state should be reached:

cluster-admin@ceph-mon0:~/ceph-cluster$ ceph health
HEALTH_OK

This part of the tutorial provides some troubleshooting hints that I collected during the setup of my deployments. Other helpful resources are the Ceph IRC channel and mailing lists.

Useful diagnostic commands

A collection of diagnostic commands to check the status of the cluster is listed here. Running these commands is how we can understand whether the Ceph cluster is properly configured.

  1. Ceph status
    $ ceph status

    In this example, the disk for one OSD had been physically removed, so 2 out of 3 OSDs were in and up.

    cluster-admin@ceph-mon0:~/ceph-cluster$ ceph status
        cluster 28f9315e-6c5b-4cdc-9b2e-362e9ecf3509
         health HEALTH_OK
         monmap e1: 1 mons at {ceph-mon0=192.168.0.1:6789/0}, election epoch 1, quorum 0 ceph-mon0
         osdmap e122: 3 osds: 2 up, 2 in
          pgmap v4699: 192 pgs, 3 pools, 0 bytes data, 0 objects
                87692 kB used, 1862 GB / 1862 GB avail
                     192 active+clean
  2. Ceph health
    $ ceph health
    $ ceph health detail
  3. Pools and OSDs configuration and status
    $ ceph osd dump
    $ ceph osd dump --format=json-pretty

    The second version provides much more information, listing all the pools and OSDs and their configuration parameters.

  4. Tree of OSDs reflecting the CRUSH map
    $ ceph osd tree

    This is very useful to understand how the cluster is physically organized (e.g., which OSDs are running on which host).

  5. Listing the pools in the cluster
    $ ceph osd lspools

    This is particularly useful to check client operations (e.g., if new pools were created).

  6. Check the CRUSH rules
    $ ceph osd crush dump --format=json-pretty
  7. List the disks of one node from the admin node
    $ ceph-deploy disk list osd0
  8. Check the logs.
    Log files in /var/log/ceph/ will provide a lot of information for troubleshooting. Each node of the cluster will contain logs about the Ceph components that it runs, so you may need to SSH on different hosts to have a complete diagnosis.

Check your firewall and network configuration

Every node of the Ceph cluster must be able to successfully run

$ ceph status

If this operation times out without giving any results, it is likely that the firewall (or network configuration) is not allowing the nodes to communicate.

Another symptom of this problem is that OSDs cannot be activated, i.e., the ceph-deploy osd activate <args> command will timeout.

The Ceph monitor default port is 6789. Ceph OSDs and MDSs try to get the first available ports starting at 6800.

A typical Ceph cluster might need the following ports:

Mon:  6789
Mds:  6800
Osd1: 6801
Osd2: 6802
Osd3: 6803

Depending on your security requirements, you may want to simply allow any traffic to and from the Ceph cluster nodes.
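
If, instead, traffic should be restricted, rules along these lines can be added on every cluster node (the 6800:6810 range is an assumption based on the port list above; widen it if you run more OSD/MDS daemons, and make sure the rules are evaluated before any rule that drops traffic):

$ sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 6800:6810 -j ACCEPT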

References: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/2231

Try restarting first

Without going into fine-grained troubleshooting and log analysis, I have noticed that sometimes (especially after the first installation) a simple restart of the Ceph components is enough to move from a HEALTH_WARN to a HEALTH_OK state.

If some of the OSDs are not in or not up, like in the case below

    cluster 07d28faa-48ae-4356-a8e3-19d5b81e159e
     health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean; 1/2 in osds are down; clock skew detected on mon.1, mon.2
     monmap e3: 3 mons at {0=192.168.252.10:6789/0,1=192.168.252.11:6789/0,2=192.168.252.12:6789/0}, election epoch 36, quorum 0,1,2 0,1,2
     osdmap e27: 6 osds: 1 up, 2 in
      pgmap v57: 192 pgs, 3 pools, 0 bytes data, 0 objects
            84456 kB used, 7865 MB / 7948 MB avail
                 192 incomplete

try to start the OSD daemons with

# on osd0
$ sudo /etc/init.d/ceph -a start osd0

If the OSDs are in, but PGs are in weird states, like in the example below

cluster 07d28faa-48ae-4356-a8e3-19d5b81e159e
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; clock skew detected on mon.1, mon.2
     monmap e3: 3 mons at {0=192.168.252.10:6789/0,1=192.168.252.11:6789/0,2=192.168.252.12:6789/0}, election epoch 36, quorum 0,1,2 0,1,2
     osdmap e34: 6 osds: 6 up, 6 in
      pgmap v71: 192 pgs, 3 pools, 0 bytes data, 0 objects
            235 MB used, 23608 MB / 23844 MB avail
                 128 active+degraded
                  64 active+replay+degraded

try to restart the monitor(s) with

# on mon0
$ sudo /etc/init.d/ceph -a restart mon0

Unfortunately, a simple restart will be the solution in just a few rare cases. More troubleshooting will be required in the majority of the situations.

Unable to find keyring

During the deployment of the monitor nodes (the ceph-deploy mon create-initial step), Ceph may complain about missing keyrings:

[ceph_deploy.gatherkeys][WARNIN] Unable to find
/etc/ceph/ceph.client.admin.keyring on ['ceph-server']

If this warning is reported (even if the message is not an error), the Ceph cluster will probably not reach a healthy state.

The solution to this problem is to use exactly the same names for the hostnames (i.e., the output of hostname -s) and the Ceph node names.

This means that the files

  • /etc/hosts
  • /etc/hostname
  • .ssh/config (only for the admin node)

and the result of the command hostname -s should all report the same name for a given node.


Check that replication requirements can be met

I’ve found that most of my problems with Ceph health were related to wrong (i.e., unfeasible) replication policies.

This is particularly likely to happen in test deployments where one doesn’t care about setting up many OSDs or separating them across different hosts.

Some common pitfalls here may be:

  1. The number of required replicas is higher than the number of OSDs (!!)
  2. CRUSH is instructed to separate replicas across hosts but multiple OSDs are on the same host and there are not enough OSD hosts to satisfy this condition

The visible effect when running diagnostic commands is that PGs will be stuck in unhealthy states.

CASE 1: the replication level is such that it cannot be accomplished with the current cluster (e.g., a replica size of 3 with 2 OSDs).

Check the replicated size of pools with

$ ceph osd dump

Adjust the replicated size and min_size, if required, by running

$ ceph osd pool set <pool_name> size <value>
$ ceph osd pool set <pool_name> min_size <value>
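
As a worked example for this case: on a test cluster with only 2 OSDs, the default replicated size of 3 can never be satisfied. Values like the following (the pool name and numbers are illustrative) make the placement feasible again:

$ ceph osd pool get data size
$ ceph osd pool set data size 2
$ ceph osd pool set data min_size 1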

CASE 2: the replication policy would require replicas to sit on separate hosts, but multiple OSDs are running within the same host

Check what crush_ruleset applies to a certain pool with

$ ceph osd dump --format=json-pretty

In the example below, the pool with id 0 (“data”) is using the crush_ruleset with id 0

"pools": [
        { "pool": 0,
          "pool_name": "data",
          [...]
          "crush_ruleset": 0,  <----
          "object_hash": 2,
          [...]

then check with

$ ceph osd crush dump --format=json-pretty

what crush_ruleset 0 is about.

In the example below, we can observe that this rule says to replicate data by choosing the first available leaf in the CRUSH map, which is of type host.

"rules": [
        { "rule_id": 0,
          "rule_name": "replicated_ruleset",
          "ruleset": 0,
          "type": 1,
          "min_size": 1,
          "max_size": 10,
          "steps": [
                { "op": "take",
                  "item": -1,
                  "item_name": "default"},
                { "op": "chooseleaf_firstn",     <-----------
                  "num": 0,
                  "type": "host"},               <-----------
                { "op": "emit"}]}],

If not enough hosts are available, then the application of this rule will fail.

To allow replicas to be created on different OSDs but possibly on the same host, we need to create a new ruleset:

$ ceph osd crush rule create-simple replicate_within_hosts default osd

After the rule has been created, it should be listed in the output of

$ ceph osd crush dump

from where we can note its id.

The next step is to apply this rule to the pools as required:

$ ceph osd pool set data crush_ruleset <rulesetId>
$ ceph osd pool set metadata crush_ruleset <rulesetId>
$ ceph osd pool set rbd crush_ruleset <rulesetId>
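
Once the new ruleset is applied, the PGs should start peering and eventually reach the active+clean state; the progress can be followed with:

$ ceph health detail
$ ceph -w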

Deploy Ceph and start using it: end to end tutorial – Installation (part 1/3)

Ceph is one of the most interesting distributed storage systems available, with very active development and a complete set of features that make it a valuable candidate for cloud storage services. This tutorial goes through the steps (and some related troubleshooting) required to set up a Ceph cluster and access it with a simple client using librados. Please refer to the Ceph documentation for detailed insights on Ceph components.

(Part 2/3 – Troubleshooting – Part 3/3 – librados client)

Assumptions

  • Ceph version: 0.79
  • Installation with ceph-deploy
  • Operating system for the Ceph nodes: Ubuntu 14.04

Cluster architecture

In a minimum Ceph deployment, a Ceph cluster includes one Ceph monitor (MON) and a number of Object Storage Devices (OSD).

Administrative and control operations are issued from an admin node, which does not necessarily have to be separate from the Ceph cluster (e.g., the monitor node can also act as the admin node). Metadata server nodes (MDS) are required only for the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use MDS).

Preparing the storage

WARNING: preparing the storage for Ceph means deleting a disk’s partition table and losing all its data. Proceed only if you know exactly what you are doing!

Ceph will need some physical storage to be used as Object Storage Devices (OSD) and Journal. As the project documentation recommends, for better performance the Journal should be on a separate drive from the OSD. Ceph supports ext4, btrfs and xfs. I tried setting up clusters with both btrfs and xfs; however, I could achieve stable results only with xfs, so I will refer to the latter.

  1. Prepare a GPT partition table (I have observed stability issues when using a DOS partition table)
    $ sudo parted /dev/sd<x>
    (parted) mklabel gpt
    (parted) mkpart primary xfs 0 100%
    (parted) quit

    if parted complains about alignment issues (“Warning: The resulting partition is not properly aligned for best performance”), check these two links to find a solution: 1 and 2.

  2. Format the disk with xfs (you might need to install xfs tools with sudo apt-get install xfsprogs)
    $ sudo mkfs.xfs /dev/sd<x>1
  3. Create a Journal partition (raw/unformatted)
    $ sudo parted /dev/sd<y>
    (parted) mklabel gpt
    (parted) mkpart primary 0 100%
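
Before moving on, it can be useful to double-check the resulting partition layout (the device names are placeholders, as above):

$ sudo parted /dev/sd<x> print
$ sudo parted /dev/sd<y> print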

Install Ceph deploy

The ceph-deploy tool must only be installed on the admin node. Access to the other nodes for configuration purposes will be handled by ceph-deploy over SSH (with keys).

  1. Add Ceph repository to your apt configuration, replace {ceph-stable-release} with the Ceph release name that you want to install (e.g., emperor, firefly, …)
    $ echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  2. Install the trusted key with
    $ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
  3. If there is no repository for your Ubuntu version, you can try to select the newest one available by manually editing the file /etc/apt/sources.list.d/ceph.list and changing the Ubuntu codename (e.g., trusty -> raring)
    $ deb http://ceph.com/debian-emperor raring main
  4. Install ceph-deploy
    $ sudo apt-get update
    $ sudo apt-get install ceph-deploy

Setup the admin node

Each Ceph node will be set up with a user having passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access. With this configuration, ceph-deploy will be able to install and configure every node of the cluster.

NOTE: the hostnames (i.e., the output of hostname -s) must match the Ceph node names!

  1. [optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)
    $ sudo useradd -d /home/cluster-admin -m cluster-admin -s /bin/bash

    then set a password and switch to the new user

    $ sudo passwd cluster-admin
    $ su cluster-admin
  2. Install SSH server on all the cluster nodes (even if a cluster node is also an admin node)
    $ sudo apt-get install openssh-server
  3. Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions
    $ sudo useradd -d /home/ceph -m ceph -s /bin/bash
    $ sudo passwd ceph
    <Enter password>
    $ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    $ sudo chmod 0440 /etc/sudoers.d/ceph
  4. Edit the /etc/hosts file to add mappings to the cluster nodes. Example:
    $ cat /etc/hosts
    127.0.0.1       localhost
    192.168.58.2    mon0
    192.168.58.3    osd0
    192.168.58.4    osd1

    to enable DNS resolution with the hosts file, install dnsmasq

    $ sudo apt-get install dnsmasq
  5. Generate a public key for the admin user and install it on every Ceph node
    $ ssh-keygen
    $ ssh-copy-id ceph@mon0
    $ ssh-copy-id ceph@osd0
    $ ssh-copy-id ceph@osd1
  6. Setup an SSH access configuration by editing the .ssh/config file. Example:
    Host osd0
       Hostname osd0
       User ceph
    Host osd1
       Hostname osd1
       User ceph
    Host mon0
       Hostname mon0
       User ceph
  7. Before proceeding, check that ping and host commands work for each node (a passwordless SSH check is also sketched right after this list)
    $ ping mon0
    $ ping osd0
    ...
    $ host osd0
    $ host osd1
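
As mentioned in step 7, it is also worth verifying that passwordless SSH works as the ceph user towards every node (the hostnames are the ones used in this example); each command should print the node’s short hostname without asking for a password:

$ ssh ceph@mon0 hostname -s
$ ssh ceph@osd0 hostname -s
$ ssh ceph@osd1 hostname -s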

Setup the cluster

Administration of the cluster is done entirely from the admin node.

  1. Move to a dedicated directory to collect the files that ceph-deploy will generate. This will be the working directory for any further use of ceph-deploy
    $ mkdir ceph-cluster
    $ cd ceph-cluster
  2. Deploy the monitor node(s) – replace mon0 with the list of hostnames of the initial monitor nodes
    $ ceph-deploy new mon0
    [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new mon0
    [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
    [ceph_deploy.new][DEBUG ] Resolving host mon0
    [ceph_deploy.new][DEBUG ] Monitor mon0 at 192.168.58.2
    [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
    [ceph_deploy.new][DEBUG ] Monitor initial members are ['mon0']
    [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.58.2']
    [ceph_deploy.new][DEBUG ] Creating a random mon key...
    [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
    [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
  3. Add a public network entry in the ceph.conf file if you have separate public and cluster networks (check the network configuration reference)
    public network = {ip-address}/{netmask}
  4. Install ceph in all the nodes of the cluster. Use the --no-adjust-repos option if you are using different apt configurations for ceph. NOTE: you may need to confirm the authenticity of the hosts if you are accessing them over SSH for the first time!
    Example (replace mon0 osd0 osd1 with your node names):

    $ ceph-deploy install --no-adjust-repos mon0 osd0 osd1
  5. Create monitor and gather keys
    $ ceph-deploy mon create-initial
  6. The content of the working directory after this step should look like
    cadm@mon0:~/my-cluster$ ls
    ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring  release.asc

Prepare OSDs and OSD Daemons

When deploying OSDs, consider that a single node can run multiple OSD Daemons and that the journal partition should be on a separate drive from the OSD for better performance.

  1. List disks on a node (replace osd0 with the name of your storage node(s))
    $ ceph-deploy disk list osd0

    This command is also useful for diagnostics: when an OSD is correctly mounted on Ceph, you should see entries similar to this one in the output:

    [ceph-osd1][DEBUG ] /dev/sdb :
    [ceph-osd1][DEBUG ] /dev/sdb1 other, xfs, mounted on /var/lib/ceph/osd/ceph-0
  2. If you haven’t already prepared your storage, or if you want to reformat a partition, use the zap command (WARNING: this will erase the partition)
    $ ceph-deploy disk zap --fs-type xfs osd0:/dev/sd<x>1
  3. Prepare and activate the disks (ceph-deploy also has a create command that should combine these two operations, but for some reason it was not working for me). In this example, we are using /dev/sd<x>1 as OSD and /dev/sd<y>2 as journal on two different nodes, osd0 and osd1
    $ ceph-deploy osd prepare osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
    $ ceph-deploy osd activate osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2

Final steps

Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment.

  1. Copy keys and configuration files (replace mon0 osd0 osd1 with the names of your Ceph nodes)
    $ ceph-deploy admin mon0 osd0 osd1
  2. Ensure proper permissions for admin keyring
    $ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  3. Check the Ceph status and health
    $ ceph health
    $ ceph status

    If, at this point, the reported health of your cluster is HEALTH_OK, then most of the work is done. Otherwise, try to check the troubleshooting part of this tutorial.

Revert installation

There are useful commands to purge the Ceph installation and configuration from every node so that one can start over again from a clean state.

This will remove Ceph configuration and keys

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

This will also remove Ceph packages

ceph-deploy purge {ceph-node} [{ceph-node}]

Before getting a healthy Ceph cluster I had to purge and reinstall several times, cycling through the “Setup the cluster”, “Prepare OSDs and OSD Daemons” and “Final steps” parts, while removing every warning that ceph-deploy was reporting.

 

Distributed File Systems Series: Ceph Introduction

With this post we are going to start a new series on Distributed File Systems. We are going to start with an introduction to a file system that is enjoying a good amount of success: Ceph.

Ceph is a distributed, parallel, fault-tolerant file system that can offer object, block, and file storage from a single cluster. Ceph’s objective is to provide an open-source storage platform with no single point of failure that is highly available and highly scalable.

A Ceph Cluster has three main components:

  • OSDs: Ceph Object Storage Devices (OSDs) are the core of a Ceph cluster and are in charge of storing data, handling data replication and recovery, and rebalancing data. A Ceph Cluster requires at least two OSDs. OSDs also check other OSDs for a heartbeat and provide this information to Ceph Monitors.
  • Monitors: A Ceph Monitor keeps the state of the Ceph Cluster using maps, e.g. the monitor map, the OSD map and the CRUSH map. Ceph also maintains a history, also called an epoch, of each state change in the Ceph Cluster components.
  • MDSs: A Ceph MetaData Server (MDS) stores metadata for the Ceph FileSystem clients. Thanks to Ceph MDSs, POSIX file system users are able to execute basic commands such as ls and find without overloading the OSDs. Ceph MDSs can provide both metadata high availability (i.e., multiple MDS instances, at least one in standby) and scalability (i.e., multiple MDS instances, all active and managing different directory subtrees).

Ceph Architecture (Source: docs.openstack.org)

One of the key features of Ceph is the way data is managed. Ceph clients and OSDs compute data locations using a pseudo-random algorithm called Controlled Replication Under Scalable Hashing (CRUSH). The CRUSH algorithm distributes the work amongst clients and OSDs, which frees them from depending on a central lookup table to retrieve location information and allows for a high degree of scaling. CRUSH also uses intelligent data replication to guarantee resiliency.
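
This can be observed directly on a running cluster: asking where an object is (or would be) placed returns the placement group and the set of OSDs computed by CRUSH, without consulting any central table (the pool and object names below are only examples):

$ ceph osd map data hw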

Ceph allows clients to access data through different interfaces:

  • Object Storage: The RADOS Gateway (RGW), the Ceph Object Storage component, provides RESTful APIs compatible with Amazon S3 and OpenStack Swift. It sits on top of the Ceph Storage Cluster and has its own user database, authentication, and access control. The RADOS Gateway makes use of a unified namespace, which means that you can write data using one API, e.g. the Amazon S3-compatible API, and read it with another, e.g. the OpenStack Swift-compatible API. Ceph Object Storage doesn’t make use of the Ceph MetaData Servers.

Ceph Clients (Source: ceph.com)

  • Block Devices: The RADOS Block Device (RBD), the Ceph Block Device component, provides resizable, thin-provisioned block devices. The block devices are striped across multiple OSDs in the Ceph cluster for high performance. The Ceph Block Device component also provides image snapshotting and snapshot layering (i.e., cloning of images). Ceph RBD supports QEMU/KVM hypervisors and can easily be integrated with OpenStack and CloudStack (or any other cloud stack that uses libvirt).
  • Filesystem: CephFS, the Ceph Filesystem component, provides a POSIX-compliant filesystem layered on top of the Ceph Storage Cluster, meaning that files get mapped to objects in the Ceph cluster. Ceph clients can mount the Ceph Filesystem either as a kernel object or as a Filesystem in User Space (FUSE). CephFS separates the metadata from the data, storing the metadata in the MDSs and the file data in one or more OSDs in the Ceph cluster. Thanks to this separation, the Ceph Filesystem can provide high performance without stressing the Ceph Storage Cluster.

Our next topic in the Distributed File Systems Series will be an introduction to GlusterFS.

Dependability Modeling: Testing Availability from an End User’s Perspective

In a former article we spoke about testing High Availability in OpenStack with the Chaos Monkey. While the Chaos Monkey is a great tool to test what happens if some system components fail, it does not reveal anything about the general strengths and weaknesses of different system architectures.  In order to determine if an architecture with 2 redundant controller nodes and 2 compute nodes offers a higher availability level than an architecture with 3 compute nodes and only 1 controller node, a framework for testing different architectures is required. The “Dependability Modeling Framework” seems to be a great opportunity to evaluate different system architectures on their ability to achieve availability levels required by end users.

Overcome biased design decisions

The Dependability Modeling Framework is a hierarchical modeling framework for dependability evaluation of system architectures. Its purpose is to model different alternative architectural solutions for one IT system and then calculate the dependability characteristics of each different IT system realization. The calculated dependability values can help IT architects to rate system architectures before they are implemented and to choose the “best” approach from different possible alternatives. Design decisions which are based on the Dependability Modeling Framework have the potential to be more reflective and less biased than purely intuitive design decisions, since no particular architectural design is preferred over the others. The fit of a particular solution is tested against previously defined criteria before any decision is taken.

Build models on different levels

The Dependability Models are built on four levels: the user level, the function level, the service level and the resource level. The levels reflect the method of first identifying user interactions as well as the system functions and services which are provided to users, and then finding the resources which contribute to the accomplishment of the required functions. Once all user interactions, system functions, services and resources are identified, models are built (on each of the four levels) to assess the impact of component failures on the quality of the service delivered to end users. The models are connected in a dependency graph to show the different dependencies between user interactions, system functions, services and system resources. Once all dependencies are clear, the impact of a system resource outage on user functions can be calculated straightforwardly: if the failing resource was the only resource delivering functions which are critical to the end user, the impact of the resource outage is very high. If there are redundant resources, services or functions, the impact is much less severe.
The dependency graph below demonstrates how end user interactions depend on functions, services and resources.

Fig. 1: Dependency Graph

The Dependability Model makes the impact of resource outages calculable. One can easily see that a Chaos Monkey test can verify such dependability graphs, since the Chaos Monkey effectively tests outages of system resources by randomly unplugging devices. The less obvious part of the Dependability Modelling Framework is the calculation of resource outage probabilities. The probability of an outage can only be obtained by regularly measuring the unavailability of resources over a long time frame. Since there is no such data available, one must estimate the probabilities and use this estimation as a parameter to calculate the dependability characteristics of the resources. A sensitivity analysis can reveal if the proposed architecture offers a reliable and highly available solution.


Dependability Modeling on OpenStack HA Environment

Dependability Modeling could also be performed on the OpenStack HA Environment we use at ICCLab. It is obvious that High Availability could be realized in many different ways: we could use e.g. a distributed DRBD device to store all data used in OpenStack and synchronize the DRBD device with Pacemaker. Another possible solution is to build Ceph clusters and again use Pacemaker as the synchronization tool. An alternative to Pacemaker is keepalived, which also offers synchronization and control mechanisms for Load Balancing and High Availability. And of course one could also think of using HAProxy for Load Balancing instead of Ceph or DRBD.
In short: different architectures can be modelled. How this is done will be the subject of a further blog post.

Evaluation of HA technologies for OpenStack

As proposed in a former article, different technologies must be evaluated in order to make the current MobileCloud environment suitable to High Availability (HA) requirements. The following article lists a basic evaluation of the different technologies that could be used.

Basically there are four technologies which allow building a reliable HA infrastructure for OpenStack:

  1. Build OpenStack on top of Corosync and use the Pacemaker cluster resource manager to replicate OpenStack services over multiple redundant nodes.
  2. For clustering of storage a DRBD block storage solution can be used. DRBD is a software that replicates block storage (hard disks etc.) over multiple nodes.
  3. Object storage services can be clustered via Ceph. Ceph is a clustered storage solution which is able to cluster not only block devices but also data objects and filesystems. Obviously Swift ObjectStore could be made highly available by using Ceph.
  4. OpenStack has MySQL as an underlying database system which is used to manage the different OpenStack services. Instead of using a standalone MySQL database server, one could use a MySQL Galera database cluster to make MySQL highly available too.

The different technologies have been evaluated according to their ability to make different OpenStack components highly available. The following table shows which technologies could be used to make the different OpenStack Services used in MobileCloud suitable to High Availability requirements.

Table 1.1: OpenStack Services and Clustering Technologies which make them suitable to HA requirements.

It is obvious that the different technologies can be used in different architectural setups, and that they must be used in a multi-node OpenStack architecture. An architecture proposal will follow in a further article.