{"id":4844,"date":"2014-04-30T15:03:51","date_gmt":"2014-04-30T13:03:51","guid":{"rendered":"http:\/\/blog.zhaw.ch\/icclab\/?p=4844"},"modified":"2014-05-02T13:52:21","modified_gmt":"2014-05-02T11:52:21","slug":"deploy-ceph-and-start-using-it-end-to-end-tutorial-installation-part-13","status":"publish","type":"post","link":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-and-start-using-it-end-to-end-tutorial-installation-part-13\/","title":{"rendered":"Deploy Ceph and start using it: end to end tutorial &#8211; Installation (part 1\/3)"},"content":{"rendered":"<p><a href=\"http:\/\/ceph.com\/\" target=\"_blank\">Ceph<\/a> is one of the most interesting distributed storage systems available, with very <a href=\"http:\/\/www.ohloh.net\/p\/ceph\" target=\"_blank\">active development<\/a> and a complete set of features that make it a valuable candidate for cloud storage services. This tutorial goes through the steps (and some related troubleshooting) required to set up a Ceph cluster and access it with a simple client using <em>librados<\/em>. Please refer to the <a href=\"http:\/\/ceph.com\/docs\/master\/\" target=\"_blank\">Ceph documentation<\/a> for detailed insights on Ceph components.<\/p>\n<p>(<a title=\"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)\" href=\"http:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\">Part 2\/3 &#8211; Troubleshooting<\/a> &#8211; <a title=\"Deploy Ceph and start using it: end to end tutorial \u2013 simple librados client (part 3\/3)\" href=\"http:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-librados-client-part-33\/\">Part 3\/3 &#8211; librados client<\/a>)<\/p>\n<h1>Assumptions<\/h1>\n<ul>\n<li>Ceph version: 0.79<\/li>\n<li>Installation with <code>ceph-deploy<\/code><\/li>\n<li>Operating system for the Ceph nodes: Ubuntu 14.04<\/li>\n<\/ul>\n<h1>Cluster architecture<\/h1>\n<p>In a minimal Ceph deployment, a Ceph cluster includes one Ceph monitor
(MON) and a number of Object Storage Devices (OSD).<\/p>\n<p>Administrative and control operations are issued from an admin node, which does not need to be separate from the Ceph cluster (e.g., the monitor node can also act as the admin node). Metadata server nodes (MDS) are required only for the Ceph Filesystem (<span style=\"color: #3e4349\">Ceph Block Devices and Ceph Object Storage do not use MDS).<\/span><\/p>\n<h1>Preparing the storage<\/h1>\n<p><strong>WARNING: <\/strong>preparing the storage for Ceph means deleting a disk&#8217;s partition table and losing all its data. Proceed only if you know exactly what you are doing!<\/p>\n<p>Ceph will need some physical storage to be used as Object Storage Devices (OSD) and Journal. <a href=\"http:\/\/ceph.com\/docs\/master\/rados\/deployment\/ceph-deploy-osd\/#prepare-osds\" target=\"_blank\">As the project documentation recommends<\/a>, for better performance the Journal should be on a separate drive from the OSD. <a href=\"http:\/\/ceph.com\/docs\/master\/rados\/configuration\/filesystem-recommendations\/?highlight=btrfs#filesystems\" target=\"_blank\">Ceph supports<\/a> <em>ext4, btrfs<\/em> and <em>xfs<\/em>.
I tried setting up clusters with both <em>btrfs<\/em> and <em>xfs<\/em>; however, I could achieve stable results only with <em>xfs<\/em>, so I will refer to the latter.<\/p>\n<ol>\n<li>Prepare a <em>GPT<\/em> partition table (I have observed stability issues when using a <em>dos<\/em> partition table)\n<pre style=\"color: #000000\">$ sudo parted \/dev\/sd&lt;x&gt;\r\n(parted) mklabel gpt\r\n(parted) mkpart primary xfs 0 100%\r\n(parted) quit<\/pre>\n<p>If <em>parted<\/em> complains about alignment issues (&#8220;Warning: The resulting partition is not properly aligned for best performance&#8221;), check these two links to find a solution: <a href=\"http:\/\/rainbow.chard.org\/2013\/01\/30\/how-to-align-partitions-for-best-performance-using-parted\/\" target=\"_blank\">1<\/a> and <a href=\"http:\/\/people.redhat.com\/msnitzer\/docs\/io-limits.txt\" target=\"_blank\">2<\/a>.<\/li>\n<li>Format the disk with <em>xfs<\/em> (you might need to install the <em>xfs<\/em> tools with <code>sudo apt-get install xfsprogs<\/code>)\n<pre style=\"color: #000000\">$ sudo mkfs.xfs \/dev\/sd&lt;x&gt;1<\/pre>\n<\/li>\n<li>Create a Journal partition (raw\/unformatted)\n<pre style=\"color: #000000\">$ sudo parted \/dev\/sd&lt;y&gt;\r\n(parted) mklabel gpt\r\n(parted) mkpart primary 0 100%<\/pre>\n<\/li>\n<\/ol>\n<h1>Install Ceph deploy<\/h1>\n<p>The <code>ceph-deploy<\/code> tool must only be installed on the admin node.
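The storage preparation above (GPT label, an xfs data partition, a raw journal partition) can be condensed into a small dry-run helper that only prints the destructive commands for review. This is a hypothetical sketch, not part of Ceph's tooling; device names like /dev/sdb and /dev/sdc are placeholders you must adapt, and the printed commands destroy the partition tables of both disks if actually run.

```shell
# Hypothetical dry-run helper: print (do not run) the commands that would
# prepare a data disk and a journal disk for Ceph, as in the steps above.
# Review the output first; only then pipe it to `sudo sh` -- these commands
# destroy the partition tables of both disks.
prep_cmds() {
    data_disk="$1"     # e.g. /dev/sdb -- placeholder, adapt to your host
    journal_disk="$2"  # e.g. /dev/sdc -- placeholder, adapt to your host
    echo "parted -s $data_disk mklabel gpt mkpart primary xfs 0 100%"
    echo "mkfs.xfs ${data_disk}1"
    echo "parted -s $journal_disk mklabel gpt mkpart primary 0 100%"
}

prep_cmds /dev/sdb /dev/sdc
```

Printing the commands before running them makes it harder to zap the wrong disk; `parted -s` is parted's non-interactive (scripted) mode, equivalent to the interactive session shown above.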
Access to the other nodes for configuration purposes will be handled by <code>ceph-deploy<\/code> over SSH (with keys).<\/p>\n<ol>\n<li>Add the Ceph repository to your apt configuration, replacing <code>{ceph-stable-release}<\/code> with the Ceph release name that you want to install (e.g., emperor, firefly, &#8230;)\n<pre style=\"color: #222222\">$ echo deb http:\/\/ceph.com\/debian-{ceph-stable-release}\/ $(lsb_release -sc) main | sudo tee \/etc\/apt\/sources.list.d\/ceph.list<\/pre>\n<\/li>\n<li>Install the trusted key with\n<pre style=\"color: #000000\">$ wget -q -O- 'https:\/\/ceph.com\/git\/?p=ceph.git;a=blob_plain;f=keys\/release.asc' | sudo apt-key add -<\/pre>\n<\/li>\n<li>If there is no repository for your Ubuntu version, you can try to select the newest one available by manually editing the file <code>\/etc\/apt\/sources.list.d\/ceph.list<\/code> and changing the Ubuntu codename (e.g., trusty -&gt; raring)\n<pre style=\"color: #000000\">deb http:\/\/ceph.com\/debian-emperor raring main<\/pre>\n<\/li>\n<li>Install ceph-deploy\n<pre style=\"color: #000000\">$ sudo apt-get update\r\n$ sudo apt-get install ceph-deploy<\/pre>\n<\/li>\n<\/ol>\n<h1>Setup the admin node<\/h1>\n<p>Each Ceph node will be set up with a user having passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access.
With this configuration, <code>ceph-deploy<\/code> will be able to install and configure every node of the cluster.<\/p>\n<p><strong>NOTE:<\/strong> the hostnames (i.e., the output of <code>hostname -s<\/code>) must match the Ceph node names!<\/p>\n<ol>\n<li>[optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)\n<pre style=\"color: #000000\">$ sudo useradd -d \/home\/cluster-admin -m cluster-admin -s \/bin\/bash<\/pre>\n<p>then set a password and switch to the new user<\/p>\n<pre style=\"color: #000000\">$ sudo passwd cluster-admin\r\n$ su cluster-admin<\/pre>\n<\/li>\n<li>Install an SSH server on all the cluster nodes (even if a cluster node is also an admin node)\n<pre style=\"color: #000000\">$ sudo apt-get install openssh-server<\/pre>\n<\/li>\n<li>Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions\n<pre style=\"color: #000000\">$ sudo useradd -d \/home\/ceph -m ceph -s \/bin\/bash\r\n$ sudo passwd ceph\r\n&lt;Enter password&gt;\r\n$ echo \"ceph ALL = (root) NOPASSWD:ALL\" | sudo tee \/etc\/sudoers.d\/ceph\r\n$ sudo chmod 0440 \/etc\/sudoers.d\/ceph<\/pre>\n<\/li>\n<li>Edit the <code>\/etc\/hosts<\/code> file to add mappings to the cluster nodes.
Example:\n<pre style=\"color: #000000\">$ cat \/etc\/hosts\r\n127.0.0.1       localhost\r\n192.168.58.2    mon0\r\n192.168.58.3    osd0\r\n192.168.58.4    osd1<\/pre>\n<p>To enable DNS resolution with the hosts file, install dnsmasq<\/p>\n<pre style=\"color: #000000\">$ sudo apt-get install dnsmasq<\/pre>\n<\/li>\n<li>Generate a public key for the admin user and install it on every Ceph node\n<pre style=\"color: #000000\">$ ssh-keygen\r\n$ ssh-copy-id ceph@mon0\r\n$ ssh-copy-id ceph@osd0\r\n$ ssh-copy-id ceph@osd1<\/pre>\n<\/li>\n<li>Set up an SSH access configuration by editing the <code>.ssh\/config<\/code> file. Example:\n<pre style=\"color: #000000\">Host osd0\r\n   Hostname osd0\r\n   User ceph\r\nHost osd1\r\n   Hostname osd1\r\n   User ceph\r\nHost mon0\r\n   Hostname mon0\r\n   User ceph<\/pre>\n<\/li>\n<li>Before proceeding, check that the <code>ping<\/code> and <code>host<\/code> commands work for each node\n<pre style=\"color: #000000\">$ ping mon0\r\n$ ping osd0\r\n...\r\n$ host osd0\r\n$ host osd1<\/pre>\n<\/li>\n<\/ol>\n<h1>Setup the cluster<\/h1>\n<p>Administration of the cluster is done entirely from the admin node.<\/p>\n<ol>\n<li>Move to a dedicated directory to collect the files that <code>ceph-deploy<\/code> will generate.
This will be the working directory for any further use of <code>ceph-deploy<\/code>\n<pre style=\"color: #000000\">$ mkdir ceph-cluster\r\n$ cd ceph-cluster<\/pre>\n<\/li>\n<li>Deploy the monitor node(s) &#8211; replace <code>mon0<\/code> with the list of hostnames of the initial monitor nodes\n<pre style=\"color: #000000\">$ ceph-deploy new mon0\r\n[ceph_deploy.cli][INFO  ] Invoked (1.4.0): \/usr\/bin\/ceph-deploy new mon0\r\n[ceph_deploy.new][DEBUG ] Creating new cluster named ceph\r\n[ceph_deploy.new][DEBUG ] Resolving host mon0\r\n[ceph_deploy.new][DEBUG ] Monitor mon0 at 192.168.58.2\r\n[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds\r\n[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon0']\r\n[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.58.2']\r\n[ceph_deploy.new][DEBUG ] Creating a random mon key...\r\n[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...\r\n[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...<\/pre>\n<\/li>\n<li>Add a public network entry to the <code>ceph.conf<\/code> file if you have separate public and cluster networks (check the <a href=\"http:\/\/ceph.com\/docs\/master\/rados\/configuration\/network-config-ref\/\">network configuration reference<\/a>)\n<pre style=\"color: #000000\">public network = {ip-address}\/{netmask}<\/pre>\n<\/li>\n<li>Install Ceph on all the nodes of the cluster. Use the <code>--no-adjust-repos<\/code> option if you are using a different apt configuration for Ceph.
<strong>NOTE:<\/strong> you may need to confirm the authenticity of the hosts if you&#8217;re accessing them over SSH for the first time!<br \/>\nExample (replace <code>mon0 osd0 osd1<\/code> with your node names):<\/p>\n<pre style=\"color: #000000\">$ ceph-deploy install --no-adjust-repos mon0 osd0 osd1<\/pre>\n<\/li>\n<li>Create the monitor and gather keys\n<pre style=\"color: #000000\">$ ceph-deploy mon create-initial<\/pre>\n<\/li>\n<li>The content of the working directory after this step should look like\n<pre style=\"color: #000000\">cadm@mon0:~\/my-cluster$ ls\r\nceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring  release.asc<\/pre>\n<\/li>\n<\/ol>\n<h1>Prepare OSDs and OSD Daemons<\/h1>\n<p>When deploying OSDs, consider that a single node can run multiple OSD Daemons and that the journal partition should be on a separate drive from the OSD for better performance.<\/p>\n<ol>\n<li>List disks on a node (replace <code>osd0<\/code> with the name of your storage node(s))\n<pre style=\"color: #000000\">$ ceph-deploy disk list osd0<\/pre>\n<p>This command is also useful for diagnostics: when an OSD is correctly mounted by Ceph, you should see entries similar to this one in the output:<\/p>\n<pre style=\"color: #000000\">[ceph-osd1][DEBUG ] \/dev\/sdb :\r\n[ceph-osd1][DEBUG ] \/dev\/sdb1 other, xfs, mounted on \/var\/lib\/ceph\/osd\/ceph-0<\/pre>\n<\/li>\n<li>If you haven&#8217;t already prepared your storage, or if you want to reformat a partition, use the zap command <strong>(WARNING:<\/strong> this will erase the partition)\n<pre style=\"color: #000000\">$ ceph-deploy disk zap --fs-type xfs osd0:\/dev\/sd&lt;x&gt;1<\/pre>\n<\/li>\n<li>Prepare and activate the disks (<code>ceph-deploy<\/code> also has a <code>create<\/code> command that combines these two operations, but for some reason it was not working for me).
In this example, we are using <code>\/dev\/sd&lt;x&gt;1<\/code> as the OSD and <code>\/dev\/sd&lt;y&gt;2<\/code> as the journal on two different nodes, <code>osd0<\/code> and <code>osd1<\/code>\n<pre style=\"color: #000000\">$ ceph-deploy osd prepare osd0:\/dev\/sd&lt;x&gt;1:\/dev\/sd&lt;y&gt;2 osd1:\/dev\/sd&lt;x&gt;1:\/dev\/sd&lt;y&gt;2\r\n$ ceph-deploy osd activate osd0:\/dev\/sd&lt;x&gt;1:\/dev\/sd&lt;y&gt;2 osd1:\/dev\/sd&lt;x&gt;1:\/dev\/sd&lt;y&gt;2<\/pre>\n<\/li>\n<\/ol>\n<h1>Final steps<\/h1>\n<p>Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment.<\/p>\n<ol>\n<li>Copy the keys and configuration files (replace <code>mon0 osd0 osd1<\/code> with the names of your Ceph nodes)\n<pre style=\"color: #000000\">$ ceph-deploy admin mon0 osd0 osd1<\/pre>\n<\/li>\n<li><span style=\"color: #000000\">Ensure proper permissions for the admin keyring<\/span>\n<pre style=\"color: #000000\">$ sudo chmod +r \/etc\/ceph\/ceph.client.admin.keyring<\/pre>\n<\/li>\n<li><span style=\"color: #000000\">Check the Ceph status and health<\/span>\n<pre>$ ceph health\r\n$ ceph status<\/pre>\n<p>If, at this point, the reported health of your cluster is <code>HEALTH_OK<\/code>, then most of the work is done.
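Freshly activated OSDs can take a moment to settle, so `ceph health` may not report `HEALTH_OK` on the first try. A small polling loop saves re-running the command by hand; this is a hypothetical helper, not part of Ceph's tooling, and the health command is a parameter so the loop itself can be exercised without a cluster.

```shell
# Hypothetical helper: poll a health command until it prints HEALTH_OK,
# or give up after a number of tries. The default assumes `ceph health`
# is available on the admin node.
wait_healthy() {
    cmd="${1:-ceph health}"   # command whose output should become HEALTH_OK
    tries="${2:-30}"          # maximum number of polls
    i=0
    while [ "$i" -lt "$tries" ]; do
        status="$($cmd 2>/dev/null)"
        if [ "$status" = "HEALTH_OK" ]; then
            echo "cluster healthy"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    echo "gave up; last status: $status" >&2
    return 1
}

# On a real admin node you would simply run (commented out here):
# wait_healthy
```

Passing the command as a parameter is a deliberate design choice: a stub function that echoes a canned status can stand in for `ceph health`, so the loop can be tested on any machine.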
Otherwise, check the <a href=\"http:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\">troubleshooting part<\/a> of this tutorial.<\/li>\n<\/ol>\n<h1>Revert installation<\/h1>\n<p>There are useful commands to purge the Ceph installation and configuration from every node so that one can start over again from a clean state.<\/p>\n<p>This will remove the Ceph configuration and keys<\/p>\n<pre style=\"color: #000000\">ceph-deploy purgedata {ceph-node} [{ceph-node}]\r\nceph-deploy forgetkeys<\/pre>\n<p>This will also remove the Ceph packages<\/p>\n<pre style=\"color: #000000\">ceph-deploy purge {ceph-node} [{ceph-node}]<\/pre>\n<p>Before getting a healthy Ceph cluster, I had to purge and reinstall several times, cycling through the &#8220;Setup the cluster&#8221;, &#8220;Prepare OSDs and OSD Daemons&#8221; and &#8220;Final steps&#8221; sections while resolving every warning that <code>ceph-deploy<\/code> reported.<\/p>\n<div class=\"pt-sm\">Tags: <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/ceph\/\">Ceph<\/a>, <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/cloud-storage\/\">cloud storage<\/a>, <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/installation\/\">installation<\/a><br><\/div>","protected":false},"excerpt":{"rendered":"<p>Ceph is one of the most interesting distributed storage systems available, with very active development and a complete set of features that make it a valuable candidate for cloud storage services.
This tutorial goes through the steps (and some related troubleshooting) required to set up a Ceph cluster and access it with a simple client [&hellip;]<\/p>\n","protected":false},"author":96,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[5,15],"tags":[70,413,196],"features":[],"class_list":["post-4844","post","type-post","status-publish","format-standard","hentry","category-articles","category-howtos","tag-ceph","tag-cloud-storage","tag-installation"],"_links":{"self":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4844","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/users\/96"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/comments?post=4844"}],"version-history":[{"count":11,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4844\/revisions"}],"predecessor-version":[{"id":4925,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4844\/revisions\/4925"}],"wp:attachment":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/media?parent=4844"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/categories?post=4844"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/tags?post=4844"},
{"taxonomy":"features","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/features?post=4844"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}