{"id":4872,"date":"2014-05-02T11:30:59","date_gmt":"2014-05-02T09:30:59","guid":{"rendered":"http:\/\/blog.zhaw.ch\/icclab\/?p=4872"},"modified":"2014-05-02T13:50:33","modified_gmt":"2014-05-02T11:50:33","slug":"deploy-ceph-troubleshooting-part-23","status":"publish","type":"post","link":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/","title":{"rendered":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)"},"content":{"rendered":"<p>(<a title=\"Deploy Ceph and start using it: end to end tutorial \u2013 Installation (part 1\/3)\" href=\"http:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-and-start-using-it-end-to-end-tutorial-installation-part-13\/\">Part 1\/3 &#8211; Installation<\/a>\u00a0&#8211; <a title=\"Deploy Ceph and start using it: end to end tutorial \u2013 simple librados client (part 3\/3)\" href=\"http:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-librados-client-part-33\/\">Part 3\/3 &#8211; librados client<\/a>)<\/p>\n<p>It is quite common that after the initial installation, the Ceph cluster reports health warnings. Before using the cluster for storage (e.g., allowing clients to access it), a\u00a0<code>HEALTH_OK<\/code> state should be reached:<\/p>\n<pre>cluster-admin@ceph-mon0:~\/ceph-cluster$ ceph health\r\nHEALTH_OK<\/pre>\n<p>This part of the tutorial provides\u00a0some troubleshooting hints that I collected during the setup of my deployments. Other helpful resources are the Ceph IRC channel and mailing lists.<\/p>\n<h1>Useful diagnostic commands<\/h1>\n<p>A collection of diagnostic commands to check the status of the cluster is listed here. 
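A quick first look at what the cluster is doing can be obtained with <code>ceph -w<\/code>, which prints the cluster status and then keeps reporting cluster events as they happen (<code>ceph -s<\/code> is the one-shot shorthand for <code>ceph status<\/code>):<\/p>\n<pre>$ ceph -w<\/pre>\n<p>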
Running these commands is how we can detect that the Ceph cluster is not properly configured.<\/p>\n<ol>\n<li>Ceph status\n<pre>$ ceph status<\/pre>\n<p>In this example, the disk for one OSD had been physically removed, so only 2 out of 3 OSDs were in and up.<\/p>\n<pre>cluster-admin@ceph-mon0:~\/ceph-cluster$ ceph status\r\n    cluster 28f9315e-6c5b-4cdc-9b2e-362e9ecf3509\r\n     health HEALTH_OK\r\n     monmap e1: 1 mons at {ceph-mon0=192.168.0.1:6789\/0}, election epoch 1, quorum 0 ceph-mon0\r\n     osdmap e122: 3 osds: 2 up, 2 in\r\n      pgmap v4699: 192 pgs, 3 pools, 0 bytes data, 0 objects\r\n            87692 kB used, 1862 GB \/ 1862 GB avail\r\n                 192 active+clean<\/pre>\n<\/li>\n<li>Ceph health\n<pre>$ ceph health\r\n$ ceph health detail<\/pre>\n<\/li>\n<li>Pools and OSDs configuration and status\n<pre>$ ceph osd dump\r\n$ ceph osd dump --format=json-pretty<\/pre>\n<p>The second version provides much more information, listing all the pools and OSDs together with their configuration parameters.<\/p>\n<\/li>\n<li>Tree of OSDs reflecting the CRUSH map\n<pre>$ ceph osd tree<\/pre>\n<p>This is very useful to understand how the cluster is physically organized (e.g., which OSDs are running on which host).<\/p>\n<\/li>\n<li>Listing the pools in the cluster\n<pre>$ ceph osd lspools<\/pre>\n<p>This is particularly useful to check client operations (e.g., whether new pools were created).<\/p>\n<\/li>\n<li>Check the CRUSH rules\n<pre style=\"color: #000000\">$ ceph osd crush dump --format=json-pretty<\/pre>\n<\/li>\n<li>List the disks of one node from the admin node\n<pre style=\"color: #000000\">$ ceph-deploy disk list osd0<\/pre>\n<\/li>\n<li>Check the logs.<br \/>\nLog files in\u00a0<code>\/var\/log\/ceph\/<\/code> will provide a lot of information for troubleshooting. 
Each node of the cluster keeps logs only for the Ceph components it runs, so you may need to SSH into different hosts to get a complete diagnosis.<\/li>\n<\/ol>\n<h1>Check your firewall and network configuration<\/h1>\n<p class=\"line874\" style=\"color: #000000\">Every node of the Ceph cluster must be able to successfully run<\/p>\n<pre style=\"color: #000000\">$ ceph status<\/pre>\n<p class=\"line874\" style=\"color: #000000\">If this operation times out without giving any results, it is likely that the firewall (or network configuration) is not allowing the nodes to communicate.<\/p>\n<p class=\"line862\" style=\"color: #000000\">Another symptom of this problem is that OSDs cannot be activated, i.e., the <code>ceph-deploy osd activate &lt;args&gt;<\/code> command will time out.<\/p>\n<p class=\"line862\" style=\"color: #000000\">The Ceph monitor listens on port <tt>6789<\/tt> by default. Ceph OSDs and MDSs take the first available ports starting at <tt>6800<\/tt>.<\/p>\n<p class=\"line874\" style=\"color: #000000\">A typical Ceph cluster might need the following ports:<\/p>\n<pre style=\"color: #000000\">Mon:  6789\r\nMds:  6800\r\nOsd1: 6801\r\nOsd2: 6802\r\nOsd3: 6803<\/pre>\n<p class=\"line862\" style=\"color: #000000\">Depending on your security requirements, you may want to simply allow any traffic to and from the Ceph cluster nodes.<\/p>\n<p class=\"line862\" style=\"color: #000000\">References:\u00a0<a class=\"http\" style=\"color: #0044aa\" href=\"http:\/\/comments.gmane.org\/gmane.comp.file-systems.ceph.devel\/2231\">http:\/\/comments.gmane.org\/gmane.comp.file-systems.ceph.devel\/2231<\/a><\/p>\n<h1>Try restarting first<\/h1>\n<p>Without going into fine-grained troubleshooting and log analysis, I&#8217;ve noticed that sometimes (especially after the first installation) a simple restart of the Ceph components helps the transition from a\u00a0<code>HEALTH_WARN<\/code> to a\u00a0<code>HEALTH_OK<\/code> state.<\/p>\n<p>If some of the OSDs are not in or not up, like in the case below,<\/p>\n<pre style=\"color: #000000\">    cluster 07d28faa-48ae-4356-a8e3-19d5b81e159e\r\n     health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean; 1\/2 in osds are down; clock skew detected on mon.1, mon.2\r\n     monmap e3: 3 mons at {0=192.168.252.10:6789\/0,1=192.168.252.11:6789\/0,2=192.168.252.12:6789\/0}, election epoch 36, quorum 0,1,2 0,1,2\r\n     osdmap e27: 6 osds: 1 up, 2 in\r\n      pgmap v57: 192 pgs, 3 pools, 0 bytes data, 0 objects\r\n            84456 kB used, 7865 MB \/ 7948 MB avail\r\n                 192 incomplete<\/pre>\n<p>try to start the OSD daemons with<\/p>\n<pre 
id=\"CA-55dbf09fc60cbcf096f618942c5795c96b83f5a0\" style=\"color: #000000\"><span class=\"line\"># on osd0\r\n$ sudo \/etc\/init.d\/ceph -a start osd0<\/span><\/pre>\n<p>If the OSDs are in, but PGs are in weird states, like in the example below,<\/p>\n<pre style=\"color: #000000\">cluster 07d28faa-48ae-4356-a8e3-19d5b81e159e\r\n     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; clock skew detected on mon.1, mon.2\r\n     monmap e3: 3 mons at {0=192.168.252.10:6789\/0,1=192.168.252.11:6789\/0,2=192.168.252.12:6789\/0}, election epoch 36, quorum 0,1,2 0,1,2\r\n     osdmap e34: 6 osds: 6 up, 6 in\r\n      pgmap v71: 192 pgs, 3 pools, 0 bytes data, 0 objects\r\n            235 MB used, 23608 MB \/ 23844 MB avail\r\n                 128 active+degraded\r\n                  64 active+replay+degraded<\/pre>\n<p>try to restart the monitor(s) with<\/p>\n<pre id=\"CA-e966e1bd71014bd3b550e58641678213c584c312\" style=\"color: #000000\"><span class=\"line\"># on mon0\r\n$ sudo \/etc\/init.d\/ceph -a restart mon0\r\n<\/span><\/pre>\n<p>Unfortunately, a simple restart will be the solution in just a few rare cases. 
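To restart all the Ceph daemons of the cluster in one shot, the same init script can be invoked without naming a specific daemon (the <code>-a<\/code> flag makes it act on all the nodes listed in <code>ceph.conf<\/code>; this assumes the sysvinit script used in the examples above, while other init systems use different commands):<\/p>\n<pre># on the admin node\r\n$ sudo \/etc\/init.d\/ceph -a restart<\/pre>\n<p>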
More troubleshooting will be required in the majority of the situations.<\/p>\n<h1>Unable to find keyring<\/h1>\n<p class=\"line874\" style=\"color: #000000\">During the deployment of the monitor nodes (the <code>ceph-deploy mon create-initial<\/code> step), Ceph may complain about missing keyrings:<\/p>\n<pre style=\"color: #000000\">[ceph_deploy.gatherkeys][WARNIN] Unable to find\r\n\/etc\/ceph\/ceph.client.admin.keyring on ['ceph-server']<\/pre>\n<p class=\"line874\" style=\"color: #000000\">If this warning is reported (even if the message is not an error), the Ceph cluster will probably not reach a healthy state.<\/p>\n<p class=\"line874\" style=\"color: #000000\">The solution to this problem is to use exactly the same names for the hostnames (i.e., the output of\u00a0<code>hostname -s<\/code>) and the Ceph node names.<\/p>\n<p class=\"line862\" style=\"color: #000000\">This means that the files<\/p>\n<ul>\n<li><code>\/etc\/hosts<\/code><\/li>\n<li><code>\/etc\/hostname<\/code><\/li>\n<li><code>.ssh\/config<\/code>\u00a0(only for the admin node)<\/li>\n<\/ul>\n<p>and the result of the command\u00a0<code>hostname -s<\/code> must all agree on the name of a given node.<\/p>\n<p>See also:<\/p>\n<ul>\n<li><a class=\"https\" style=\"color: #0044aa\" href=\"https:\/\/www.mail-archive.com\/ceph-users@lists.ceph.com\/msg03506.html\">https:\/\/www.mail-archive.com\/ceph-users@lists.ceph.com\/msg03506.html<\/a>\u00a0(problem)<\/li>\n<li><a class=\"https\" style=\"color: #0044aa\" href=\"https:\/\/www.mail-archive.com\/ceph-users@lists.ceph.com\/msg03580.html\">https:\/\/www.mail-archive.com\/ceph-users@lists.ceph.com\/msg03580.html<\/a>\u00a0(solution)<\/li>\n<\/ul>\n<h1>Check that replication requirements can be met<\/h1>\n<p>I&#8217;ve found that most of my problems with Ceph health were related to wrong (i.e., unfeasible) 
replication policies.<\/p>\n<p>This is particularly likely to happen in test deployments, where one doesn&#8217;t care about setting up many OSDs or separating them across different hosts.<\/p>\n<p>Some common pitfalls here may be:<\/p>\n<ol>\n<li>The number of required replicas is higher than the number of OSDs (!!)<\/li>\n<li>CRUSH is instructed to separate replicas across hosts, but multiple OSDs are on the same host and there are not enough OSD hosts to satisfy this condition<\/li>\n<\/ol>\n<p>The visible effect when running diagnostic\u00a0commands is that PGs will be stuck in unhealthy states.<\/p>\n<p><strong>CASE 1<\/strong>:\u00a0<span style=\"color: #000000\">the replication level is such that it cannot be accomplished with the current cluster (e.g., a replica size of 3 with 2 OSDs).<\/span><\/p>\n<p>Check the <code>replicated size<\/code> of pools with<\/p>\n<pre>$ ceph osd dump<\/pre>\n<p>Adjust the <code>replicated size<\/code> and <code>min_size<\/code>, if required, by running<\/p>\n<pre>$ ceph osd pool set &lt;pool_name&gt; size &lt;value&gt;\r\n$ ceph osd pool set &lt;pool_name&gt; min_size &lt;value&gt;<\/pre>\n<p><strong>CASE 2<\/strong>: the replication policy would require replicas to sit on separate hosts, but OSDs are running on the same host<\/p>\n<p>Check what <code>crush_ruleset<\/code> applies to a certain pool with<\/p>\n<pre style=\"color: #000000\">$ ceph osd dump --format=json-pretty<\/pre>\n<p>In the example below, the pool with <code>id 0<\/code> (&#8220;data&#8221;) is using the <code>crush_ruleset<\/code> with <code>id 0<\/code>:<\/p>\n<pre style=\"color: #000000\">\"pools\": [\r\n        { \"pool\": 0,\r\n          \"pool_name\": \"data\",\r\n          [...]\r\n          \"crush_ruleset\": 0,  &lt;----\r\n          \"object_hash\": 2,\r\n          [...]<\/pre>\n<p>then check with<\/p>\n<pre style=\"color: #000000\">$ ceph osd crush dump --format=json-pretty<\/pre>\n<p>what <code>crush_ruleset 0<\/code> is about.<\/p>\n<p>In the example below, we can observe that this rule says to replicate data by choosing the first available leaf in the CRUSH map, which is of type host.<\/p>\n<pre style=\"color: #000000\">\"rules\": [\r\n        { \"rule_id\": 0,\r\n          \"rule_name\": \"replicated_ruleset\",\r\n          \"ruleset\": 0,\r\n          \"type\": 1,\r\n          \"min_size\": 1,\r\n          \"max_size\": 10,\r\n          \"steps\": [\r\n                { \"op\": \"take\",\r\n                  \"item\": -1,\r\n                  \"item_name\": \"default\"},\r\n                { \"op\": \"chooseleaf_firstn\",     &lt;-----------\r\n                  \"num\": 0,\r\n                  \"type\": \"host\"},               &lt;-----------\r\n                { \"op\": \"emit\"}]}],<\/pre>\n<p>If not enough hosts are available, the application of this rule will fail.<\/p>\n<p>To allow replicas to be created on different OSDs but possibly on the same host, we need to create a new ruleset:<\/p>\n<pre style=\"color: #000000\">$ ceph osd crush rule 
create-simple replicate_within_hosts default osd<\/pre>\n<p>After the rule has been created, it should be listed in the output of<\/p>\n<pre style=\"color: #000000\">$ ceph osd crush dump<\/pre>\n<p>from where we can note its id.<\/p>\n<p>The next step is to apply this rule to the pools as required:<\/p>\n<pre style=\"color: #000000\">$ ceph osd pool set data crush_ruleset &lt;rulesetId&gt;\r\n$ ceph osd pool set metadata crush_ruleset &lt;rulesetId&gt;\r\n$ ceph osd pool set rbd crush_ruleset &lt;rulesetId&gt;<\/pre>\n<div class=\"pt-sm\">Tags: <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/ceph\/\">Ceph<\/a>, <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/cloud-storage\/\">cloud storage<\/a>, <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/troubleshooting\/\">troubleshooting<\/a><br><\/div>","protected":false},"excerpt":{"rendered":"<p>(Part 1\/3 &#8211; Installation\u00a0&#8211; Part 3\/3 &#8211; librados client) It is quite common that after the initial installation, the Ceph cluster reports health warnings. 
Before using the cluster for storage (e.g., allow clients to access it), a\u00a0HEALTH_OK state should be reached: cluster-admin@ceph-mon0:~\/ceph-cluster$ ceph health HEALTH_OK This part of the tutorial provides\u00a0some troubleshooting hints that I [&hellip;]<\/p>\n","protected":false},"author":96,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[5,15],"tags":[70,413,412],"features":[],"class_list":["post-4872","post","type-post","status-publish","format-standard","hentry","category-articles","category-howtos","tag-ceph","tag-cloud-storage","tag-troubleshooting"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3) - Service Engineering (ICCLab &amp; SPLab)<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)\" \/>\n<meta property=\"og:description\" content=\"(Part 1\/3 &#8211; Installation\u00a0&#8211; Part 3\/3 &#8211; librados client) It is quite common that after the initial installation, the Ceph cluster reports health warnings. 
Before using the cluster for storage (e.g., allow clients to access it), a\u00a0HEALTH_OK state should be reached: cluster-admin@ceph-mon0:~\/ceph-cluster$ ceph health HEALTH_OK This part of the tutorial provides\u00a0some troubleshooting hints that I [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\" \/>\n<meta property=\"og:site_name\" content=\"Service Engineering (ICCLab &amp; SPLab)\" \/>\n<meta property=\"article:published_time\" content=\"2014-05-02T09:30:59+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2014-05-02T11:50:33+00:00\" \/>\n<meta name=\"author\" content=\"piiv\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"piiv\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\"},\"author\":{\"name\":\"piiv\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703\"},\"headline\":\"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)\",\"datePublished\":\"2014-05-02T09:30:59+00:00\",\"dateModified\":\"2014-05-02T11:50:33+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\"},\"wordCount\":903,\"commentCount\":2,\"keywords\":[\"Ceph\",\"cloud 
storage\",\"troubleshooting\"],\"articleSection\":[\"Articles\",\"HowTos\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\",\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\",\"name\":\"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3) - Service Engineering (ICCLab &amp; SPLab)\",\"isPartOf\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#website\"},\"datePublished\":\"2014-05-02T09:30:59+00:00\",\"dateModified\":\"2014-05-02T11:50:33+00:00\",\"author\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703\"},\"breadcrumb\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Startseite\",\"item\":\"https:\/\/blog.zhaw.ch\/icclab\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#website\",\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/\",\"name\":\"Service Engineering (ICCLab &amp; SPLab)\",\"description\":\"A Blog of the ZHAW Zurich University of Applied 
Sciences\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.zhaw.ch\/icclab\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703\",\"name\":\"piiv\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g\",\"caption\":\"piiv\"},\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/author\/piiv\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3) - Service Engineering (ICCLab &amp; SPLab)","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/","og_locale":"en_US","og_type":"article","og_title":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)","og_description":"(Part 1\/3 &#8211; Installation\u00a0&#8211; Part 3\/3 &#8211; librados client) It is quite common that after the initial installation, the Ceph cluster reports health warnings. 
Before using the cluster for storage (e.g., allow clients to access it), a\u00a0HEALTH_OK state should be reached: cluster-admin@ceph-mon0:~\/ceph-cluster$ ceph health HEALTH_OK This part of the tutorial provides\u00a0some troubleshooting hints that I [&hellip;]","og_url":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/","og_site_name":"Service Engineering (ICCLab &amp; SPLab)","article_published_time":"2014-05-02T09:30:59+00:00","article_modified_time":"2014-05-02T11:50:33+00:00","author":"piiv","twitter_card":"summary_large_image","twitter_misc":{"Written by":"piiv","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#article","isPartOf":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/"},"author":{"name":"piiv","@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703"},"headline":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)","datePublished":"2014-05-02T09:30:59+00:00","dateModified":"2014-05-02T11:50:33+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/"},"wordCount":903,"commentCount":2,"keywords":["Ceph","cloud storage","troubleshooting"],"articleSection":["Articles","HowTos"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/","url":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/","name":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3) - Service Engineering (ICCLab &amp; 
SPLab)","isPartOf":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/#website"},"datePublished":"2014-05-02T09:30:59+00:00","dateModified":"2014-05-02T11:50:33+00:00","author":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703"},"breadcrumb":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.zhaw.ch\/icclab\/deploy-ceph-troubleshooting-part-23\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Startseite","item":"https:\/\/blog.zhaw.ch\/icclab\/"},{"@type":"ListItem","position":2,"name":"Deploy Ceph and start using it: end to end tutorial \u2013 Troubleshooting (part 2\/3)"}]},{"@type":"WebSite","@id":"https:\/\/blog.zhaw.ch\/icclab\/#website","url":"https:\/\/blog.zhaw.ch\/icclab\/","name":"Service Engineering (ICCLab &amp; SPLab)","description":"A Blog of the ZHAW Zurich University of Applied 
Sciences","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.zhaw.ch\/icclab\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/b75ac4e936b5921c8a9de4fb84202703","name":"piiv","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c7b4c6df4485c24b56af1a1a92442259dfc735b8c0dcf8d3ddcb16f88deeb723?s=96&d=mm&r=g","caption":"piiv"},"url":"https:\/\/blog.zhaw.ch\/icclab\/author\/piiv\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4872","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/users\/96"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/comments?post=4872"}],"version-history":[{"count":8,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4872\/revisions"}],"predecessor-version":[{"id":4923,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/4872\/revisions\/4923"}],"wp:attachment":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/media?parent=4872"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/categories?post=4872"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/tags?post=4872"},{
"taxonomy":"features","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/features?post=4872"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}