{"id":3030,"date":"2013-07-24T10:07:50","date_gmt":"2013-07-24T10:07:50","guid":{"rendered":"http:\/\/www.cloudcomp.ch\/?p=3030"},"modified":"2013-07-24T10:07:50","modified_gmt":"2013-07-24T10:07:50","slug":"distributed-file-systems-series-ceph-introduction","status":"publish","type":"post","link":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/","title":{"rendered":"Distributed File Systems Series: Ceph Introduction"},"content":{"rendered":"<p style=\"text-align: left\">With this post we are starting a new series on <a title=\"Distributed File Systems\" href=\"http:\/\/www.cloudcomp.ch\/research\/foundation\/themes\/initiatives\/distributed-file-systems\/\">Distributed File Systems<\/a>. We begin with an introduction to a file system that is enjoying a good amount of success: Ceph.<\/p>\n<p><a href=\"http:\/\/ceph.com\" target=\"_blank\">Ceph<\/a> is a distributed, parallel, fault-tolerant file system that can offer object, block, and file storage from a single cluster. Ceph&#8217;s objective is to provide an open-source storage platform that is highly available, highly scalable, and has no single point of failure.<\/p>\n<p>A Ceph Cluster has three main components:<\/p>\n<ul>\n<li><strong>OSDs.<\/strong> Ceph Object Storage Devices (OSDs) are the core of a Ceph cluster and are in charge of storing data, handling data replication and recovery, and rebalancing data. A Ceph Cluster requires at least two OSDs. OSDs also check other OSDs for a heartbeat and provide this information to Ceph Monitors.<\/li>\n<li><strong>Monitors<\/strong>: A Ceph Monitor keeps track of the state of the Ceph Cluster using maps, e.g. the monitor map, the OSD map and the CRUSH map. Ceph also maintains a history (called an <em>epoch<\/em>) of each state change in the Ceph Cluster components.<\/li>\n<li><strong>MDSs<\/strong>: A Ceph Metadata Server (MDS) stores metadata for the Ceph Filesystem client. 
Thanks to Ceph MDSs, POSIX file system users are able to execute basic commands such as <em>ls<\/em> and <em>find<\/em> without overloading the OSDs. Ceph MDSs provide both metadata high availability (multiple MDS instances, with at least one in standby) and scalability (multiple MDS instances, all active and managing different directory subtrees).<\/li>\n<\/ul>\n<figure id=\"attachment_3034\" aria-describedby=\"caption-attachment-3034\" style=\"width: 564px\" class=\"wp-caption aligncenter\"><a style=\"text-align: center\" href=\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-3034\" alt=\"ceph-architecture\" src=\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\" width=\"564\" height=\"222\" \/><\/a><figcaption id=\"caption-attachment-3034\" class=\"wp-caption-text\"><strong>Ceph Architecture (Source: <a href=\"http:\/\/docs.openstack.org\/trunk\/openstack-compute\/admin\/content\/figures\/8\/figures\/ceph\/ceph-architecture.png\">docs.openstack.org<\/a>)<\/strong><\/figcaption><\/figure>\n<p>One of the key features of Ceph is the way data is managed. Ceph clients and OSDs compute data locations using a pseudo-random algorithm called <strong>C<\/strong>ontrolled <strong>R<\/strong>eplication <strong>U<\/strong>nder <strong>S<\/strong>calable <strong>H<\/strong>ashing (CRUSH). The CRUSH algorithm distributes the work amongst clients and OSDs, which frees them from depending on a central lookup table to retrieve location information and allows for a high degree of scaling. 
CRUSH also uses intelligent data replication to guarantee resiliency.<\/p>\n<p>Ceph allows clients to access data through different interfaces:<\/p>\n<ul>\n<li><strong>Object Storage<\/strong>: The RADOS Gateway (RGW), the Ceph Object Storage component, provides RESTful APIs compatible with Amazon S3 and OpenStack Swift. It sits on top of the Ceph Storage Cluster and has its own user database, authentication, and access control. The RADOS Gateway uses a unified namespace, which means that you can write data using one API, e.g. the Amazon S3-compatible API, and read it with another, e.g. the OpenStack Swift-compatible API. Ceph Object Storage doesn&#8217;t make use of the Ceph Metadata Servers.<\/li>\n<\/ul>\n<figure id=\"attachment_3035\" aria-describedby=\"caption-attachment-3035\" style=\"width: 564px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/stack.png\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-3035\" alt=\"stack\" src=\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/stack.png\" width=\"564\" height=\"391\" \/><\/a><figcaption id=\"caption-attachment-3035\" class=\"wp-caption-text\"><strong>Ceph Clients (Source: <a href=\"http:\/\/ceph.com\/docs\/next\/architecture\/\">ceph.com<\/a>)<\/strong><\/figcaption><\/figure>\n<ul>\n<li><strong>Block Devices<\/strong>: The RADOS Block Device (RBD), the Ceph Block Device component, provides resizable, thin-provisioned block devices. The block devices are striped across multiple OSDs in the Ceph cluster for high performance. The Ceph Block Device component also provides image snapshotting and snapshot layering, i.e. cloning of images. 
Ceph RBD supports QEMU\/KVM hypervisors and can easily be integrated with OpenStack and CloudStack (or any other cloud stack that uses <em>libvirt<\/em>).<\/li>\n<li><strong>Filesystem<\/strong>: CephFS, the Ceph Filesystem component, provides a POSIX-compliant filesystem layered on top of the Ceph Storage Cluster, meaning that files get mapped to objects in the Ceph cluster. Ceph clients can mount the Ceph Filesystem either as a kernel object or as a Filesystem in User Space (FUSE). CephFS separates the metadata from the data, storing the metadata in the MDSs and the file data in one or more OSDs in the Ceph cluster. Thanks to this separation, the Ceph Filesystem can provide high performance without stressing the Ceph Storage Cluster.<\/li>\n<\/ul>\n<p style=\"text-align: left\">Our next topic in the Distributed File Systems Series will be an introduction to GlusterFS.<\/p>\n<div class=\"pt-sm\">Tags: <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/ceph\/\">Ceph<\/a>, <a href=\"https:\/\/blog.zhaw.ch\/icclab\/tag\/distributed-file-systems-2\/\">Distributed File systems<\/a><br><\/div>","protected":false},"excerpt":{"rendered":"<p>With this post we are going to start a new series on Distributed File Systems. We are going to start with an introduction to a file system that is enjoying a good amount of success: Ceph. Ceph is a distributed parallel fault-tolerant file system that can offer object, block, and file storage from a single cluster. 
[&hellip;]<\/p>\n","protected":false},"author":73,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[5,20],"tags":[70,115],"features":[],"class_list":["post-3030","post","type-post","status-publish","format-standard","hentry","category-articles","category-open-source","tag-ceph","tag-distributed-file-systems-2"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Distributed File Systems Series: Ceph Introduction - Service Engineering (ICCLab &amp; SPLab)<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Distributed File Systems Series: Ceph Introduction\" \/>\n<meta property=\"og:description\" content=\"With this post we are going to start a new series on Distributed File Systems. We are going to start with an introduction to a file system that is enjoying a good amount of success: Ceph. Ceph\u00a0is a distributed parallel fault-tolerant file system that can offer object, block, and file storage from a single cluster. 
[&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\" \/>\n<meta property=\"og:site_name\" content=\"Service Engineering (ICCLab &amp; SPLab)\" \/>\n<meta property=\"article:published_time\" content=\"2013-07-24T10:07:50+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\" \/>\n<meta name=\"author\" content=\"strp\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"strp\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\"},\"author\":{\"name\":\"strp\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a\"},\"headline\":\"Distributed File Systems Series: Ceph Introduction\",\"datePublished\":\"2013-07-24T10:07:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\"},\"wordCount\":624,\"commentCount\":2,\"image\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage\"},\"thumbnailUrl\":\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\",\"keywords\":[\"Ceph\",\"Distributed File systems\"],\"articleSection\":[\"Articles\",\"Open 
Source\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\",\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\",\"name\":\"Distributed File Systems Series: Ceph Introduction - Service Engineering (ICCLab &amp; SPLab)\",\"isPartOf\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage\"},\"thumbnailUrl\":\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\",\"datePublished\":\"2013-07-24T10:07:50+00:00\",\"author\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a\"},\"breadcrumb\":{\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage\",\"url\":\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\",\"contentUrl\":\"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Startseite\",\"item\":\"https:\/\/blog.zhaw.c
h\/icclab\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Distributed File Systems Series: Ceph Introduction\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#website\",\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/\",\"name\":\"Service Engineering (ICCLab &amp; SPLab)\",\"description\":\"A Blog of the ZHAW Zurich University of Applied Sciences\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.zhaw.ch\/icclab\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a\",\"name\":\"strp\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g\",\"caption\":\"strp\"},\"url\":\"https:\/\/blog.zhaw.ch\/icclab\/author\/strp\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Distributed File Systems Series: Ceph Introduction - Service Engineering (ICCLab &amp; SPLab)","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/","og_locale":"en_US","og_type":"article","og_title":"Distributed File Systems Series: Ceph Introduction","og_description":"With this post we are going to start a new series on Distributed File Systems. 
We are going to start with an introduction to a file system that is enjoying a good amount of success: Ceph. Ceph\u00a0is a distributed parallel fault-tolerant file system that can offer object, block, and file storage from a single cluster. [&hellip;]","og_url":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/","og_site_name":"Service Engineering (ICCLab &amp; SPLab)","article_published_time":"2013-07-24T10:07:50+00:00","og_image":[{"url":"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png","type":"","width":"","height":""}],"author":"strp","twitter_card":"summary_large_image","twitter_misc":{"Written by":"strp","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#article","isPartOf":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/"},"author":{"name":"strp","@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a"},"headline":"Distributed File Systems Series: Ceph Introduction","datePublished":"2013-07-24T10:07:50+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/"},"wordCount":624,"commentCount":2,"image":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage"},"thumbnailUrl":"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png","keywords":["Ceph","Distributed File systems"],"articleSection":["Articles","Open 
Source"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/","url":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/","name":"Distributed File Systems Series: Ceph Introduction - Service Engineering (ICCLab &amp; SPLab)","isPartOf":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/#website"},"primaryImageOfPage":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage"},"image":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage"},"thumbnailUrl":"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png","datePublished":"2013-07-24T10:07:50+00:00","author":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a"},"breadcrumb":{"@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#primaryimage","url":"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png","contentUrl":"http:\/\/blog.zhaw.ch\/icclab\/files\/2013\/07\/ceph-architecture1.png"},{"@type":"BreadcrumbList","@id":"https:\/\/blog.zhaw.ch\/icclab\/distributed-file-systems-series-ceph-introduction\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Startseite","item":"https:\/\/blog.zhaw.ch\/icclab\/"},{"@type":"ListItem","position":2,"name":"Distributed File Systems Series: Ceph 
Introduction"}]},{"@type":"WebSite","@id":"https:\/\/blog.zhaw.ch\/icclab\/#website","url":"https:\/\/blog.zhaw.ch\/icclab\/","name":"Service Engineering (ICCLab &amp; SPLab)","description":"A Blog of the ZHAW Zurich University of Applied Sciences","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.zhaw.ch\/icclab\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blog.zhaw.ch\/icclab\/#\/schema\/person\/d7955ea228a04e754bad7f721febf73a","name":"strp","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4bd7dfc3a47734d2255bff73fdc2f40d8899db7726711977f95ae35e0ee3d2ac?s=96&d=mm&r=g","caption":"strp"},"url":"https:\/\/blog.zhaw.ch\/icclab\/author\/strp\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/3030","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/users\/73"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/comments?post=3030"}],"version-history":[{"count":0,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/posts\/3030\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/media?parent=3030"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/categories?post=3030"},
{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/tags?post=3030"},{"taxonomy":"features","embeddable":true,"href":"https:\/\/blog.zhaw.ch\/icclab\/wp-json\/wp\/v2\/features?post=3030"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}