{"id":1018,"date":"2020-02-05T12:43:11","date_gmt":"2020-02-05T10:43:11","guid":{"rendered":"https:\/\/blog.zhaw.ch\/datascience\/?p=1018"},"modified":"2020-02-05T12:43:48","modified_gmt":"2020-02-05T10:43:48","slug":"algorithmic-fairness-algorithms-and-social-justice","status":"publish","type":"post","link":"https:\/\/blog.zhaw.ch\/datascience\/algorithmic-fairness-algorithms-and-social-justice\/","title":{"rendered":"Algorithmic Fairness &#8211; Algorithms and Social Justice"},"content":{"rendered":"\n<p>By Christoph Heitz (ZHAW)<\/p>\n\n\n\n<p><em>translated from <a href=\"https:\/\/www.inside-it.ch\/de\/post\/dsi-insights-algorithmic-fairness-algorithmen-und-soziale-gerechtigkeit-20200110\">original German language version<\/a> published at <a href=\"http:\/\/www.inside-it.ch\/\">Inside IT<\/a><\/em><\/p>\n\n\n\n<p>\nCan a prisoner be released early, or released on\nbail? A judge who decides this should also consider the risk of\nrecidivism of the person to be released. Wouldn\u2019t it be an\nadvantage to be able to assess this risk objectively and reliably?\nThis was the idea behind the COMPAS system developed by the US\ncompany Northpoint.<\/p>\n\n\n\n<p>The\nsystem makes an individual prediction of the chance of recidivism for\nimprisoned offenders, based on a wide range of personal data. The\nresult is a risk score between 1 and 10, where 10 corresponds to a\nvery high risk of recidivism. This system has been used for many\nyears in various U.S. states to support decision making of judges &#8211;\nmore than one million prisoners have already been evaluated using\nCOMPAS. The advantages are obvious: the system produces an objective\nrisk prediction that has been developed and validated on the basis of\nthousands of cases.<\/p>\n\n\n\n<p>In\nMay 2016, however, the journalists&#8217; association ProPublica published\nthe results of research suggesting that this software systematically\ndiscriminates against black people and overestimates their risk\n(Angwin et al. 2016): 45 percent of black offenders who did not\nreoffend after their release were identified as high-risk. In the\ncorresponding group of whites, however, only 23 percent were\nattributed a high risk by the algorithm. This means that the\nprobability of being falsely assigned a high risk of recidivism is\ntwice as high for a black person as for a white person.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Algorithm-based\nsoftware systems like COMPAS are having an impact on more and more\nareas of our lives &#8211; often in the background, without those affected\nbeing aware of it. They make decisions independently, or, as in the\ncase of COMPAS, they support human decision-makers. Algorithms\ninfluence whose application is read by the Human Resource manager,\nwho gets a loan for buying a house, in which urban areas police\nsurveillance is intensified, which unemployed people receive support\nand which do not, who sees or does not see what kind of information.\nThe basis for these decisions or recommendations is always data &#8211; in\nmost cases personal data. \n<\/p>\n\n\n\n<p>There\nare good reasons for the increasing use of algorithms: In many cases,\nalgorithms consistently make better decisions than humans. 
Algorithm-based software systems like COMPAS affect more and more areas of our lives, often in the background and without those affected being aware of it. They make decisions independently or, as in the case of COMPAS, they support human decision-makers. Algorithms influence whose application is read by the HR manager, who gets a loan for buying a house, in which urban areas police surveillance is intensified, which unemployed people receive support and which do not, and who sees what kind of information. The basis for these decisions or recommendations is always data, in most cases personal data.

There are good reasons for the increasing use of algorithms: in many cases, algorithms consistently make better decisions than humans. As computer programs they are objective, incorruptible, and reproducible; they can be trained on millions of data sets, and they have no prejudices.

Nevertheless, the example of COMPAS shows that important social values such as justice, equal opportunity, and freedom from discrimination can be at risk. Algorithms can create social injustice. COMPAS is one of the most cited examples, but there are many others. A study published in September 2019 by the anti-discrimination office of the German federal government describes no fewer than 47 documented cases of discrimination based on algorithms (Orwat 2019).

This type of discrimination is particularly critical because it is usually not intentionally built into the algorithms and is often detected only much later, or not at all. As a result, the problem of "algorithmic fairness" has been on the radar of science and society only for a few years.

For many years, discussions about big data focused on the problem of data protection: who is allowed to do what with personal data, and how can it be protected against unauthorized use? The European General Data Protection Regulation (GDPR) of 2018, for example, was developed in this spirit.

In the meantime, however, data-based decision-making systems have spread massively and are rapidly penetrating more and more areas of life, where they have a very concrete impact on the lives of countless people. In recent years, the issue has therefore been the subject of increasing debate: how are people and our society as a whole affected when such algorithms increasingly control our lives? How do we ensure that social achievements and values are not thrown overboard suddenly, and perhaps even unnoticed?

## Can algorithms be fair?

An algorithm is a rule that calculates an output from input data, for example the probability of becoming a criminal. Such a calculation rule can be derived using different methods, for example statistical modeling or machine learning. The rule is usually optimized on training data so that the result is an "optimal" calculation rule, for example one that predicts a new crime as accurately as possible. In this form, the outputs of an algorithm are then used as the basis for a decision.
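To make this concrete: in code, such a calculation rule is typically just a model fitted to historical cases. The following sketch uses synthetic data and scikit-learn; the features and the model choice are illustrative assumptions, not a description of how COMPAS is actually built. What it shows is the standard pattern: the rule is optimized purely for predictive accuracy, and no fairness objective appears anywhere in the pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training data: each row is a past case, the label says
# whether the person reoffended. Feature meanings are purely invented
# (e.g. age, number of prior offenses, ...).
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The rule is optimized for prediction quality only; nothing here
# encodes fairness, so any fairness of the result is incidental.
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```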
An algorithm is objective, incorruptible, unemotional, and always works the same way. But does that make it fair or just? The sobering answer is: no, data-based algorithms are usually not fair. The reason is simple: the goal of developers is not to produce fairness, but good predictions. And the algorithm that makes the best prediction is at best fair by chance, but usually unfair, as countless concrete examples show. So we are right to be seriously concerned.

This leads to a second question. If fairness does not come automatically, can fairness be built into algorithms? It can. In recent years there has been intensive worldwide research on this topic, and there is now a great deal of knowledge on how to develop fair decision algorithms.

As a matter of fact, the fairness of algorithms can be measured. This is exactly what the ProPublica reporters did with the COMPAS system: what percentage of blacks who had not committed further crimes were rated "high risk" by the algorithm, and what was the corresponding rate for whites? Unlike human decision-makers, algorithms can be put to the test on a large number of cases to determine their characteristics, including their fairness properties.

## What exactly is fairness?

It is precisely at this point that things become ambiguous, because it is not clear how exactly fairness should be measured. Arvind Narayanan of Princeton University has catalogued no fewer than 21 different fairness criteria used in the technical literature (Narayanan 2018). Crucially, it can be proven mathematically that many of these criteria are mutually exclusive: it is not possible to satisfy all of them at the same time (Chouldechova 2017).

A closer analysis shows that this is not a technical problem but an ethical one. The various statistically defined and measurable fairness criteria correspond to different notions of social justice. For example, "fairness" can mean that the same rules must apply to everyone. But "fairness" can also mean that everyone should have the same opportunities, which is not the same thing.

School grades in sport illustrate how these two ideas of fairness can be mutually exclusive: if girls and boys get a 6 (the best grade in the Swiss system) for the same distance in the long throw (same rules), girls obviously have worse chances of getting a good grade. In this case we cannot demand equal rules for all and, at the same time, equal opportunities for all. This applies to all decision-making mechanisms, including algorithms.
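The conflict can also be checked numerically. The sketch below uses invented throwing-distance distributions for the two groups; the means, spread, and thresholds are placeholders. Under one shared grading rule the top-grade rates necessarily differ, and forcing equal top-grade rates necessarily produces group-specific rules.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented throw distances in meters; the distributions are placeholders.
boys  = rng.normal(40, 8, 10_000)
girls = rng.normal(30, 8, 10_000)

# Equal rules: one threshold for the top grade (a 6) for everyone.
threshold = 38.0
print("equal rule  -> top-grade rate boys: %.2f, girls: %.2f"
      % ((boys >= threshold).mean(), (girls >= threshold).mean()))

# Equal opportunity: the same top-grade rate in both groups forces
# group-specific thresholds, i.e. different rules.
q = 0.30  # give the top grade to the best 30% of each group
print("equal outcome -> thresholds boys: %.1f m, girls: %.1f m"
      % (np.quantile(boys, 1 - q), np.quantile(girls, 1 - q)))
```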
Someone must therefore decide what kind of fairness or social justice an algorithm should ensure. This requires an ethical debate in which opposing values usually have to be weighed against each other. In the political arena we are used to such discussions. In the field of algorithmic fairness, however, the connection between a concrete discussion of values on the one hand and technical implementation in the form of decision algorithms on the other is still in its infancy. How can we tie the ethical discourse to engineering? Where do ethicists and engineers find the common ground on which socially acceptable algorithms can be developed?

## Interdisciplinary collaboration in Zurich

Initial approaches are currently being developed in an interdisciplinary research collaboration between ZHAW (School of Engineering) and the University of Zurich (Ethics). This research will help to ensure that the undeniable advantages of modern data-based decision algorithms create practical benefits without harming our social values.

## References

- Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren (2016): Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, [online](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing).
- Chouldechova, Alexandra (2017): Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153-163.
- Narayanan, Arvind (2018): FAT* tutorial: 21 fairness definitions and their politics.
- Orwat, Carsten (2019): Diskriminierungsrisiken durch Verwendung von Algorithmen [Risks of discrimination through the use of algorithms]. Berlin: Antidiskriminierungsstelle des Bundes.