Can a prisoner be released early, or released on
bail? A judge who decides this should also consider the risk of
recidivism of the person to be released. Wouldn’t it be an
advantage to be able to assess this risk objectively and reliably?
This was the idea behind the COMPAS system developed by the US company Northpointe. The system makes an individual prediction of the chance of recidivism for imprisoned offenders, based on a wide range of personal data. The
result is a risk score between 1 and 10, where 10 corresponds to a
very high risk of recidivism. This system has been used for many
years in various U.S. states to support decision making of judges –
more than one million prisoners have already been evaluated using
COMPAS. The advantages are obvious: the system produces an objective
risk prediction that has been developed and validated on the basis of
thousands of cases.
In May 2016, however, the journalism organization ProPublica published
the results of research suggesting that this software systematically
discriminates against black people and overestimates their risk
(Angwin et al. 2016): 45 percent of black offenders who did not
reoffend after their release were identified as high-risk. In the
corresponding group of whites, however, only 23 percent were
attributed a high risk by the algorithm. This means that the
probability of being falsely assigned a high risk of recidivism is
twice as high for a black person as for a white person.
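The arithmetic behind this comparison can be checked in a few lines. The sketch below uses hypothetical group counts chosen only so that the reported false-positive rates come out; it is an illustration, not the actual COMPAS data.

```python
# False-positive rate: the share of people who did NOT reoffend
# but were still labeled high-risk: FPR = FP / (FP + TN).
def false_positive_rate(fp, tn):
    return fp / (fp + tn)

# Hypothetical counts, chosen only to reproduce the reported rates.
fpr_black = false_positive_rate(fp=450, tn=550)  # 0.45
fpr_white = false_positive_rate(fp=230, tn=770)  # 0.23
print(round(fpr_black / fpr_white, 2))  # close to 2: twice the rate
```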
We explain here, step by step, how to reproduce the results of our approach, and we discuss parts of the paper. The approach was intended as a strong baseline for the task, one that deep learning approaches should beat. We did not manage to beat it ourselves, so we submitted the baseline itself, which placed second in the flat problem and first in the hierarchical task (subtask B). This baseline builds on strong placements in several earlier shared tasks, and although it is essentially a clever form of keyword spotting, it does a very good job. Code and data can be accessed in the GermEval_2019 repository.
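To make "a clever form of keyword spotting" concrete, here is a minimal, self-contained sketch of the idea: collect the most frequent words per label from training texts and predict the label with the largest word overlap. The function names and toy data are illustrative; the actual shared-task system is considerably more refined (n-grams, weighting, handling of the label hierarchy).

```python
from collections import Counter, defaultdict

def train_keyword_lists(texts, labels, top_k=20):
    """Collect, per label, the words most frequent in that label's texts.
    A toy stand-in for the keyword-spotting idea."""
    by_label = defaultdict(Counter)
    for text, label in zip(texts, labels):
        by_label[label].update(text.lower().split())
    return {lab: {w for w, _ in cnt.most_common(top_k)}
            for lab, cnt in by_label.items()}

def predict(text, keywords):
    """Pick the label whose keyword list overlaps the text the most."""
    tokens = set(text.lower().split())
    return max(keywords, key=lambda lab: len(tokens & keywords[lab]))

# Toy training data (illustrative only).
texts = ["magic dragons and wizards", "murder detective case clues",
         "wizards cast a spell", "the detective solved the case"]
labels = ["fantasy", "crime", "fantasy", "crime"]
kw = train_keyword_lists(texts, labels)
print(predict("wizards cast magic", kw))  # -> fantasy
```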
This year, the SpinningBytes team participated in the VarDial evaluation campaign, where we achieved second place in the German Dialect Identification shared task. The task's goal was to identify which region the speaker of a given sentence is from, based on the dialect he or she speaks. Dialect identification is an important NLP task: in a speech-to-text context, for instance, recognizing the dialect makes it possible to load a specialized model. In this blog post, we give a step-by-step walkthrough of how to create the model in Python, comparing it to previous years' approaches.
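As a taste of the walkthrough, the snippet below extracts character n-gram counts, a workhorse feature for dialect identification in past VarDial editions. The function name and n-gram range are illustrative choices, not the team's exact configuration; in a full pipeline these counts would feed a linear classifier such as an SVM.

```python
from collections import Counter

def char_ngrams(text, n_min=2, n_max=4):
    """Count character n-grams; padding with spaces keeps word
    boundaries visible, which helps capture dialect spelling cues."""
    text = f" {text.lower()} "
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

# Swiss German "si isch gsi" vs. standard German "sie ist gewesen"
# leaves distinct n-gram traces:
print(char_ngrams("si isch gsi", 2, 3).most_common(5))
```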
Kurt Stockinger was invited to contribute a blog post to ACM SIGMOD, the leading worldwide community for database research. The post discusses recent technological advances in natural language interfaces to databases. The ultimate goal is to talk to a database (almost) as you would to a human.
The full blog can be found on the following ACM SIGMOD link:
In a lecture for the Fair Data Forum, I dealt with the question “What value does data protection have for individuals and what are they willing to pay for it?”
The three data privacy types
As always, there is no single "individual": everyone has different data protection preferences and thus attributes a different value to having personal data safeguarded. Various "typologies" therefore exist to classify individuals. Westin, for example, distinguishes between data protection fundamentalists, data protection pragmatists and completely unconcerned individuals. Sheehan (2002) surveyed 889 persons in the USA and classified them with a questionnaire. The conclusion: 16% of the respondents were completely unconcerned about data protection, 81% were classified as pragmatists, and 3% as fundamentalists.
As part of "Zürich meets San Francisco – A Festival Of Two Cities", the ZHAW Datalab co-organized the event Data Science and Beyond: Technical, Economic and Societal Challenges, which took place at the campus of San José State University (SJSU) – in the heart of Silicon Valley. One interesting fact about SJSU: among all US universities, it has the highest number of graduates who go on to jobs at either Apple or Cisco.
The aim of the PhD Network in Data Science is to offer students with a master's degree (including degrees from a university of applied sciences) the opportunity to obtain a PhD in cooperation between a university of applied sciences and a university.
The PhD Network in Data Science is supported by Swissuniversities. It is a cooperation between three departments of ZHAW Zurich University of Applied Sciences (School of Management and Law, Life Science and Facility Management, School of Engineering), three departments of the University of Zurich (Faculty of Science, Faculty of Business, Economics and Informatics, Faculty of Arts and Social Sciences), the Faculty of Science at the University of Neuchatel and the Department of Innovative Technologies at SUPSI University of Applied Sciences and Arts of Southern Switzerland.
PhD students work in applied research projects at the university of applied sciences and are supervised jointly by a supervisor at the university and a co-supervisor at the university of applied sciences. They are enrolled in the regular PhD programs of the partner universities and have to go through the standard admission procedure. After successful completion, they receive the doctorate of the respective partner university (UZH or UNINE). The PhD Network is also open to students with a master's degree from a university of applied sciences; they, however, have to complete convergence programs (specific to the respective faculty) for admission to the partner universities.
Paul D. Ellis, The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge University Press, Cambridge 2010. Link to book on publisher’s website.
In the last few years, statistical hypothesis testing – with the p-value still being THE standard for reporting results in many fields of science – has increasingly been criticized. Many researchers have even called for abandoning the "NHST" (Null Hypothesis Significance Testing) approach altogether. I think this goes too far, as many problems are due to misapplication of the techniques and, perhaps even more importantly, misinterpretation of the results. There is also no consensus on how to replace hypothesis testing with a better methodology: some of the more moderate critics suggest using confidence intervals, but while these are often more informative, they are essentially equivalent to hypothesis tests and share some of the same problems. This makes it all the more important to highlight the difficulties in correctly applying and interpreting statistical methodology.
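The claimed equivalence of confidence intervals and hypothesis tests can be made precise: for a large-sample z-test, the two-sided p-value falls below alpha exactly when the null value lies outside the (1 - alpha) confidence interval. A minimal sketch of this duality, using the standard normal approximation and simulated data (function name and numbers are illustrative, not tied to any particular study):

```python
import math
import random
from statistics import NormalDist, mean, stdev

def z_test_and_ci(sample, mu0=0.0, alpha=0.05):
    """Large-sample z-test of H0: mean == mu0, plus the matching
    (1 - alpha) confidence interval for the mean."""
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / math.sqrt(n)
    z = (m - mu0) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    zcrit = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha=0.05
    ci = (m - zcrit * se, m + zcrit * se)
    return p, ci

random.seed(0)
sample = [random.gauss(0.3, 1.0) for _ in range(200)]
p, (lo, hi) = z_test_and_ci(sample, mu0=0.0)
# Duality: p < alpha exactly when mu0 lies outside the CI,
# so these two booleans always agree.
print(p < 0.05, not (lo <= 0.0 <= hi))
```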
The final results of an interdisciplinary study on "Quantified Self", funded by "TA Swiss" and carried out with participation of the Datalab, have been published. The study was performed by three ZHAW departments (School of Health Professions, School of Management and Law, School of Engineering) in cooperation with the Institute for Futures Studies and Technology Assessment, Berlin. The Datalab's focus was on the legal and Big Data aspects of Quantified Self.