The Rise of Natural Language Interfaces to Databases

Kurt Stockinger was invited to contribute a blog post to ACM SIGMOD, the world's leading database research community. The post discusses recent technological advances in natural language interfaces to databases. The ultimate goal is to talk to a database (almost) as one would to a human.

The full blog post can be found at the following ACM SIGMOD link:

What is the value of data privacy?

By Nico Ebert (ZHAW)

The original version of this post was published in German on Privacy Bits and English on vetri.global

In a lecture for the Fair Data Forum, I dealt with the question “What value does data protection have for individuals and what are they willing to pay for it?”

The three data privacy types

As always, there is no single "individual": everyone has different data protection preferences and thus attributes a different value to having personal data safeguarded. Various "typologies" therefore exist to classify individuals. Westin, for example, distinguishes between data protection fundamentalists, data protection pragmatists and completely unconcerned individuals. Sheehan (2002) surveyed 889 persons in the USA and classified them with a questionnaire. Conclusion: 16% of the respondents were completely unconcerned about data protection, 81% were classified as pragmatists, and 3% as fundamentalists.

Willingness to pay vs willingness to accept

Numerous experiments have been carried out to find out which types of people are willing to pay for data privacy and which are willing to share their data freely with third parties. These experiments analyze either the "willingness to accept" or the "willingness to pay". Willingness to accept describes the reward (usually monetary compensation or services) that must be offered to individuals before they are willing to share personal data. Willingness to pay examines how much individuals are willing to pay for data protection.

What the studies say

In 2017, an experiment with 3,000 students from a US university concluded that a pizza was a sufficient incentive for them to share the e-mail addresses of three fellow students. Conversely, the students were rarely willing to incur even a small additional expense for better data protection.

A different result was produced by an experiment published by Tsai et al. in 2011, which investigated the willingness to pay. In the experiment, 272 participants were recruited from the general population. The individuals received a sum of money to shop with in a laboratory setting. Using a search engine provided by the university, participants were asked to select a suitable provider for a) batteries (a good raising few privacy concerns) and b) sex toys (a good raising stronger privacy concerns) and to purchase an item. The search engine displayed the available products and their prices as a list. However, some participants were additionally shown a data protection rating of the seller (e.g. 4/4 stars or 1/4 stars). Conclusion: many participants chose a seller with a better rating and were willing to pay more for products with a high data protection rating. These results were replicated in two further studies.

A third experiment, published in 2013, also examined the willingness to pay. In an American women's clothing store, 349 women were randomly selected for a survey on spending behavior. As a reward, the participants were offered a voucher to shop with. Two types of voucher were offered: an anonymous USD 10 voucher (A) and a USD 12 voucher (B) whose purchases were not anonymous, i.e. could be traced by third parties. Groups of participants were presented with the two vouchers in different orders (A first or B first). The experiment showed that the willingness to pay (i.e. to accept the USD 2 reduction from B to A in exchange for more privacy) strongly depends on the sequence in which the two options are presented, suggesting a strong endowment effect.

Main findings

The main finding from the three studies is that the willingness to pay depends on various factors. In summary, it depends very much on a) who is addressed and b) how that individual is addressed. Individuals who are more sensitive about their data are probably willing to pay more than less sensitive ones. However, many individuals will not have a clear data protection preference that holds in all situations. Influencing factors include whether a) the added value of data protection is communicated in an understandable way (e.g. via simple ratings or a "non-traceable" USD 10 voucher) and b) whether data protection is the default option. If data protection is the default setting, individuals may forgo compensation they would otherwise receive for giving it up.


Sources

Athey, S., Catalini, C., & Tucker, C. (2017). The digital privacy paradox: small money, small costs, small talk (No. w23488). National Bureau of Economic Research.

Acquisti, A., Taylor, C., & Wagman, L. (2016). The economics of privacy. Journal of Economic Literature, 54(2), 442-92.

Acquisti, A., John, L. K., & Loewenstein, G. (2013). What is privacy worth? The Journal of Legal Studies, 42(2), 249–274.

Beresford, A. R., Kübler, D., & Preibusch, S. (2012). Unwillingness to pay for privacy: A field experiment. Economics Letters, 117(1), 25–27.

Jentzsch, N., Preibusch, S., & Harasser, A. (2012). Study on monetising privacy: An economic model for pricing personal information. Report for the European Network and Information Security Agency (ENISA). Heraklion: ENISA.

Kumaraguru, P., & Cranor, L. F. (2005). Privacy indexes: a survey of Westin’s studies (pp. 368-394). Carnegie Mellon University, School of Computer Science, Institute for Software Research International.

Sheehan, K. B. (2002). Toward a typology of Internet users and online privacy concerns. The Information Society, 18(1), 21-32.

Tsai, J. Y., Egelman, S., Cranor, L., & Acquisti, A. (2011). The effect of online privacy information on purchasing behavior: An experimental study. Information Systems Research, 22(2), 254-268.

PhD Network in Data Science: Website launched

The aim of the PhD Network in Data Science is to offer students with a master's degree (including degrees from a university of applied sciences) the opportunity to obtain a PhD in a cooperation between a university of applied sciences and a university.

The PhD Network in Data Science is supported by Swissuniversities. It is a cooperation between three departments of ZHAW Zurich University of Applied Sciences (School of Management and Law, Life Science and Facility Management, School of Engineering), three departments of the University of Zurich (Faculty of Science, Faculty of Business, Economics and Informatics, Faculty of Arts and Social Sciences), the Faculty of Science at the University of Neuchatel and the Department of Innovative Technologies at SUPSI University of Applied Sciences and Arts of Southern Switzerland.

PhD students work on applied research projects at the university of applied sciences and are supervised jointly by a supervisor at the university and a co-supervisor at the university of applied sciences. They are enrolled in the regular PhD programs of the partner universities and must go through the standard admission procedure. After successful completion, they receive the doctorate of the respective partner university. The PhD Network is also open to students with a master's degree from a university of applied sciences; they, however, must complete a convergence program (specific to the respective faculty) for admission to the partner universities.

You can find more information on our new website!

Study on “Quantified Self” Published: Links to Book and Summary

By Kurt Stockinger (ZHAW)

The final results of an interdisciplinary study on "Quantified Self", funded by TA Swiss and carried out with the participation of the Datalab, have been published. The study was performed by three ZHAW departments (School of Health Professions, School of Management and Law, School of Engineering) in cooperation with the Institute for Futures Studies and Technology Assessment, Berlin. The Datalab's focus was on the legal and Big Data aspects of quantified self.

The results are available in various forms:

Enjoy reading, and perhaps it will encourage you to "quantify yourself" a bit better 😉

Conference review of SDS|2017, Kursaal Bern, June 16

by Thilo Stadelmann (ZHAW)

In 2014, ZHAW Datalab started the SDS conference series. It was the year when only one Swiss data scientist was identifiable on LinkedIn (at Postfinance…). The year when we talked about "Big Data", not "Digitization". The year when we were unsure whether such a thing as a Swiss data science community existed, and whether it would actually turn up to such an event.

SDS grew from a local workshop to a conference with over 200 participants and international experts as keynote speakers in 2016. That was the year when a Swiss-wide network of strong partners from academia and industry finally emerged to push innovation in data-driven value creation: the Swiss Alliance for Data-Intensive Services (www.data-service-alliance.ch). We datalabbers were instrumental in founding this alliance, and then found it to be the perfect partner to take the event to the next level of professionalism.

Continue reading

OpenAI Gym environment for Modelica models

By Gabriel Eyyi (ZHAW)

In this blog post I will show how to combine dynamic models from Modelica with reinforcement learning.

As part of one of my master's projects, a software environment was developed to evaluate reinforcement learning algorithms on existing dynamic models from Modelica in order to solve control tasks. Modelica is a non-proprietary, object-oriented, equation-based language for conveniently modeling complex physical systems [1].

The result is the Python library Dymola Reinforcement Learning (dymrl) which allows you to explore reinforcement learning algorithms for dynamical systems.

The code of this project can be found on GitHub.
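While dymrl's own API is not reproduced here, the OpenAI Gym interface it builds on is simple: an environment exposes `reset()` and `step(action)`, and an agent interacts with it in an episode loop. Below is a minimal sketch with an invented toy environment (a thermostat standing in for a Modelica-backed simulation; all names and dynamics are illustrative assumptions, not dymrl's actual code):

```python
import random

class ToyThermostatEnv:
    """Minimal Gym-style environment: the agent heats or cools a room
    toward a target temperature. A real dymrl environment would instead
    wrap a Dymola simulation of a Modelica model."""

    def __init__(self, target=21.0):
        self.target = target
        self.temp = None

    def reset(self):
        # Start an episode at a random room temperature.
        self.temp = random.uniform(15.0, 27.0)
        return self.temp  # initial observation

    def step(self, action):
        # action 0 = cool by 0.5 degrees, action 1 = heat by 0.5 degrees
        self.temp += 0.5 if action == 1 else -0.5
        reward = -abs(self.temp - self.target)  # closer to target is better
        done = abs(self.temp - self.target) < 0.25
        return self.temp, reward, done, {}      # obs, reward, done, info

# Standard Gym episode loop with a trivial bang-bang policy.
env = ToyThermostatEnv()
obs = env.reset()
for _ in range(100):
    action = 1 if obs < env.target else 0
    obs, reward, done, info = env.step(action)
    if done:
        break
```

Because any environment following this `reset`/`step` contract is interchangeable, an RL algorithm written against the Gym interface can be tested on a toy model like this one and then pointed at a physically accurate Modelica model without changes.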

Continue reading

Review of SDS|2016

By Amrita Prasad (ZHAW)

It’s already been a month since we met as the Swiss data science community at our 3rd Swiss Conference on Data Science (SDS|2016), organised once again by ZHAW’s Datalab group and presented by SAP Switzerland.

Several additional organisations sponsored and supported the conference – the organising committee thanks IT Logix & Microsoft, PwC, Google, Zühlke, SGAICO, Hasler Stiftung and the Swiss Alliance for Data-Intensive Services for helping to bring together a successful event! Continue reading

When A Blind Shot Does Not Hit

In this article, I recount measures and approaches used to deal with a relatively small data set that, in turn, has to be covered "perfectly". In current academic research and large-scale industrial applications, datasets contain millions to billions (or even more) of documents. While this burdens implementers with considerations of scale at the level of infrastructure, it can make matching comparatively easy: if users are content with a few high-quality results, good retrieval effectiveness is simple to attain. Larger datasets are more likely to contain any requested piece of information, linguistically encoded in many different ways, i.e., using different spellings, sentences, grammar, languages, etc.: a "blind shot" will hit one of many targets.

However, there are business domains in which the number of core entities will never reach the billions, inherently limiting the document pool. We have recently added another successfully completed CTI-funded project to our track record, which operated in exactly such a domain. Dubbed "Stiftungsregister 2.0" [1], the project's aim was to create an application that enables users to search all foundations in existence in Switzerland.


Continue reading

Data Anonymization

I'm glad that Thilo mentioned security & privacy as part of the data science skill set in his recent blog post. In my opinion, the two most interesting questions with respect to security & privacy in data science are the following:

  • Data science for security: How can data science be used to make security-relevant statements, e.g. predicting possible large scale cyber attacks based on analysing communication patterns?
  • Privacy for data science: How can data that contains personally identifiable information (PII) be anonymized before it is handed to data scientists for analysis, such that an analyst cannot link the data back to individuals? This is typically referred to as data anonymization.

This post deals with the second question. I'll first show why obvious approaches to anonymizing data typically don't offer true anonymity, and then introduce two approaches that provide better protection.
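A classic illustration of why simply deleting names is not enough is the linkage attack: quasi-identifiers such as ZIP code, birth date and sex remain in the released data and can be joined against a public register that still carries names. Here is a small sketch with invented toy records (every name and value below is made up for illustration):

```python
# "Anonymized" release: names removed, sensitive attribute (diagnosis) kept,
# but quasi-identifiers (zip, birth, sex) left intact.
medical = [
    {"zip": "8001", "birth": "1985-03-02", "sex": "F", "diagnosis": "flu"},
    {"zip": "8400", "birth": "1990-07-15", "sex": "M", "diagnosis": "asthma"},
]

# Public register (e.g. a voter roll) with names AND the same quasi-identifiers.
voter_roll = [
    {"name": "A. Muster", "zip": "8001", "birth": "1985-03-02", "sex": "F"},
    {"name": "B. Beispiel", "zip": "8400", "birth": "1990-07-15", "sex": "M"},
]

def link(release, register):
    """Join the released data with the register on the quasi-identifier triple."""
    keyed = {(r["zip"], r["birth"], r["sex"]): r["name"] for r in register}
    matches = []
    for rec in release:
        key = (rec["zip"], rec["birth"], rec["sex"])
        if key in keyed:
            # Re-attach the name to the supposedly anonymous record.
            matches.append({"name": keyed[key], "diagnosis": rec["diagnosis"]})
    return matches

reidentified = link(medical, voter_roll)
# Every "anonymized" record is now tied to a name again.
```

In this toy example each quasi-identifier combination is unique, so every record is re-identified; this is exactly the weakness that techniques like generalization and k-anonymity are designed to counter.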

Continue reading