by Thilo Stadelmann (ZHAW)

In 2014, ZHAW Datalab started the SDS conference series. It was the year in which only a single Swiss data scientist was identifiable on LinkedIn (at Postfinance…). The year when we talked about “Big Data”, not “Digitization”. The year when we were unsure whether such a thing as a Swiss data science community existed, and whether it would actually come to such an event.

SDS grew from a local workshop into a conference with over 200 participants and international experts as keynote speakers in 2016. This was the year in which a Swiss-wide network of strong partners from academia and industry finally emerged to push innovation in data-driven value creation: the Swiss Alliance for Data-Intensive Services. We datalabbers were instrumental in founding this alliance, and then found it to be the perfect partner for taking the event to the next level of professionalism.

Thus, SDS|2017, the 4th Swiss Conference on Data Science, presented by D|ONE, was already organized by Data+Service (as we call the alliance for short), and from this year on it will be the central event of the alliance. We as Datalab will be happy contributors to its success, and expect to be joined by many others. We are very happy with this maturing process, and on a very personal note, as general chair of the last SDSs:


This year’s conference hosted a special track on «AI in Industry», co-organized with SGAICO, the Swiss AI society. It featured talks on robotic soccer (by the “coach” of the five-time world champion team, DFKI’s Thomas Röfer), on the quality of commercially available cognitive services (by HSLU’s Jana Köhler), and on different applications of machine learning in industry and medicine. Two datalabbers also presented some of our interdisciplinary efforts: Thilo and Oliver showed how deep learning can be applied successfully to very different real use cases, essentially making the point that “everything a human can see, we can teach a computer to see, too”.

The take-home message of the conference, for me, was formulated in the four keynote talks that framed the parallel tracks. Claudia Perlich of Dstillery started with a brilliant overview of how our browsing and cell phone usage behavior is used by advertisement companies. It made the efforts of infamous intelligence agencies look less dramatic in comparison. Clemens Cap of the University of Rostock built upon this by asking: “Do we need an ethics for data science?” He marshalled many smart thoughts into an argument for “yes”: greed, as the driver behind ever-increasing optimization, is seldom a good sole counselor to humans, yet it is the ultimate driver behind data science (the science of automated, perfectly rational decisions). Infinite optimization, as “promised” by algorithms, creates its own problem: nobody wants to live in such a world. This has to be overcome with solutions beyond what data science itself can offer: with a new ethics our societies can agree upon.

The late afternoon reinforced this topic with Gregory Grefenstette’s talk on how we might regain all the personal data that exists about us “on the web” in order to use it for our own purposes, and with David Kriesel’s exposition of his “SpiegelMining” exploit: the results and predictions he derived from just the metadata (!) of three years’ worth of crawling reminded us both of the power (and potential to mislead) of our results, and hence the caution with which we should present them, as well as of what others might be able to do with the traces we leave online daily.

This theme, driven by the keynotes, was a frequent topic of discussion among the ca. 270 participants during the networking breaks. I am positive that we will see a resurgence of ethical considerations of the effects our work as AI researchers and data scientists has on the way we live. As the conference suggested, the biggest problem might be neither governmental surveillance nor unemployment caused by automation, but the traces of data we as individuals leave consensually on the web, forcing our ever-optimizing economy to use them in ways that will affect our choices: less freedom in the name of rational improvement. We have yet to find an answer to this.

Slides and videos of most talks will soon be available on the conference website.