Conference review of SDS|2017, Kursaal Bern, June 16

by Thilo Stadelmann (ZHAW)

In 2014, ZHAW Datalab started the SDS conference series. It was the year in which only one Swiss data scientist was identifiable on LinkedIn (at Postfinance…). The year in which we talked about “Big Data”, not “Digitization”. The year in which we were unsure whether such a thing as a Swiss data science community existed at all, and whether it would actually come to such an event.

SDS grew from a local workshop to a conference with over 200 participants and international experts as keynote speakers in 2016. This was the year in which a Swiss-wide network of strong partners from academia and industry finally emerged to push innovation in data-driven value creation: the Swiss Alliance for Data-Intensive Services (www.data-service-alliance.ch). We as datalabbers were instrumental in founding this alliance, and then found it to be the perfect partner for taking this event to the next level of professionalism.

Continue reading

OpenAI Gym environment for Modelica models

By Gabriel Eyyi (ZHAW)

In this blog post I will show how to combine dynamic models from Modelica with reinforcement learning.

As part of one of my master’s projects, I developed a software environment to examine reinforcement learning algorithms on existing dynamic models from Modelica in order to solve control tasks. Modelica is a non-proprietary, object-oriented, equation-based language for conveniently modelling complex physical systems [1].

The result is the Python library Dymola Reinforcement Learning (dymrl), which allows you to explore reinforcement learning algorithms for dynamical systems.

The code of this project can be found on GitHub.
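To give a flavour of how such an environment is used, here is a minimal sketch following the standard OpenAI Gym control loop with a random policy. The environment id and the import line are assumptions for illustration only; the actual names are defined in the repository.

```python
import gym
import dymrl  # hypothetical import: assumed to register the Modelica-backed environments

# The environment id below is invented for illustration; use the ids
# that dymrl actually registers for your Modelica model.
env = gym.make('DymolaInvertedPendulum-v0')

for episode in range(10):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random policy as a placeholder
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print('episode %d: return %.2f' % (episode, total_reward))
```

Any reinforcement learning algorithm that speaks the Gym interface can be plugged in by replacing the random action selection.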

Continue reading

Review of SDS|2016

By Amrita Prasad (ZHAW)

It’s already been a month since we met as the Swiss data science community at our 3rd Swiss Conference on Data Science (SDS|2016), driven once again by ZHAW’s Datalab group and presented by SAP Switzerland.

Several additional organisations sponsored and supported the conference and helped make it a success – the organising committee thanks IT Logix & Microsoft, PwC, Google, Zühlke, SGAICO, Hasler Stiftung and the Swiss Alliance for Data-Intensive Services for bringing together such a successful event!

Continue reading

When A Blind Shot Does Not Hit

In this article, I recount measures and approaches used to deal with a relatively small data set that, in turn, has to be covered “perfectly”. In current academic research and large-scale industrial applications, datasets contain millions to billions (or even more) of documents. While this burdens implementers with considerations of scale at the level of infrastructure, it may make matching comparatively easy: if users are content with a few high-quality results, good retrieval effectiveness is simple to attain. Larger datasets are more likely to contain any requested piece of information, linguistically encoded in many different ways, i.e. using different spellings, sentences, grammar, languages, etc.: a “blind shot” will hit one of many targets.

However, there are business domains whose main entities will never number in the billions, inherently limiting the document pool. We have recently added another successfully completed CTI-funded project to our track record, which operated in exactly such a domain. Dubbed “Stiftungsregister 2.0” [1], the aim of the project was to create an application that enables users to search all foundations in existence in Switzerland.


Continue reading

Data Anonymization

I’m glad that Thilo mentioned Security & Privacy as part of the data science skill set in his recent blog post. In my opinion, the two most interesting questions with respect to security & privacy in data science are the following:

  • Data science for security: How can data science be used to make security-relevant statements, e.g. predicting possible large-scale cyber attacks by analysing communication patterns?
  • Privacy for data science: How can data containing personally identifiable information (PII) be anonymized before it is handed to data scientists for analysis, such that the analyst cannot link the data back to individuals? This is typically referred to as data anonymization.

This post deals with the second question. I’ll first show why obvious approaches to anonymizing data typically don’t offer true anonymity, and will then introduce two approaches that provide better protection.
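To preview why the obvious approach fails, consider the classic linkage attack: even with names removed, quasi-identifiers such as ZIP code, birth year and sex can be joined with a public register to re-identify individuals. A toy sketch of this attack (all data invented for illustration):

```python
# "Anonymized" medical records: names removed, but quasi-identifiers kept.
medical = [
    {'zip': '8005', 'birth': 1983, 'sex': 'F', 'diagnosis': 'diabetes'},
    {'zip': '8400', 'birth': 1951, 'sex': 'M', 'diagnosis': 'flu'},
]
# Publicly available register containing the same quasi-identifiers.
voters = [
    {'name': 'Alice Example', 'zip': '8005', 'birth': 1983, 'sex': 'F'},
    {'name': 'Bob Example',   'zip': '8400', 'birth': 1951, 'sex': 'M'},
]

QUASI = ('zip', 'birth', 'sex')
for record in medical:
    key = tuple(record[q] for q in QUASI)
    matches = [v['name'] for v in voters
               if tuple(v[q] for q in QUASI) == key]
    if len(matches) == 1:               # unique match: re-identification
        print('%s has %s' % (matches[0], record['diagnosis']))
```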

Continue reading

Selecting Customers by their Lifetime Value

In business, especially in marketing, it is often necessary to perform customer selection. The typical task involves defining a set of customers for upselling, cross-selling or retention actions. Traditionally, the selection criterion is the positive response probability. While this identifies the customers who are most likely to respond, it does not necessarily provide the solution from which the business profits most.

Over the past few years, the concept of customer lifetime value (CLV) has attracted increasing attention. It suggests rating and selecting customers by the present value of all future revenues attributed to their relationship. Hence, the focus lies on customer quality rather than customer quantity. While this may sound logical, it poses a huge analytical challenge: the CLV is driven by the future behavior of a customer, which of course is not known in advance. Predictive analytics can and needs to be used to model customer dynamics, and often big data is involved.
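As a reference point, here is a minimal sketch of the textbook retention-model formula for CLV: the present value of expected per-period margins, weighted by the probability that the customer is still active. In practice the margins and retention probabilities come from the predictive models mentioned above; the numbers below are invented.

```python
def clv(expected_margins, retention_rate, discount_rate):
    """Present value of expected future margins for a single customer."""
    return sum(margin * retention_rate ** t / (1.0 + discount_rate) ** t
               for t, margin in enumerate(expected_margins, start=1))

# Toy example: five years of predicted margins, 80% yearly retention, 10% discount.
print(round(clv([120, 110, 100, 90, 80], retention_rate=0.8, discount_rate=0.1), 2))
```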

Continue reading

SODA (Search Over Data Warehouse) Reloaded

Introduction

SODA (Search over Data Warehouse) provides a Google-like search interface for querying an enterprise data warehouse. The tool enables non-tech-savvy users, who do not have technical knowledge of the underlying database system or the query language SQL, to intuitively explore complex data warehouses. The main idea is to use metadata about the data model as well as inverted indexes over the base data to generate executable SQL. SODA thus combines methods from database systems, information retrieval and semantic web technology to enable self-service business intelligence.
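The following toy sketch (not SODA’s actual implementation, which among other things also derives join paths from the data model) illustrates the core idea: keywords that match schema metadata become projections, keywords found in the inverted index over the base data become selection predicates, and the hits are assembled into executable SQL.

```python
schema_metadata = {            # keyword -> fully qualified column
    'customer': 'customers.name',
    'revenue':  'orders.revenue',
}
inverted_index = {             # base-data value -> (table, column)
    'zurich': ('customers', 'city'),
}

def keywords_to_sql(query):
    selects, tables, wheres = [], set(), []
    for kw in query.lower().split():
        if kw in schema_metadata:          # keyword names a schema element
            column = schema_metadata[kw]
            selects.append(column)
            tables.add(column.split('.')[0])
        elif kw in inverted_index:         # keyword occurs in the base data
            table, column = inverted_index[kw]
            tables.add(table)
            wheres.append("%s.%s = '%s'" % (table, column, kw))
    sql = 'SELECT %s FROM %s' % (', '.join(selects), ', '.join(sorted(tables)))
    return sql + (' WHERE ' + ' AND '.join(wheres) if wheres else '')

print(keywords_to_sql('revenue customer Zurich'))
# SELECT orders.revenue, customers.name FROM customers, orders WHERE customers.city = 'zurich'
```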

SODA was originally developed as a joint research project between Credit Suisse and ETH Zurich as part of the Enterprise Computing Center (http://www.ecc.ethz.ch/research/semdwhsearch). At Zurich University of Applied Sciences we will continue the research jointly with ETH Zurich.

Continue reading

Recall-Oriented Expert Search Using Relevance Feedback

The information engineering group at InIT has recently concluded a successful CTI-funded project. The project, called “expert-match”, was conducted in cooperation with expert group ag, a professional recruitment and consulting business headquartered in Zurich, Switzerland.

Expert group’s recruiting focus lies in staffing high-profile expert positions (e.g. specialized senior software engineers, senior managers, IT architects). Usually, there are at most a handful of people qualified for a recruiting mandate within the relevant geographical region. The problem is further compounded by the fact that qualifications are rarely obvious, requiring insight into candidates’ former positions and overall skill sets. Exact matching of job descriptions against candidate profiles, as in a database environment, is thus unlikely to find the desired experts. By implementing an information retrieval (IR) application, we were able to mitigate the problem. The application fully supports iterative searches, with a particular focus on relevance feedback; a classic formulation of this idea is sketched below.
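The project’s concrete ranking method is beyond the scope of this teaser, but the classic formulation of relevance feedback is Rocchio’s algorithm: move the query vector towards documents the recruiter marked as relevant and away from those marked as non-relevant. A minimal sketch in a toy vector space:

```python
import numpy as np

def rocchio(query_vec, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio update of the query vector from user feedback."""
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * relevant.mean(axis=0)       # pull towards relevant docs
    if len(non_relevant):
        q = q - gamma * non_relevant.mean(axis=0)  # push away from non-relevant docs
    return np.maximum(q, 0.0)                      # clip negative term weights

# Toy 4-term vector space; rows are document vectors.
query        = np.array([1.0, 0.0, 0.0, 0.0])
relevant     = np.array([[0.9, 0.8, 0.0, 0.1]])
non_relevant = np.array([[0.1, 0.0, 0.9, 0.0]])
print(rocchio(query, relevant, non_relevant))
```

Iterating this update over several feedback rounds is what makes the search recall-oriented: each round broadens the query with terms from confirmed matches.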


Continue reading

User-centric Learning to Rank

In a recent CTI project with our industry partner Nektoon AG, we were involved in the development of the context intelligence application Squirro. In Squirro, users can create topics that consist of various text streams such as RSS feeds, blogs and Facebook accounts.

One particular problem was to design and implement a method to identify text documents in a stream that a user might be interested in reading. For example, in the RSS feed of a company, a user might only be interested in a specific product of that company; he will generally ignore documents about other topics and would prefer not to see them anymore. The chosen approach is to infer the future interests of a user from his past interactions with documents. From these interactions we determine a set of documents the user is expected to be interested in, and create a profile for each user using state-of-the-art text feature selection methods. This allows us to calculate how well a document matches the usual interests of a user. We then sort the documents by this score, so that documents matching the user’s interest profile most closely rise to the top ranks.
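A minimal sketch of this kind of profile-based ranking, using plain TF-IDF vectors and cosine similarity (the feature selection actually used in Squirro is more involved, and all texts below are invented):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Documents the user has interacted with, and a new batch of stream documents.
liked = ["new release of our flagship product",
         "flagship product update improves sync"]
stream = ["quarterly earnings call transcript",
          "major update announced for the flagship product",
          "office relocation announcement"]

vectorizer = TfidfVectorizer().fit(liked + stream)

# A simple interest profile: the centroid of the liked documents' vectors.
profile = np.asarray(vectorizer.transform(liked).mean(axis=0))
scores = cosine_similarity(profile, vectorizer.transform(stream))[0]

# Documents closest to the profile rise to the top ranks.
for score, doc in sorted(zip(scores, stream), reverse=True):
    print('%.3f  %s' % (score, doc))
```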

Continue reading