R: Reduce() – apply’s lesser-known brother

By Thoralf Mildenberger (ZHAW)

Everybody who knows a bit about R knows that loops are generally said to be evil and should be avoided, both for efficiency and for code readability – although one could argue about both points.

The usual advice is to use vector operations and apply() and its relatives. sapply(), vapply() and lapply() work by applying a function to each element of a vector or list and return a vector, matrix, array or list of the results. apply() applies a function along one of the dimensions of a matrix or array and returns a vector, matrix or array. These are very useful, but they only work if the function can be applied to each element independently of the others.

There are cases, however, where we would still use a for loop because the result of applying our operation to one element of the list depends on the results for the previous elements. The R base package provides the function Reduce(), which can come in handy here. It is of course inspired by functional programming, and it actually does something similar to the Reduce step in MapReduce, although it is not intended for big data applications. Since it seems to be little known even to long-time R users, we will look at a few examples in this post.
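To give a flavour of the pattern before diving into the examples, here is a minimal cross-language sketch in Python, whose functools.reduce is the direct counterpart of R’s Reduce() (the deposits and the interest rate are made up purely for illustration):

    from functools import reduce

    # A running bank balance: each year's result depends on the previous
    # year's, so an element-wise apply/map will not do, but a reduce will.
    deposits = [100, 200, 150, 300]   # made-up yearly deposits
    rate = 0.02                       # assumed annual interest rate

    balance = reduce(lambda acc, d: acc * (1 + rate) + d, deposits, 0.0)
    print(balance)  # ~767.20

The same computation with a for loop would carry the accumulator along explicitly; Reduce()/reduce simply abstracts that accumulation pattern away.

Continue reading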

Conference review of SDS|2017, Kursaal Bern, June 16

By Thilo Stadelmann (ZHAW)

In 2014, ZHAW Datalab started the SDS conference series. That was the year when only one Swiss data scientist was identifiable on LinkedIn (at Postfinance…). The year when we talked about “Big Data” and not about “Digitization”. The year when we were unsure whether such a thing as a Swiss data science community existed at all, and whether it would actually show up to such an event.

SDS grew from a local workshop to a conference with over 200 participants and international experts as keynote speakers in 2016. That was the year in which a Swiss-wide network of strong partners from academia and industry finally emerged to push innovation in data-driven value creation: the Swiss Alliance for Data-Intensive Services (www.data-service-alliance.ch). We datalabbers were instrumental in founding this alliance, and then found it to be the perfect partner for taking this event to the next level of professionalism.

Continue reading

Who is scared of the big bad robot?

By Lukas Tuggener (ZHAW)
Twitter: https://twitter.com/ltuggener/

Reposted from https://medium.com/@ltuggener/who-is-scared-of-the-big-bad-robot-dc203e2cd7c4

There is currently much talk about the recent developments in AI and how they are going to affect the way we live our lives. While fears of superintelligent robots that want to end all human life on earth are mostly held by laymen, there are other concerns that are common even among insiders of the field. Most AI experts agree that the short- to mid-term impact of AI developments will mostly revolve around automating complex tasks rather than around “artificially intelligent beings”. The well-known AI researcher Andrew Ng put it this way:
Continue reading

OpenAI Gym environment for Modelica models

By Gabriel Eyyi (ZHAW)

In this blog post I will show how to combine dynamic models from Modelica with reinforcement learning.

As part of one of my master’s projects, a software environment was developed for examining reinforcement learning algorithms on existing dynamic models from Modelica in order to solve control tasks. Modelica is a non-proprietary, object-oriented, equation-based language for conveniently modelling complex physical systems [1].

The result is the Python library Dymola Reinforcement Learning (dymrl), which allows you to explore reinforcement learning algorithms for dynamical systems.

The code of this project can be found on GitHub.
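To sketch the idea, here is a generic illustration of the OpenAI Gym interface that such an environment implements (a toy stand-in, not dymrl’s actual API): the simulated system is wrapped behind reset() and step(), and the reinforcement learning agent interacts with it through these two calls only.

    import gym
    import numpy as np
    from gym import spaces

    class ToyDynamicsEnv(gym.Env):
        # A stand-in for a Modelica-backed environment: a library like dymrl
        # would replace the toy one-dimensional dynamics below with calls
        # into a simulated Modelica model.
        def __init__(self):
            self.action_space = spaces.Discrete(2)  # e.g. actuator off/on
            self.observation_space = spaces.Box(
                low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)
            self.state = np.zeros(1, dtype=np.float32)

        def reset(self):
            self.state = np.zeros(1, dtype=np.float32)  # re-initialise the model
            return self.state

        def step(self, action):
            # One simulation step; the reward favours staying near the set point 0.
            self.state = self.state + (1.0 if action == 1 else -1.0)
            reward = -abs(float(self.state[0]))
            done = abs(float(self.state[0])) > 10.0
            return self.state, reward, done, {}

An agent then simply loops: call env.reset(), pick an action, and call env.step(action) until done is True.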

Continue reading

Review of SDS|2016

By Amrita Prasad (ZHAW)

It’s already been a month since we met as the Swiss Data Science community at our 3rd Swiss Conference on Data Science (SDS|2016), once again driven by ZHAW’s Datalab group and presented by SAP Switzerland.

Several additional organisations sponsored and supported the conference – the organising committee thanks IT Logix & Microsoft, PwC, Google, Zühlke, SGAICO, Hasler Stiftung and the Swiss Alliance for Data-Intensive Services for their support in making it a successful event!
Continue reading

Data scientists type A & B? Nonsense.

By Thilo Stadelmann (ZHAW)

Reposted from https://dublin.zhaw.ch/~stdm/?p=350#more-350

I recently came across the notion of “type A” and “type B” data scientists. While the “type A” data scientist is basically a trained statistician who has broadened his field towards modern use cases (“data science for people”), the same is true for “type B” (B for “build”, “data science for software”), who has his roots in programming and contributes more strongly to code and systems in the backend.

Frankly, I haven’t come across a practically more useless distinction since the inception of the term “data science”. Data science is the name for a new discipline that is in itself interdisciplinary [see e.g. here – but beware of German text]. The whole point of interdisciplinarity, and by extension of data science, is for its proponents to think outside the box of their original discipline (which might be statistics, computer science, physics, economics or something completely different) and to acquire skills in the neighboring disciplines in order to tackle problems outside of intellectual silos. Encouraging practitioners to stay in their silos, as this A/B typology suggests, is counterproductive at best, fatal at worst.
Continue reading

Deep learning for lazybones

By Oliver Dürr (ZHAW)

Reposted from http://oduerr.github.io/blog/2016/04/06/Deep-Learning_for_lazybones

In this blog post I explore the possibility of using a CNN trained on one image dataset (ILSVRC) as a feature extractor for another image dataset (CIFAR-10). The code, using TensorFlow, can be found on GitHub.
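The idea in a nutshell: an ILSVRC-pretrained network with its classification head removed maps each image to a fixed-length feature vector, on top of which a simple classifier can be trained. Purely as an illustrative sketch (using today’s Keras API, not the original TensorFlow code from the post):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input

    # ImageNet-pretrained CNN without its classifier head: a generic
    # image-to-feature-vector mapping.
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

    # CIFAR-10 images are 32x32; upscale them to the 224x224 input size.
    batch = np.random.rand(8, 32, 32, 3).astype("float32") * 255  # stand-in images
    resized = tf.image.resize(batch, (224, 224)).numpy()
    features = extractor.predict(preprocess_input(resized))
    print(features.shape)  # (8, 512): inputs for e.g. a linear classifier

Continue reading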

SDS|2015 Conference Review

The Swiss Data Science community recently met at SDS|2015, the 2nd Swiss Workshop on Data Science. It was a full-day event organized by ZHAW Datalab, with inspiring talks, hands-on data expeditions, and plenty of space and atmosphere for fruitful networking. The conference took place on the 12th of June at the premises of ZHAW in Winterthur. It attracted people with a wide range of skills, expertise and seniority, from doers to managers, and enjoyed very strong support from industry, showing the huge potential and scope of the subject.

Review

After the President of ZHAW kicked off the workshop, Dr. Jan-Egbert Sturm, Professor of Applied Macroeconomics and Director of the KOF Swiss Economic Institute at ETH, gave an insightful keynote talk on the use of ever-increasing datasets in macroeconomic forecasting. He explained to the audience how economic forecasting can be done using simple, standard analytical techniques. For data analytics experts it was especially interesting to see a methodology that successfully combines down-to-earth analytical techniques with in-depth knowledge of economics.
Continue reading

When A Blind Shot Does Not Hit

In this article, I recount measures and approaches used to deal with a relatively small data set that, in turn, has to be covered “perfectly”. In current academic research and large-scale industrial applications, datasets contain millions to billions (or even more) of documents. While this burdens implementers with considerations of scale at the level of infrastructure, it can make matching comparatively easy: if users are content with a few high-quality results, good retrieval effectiveness is simple to attain. Larger datasets are more likely to contain any requested piece of information, linguistically encoded in many different ways, i.e. using different spellings, sentences, grammar, languages, etc.: a “blind shot” will hit one of many targets.

However, there are business domains in which the count of the main entities will never reach the billions, inherently limiting the document pool. We have recently added another successfully completed CTI-funded project to our track record that dealt with such a business domain. Dubbed “Stiftungsregister 2.0” [1], the project aimed to create an application that enables users to search all foundations in existence in Switzerland.


Continue reading

Big Data Query Processing with Mixed Workloads

As part of a recent project called Big Data Query Processing, we evaluated complex query workloads using modern Big Data systems. In particular, we benchmarked Cloudera Impala using a business intelligence use case provided by an industry partner. The results can be found in the following blog post hosted by Cloudera:

How Impala Supports Mixed Workloads in Multi-User Environments