MDFA-Legacy: What’s New at Last?!?

Chris Blakely was here this week to work on the mixed-frequency approach:

  • Tackling signal-extraction problems with data sampled at different frequencies: for example daily market data combined with weekly and monthly macro data (jobless claims and IPI, for example).
  • This way one can target mid-term trading applications based on fundamental (macro-) economic data.
  • The filter explicitly accounts for the ‘freshness’ of the low-frequency (macro) data by re-estimating optimal coefficients each day: moving away from the release time point automatically downweights the macro series.
  • And of course you can combine this new feature with customization and regularization…
  • And cointegration…
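To give a flavour of the freshness idea, here is a minimal Python sketch. Note the hedge: the actual code is R and it re-estimates the filter coefficients each day rather than applying an explicit weight; the exponential half-life scheme and all names below are purely illustrative, not the MDFA implementation.

```python
import math

def freshness_weight(days_since_release, half_life=15.0):
    """Hypothetical down-weighting of a low-frequency series as its
    last release ages: the weight halves every `half_life` days.
    The MDFA mixed-frequency filter achieves a similar effect by
    re-optimizing coefficients daily."""
    return 0.5 ** (days_since_release / half_life)

# Weight applied to a monthly macro series over one release cycle:
weights = [round(freshness_weight(d), 3) for d in (0, 5, 15, 30)]
```

At the release date the macro series enters at full weight; by mid-cycle it counts for half, which is exactly the ‘automatic downsizing’ described above.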

I’m playing with a first prototype right now and I can tell’ya: exciting stuff! And of course the ‘exotic’ optimization criteria: all new, completely fresh. 2015 will be an exciting vintage!

Replication and Customization of ARIMA and Unobserved-Components Models

Due to a stimulating calendar filled with ongoing and prospective new research projects, I changed the schedule of my MDFA-Legacy project. The next topic (chapter) will be about replicating classic model-based approaches (ARIMA and state-space) in the generic DFA framework. Once replicated, nothing stands in the way of customization, obviously. For the unobserved-components (state-space) models, the empirical framework emphasizes quarterly (log, real) US GDP. Various time spans ending before and after the Great Recession are analyzed, as well as different models with various integration orders and/or cycle lengths (freely determined or imposed). I’ll introduce new packages, notably dlm (state-space) and Quandl (there is also a nice graphical feature with NBER recessions). Quandl is used because the data is downloaded directly from the corresponding site: the book works with fresh data…

Release-time: 2014.
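As a taste of the state-space machinery that gets replicated: below is a minimal, self-contained Kalman filter for the local-level model. This is not the book’s code (which relies on R and the dlm package for much richer trend-plus-cycle models); it is a toy Python sketch of the recursion underlying all such unobserved-components estimates.

```python
def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local-level model
        y_t  = mu_t + eps_t,   eps_t ~ N(0, sigma_eps2)
        mu_t = mu_{t-1} + eta_t,   eta_t ~ N(0, sigma_eta2)
    with a diffuse initial variance p0. Returns the filtered level
    estimates mu_{t|t}."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        # prediction step: the level follows a random walk
        p = p + sigma_eta2
        # update step
        f = p + sigma_eps2        # prediction-error variance
        k = p / f                 # Kalman gain
        a = a + k * (obs - a)     # filtered state
        p = (1.0 - k) * p         # filtered variance
        filtered.append(a)
    return filtered
```

In the chapter the same kind of recursion, fitted to quarterly log real US GDP, delivers the trend and cycle estimates that the DFA then replicates and customizes.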

ATS-Trilemma and Nesting of the Classic Mean-Square Paradigm

Let me cut and paste from the summary in MDFA-Legacy, p. 112, as posted in my previous entry:

  • The MSE (Mean-Square Error) norm can be split into Accuracy, Timeliness and Smoothness components.
  • The MSE criterion is replicated by weighting the ATS components equally. Equal weighting reflects one particular ‘diffuse’ research priority.
  • A strict MSE approach is, by definition, unable to address Timeliness and Smoothness, either separately or jointly.
  • The ATS-trilemma shrinks to an AT-dilemma in the case of classic (allpass) forecasting. Stated otherwise: classic (quasi-maximum-likelihood) forecast approaches have a blind spot.
  • Curvature and peak-correlation performances can be addressed simultaneously by customized designs.

R-code for verifying the above claims is posted in my previous entry.
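For the curious, the split in the first claim can be sketched numerically. It rests on the identity |Γ − Γ̂|² = (A − Â)² + 4AÂ sin²(ΔΦ/2), integrated over pass- and stopband. Below is a Python toy under simplifying assumptions (white-noise spectrum, ideal lowpass target with zero phase); the function names and grid are mine, not the book’s R code.

```python
import cmath
import math

def ats_decomposition(gamma_hat, cutoff=math.pi / 6, n_grid=600):
    # Split the frequency-domain MSE between an ideal lowpass target
    # (amplitude 1 in the passband, 0 in the stopband, zero phase) and
    # a candidate filter gamma_hat(w) into Accuracy, Timeliness,
    # Smoothness and a residual, via
    #   |Gamma - Gamma_hat|^2 = (A - A_hat)^2 + 4*A*A_hat*sin(dPhi/2)^2
    # Unnormalized Riemann sums over [0, pi]; white-noise spectrum.
    acc = tim = smo = res = total = 0.0
    for k in range(n_grid + 1):
        w = math.pi * k / n_grid
        a_target = 1.0 if w <= cutoff else 0.0
        g = gamma_hat(w)
        a_hat, phi_hat = abs(g), cmath.phase(g)
        amp_term = (a_target - a_hat) ** 2
        phase_term = 4.0 * a_target * a_hat * math.sin(phi_hat / 2.0) ** 2
        total += amp_term + phase_term
        if w <= cutoff:
            acc += amp_term      # Accuracy: amplitude fit in the passband
            tim += phase_term    # Timeliness: phase shift in the passband
        else:
            smo += amp_term      # Smoothness: leakage in the stopband
            res += phase_term    # residual (vanishes for an ideal target)
    return acc, tim, smo, res, total

# Candidate: equally weighted moving average of length 7
ma7 = lambda w: sum(cmath.exp(-1j * w * j) for j in range(7)) / 7.0
acc, tim, smo, res, total = ats_decomposition(ma7)
```

The components add up exactly to the total MSE; equal weighting recovers the MSE criterion, and shifting weight between Timeliness and Smoothness is precisely what customization is about.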

Teaser on Customization

I’m currently working on the customization chapter of the Legacy project; here’s a short teaser:

  • I show that customized designs outperform the best theoretical mean-square error (MSE) filter, assuming knowledge of the true data-generating process (DGP), in terms of speed (smaller time shift) AND noise suppression (smoother output), both in-sample and out-of-sample. To be fair, this has already been shown in McElroy and Wildi (the ATS-trilemma paper), but the main ‘added value’ in the book is that I reconciled different code sources, i.e. the results are safeguarded.
  • Going further, I show that a customized univariate filter also outperforms a bivariate MSE design relying on an anticipative leading indicator (leading by one time unit) in terms of speed and noise suppression. This is of course a stronger claim because the multivariate (MSE) design is ‘cheating’.

PS: I forgot to link the GDP data in my previous entry, so here it is: GDP1 and GDP2. These files are loaded by the R code of the MDFA-Legacy project.