Chris Blakely was here this week to work on the mixed-frequency approach:
- Tackling signal-extraction problems with data sampled at different frequencies: for example daily market data combined with weekly and monthly macro-data (say, jobless claims and IPI).
- This way one can target mid-term trading applications based on fundamental (macro-) economic data.
- The filter explicitly accounts for the `freshness' of the low-frequency (macro) data by reestimating optimal coefficients each day: moving away from the release time point `automatically' downweights the macro-series (a toy sketch of this idea follows below).
- And of course you can combine this new feature with customization and regularization…
- And cointegration…
I'm playing with first prototype code right now and I can tell you: exciting stuff! And of course the `exotic' optimization criteria: all new, completely fresh. 2015 will be an exciting vintage!
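To make the `freshness' mechanism concrete, here is a toy R sketch. It is purely illustrative, not the actual prototype: the function name, the exponential decay and the half-life parameter are my own hypothetical choices.

freshness_weight <- function(days_since_release, half_life = 10) {
  # Exponential decay: weight is 1 on the release day and halves
  # every half_life days thereafter (hypothetical decay law)
  0.5^(days_since_release / half_life)
}

# Relative importance of a monthly macro-series over the 30 days
# following its release: reestimating the filter each day with such
# weights automatically downsizes stale macro-information
days <- 0:30
plot(days, freshness_weight(days), type = "h",
     xlab = "days since macro release", ylab = "relative weight")

In the real mixed-frequency design the downweighting falls out of the daily reestimation itself; the explicit decay above is only meant to convey the flavour.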
I finished writing about replication and customization of univariate model-based approaches (ARIMA and unobserved-components state-space models). I have not treated classic HP- or CF-filters yet, nor have I written anything about multivariate models.
Due to a stimulating calendar filled with ongoing and prospective new research projects I changed the schedule of my MDFA-Legacy project. The next topic (chapter) will be about replicating classic model-based approaches (ARIMA and state-space) in the generic DFA-framework. Once replicated, nothing stands in the way of customization, obviously. For the unobserved-components (state-space) models the empirical framework emphasizes quarterly (log, real) US-GDP. Various time spans ending before and after the Great Recession are analyzed, as well as different models with various integration orders and/or cycle lengths (freely determined or imposed). I'll introduce new packages, notably dlm (state-space modelling) and Quandl (there is also a nice graphical feature shading NBER recessions). Quandl is used because the data is downloaded directly from the corresponding site: the book works with fresh data…
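For readers who want to experiment right away, here is a minimal sketch of this empirical setup. It assumes the Quandl code FRED/GDPC1 for quarterly real US-GDP (the book may rely on a different code) and fits a single local linear trend, whereas the book compares several integration orders and cycle specifications.

library(Quandl)
library(dlm)

gdp <- Quandl("FRED/GDPC1", type = "ts")  # quarterly real US-GDP
y <- log(gdp)                             # log-levels

# Local linear trend: parameters are log-variances of observation
# and state noise
build <- function(par) {
  dlmModPoly(order = 2, dV = exp(par[1]),
             dW = c(exp(par[2]), exp(par[3])))
}
fit <- dlmMLE(y, parm = rep(-1, 3), build = build)
smoothed <- dlmSmooth(y, build(fit$par))

plot(y, col = "grey", ylab = "log real US-GDP")
lines(dropFirst(smoothed$s[, 1]), col = "red")  # smoothed trend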
Let me cut and paste from the summary in MDFA-Legacy, p. 112, as posted in my previous entry:
- The MSE (Mean-Square Error) norm can be split into Accuracy, Timeliness and Smoothness components.
- The MSE-criterion is replicated by weighting ATS-components equally. Equal-weighting reflects one particular `diffuse’ research priority.
- A strict MSE-approach is, by definition, unable to address Timeliness and Smoothness, either separately or jointly.
- The ATS-trilemma shrinks to an AT-dilemma in the case of classic (allpass) forecasting. Stated otherwise: classic (quasi maximum likelihood) forecast approaches have a blind spot.
- Curvature and Peak-Correlation performances can be addressed simultaneously by customized designs.
R-code for verifying the above claims is posted in my previous entry.
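For reference, the decomposition behind these claims, as I recall it from the ATS-trilemma paper (the book's notation may differ slightly): writing the target as $\Gamma(\omega)=A(\omega)e^{-i\Phi(\omega)}$ and the real-time filter as $\hat\Gamma(\omega)=\hat A(\omega)e^{-i\hat\Phi(\omega)}$, one has

$$|\Gamma(\omega)-\hat\Gamma(\omega)|^2=\big(A(\omega)-\hat A(\omega)\big)^2+4A(\omega)\hat A(\omega)\sin^2\!\Big(\frac{\Phi(\omega)-\hat\Phi(\omega)}{2}\Big).$$

Integrating this against the spectral weight and splitting pass- and stop-band yields four terms: Accuracy (amplitude mismatch in the passband), Smoothness (amplitude mismatch in the stopband), Timeliness (phase mismatch in the passband) and a Residual (phase mismatch in the stopband). Weighting these equally recovers the MSE norm, which is why a strict MSE-approach cannot address Timeliness or Smoothness separately.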
I finished writing chapters 5 and 6 on the ATS-trilemma and customization; previous chapters are slightly revised. Here are the relevant links:
I’m currently working on the customization chapter in the Legacy-project and here’s a short teaser:
- I show that customized designs outperform the best theoretical mean-square error (MSE) filter, which assumes knowledge of the true data-generating process (DGP), in terms of speed (smaller time-shift) AND noise suppression (smoother output), both in-sample and out-of-sample. To be fair, this has already been shown in McElroy and Wildi (the ATS-trilemma paper), but the main `added value' in the book is that I reconciled different code sources, i.e. the results are safeguarded.
- Going beyond, I show that a customized univariate filter also outperforms a bivariate MSE-design relying on an anticipative leading indicator (leading by one time unit) in terms of speed and noise suppression. This is of course a stronger claim because the multivariate (MSE) design is `cheating' (a schematic sketch of a customized criterion follows below).
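To fix ideas, here is a schematic, self-contained R sketch of a customized criterion in the ATS-spirit. This is my own toy version: the ideal lowpass target, the flat (white-noise) pseudo-spectrum and the exact weighting of the Smoothness term via eta are illustrative choices, and the book's criterion may differ in detail.

cust_crit <- function(b, omega, cutoff, lambda, eta) {
  L <- length(b)
  # transfer function of the one-sided filter b_0, ..., b_{L-1}
  Ghat <- sapply(omega, function(w) sum(b * exp(-1i * w * (0:(L - 1)))))
  G <- as.numeric(omega <= cutoff)        # ideal lowpass target
  A <- abs(Ghat); Phi <- Arg(Ghat)        # amplitude and phase
  amp <- (G - A)^2                        # amplitude mismatch
  tim <- 4 * G * A * sin(Phi / 2)^2       # Timeliness (phase mismatch)
  pass <- omega <= cutoff
  # lambda emphasizes Timeliness (passband), eta emphasizes Smoothness (stopband)
  sum(amp[pass] + (1 + lambda) * tim[pass]) +
    sum(amp[!pass] * (1 + omega[!pass] - cutoff)^eta)
}

omega <- seq(0, pi, length.out = 301)
fit <- optim(rep(1 / 12, 12), cust_crit, omega = omega, cutoff = pi / 6,
             lambda = 30, eta = 1, method = "BFGS")
round(fit$par, 3)  # customized filter coefficients

Setting lambda = eta = 0 brings the criterion back to a plain (flat-spectrum) MSE fit, which is exactly the replication-then-customization logic of the chapter.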
PS: I forgot to link the GDP-data in my previous entry, so here it is: GDP1 and GDP2. This data is loaded by the R-code of the MDFA-Legacy project.
In the current `state of affairs' it is impossible to upload R-code to the blog. Therefore I had to find another solution. You now have access to the R-code.
The R-code is ready but I'm unable to upload the files for security reasons. I'll find a solution… In the meantime here's an updated version of the book MDFA-Legacy. You may have a look at the new sections in chapters 2-4. I finally managed to tackle the tedious i1=F, i2=T case in my code.
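For readers who wonder about the i1/i2 flags, as far as I recall the DFA conventions (details are in the Filter Constraints chapter): for a filter $\hat\Gamma(\omega)=\sum_{k=0}^{L-1} b_k e^{-ik\omega}$, the flag i1=T imposes the level constraint $\hat\Gamma(0)=\sum_k b_k=\Gamma(0)$ (typically 1 for a lowpass target), while i2=T constrains the time-shift at frequency zero, which is approximately $\sum_k k\,b_k/\sum_k b_k$. The combination i1=F, i2=T is tedious precisely because the time-shift is pinned down while the level $\sum_k b_k$ is left free.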
Here's a first version of the MDFA-Legacy book, with chapters on
- Filter revisions
- Filter Constraints
see MDFA-Legacy. The book is generated in the Sweave environment and I’ll post the R-code for replicating results (tables, graphs) soon.
I'm currently working on the MSE-section of the MDFA book-project. A first draft will be released soon. I make extensive use of my DFA-manuscript: I urge interested readers to review section 4.1 and, in particular, exercises 1 and 2 (in section 4.1.1). This DFA-material will be `copy-pasted' and generalized to a multivariate 2-dim setting. Ideas and concepts developed in these exercises will be assumed to be known.
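As a pointer for readers without the DFA-manuscript at hand, the univariate criterion to be generalized is (up to notational details in section 4.1)

$$\min_b \sum_{k} \big|\Gamma(\omega_k)-\hat\Gamma_b(\omega_k)\big|^2\, I_{N}(\omega_k), \qquad \omega_k=\frac{2\pi k}{N},$$

where $\Gamma$ is the target, $\hat\Gamma_b$ is the transfer function of the one-sided filter with coefficients $b$, and $I_N$ is the periodogram of the data. In the 2-dim setting the scalar periodogram is replaced by the periodograms and cross-periodograms of the two series, which is the generalization sketched in the exercises.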