Topic: What is the point of forward testing?
After several weeks of playing around with FSB/FST, I've come up with some procedures to carry a strategy through its lifecycle, essentially from inception to death. I like to write and refine procedures for these kinds of things because they help me stay consistent. I would like to believe that a disciplined, procedural approach to managing a legion of tradebots will translate into a consistent flow of live success as well.
Part of my approach has always been to 'test drive' the bots on my demo accounts before letting them graduate to their own live accounts. But I have been thinking about this recently and questioning the purpose of this stage. For example, I would normally generate and optimize a new bot using 30% OOS as part of the process. The visual OOS performance lets me make a discretionary value judgment between all the rookies, and it's naturally a great screener for duds. The cream-of-the-crop rookies I would then let perform in the wild using FST on a demo account. After the demo period I would graduate the best-of-breed to join the legion, each with its own live account.
It occurred to me the other day that perhaps the entire demo forward-testing stage is unnecessary. I already trust that FST performs equally well trading demo or live, so there's nothing to prove in that regard. It seems I could just increase the amount of OOS data to achieve the same effect I get from real-life forward testing, except that it would be instantaneous.
My understanding of the OOS sample percent is that it is only treated as OOS (i.e. ignored by FSB) during the Generator stage. That means further manipulations occurring after the Generator could be skewed by incorporating the OOS portion. For example, I manually set the SL/TP/BE for all rookies by hand. For reasons such as these, it makes sense to physically separate and control the OOS data.
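To make the "physically separate" idea concrete, here is a minimal sketch of splitting an M1 bar series so that anything done after the Generator (like hand-tuning SL/TP/BE) can only ever see the in-sample portion. The 30% share and the toy bar tuples are assumptions for illustration; in practice this would be a cut of the actual datafile.

```python
def split_data(rows, oos_fraction=0.30):
    """Split chronological bars into (in_sample, out_of_sample).

    The last oos_fraction of the series is held out entirely, so
    nothing downstream of the Generator can peek at it.
    """
    cut = int(len(rows) * (1 - oos_fraction))
    return rows[:cut], rows[cut:]

# Toy stand-in for M1 bars: (timestamp, open, high, low, close, volume)
bars = [(i, 1.0, 1.1, 0.9, 1.05, 100) for i in range(1000)]

in_sample, oos = split_data(bars)
print(len(in_sample), len(oos))  # 700 300
```

The key point is that the split is by position in time, never a random shuffle, so the holdout stays a genuine "future" relative to everything the bot was tuned on.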
So my idea is to use the Generator's OOS% as an initial screen only, and then follow up with an instant forward test by swapping the datafile. I could generate as usual, but using a dataset that is outdated by the same period I would normally forward test with (in my case, one full week of M1 data). Then, for the best-of-breed bots, I could swap in the latest dataset and study the further OOS performance curve. Based on this, I think a bot could be approved to join the legion immediately.
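The "instant forward test" step above can be sketched as: run the frozen bot over the freshly swapped-in week it has never seen, then apply an approval rule to that slice of the equity curve. The trade P&Ls, starting balance, and 5% drawdown threshold here are all hypothetical placeholders; in practice the curve would come from FSB after the datafile swap and be judged by eye.

```python
def equity_curve(trade_pnls, start=10_000.0):
    """Cumulative balance after each trade of the unseen week."""
    curve, bal = [], start
    for pnl in trade_pnls:
        bal += pnl
        curve.append(bal)
    return curve

def approve(curve, start=10_000.0, max_dd=0.05):
    """Approve if the unseen week ends profitable with bounded drawdown."""
    peak, worst = start, 0.0
    for equity in curve:
        peak = max(peak, equity)
        worst = max(worst, (peak - equity) / peak)
    return curve[-1] > start and worst <= max_dd

# Hypothetical trades from the newly revealed week of data
week_pnls = [120, -80, 60, -40, 90]
curve = equity_curve(week_pnls)
print(approve(curve))  # True: net +150 with a drawdown under 1%
```

An explicit rule like this is optional, of course; the point is only that approval happens against data that was outdated at generation time, which is what makes the test "forward" despite being instantaneous.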
Am I missing something here? Why didn't I think of this before? Do you think it's a bad idea to eliminate forward testing on the demo account altogether? If so, why?