Topic: Interpreting the Optimizer's result

I have a strategy that I thought was pretty good, and then when it ran through the Optimizer (I told it to optimize everything), it appeared to become much better. I understand that this is somewhat to be expected, because it is now "curve fit" to the most ideal conditions for this data.

Well, what surprised me is that the most dramatic improvement came AFTER the OOS line (the final 30%). I told it, in both the Generator and Optimizer steps, to set aside 30% for OOS. My understanding is that this 30% would not be used in calculating the strategy, only as a forward test.

After the Optimizer ran, the portion after the OOS line became much better looking than the portion before it, in my opinion.

Is this normal, or unusual? I would have thought that the OOS portion would be more likely to perform in line with the pre-OOS portion, or even go sideways.
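For anyone unfamiliar with the split being described, here is a minimal sketch of it in Python (the `bars` list is just a placeholder, not FSB's actual data format): the first 70% of the history is what the Generator/Optimizer fits against, and the last 30% is only charted as a forward test.

```python
# Minimal sketch of a 70/30 in-sample / out-of-sample split.
# `bars` is a placeholder list standing in for real price bars.
bars = list(range(1000))

split = int(len(bars) * 0.70)   # first 70%: used for generating/optimizing
in_sample = bars[:split]
out_of_sample = bars[split:]    # last 30%: forward test only

print(len(in_sample), len(out_of_sample))  # prints: 700 300
```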

I'm attaching a screenshot here of the graph after the Optimizer ran. Thanks


Re: Interpreting the Optimizer's result

It looks like you have developed a good system.

This happens to me at times. It just means the strategy has characterised the market well, which is a good thing.

I use 2 sets of data for my testing and place the second set in a data directory called "walkforward".  This directory contains data unseen by FSB so there is no way the generator or optimizer can use it in generating the strategy. I think that is the only way you can really be sure of the final strategy.

There was a discussion on here about the use of "filter non-linear balance pattern": by using that feature you have effectively contaminated your in-sample data with OOS data, because strategies are rejected based on how their OOS results look. This is why a walk-forward test is the only way to be sure of what you have generated.
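The contamination point can be shown with a toy simulation (hypothetical numbers, and every "strategy" here is pure noise with zero true edge): if you keep only the strategy whose OOS curve looks best, the survivor's OOS result is inflated by construction.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N_STRATEGIES = 200   # hypothetical number of generated strategies
N_OOS_BARS = 100     # hypothetical length of the OOS segment

# Each "strategy" is pure noise: its true expected edge is zero.
def oos_performance():
    return sum(random.gauss(0, 1) for _ in range(N_OOS_BARS))

results = [oos_performance() for _ in range(N_STRATEGIES)]

average = sum(results) / len(results)   # close to zero, as expected
best = max(results)   # what survives a filter rejecting poor OOS curves

print(f"average OOS result over all strategies: {average:+.1f}")
print(f"OOS result of the surviving 'best' strategy: {best:+.1f}")
```

Because the filter keeps only the lucky tail, the surviving OOS curve looks impressive even though no strategy has any edge, which is why a separate, never-seen walk-forward data set is the cleaner check.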

I am sure the forum would appreciate it if you were to publish your strategy.  We may be able to suggest ways of improving it further.

Re: Interpreting the Optimizer's result

I use 2 sets of data for my testing and place the second set in a data directory called "walkforward".  This directory contains data unseen by FSB so there is no way the generator or optimizer can use it in generating the strategy.

I'm experimenting with data profiles for FSB. Each set has its own data directory and data settings (symbols, spread, slippage...). This can be used to switch between two brokers' data or, as you said, for forward testing. The data profiles will be easily switched from a menu next to the data periods.
This is not a priority feature for the moment, but someday it will be included in FSB.

Another possibility is to use "Data Horizon". It effectively cuts off the data, making it invisible to FSB.


Re: Interpreting the Optimizer's result

Popov wrote:

... I'm experimenting with data profiles for FSB...

What about some form of basic 'messaging interface' that allows two instances of FSB to communicate, eg:

- Instance A runs from \FSB\Development
  - Uses a subset of data

- Instance B runs from \FSB\Testing
  - Uses all of the data

Strategy development would be something like:
- Development (generator or otherwise) on Instance A
- Optimize on Instance A
- The results are passed to Instance B, which would backtest against the full data set.

The message interface could just use a drag-drop between the two instances (so instead of only being able to drag a strategy IN, we could drag a strategy OUT and drop it on Instance B).

As each instance has its own data sets and settings, there would be no need to develop anything other than the messaging interface. The installer could be modified to support the installation of 2 copies of FSB (into their own directories, with their own settings and data).

I personally would never want to optimize against the full data set, and am only really interested in strategies that behave the same in a blind walk-forward test. So this is very similar to how I use the product now, except that I am saving the files from one instance and opening them in a second instance to test (but not optimize) against a large data set.

ab

Re: Interpreting the Optimizer's result

The message interface could just use a drag-drop between the two instances (so instead of only being able to drag a strategy IN, we could drag a strategy OUT and drop it on Instance B).

You can do this now.
Make a new data folder and put another data set in it.
Open two copies of FSB.
On the second copy change the data directory to the second data set.
When you generate a strategy in the first FSB, copy it (Ctrl+C) and paste it (Ctrl+V) into the second FSB.
Do the testing in the second FSB.


Re: Interpreting the Optimizer's result

Copy/Paste ... awesome. 

Thanks for the tip.

ab

Re: Interpreting the Optimizer's result

SpiderMan wrote:

I am sure the forum would appreciate it if you were to publish your strategy.  We may be able to suggest ways of improving it further.

Yep yep, I plan to do this! I actually have 3 M1 strategies I'm happy with now, and I'm hoping for a fourth by the end of the weekend. I can't really take credit for inventing any of these, as it was really FSB that found them all! But I would love to get some expert feedback and opinions to know when these bots will be ready for a live account. I'm only forward testing at the moment, but my goal is to systematically manage a growing legion of tradebots that would be continually tracked: those that make the grade will be allowed to keep evolving, while those that underperform will be swiftly replaced with fresh new rookies churning out of FSB. I plan to launch with 4 mini accounts across 2 different brokers and expand to additional accounts as the bots grow. Right now I'm focusing only on EURUSD, but I will start researching another market once I can get this one up and running and prove that my ongoing intervention steps are working.

Re: Interpreting the Optimizer's result

I plan to launch with 4 mini accounts across 2 different brokers and expand to additional accounts as the bots grow.

The first test you should do for a broker is to open a demo account and run FST with the included TestTrade strategy. Or even better, start 4-5 copies on different charts. It trades every minute and shows how well the broker works with FST (at least on a demo).