Topic: Definition of insanity...looking for some guidance
They say the definition of insanity is doing the same thing over and over again and expecting a different result. I'm getting there...
So, I'm asking forum members to try to point out what I must be doing wrong when generating a portfolio of strategies, because as soon as I put them into the market, they start to lose money.
The portfolios I use are all focused on AUDUSD M15 and generally have between 50 and 80 strategies in them. Let me also add that I just love this software - it makes it all so quick to generate and test things. But...
Here's my current workflow:
1) Download the latest M15 AUDUSD data from my broker, which generally has a zero spread, although I set the spread manually to 2 (I've also tested 10).
2) Ensure the commission is set to 3.5 per lot, per side.
3) Set the data horizon to anywhere between Sept 2021 and June 2022 - I've used different start dates so that I can backtest properly and get a decent OOS sample as well. Shorter time horizons don't give enough time (IMHO) for a decent OOS. (There's a rough sketch of how I think about the split after this list.)
4) Set up the Reactor with the following:
a) Strategy: long and short; ignore or reverse; always use stop loss, fixed or trailing (35/250); take profit: may use (10/1000)
b) Generator: Net Balance, 20% OOS, Entry=4, Exit=2
c) Full Data Opt: Balance Line Stability, 20% OOS, Opt SL/TP
d) Normalisation: everything ticked, Net Balance, 20% OOS
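(For context, the way I think about the IS/OOS split is just date arithmetic. Here's a minimal Python sketch - nothing EAS-specific; the dates and the 20% figure are simply the examples from above.)

```python
from datetime import datetime

def oos_start(horizon_start: datetime, horizon_end: datetime, oos_fraction: float) -> datetime:
    """Return the date where the OOS portion of the horizon begins (e.g. oos_fraction=0.20)."""
    return horizon_end - (horizon_end - horizon_start) * oos_fraction

# Example figures only: a Sept 2021 - June 2022 horizon with 20% OOS.
start, end = datetime(2021, 9, 1), datetime(2022, 6, 30)
print(oos_start(start, end, 0.20))  # roughly the last two months of the horizon are OOS
```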
Once it has generated 200-300 strategies, I then filter the collection with the following (there's a quick sketch of this filter step after the list):
a) Min Profit Factor: 2
b) Min Balance Stability: 85
c) Min Win/Loss: 0.65
d) SQN: 3
- I'll play around with those numbers initially to get a list of approximately 70-80 acceptable strategies.
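To make that filter step concrete, it boils down to something like the sketch below. This is hypothetical Python, not anything EAS exports - the metric keys and the example records are my own names for illustration.

```python
# Illustrative records only - the keys are my own names, not actual EAS export fields.
strategies = [
    {"name": "S1", "profit_factor": 2.3, "balance_stability": 91, "win_loss": 0.71, "sqn": 3.4},
    {"name": "S2", "profit_factor": 1.6, "balance_stability": 88, "win_loss": 0.55, "sqn": 2.1},
]

# Minimum acceptable values, matching the thresholds listed above.
filters = {"profit_factor": 2.0, "balance_stability": 85.0, "win_loss": 0.65, "sqn": 3.0}

def passes(strategy: dict, thresholds: dict) -> bool:
    """Keep a strategy only if every metric meets its minimum."""
    return all(strategy.get(metric, 0.0) >= minimum for metric, minimum in thresholds.items())

accepted = [s for s in strategies if passes(s, filters)]
print(f"{len(accepted)} of {len(strategies)} strategies pass")  # -> 1 of 2 strategies pass
```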
Then, taking a leaf out of @sleytus's approach, I look through each strategy's balance curve to check that its slope continues into the OOS period at a similar gradient to the IS data. I'll also check that the balance curve doesn't "top out" during the OOS period. I'll remove the strategies I don't like the look of.
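The slope check is done by eye, but in code it's really just comparing two fitted line slopes. Here's a rough sketch of what I mean, assuming the balance curve is a plain array of balance values and I know the index where the OOS period starts (numpy only; the toy curve is made up):

```python
import numpy as np

def segment_slope(balance: np.ndarray) -> float:
    """Fit a straight line to a balance segment and return its slope (balance units per trade)."""
    x = np.arange(len(balance))
    slope, _intercept = np.polyfit(x, balance, 1)
    return slope

def slope_ratio(balance: np.ndarray, oos_start_index: int) -> float:
    """Ratio of OOS slope to IS slope; close to 1.0 means the curve keeps climbing at a similar rate."""
    return segment_slope(balance[oos_start_index:]) / segment_slope(balance[:oos_start_index])

# Toy curve that keeps the same slope into OOS -> ratio of about 1.0.
curve = np.array([10_000 + 12 * i for i in range(300)], dtype=float)
print(round(slope_ratio(curve, oos_start_index=240), 2))
```

A ratio well below 1.0 (or negative) is the "topping out" I try to throw away.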
Once I'm happy with them, I'll add them to the Portfolio, recheck the metrics for the portfolio, and then export it to MT5. So it's all good up to this point.
I'll then open it in MT5 and run a backtest to confirm it gets similar results, which it usually does.
I'll then export the deals into a spreadsheet I have and run a number of other checks across it, looking at daily portfolio metrics and comparing OOS vs IS.
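Those spreadsheet checks amount to grouping the exported deals by day and comparing the IS days against the OOS days. Roughly what I do, as a pandas sketch - the file name, column names, and boundary date are placeholders for whatever the MT5 report actually exports:

```python
import pandas as pd

OOS_START = pd.Timestamp("2022-05-01")  # placeholder boundary date

# Placeholder export: one row per closed deal with a close time and a profit column.
deals = pd.read_csv("deals_export.csv", parse_dates=["close_time"])
daily = deals.set_index("close_time")["profit"].resample("D").sum()

is_daily = daily[daily.index < OOS_START]
oos_daily = daily[daily.index >= OOS_START]

summary = pd.DataFrame(
    {
        "mean_daily_profit": [is_daily.mean(), oos_daily.mean()],
        "std_daily_profit": [is_daily.std(), oos_daily.std()],
        "losing_days_pct": [(is_daily < 0).mean() * 100, (oos_daily < 0).mean() * 100],
    },
    index=["IS", "OOS"],
)
print(summary)
```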
Once again, if it all checks out, and it usually does, I'll push the portfolio (of maybe 50-80 strategies) to my VPS and run it on a demo account for a while. I have compared backtests from my VPS to my desktop and they are reasonably similar... so the issue isn't there.
But this is generally when things start to fall over. Typically, the balance curve starts to flatten off pretty quickly (within a couple of days) and then deteriorates, and I'm trying to figure out what I'm doing wrong. It's also easy to look at the curve and pick the point where the OOS period actually ended, because from that point on it starts to decline.
Clearly my process is overfitting somewhere, but I'm not sure where. I've tried using different data horizons, but I get similar results. I've also tried running the data horizon up to a date about 4 weeks back (so I might use 5 months of IS, then 1 month OOS, ending a month ago, which is about 20% OOS) and then backtesting it in MT5 to see what happens over the last month... and I get the same flattening of the curve to the point where it loses money. I've also tried different OOS percentages.
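For that last experiment, the window arrangement I'm describing is roughly the one in this sketch - months are crudely approximated as 30 days, and the anchor date is just an example:

```python
from datetime import date, timedelta

def build_windows(anchor: date, is_months: int = 5, oos_months: int = 1, forward_months: int = 1):
    """Rough month-based windows: IS, then OOS, then an unseen forward month ending at the anchor date."""
    month = timedelta(days=30)  # crude approximation, good enough for a sketch
    forward_end = anchor
    forward_start = forward_end - forward_months * month  # the "last month" left out of generation
    oos_end, oos_start = forward_start, forward_start - oos_months * month
    is_end, is_start = oos_start, oos_start - is_months * month
    return {"IS": (is_start, is_end), "OOS": (oos_start, oos_end), "forward": (forward_start, forward_end)}

for name, (w_start, w_end) in build_windows(date(2022, 6, 30)).items():
    print(f"{name:8s} {w_start} -> {w_end}")
```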
Things that I've checked:
1) Made sure the broker data is the same across live and demo, and that the spreads are the same (see the sketch after this list)
2) Made sure that the broker data is loaded consistently into EAS
3) Tried different approaches to the Opt/Norm stage, using different "search best..." settings
4) Tried really short data horizons, but the OOS doesn't tend to generate enough results for me to be comfortable with them (only a few days in OOS)
5) Ensured my latency is good - around 70ms.
6) Checked the MQL signals for slippage; there doesn't appear to be anything worth worrying about
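For item 1, the consistency check I run is essentially a bar-by-bar comparison of the two exported M15 files. A sketch of the idea - the file names and column names are placeholders for whatever your broker/MT5 export actually contains:

```python
import pandas as pd

# Placeholder exports: one M15 bar per row with a timestamp, close price, and recorded spread (points).
live = pd.read_csv("AUDUSD_M15_live.csv", parse_dates=["time"]).set_index("time")
demo = pd.read_csv("AUDUSD_M15_demo.csv", parse_dates=["time"]).set_index("time")

# Compare only the timestamps both feeds have in common.
joined = live.join(demo, lsuffix="_live", rsuffix="_demo", how="inner")

close_diff = (joined["close_live"] - joined["close_demo"]).abs()
spread_diff = (joined["spread_live"] - joined["spread_demo"]).abs()

print("bars compared:", len(joined))
print("max close difference:", close_diff.max())
print("mean spread difference (points):", spread_diff.mean())
```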
I'm curious how others are developing strategies and putting them into the market in a way where the strategies last at least a few weeks before they need to change... I fully understand that no strategy is perfect or will last forever, but to my thinking, if they've been generated on a reasonable length of in-sample data (so they've seen a few different market conditions) and tested on out-of-sample data (again, a few different market conditions) - which is why I like 12 months of data with 20-30% OOS - then they should at least last a little while, as they've seen a few conditions and been optimised for them...?
I know the market conditions are always changing, but this is why we test to generate robust strategies...although clearly I'm doing something wrong. Please help fix my insanity.
So I'm wondering if anyone has any pearls of wisdom that they'd be comfortable sharing with me - so I can get off the merry-go-round of insanity.
Many thanks.