
Topic: How do the "experts" avoid overfitting?

I've been experimenting with this concept for many weeks. I've tried everything, but I always end up with overfitted strategies. Take a look at this.

This is a portfolio of the top 50 strategies:

  • From 1 Jan 2022 to 1 June 2025

  • 30% of the generation is OOS, meaning the data from 1 June 2024 to 1 June 2025 is OOS (see the quick check below)

https://i.imgur.com/b58SWtD.png
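
As a quick sanity check of that split, a minimal Python sketch (dates taken from the bullets above; the few days of difference are just calendar rounding):

```python
from datetime import datetime

start = datetime(2022, 1, 1)
end = datetime(2025, 6, 1)

# 30% of the full range, counted back from the end, marks the OOS start
oos_start = end - (end - start) * 0.30
print(oos_start.date())  # 2024-05-22 -- roughly the stated 1 June 2024
```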

It looks pretty good: one year of solid OOS results. Now suppose we run this portfolio live from 1 June 2025 to 5 Dec 2025:
https://i.imgur.com/LqwjiLK.png

These are the classic results you get from a strategy that was overfitted during the in-sample period. In fact, the act of choosing the best OOS strategies is just another way of overfitting, no different from tuning an indicator's periods to make the equity curve look better.

I've tried every IS-OOS combination that came to my mind, but through EA Studio I only ever come up with overfitted strategies that start to perform badly as soon as they are introduced to true OOS data.

Does anyone have the same problem? How did you solve it?
Thanks

Re: How do the "experts" avoid overfitting?

I wrote my opinion about OOS several years ago. Basically, it only heats the universe.

Please test the strategies with the new "Max Spread Protection" option set to a meaningful value. Let's say about 30-50 points.

Then look at the strategies one by one and see if the logical rules are meaningful (which is subjective, of course).

Re: How do the "experts" avoid overfitting?

> I wrote my opinion about OOS several years ago. Basically, it only heats the universe.

Thanks, care to explain what you mean by that? Or maybe you can point me to the post where you talked about it?

Thanks

Re: How do the "experts" avoid overfitting?

> Thanks, care to explain what you mean by that?

A) Let's run the Generator with 30% OOS and generate 100 strategies. Then enable all the data and validate the strategies again. Say 20 strategies show good performance.

B) Run the Generator on the complete data set and stop it when it finds 20 strategies.

Result: In both cases, we have 20 strategies that perform well on the complete data set.

Question: Are the A-strategies better than the B-strategies?

Let's assume the A-strategies are better and think of the reasons for that:
- they are battle-scarred
- they are the best 20%, left after eliminating the failures
- we used more complex software to find them
- we worked more and used more knowledge to find them

What if it turns out that we have the same strategies in both collections?
I'm pretty sure all the B-strategies would pass the OOS criteria (except if we have some "U"-shaped curves :) )
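
To see why, here is a quick toy simulation (my own sketch in Python, not anything EA Studio does internally): 1000 strategies whose daily returns are pure noise, filtered in-sample and then ranked by OOS performance, i.e. the A-workflow. The survivors look great on the data used to select them and go flat on unseen data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 hypothetical strategies with zero real edge: returns are pure noise
returns = rng.normal(0.0, 0.01, size=(1000, 900))
is_, oos, live = returns[:, :600], returns[:, 600:750], returns[:, 750:]

# Keep in-sample winners, then rank the survivors by OOS performance --
# two rounds of selection applied to nothing but noise
winners = np.flatnonzero(is_.mean(axis=1) > 0)
top = winners[np.argsort(oos[winners].mean(axis=1))[::-1][:50]]

print("selected, OOS mean :", oos[top].mean())   # clearly positive
print("selected, live mean:", live[top].mean())  # ~0: the "edge" was noise
```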

...

What I do (not advice!)
- run the generator
- look at the strategies one by one to see if the indicator rules are in sync
- play with the numeric parameters with the mouse wheel to see how the balance changes
- test on data from a different broker
- put it on MetaTrader to see what will happen, without any expectations. If it works, great.

Re: How do the "experts" avoid overfitting?

If that is your approach, why is there no way in EA Studio to see a chart where:

x-axis : indicator parameter
y-axis : balance

The idea is that a good parameter should sit in a zone where changing it a bit doesn't drastically change the strategy's outcome. If the strategy performs well only at one specific parameter value, it is most likely overfit. A sketch of the idea follows below.
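
The chart is easy to sketch outside EA Studio. Here is a toy Python example; the backtest() below is a stand-in moving-average strategy on synthetic prices, not EA Studio's engine. A robust parameter sits on a broad plateau of the balance curve; an overfit one sits on an isolated spike:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
prices = 100 + rng.normal(0, 1, 2000).cumsum()  # synthetic price series

def backtest(period, prices):
    """Toy MA-crossover backtest: final balance for one parameter value."""
    ma = np.convolve(prices, np.ones(period) / period, mode="valid")
    signal = np.sign(prices[period - 1:-1] - ma[:-1])  # +1 long, -1 short
    pnl = signal * np.diff(prices[period - 1:])        # next-bar P/L
    return 10_000 + pnl.sum()

periods = list(range(5, 200, 5))
balances = [backtest(p, prices) for p in periods]

plt.plot(periods, balances)
plt.xlabel("indicator parameter (MA period)")
plt.ylabel("final balance")
plt.title("A robust parameter sits on a plateau, not a spike")
plt.show()
```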