Topic: The Extra OOS Trick in EA Studio With Real Examples
This is something I don’t see many people talk about, but for me this was one of the biggest shifts in my whole process. And you can literally see it in the screenshots.
What I did here is actually very simple, but the impact is huge.
In screenshot #2, you can see the strategy built on data from 2009 to 2020.
This is close to how I was actually doing it back in early 2024.
At the time I was using slightly different starting ranges, sometimes more like 2007–2019 extended forward, but the idea itself was exactly the same.
That is the important part.
The principle never changed:
build first, then push the same strategy into extra unseen data.
This is the normal build phase. This is where the strategy is generated and optimized. At that stage, the strategy already looks solid. The equity curve is good, the net profit is there, the trade count is decent, and the SQN is already 3.45.
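Since the SQN value carries a lot of weight here, it helps to know what it measures. A minimal sketch of the standard System Quality Number formula is below; the trade list is hypothetical, and EA Studio's exact implementation may differ in small details.

```python
import math

def sqn(trade_profits):
    """System Quality Number: sqrt(N) * mean / stdev of per-trade profits.

    Standard SQN formula; EA Studio's internal calculation may differ
    slightly (e.g. in how it treats very large trade counts)."""
    n = len(trade_profits)
    mean = sum(trade_profits) / n
    var = sum((p - mean) ** 2 for p in trade_profits) / (n - 1)
    return math.sqrt(n) * mean / math.sqrt(var)

# Hypothetical per-trade profits, not the real EA 165 trade list:
trades = [120, -40, 85, 60, -55, 90, 30, -20, 75, 50]
print(round(sqn(trades), 2))
```

The key point: SQN rewards consistency, not just total profit, because the standard deviation of the trades sits in the denominator.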
For many people, that would already be enough to say the strategy looks good.
But this is where I started thinking differently.
Because for me, it is not enough that a strategy looks good only on the period where it was built and optimized.
That is exactly where many people stop too early.
They build, they check the normal out of sample inside EA Studio, and then they move on.
But I wanted to push the strategy further.
I wanted to see what really happens when you expose it to data it has never seen.
So what I started doing back then was this:
after generating and optimizing the strategy, I do not touch the strategy itself anymore.
I do not change the rules.
I do not reoptimize.
I do not tweak anything.
I only change one thing:
the end date of the data.
So the strategy stays exactly the same, but now I extend the end date forward.
And this is exactly what you see in screenshot #1.
The same strategy is now extended from 2009–2020 to 2009–2024.
And that is the key point.
That extra period was never optimized.
So what you are looking at there is pure unseen data.
Real out of sample.
No fitting
no tweaking
no hidden optimization
Just the strategy being pushed forward into new market conditions.
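The steps above can be sketched generically. In EA Studio you simply extend the end date in the data settings, but the logic is the same in any engine: freeze the rules, then run them over bars the optimizer never saw. Everything here is a toy illustration with a stand-in moving-average rule, not the real EA 165 logic.

```python
# Toy sketch of the extra-OOS idea: the rule is frozen, only the end
# of the data moves forward. All names and numbers are illustrative.

def sma_signal(prices, period=3):
    """Long (1) when price is above its simple moving average of the
    previous `period` bars, flat (0) otherwise."""
    signals = []
    for i in range(len(prices)):
        if i < period:
            signals.append(0)
        else:
            sma = sum(prices[i - period:i]) / period
            signals.append(1 if prices[i] > sma else 0)
    return signals

def equity(prices, signals):
    """Cumulative profit of holding one unit while the signal is on."""
    pnl = 0.0
    for i in range(1, len(prices)):
        if signals[i - 1] == 1:
            pnl += prices[i] - prices[i - 1]
    return pnl

# Build phase: first 8 bars. Extra OOS: the bars past the build end.
prices = [100, 101, 103, 102, 104, 106, 105, 107, 108, 110, 109, 112]
build, extended = prices[:8], prices     # same rule, later end date
sig_build = sma_signal(build)
sig_full = sma_signal(extended)
print(equity(build, sig_build), equity(extended, sig_full))
```

If the frozen rule keeps adding profit on the extended bars, that profit came from data it never touched during building, which is exactly the point of the trick.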
And this is exactly where weak strategies usually break.
That is why this became such an important step in my workflow.
Because if a strategy looks good until 2020 but then falls apart when I extend it to 2024, then I already know enough.
I don’t care how nice it looked before.
It is not stable.
But if I extend it and it keeps going, stays stable, and even improves, then now we are talking about something completely different.
And that is exactly what you see here.
The strategy holds.
The equity curve keeps moving.
The structure stays intact.
That is what you want to see.
For me, this is one of the clearest ways to separate:
strategies that look good
from strategies that are actually robust
Because now you are not looking at in-sample anymore.
You are forcing the strategy through a completely new period it has never seen.
That is real testing.
That is why I always say:
strategies break or survive on out of sample.
And this is just one more layer of that idea.
Simple concept.
Huge difference.
One important thing I want to add here about the build process itself.
When I build strategies, I use full data optimization with 40 steps.
And there is a reason for that.
A lot of people are afraid of heavy optimization because of overfitting.
But that is exactly where Monte Carlo comes in.
Monte Carlo is not there to replace optimization.
It is there to test it.
So I actually push the optimization harder on purpose.
I give the strategy room to explore the data.
And then I use Monte Carlo to check if that structure is still stable under randomness.
This strategy you see here was built with 40 optimization steps.
And then you can clearly see what happens when it goes through heavy Monte Carlo testing.
That is the balance.
Strong optimization → then strong validation.
And on top of that, the extra out of sample period is not optimized at all.
So that is pure OOS.
Also during the build itself, I already use 50% out of sample in EA Studio.
So from the start there is already a separation between in-sample and out-of-sample data.
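For anyone unfamiliar with the setting: the OOS percentage in EA Studio holds back the last part of the data from the optimizer. A rough sketch of that split, with stand-in data:

```python
# Rough sketch of a 50% in-sample / out-of-sample split, as EA Studio
# does when you set the OOS percentage: the generator only fits on the
# first part, the tail is reserved for judging the result.

def split_oos(bars, oos_pct=50):
    cut = len(bars) * (100 - oos_pct) // 100
    return bars[:cut], bars[cut:]       # (in-sample, out-of-sample)

bars = list(range(2009, 2021))          # stand-in for 2009-2020 data
in_sample, oos = split_oos(bars, oos_pct=50)
print(in_sample, oos)
```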
But the extra OOS trick is what really pushes it further.
That is where things get exposed.
After that, I move to the next step.
Monte Carlo.
Normally, like I explained before, I use:
50 runs
then 100 runs as confirmation
But here I pushed it much further.
I ran 1500 simulations.
And this is where things become very clear.
Because now you are not looking at one equity curve anymore.
You are stress testing the same strategy across a massive number of variations.
Randomized history
randomized spread
randomized slippage
different starting points
Everything gets pushed.
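To make the list above concrete, here is a minimal sketch of what such a Monte Carlo run does: shuffle the trade order and subtract a random per-trade cost, then record each simulated run's final profit and worst drawdown. The trade list, cost range, and seed are hypothetical; EA Studio's Monte Carlo also randomizes the history data and indicator parameters, which this sketch does not attempt.

```python
import random

def max_drawdown(equity_curve):
    """Largest peak-to-trough drop along an equity curve."""
    peak, dd = float("-inf"), 0.0
    for e in equity_curve:
        peak = max(peak, e)
        dd = max(dd, peak - e)
    return dd

def monte_carlo(trade_profits, runs=1500, slippage_max=2.0, seed=42):
    """Shuffle trade order and subtract a random cost per trade, then
    record (final profit, worst drawdown) for each simulated run."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        trades = trade_profits[:]
        rng.shuffle(trades)                          # randomized order
        trades = [t - rng.uniform(0, slippage_max) for t in trades]
        curve, eq = [], 0.0
        for t in trades:
            eq += t
            curve.append(eq)
        results.append((curve[-1], max_drawdown(curve)))
    return results

profits = [120, -40, 85, 60, -55, 90, 30, -20, 75, 50]  # hypothetical
results = monte_carlo(profits)
worst_dd = max(dd for _, dd in results)
print(len(results), round(worst_dd, 1))
```

Note that shuffling alone does not change the final profit, only the path; that is why the drawdown distribution is where weakness shows first.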
And if a strategy is weak, this is where it shows immediately.
Now look at the confidence table.
Even under heavy randomization, the structure holds.
The performance stays consistent
the SQN remains solid
the degradation is controlled
Even at higher confidence levels, the strategy is still standing.
That is not normal for weak strategies.
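For readers who have not used the confidence table: it is essentially a ranking of the simulated runs. A sketch of how such a table can be built and read, with hypothetical final-profit figures:

```python
# Sketch of a Monte Carlo confidence table: sort all simulated final
# profits, and the N% confidence level is (approximately) the value
# that N% of the runs finished at or above. Figures are hypothetical.

def confidence_table(final_profits, levels=(50, 80, 95, 99)):
    ranked = sorted(final_profits, reverse=True)    # best run first
    return {lvl: ranked[max(int(len(ranked) * lvl / 100) - 1, 0)]
            for lvl in levels}

finals = [380, 350, 400, 320, 390, 360, 310, 370, 340, 330]
table = confidence_table(finals)
for level, value in table.items():
    print(f"{level}% of runs finished at or above {value}")
```

So "controlled degradation" means the high-confidence rows are still acceptable numbers, not just the median row.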
And if you look at the editor, you can see the magic number.
This is EA 165.
So again, this is not a random example.
This is a real strategy.
And this strategy has been running live for around a year.
That is also why I was confident enough to push it to 1500 Monte Carlo simulations.
Because by that point, I already had real live data behind it.
But the important part is this:
back then, when I first built it, I was not doing 1500 runs.
I was doing 50 → then 100 runs as confirmation.
Exactly like I explained before.
The 1500 runs you see here are just an extra layer on top of something that was already proven step by step.
So what you are really seeing here is this:
a strategy built on historical data
then extended into real unseen data (extra OOS)
then stress tested through massive Monte Carlo
And it still holds.
That is what I call confirmation.
Not one good backtest.
Not one lucky curve.
But multiple layers, all pointing in the same direction.
That is the difference.
That is also why for me this is not optional.
This is a filter.
If a strategy fails here → it is out.
If it survives → it earns the right to move forward.
That is the whole idea.
Data
out of sample
extra out of sample
Monte Carlo
Stack those layers.
And you will see very quickly what is real and what is not.
Simple idea.
Huge difference.
For the live results of EA 165, you can check here:
https://forexsb.com/forum/topic/10057/building-robust-eas-on-xauusd-a-structured-workflow-that-works/
One last thing to make this even more clear.
After everything you just saw, I extended the same strategy even further.
From 2009 all the way to 2026.
Again:
no changes
no reoptimization
same logic
Just more unseen data.
And this is what matters.
Because at this point, you are not just looking at a strategy that survived a small extension.
You are looking at a strategy that continues to hold its structure over an even longer period.
The equity curve keeps moving
the behavior stays consistent
the structure remains intact
That is not something you fake.
And if you compare this with the previous topic where I shared the live performance, you can start connecting the dots.
This is exactly the point.
Not one test.
Not one phase.
But consistency across:
build
extra out of sample
extended data
Monte Carlo
demo
and live performance
That is where real confidence comes from.
Everything has to align.
And when it does, this is the result.