1 (edited by algotrader21 2026-03-30 13:52:30)

Topic: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

Why I don’t rely on Monte Carlo inside the Reactor

I know many builders in EA Studio use Monte Carlo directly inside the Reactor phase.

And I understand why.

It makes the workflow faster.
It automates filtering.
It removes weak systems early.

But there is one important limitation that most people overlook:

You cannot actually see the structure of the Monte Carlo result.

When Monte Carlo is used inside the Reactor:

You only see that a strategy passes or fails.

But you do NOT see:
- the simulation chart
- the equity curve variations
- the confidence table behavior

And that matters more than people think.

Why this is a problem

From experience, I have seen strategies that:

pass Monte Carlo in the Reactor, but are structurally weak when you actually look deeper.

For example:
- unstable equity distribution
- inconsistent degradation
- hidden weaknesses in higher confidence levels

These are things you only recognize with a trained eye.

And you cannot develop that eye if you never look at the actual Monte Carlo output.

What I do instead

I use the Reactor for:
- generation
- optimization
- initial filtering

Then I take selected strategies into the Collection.

And only there I run Monte Carlo manually.

Why?

Because then I can actually SEE:

- the simulation chart
- the spread between curves
- the structure of degradation
- the confidence table behavior

That visual feedback is critical.
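To make that structure concrete: one common Monte Carlo variant reshuffles the order of historical trades and rebuilds an equity curve for each run. The sketch below is an assumption about the mechanics, not EA Studio's exact implementation (EA Studio also offers other randomizations, such as varying history and parameters, which are not modeled here), but it shows where the curve spread and the drawdown confidence table come from:

```python
import random

def monte_carlo_curves(trade_returns, n_runs=100, seed=42):
    """Reshuffle the order of per-trade returns to build a family
    of equity curves. Minimal trade-order randomization only."""
    rng = random.Random(seed)
    curves = []
    for _ in range(n_runs):
        shuffled = trade_returns[:]
        rng.shuffle(shuffled)
        equity, curve = 0.0, []
        for r in shuffled:
            equity += r
            curve.append(equity)
        curves.append(curve)
    return curves

def max_drawdown(curve):
    """Worst peak-to-trough decline along one equity curve."""
    peak, worst = float("-inf"), 0.0
    for v in curve:
        peak = max(peak, v)
        worst = max(worst, peak - v)
    return worst

def confidence_table(curves, levels=(0.5, 0.8, 0.95)):
    """Drawdown not exceeded in `level` of the simulations."""
    dds = sorted(max_drawdown(c) for c in curves)
    return {lvl: dds[min(int(lvl * len(dds)), len(dds) - 1)] for lvl in levels}
```

Plotting all the curves in `monte_carlo_curves` gives the simulation chart and the spread between curves; `confidence_table` is the kind of summary that a bare pass/fail result hides.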

Key point

Monte Carlo is not just a filter.

It is a diagnostic tool.

If you only use it inside the Reactor,
you are using it blindly.

Final thought

Automation is useful.

But understanding structure is what makes the difference.

If you want to become better at building robust systems,
you need to look at Monte Carlo, not just pass it.

Re: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

Hi algotrader21,

I appreciate you sharing this.
Posts like this are always valuable because they explain the reasoning behind the workflow, not just the tool itself.

My perspective is a bit different, though not necessarily in contradiction with yours. I personally prefer a more fully automated process, mainly to speed up the work when dealing with large EA volumes and to reduce excessive subjective judgment on each individual strategy.

For me, the key point is not only whether Monte Carlo inside the Reactor is used blindly or not, but how deeply the whole workflow has been understood and validated. That includes knowledge of EA Studio itself, but also of each validation step such as Monte Carlo, WFA, and the other robustness checks.

What I try to measure is less the quality of a single strategy in isolation, and more the quality of the workflow through the incubation process. By incubation, I mean running a large number of selected EAs forward in parallel for a meaningful period of time, under the same conditions, and then observing which ones remain active, stable, and profitable after enough months or trades. For me, that is where a workflow really proves itself.

So I tend to put more weight on process validation at scale and workflow reproducibility than on visual judgment at single-strategy level.

One metric I find useful is the workflow success rate, for example:

Success rate = (number of profitable EAs according to a defined KPI / total number of selected EAs) × 100

Of course, the KPI has to be defined clearly in advance.
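The formula above is straightforward to compute once the KPI is fixed. A minimal sketch, with a purely hypothetical KPI (profitable after incubation with a minimum trade count; the real thresholds would be whatever was defined in advance):

```python
def success_rate(eas, kpi):
    """Share of selected EAs that pass a predefined KPI, in percent.
    `eas` is a list of per-EA stats dicts; `kpi` is a predicate
    fixed before incubation starts."""
    if not eas:
        return 0.0
    passed = sum(1 for ea in eas if kpi(ea))
    return 100.0 * passed / len(eas)

# Hypothetical KPI: profitable and at least 70 forward trades.
kpi = lambda ea: ea["net_profit"] > 0 and ea["trades"] >= 70

eas = [
    {"net_profit": 420.0, "trades": 85},
    {"net_profit": -130.0, "trades": 92},
    {"net_profit": 55.0, "trades": 40},   # profitable but too few trades
    {"net_profit": 210.0, "trades": 110},
]
print(success_rate(eas, kpi))  # → 50.0
```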

Just out of curiosity, have you ever validated your workflow in that way, meaning by measuring the actual success rate of the EAs it produces after X months of incubation or after X forward trades?

I would be genuinely interested to know, because I think that says a lot about the workflow itself.

thx
Vincenzo

Re: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

Vincenzo wrote:

Hi algotrader21,

I appreciate you sharing this.
Posts like this are always valuable because they explain the reasoning behind the workflow, not just the tool itself.

My perspective is a bit different, though not necessarily in contradiction with yours. I personally prefer a more fully automated process, mainly to speed up the work when dealing with large EA volumes and to reduce excessive subjective judgment on each individual strategy.

For me, the key point is not only whether Monte Carlo inside the Reactor is used blindly or not, but how deeply the whole workflow has been understood and validated. That includes knowledge of EA Studio itself, but also of each validation step such as Monte Carlo, WFA, and the other robustness checks.

What I try to measure is less the quality of a single strategy in isolation, and more the quality of the workflow through the incubation process. By incubation, I mean running a large number of selected EAs forward in parallel for a meaningful period of time, under the same conditions, and then observing which ones remain active, stable, and profitable after enough months or trades. For me, that is where a workflow really proves itself.

So I tend to put more weight on process validation at scale and workflow reproducibility than on visual judgment at single-strategy level.

One metric I find useful is the workflow success rate, for example:

Success rate = (number of profitable EAs according to a defined KPI / total number of selected EAs) × 100

Of course, the KPI has to be defined clearly in advance.

Just out of curiosity, have you ever validated your workflow in that way, meaning by measuring the actual success rate of the EAs it produces after X months of incubation or after X forward trades?

I would be genuinely interested to know, because I think that says a lot about the workflow itself.

thx
Vincenzo

Hi Vincenzo,

I understand your point about validating workflows at scale and measuring success rate over time. I agree that incubation and forward performance are important.

At the same time, my point was slightly different.

It’s not about rejecting automation, but about understanding what Monte Carlo is actually showing before strategies even enter that incubation phase.

When Monte Carlo is used only as a pass/fail filter inside the Reactor, you lose visibility on:

- curve dispersion
- degradation structure
- confidence behavior

That’s additional diagnostic information, not just a visual preference.

Even if a workflow is later validated at scale, that does not change the fact that this information is not being used during selection.

In other words, scale tests the output; it doesn't replace understanding of the structure behind it.

To give some context to what I mean, here are a few examples from my own live-forwarded strategies:

- strategies with 70–80+ trades
- stable equity development over time
- controlled drawdown relative to return

These are not single snapshots, but behavior observed over time after selection.

From my experience, the initial structural quality, including how Monte Carlo behaves in detail, tends to have a strong influence on how strategies perform later during incubation.

So for me, it’s really about combining both:

- structural understanding at the selection level
- validation at scale over time

Not replacing one with the other.


If Monte Carlo is not reviewed at a structural level, including the simulation chart and confidence table, then its role becomes very similar to a simple pass/fail filter.

And in that case, a significant part of the diagnostic value is effectively lost.

I’ve also seen cases where strategies pass Monte Carlo inside the Reactor, but when running a full Monte Carlo analysis afterwards and actually reviewing the structure, weaknesses become visible:

- unstable distributions
- inconsistent degradation
- fragile behavior at higher confidence levels

So for me, the key difference is not whether Monte Carlo is used or not, but whether its output is actually interpreted in detail.

Post's attachments

expert advisor 165 equity curve main account.png
expert advisor 165 overview main account.png
expert advisor 628 equitycurve main account.png
expert advisor 628 overview main account.png
expert advisor 712 equity curve live main account.png

Re: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

Hi Algotrader21,

thanks for sharing,

Yes, and this is exactly why I keep framing the issue at workflow level.

I do not disagree that deeper Monte Carlo interpretation can provide additional diagnostic insight. I'm fully with you!

My point is that, in my case, the workflow is not left unmonitored afterwards. I know exactly what enters incubation and what exits, within a long pipeline of around 900 active EAs with a measurable output target.

Over a 2-year process, I am targeting roughly a 70% EA success rate. At the moment, I am still around 40–50%, and that itself acts as a red-flag mechanism for workflow quality. I am also testing new generator settings to benchmark the old versus the new approach.

- So if the workflow is weak, the output tells me.
- If the selection logic is degrading, the pipeline will show it.
- If the gates are not good enough, the success rate will move in the wrong direction.
- And I can see and measure this with a Python process (automatically) for each single EA.

That is why my key question remains slightly different from yours.

It is not whether deeper Monte Carlo reading is informative.
I agree that it is.

It is whether that extra interpretation materially improves the workflow enough to justify the time, effort, and complexity it adds, given that I already have an output-based control framework that warns me when the workflow is underperforming. And Monte Carlo is only one of the tools, with many possible variations.

I also see what you see: some EAs can look very good initially and then fail dramatically in live trading. In my experience, about half of them fail sooner or later.

That is exactly why my objective is not to find a few good-looking EAs, but to build a process — from generation, to incubation, to EA promotion to real accounts — that can consistently produce a subset of strategies with a statistical success rate above 70%.

In practice, my goal is to have a fully automated process that applies all rules strictly, including already tested and locked-in settings for Monte Carlo, WFA, strategy settings, and backtest settings — pass or fail, with no subjective interpretation — so that at the end I can select around 10 to 20 EAs with that profile.
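A strict pass/fail promotion step like that can be sketched as a set of gates applied in sequence. The gate names and thresholds below are hypothetical placeholders, not the actual locked-in Monte Carlo, WFA, or backtest settings:

```python
# Hypothetical locked-in gates; in a real pipeline these would be the
# pre-tested and frozen validation settings, not these placeholder numbers.
GATES = {
    "mc_confidence_pass": lambda s: s["mc_pass"],
    "wfa_pass":           lambda s: s["wfa_pass"],
    "min_trades":         lambda s: s["trades"] >= 300,
    "max_drawdown_pct":   lambda s: s["max_dd_pct"] <= 20.0,
}

def promote(stats):
    """Strict pass/fail: an EA is promoted only if every gate passes.
    Returns (decision, list of failed gate names) for auditability."""
    failed = [name for name, gate in GATES.items() if not gate(stats)]
    return (len(failed) == 0, failed)
```

Returning the list of failed gates, rather than a bare boolean, keeps the process auditable without reintroducing subjective interpretation.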

That is what helps me manage the real portfolio at the end of each month: I typically have 20–30 EAs eligible to play a role in the live portfolio, and I know exactly how each of them has behaved over the previous 12+ months of incubation.

Because I do not believe in running an account with only 1 or 2 EAs, my approach is based on diversified and balanced portfolio construction, with at least 10 EAs combined. That's why I need the pipeline.

So for me, the real benchmark is not diagnostic richness in isolation, but whether it improves actual pipeline outcomes.
If you have measured the effectiveness of your workflow, no matter how, it would be great to see it.

Then it might be interesting to measure the impact on my workflow.

That is the part I still think remains open.

Good discussion, let's keep it alive.

Vincenzo

Re: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

Hi Vincenzo,

I understand your point, and I think the workflow discussion is a valid one.

At the same time, I think we may be moving slightly away from the original point I was making in this topic.

My point here was actually much simpler.

Right now, when Monte Carlo is used inside the Reactor, it works as a pass/fail filter. What you do not get there is the visual output itself: the simulation chart, the curve spread, and the confidence table behavior.

That was really the core of my post.

So my point was not mainly about whether a workflow should be measured by success rate, pipeline output, or automation level. Those are interesting questions, but they belong more to the broader workflow discussion.

What I wanted to highlight here is a practical limitation in the current way Reactor Monte Carlo works.

If someone wants to inspect Monte Carlo structurally and visually, they still need to run it again manually after the Reactor phase. That is simply the current reality.

And for builders who care about the visual side of Monte Carlo, that matters, because a pass result alone does not show the full picture. You do not see how the simulations are distributed, how degradation behaves, or what the higher confidence levels are doing.

For that reason, I see manual Monte Carlo not as a contradiction to automation, but as the only way, at the moment, to access the full diagnostic layer that Reactor itself does not display.

That is why my suggestion was very practical: if you want to understand Monte Carlo more deeply, do not stop at pass/fail inside the Reactor. Run it manually as well and inspect the output.

If one day Reactor Monte Carlo also allows full visual inspection of the simulation chart and confidence table, then the limitation I described would largely disappear, and my point would become much less relevant.

I’ve already shared my broader workflow approach in a separate topic:

https://forexsb.com/forum/topic/10057/building-robust-eas-on-xauusd-a-structured-workflow-that-works/

But for this specific topic, my intention was simply to highlight the current limitation of Reactor Monte Carlo and why, for now, manual testing and inspection adds value.

Re: Why I Don’t Use Monte Carlo Inside Reactor And What Most Builders Miss

okay, got it.
will take a look at your workflow as well.

thx
Vincenzo