Grid Trading with EA Studio Experts (Averaging Down)

Grid Portfolio – 37 Trading Days Update

Hi everyone,

Here's a structured update after 37 trading sessions on the grid portfolio.

⸻ 
Performance Snapshot

* Total Return: +1.8% 
* Monthly Return: +2.2% 
* Profit Factor: 2.94 
* Trades: 374 
* Win Rate: 72.7% 

⸻ 
Execution Profile

* Avg Win / Avg Loss: ~1:1 
* Expectancy: ~$5.9 per trade 
* Trades per day: ~20 
* Avg duration: ~27h 

Performance is driven by consistency and frequency, not large winners.
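
As a quick sanity check (a sketch, not part of the live tracking): with a ~1:1 payoff ratio, the stated win rate and expectancy are mutually consistent and imply an average win of roughly $13.

```python
# Sanity check on the snapshot numbers above (illustrative only).
# With avg_win == avg_loss: expectancy = win_rate*avg_win - (1 - win_rate)*avg_win
win_rate = 0.727
expectancy = 5.9  # USD per trade, from the stats above

avg_win = expectancy / (2 * win_rate - 1)
print(f"Implied average win/loss: ~${avg_win:.0f}")  # ~ $13
```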

⸻ 
Risk (early observations)

* Floating DD: ~0.4% 
* No meaningful drawdown phase observed yet 
* Track record still short → risk profile not fully expressed 

⸻ 
EA-Level View (Incubator Perspective)

Total EAs (magic numbers): 18 
EAs currently in profit: 13 

EA success rate (early read): ~72%

⸻ 
How to read this number

* This is a raw indicator (profit vs loss per EA) 
* It does not account for trade count or maturity

Important context:

* Several EAs have a very low number of trades (<5)
  → in some cases, the grid cycle has not activated yet

* Many EAs are still in the early incubation phase

This should be read as an early directional signal, not a robust metric.

⸻ 
Remark on maturity

A more meaningful success rate should consider:
* EAs with >30 trades

At the moment, the sample is still too small for that filter 
→ this will be evaluated in future updates 
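
Once the sample grows, the filter itself is simple to apply. A minimal sketch, assuming a hypothetical per-EA stats table exported from the tracker (column names are mine):

```python
import pandas as pd

# Hypothetical per-EA stats table, one row per magic number.
ea_stats = pd.DataFrame({
    "magic":      [101, 102, 103],
    "trades":     [44, 3, 61],
    "net_profit": [182.0, -12.5, 96.0],
})

# Restrict the success rate to "mature" EAs (>30 trades).
mature = ea_stats[ea_stats["trades"] > 30]
success_rate = (mature["net_profit"] > 0).mean()
print(f"Mature-EA success rate: {success_rate:.0%} (n={len(mature)})")
```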

⸻ 
Symbol Insight

* Performance currently driven mainly by XAUUSD 
* Other symbols still in early-stage validation

This reflects an edge emerging under specific market conditions.

⸻ 
Gold – Strategy vs Grid (EA-level comparison)

To isolate the effect of the grid, I compared the same EA with and without the grid overlay.

Original EA (single trades):
* Trades: 81 
* Profit Factor: 1.50 
* Win Rate: 65% 
* Net Profit: ~11k 
→ Higher volatility and more pronounced equity swings 

Grid version (same EA):
* Trades: 129 
* Profit Factor: 3.10 
* Win Rate: 79% 
* Net Profit: ~1.7k (current test window) 
→ Smoother equity curve and more stable progression 
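
For transparency, the headline metrics in both lists can be recomputed from raw per-trade P&L. A small sketch with toy numbers (the helper names are mine, not from the actual tooling):

```python
# Recompute profit factor and win rate from a list of per-trade P&Ls.
def profit_factor(pnls):
    gains = sum(p for p in pnls if p > 0)
    losses = -sum(p for p in pnls if p < 0)
    return gains / losses if losses else float("inf")

def win_rate(pnls):
    return sum(p > 0 for p in pnls) / len(pnls)

toy_trades = [120.0, -80.0, 95.0, -110.0, 140.0]  # illustrative only
print(f"PF {profit_factor(toy_trades):.2f}, win rate {win_rate(toy_trades):.0%}")
```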

⸻ 
Key takeaway

The underlying strategy remains the same, but the execution changes:

The grid does not change the edge.
It changes how the edge is extracted from price.

Specifically:

* Transforms fewer large trades into multiple smaller realizations 
* Uses pair closing to capture oscillations 
* Increases trade frequency and win rate 
* Reduces observed equity volatility 
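
To make the pair-closing idea concrete, here is a deliberately simplified long-only sketch. This is illustrative pseudologic under assumed parameters (GRID_STEP, PAIR_TARGET), not the actual EA Studio grid implementation:

```python
# Deliberately simplified long-only grid overlay (illustrative pseudologic
# under assumed parameters, NOT the actual EA Studio implementation).
GRID_STEP = 2.0    # price distance between grid entries (assumed)
PAIR_TARGET = 1.0  # combined P&L needed to close a pair (assumed)

positions = []  # entry prices of open long positions

def on_price(price):
    """Process one price tick: add grid levels on dips, close pairs on rebounds."""
    # averaging down: open a new level one full step below the lowest entry
    if not positions or price <= min(positions) - GRID_STEP:
        positions.append(price)
    # pair closing: match the most and least profitable entries; if their
    # combined P&L at the current price clears the target, realize both
    if len(positions) >= 2:
        best, worst = min(positions), max(positions)
        combined = (price - best) + (price - worst)
        if combined >= PAIR_TARGET:
            positions.remove(best)
            positions.remove(worst)
            return combined  # realized profit for this oscillation
    return 0.0

profits = [on_price(p) for p in [100.0, 98.0, 96.0, 99.0]]
print(profits)  # the rebound tick closes a pair: [0.0, 0.0, 0.0, 2.0]
```

Fed a ranging series, this realizes many small pairs; fed a sustained trend, it accumulates open positions instead, which is exactly the trade-off discussed next.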

⸻ 
Trade-off

* Smoother performance in ranging conditions 
* BUT potential risk accumulation in directional phases is still to be evaluated

This remains the key validation step going forward.

⸻ 
Full stats available here:  https://www.fxblue.com/users/i70_Eastudio_grid

Survey: Incubation, Yes or No?

Longevity Index — From Concept to Portfolio Reality

Hi everyone,

Following up on the Longevity Index idea, I wanted to move beyond the concept and test it in a more practical, real-world setting.

The starting point was a simple question:
What actually happens after a strategy reaches Top Band and is treated as “ready for live”?

Framing
This is not about how to build or validate strategies; it assumes that step is already done. The focus here is different: what happens to strategies after they reach Top Band and are deployed at portfolio level, and what we need to do to keep >70% of EAs performing at a high level.

The context
This analysis comes from a live environment where:

* ~900–1,000 EAs are running
* across ~30 MT4 instances
* continuously generated, validated, and tracked
* running since 06/2024

Our Python end-of-month checkpoint assigns each EA one of the following labels:

- EB = Earth Birds: newly incubated
- OgI = Ongoing Incubation: after x trades, monitored only
- PwL = Promotion Watchlist: after y trades with very good performance
- RfL = Ready for Live: the best in class

All KPIs are listed in the appendix.

This is not a curated or optimized portfolio.

On purpose, we are running strategies across:

* different assets
* different timeframes
* different logics
* different parameter sets

The goal is not to find the “perfect strategy”.
The goal is to observe what happens at scale, under real conditions.

What the data shows
Only ~8–10% of EAs reach Top Band (RfL + PwL)

Top Band is not a soft label — it is defined by strict KPI thresholds
(PF, Win%, SQN, sample size, max consecutive losses, recovery factor, etc.)

→ Only statistically high-performance strategies are promoted.

System nature
We are effectively operating a high-volume, low-conversion system that produces a limited number of statistically validated high-performance EAs.

The test
* 19 EAs (all first-time Top Band entries during the observation period)
* first entry into Top Band (Sep–Oct 2025)
* tracked ~5–6 months

These are not optimized results, but observations from a deliberately broad and unfiltered environment, designed to reflect real operating conditions rather than ideal scenarios.

In practice, this comes down to one question: what would have happened if, 6 months ago, we had built a portfolio using the 19 EAs promoted by the incubation pipeline?

Baseline results (after 6 months)
* Top Band: ~21% still high performing
* OgI: ~26% neutral, neither good nor bad, not damaging the portfolio
* PB: ~53% heavily degraded, most will probably die soon

More than half degrade into a failure state, though some can recover.

Behavior over time
* PwL duration ≈ 1 month (median)
* RfL duration ≈ 3 months (median)

So, the Top Band is not a stable state — it is transient. This creates a structural need for continuous replacement.

Constraint
* conversion to Top Band ≈ 9%
* EA generation inflow ≈ 4–14/month
Supply is limited.

Scenarios tested (what happens if we manage the decay or not)
1. No action → portfolio collapses (sooner or later)
2. PB-only replacement → ~63% Top Band
3. Replacing everything immediately (PB + OgI monthly)
   → works in theory
   → breaks in practice (not enough inflow)
4. PB immediate + OgI with ~1 month tolerance (the only viable configuration; a toy simulation follows this list)
   → ~74% Top Band
   → 0% PB
   → stable portfolio size
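
To make the mechanics concrete, here is the toy simulation referenced in scenario 4. All transition probabilities and the inflow figure are assumptions loosely inspired by the observed medians, not measured values from the pipeline:

```python
import random

# Toy monthly simulation of scenario 4 (PB replaced immediately, OgI after
# ~1 month tolerance). All probabilities and the inflow figure are assumed
# for illustration, loosely inspired by the observed medians.
MONTHS = 12
PORTFOLIO_SIZE = 19

# chance per month that a Top Band EA decays, and that an OgI EA fails
P_TB_TO_OGI, P_TB_TO_PB = 0.25, 0.15
P_OGI_TO_PB = 0.20

def simulate(ogi_tolerance=1, inflow=6):
    slots = [("TB", 0)] * PORTFOLIO_SIZE  # (band, months spent in OgI)
    for _ in range(MONTHS):
        decayed = []
        for band, ogi_months in slots:
            r = random.random()
            if band == "TB":
                if r < P_TB_TO_PB:
                    band = "PB"
                elif r < P_TB_TO_PB + P_TB_TO_OGI:
                    band, ogi_months = "OGI", 0
            elif band == "OGI":
                ogi_months += 1
                if r < P_OGI_TO_PB:
                    band = "PB"
            decayed.append((band, ogi_months))
        # replacement policy, capped by this month's inflow of fresh EAs
        budget, slots = inflow, []
        for band, ogi_months in decayed:
            swap = band == "PB" or (band == "OGI" and ogi_months > ogi_tolerance)
            if swap and budget > 0:
                budget -= 1
                slots.append(("TB", 0))
            else:
                slots.append((band, ogi_months))
    return sum(1 for band, _ in slots if band == "TB") / PORTFOLIO_SIZE

print(f"Top Band share after {MONTHS} months: {simulate():.0%}")
```

Varying inflow between ~4 and ~14 (the observed generation range) shows how quickly the achievable Top Band share becomes supply-limited.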

Operational reality
* only ~3–10 replacements/month
* not every month

→ the system behaves in an event-driven way, not a linear or continuous process

Re-entry
What really surprised me is that ~30–50% of Top Band inflow comes from re-entry, typically after 1–3 months.

→ the system is cyclical: high-performing strategies repeatedly move up and down between performance bands

Conclusion
The limiting factor is not strategy selection. It is the system’s ability to replace decaying strategies fast enough to sustain portfolio quality.

Even with strict statistical filters, strategies do not remain stable indefinitely.
At portfolio level, performance becomes a function of decay rate vs replacement capacity.

The Real asset
The real asset is not the individual strategy. It is the pipeline. I will never regret implementing incubation; it is in itself the most important move we made to enable this learning journey.

Open question
Curious how others approach this:

* Do you treat Top Band strategies as something to hold?
* Or something to rotate?

And:

* How do you measure performance and system effectiveness?

Appendix — Definitions & KPIs

RfL (Ready for Live)
* PF > 1.5
* and Win% > 60%
* and SQN ≥ 2
* and Net Profit > 0
* and Recovery Factor > 0.5
* and Max consecutive losses ≤ 5
* and Trades ≥ 50

PwL (Promotion Watchlist)
* PF: 1.3–1.5
* and Win%: 55–60%
* and SQN: 1.6–2

Top Band (TB)
* RfL + PwL

HB (High Band)
* RfL only

OgI (Ongoing Incubation)
* neutral state

PB (Pruning Box)
* PF < 1.1 OR Win% < 45% OR Recovery < 0.5
* OR consecutive losses > 5
* OR SQN < 0.5

EB (Earth Birds)
* Trades < 10
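
For reference, a minimal sketch of how these thresholds could be applied in the end-of-month labeling pass. The function signature and argument names are illustrative assumptions; the actual Python checkpoint script may structure this differently.

```python
# Sketch of the end-of-month labeling pass using the KPI thresholds above.
# Argument names are illustrative; the real checkpoint script may differ.
def label_ea(pf, win_pct, sqn, net_profit, recovery, max_consec_losses, trades):
    if trades < 10:
        return "EB"   # Earth Birds: too few trades to judge
    if (pf < 1.1 or win_pct < 45 or recovery < 0.5
            or max_consec_losses > 5 or sqn < 0.5):
        return "PB"   # Pruning Box
    if (pf > 1.5 and win_pct > 60 and sqn >= 2 and net_profit > 0
            and recovery > 0.5 and max_consec_losses <= 5 and trades >= 50):
        return "RfL"  # Ready for Live
    if 1.3 <= pf <= 1.5 and 55 <= win_pct <= 60 and 1.6 <= sqn < 2:
        return "PwL"  # Promotion Watchlist
    return "OgI"      # Ongoing Incubation (neutral state)

print(label_ea(pf=1.8, win_pct=63, sqn=2.2, net_profit=540.0,
               recovery=1.1, max_consec_losses=3, trades=72))  # -> RfL
```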

Final note
All this work is possible because of team effort. Thanks to invitations from this forum and other groups, we are now 5 people (Hez, Fabio, Eliseo, Alessandro, and myself) managing the whole system.

And not only that: to properly bring ~1,000 EAs to market, ~30 MT4 instances run 24/7 on reliable, high-performance VPSs, all of which need to be monitored, managed, and paid for.

Thank you in advance
Vincenzo

The Extra OOS Trick in EA Studio With Real Examples

Got it, that makes sense.

Out of curiosity, how do you actually define “good metrics” in your workflow?

For example, do you have specific thresholds or ranges you expect to hold from build (backtest) to live (like SQN, drawdown, etc.), or is it more based on overall behavior and experience?


algotrader21 wrote:
Vincenzo wrote:

Just to make sure I understood you correctly.

When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.

So in your view, that alignment across layers is more important than the standalone numbers at any given stage.

Did I get that right?



algotrader21 wrote:

Yes, I’m satisfied with it.

Not because of the numbers alone, but because of the consistency across all layers:

build phase 
extra out-of-sample extension 
Monte Carlo under heavy stress 
and most importantly, real live performance over time 

For me, that combination is what defines whether a system meets expectations or not.

At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.

Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.


I never said standalone metrics are not important. If metrics were not important, I would not be posting screenshots of them in the first place.

For me, everything has to be good as a whole. Metrics matter a lot, because they already tell me whether a system is strong, mediocre, or just weak. But when I talk about robust systems, I’m looking at more than just numbers at one stage.

What matters to me is that the same quality keeps showing up across all layers: build phase, extra out-of-sample, Monte Carlo, demo performance, and then live performance over time.

And when I say it all has to be good as a whole, or that I focus a lot on structure, then metrics are obviously important as well. Without metrics, I cannot even judge performance properly, and I also cannot see whether the equity curve, drawdown, and overall behavior are actually strong or not.

So for me, it is not metrics versus alignment. It is the combination. Good metrics make a system interesting, but consistency across all validation stages is what tells me whether it is actually robust or not.

The Extra OOS Trick in EA Studio With Real Examples

Vincenzo wrote:

Just to make sure I understood you correctly.

When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.

So in your view, that alignment across layers is more important than the standalone numbers at any given stage.

Did I get that right?



algotrader21 wrote:
Vincenzo wrote:

And are you satisfied with this outcome?
Does it cover your expectations?


Yes, I’m satisfied with it.

Not because of the numbers alone, but because of the consistency across all layers:

build phase 
extra out-of-sample extension 
Monte Carlo under heavy stress 
and most importantly, real live performance over time 

For me, that combination is what defines whether a system meets expectations or not.

At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.

Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.


I never said standalone metrics are not important. If metrics were not important, I would not be posting screenshots of them in the first place.

For me, everything has to be good as a whole. Metrics matter a lot, because they already tell me whether a system is strong, mediocre, or just weak. But when I talk about robust systems, I’m looking at more than just numbers at one stage.

What matters to me is that the same quality keeps showing up across all layers: build phase, extra out-of-sample, Monte Carlo, demo performance, and then live performance over time.

And when I say it all has to be good as a whole, or that I focus a lot on structure, then metrics are obviously important as well. Without metrics, I cannot even judge performance properly, and I also cannot see whether the equity curve, drawdown, and overall behavior are actually strong or not.

So for me, it is not metrics versus alignment. It is the combination. Good metrics make a system interesting, but consistency across all validation stages is what tells me whether it is actually robust or not.

The Extra OOS Trick in EA Studio With Real Examples

Just to make sure I understood you correctly.

When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.

So in your view, that alignment across layers is more important than the standalone numbers at any given stage.

Did I get that right?



algotrader21 wrote:
Vincenzo wrote:

And are you satisfied with this outcome?
Does it cover your expectations?


algotrader21 wrote:

Yes, that is EA 165 from my main live account.

The symbol and magic number are visible in the screenshot as well.


Yes, I’m satisfied with it.

Not because of the numbers alone, but because of the consistency across all layers:

build phase 
extra out-of-sample extension 
Monte Carlo under heavy stress 
and most importantly, real live performance over time 

For me, that combination is what defines whether a system meets expectations or not.

At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.

Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.

The Extra OOS Trick in EA Studio With Real Examples

Vincenzo wrote:

And are you satisfied with this outcome?
Does it cover your expectations?


algotrader21 wrote:
Vincenzo wrote:

Is this EA 165 the account you are referring to as the outcome?


Yes, that is EA 165 from my main live account.

The symbol and magic number are visible in the screenshot as well.


Yes, I’m satisfied with it.

Not because of the numbers alone, but because of the consistency across all layers:

build phase 
extra out-of-sample extension 
Monte Carlo under heavy stress 
and most importantly, real live performance over time 

For me, that combination is what defines whether a system meets expectations or not.

At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.

Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.

The Extra OOS Trick in EA Studio With Real Examples

And are you satisfied with this outcome?
Does it cover your expectations?


algotrader21 wrote:
Vincenzo wrote:

Is this EA 165 the account you are referring to as the outcome?




algotrader21 wrote:

Because this EA has already been running live for about a year, I decided to push the validation much further.

So this is not a random test.


This is a proven strategy being pushed to its limits.


I increased the stress deliberately:

history data heavily randomized (80% changed bars)
ATR variation increased to 40%
slippage increased up to 40 points
and 1500 Monte Carlo simulations

Not because I needed to “prove” it.

But because I already knew this strategy could handle it.


Why?

Because this is not a fresh build.

This is an EA that has already:

passed the build process
survived extra out-of-sample
and proven itself in live conditions

So at this stage, the goal is different.

Not discovery.

Confirmation under pressure.

Results



Even under heavy stress:

the structure holds
performance remains consistent
degradation stays controlled
SQN remains solid across confidence levels

This is exactly what you want to see when you push a proven system.



Key point

No changes were made to the strategy.

No reoptimization.
No adjustments.

Just pure stress testing.

At this level, you are no longer looking at a backtest.

You are looking at behavior under extreme conditions.

And it still holds.


Yes, that is EA 165 from my main live account.

The symbol and magic number are visible in the screenshot as well.
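
As a rough illustration of the kind of stress test described above (EA Studio's Monte Carlo additionally randomizes history bars, ATR, and slippage internally; this sketch only resamples trade outcomes and charges an assumed per-trade slippage cost):

```python
import random

# Toy resampling stress test: resample trade outcomes 1500 times,
# charge an assumed per-trade slippage cost, and look at tail drawdown.
N_SIMS = 1500
SLIPPAGE_COST = 4.0  # assumed extra cost per trade, account currency

trade_pnls = [30.0, -25.0, 42.0, -18.0, 55.0, -30.0, 27.0]  # toy sample

worst_dd = []
for _ in range(N_SIMS):
    seq = random.choices(trade_pnls, k=len(trade_pnls))  # resample w/ replacement
    equity, peak, dd = 0.0, 0.0, 0.0
    for pnl in seq:
        equity += pnl - SLIPPAGE_COST
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    worst_dd.append(dd)

worst_dd.sort()
print(f"95th-percentile max drawdown: {worst_dd[int(0.95 * N_SIMS)]:.1f}")
```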

The Extra OOS Trick in EA Studio With Real Examples

aaronpriest wrote:

Yup, you nailed it. They are very short lived and have to get replaced often, that is a huge downside. And quite likely some good strategies are replaced prematurely, but most short term ones are bad strategies in the long run. I've been running a mix of both short and long term strategies on demo accounts, and keeping track of their performance via magic numbers, swapping out short term ones frequently and only swapping out long term ones when they clearly break their pattern--that part takes more effort to track, but it's easy to see if one has been unprofitable for three months in a row for example to flag it for testing.





Yeah, honestly I think what you’re doing right now is probably one of the best ways to really understand what works and what doesn’t.

That hybrid approach combining short-term strategies with longer-data builds gives you a lot of insight through actual experience, not just theory.

For me personally, I lean more towards the long-data side. I build on larger datasets with 50% OOS, allow full optimization during the build phase (even quite heavy, like 40 steps), and then rely on Monte Carlo to validate whether the structure holds or breaks under stress. That's exactly why I'm comfortable pushing optimization: MC is there to test whether it's real or not.

But at the same time, I completely agree with you that experimenting like you’re doing is key. In the end, everyone has to go through that process and figure out what fits their own mindset and workflow.

Short-term strategies can definitely work, but like you said, they tend to require constant rotation and replacement. It becomes a much more active process. The longer-data approach is slower and honestly a bit more “boring”, but for me it fits better because it’s more focused on long-term behavior rather than constant switching.

For example, at this point I have a core group of around 12 strong systems that have been running live well for over a year, plus a few others that are still profitable but not top-tier. Because of that, my building frequency has dropped a lot. I mostly focus now on managing, validating, and slowly expanding what already works, rather than constantly generating new strategies.

On the execution side, I also try to keep things structured. Ideally, I want to see around 6 months on demo before moving a robot to live. But if a robot already reaches around 60 trades within 4 months and the metrics still look strong, that can also be enough for me. In the end, it always depends on the overall behavior and structure.

At the same time, I do agree with your point that after enough experience, you start recognizing patterns faster. You can often see earlier when something is off. But even then, I still try to balance that with giving strategies enough room, because sometimes it really depends on the market phase they start in.

That's also something I've noticed a lot: certain strategies just need the right conditions. For example, breakout systems can struggle in ranging periods at the start, even if they perform well over time.

So yeah, overall I think both approaches can work. It really comes down to what kind of process you want to run and what fits your mindset best.

Curious to see how your results evolve over time with that mix.

The Extra OOS Trick in EA Studio With Real Examples

algotrader21 wrote:

And yes, what you are doing can definitely work. Shorter data can work.

But one thing I’ve noticed with that approach is that it tends to push the process into a much faster and more rotational workflow.

You need to keep building new strategies regularly, because you need fresh ones for the next cycle.

And because of that constant rotation, everything becomes more reactive.

I also think one of the risks there is that some potentially good strategies might be rotated out too early, simply because they did not have enough time yet to show their behavior.

Strategies often need time to play out.

Yup, you nailed it. They are very short lived and have to get replaced often, that is a huge downside. And quite likely some good strategies are replaced prematurely, but most short term ones are bad strategies in the long run. I've been running a mix of both short and long term strategies on demo accounts, and keeping track of their performance via magic numbers, swapping out short term ones frequently and only swapping out long term ones when they clearly break their pattern--that part takes more effort to track, but it's easy to see if one has been unprofitable for three months in a row for example to flag it for testing.

The Extra OOS Trick in EA Studio With Real Examples

Vincenzo wrote:

Is this EA 165 the account you are referring to as the outcome?




algotrader21 wrote:

Because this EA has already been running live for about a year, I decided to push the validation much further.

So this is not a random test.


This is a proven strategy being pushed to its limits.


I increased the stress deliberately:

history data heavily randomized (80% changed bars)
ATR variation increased to 40%
slippage increased up to 40 points
and 1500 Monte Carlo simulations

Not because I needed to “prove” it.

But because I already knew this strategy could handle it.


Why?

Because this is not a fresh build.

This is an EA that has already:

passed the build process
survived extra out-of-sample
and proven itself in live conditions

So at this stage, the goal is different.

Not discovery.

Confirmation under pressure.

Results



Even under heavy stress:

the structure holds
performance remains consistent
degradation stays controlled
SQN remains solid across confidence levels

This is exactly what you want to see when you push a proven system.



Key point

No changes were made to the strategy.

No reoptimization.
No adjustments.

Just pure stress testing.

At this level, you are no longer looking at a backtest.

You are looking at behavior under extreme conditions.

And it still holds.


Yes, that is EA 165 from my main live account.

The symbol and magic number are visible in the screenshot as well.