Topic: What if you can learn more than you expected with EA Studio?

EA Studio isn’t just a powerful engine to generate, test, and optimize strategies.
It’s also one of the best environments to learn how metrics, filters, and settings truly shape your EA’s success rate.

Experiment #1 — Can the filters you use to select EAs impact Walk-Forward Analysis (WFA) success?
I took the same collection of 233 EAs, the same dataset, and the same tool.
This collection was built early in my EA Studio journey — nothing special at all.

Then I ran it over and over, each time applying a different combination of filters before validation.
After filtering, the remaining strategies were tested on unseen data (WFA).
In total, I ran 91 combinations, each representing one complete backtest + WFA cycle.

The results: (attachment)

Some combinations achieved 100 % WFA success, others dropped below 20 % — all with identical data and strategies. (Example: 50 profitable in WFA out of 50 that passed the backtest = 100 %; 20 profitable out of 100 that passed = 20 %.)
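To make that arithmetic concrete, here is a minimal Python sketch of how the success rate is computed for each filter combination (the field names are illustrative placeholders, not actual EA Studio exports):

```python
# Minimal sketch: WFA success rate for one filter combination.
# A strategy counts only if it passed the backtest filters; the rate is
# the share of those that stayed profitable on the unseen (WFA) data.
def wfa_success_rate(strategies):
    passed = [s for s in strategies if s["passed_backtest"]]
    if not passed:
        return 0.0
    profitable = [s for s in passed if s["profitable_in_wfa"]]
    return 100.0 * len(profitable) / len(passed)

# Matching the examples above:
# 50 profitable out of 50 that passed  -> 100.0
# 20 profitable out of 100 that passed ->  20.0
```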

The takeaway:

- Even small variations in settings and filters can drastically change the outcome.
- EA Studio lets you run structured learning experiments — showing with data how your choices affect robustness and performance.
- It’s not the real market — but each improvement that reduces failure from backtest → demo → live moves you closer to a sustainable, winning portfolio.

All the best
Vincenzo

Post's attachments

Experiment n1.jpg 921.47 kb

Re: What if you can learn more than you expected with EA Studio?

It is surprising how seemingly very small adjustments can make a big difference. I only recently figured out how to export data from my broker and import into EA Studio. I can't believe how just changing the datasets really impacts some strategies.

Also the Walk Forward is great. I have a lot of strategies from Generator that pass through all my criteria but sometimes they just look like crap on the WFA. So I use that Walk Forward as a filter for all top strategies now.

I've also learned a lot about trading and understanding statistics by studying the reports and then asking AI chatbots questions like:
In EA Studio, what does the column SQN represent? What should I look for in the Win/Loss category? What about Return/DD?


Re: What if you can learn more than you expected with EA Studio?

You are not alone in relying on WFA; it can open your eyes regarding strategies.

Interesting discussion, thanks.

Re: What if you can learn more than you expected with EA Studio?

And how many of these robots are actually performing live, Vincenzo?
Because I’ve seen your live accounts during the Zoom call, and you even admitted that almost all of your profitable accounts are grid systems.
No need to defend it, amigo.
You know it, and I know it; that’s enough.

Re: What if you can learn more than you expected with EA Studio?

Exactly — it’s wild how small tweaks or switching datasets can change everything. Using your broker’s data is a great move; it really shows which strategies are stable.

If you want to go deeper, try running the same existing collection through a few different filter combinations to select strategies before the WFA. It doesn’t take long, but you’ll see huge differences — sometimes 20 % vs 100 % success rates.

AI can open a world we couldn’t imagine before — but only if you build the right environment and ask the right questions. That’s when it truly becomes a learning tool, not just a helper.
For example, I’m fine-tuning a workflow to select promising strategies running on demo and promote them to live, as well as a systematic process to maintain my live portfolios on a monthly basis.

I also switched to brokers already integrated into EA Studio to skip manual data downloads.
Right now, I’m testing what works best: creating on Premium Data and validating on BlackBull + Darwinex, or the other way around.

So far, I’ve noticed that strategies built on Darwinex often don’t perform as well on BlackBull or Premium — still collecting data on that.

Did you also experience it?

dusktrader wrote:

It is surprising how seemingly very small adjustments can make a big difference. I only recently figured out how to export data from my broker and import into EA Studio. I can't believe how just changing the datasets really impacts some strategies.

Also the Walk Forward is great. I have a lot of strategies from Generator that pass through all my criteria but sometimes they just look like crap on the WFA. So I use that Walk Forward as a filter for all top strategies now.

I've also learned a lot about trading and understanding statistics by studying the reports and then asking AI chatbots questions like:
In EA Studio, what does the column SQN represent? What should I look for in the Win/Loss category? What about Return/DD?


Re: What if you can learn more than you expected with EA Studio?

Hi Blaiserboy,

WFA can really open your eyes. It’s often the moment you realize which strategies are truly robust and which ones just look good on paper. Or the moment you get an unwelcome surprise.

But I also run 20+ demo accounts (the oldest over 500 days, three just opened this week) with 1,000+ EAs running at any time to validate workflow, setup, and criteria changes. The demo stage is another gatekeeper — and then, of course, the live environment puts everything to the test again.

I treat this as a pipeline with gatekeepers (a rough code sketch follows the list):

Gate 1: Generation + Robustness tests

Gate 2: Walk-Forward Analysis in EA Studio

Gate 3: Demo (≥180 days, 50+ trades)

Gate 4: Live (≥30 trades)
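Here is that sketch in Python, for readers who think in code. All field names below are my placeholders, not EA Studio attributes; the thresholds mirror the gates above:

```python
# Hypothetical four-gate pipeline; each gate discards strategies that
# fail its check, so only survivors reach the live stage.
GATES = [
    ("Gate 1: Generation + Robustness", lambda s: s["robustness_passed"]),
    ("Gate 2: Walk-Forward Analysis",   lambda s: s["wfa_profitable"]),
    ("Gate 3: Demo",  lambda s: s["demo_days"] >= 180 and s["demo_trades"] >= 50),
    ("Gate 4: Live",  lambda s: s["live_trades"] >= 30),
]

def run_pipeline(strategies):
    survivors = strategies
    for name, check in GATES:
        survivors = [s for s in survivors if check(s)]
        print(f"{name}: {len(survivors)} strategies remain")
    return survivors
```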

My view is: if you create rubbish at Gate 1, your success rate at Gate 4 becomes unmanageable. With only a 20–30 % EA success rate, it’s almost impossible to maintain a profitable portfolio long-term.

But if you filter properly along the way, you only need around a 60–70 % success rate. In a 10-EA portfolio, that’s usually enough for 6–7 to perform well — and that’s what makes the difference.

The road is long and not easy.
I’m sharing what I do not to teach, but to be constructively challenged and learn from others’ experience too.

I’ve been trading for over 10 years, almost 5 of them working with bought, selected, and “domesticated” EAs. What I’ve learned (also thanks to others sharing) is that data sustains results.
The time you invest in studying and testing is much larger than the time needed to manage portfolios.

If we all share a bit of experience — supported by data — we could all learn faster and make this journey more sustainable. Or we could even work together to validate ideas.

Thanks
Vincenzo

Blaiserboy wrote:

You are not alone in relying on WFA; it can open your eyes regarding strategies.

Interesting discussion, thanks.


Re: What if you can learn more than you expected with EA Studio?

Dear Jurgen, I know we had a negative exchange in the past, and I have no intention of repeating that here.
This space should stay focused on methods, data, and constructive exchange — not personal issues.

Just to be clear: in none of my posts have I ever shared trading results or performance claims.
What I share here is about developing topics and workflows, not promoting systems or selling anything.

Since I bought EA Studio, I’ve reached out to many users to learn from their experience — including Popov, the creator of EA Studio, with whom I have regular exchanges about ideas and workflow development.
That’s the purpose of being active here: sharing, learning, and improving through open discussion — nothing commercial, nothing hidden.

Regarding your accusation about “manipulating results” or “moving accounts between brokers” — that’s simply false.
I use multiple brokers intentionally for cross-validation and robustness testing, which is a standard and transparent practice in algorithmic research. Nothing is hidden or altered; everything I post can be verified or replicated by anyone who wants to test it.

My Darwinex account is fully public, with a four-year live track record that includes both investor capital and platform allocations. The goal isn’t to “look good” — that would be impossible to fake without being delisted — but to trade consistently and within a defined risk framework.

In live trading, the ability to pause, replace, or add EAs to hedge or manage drawdowns isn’t manipulation; it’s one of the core skills of responsible risk management.

I’m sharing my current journey focused on building a repeatable, data-driven process with EA Studio, while my older profitable accounts were based on third-party EAs, which I’ve always acknowledged openly.
They’re two different paths — one past and still running, one experimental and evolving.

If you want to discuss methodology, validation logic, or testing processes, I’m open to that.
But I won’t engage in personal arguments — they don’t help anyone and only distract from what really matters.

Let’s keep the conversation technical and useful for the community.

All the best
Vincenzo



Jurgen2100 wrote:

Vincenzo, you’re not stupid, but right now you’re making a fool of yourself in front of everyone.

The fact that you keep ignoring the truth I wrote about your grid accounts and the way you manipulate your Darwin curves says enough.
People are not blind, and they’re not stupid. They can see exactly what you’re doing.

You talk like an expert, but anyone who has seen your live setups knows the reality:
almost all of your profitable accounts are grid systems, and when things start failing, you move them around between brokers to protect the image.

That’s not transparency, that’s theater.
You can talk about data, gates, and validation all you want, but words don’t build robustness; results do.

At this point, you’d honestly be better off saying nothing and removing this topic altogether.
Because every post you write just exposes you more.

And let’s be real: if anything I said was false, you would’ve defended it immediately.
The fact that you’ve ignored it twice tells everyone everything they need to know.
Silence always betrays guilt.

Re: What if you can learn more than you expected with EA Studio?

And to prove that you’re not an expert, and that you actually proved the point I made in my topic (that short data isn’t robust), I asked ChatGPT about the exact same experiment as if it were mine.
I did that deliberately, presenting it as my own experiment, to get the most neutral and objective answer possible.
I already knew the outcome when I saw your post, but since you keep saying that ChatGPT “opens a whole new world” for you, I asked it myself.

Look and learn, Vincenzo the expert.

NO SUGARCOATHING FULL TRUTH I WANT THE HARD TRUTH IS THIS THE ROBUST WAY ON DATA 2016 TO 2023 Experiment #1 — Can the filters you use to select EAs impact Walk-Forward Analysis (WFA) success?
I took the same collection of 233 EAs, the same dataset, and the same tool.
This collection was built early in my EA Studio journey — nothing special at all.

Then I ran it over and over, each time applying a different combination of filters before validation.
After filtering, the remaining strategies were tested on unseen data (WFA).
In total, I ran 91 combinations, each representing one complete backtest + WFA cycle.

The results: (attachment)

Some combinations achieved 100 % WFA success, others dropped below 20 % — all with identical data and strategies. (Example: 50 profitable in WFA out of 50 that passed the backtest = 100 %; 20 profitable out of 100 that passed = 20 %.)
ChatGPT said:

Alright bro, no sugarcoating — let’s get surgical with this.

First: What you actually tested

You didn’t test robustness of strategies.
You tested robustness of your filtering logic.
Meaning: you proved that filter choice alone can completely distort your perception of robustness, even when the underlying data and EAs are identical. That’s a powerful insight — but it also exposes a critical weakness.

The hard truth

This experiment does not prove robustness.
It actually proves the opposite: that your results can be manipulated — consciously or not — through selective filtering. Here’s why:

You used one fixed EA dataset (2016–2023).

That’s only one market era.
You have no independent data segment from before 2016 (e.g., 2008–2015) to verify true generalization.
Without pre-2016 exposure, you can’t see how those systems handle different volatility regimes, spreads, or liquidity conditions.

Walk-Forward (WFA) on 2016–2023 is still “in-sample-ish” because all EAs were born from that structure.

Your sample (233 EAs) is too narrow and too correlated.

If most were generated from the same timeframe, symbol, or parameter pool, they share structural biases.
You’re not testing 233 independent edges — you’re testing variations of the same bias.
So your 100 % WFA success pockets can easily be statistical mirages.

91 filter combinations = 91 different curve fits.

Each filter set changes the data-mining bias.
A few will accidentally align with the validation set — giving fake “100 % WFA success”.
That doesn’t mean those filters are robust; it just means they fit the validation noise better in that run.

WFA success ≠ future performance.

WFA can validate the process, not guarantee robustness.
Even perfect WFA results collapse live if your generation set lacked diverse conditions.
You can get 100 % WFA pass and still fail OOS (out-of-sample) if your entire data span is homogeneous.

The deeper reality

Robustness only exists when your strategy creation → validation → live pipeline consistently survives multiple uncorrelated stress environments.

Here’s the real robust structure (truth-level setup):

Phase             Data         Purpose
Build             2008–2019    Learn every environment possible (crash, recovery, QE, COVID)
Validate (WFA)    2020–2023    Forward stability check
Test (OOS)        2024–2025    Proof of generalization
Live monitor      current      Reality check
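For illustration, those phases are just date slices of one dataset; a minimal pandas sketch (date ranges from the table, everything else assumed):

```python
import pandas as pd

# Illustrative phase boundaries copied from the table above.
SPLITS = {
    "build":    ("2008-01-01", "2019-12-31"),  # learn every environment
    "validate": ("2020-01-01", "2023-12-31"),  # forward stability (WFA)
    "test":     ("2024-01-01", "2025-12-31"),  # out-of-sample proof
}

def slice_phase(bars: pd.DataFrame, phase: str) -> pd.DataFrame:
    """Return the rows of a datetime-indexed bar series for one phase."""
    start, end = SPLITS[phase]
    return bars.loc[start:end]
```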

You only covered the middle phase — not the full robustness chain.

Brutal verdict

No, this is not the robust way.

It’s a methodology stress test, not a robustness test.
You’ve demonstrated how fragile robustness appears when filter design is arbitrary.
True robustness means proving consistency across:

Different data spans

Different market conditions

Different generation sessions

Different brokers & spreads

Different filter sets (and still similar outcomes)

Until your process can do that — you’re seeing artifacts, not truth.

What is the robust way then

Generate new sets for each data regime (e.g. 2008–2015, 2016–2020, 2021–2025).
Cross-validate them. A robust logic will keep showing up across all eras.

Randomize and stress test each with Monte Carlo on:

Data shifts

Spread variations

Execution noise

Filter only on universal properties (PF, DD, SQN, #trades, smooth equity).
Avoid narrow or aesthetic filters — they create false robustness.

Validate on unseen broker data (not just unseen time).
True robustness = cross-broker stability.
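To make the Monte Carlo step above concrete, here is a minimal sketch of a stress test on spread and execution noise (every number in it is an arbitrary assumption, not a recommendation):

```python
import random

# Hypothetical Monte Carlo stress test: perturb each trade result with
# execution noise and a flat spread penalty, then require the strategy
# to stay profitable in at least 95 % of the stressed runs.
def survives_stress(trade_results, runs=1000, noise=0.10, spread_cost=0.02):
    profitable_runs = 0
    for _ in range(runs):
        total = sum(r + random.gauss(0.0, noise * abs(r)) - spread_cost
                    for r in trade_results)
        if total > 0:
            profitable_runs += 1
    return profitable_runs / runs >= 0.95
```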

TL;DR

2016–2023 WFA alone = not robust.

91 filter combos = proof of bias, not strength.

100 % success = likely curve-fit filter alignment.

To prove robustness, your system must survive different market DNA, not different filters.

Truth: you just proved how easy it is to fool yourself.
And that’s the most valuable lesson you could’ve extracted.

Would you like me to show you the exact pipeline structure (data splits + validation layers + stress tests) that would make this experiment truly robust?

Re: What if you can learn more than you expected with EA Studio?

Vincenzo,
I appreciate the calm tone, but let’s stay real for a moment.

You say you’ve never shared performance claims, yet in our past exchanges and during the Zoom call you showed your live accounts and your Darwin metrics, and explained how you move systems between brokers to manage drawdowns.
Those weren’t “theoretical workflows.” That was performance, directly tied to your own systems.

You also admitted your older profitable accounts were built on third-party EAs.
That already confirms what I said: your current workflow is still experimental, while your real profits came from someone else’s work.

And about “cross-validation”: come on, Vincenzo, we both know that shifting systems between brokers after a drawdown isn’t robustness testing; it’s reputation management.
Cross-validation happens with data, not with live accounts.

Everything you’re saying now is image defense, not truth.
I’m not arguing; I’m exposing.
People deserve honesty, not polished words.

Truth doesn’t need defending.
Silence and rationalization always reveal more than numbers ever will.

All the best.

Re: What if you can learn more than you expected with EA Studio?

Reminder to All Participants

In order to keep our discussion constructive and focused on ideas rather than individuals, please avoid referencing personal qualifications, credentials, or experiences in your responses.

Re: What if you can learn more than you expected with EA Studio?

Re WFA, I have used 5 out of 7, 3 out of 5, and recently AI said to use 3 out of 12.

Do you have a standard that you use?

Re: What if you can learn more than you expected with EA Studio?

Hi Blaiserboy,

I’m not sure if I got your question exactly right, but let me explain how I use WFA — maybe this helps.

I actually do different things depending on the phase:

Reactor (generation phase) →
When I generate strategies, I apply a very light optimization (only TP/SL, no indicator parameters) with 20% OOS.
I also always leave out the last 6–12 months completely.

Validation phase →
When I validate existing collections or single strategies, I backtest on 2016–2024, then filter to pick the top performers and run a full 2025 validation (WFA). There’s no need to split into segments here, since that data was never used before.

Reactor (new setup) →
I recently launched a new creation batch using Walk-Forward Optimization (WFO, with no additional optimization) with 10 segments and 40% OOS; a rough sketch of that segmentation is below. This batch will run until the end of the week — then we’ll see the results.
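EA Studio handles the segmentation internally, so the exact split logic here is only my assumption for illustration (equal, non-overlapping segments, each divided 60% in-sample / 40% OOS):

```python
# Hypothetical walk-forward segmentation: split a bar series into
# n_segments equal slices, each with its own in-sample and OOS part.
def walk_forward_segments(n_bars, n_segments=10, oos_fraction=0.40):
    seg_len = n_bars // n_segments
    segments = []
    for i in range(n_segments):
        start = i * seg_len
        end = start + seg_len
        split = start + int(seg_len * (1 - oos_fraction))
        segments.append({"in_sample": (start, split), "oos": (split, end)})
    return segments

# Example: roughly 10 years of daily bars
for seg in walk_forward_segments(2600):
    print(seg)
```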

Did that answer your question?

Blaiserboy wrote:

Re WFA, I have used 5 out of 7, 3 out of 5, and recently AI said to use 3 out of 12.

Do you have a standard that you use?

Re: What if you can learn more than you expected with EA Studio?

Blaiserboy’s question was actually very simple — he wasn’t asking for a workflow breakdown or a mentoring session.

He was just referring to the segment ratio inside the Walk-Forward setup (like 3/5, 5/7, 3/12), not the entire generation–validation pipeline.

Anyone who has used EA Studio for a while knows exactly what he meant.

Sometimes we don’t need to over-explain things that are already clear, especially to people who already understand the tool.

Re: What if you can learn more than you expected with EA Studio?

That gives me an idea or two

Thanks very much

Re: What if you can learn more than you expected with EA Studio?

Nice, thx for the feedback.
BR
Vincenzo

Blaiserboy wrote:

That gives me an idea or two

Thanks very much