<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<title type="html"><![CDATA[Forex Software — The Extra OOS Trick in EA Studio With Real Examples]]></title>
	<link rel="self" href="https://forexsb.com/forum/feed/atom/topic/10061/" />
	<updated>2026-04-16T18:18:07Z</updated>
	<generator>PunBB</generator>
	<id>https://forexsb.com/forum/topic/10061/the-extra-oos-trick-in-ea-studio-with-real-examples/</id>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83254/#p83254" />
			<content type="html"><![CDATA[<p>Got it, that makes sense.</p><p>Out of curiosity, how do you actually define “good metrics” in your workflow?</p><p>For example, do you have specific thresholds or ranges you expect to hold from build (backtest) to live (like SQN, drawdown, etc.), or is it more based on overall behavior and experience?</p><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>Just to make sure I understood you correctly.</p><p>When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.</p><p>So in your view, that alignment across layers is more important than the standalone numbers at any given stage.</p><p>Did I get that right?</p><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><br /><p>Yes, I’m satisfied with it.</p><p>Not because of the numbers alone, but because of the consistency across all layers:</p><p>build phase&nbsp; <br />extra out-of-sample extension&nbsp; <br />Monte Carlo under heavy stress&nbsp; <br />and most importantly, real live performance over time&nbsp; </p><p>For me, that combination is what defines whether a system meets expectations or not.</p><p>At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.</p><p>Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.</p></blockquote></div></blockquote></div><br /><p>I never said standalone metrics are not important. If metrics were not important, I would not be posting screenshots of them in the first place.</p><p>For me, everything has to be good as a whole. 
Metrics matter a lot, because they already tell me whether a system is strong, mediocre, or just weak. But when I talk about robust systems, I’m looking at more than just numbers at one stage.</p><p>What matters to me is that the same quality keeps showing up across all layers: build phase, extra out-of-sample, Monte Carlo, demo performance, and then live performance over time.</p><p>And when I say it all has to be good as a whole, or that I focus a lot on structure, then metrics are obviously important as well. Without metrics, I cannot even judge performance properly, and I also cannot see whether the equity curve, drawdown, and overall behavior are actually strong or not.</p><p>So for me, it is not metrics versus alignment. It is the combination. Good metrics make a system interesting, but consistency across all validation stages is what tells me whether it is actually robust or not.</p></blockquote></div>]]></content>
			<author>
				<name><![CDATA[Vincenzo]]></name>
				<uri>https://forexsb.com/forum/user/14930/</uri>
			</author>
			<updated>2026-04-16T18:18:07Z</updated>
			<id>https://forexsb.com/forum/post/83254/#p83254</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83253/#p83253" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>Just to make sure I understood you correctly.</p><p>When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.</p><p>So in your view, that alignment across layers is more important than the standalone numbers at any given stage.</p><p>Did I get that right?</p><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>And are you satisfied with this outcome ?<br />Does it cover your expectations?</p></blockquote></div><br /><p>Yes, I’m satisfied with it.</p><p>Not because of the numbers alone, but because of the consistency across all layers:</p><p>build phase&nbsp; <br />extra out-of-sample extension&nbsp; <br />Monte Carlo under heavy stress&nbsp; <br />and most importantly, real live performance over time&nbsp; </p><p>For me, that combination is what defines whether a system meets expectations or not.</p><p>At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.</p><p>Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.</p></blockquote></div></blockquote></div><br /><p>I never said standalone metrics are not important. If metrics were not important, I would not be posting screenshots of them in the first place.</p><p>For me, everything has to be good as a whole. Metrics matter a lot, because they already tell me whether a system is strong, mediocre, or just weak. 
But when I talk about robust systems, I’m looking at more than just numbers at one stage.</p><p>What matters to me is that the same quality keeps showing up across all layers: build phase, extra out-of-sample, Monte Carlo, demo performance, and then live performance over time.</p><p>And when I say it all has to be good as a whole, or that I focus a lot on structure, then metrics are obviously important as well. Without metrics, I cannot even judge performance properly, and I also cannot see whether the equity curve, drawdown, and overall behavior are actually strong or not.</p><p>So for me, it is not metrics versus alignment. It is the combination. Good metrics make a system interesting, but consistency across all validation stages is what tells me whether it is actually robust or not.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T18:07:56Z</updated>
			<id>https://forexsb.com/forum/post/83253/#p83253</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83252/#p83252" />
			<content type="html"><![CDATA[<p>Just to make sure I understood you correctly.</p><p>When you say you’re satisfied with the outcome, it’s not mainly because of the absolute metrics, but because the strategy shows consistent behavior across all validation stages, from build, to extra out-of-sample, to Monte Carlo, and then also in live.</p><p>So in your view, that alignment across layers is more important than the standalone numbers at any given stage.</p><p>Did I get that right?</p><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>And are you satisfied with this outcome ?<br />Does it cover your expectations?</p><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><br /><p>Yes, that is EA 165 from my main live account.</p><p>The symbol and magic number are visible in the screenshot as well.</p></blockquote></div></blockquote></div><br /><p>Yes, I’m satisfied with it.</p><p>Not because of the numbers alone, but because of the consistency across all layers:</p><p>build phase&nbsp; <br />extra out-of-sample extension&nbsp; <br />Monte Carlo under heavy stress&nbsp; <br />and most importantly, real live performance over time&nbsp; </p><p>For me, that combination is what defines whether a system meets expectations or not.</p><p>At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.</p><p>Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.</p></blockquote></div>]]></content>
			<author>
				<name><![CDATA[Vincenzo]]></name>
				<uri>https://forexsb.com/forum/user/14930/</uri>
			</author>
			<updated>2026-04-16T17:40:23Z</updated>
			<id>https://forexsb.com/forum/post/83252/#p83252</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83251/#p83251" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>And are you satisfied with this outcome ?<br />Does it cover your expectations?</p><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>Is this EA 165 the account you are referring to as outcome&nbsp; ?</p></blockquote></div><br /><p>Yes, that is EA 165 from my main live account.</p><p>The symbol and magic number are visible in the screenshot as well.</p></blockquote></div></blockquote></div><br /><p>Yes, I’m satisfied with it.</p><p>Not because of the numbers alone, but because of the consistency across all layers:</p><p>build phase&nbsp; <br />extra out-of-sample extension&nbsp; <br />Monte Carlo under heavy stress&nbsp; <br />and most importantly, real live performance over time&nbsp; </p><p>For me, that combination is what defines whether a system meets expectations or not.</p><p>At that point it’s no longer about a single result, but about consistent behavior across different validation stages and real conditions.</p><p>Also, considering the metrics and the fact that this EA has been running across multiple live accounts with the same behavior, I think the outcome speaks for itself.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T13:45:41Z</updated>
			<id>https://forexsb.com/forum/post/83251/#p83251</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83250/#p83250" />
			<content type="html"><![CDATA[<p>And are you satisfied with this outcome ?<br />Does it cover your expectations?</p><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>Is this EA 165 the account you are referring to as outcome&nbsp; ?</p><br /><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><p>Because this EA has already been running live for about a year, I decided to push the validation much further.</p><p>So this is not a random test.</p><br /><p>This is a proven strategy being pushed to its limits.</p><br /><p>I increased the stress deliberately:</p><p>history data heavily randomized (80% changed bars)<br />ATR variation increased to 40%<br />slippage increased up to 40 points<br />and 1500 Monte Carlo simulations</p><p>Not because I needed to “prove” it.</p><p>But because I already knew this strategy could handle it.</p><br /><p>Why?</p><p>Because this is not a fresh build.</p><p>This is an EA that has already:</p><p>passed the build process<br />survived extra out-of-sample<br />and proven itself in live conditions</p><p>So at this stage, the goal is different.</p><p>Not discovery.</p><p>Confirmation under pressure.</p><p>Results</p><br /><br /><p>Even under heavy stress:</p><p>the structure holds<br />performance remains consistent<br />degradation stays controlled<br />SQN remains solid across confidence levels</p><p>This is exactly what you want to see when you push a proven system.</p><br /><br /><p>Key point</p><p>No changes were made to the strategy.</p><p>No reoptimization.<br />No adjustments.</p><p>Just pure stress testing.</p><p>At this level, you are no longer looking at a backtest.</p><p>You are looking at behavior under extreme conditions.</p><p>And it still holds.</p></blockquote></div></blockquote></div><br /><p>Yes, that is EA 165 from my main live account.</p><p>The symbol and magic number are visible in the screenshot as 
well.</p></blockquote></div>]]></content>
			<author>
				<name><![CDATA[Vincenzo]]></name>
				<uri>https://forexsb.com/forum/user/14930/</uri>
			</author>
			<updated>2026-04-16T13:38:14Z</updated>
			<id>https://forexsb.com/forum/post/83250/#p83250</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83249/#p83249" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>aaronpriest wrote:</cite><blockquote><p>Yup, you nailed it. They are very short lived and have to get replaced often, that is a huge downside. And quite likely some good strategies are replaced prematurely, but most short term ones are bad strategies in the long run. I&#039;ve been running a mix of both short and long term strategies on demo accounts, and keeping track of their performance via magic numbers, swapping out short term ones frequently and only swapping out long term ones when they clearly break their pattern--that part takes more effort to track, but it&#039;s easy to see if one has been unprofitable for three months in a row for example to flag it for testing.</p></blockquote></div><br /><br /><br /><br /><p>Yeah, honestly I think what you’re doing right now is probably one of the best ways to really understand what works and what doesn’t.</p><p>That hybrid approach combining short-term strategies with longer-data builds gives you a lot of insight through actual experience, not just theory.</p><p>For me personally, I lean more towards the long-data side. I build on larger datasets with 50% OOS, allow full optimization during the build phase (even quite heavy, like 40 steps), and then rely on Monte Carlo to validate whether the structure holds or breaks under stress. That’s exactly why I’m comfortable pushing optimization, because MC is there to test if it’s real or not.</p><p>But at the same time, I completely agree with you that experimenting like you’re doing is key. In the end, everyone has to go through that process and figure out what fits their own mindset and workflow.</p><p>Short-term strategies can definitely work, but like you said, they tend to require constant rotation and replacement. It becomes a much more active process. 
The longer-data approach is slower and honestly a bit more “boring”, but for me it fits better because it’s more focused on long-term behavior rather than constant switching.</p><p>For example, at this point I have a core group of around 12 strong systems that have been running live well for over a year, plus a few others that are still profitable but not top-tier. Because of that, my building frequency has dropped a lot. I mostly focus now on managing, validating, and slowly expanding what already works, rather than constantly generating new strategies.</p><p>On the execution side, I also try to keep things structured. Ideally, I want to see around 6 months on demo before moving a robot to live. But if a robot already reaches around 60 trades within 4 months and the metrics still look strong, that can also be enough for me. In the end, it always depends on the overall behavior and structure.</p><p>At the same time, I do agree with your point that after enough experience, you start recognizing patterns faster. You can often see earlier when something is off. But even then, I still try to balance that with giving strategies enough room, because sometimes it really depends on the market phase they start in.</p><p>That’s also something I’ve noticed a lot: certain strategies just need the right conditions. For example, breakout systems can struggle in ranging periods at the start, even if they perform well over time.</p><p>So yeah, overall I think both approaches can work. It really comes down to what kind of process you want to run and what fits your mindset best.</p><p>Curious to see how your results evolve over time with that mix.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T13:19:35Z</updated>
			<id>https://forexsb.com/forum/post/83249/#p83249</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83248/#p83248" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><p>And yes, what you are doing can definitely work. Shorter data can work.</p><p>But one thing I’ve noticed with that approach is that it tends to push the process into a much faster and more rotational workflow.</p><p>You need to keep building new strategies regularly, because you need fresh ones for the next cycle.</p><p>And because of that constant rotation, everything becomes more reactive.</p><p>I also think one of the risks there is that some potentially good strategies might be rotated out too early, simply because they did not have enough time yet to show their behavior.</p><p>Strategies often need time to play out.</p></blockquote></div><p>Yup, you nailed it. They are very short lived and have to get replaced often, that is a huge downside. And quite likely some good strategies are replaced prematurely, but most short term ones are bad strategies in the long run. I&#039;ve been running a mix of both short and long term strategies on demo accounts, and keeping track of their performance via magic numbers, swapping out short term ones frequently and only swapping out long term ones when they clearly break their pattern--that part takes more effort to track, but it&#039;s easy to see if one has been unprofitable for three months in a row for example to flag it for testing.</p>]]></content>
			<author>
				<name><![CDATA[aaronpriest]]></name>
				<uri>https://forexsb.com/forum/user/12293/</uri>
			</author>
			<updated>2026-04-16T12:36:47Z</updated>
			<id>https://forexsb.com/forum/post/83248/#p83248</id>
		</entry>
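The "unprofitable for three months in a row" rule aaronpriest describes is easy to automate once monthly PnL is tracked per magic number. A minimal sketch — the function name and PnL figures are hypothetical, not taken from the posts:

```python
def flag_for_review(monthly_pnl_by_magic, lookback=3):
    """Return magic numbers whose last `lookback` monthly PnL figures
    are all negative, so those strategies can be pulled for re-testing."""
    return sorted(
        magic for magic, pnl in monthly_pnl_by_magic.items()
        if len(pnl) >= lookback and all(p < 0 for p in pnl[-lookback:])
    )

# Hypothetical monthly PnL per magic number, oldest month first.
pnl = {
    101: [120.0, -40.0, -15.0, -60.0],  # three losing months in a row
    102: [80.0, -30.0, 55.0, -10.0],    # mixed, still in rotation
}
print(flag_for_review(pnl))  # prints [101]
```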
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83247/#p83247" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>Vincenzo wrote:</cite><blockquote><p>Is this EA 165 the account you are referring to as outcome&nbsp; ?</p><br /><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><p>Because this EA has already been running live for about a year, I decided to push the validation much further.</p><p>So this is not a random test.</p><br /><p>This is a proven strategy being pushed to its limits.</p><br /><p>I increased the stress deliberately:</p><p>history data heavily randomized (80% changed bars)<br />ATR variation increased to 40%<br />slippage increased up to 40 points<br />and 1500 Monte Carlo simulations</p><p>Not because I needed to “prove” it.</p><p>But because I already knew this strategy could handle it.</p><br /><p>Why?</p><p>Because this is not a fresh build.</p><p>This is an EA that has already:</p><p>passed the build process<br />survived extra out-of-sample<br />and proven itself in live conditions</p><p>So at this stage, the goal is different.</p><p>Not discovery.</p><p>Confirmation under pressure.</p><p>Results</p><br /><br /><p>Even under heavy stress:</p><p>the structure holds<br />performance remains consistent<br />degradation stays controlled<br />SQN remains solid across confidence levels</p><p>This is exactly what you want to see when you push a proven system.</p><br /><br /><p>Key point</p><p>No changes were made to the strategy.</p><p>No reoptimization.<br />No adjustments.</p><p>Just pure stress testing.</p><p>At this level, you are no longer looking at a backtest.</p><p>You are looking at behavior under extreme conditions.</p><p>And it still holds.</p></blockquote></div></blockquote></div><br /><p>Yes, that is EA 165 from my main live account.</p><p>The symbol and magic number are visible in the screenshot as well.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T10:31:19Z</updated>
			<id>https://forexsb.com/forum/post/83247/#p83247</id>
		</entry>
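The stress test described above (heavy randomization, added slippage, 1500 Monte Carlo runs, SQN checked across confidence levels) can be illustrated with a simplified sketch. This is not EA Studio's actual Monte Carlo implementation: it only resamples a trade list (in R-multiples) with replacement and subtracts a random slippage cost, omitting bar randomization and ATR variation, and all figures are illustrative:

```python
import random
import statistics

def sqn(trades):
    """System Quality Number: sqrt(N) * mean(R) / stdev(R)."""
    return (len(trades) ** 0.5) * statistics.mean(trades) / statistics.stdev(trades)

def monte_carlo_sqn(trades, runs=1500, max_slippage=0.04, seed=1):
    """Resample the trade list with replacement, subtract a random
    slippage cost from every trade, and return the sorted SQN
    distribution across all runs."""
    rng = random.Random(seed)
    dist = []
    for _ in range(runs):
        sample = [t - rng.uniform(0, max_slippage)
                  for t in rng.choices(trades, k=len(trades))]
        dist.append(sqn(sample))
    return sorted(dist)

# Hypothetical trade history in R-multiples (not from the posts above).
trades = [1.2, -1.0, 0.8, 2.1, -0.5, 1.5, -1.0, 0.9, 1.1, -0.7] * 6
dist = monte_carlo_sqn(trades)
p5 = dist[int(0.05 * len(dist))]  # SQN at the 95% confidence level
```

"SQN remains solid across confidence levels" then means `p5` (and the other low percentiles) staying acceptable, not just the unstressed backtest value.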
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83246/#p83246" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>aaronpriest wrote:</cite><blockquote><p>Thanks for your detailed response! It mostly mirrors my own experience. I&#039;ve had many good strategies fail multi timeframe and multi symbol that still ran well for what it was designed for. I stick with H1 and higher, and occasionally M30. Multi broker gets a bit more nuanced, particularly if you can&#039;t get good data from a broker (missing candles, etc.), or enough history. This is less of a problem with MT5 if you have a good broker though. </p><p>If you don&#039;t mind me picking your brain some more... what is your opinion and experience on date range? I used to generate from 2010 (or as far back as my broker could go) with about 60/40 IS/OOS and heavy Monte Carlo, and run strategies that passed until they weren&#039;t profitable for three months in a row or so or broke a previous pattern. It was quite tedious to keep track of them. Lately I&#039;ve been experimenting with generating on the last year of data (excluding the last 3 months for OOS) with optimizer and Monte Carlo, validating the last 3 months to see how it would have traded (also with Monte Carlo, but no optimizing), and then just throwing it straight on a demo account. I generate all week, then validate and swap out new EAs on weekends, so I&#039;m always running new strategies for a week on recent data. I&#039;ve got it automated to a point where I don&#039;t spend much time on a weekend setting up the next week. I don&#039;t have enough data yet to say if it&#039;s more profitable or not, but curious to hear other people&#039;s experience on this before I waste much more of my time testing the theory... Is it better to analyze many years of various markets that won&#039;t ever repeat the same anyway, or analyze and tune for more recent data? Either way, tomorrow won&#039;t continue the same as yesterday. 
LOL</p><p>I think I&#039;m going back to a larger date range though because it&#039;s hard to find a strategy with enough trades on only 3 months of OOS data to be truly valid.</p></blockquote></div><br /><p>Thanks, and also respect for sharing your way of working so openly.</p><p>Yeah, exactly. I’m honestly glad to see someone who thinks like this, not just by theory, but by actually testing, by trial and error. That’s what this is really about in the end.</p><br /><p>EA Studio offers a lot of functions, and in principle that makes sense because it belongs to algo building. More tools and more testing can give more answers in a certain way. But in my eyes, not every function adds real value when it comes to robustness testing.</p><p>Because like you also experienced yourself, robots can fail multi-pair and multi-timeframe tests, and still run well live on the exact pair and timeframe they were built for.</p><p>And the opposite also happens. Some strategies can pass those multi-pair or multi-timeframe checks inside EA Studio, and still fall apart later on demo or eventually live. Not all of them, of course, but you see it happen often enough.</p><p>That is also why, for me, EA Studio, Monte Carlo, and all those tools are still reference points, not final proof of robustness. As long as a robot has not proven itself through a long enough demo period and then a long enough live period, I do not really see it as a robust system yet.</p><p>For me, real robustness only starts to become clear after 6 months live, 1 year live, and ideally even longer. The more time and data a system survives, the more we actually know. 
I always come back to that same principle.</p><p>To me, that already says a lot.</p><p>It tells us that multi-pair and multi-timeframe are extra answers, maybe interesting later, but not something I see as core proof of robustness in the build phase.</p><p>If a system has already proven itself live, then yes, putting it on another pair or timeframe can make sense and can add diversification. But that is something else. Then you are expanding a proven system, not validating an unproven one.</p><br /><p>About data, I’m on the long-data side, with a lot of OOS.</p><p>For me, data and OOS are key, plus Monte Carlo. Honestly, that is already enough.</p><p>That is also why I build from 2009 to 2022. Inside that build period I already use 50% OOS. The data I use is the premium data from Popov, and I’ve been using that from the beginning. I adjust that data inside EA Studio to match my broker specs, and because of that I get a correct match between EA Studio, MT4 backtests, demo, and live.</p><p>Good robots show the same structure everywhere.</p><br /><p>So my process is like this:</p><p>I build from 2009 to 2022<br />with 50% OOS<br />and inside Reactor I optimize the full build period with 40 steps, also with 50% OOS</p><p>Then when that is done, I extend the end date forward to 2026.</p><p>That creates 4 extra years of pure, raw, unseen, non-optimized OOS data.</p><p>A strategy that survives that gets Monte Carlo tested.</p><p>After that I run an MT4 tick-data backtest from 2003 to 2026, mainly to see what the robot does on the years before my build period started. Since my build period starts in 2009, I want to see how it behaves on that extra OOS before 2009 as well, so roughly 2003 to 2009.</p><br /><p>Can I be honest?</p><p>I can clearly see you are dedicated, and I respect that. I also see that you are not biased for long data or short data. 
You are clearly learning from your experiments and noticing patterns, and for that, real respect.</p><p>At the same time, I do think there is a downside to the shorter-data approach you are currently testing.</p><p>And I’ll explain what I mean.</p><br /><p>Right now you are building robots on one year of data, where three months of that is OOS inside the building process.</p><p>So basically, you are giving the robot around nine months to train on, plus three months as an OOS check.</p><p>That means the robot has only seen a relatively small portion of the market.</p><p>Even with OOS included, it is still trained on a limited amount of data.</p><br /><p>For me personally, that is a bit too little to build strong confidence.</p><p>It is the same logic everywhere.</p><br /><p>If someone wants to build an AI model, do they train it on as little data as possible, or on as much relevant data as possible?</p><br /><p>Or if you choose someone to manage your capital, do you pick the one with 20 years of experience, or someone with only a short track record?</p><p>For me, it translates directly to algo building.</p><p>The more historical data a strategy has seen, the more market conditions it has been exposed to.</p><p>The more OOS inside the build phase, the better.</p><p>And on top of that, I like to add a wide extra OOS layer after the build.</p><br /><p>If that is not possible inside Express Generator, then building on 2009–2022 and testing in MT4 on 2009–2026 can give a similar effect, just a bit slower.</p><p>Of course, I don’t know exactly how Express Generator works, since I don’t use it.</p><br /><p>And yes, what you are doing can definitely work. 
Shorter data can work.</p><p>But one thing I’ve noticed with that approach is that it tends to push the process into a much faster and more rotational workflow.</p><p>You need to keep building new strategies regularly, because you need fresh ones for the next cycle.</p><p>And because of that constant rotation, everything becomes more reactive.</p><p>I also think one of the risks there is that some potentially good strategies might be rotated out too early, simply because they did not have enough time yet to show their behavior.</p><p>Strategies often need time to play out.</p><br /><p>That is one of the biggest differences for me.</p><p>With longer-data robust strategies, the process is slower, yes. It takes longer in building, longer in demo, and longer in live.</p><p>But it is much less rushed.</p><p>I don’t need to switch every week. I don’t need to build every day.</p><br /><p>At this point I have around 12 robots with good metrics, plus maybe another 5 that are still profitable but where the metrics are not really top level. So those are not my best systems, but they still do their job.</p><p>And honestly, that’s also part of my view on this. We do not need a portfolio of 50 systems. Even in manual trading, one person with a good strategy and the right psychology can make serious money. For me it is the same with robots.</p><p>One strong, robust system that works for a long time can already make you serious money. 
So if you have a stack of around 10 robust robots that have already been running well live for a year, in my opinion you really do not need much more than that.</p><p>And yes, there are enough builders who can make shorter-data constant rotation work.</p><p>I’m not denying that.</p><p>It can work.</p><p>But for me personally, the goal is different.</p><p>I prefer building systems that are robust and can last over longer periods of time.</p><p>That is also where I see the main difference:</p><p>Long data is slower, but generally leads to more stable structures over time.</p><p>Shorter data is faster, but relies much more on constant rotation and replacement.</p><p>So yes, it moves faster.</p><p>You get robots faster.<br />You can move them faster from demo to live.</p><p>But you also tend to remove them faster again, because they depend more on staying in rotation.</p><p>That is why I personally prefer the long-data approach.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T10:29:37Z</updated>
			<id>https://forexsb.com/forum/post/83246/#p83246</id>
		</entry>
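The date arithmetic behind the workflow above (build on 2009–2022 with 50% OOS inside the build, then extend the end date to 2026 for raw, unseen extra OOS) can be sketched as follows; the function name and the 50/50 split ratio are illustrative:

```python
from datetime import date

def split_periods(build_start, build_end, extended_end, oos_ratio=0.5):
    """Date ranges for the 'extend the end date' trick: an in-sample
    part, the OOS part inside the build period, and the extra OOS
    created by moving the end date forward after building."""
    is_end = build_start + (build_end - build_start) * (1 - oos_ratio)
    return {
        "in_sample": (build_start, is_end),
        "build_oos": (is_end, build_end),
        "extra_oos": (build_end, extended_end),
    }

# Build on 2009-2022 with 50% OOS, then extend the end date to 2026,
# leaving four years of data the builder never saw.
periods = split_periods(date(2009, 1, 1), date(2022, 1, 1), date(2026, 1, 1))
```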
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83245/#p83245" />
			<content type="html"><![CDATA[<p>Is this EA 165 the account you are referring to as outcome&nbsp; ?</p><br /><br /><br /><div class="quotebox"><cite>algotrader21 wrote:</cite><blockquote><p>Because this EA has already been running live for about a year, I decided to push the validation much further.</p><p>So this is not a random test.</p><br /><p>This is a proven strategy being pushed to its limits.</p><br /><p>I increased the stress deliberately:</p><p>history data heavily randomized (80% changed bars)<br />ATR variation increased to 40%<br />slippage increased up to 40 points<br />and 1500 Monte Carlo simulations</p><p>Not because I needed to “prove” it.</p><p>But because I already knew this strategy could handle it.</p><br /><p>Why?</p><p>Because this is not a fresh build.</p><p>This is an EA that has already:</p><p>passed the build process<br />survived extra out-of-sample<br />and proven itself in live conditions</p><p>So at this stage, the goal is different.</p><p>Not discovery.</p><p>Confirmation under pressure.</p><p>Results</p><br /><br /><p>Even under heavy stress:</p><p>the structure holds<br />performance remains consistent<br />degradation stays controlled<br />SQN remains solid across confidence levels</p><p>This is exactly what you want to see when you push a proven system.</p><br /><br /><p>Key point</p><p>No changes were made to the strategy.</p><p>No reoptimization.<br />No adjustments.</p><p>Just pure stress testing.</p><p>At this level, you are no longer looking at a backtest.</p><p>You are looking at behavior under extreme conditions.</p><p>And it still holds.</p></blockquote></div>]]></content>
			<author>
				<name><![CDATA[Vincenzo]]></name>
				<uri>https://forexsb.com/forum/user/14930/</uri>
			</author>
			<updated>2026-04-16T05:40:32Z</updated>
			<id>https://forexsb.com/forum/post/83245/#p83245</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83244/#p83244" />
			<content type="html"><![CDATA[<p>Thanks for your detailed response! It mostly mirrors my own experience. I&#039;ve had many good strategies fail multi-timeframe and multi-symbol tests while still running well for what they were designed for. I stick with H1 and higher, and occasionally M30. Multi-broker gets a bit more nuanced, particularly if you can&#039;t get good data from a broker (missing candles, etc.), or enough history. This is less of a problem with MT5 if you have a good broker though. </p><p>If you don&#039;t mind me picking your brain some more... what is your opinion and experience on date range? I used to generate from 2010 (or as far back as my broker could go) with about 60/40 IS/OOS and heavy Monte Carlo, and run strategies that passed until they weren&#039;t profitable for about three months in a row or broke a previous pattern. It was quite tedious to keep track of them. Lately I&#039;ve been experimenting with generating on the last year of data (excluding the last 3 months for OOS) with optimizer and Monte Carlo, validating the last 3 months to see how it would have traded (also with Monte Carlo, but no optimizing), and then just throwing it straight on a demo account. I generate all week, then validate and swap out new EAs on weekends, so I&#039;m always running new strategies for a week on recent data. I&#039;ve got it automated to a point where I don&#039;t spend much time on a weekend setting up the next week. I don&#039;t have enough data yet to say if it&#039;s more profitable or not, but I&#039;m curious to hear other people&#039;s experience on this before I waste much more of my time testing the theory... Is it better to analyze many years of various markets that won&#039;t ever repeat the same anyway, or analyze and tune for more recent data? Either way, tomorrow won&#039;t continue the same as yesterday. LOL</p><p>I think I&#039;m going back to a larger date range though, because it&#039;s hard to find a strategy with enough trades on only 3 months of OOS data to be truly valid.</p>]]></content>
			<author>
				<name><![CDATA[aaronpriest]]></name>
				<uri>https://forexsb.com/forum/user/12293/</uri>
			</author>
			<updated>2026-04-16T02:37:15Z</updated>
			<id>https://forexsb.com/forum/post/83244/#p83244</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83243/#p83243" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>aaronpriest wrote:</cite><blockquote><p>Do you test multiple timeframes, multiple symbols, or multiple broker data? I ask because I&#039;m using express generator and mm.js (multimarket) does not support the same flags that gen.js does (the generator) for OOS and data-end, etc.</p></blockquote></div><p>Hi Aaron,</p><p>Thanks, good question.</p><p>I don’t test across multiple timeframes, multiple symbols, or different brokers, and I’ll explain why.</p><p>For me, a strategy does not need to work across everything to be considered good. What matters most is that it performs well on the specific market, timeframe, and data it was built for. I build strategies for a specific symbol and timeframe, and I validate them within that context.</p><br /><p>Regarding timeframes, I don’t build on lower timeframes at all. There is too much noise there and, in my experience, a much lower chance of getting something truly robust. I mainly work from M30 and higher.</p><br /><br /><p>For symbols, I’m not a big believer in cross-pair validation as a main measure of robustness. Some strategies can work on multiple pairs, yes, but in my view that is rare. For me, the main question is not whether a robot can run on many pairs, but whether it is truly robust on the market and timeframe it was built for. And once a robot reaches demo or live, that is really what I focus on: the actual metrics and whether it is making money on the pair and timeframe it was designed for.</p><br /><p>Another reason I don’t put much weight on multi-market or multi-timeframe validation during the build phase is that, in my view, we still cannot clearly tell what we are looking at.
Even if a robot shows good results across multiple markets or timeframes inside EA Studio, that still does not tell me clearly whether I’m looking at something truly robust or just something that is broadly overfit.</p><p>And if we then start putting that robot on demo across different pairs and timeframes, it can easily become a waste of time and resources, because the main question is still unanswered: is the robot actually robust or not?</p><p>For me, it makes much more sense to first go deeper on one market, one timeframe, and one environment, and only later, if the robot has already proven itself through forward or live results over a meaningful period, then maybe check whether it can be expanded to other pairs or timeframes.</p><p>The difference is big. Then we are not expanding with a robot that EA Studio simply approved. We are expanding with a robot that has already proven itself in real conditions.</p><br /><p>Same idea for brokers.</p><p>I’m not building or selling EAs for general distribution, so I don’t test across different brokers. I focus on the broker I actually use for building, demo, and live trading.</p><p>In my view, if you are not selling robots and you already use one solid broker, then multi-broker testing does not add much value. I would rather use that time and energy to run more strategies on demo, expand within the same broker, and push more of them through real forward testing.</p><p>That gives me much more useful information than checking whether the same robot behaves slightly differently across several brokers.</p><p>If I ever want to expand beyond the maximum number of accounts with the same broker, I can always connect another broker through copy trading. I already do that with one account. Sometimes I see a difference of a few seconds in execution, not always, but that is normal. It can even happen between different live accounts at the same broker. That is simply part of trading reality and execution. 
But for me, that does not say much about the underlying robustness of the strategy.<br />The main point is that the robot needs to work properly on the broker and environment it was actually built for.</p><p>So for me, testing across multiple brokers is more about distribution and compatibility than about robustness. It can make sense if you sell robots and want to know whether they behave similarly everywhere, but for my own workflow it does not add much.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-16T02:16:28Z</updated>
			<id>https://forexsb.com/forum/post/83243/#p83243</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83242/#p83242" />
			<content type="html"><![CDATA[<p>Do you test multiple timeframes, multiple symbols, or multiple brokers&#039; data? I ask because I&#039;m using the express generator, and mm.js (multimarket) does not support the same flags that gen.js (the generator) does for OOS, data-end, etc.</p>]]></content>
			<author>
				<name><![CDATA[aaronpriest]]></name>
				<uri>https://forexsb.com/forum/user/12293/</uri>
			</author>
			<updated>2026-04-16T00:20:49Z</updated>
			<id>https://forexsb.com/forum/post/83242/#p83242</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83241/#p83241" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>mentosan wrote:</cite><blockquote><p>The “extra OOS” idea is, from a practical standpoint, one of the most solid approaches I’ve seen for reducing curve fitting.</p><p>Many traders rely on the standard OOS in EA Studio and assume it’s sufficient, but in reality the generator still “sees” the full data context. By isolating a completely unseen period and testing without any re-optimization, this method gets much closer to real live trading conditions.</p><p>What I find particularly valuable about this approach:</p><p>it enforces discipline (no tweaking after seeing results)<br />it filters out strategies that are “too good to be true”<br />it highlights true robustness rather than just performance</p><p>It is probably the closest we can get to real-market conditions in a retail environment.</p><p>Great contribution; this is the kind of method that actually improves how strategies are filtered, not just how they are generated.</p></blockquote></div><br /><p>Thanks Mentosan, appreciate it.</p><p>Good to see this perspective being recognized; not many people approach it this way.</p><p>Once the strategy is built and optimized, extending the end date without touching anything basically turns the extended period into pure, untouched out-of-sample data.</p><p>That’s usually the point where the difference shows between strategies that only looked good during the build phase and those that actually hold up when pushed into completely unseen conditions.</p><p>A lot of strategies simply don’t survive that extension, even if they passed initial filtering and Monte Carlo.</p><p>At the same time, you’ll never filter everything out completely, but working with longer datasets and adding that extra OOS layer makes a big difference in what remains.</p><p>Like you said, it’s probably one of the closest things we have to real market conditions in a retail environment, especially when that OOS extension covers multiple market regimes 
instead of just a short period.</p><p>That’s where the real filtering happens.</p>]]></content>
			<author>
				<name><![CDATA[algotrader21]]></name>
				<uri>https://forexsb.com/forum/user/19926/</uri>
			</author>
			<updated>2026-04-15T13:27:55Z</updated>
			<id>https://forexsb.com/forum/post/83241/#p83241</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: The Extra OOS Trick in EA Studio With Real Examples]]></title>
			<link rel="alternate" href="https://forexsb.com/forum/post/83240/#p83240" />
			<content type="html"><![CDATA[<p>The “extra OOS” idea is, from a practical standpoint, one of the most solid approaches I’ve seen for reducing curve fitting.</p><p>Many traders rely on the standard OOS in EA Studio and assume it’s sufficient, but in reality the generator still “sees” the full data context. By isolating a completely unseen period and testing without any re-optimization, this method gets much closer to real live trading conditions.</p><p>What I find particularly valuable about this approach:</p><p>it enforces discipline (no tweaking after seeing results)<br />it filters out strategies that are “too good to be true”<br />it highlights true robustness rather than just performance</p><p>It is probably the closest we can get to real-market conditions in a retail environment.</p><p>Great contribution; this is the kind of method that actually improves how strategies are filtered, not just how they are generated.</p>]]></content>
			<author>
				<name><![CDATA[mentosan]]></name>
				<uri>https://forexsb.com/forum/user/2989/</uri>
			</author>
			<updated>2026-04-15T09:05:01Z</updated>
			<id>https://forexsb.com/forum/post/83240/#p83240</id>
		</entry>
</feed>
