Error importing Data in FSB Pro

When I import HST files, I get this error:
Unable to read beyond the end of the stream.

Hannah's Trade/Portfolio Management Tips

FSB Pro has all the tools you need to build profitable EAs (the issue lies in whether you know how to use its functions to your advantage).

For those who use FSB Pro to input your trading rules/theory/conditions...

Even if you know what parameters to use, such as an MA 20 crossing an MA 50, do you know which combination of settings is best? For example, which Base Price and Methods?

There are 3 options (Base Price, Fast Method, Slow Method), and across these 3 options there are 112 possible combinations.
Do you know which combination yields the best trading results?
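
For illustration, the 112 figure falls out of simple counting. Assuming the indicator offers seven Base Price choices and four smoothing methods each for the Fast and Slow averages (the exact option names may differ in your FSB Pro version), a short sketch confirms the count:

```python
from itertools import product

# Assumed option lists -- the exact names come from the FSB Pro indicator
# dialog, but a 7 x 4 x 4 grid yields exactly 112 combinations.
base_prices = ["Open", "High", "Low", "Close", "Median", "Typical", "Weighted"]
ma_methods  = ["Simple", "Weighted", "Exponential", "Smoothed"]

combinations = list(product(base_prices, ma_methods, ma_methods))
print(len(combinations))  # 7 * 4 * 4 = 112
```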

Beginner FSB users tend to just use the "default" settings.

However, over the years I've realised that simply changing these settings can have a dramatic effect on your trading results; it can turn a losing EA into a winning one.  Hence, the problem is not that we don't know what parameters to use; the problem is that we don't know which combination to use.

Determining the right parameters isn't really hard.  We look at the chart and get a good idea of which crossing seems to identify a breakout.  We often just input our parameters and use the default settings, without changing the Base Price, Signal, Smoothing Method, etc. (depending on the indicator, some have more combinations, some fewer).
Then we put the EA on a demo or live account and are dismayed that our trading theory isn't that profitable.  Thereafter we go hunting for another trading strategy, another theory, never realising that the problem lies not so much in the wrong parameters as in the wrong combination (sometimes it is the parameters, but if you really have a strong trading theory, it's probably not a parameter issue).

Even a simple MA trading theory can work very well.

Not long ago, someone approached me to conduct a Forex Trading Course.

I was hesitant because I did not want to reveal the trading theory I've spent so many years polishing and fine-tuning.

So I decided that IF I were to conduct a trading course, maybe I could use a simple MA theory to demonstrate to participants:

1) How to use charts to find the ideal parameters
2) How to use different LTF to eliminate trading noises
3) How to use FSB Pro to fine tune the results

But before I ever conduct that course to teach the MA theory, I want to make sure it's not just a theory, but one that can be transformed (via FSB) into a very practical and profitable EA, and that I have real results to back my claim.

A lot of courses out there teach concepts and theory, but how many of them teach participants to transform those theories into profitable EAs?  It's easy to teach Forex, but how many coaches not only teach but impart real practical skills that help participants create truly profitable EAs?

So I started an experiment a couple of months ago to test a simple MA theory.  (In the past, I uploaded some of these simple MA EAs in my posts, but the previous version of FSB made them execute "wrongly"; with the updated version, a number of my "time-sensitive" EAs work better.  So I decided to try this MA experiment again in the new updated version.)

For every input I made, I kept adjusting the various combinations one by one until I got the best back-test results (saving every new high as an EA file).  I've explained this step in great detail in my previous post.

To my pleasant surprise, even a simple MA produced really good results, with a profit factor of 2 and above.  Since it was meant to be an experiment, and I didn't know whether a theory as simple as an MA cross could be profitable, I didn't put much effort into adding different LTFs to eliminate noise.

Despite such simple rules (1 to 3 conditions), and simply by changing the various inputs, I developed nearly 50 profitable EAs (PF 1.3 to 2.09 after 3 months of demo testing) from the various combinations of Base Price and Fast/Slow Methods.

I believe I could have improved the results further with more effort.  Since I already have a strong trading theory I've been working on for years, I'll put this simple MA theory aside until I've completed my original theory.

Whether or not I ultimately conduct the course is immaterial; this experiment showed me that I can create profitable EAs using a simple MA theory (simply by changing the various combinations).  Honestly, it isn't hard for anyone else to create profitable EAs; it's just a matter of hard work.  Until the day Popov decides to automate this function, only those who work hard enough will know how to create profitable EAs.  Finding profitable EAs is not impossible; it's only out of reach for those who are simply too lazy to try.  For those who are determined, I guarantee that if you use the method above and apply a sound trading theory to it, you will surely find a few profitable EAs.


Is it possible to create profitable EAs using FSB Pro? Of course!

The problem with many beginners is that we are too lazy to test the various combinations to find the optimal settings.  Such manual hard work could easily be resolved by automation; unfortunately, there is no such automated function, and we lazily rely on the default settings.

To test this point, I've just started another experiment.  I've been wanting to do it but was held back by the sheer amount of work; I finally feel this experiment will pay off richly.

What's the experiment about?

1) Use an LTF setting, the one you think is the deciding factor, e.g. an H4 crossover or "higher than".
2) Use only this one rule (H4 crossover, higher than, etc.) and save every possible combination under a different file/EA name.
3) Use a coding method to organise your file names, e.g. 1 = Simple, 2 = Weighted, 3 = Exponential, 4 = Smoothed; then for the next option, 1 = Open, 2 = Close, 3 = High, etc.

So when you look at the results, "111" refers to Simple, Simple, Open.  (In hindsight, I didn't use this coding method myself, so I have to keep referring back to FSB to check the settings; for future tests, I should use it.)
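
A minimal sketch of that coding idea, assuming the digit order is fast method, slow method, base price. The digit-to-setting maps below just follow the post (1 = Simple, 2 = Weighted, and so on); extend them to match your own indicator options:

```python
# Digit maps taken from the post; extend as needed for other options.
METHODS = {"1": "Simple", "2": "Weighted", "3": "Exponential", "4": "Smoothed"}
PRICES  = {"1": "Open", "2": "Close", "3": "High"}

def decode(code):
    """Translate e.g. '111' into (fast method, slow method, base price)."""
    fast, slow, price = code          # unpack the three digits
    return METHODS[fast], METHODS[slow], PRICES[price]

print(decode("111"))  # ('Simple', 'Simple', 'Open')
```

With a scheme like this, the file name alone tells you the settings, so there is no need to reopen each EA in FSB to check them.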

Once you have done the first 112 combinations, save them as your "template" so that in future you can open these files again and use another parameter input for the next experiment.

The Results?

By simply changing the various combinations, I get vastly different results.  The right combination can turn a losing EA into a winning one, and likewise the wrong combination can turn a winning trading formula into a losing one.

In this link,

All these EAs have exactly the same single opening condition (besides Bar Open/Close as the first rule and Close and Reverse as the closing rule), yet you can see how the various combinations have vast implications for the trading results.

From the results above (via the link), you will realise that if you happen to use the wrong combination, then even with the correct parameters and correct trading rules, your search for a profitable EA will still fail.

I mention this because I fear that many enthusiastic beginners who think they have a winning formula will input their trading rules, receive poor results, and discard their trading theory, thinking the fault lies in the theory when in fact it lies in the wrong use of combinations.

Is it humanly possible to know which of these 112 combinations yields the optimal results?
If you have 3 or more opening conditions, imagine the sheer amount of work: the number of combinations grows exponentially with each additional trading rule.  It becomes humanly impossible to find the optimal combination without going through thousands of combinations manually (which is not viable).
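
The growth is easy to see: if each rule contributes roughly 112 independent setting combinations (an assumption, since different indicators have different option counts), the totals multiply:

```python
# Rough size of the search space, assuming ~112 setting combinations
# per rule and independent settings across rules.
combos_per_rule = 112
for n_rules in range(1, 4):
    print(n_rules, "rule(s):", combos_per_rule ** n_rules)
# With 3 rules the space already exceeds 1.4 million combinations.
```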

In Conclusion

Popov, I know I have mentioned this issue several times, and after this experiment I want to further emphasize the importance of automating this function as an alternative "optimization method".

As a long-time FSB Pro user who has successfully created hundreds of profitable EAs, trust me: if you automate this search for the best combination, users will be able to find profitable EAs far more effectively, especially users who already have a trading strategy to start with.  This is what I have been doing over the years to fine-tune my EAs; unfortunately, it took me many years because of the sheer number of combinations I had to go through manually (and because FSB has undergone many changes over the years).

Project To Determine Most Effective Acceptance Criteria Settings

qattack -- if it were just you and me I wouldn't comment any further -- I admire that you have the confidence to take the road less traveled and are willing to expend the time and energy to carve out your own path and share the results.  Since this is a forum then I think it's okay to express some skepticism -- just to make clear to anyone who is reading that others have had success with Popov's software and it doesn't necessarily have to be this complex or hard.  We are both motivated and want to understand how all this works -- I think where we differ is which forex battles we choose to fight.   It is important for everyone to become familiar with Popov's software -- and it is clear that within a short time you really have.

On the positive side -- we agree that back testing does not guarantee success.  But then you follow that statement by saying an unsuccessful back test is indicative of an unsuccessful strategy.  And that is *way* not true.  Have you ever taken Popov's strategies and run them through MT4's back tester?  Have you ever taken a successful strategy and slightly modified the Data Horizon?  Also, what I still see missing from your posts is any mention of live accounts -- and to me that's a red flag.

It also sounds like one of your goals is to find a way to filter-out bad strategies before they get added to a live account.  And we've touched on this before -- I don't think you can filter-out bad strategies ahead of time.  In the brave, new world of EA Portfolios that include 100's of strategies -- bad apples are one of the prices we will need to pay.  And that's where portfolio management comes in -- something that is foreign to most of us.

In the end you will decide on one or more ways you like to generate strategies and others will have their way(s).  And I'll make another 25-cent bet -- that 100-member collection that you eventually place in a live account trades no better or worse than the 100-member collection that Popov places in a live account.  I'll send instructions for where to wire transfer the funds...

Error importing data in EA Studio

I receive this message: DE30EUR1440.json wrong server name

Could you please help me?


Free Profitable EA made by FSB Generator Online

I've added the following as of 21 Aug.

1. Added e-trailing (trail at profit 250, step = 3) to all open positions.
2. Took the same batch of EAs and created another batch with SL = 500, then appended "1" to the last magic number.

E.g. 91005x (x = the new version; in this case "1" is the amended version with SL = 500).

Magic number that starts with...

Magic numbers that end with... (note: if there is an additional "1" at the end, it refers to amended version 1).
001 = 1min time chart
005 = 5min
030 = 30min
060 = 1H
240 = 4H
1440 = 1D
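
A hypothetical decoder for the magic-number convention above, assuming the three-digit suffix encodes the chart timeframe in minutes and an optional extra trailing digit marks the amended version (the 4H and daily codes are listed inconsistently in the post, so they are omitted here):

```python
# Timeframe suffixes that are unambiguous in the post (minutes-based).
TIMEFRAMES = {"001": "M1", "005": "M5", "030": "M30", "060": "H1"}

def decode_magic(magic, amended=False):
    """Split a magic number into (strategy id, timeframe, version)."""
    s = str(magic)
    if amended:
        version, s = s[-1], s[:-1]   # strip the amended-version digit first
    else:
        version = "0"
    return s[:-3], TIMEFRAMES.get(s[-3:], "?"), version

print(decode_magic(910051, amended=True))  # ('91', 'M5', '1')
```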

Here is the Fx Blue link where I can check which strategies perform the best:


1. If you want a "fair" comparison of how the EAs perform against each other, use the "Filter" date function and choose Start date: 21 Aug.  That way you can compare both the Original Version and the Amended Version to see whether there is an improvement in trade performance due to the changes made.

2. The original start date is June 14.  Some EAs have no SL, some have a very tight SL.  In version 1, all EAs have SL = 500.

3. From June 14 to Aug 20, no trailing was applied.  From Aug 21, trailing is applied to both the original and amended versions.

4. This project will end either at the end of Oct or Nov (if no one joins in), and I'll upload all the EAs, their results, and the settings for all to download.

5. If anyone wants to join in, you can upload your EAs and, if possible, post your demo results using Fx Blue for all to monitor.  Hopefully the pool of EAs will grow larger and larger as we work together to build our portfolio.

Project To Determine Most Effective Acceptance Criteria Settings

I understand that a successful backtest is not indicative of a successful trading strategy.

What IS true is that an UNsuccessful backtest IS indicative of an UNsuccessful trading strategy.

Our major tool is backtesting, for lack of anything else. Our goal with backtesting is basically to eliminate as many unsuccessful strategies as possible, leaving us with fewer bad strategies to bring down our good ones.

I agree that this is more a problem to approach programmatically, but that is a project beyond my programming skills. Well, I could certainly do it, but it would probably take me a couple years.

You say, "There are many ways to generate good strategies." I guess I probably agree with this statement, BUT it's not EASY to generate good strategies consistently enough to make any profit. In fact, I'd wager after the above tests that none of us knows any of these "many ways" and it is MUCH harder than any of us realize. We cannot just start cranking out a bunch of strategies that look good over the In Sample data and expect them to perform well in live trading.

Also, if we cannot find a way to filter out a number of bad strategies using OOS data as a guide, we won't be able to do so in live trading.

We need to develop a systematic approach based upon something other than trial and error.  As you say, there are billions upon billions of combinations. We're kidding ourselves if we think we can work by intuition and observation.

I also think that we may need to incorporate FSB to further refine our strategies in some manner.

For my next test, I'm conducting a huge-scale test similar to the very last test I mentioned at the end of my last post.

Project To Determine Most Effective Acceptance Criteria Settings

Thanks for writing-up all the results -- lots of good information.

I'm still trying to get my head around what you are trying to do -- so, I'm not sure what the "take home" lessons are.

There are many ways to generate good strategies -- I suspect the problem domain of how best to create forex strategies may not necessarily lend itself to following a strict recipe.  As you've discovered, there are a variety of criteria and parameters that need to be taken into account -- and each of those have a spectrum of possible settings.  In total, probably billions of combinations.  I'm thinking that a problem such as this should be approached programmatically -- though I'd still be dubious of the final results unless they were to also take into account live trading.  And that's because there is a real disconnect between live trading versus back testing and demo accounts.  We'd all be millionaires many times over if live accounts mirrored demo accounts and back testing results.

Perhaps a topic for a different thread -- Why is it that back testing often is not an accurate predictor of live trading performance?

Project To Determine Most Effective Acceptance Criteria Settings

Wow, this is very impressive. I really appreciate your efforts.

Project To Determine Most Effective Acceptance Criteria Settings

Here are my results from my initial runs incorporating OOS.

My working hypothesis is that through the use of OOS data, I can more quickly and efficiently test whether the strategies that I am generating (over the In Sample data) will continue to perform well in the future (OOS, on the data it was not optimized for). The OOS data should simulate placing the EAs on a demo account.

This allows me to change various settings (including, but not limited to, Acceptance Criteria, Monte Carlo variables, Optimizer settings, SL and TP, data sample size, etc.) and determine relatively quickly what effect those changes have in the OOS period. In fact, I can reach a "large enough" sample size to be meaningful, something that is almost impossible deploying strategies on test servers.

I know there will be dissenters, but it is my belief that using the newfound strategy generation settings in this manner will lead to generating EAs that are much more dependable. Please note that it is entirely possible that, once the "optimal" settings are found, the OOS period may be removed completely for live strategy generation. This method is simply using OOS data to determine those settings that make this possible.

This process has been extremely revealing so far. Some things I completely expected; a few surprised me. It will take me a while to fully interpret and understand the data.  As always, I am open to changing my mind about anything. Thanks again to Steve/sleytus for getting me interested in this more mass-generation-oriented approach than the one I was using before; I was initially quite against generation over short periods.

This test was run on slightly different settings than I originally intended, due to the new OOS validation. Here are those settings:

1. Historical Data:
   * Symbol: EURUSD
   * Period: M30
   (Data Horizon: 22500 bars; 15750 IS, 6750 OOS)
   * In Sample: From 10-23-2015 to 01-26-2017
   * OOS: From 01-27-2017 to 08-11-2017
2. Strategy Properties:
   (Account size: 100,000)
   * Entry Lots: 1
   * SL: Always/10-100 pips
   * TP: May use/10-100 pips
3. Generator settings:
   * Search best: System Quality Number
   * OOS: 30%
4. Optimization:
   * 5 steps
   * SQN
   * 30% OOS
5. All data validation: YES
6. MC Validation:
   * 100 tests
   * Validated tests: 95%
   (Settings: defaults PLUS "Randomize indicator parameters" [10/10/20 steps])
7. NO market validation

***Acceptance Criteria:
   > Complete Backtest:
      * Max Amb. bars: 10
      * Min Net Profit: 10 (no effect)
      * Min Trades: 100 (no effect)
   > IS part:
      * Min Trades: 100
      * Max DD%: 25
      * Max Stag%: 35
      * Min PF: 1.1
      * Min R/DD: 1
   > OOS part:
      * Min Trades: 25
      * Max DD%: 25
      * Max Stag%: 50
      * Min PF: 1.1
      * Min R/DD: 0.5
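
As a quick sanity check, the Data Horizon figures above are just the 70/30 split of the full horizon:

```python
# Verifying the 30% OOS split of a 22,500-bar Data Horizon.
total_bars = 22_500
oos_share = 0.30

oos = int(total_bars * oos_share)   # 6750 bars out of sample
is_ = total_bars - oos              # 15750 bars in sample
print(is_, oos)  # 15750 6750
```
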
Philosophy of Setting Acceptance Criteria

When setting Acceptance Criteria Values, I entered values far worse than we would want to use in real trading systems (with the exception of number of trades IS = 100; it's necessary to generate over a sufficient sample of trades). This is so I don't accidentally filter out any strategies by trying to be too accurate, while still eliminating a portion of them.

I will progressively narrow the AC, but only so far as to remove only a very rare potentially good strategy.
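
That widen-then-narrow approach can be sketched as a plain filter function whose thresholds are tightened on later passes. The field names and default thresholds below are illustrative only, not FSB's actual API:

```python
# Loose acceptance filter; tighten the thresholds on later passes.
# Field names (trades, dd_pct, pf) are hypothetical stand-ins for the
# stats FSB reports (number of trades, max drawdown %, profit factor).
def passes(stats, min_trades=100, max_dd_pct=25.0, min_pf=1.1):
    return (stats["trades"] >= min_trades
            and stats["dd_pct"] <= max_dd_pct
            and stats["pf"] >= min_pf)

strategy = {"trades": 140, "dd_pct": 18.0, "pf": 1.35}
print(passes(strategy))  # True
```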

Note: I ran more calculations on the IS/OOS data. Because significantly fewer strategies are validated with IS/OOS, the variance is much higher, so I needed a larger sample size. But I was initially very surprised to notice that the IS/OOS run had calculated MORE THAN TEN TIMES the number of strategies on average. First, I thought my settings must be incorrect somewhere. This was not the case. Then, I thought perhaps the new backtesting engine had a bug in it.

And finally, the results:

IS only:
Generated Strategies: 180374
Number Passed Validation (Generation step): 4861
Percent Passed Validation (Generation step): 2.69%
Number Passed Validation (MC step): 760
Percent Passed Validation (MC step): 15.63%
Percent Passed Validation (All steps): 0.4213%

IS/OOS:
Generated Strategies: 4511276
Number Passed Validation (Generation step): 2337
Percent Passed Validation (Generation step): 0.05%
Number Passed Validation (MC step): 842
Percent Passed Validation (MC step): 36.03%
Percent Passed Validation (All steps): 0.0187%
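
A small sketch to recompute the quoted percentages from the raw counts above, as a sanity check:

```python
# Pass rates derived from the raw counts in the results above.
def rates(generated, passed_gen, passed_mc):
    """Return (generation-step %, MC-step %, overall %) pass rates."""
    return (100 * passed_gen / generated,
            100 * passed_mc / passed_gen,
            100 * passed_mc / generated)

is_only = rates(180_374, 4_861, 760)     # ~ (2.69, 15.63, 0.42)
is_oos  = rates(4_511_276, 2_337, 842)   # ~ (0.05, 36.03, 0.019)
print([round(r, 2) for r in is_only])
print([round(r, 2) for r in is_oos])
```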

For this initial experiment, I used a test group without the OOS period as the control group. The In Sample length was identical to that of the generation run with OOS.

When complete, I compared the statistics for each generation using the number (percentage) of strategies that passed validation in the Generator step and the Monte Carlo step.

The difference in the percentage of "Passed Validation" at the Generator step represents the relative number of strategies generated via In-Sample-only generation that were not viable when trading outside the optimized period (i.e. they were curve-fit) over the following 6.5-month period.

Notice that for IS/OOS, nearly all of the strategies generated did not go on to be profitable over this 6.5-month period. This is far from the final word, but the reader should at least consider that relying on IS results alone may not be viable. You might say that 6.5 months is too long, that you expect most strategies to fail within that time, and that your pruning strategy will (eventually) solve the problem. Perhaps.

This test can be repeated using only 10% OOS (~2 months), and I bet the results wouldn't be substantially better; yes, you will have a higher proportion of winning strategies on average, but I contend that is due mostly to random fluctuation and small sample size. Run your own experiment: use 10% OOS over the same period as I did (the OOS will then need only 1575 bars). Reset the Acceptance Criteria (but change #/trades OOS to 9, which is proportional to my AC #/trades) and quickly generate 100 random strategies. Don't spend time on MC testing. Examine those 100 random strategies and see how many show a profit after two months OOS. I think you'll be amazed at how many actually do.

For this test, I used a mandatory length of time for testing profitability. That length may be changed and the process repeated. You will, of course, find more strategies that are "profitable" over shorter terms, but you will also have to base your results on a MUCH reduced number of trades (or accept that the number of strategies generated will go down relative to that same length of time).

Basing your results on fewer trades leads to greater and greater problems with small sample sizes and the reliability of results. Dr. Van Tharp, in his formulation of the System Quality Number (my favorite metric), notes that you must make at least 40 trades before the statistic becomes truly reliable. I set "Number of Trades" to only 25 in the OOS period to capture more strategies; however, I'm sacrificing some confidence in the results.
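
For reference, the System Quality Number is commonly computed as sqrt(N) times the mean trade result divided by its standard deviation, which shows directly why a small N inflates the uncertainty. A minimal sketch with made-up trade figures:

```python
import statistics

def sqn(r_multiples):
    """System Quality Number: sqrt(N) * mean(R) / stdev(R).
    Below ~40 trades the estimate is considered unreliable."""
    n = len(r_multiples)
    return (n ** 0.5) * statistics.mean(r_multiples) / statistics.stdev(r_multiples)

# Hypothetical per-trade R-multiples, for illustration only (25 trades).
trades = [1.0, -0.5, 2.0, -1.0, 0.5] * 5
print(round(sqn(trades), 2))  # 1.84
```

Because sqrt(N) multiplies the whole expression, a strategy's SQN over 25 trades can look very different from its SQN over 100 trades even when the per-trade edge is identical.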

But I quickly realized the reason: I am running Monte Carlo simulations that conduct 100 tests on each strategy that passes the Acceptance Criteria. Because the IS/OOS run was pickier in selecting passed strategies, it had much more time to actually generate strategies rather than spending it on Monte Carlo validation.

Strategies passed Monte Carlo validation at well over twice the rate with IS/OOS. Monte Carlo validation in the IS/OOS run covered both IS and OOS, so this certainly contributed to the higher validation rate to some degree. But consider, too, that the overall quality of a strategy's results is degraded by the use of OOS results. This is partially compensated by the fact that I did filter on OOS, so these are the highest stats of any OOS runs.

One observation: with IS-exclusive generation, since many more strategies are generated than the Collection can hold, the Collection will in the end contain a much better-than-average selection of strategies than the above stats would otherwise indicate.

What do these results tell us, and how can they be of use in further testing?

This initial run was not meant to prove anything so much as to provide a baseline for further testing.

The most interesting result to me was that using IS/OOS, my CPU can use its time generating MANY more strategies, rather than spend it on validating Monte Carlo tests.

One thing that disturbs me is that it's hard to immediately tell the difference between the stats of the best strategies generated with IS only vs. those with IS/OOS. I thought the line would be more discernible. (I can hear "I told you so" already!) This seems to be a very strong argument that the actual Acceptance Criteria may have only a very small effect on a proper workflow.

I haven't explored the stats very much yet, though, and I expect the real revelations to come with the further testing I have planned. Perhaps when certain AC are combined, they will generate a more predictable result.

Something else occurred to me about this: it seems to say that the stats we have to work with cannot be heavily relied upon to select strategies. That's not to say, for example, that there is no difference between an R/DD of 6 and an R/DD of 2; of course, lacking more information, the strategy with the higher R/DD has the better chance. But the divide between these two values may be very small (by itself), especially if it is taken from an exclusively IS/optimized strategy.

This would certainly be in line with the "observations" that there are no discernible performance differences between the "top ten" strategies of a Collection vs. the "second ten".

So if we cannot depend on the actual stats to make much of a difference, what else do we have? Currently, there are Monte Carlo testing and OOS. Monte Carlo settings are possibly the least understood yet most powerful tool traders have. From my time using StrategyQuant, I've watched the "gurus" propose all sorts of nonsense about how to validate your strategies, and much of it was focused on Monte Carlo testing.

The big problem is that we are given this tool, and we know it is somehow useful, but most of us don't have the math background to understand how to use it effectively. So someone that wants to sell his courses and/or software comes up with a process that seems logical to him (based on intuition, "observation", astrology, or whatever...) and this process becomes a "gold standard", unquestioned by people that follow it blindly. Because, after all, that's how EVERYONE does it now, so it MUST be the right way.

Trial and error, intuition, and observation are very unlikely to get us moving in the right direction. They can actually set you off in the directly opposite direction for an entire lifetime (I'm not exaggerating!).

We need to come up with a way to scientifically determine the best method of MC analysis. An exact or even near-exact calculation is far beyond my math skills, but I'd be happy just to be in the ballpark. Right now, I think we're flailing about blindly.

Finally, here is an additional quick test I did. The sample size is very small, so it doesn't necessarily mean anything at all, but the result was rather discouraging.

I took the Collections resulting from the above runs and fed them all through 6750 bars of OOS data that immediately PREceded the In Sample period.

Keep in mind that the IS/OOS strategies were already filtered for good OOS performance in another period adjacent to the In Sample.

Identical Acceptance Criteria were used, as per the OOS AC above.

Total In Sample strategies: 300
% that Passed Acceptance Criteria: 21.7%

Total OOS strategies: 581
% that Passed AC: 19.6%

As I said, this is a very small sample, but if the trend continues, then performance on OOS data is not consistent.

As long as the OOS period is adjacent to the IS, I don't think it matters whether it comes before or after for this purpose of monitoring performance.

OOS Acceptance Criteria: An Interesting Phenomenon

The generation was over the same number of bars (all generation occurs on the IS, whether or not you have an OOS element) and over exactly the same data. "Generated Strategies" is the first step in the process: every time the generator finds a strategy with positive profit (I assume), it is checked against the Acceptance Criteria. I am speaking of the number of strategies generated even before checking against the Acceptance Criteria.

The "Generated Strategies" of the IS/OOS generation numbered more than ten times those of the exclusively IS generation; therefore, it had far more strategies to check against the Acceptance Criteria.