Topic: Robustness testing

Hi,

I'm looking for strategies which ideally pass both Monte Carlo and Multi Market, but I'm finding that many pass one or the other but not both.

Initially, I was going for around 500 Monte Carlo tests, but I have since been put on the right track by Popov and am now testing no more than 50. This is obviously getting me more strategies (and saving time), but should I use these even if they do not pass the MM tests? What's more important, Monte Carlo or Multi Market?

Is 50 Monte Carlo and passing on at least 2 multi markets OK?
Is 50 Monte Carlo and passing on 3 multi markets better?

And isn't multi market "broadly" the same as Monte Carlo anyway, since it's all just different data?

What do you think?

Thank you,
M

Re: Robustness testing

This is a pretty tough question to answer as it is very subjective.

If you are able to do so, I suggest that you run walk forward tests applying different criteria and determine the results after a couple of months.

My 'secret' goal is to push EA Studio until I can net 3000 pips per day....

Re: Robustness testing

Thank you, yes - it's very subjective. I think that's good advice - I will just play around with a few demo accounts and the right balance of settings will appear over time.

Re: Robustness testing

Probably you can run some MT4 backtests with a good-sized OOS and get some indication.


5 (edited by ats118765 2020-05-07 05:34:56)

Re: Robustness testing

Minch wrote:

What's more important, Monte Carlo or Multi Market? Is 50 Monte Carlo and passing on at least 2 multi markets OK? And isn't multi market "broadly" the same as Monte Carlo anyway, since it's all just different data?


Hi Minch. There are two broad approaches I take to strategy generation, depending on whether the strategy is trend following or mean reverting in nature. It is important to distinguish between the two, as some processes such as Monte Carlo and WFT are not appropriate as robustness tests for trend following...but are appropriate for mean reversion.

For example, a trend following system only performs when markets trend...but should not perform when they don't. If it does...then that is a pretty sure sign that it is curve fit. Ideally you are after a solution that performs well when markets trend...and simply stagnates (without too much drawdown) when market conditions are unfavourable (aka not trending).

The rationale above renders the Monte Carlo and WFT tests unnecessary and counterproductive for robustness testing of trend following systems. For example, the process of segmenting your data history into equal segments for WFT assumes that performance is consistent across each segment. This is appropriate for systems designed to capture a 'convergent' repeating market condition such as mean reversion...but ineffective for 'divergent' methodologies such as trend following, where performance is unpredictable and dictated by when markets decide to trend.

Furthermore, the Monte Carlo technique biases your results towards those that produce consistent performance and smooth linear equity curves from serially repeated autocorrelation. Trend following performance, however, is noted for its stepped equity curve arising from discrete, unpredictable periods of autocorrelation separated by large tracts of noise. The steps occur when markets trend...but during non-trending conditions your MC array should be fairly flat or in slight drawdown.
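The trade-reshuffling idea behind a Monte Carlo test can be sketched in a few lines. This is a minimal illustration only (not EA Studio's actual implementation), assuming a simple list of per-trade profits:

```python
import random

def monte_carlo_drawdowns(trade_pnl, runs=50, seed=42):
    """Reshuffle trade order and record the max drawdown of each run.

    A strategy with smooth, consistent per-trade results keeps a small
    drawdown under any ordering; a stepped trend-following curve does not.
    """
    rng = random.Random(seed)
    worst = []
    for _ in range(runs):
        trades = trade_pnl[:]
        rng.shuffle(trades)
        equity = peak = 0.0
        max_dd = 0.0
        for pnl in trades:
            equity += pnl
            peak = max(peak, equity)
            max_dd = max(max_dd, peak - equity)
        worst.append(max_dd)
    return worst

# Stepped trend-following results: a few big wins among many small losses
dds = monte_carlo_drawdowns([-10.0] * 40 + [120.0] * 5, runs=50)
```

A smooth, consistent strategy keeps a similar drawdown under every ordering, while the stepped results of a trend follower produce a wide spread of drawdowns, which is why the test tends to penalise divergent systems.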

So in a nutshell your workflow process and choice of robustness tests needs to be aligned with the design logic of your strategy.....otherwise you will lose the connection between the design logic and EA performance.

So for trend following systems I ditch the Monte Carlo and WFT and simply use very long-term data sets, ensuring that these data sets capture a very broad range of different market conditions. Multi-market is good for trend following provided you use an ATR-based volatility adjustment method for your data sets.

On the other hand Multi-market is not advised for convergent methods that are specifically configured for a particular market condition of finite duration.

Unfortunately the pip-based methods in EA Studio limit you to testing only those markets that have similar volatility profiles, such as EURUSD and GBPUSD. I have been eagerly looking forward to an update that includes volatility-based stops and trailing stop methods...but until that day...there are certain hurdles you need to overcome.

In its current configuration...EA Studio is a great tool to find and deploy 'convergent' techniques that have a short shelf life...but is more difficult to apply to 'divergent' trend following methods.

Trend following systems want to be enduring and robust over different market conditions whereas mean reverting systems are designed to make hay when favourable conditions persist...but fall over in the long term when market conditions change.

So.....you have a large number of tools at your disposal with EA studio...but you need to closely examine the logic of each tool and determine whether it is appropriate for your particular data mined solution. I hope this makes sense :-)

Diversification and risk-weighted returns is what this game is about

Re: Robustness testing

Thank you Rich! Very comprehensive answer there. I didn't realise that different robustness tests are going to be better in different market conditions; that's something else I need to think about.

At the moment, I am simply finding strategies which pass a Monte Carlo on basic settings (50 tests at 80%) and now I'm running those on a demo account to see what they look like. It might not be the best approach but I'm happy to experiment and change my approach depending on what I learn. After all, I have to get the experience now while I'm in demo before I start putting up real money.

Much obliged, thank you-

Re: Robustness testing

Minch wrote:

I didn't realise that different robustness tests are going to be better in different market conditions... I'm happy to experiment and change my approach depending on what I learn.

No problemo Minch. :-)


8 (edited by ats118765 2020-05-16 07:11:27)

Re: Robustness testing

Some food for thought that has helped me find a way forward.

Don't let the powerful processes bamboozle you. Ensure you understand what you are trying to achieve first and then decide which of the processes are going to aid or simply obfuscate that ambition.

What we are doing here is exactly the same process employed by mining companies in extracting economic deposits (signal) from the ground (signal plus noise). We want to extract an economic deposit (as much as we can) which we can then refine, rather than search for a fictional single gold bar which does not occur in nature. Noise is everywhere.

The ground dictates our success as does the market....not our processes. For example if the market does not trend....then we don't succeed and vice versa. We first need to understand how to capture trends and where to capture trends and configure that into our design before we start the process of refining. The processes we use are simply a method to distil 'sufficient' signal from the noise....or in mining terms....an economic deposit. We cannot lose sight of what we are trying to achieve from the process itself.

We want as much signal as we can get in our processes before we start refining that collection further. We first want to define where those signals are most likely found and how best to find it. The same way a mining company looks for the most likely area where gold is found based on their knowledge of how gold forms and how best to find it. We also must have a knowledge of how best to capture trends, breakouts and mean reversion so that we reduce the data set to that smaller subset we want to mine. We use that knowledge to pre-configure the key variables we want to work around with our processes.

Our signal is the broad feature that we want to extract from the market. For example, the signal may be breakout/trend or mean reversion.

There is only a finite amount of extractable signal from any market. Most of it is simply noise. We need to accept and understand that fact. This helps to set realistic expectations.

For example if we want a trend following system, simply look at a weekly chart and define when it trends. We want as much of those 'good trends' as we can obtain. However to obtain many of these trends, we need to become less specific as opposed to more prescriptive in the way we mine it. Each trend is slightly different so if we are too specific in what type of trend we want...we reduce our ability to capture an 'economic deposit' worth of them.

Noise is the greatest obstacle to our ambitions...but we must accept that we can never eliminate it totally. While trends, breakouts and mean reversion will occur in the future as surely as the sun rises the next day...we need to understand that the degree of noise in the signal will be different in the future than it was in the past. This will reshape those future signals to a degree through the impact noise has on them.

We therefore need to allow for this variability of future outcome from noise to be able to extract an economic deposit worth the effort.

Overly prescriptive systems with many variables arising from intensive data mining processes ensure that we data mine a solution that 'exactly' responds to the past market data. We don't want this exactness. With a slightly different future we will significantly reduce the signals we can collect if we are too prescriptive.

We know that the future is uncertain and that the degree of noise will be different from the past, so we need to ensure we allow for this future variability of noise without overly compromising the signal that we want to data mine.

Every variable we add to our system from intensive data mining processes restricts the solution to respond to a more particular market condition. We are expanding our specifications to become more selective in what we want to mine. This is contrary to what we actually want....which is to find an economic deposit as opposed to a specific gold bar.....so we need to simplify the constraints to a sufficient degree to allow for variability arising from the impacts of noise on our signals.

So our aim is to simplify to allow for future variability BUT not so simple that it compromises those broad constraints we use to capture that particular broad signal itself.

Work out the best place to mine and how you are going to mine....BEFORE you put your mining process to work....otherwise you could just be mining dirt :-)


Re: Robustness testing

Hi Richard,

Thanks for your nuggets of golden advice. 

Do you think you can translate your analogies into steps we can execute in EA Studio?

For example, if we want to mine trending EAs, we need to choose a data period that best captures the type of trends we are looking for. Our search (acceptance criteria) needs to be broader instead of too prescriptive (i.e. acceptance criteria too strict).

My question is, after your initial search (broader acceptance criteria), what do you do next to improve the quality of the EA you have generated? Wouldn't adding those "stricter" acceptance criteria in the 1st place help to refine the search further into 1 work process instead of dividing up our workflow into so many steps?

Regards
Hannah

10 (edited by ats118765 2020-05-16 12:09:26)

Re: Robustness testing

hannahis wrote:

Hi Richard,

Do you think you can translate your analogies into steps we can execute in EA Studio?


Hi Hannah

It is quite hard with EA Studio to mine for trend following strategies. The processes encourage you to find 'convergent' solutions that have a short shelf life.

Here is a broad outline of what I do.....and you will need to convert it into EA Studio Language. This is just for ideas that might assist others who are having a hard time generating robust strategies. Each to their own...but some might find these tips helpful.

I firstly want to establish a few very simple broad configurations that I know can capture trends and establish these as core design principles that must be embedded into each strategy solution. For example all solutions must:
1. Cut losses short and let profits run. For this to occur you need an initial stop loss and a trailing stop loss condition. Profit targets compromise the ability to let profits run, so don't use them.
2. The entry condition needs to ensure that your strategy only activates when trends are most likely to be occurring. This avoids the noise. For example, you could use presets in EA Studio to lock in a trend following entry condition and allow data mining to supplement it: say an SMA crossover (100/200) plus an ADX 'rising' condition (or alternatively a 100-period Donchian breakout plus an ADX rising condition as a preset). The third variable of the entry condition can be data mined but needs to support the trend following logic.
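The preset entry logic in point 2 can be sketched as follows. This is an illustration only, with the ADX series assumed to be computed elsewhere (the indicator names and periods follow the example above):

```python
def sma(values, period):
    """Simple moving average of the last `period` values (None until enough data)."""
    if len(values) < period:
        return None
    return sum(values[-period:]) / period

def trend_entry(closes, adx, fast=100, slow=200):
    """Long entry: fast SMA above slow SMA, and ADX rising on the last bar."""
    fast_ma = sma(closes, fast)
    slow_ma = sma(closes, slow)
    if fast_ma is None or slow_ma is None or len(adx) < 2:
        return False
    return fast_ma > slow_ma and adx[-1] > adx[-2]

# A steadily rising series: the fast SMA sits above the slow SMA, ADX rising
closes = [float(i) for i in range(300)]
signal = trend_entry(closes, adx=[20.0, 25.0])
```

Swapping the SMA pair for 200/300 or 100/400, as described below, only changes the `fast` and `slow` arguments.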

Once the broad design configuration is established that we know can capture trends when they occur....and avoid non trending conditions.....then we can data mine around this core principle. Having a pre-configured logical design is an essential step that avoids 'curve fit' responses that have no logical relationship to the underlying market.

You can then adjust your preset entries to a 200/300 SMA and then a 100/400 SMA etc. and conduct data mining around these variations. The result is added diversification benefits of simple trend following solutions. You may have say 8 different trend following systems (core designs) around which you data mine additional variations.

Having diversification of trend system allows you to capture a broad range of different trending conditions. This increases the number of signals in your collection.

Use as much data as you can to data mine for trending solutions. The aim is not to project future profits but rather to attempt to 'break' your system by finding those conditions where the strategy under-performs. You only want robust candidates that can:
a) Capture trends (which is easy to achieve through the core design logic); and
b) avoid big drawdowns (which is hard) and where the real success lies in trend following. This is where data mining helps in relation to 'noise reduction'.

Acceptance Criteria
The only criteria I require are:
1. A good positive MAR, say >0.5 (or in EA Studio terms, Return/Drawdown, which varies depending on time horizon). This ensures that every solution has a good risk:return relationship. The two principles of risk and return must go together to identify solid performers.

Sample size can be very small per solution. It is the sample size of the total portfolio that has meaning....and not the individual return stream for trend following systems.

Then data mine over as long a data sample as you can. Do not heavily optimise, as the core design logic is what ensures you capture the trends. Only use large step increments. Optimisation is fluff around the edges that actually curve fits the results...so avoid it. I data mine from 1985 (where possible) to the current day using Pepperstone or Dukascopy data with the GMT+2 offset (but I can only get Dukascopy data from 2003 on).

I then wait for a series of strategies to be generated which are then ranked by MAR. Say 20,000 solutions. These are what I refer to as my robust set of solutions that have stacked up over a 30 year plus data horizon and offer positive expectancy (no matter how slight). I tend to run this process at 6 monthly intervals.

I then, on a monthly basis, take the top 500 robust strategies ranked by MAR and rerun the process using the date range 2015 to the current day to restrict the validated set to only those robust solutions that have performed strongly over the past 5 years. This is what I refer to as the 'Recency test'. At this point I know two things:
1) The strategies are robust and can stand the test of time; and
2) The strategies are relevant for current market conditions. This is the adapting part of the model.
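The rank-then-filter step above can be sketched as follows. The field names and the 0.5 threshold are illustrative assumptions, not EA Studio outputs:

```python
def recency_filter(strategies, top_n=500, threshold=0.5):
    """Rank by full-history MAR, take the top candidates, then keep only
    those that also clear the bar on the recent window (the 'Recency test').

    `strategies` is assumed to be a list of dicts with 'mar_full' (MAR over
    the whole data history) and 'mar_recent' (MAR over, say, the last 5 years).
    """
    ranked = sorted(strategies, key=lambda s: s["mar_full"], reverse=True)[:top_n]
    return [s for s in ranked if s["mar_recent"] >= threshold]

candidates = [
    {"name": "A", "mar_full": 1.2, "mar_recent": 0.8},  # robust and recent
    {"name": "B", "mar_full": 0.9, "mar_recent": 0.1},  # robust but stale
    {"name": "C", "mar_full": 0.2, "mar_recent": 1.5},  # recent but not robust
]
survivors = recency_filter(candidates, top_n=2)
```

Only strategies that stand up over the whole history AND the recent window survive, matching the two conditions listed above.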

Then I iterate the strategies that have passed the Recency Test to come up with say 20 of the best strategies that as a collective result at the portfolio level, produce the best MAR. It is not the individual return streams that are important but rather how they all compile together.

I now have my 20 optimal performers configured at the portfolio level. I run these again as a portfolio over the entire 30 year sample and then perform market mapping checks to ensure that they perform when markets trend and stagnate when markets don't trend. Once this is complete I can be confident that they perform strongly over the entire period and also demonstrate strong performance over the past 5 years. As a result, they are suitably robust to stand up to future market uncertainty and also perform strongly if current market conditions persist.

I do this process for each individual market to obtain what I refer to as market sub portfolios of trending solutions. I can then further compile these into combination portfolios that span across markets.

Here is an example of a small portfolio that operates on a $2K account comprising 6 markets, 4 core trend following systems and say 40 data mined variations using this process. The intent of this process is to produce non-correlated diversification benefits across markets, systems and timeframes.

[Attachment: Portfolio 6 markets.PNG]

11 (edited by hannahis 2020-05-16 15:55:03)

Re: Robustness testing

Hi Richard,

Thanks a zillion for your detailed explanation of your work process.

Your insights and knowledge are so precious to all of us and much to learn from you...

I like your posts and always look forward to reading more from you, because you not only provide analogies (to help us get the bigger picture) but also the details to translate these into concrete steps, which is very meaningful to those who are figuring out how to improve their workflow.

I totally agree with you that getting the profits is the easier part (making a profit). The real hardest part is for the EA to prevent losses (losing money).

Thus a good EA consists of 2 main components/rules: a) make profit (entry rules to spot trends) and b) prevent losses (entry rules to cut off noise/false entries). The latter is the hardest part, because if our rules are too prescriptive/strict, we eliminate other possibilities, but if our rules are too lax, we get too many false entries. Finding the optimal rule that allows trading opportunities yet is robust enough to prevent too many losses during stagnation periods is really challenging.

Thanks once again for your generous sharing of your deep insights...

Much appreciated, Cheers

Hannah

Re: Robustness testing

hannahis wrote:

Thanks a zillion for your detailed explanation of your work process... Thanks once again for your generous sharing of your deep insights.

Cheers Hannah :-)


Re: Robustness testing

Hi,

Oh blimey, I didn't realise this discussion went on after the first post! I need to catch up - I just came back to make notes on Rich's first reply to my question and it took me about a week of percolating to realise I need to change something about my approach.

It seems I am not clear about which strategy I am trying to find. Currently, I am kind of trying to do everything at once so my results seem a little aimless. I am running the generator; optimising on different criteria; reducing with Monte Carlo and then eyeballing the ones I like the look of. I have done 15 currencies so far (10 strategies each) and I have been running this for a few weeks in demo. I'm not expecting fireworks, but to learn more and prune the approach as I go along until I find something I am happy with. So far, I have been down 3% and up 11% from the initial investment and I am at roughly 52% winning trades.

From what Rich says, this approach seems neither trend following nor momentum based. I am using 10 years of data only and I'm not using multi market at all, for some reason. Although I am looking to try some strategies on multi market and compare the performance.

Essentially, what I am creating is a very general strategy. I'm concentrating on H1 at the moment but I would really like to do D1 trend strategies; however, this is limited by the lack of ATR functionality, if what I understand is correct. My entry and exit accuracy seems OK to me, and where I am losing is (i) where I forgot to put in a SL (duh) and (ii) the perennial issue of letting profits run and stopping losses early.

To this end, when I make above 100 pips on any trade, I close out half of it and lock in some profit. It has happened a few times already out of 80 trades, so something must be going right; it's just a killer when you end up going in the wrong direction.

I was also thinking of just setting my trailing stop to 50 and taking every profit at 50. I am very happy with a small but steady profit. Learning curve, but two questions :

1 What is MAR?
2 Do you think strategies which pass 50 monte carlo at 95% & multi market are not trend or momentum but could be OK as really general strategies?

Last point: not over-optimising trend strategies and, when you do optimise, using a large step is something I had not thought of. Perhaps I will start that approach for my D1 strategies when I get to that point. Until then, let's hope the ATR functionality is forthcoming soon.

Thanks,
Matthew

14 (edited by ats118765 2020-05-30 06:52:45)

Re: Robustness testing

Minch wrote:

1 What is MAR?
2 Do you think strategies which pass 50 monte carlo at 95% & multi market are not trend or momentum but could be OK as really general strategies?

Last point - not over optimising for trend strategies and, when you do, using a large gap is something I had not thought of. Perhaps I will start that approach for my D1 strategies when I get to that point. Until then, let's hope the ATR is something forthcoming soon.

Thanks,
Matthew

Hi Minch. Welcome to the world of data mining. All of us have stories we can tell about our experiences but the truth of the matter is that you need to treat all these stories with a grain of salt and build those mental models yourself through your research efforts.

You will find that this is a game of risk:return trade-offs, where many are under the false assumption that a particular solution has a definitive edge. Positive expectancy is never a guarantee and is bound within confidence intervals for a particular duration. Furthermore, there are ways you can convince yourself, by manipulating variables such as the Win Rate and Reward:Risk, that you have a statistically robust solution with an edge. A win rate of 80% means nothing unless counter-balanced with the Return to Risk relationship. It is simply a nice 'psychological' pipe dream that you win more times than you lose...as the more fundamental truth is the long-term result of your PL. This is why I trade divergent models: they have an enduring impact on your long-term success in this game, which can be backed up by the validated track records of professional FMs who specialise in this game of risk.

The only way you are going to make sense of this minefield is through your own efforts, but that is why this game is so addictive. Always question everything you do and closely examine all your assumptions, as this is typically where the errors are found in a quantitative approach. :-)

MAR is a risk adjusted metric which uses the Compound Annual Growth Rate % divided by the Maximum Drawdown in %   =  (CAGR/Max Draw %)

It is a representative benchmark measure used to compare the relative performance between return streams which is not materially affected by leverage (aka position sizing).

We can always scale up a good return stream with a high MAR using position sizing, so the metric allows us to find the better return streams that deliver best return $bang for your risk $buck.

EA Studio does not currently have MAR but you can use Return/Drawdown as a proxy...however, the use of CAGR normalises results so you can compare and contrast return streams of different durations.

You can calculate CAGR using Excel with the function "rate" where you nominate the number of compounding periods, the present value and final value of the return stream.
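As a hedged sketch of the arithmetic above (using plain compounding rather than Excel's "rate" function):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate, as a fraction (0.10 == 10% p.a.)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def mar(start_value, end_value, years, max_drawdown_pct):
    """MAR ratio: CAGR in percent divided by maximum drawdown in percent."""
    return (cagr(start_value, end_value, years) * 100.0) / max_drawdown_pct

# e.g. $10,000 grown to $20,000 over 5 years with a 20% max drawdown
ratio = mar(10_000, 20_000, 5, 20.0)
```

The account values, years and drawdown figure here are made-up inputs for illustration; a ratio around 0.74 would clear the >0.5 acceptance criterion mentioned earlier.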

In regards to the Monte Carlo question...IMO this process is not recommended for divergence, where multi-market solutions are preferred...but it is appropriate for convergent methodologies that are looking for equity curve stability over a finite duration. Convergence preys on a particular repeating pattern of market behaviour, such as price oscillations, that represents an 'emergent' market pattern of oscillation around an equilibrium or mean tendency.

Given that you are looking for an 'emergent' market behaviour based around a central tendency, then multi-market is not advised as the particular market conditions of convergence are unique to a particular market over a finite duration. If you assume multi-market, then that assumption is based on the notion that all markets concurrently display that particular predictable behaviour.....which is rarely the case and particularly evident when markets are not correlated. They certainly have been more correlated from 2010....but this is no guarantee.

So if you play with convergence you are after a 'concentrated' strategy as opposed to a 'diversified' strategy. The use of diversified portfolios is an important prerequisite for trading divergence but can be a huge mistake when trading convergence.

Portfolios need to be treated very carefully as they, like position sizing, are a method to leverage returns. Having portfolios of convergent solutions can lead to excessive portfolio heat. Divergent methods address this potential to warehouse risk at the portfolio level by always 'cutting losses short' and never allowing a single return stream to bring down the whole portfolio. The unfortunate side of convergent methodologies is that the trade-off for the temporary 'nice linear equity curve' is the strategy's ability to 'warehouse risk'. You see this in action when these solutions fall off a cliff once market conditions are no longer convergent in nature.

Inclusion of convergent systems in your portfolio is like carrying a collection of nitroglycerine around with you.....so beware :-)

It appears that with your generic method you may be mining for both divergence and convergence. If you are getting good results...then something must be going right...but it is so hard determining what is luck versus 'edge', and it is also hard to discern how much risk is being held by your strategies.

It therefore really helps if you can categorise your process into either divergent process or convergent process. The two methods require different approaches in data mining activities to have a better chance of finding solutions with a finite or more enduring edge.


15 (edited by ats118765 2020-05-31 03:32:08)

Re: Robustness testing

All this talk about convergence and divergence can be confusing but for those interested in harvesting an edge in today's modern efficient markets, the following diagram (of the typical market distribution of returns for a liquid instrument such as the S&P500) tells the story and is an invaluable guide towards your data mining efforts.

https://atstradingsolutions.com/wp-content/uploads/2020/05/Distribution-of-Returns.png

The market distribution of returns shows through the Law of Large Numbers where an edge resides in the market data itself and reveals why there are only two broad forms of approach that can harvest an edge made available by the market.

Convergent methodologies focus on the 'finite' peaks of the distribution, whereas divergent methodologies focus on the 'unbounded' tails of the distribution. All results that arise from harvesting returns that plot within the normal distribution can simply be attributed to 'luck alone'. It is the non-normal zones of the market distribution where you need to invest your efforts if you want to be classed as a trader as opposed to a mere gambler.

Convergence relates to the edge that can be achieved when markets are fairly predictable in nature for a finite duration and oscillate around a central tendency (quasi-equilibrium)....whereas Divergence relates to the edge associated with market uncertainty arising from unpredictable market transitions (directional moves of endurance) between states of quasi-equilibrium.
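One way to see whether a return series has the fat tails described above is to measure its excess kurtosis (zero for a normal distribution, positive for fat tails). A minimal sketch with a made-up return series:

```python
import statistics

def excess_kurtosis(returns):
    """Excess kurtosis: 0 for a normal distribution, > 0 for fat tails."""
    mean = statistics.fmean(returns)
    sd = statistics.pstdev(returns)
    n = len(returns)
    return sum(((r - mean) / sd) ** 4 for r in returns) / n - 3.0

# Mostly small oscillations (the convergent 'peak') plus rare large moves
# (the divergent 'tails'), a crude stand-in for a market return series
returns = [0.1, -0.1] * 50 + [5.0, -5.0]
k = excess_kurtosis(returns)
```

With a handful of outliers dominating the fourth moment, the kurtosis is driven far above zero, which is the statistical face of the 'unbounded tails' the divergent trader is after.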

Most traders are attracted to the lure of 'convergence' as they like to be 'right' about their predictions, but these conditions are finite in nature, require a concentrated strategy and, if risk is not managed well, are the downfall of these trading solutions. Seasoned traders who can manage risk at all times and diversify widely (across many different market distributions) know that the bounty arising from the 'unbounded' upside leads to long-term wealth and sustainability...but you have to recognise that you will be wrong 'most' of the time.

It is not whether you are right most of the time and wrong less of the time. What matters is the degree of that rightness and wrongness. Your long term fate in this game ultimately is decided by the outlier events you experience in life. They can either be massive 'wrongs' that lead to the graveyard or massive 'rights' that lead to your fortunes.

How you fare in this game with the Law of Large numbers is ultimately decided by where you direct your efforts w.r.t the market distribution of returns.

Contrary to popular opinion, it is the market that determines your ultimate fate.....not your system. Your system is the method you use to constrain your options to those that (with the Law of Large numbers) harvest the 'arbitrage' that is made available by the market condition. A small sample of trades tells you nothing about your sustainable venture into the Law of Large numbers....but the next 1000 trades certainly tell you something that is more than 'nothing'.

Diversification and risk-weighted returns is what this game is about

16 (edited by Minch 2020-06-03 20:03:05)

Re: Robustness testing

Hi Rich,

Thank you very much for that explanation, I believe I followed most of it but Im going to bookmark it and keep it in mind moving forwards.

Appreciate also the explanation of MAR - I have been using Return/Drawdown as a benchmark already and my acceptance criterion is 0.5 RDD per year in the backtest, which seems to work well. CAGR makes sense as well since that is the compounded figure; it would be nice to have that metric offered in the software but, if it's not, it is easy enough to calculate.
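For reference, the Return/Drawdown and CAGR figures mentioned can be computed from an equity curve along these lines. This is a sketch with a made-up equity series; EA Studio's own definitions may differ slightly:

```python
def cagr(start_equity, end_equity, years):
    """Compound annual growth rate."""
    return (end_equity / start_equity) ** (1 / years) - 1

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = equity[0], 0.0
    for e in equity:
        peak = max(peak, e)
        worst = max(worst, (peak - e) / peak)
    return worst

def return_dd_per_year(equity, years):
    """Total return divided by max drawdown, normalised per year (RDD-style)."""
    total_return = equity[-1] / equity[0] - 1
    dd = max_drawdown(equity)
    return (total_return / dd) / years if dd else float("inf")

curve = [10_000, 11_000, 10_500, 12_000, 11_400, 13_000]  # hypothetical 2y curve
print(f"CAGR over 2y: {cagr(curve[0], curve[-1], 2):.2%}")
print(f"Max drawdown: {max_drawdown(curve):.2%}")
print(f"RDD per year: {return_dd_per_year(curve, 2):.2f}")
```

On this toy curve the RDD per year comes out at 3.0, comfortably over the 0.5 acceptance threshold mentioned above.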

My approach is definitely a blended one, then.

I'm finding it easier to reduce from 500 strategies down to 50 or so using Monte Carlo, so these are my predictive (convergent) strategies. Is this what you mean (from an earlier post) about mean reversion, i.e. Monte Carlo and Walk Forward, and these strategies needing to be optimised fairly often? The walk forward only tells me how decent the strategy is in terms of re-optimising, so if I wanted to find completely new strats after a few months, I could forego that stage.

If I got that right, then the other approach would be the divergent strategy, using multi market and something like a trend following strategy, optimised in a fairly "loose" way (if at all) and based on as long a data range as possible?

If I can get my TSL and TP working better, I will trade H1 with the hybrid approach for data; and then set up a D1 trend following strategy, with wider TSL and TP.

Anyway, I am certainly having fun experimenting with each approach - my blended strategies are giving me decent entry points but there has been an issue with the Stop Losses not triggering and I have to check back on each portfolio to see what I entered. I want to try a trailing stop loss (at 50) then a take profit at 150. I will likely sell half the position at 100 to lock in some profit, if it goes my way. This would certainly help with my money management - my hybrid approach is getting some decent overall win rates (which mean nothing if you can't keep the profit!)

CADCHF    100%
EURGBP    67%
AUDNZD    63%
AUDCAD    56%
USDCAD    55%
EURUSD    54%
EURCHF    38%
GBPUSD    35%
AUDUSD    0%

But I see also what you mean about portfolio heat ... I assume that means how many trades are on at the same time, and I seem to be averaging about 10 on the go, so when one goes awry, it tends to take the whole lot down with it - especially with no stop loss. I was up 10% from my original demo investment last week, and now I'm at -5% ... But that's fine since it's all part of the process of improvement and understanding, and what the demo account is for.

Re: Robustness testing

Yep - and I see you did answer my question earlier in the thread:

The rationale above renders the Monte Carlo and WFT test as unnecessary and counterproductive for robustness testing of trend following systems. For example the process of segmenting your data history into equal segments for WFT assumes that performance is consistent across each segment. This is appropriate for systems designed to capture a 'convergent' repeating market condition such as mean reversion....but ineffective for 'divergent' methodologies such as trend following as your performance is unpredictable and dictated when markets decide to trend.

18 (edited by ats118765 2020-06-04 01:35:19)

Re: Robustness testing

Morning Minch :-)

Minch wrote:

I'm finding it easier to reduce from 500 strategies down to 50 or so using Monte Carlo, so these are my predictive (convergent) strategies. Is this what you mean (from an earlier post) about mean reversion, i.e. Monte Carlo and Walk Forward, and these strategies needing to be optimised fairly often? The walk forward only tells me how decent the strategy is in terms of re-optimising, so if I wanted to find completely new strats after a few months, I could forego that stage.

You got it mate :-)


Walk Forward techniques were originated by Rob Pardo as a 'final' process method to statistically evaluate the degree of optimisation required for a particular 'convergent' strategy based on past in sample performance against a particular class of market condition. Unfortunately it has been taken by many as a proxy for more comprehensive robustness testing across all forms of trading strategy (divergent or convergent).

Optimisation works two ways. It can make a bad 'convergent' strategy look good and a good 'convergent' strategy look bad. The intent of walk forward is to assess whether the strategy made a profit in the Walk Forward OOS phase....but more importantly, and more subtly, it measures the pace of walk-forward efficiency. This therefore provides a basis to assess how often you should re-optimise your strategy.

This is used as a basis to reduce the need to 'turn on and turn off strategies based on their performance' and rather let strategies run with a degree of statistical confidence that have high walk-forward efficiencies.
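Walk-forward efficiency (roughly, annualised out-of-sample profit as a percentage of annualised in-sample profit) can be sketched as below. The figures are hypothetical, and the idea of comparing annualised IS and OOS profit follows Pardo's definition; the exact formula your software uses may differ:

```python
def walk_forward_efficiency(is_profit, is_days, oos_profit, oos_days):
    """Annualised OOS profit relative to annualised IS profit (Pardo-style WFE)."""
    is_annual = is_profit / is_days * 365
    oos_annual = oos_profit / oos_days * 365
    return oos_annual / is_annual

# Hypothetical run: 9000 profit over a 3-year in-sample window,
# then 1800 profit over the following 1-year out-of-sample window.
wfe = walk_forward_efficiency(is_profit=9000, is_days=3 * 365,
                              oos_profit=1800, oos_days=365)
print(f"WFE: {wfe:.0%}")  # 60% -> OOS kept 60% of the in-sample pace
```

A strategy whose WFE decays quickly across successive walk-forward windows is one that needs frequent re-optimisation, which is exactly the "pace" being measured above.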

You will find with convergent strategies that your long-term success with this approach is tied to how often your strategies underperform or outright fail, as opposed to the degree of success achieved when your convergent models work well.

The final arbiter of whether you achieve positive expectancy over the long run with convergence tends to correlate with the degree to which you need to switch strategies or turn them off. The time you decide to turn them off, is typically when the convergent strategies are underperforming. The summated costs of all those times when they underperform over the Law of Large numbers has a significant impact on your returns. This is what I refer to as the 'hidden' costs of switching strategies.

I actually cannot find validated long term track records of professional FM's that specialise in convergence. They may be extremely successful over say 5-10 years....but then I lose trace of them. Rob Pardo is very confident about his techniques....but I simply can't seem to get on his wave.

What I have found, as a fundamental quest for 'data mining', is that the trading strategies themselves are really just a means to a bigger end. They are tools you use to lift your performance to the point where returns start to benefit from the principles of compounding. The key thing here is to consider your trading strategies as the tool to generate sufficient positive expectancy to the point where compounding can then take over and compound your wealth.

Convergent strategies have a finite shelf life and their deployment can actually lead to horrible long term performance through their continual failures and resultant volatile impact on your equity curve. Their short term performance over favourable conditions is a 'trap' to traders that attracts them like flies. You need to assess performance over say a 20 year period where compounding wealth has a major impact.

If you see the true long-term 20-year chart of a convergent system, many of them actually meet risk of ruin when conditions become unfavourable. This means the skill with convergence is the degree to which you are successful in 'timing' and 'selection': for example, knowing when your strategy is approaching risk of ruin or simply naturally underperforming, and then finding an alternative 'strong performer' to switch to.

Unfortunately market conditions can be a significant obstacle to achieving this 'timing' and 'selection' outcome particularly when they remain divergent for long periods of time.

The real 'volatile' nature of convergent equity curves over the long term (from strategy hopping and risk of ruin) suppresses rather than reinforces the principles of compounding. Divergence, on the other hand, may look volatile over the short term but is far more enduring and less volatile long term. It provides a method for you to get to a point in your trading career where compounding starts exerting its emphasis and your wealth-building returns start to soar.

So there is a real 'sting in the tail' to convergence that you need to be aware of.

In regards to trading the 'hybrid' approach....just be aware of the 'negative skew' that lies in convergence and continuously keep your eye on those convergent strategies. It is the negatively skewed nature of convergence that compromises a portfolio when things get ugly. Negative skew is found in those strategies that have 'many small wins and the occasional large loss' during favourable conditions. The 'occasional large loss' is a warning sign for a portfolio. If those occasions become more frequent when conditions remain unfavourable, a single return stream with a sequence of large losses can totally compromise the total portfolio.
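One way to spot the "many small wins, occasional large loss" profile in a return stream is to measure the skewness of the trade P&L. This is a minimal sketch using only the standard library and hypothetical trade lists:

```python
import statistics

def skewness(xs):
    """Sample skewness: mean cubed deviation over the cubed standard deviation."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Convergent-style P&L: many small wins, one large loss -> negative skew.
convergent_pnl = [15, 12, 18, 14, 16, 13, 17, -120]
# Divergent-style P&L: many small losses, one large win -> positive skew.
divergent_pnl = [-10, -12, -8, -11, -9, -13, -10, 140]

print(f"convergent skew: {skewness(convergent_pnl):+.2f}")
print(f"divergent skew:  {skewness(divergent_pnl):+.2f}")
```

A strongly negative skew on a live return stream is the statistical counterpart of the "occasional large loss" warning sign described above.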

Cheers mate

Diversification and risk-weighted returns is what this game is about

Re: Robustness testing

Some words of Wisdom from David Druz on trend following.....a very respected 'thinker' in the world of Funds Management :-)

https://atstradingsolutions.com/wp-content/uploads/2020/06/David-Druz.png

David Druz is a long-time trend follower who was the first mentee of Ed Seykota. He has run Tactical Investment Management since the early 1980's. Here are some of his nuggets of wisdom:

“There are whole families of trend trading ideas that seem to work forever on any market. The down side is they are very volatile because they are never curve-fit. They're never exactly fit to any particular market or market condition. But over the long run, they do extract money from the market.”

“Robust systems tend to be designed around successful trading tactics (origin of our 'Tactical' name) classical money management techniques, and universal principles of market behaviour. These systems are not designed for specific types of markets or market action. And here is the amazing thing about robust systems: The more robust a system, the more volatile it tends to be!

This is because robust systems are not optimized to particular markets or market conditions. The converse is also true. You can design systems with excellent returns and low volatility on historical testing, but which only work for given periods in given markets. These systems tend to be curve-fit or market-fit and are not robust.”

"Successful traders rely on a small number of very profitable trades to compensate for several smaller losses."

"You have to keep trading the way you were before the drawdown and also be patient. There’s always part of a trader’s psyche that wants to make losses back tomorrow. But traders need to remember you lose it really fast, but you make it up slowly. You may think you can make it up fast, but it doesn’t work that way."

"For a system trader, it's way more important to have your trading size down than it is to fine tune your entry and exit points."

To many, the points made here aren't very exciting or new - but form the essence of trend following.

It is also interesting seeing how Druz talks about robustness and volatility. Basically, the simpler the method, the better the long-term results, BUT you will have more losers than winners and be prone to suffering a higher level of volatility in your returns.

In this regard, the performance record of long-time trend followers Dunn Capital is worthy of mention. In a 40-year period going back to the mid-1970s, they suffered twelve drawdowns of 25% or more. And yet each time they carried on doing the same thing and ended up making new equity highs.

They didn't change anything - they just continued acting on their signals and adopting the same risk profile. And then things just started to happen - seemingly out of nowhere. With trend following you will often see a strong burst of performance after a run of losing trades and a drawdown, and a powering up to new equity highs. And it may only take one or two trends to do that.

It's just the way it works.

Source: https://www.thetrendfollower.com/2020/05/some-words-of-wisdom-from-david-druz.html

Diversification and risk-weighted returns is what this game is about

20 (edited by Minch 2020-06-11 20:54:14)

Re: Robustness testing

Hi Rich

This is such good advice and interesting background reading, thank you very much again for a comprehensive and thought-provoking update.

I am becoming more and more convinced to leave behind my predictive approach altogether and perhaps even scrap the hybrid predictive/trend in favour of a purely trend following system. This is due to the following:

1. The general themes mentioned in the last few posts suggest that convergent strategies are more volatile over the long term; make less use of a compounding approach over time; increase general risk of ruin; and do not perform well in trending/divergent markets (I read somewhere that markets tend to trend the majority of the time - even lacking a % figure for that statement, just look at any 1-month chart to see it quite clearly) - and so on.

2. I want to be as diversified across currencies as I can be, so having to re-optimise 100 strategies on quite a slow computer every month is going to mean I'm constantly on my PC (which for me defeats the purpose of having an automated strategy - to some extent, I would be happier with average returns for less need to be watching my screen)

3. Originally, I really liked the look of 1H charts and was trying to get some predictive strategies on those, but the lack of an ATR stop loss, and the issue I just found out about trailing stop losses not being available in the portfolios, make me more uncomfortable trading that timeframe now. I think moving to a D1 timeframe is the logical choice, because then I can have an initial stop loss but also have the time to manually amend it to ATR or TSL at the broker level the next day, then sit back and watch it play out.

So, if that's OK - a couple of questions about finding divergent strategies with EA Studio

1. I think you had mentioned setting a preset indicator (e.g. MA) on all trades and then having another indicator as a confirmation. Do you suggest I optimise the preset moving average crossover or just keep it to a standard (e.g. 20/50)? Update: We can use some presets for identifying trends (MA, Donchian, MACD, RSI etc.) and then data mine the third indicator. Put together 8 combinations for the first 2 set indicators.

2. For the confirmation indicator, are we using all the available indicators in EA Studio or just a handful of the trend following ones, e.g RSI, MACD and a volume indicator like MFI (although that's only broker volume, not market volume)

3. Optimise, but loosely - e.g. 40 steps is OK? Am I also optimising any of the presets, or keeping these static and letting the data mining work out the different approaches (e.g. I use 100/200 MA crossover + Donchian breakout at a set level + a data-mined third option)? After all, we are trying to avoid a tight overfit.

4. We would be looking for a good RDD (0.5 for each year, minimum) but ... is a balance line stability / R Squared ( minimum 60) also useful - what do you think?

5. And now there is no Monte Carlo ... I would be testing robustness with Multi Market. I assume the more markets it passes the better, and the criteria should be as strict in terms of RDD/Balance Stability, or just the standard setting (which is, offhand, just having a minimum net profit above zero) ... do you think passing 2 markets at above-zero profit is a passing grade (minimum), or basically the more the better?

Your guidance would be really appreciated, but I will also check back on the original post so I'm not repeating questions on what might have been covered.

Cheers,
Matthew

21 (edited by ats118765 2020-06-13 04:21:01)

Re: Robustness testing

Hi Minch :-)

Mate...the guidance given was simply a way to try to adapt the methods of EA studio to the workflow process I undertake within MT4. I have tried to mimic what I do within MT4 using EA Studio....but unfortunately I can't fully achieve this. But I get a result that isn't too different.....so that being said.......

Minch wrote:

1. I think you had mentioned setting a preset indicator (e.g. MA) on all trades and then having another indicator as a confirmation. Do you suggest I optimise the preset moving average crossover or just keep it to a standard (e.g. 20/50)? Update: We can use some presets for identifying trends (MA, Donchian, MACD, RSI etc.) and then data mine the third indicator. Put together 8 combinations for the first 2 set indicators.


I tend to choose a preset configuration in my entry rules such as a Donchian 200 with an ADX condition (rising) and turn off all exit rules....but ensure that I have the fixed stop and trailing stop (where the trailing stop is always greater than the fixed stop to create divergent outcomes) applied to all outcomes and then run the Generator around this. I need to include a single additional entry rule variable into the mix to allow the process to run within EA Studio. The result is that the Donchian 200 and ADX rising condition is included in all solutions....and then we get variation about the stop, trailing stop and additional 3rd entry rule variable.
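For readers who want to see the core logic outside EA Studio, the preset described (a Donchian 200 breakout gated by a rising ADX) reduces to something like the sketch below. The `adx` series is assumed to be precomputed elsewhere; this is an illustration of the entry condition, not EA Studio's actual implementation:

```python
def donchian_breakout_long(closes, adx, lookback=200):
    """True when the latest close breaks the prior `lookback`-bar high
    while ADX is rising (trend strength increasing)."""
    if len(closes) <= lookback or len(adx) < 2:
        return False  # not enough history to evaluate the rule
    channel_high = max(closes[-lookback - 1:-1])  # prior bars only, excl. latest
    adx_rising = adx[-1] > adx[-2]
    return closes[-1] > channel_high and adx_rising

# Synthetic example: a steadily rising series breaks its own 200-bar high.
closes = list(range(1, 202))
print(donchian_breakout_long(closes, adx=[20, 25]))  # True
print(donchian_breakout_long(closes, adx=[25, 20]))  # False (ADX falling)
```

The fixed stop and (wider) trailing stop described in the post then sit on top of this entry to shape the divergent, positively skewed exit profile.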

I then create a collection which is saved into a file titled Donch Breakout 200 Variations.

I then create another collection where I change the Donch lookback from 200 to say 500 and go again to generate different solutions with a longer term lookback.

I would apply this to a number of simple trend following models that are readily available in the literature of the professional FM's and use this as a basis to get variation (system diversification) around a few core principles that I know stack up over a very large data sample.

Having a pre-defined core logic that you can clearly see would work in a broad range of different trending market conditions is a significant step to avoiding a curve fit result. The additional parameters and differing value sets is simply a way to diversify around this core logic. A pre-set design is the way we can be confident that you do not have to go overboard in testing for robustness using methods such as Monte Carlo etc. All we need is a large trade sample size and a method to see whether our solution actually captures trending market conditions. I do this by mapping the equity curve of our result against the market data (called Map to Market).

Monte Carlo is advised when you cannot clearly see an edge playing out and need a method to attempt to extract an edge when visual clues are lacking....hence is a valid approach for convergent strategies such as 'mean reversion' when applied over a particular market condition. For example during more stationary market conditions (non-trending periods), it is not visually evident whether the stationary market condition is a result of noise or a mean reverting tendency. In that case you use Monte Carlo as a method to determine which is which.
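A minimal version of the Monte Carlo idea described here is to shuffle the order of trade results and see how sensitive the equity curve's drawdown is to sequencing. This is a sketch with hypothetical trades, not EA Studio's Monte Carlo (which also perturbs prices and parameters):

```python
import random

def monte_carlo_drawdowns(trade_pnls, start_equity=10_000, runs=50, seed=1):
    """Shuffle the trade order `runs` times and record each resulting equity
    curve's max drawdown; a wide spread means the result is sequence-fragile."""
    rng = random.Random(seed)
    worst_dds = []
    for _ in range(runs):
        pnls = trade_pnls[:]
        rng.shuffle(pnls)
        equity, peak, worst = start_equity, start_equity, 0.0
        for p in pnls:
            equity += p
            peak = max(peak, equity)
            worst = max(worst, (peak - equity) / peak)
        worst_dds.append(worst)
    return min(worst_dds), max(worst_dds)

trades = [30, -20, 45, -15, 60, -25, 35, -10, 50, -30]  # hypothetical P&L
lo, hi = monte_carlo_drawdowns(trades)
print(f"max drawdown across 50 shuffles: {lo:.1%} .. {hi:.1%}")
```

If the worst-case drawdown across shuffles is far larger than the backtest drawdown, the backtest result owed more to a lucky trade ordering than to a real edge.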
 

Minch wrote:

2. For the confirmation indicator, are we using all the available indicators in EA Studio or just a handful of the trend following ones, e.g RSI, MACD and a volume indicator like MFI (although that's only broker volume, not market volume)


Just use the ones that make sense from a trend following logic. Particularly those that look for momentum breaks in addition to the core rules defined with the presets. It is essential that the logic of the solution is tied to the market condition. So do not use any additional entry variables that do not have meaning to you. Ideally we want fewer variables as opposed to more. You could drop the ADX preset and include the ADX in the entry rules....so that you get variation around that.

Minch wrote:

3. Optimise, but loosely - e.g. 40 steps is OK? Am I also optimising any of the presets, or keeping these static and letting the data mining work out the different approaches (e.g. I use 100/200 MA crossover + Donchian breakout at a set level + a data-mined third option)? After all, we are trying to avoid a tight overfit.

I personally do not use the optimisation. I turn it off. IMO it is not relevant to trend following methods that work off a general market principle. You do not want a curve fit result of many parameters but rather a result that applies to a general market condition such as a simple linear equation with a slope.

Minch wrote:

4. We would be looking for a good RDD (0.5 for each year, minimum) but ... is a balance line stability / R Squared ( minimum 60) also useful - what do you think?

I just look for good RDD while I data mine. A smooth equity curve is applied at the end of the process where I compile a portfolio together of robust return streams. Over long term trade samples, volatility is embedded in all my return streams....so at the end of the process this is where I compile the result using 'market mapping methods' to produce an uncorrelated result that makes sense against the market condition it is applied against.

Minch wrote:

5. And now there is no Monte Carlo ... I would be testing robustness with Multi Market. I assume the more markets it passes the better, and the criteria should be as strict in terms of RDD/Balance Stability, or just the standard setting (which is, offhand, just having a minimum net profit above zero) ... do you think passing 2 markets at above-zero profit is a passing grade (minimum), or basically the more the better?

This is where it gets difficult with EA studio. Multimarket with EA studio offers some help but you really need to normalise your return streams to get the full benefits from multi-market.

You will find that a larger data sample makes it easier to pass the multi-market test, provided that your validation criterion is set to achieving a slight positive expectancy. This is because over long-term data sets of say 20-35 years, every liquid market has trending periods at varying points in time. The significant returns generated when markets decide to trend build up those equity reserves and allow you to survive for extended periods when markets are not trending. You won't see this effect with small sample sizes.

Use long-term data....tie the design logic to the market condition and diversify...diversify....and diversify.

Cheers

Rich

PS Mate...I am only giving my opinion here mate.....so best to test all these conclusions of mine yourself.

Diversification and risk-weighted returns is what this game is about

Re: Robustness testing

Excellent post Rich!

Re: Robustness testing

hannahis wrote:

Excellent post Rich!

Cheers H :-)

Diversification and risk-weighted returns is what this game is about

Re: Robustness testing

Thank you so much, Rich! That's more than enough info for me to build on and I really appreciate your time and comprehensive replies. Time to get to work smile

Matthew

Re: Robustness testing

No problemo Minch :-)

Diversification and risk-weighted returns is what this game is about