Do you think you can translate your analogies into steps we can execute in EA Studio?
It is quite hard with EA Studio to mine for trend-following strategies. The process encourages you to find 'convergent' solutions that have a short shelf life.
Here is a broad outline of what I do, and you will need to convert it into EA Studio language. This is just to offer ideas that might assist others who are having a hard time generating robust strategies. Each to their own, but some might find these tips helpful.
First, I want to establish a few very simple, broad configurations that I know can capture trends, and set these as core design principles that must be embedded into every strategy solution. For example, all solutions must:
1. Cut losses short and let profits run. For this to occur you need an initial stop loss and a trailing stop condition. Profit targets compromise the ability to let profits run, so don't use them.
2. The entry condition needs to ensure that your strategy only activates when trends are most likely to be occurring. This avoids the noise. For example, you could use presets in EA Studio to lock in a trend-following entry condition and allow data mining to supplement that condition: an SMA crossover (100/200) plus an ADX 'rising' condition, or alternatively a 100-period Donchian breakout plus an ADX rising condition, as the preset. The third variable of the entry condition can be data mined but needs to support the trend-following logic (see the sketch after this list).
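To make points 1 and 2 concrete, here is a minimal Python/pandas sketch of that core design outside EA Studio: the locked-in SMA 100/200 crossover plus ADX rising entry, exits managed only by an initial stop and a trailing stop, and no profit target. The column names ('close', 'high', 'low', 'adx'), the crude ATR proxy and the stop multiples are my assumptions for illustration, not EA Studio settings.

```python
import pandas as pd

def core_trend_signals(df: pd.DataFrame, fast=100, slow=200,
                       atr_period=14, init_stop_mult=3.0,
                       trail_mult=4.0, trail_lookback=50) -> pd.DataFrame:
    out = df.copy()
    out["sma_fast"] = out["close"].rolling(fast).mean()
    out["sma_slow"] = out["close"].rolling(slow).mean()

    # Locked-in preset: fast SMA above slow SMA AND ADX rising (trend filter).
    out["long_entry"] = (out["sma_fast"] > out["sma_slow"]) & (out["adx"] > out["adx"].shift(1))

    # Crude ATR proxy from the bar range; a proper ATR would use true range.
    out["atr"] = (out["high"] - out["low"]).rolling(atr_period).mean()

    # Exits come only from stops, never from a profit target:
    # - an initial stop a fixed ATR multiple below price (approximated here with the
    #   current close, since this sketch has no trade loop), and
    # - a trailing stop hung below the highest close of a recent lookback window
    #   (in a full backtest the high-water mark would ratchet from the entry bar).
    out["init_stop"] = out["close"] - init_stop_mult * out["atr"]
    out["trail_stop"] = out["close"].rolling(trail_lookback).max() - trail_mult * out["atr"]
    return out
```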
Once the broad design configuration is established that we know can capture trends when they occur, and avoid non-trending conditions, we can data mine around this core principle. Having a pre-configured, logical design is an essential step that avoids 'curve-fit' responses that have no logical relationship to the underlying market.
You can then adjust your preset entries to a 200/300 SMA, then a 100/400 SMA, etc., and conduct data mining around each variation. The result is the added diversification benefit of simple trend-following solutions. You may end up with, say, 8 different trend-following systems (core designs) around which you data mine additional variations.
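One way to think about those core designs is as a short, fixed list of presets handed to the miner one at a time. The sketch below just enumerates them; the SMA pairs and the Donchian variant are the examples mentioned above, while the dict structure itself is purely illustrative.

```python
# Fixed core designs (presets) around which data mining is run separately.
CORE_DESIGNS = [
    {"entry": "sma_cross", "fast": 100, "slow": 200, "filter": "adx_rising"},
    {"entry": "sma_cross", "fast": 200, "slow": 300, "filter": "adx_rising"},
    {"entry": "sma_cross", "fast": 100, "slow": 400, "filter": "adx_rising"},
    {"entry": "donchian_breakout", "period": 100, "filter": "adx_rising"},
    # ...extend to roughly 8 core designs; each is mined separately for a third,
    # supporting entry condition rather than being re-optimised.
]

for design in CORE_DESIGNS:
    print(f"Mine data around fixed preset: {design}")
```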
Diversification across trend systems allows you to capture a broad range of different trending conditions. This increases the number of signals in your collection.
Use as much data as you can to data mine for trending solutions. The aim is not to use it for projecting future profits but rather to attempt to 'break' your system by finding the conditions where the strategy underperforms (see the drawdown sketch after this list). You only want robust candidates that can:
a) Capture trends (which is easy to achieve through the core design logic); and
b) Avoid big drawdowns (which is hard, and where the real success in trend following lies). This is where data mining helps with 'noise reduction'.
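For that 'break it' step, a rough sketch of the kind of drawdown scan I mean is below, assuming you have a daily equity curve for a candidate as a pandas Series with a DatetimeIndex. The function names are mine for illustration, not EA Studio's.

```python
import pandas as pd

def max_drawdown(equity: pd.Series) -> float:
    peak = equity.cummax()
    dd = equity / peak - 1.0        # drawdown relative to the running peak
    return dd.min()                  # most negative value, e.g. -0.35 for -35%

def worst_years(equity: pd.Series, n=5) -> pd.Series:
    # The years that hurt the most: where the candidate fails to avoid big drawdowns.
    yearly_dd = equity.groupby(equity.index.year).apply(max_drawdown)
    return yearly_dd.sort_values().head(n)
```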
The only criteria I require are:
1. A good positive MAR, say >0.5 (or, in EA Studio terms, Return/Drawdown, where the threshold varies depending on the time horizon; a sketch of the metric follows below). This ensures that every solution has a good risk:return relationship. The two principles of risk and return must go together to identify solid performers.
Sample size can be very small per solution. It is the sample size of the total portfolio that has meaning, not the individual return stream, for trend-following systems.
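For reference, here is a sketch of the MAR-style metric used for ranking, assuming an equity curve as a pandas Series. Classic MAR is CAGR divided by the absolute maximum drawdown, and EA Studio's Return/Drawdown ratio plays the same role. The 0.5 cut-off matches the criterion above; everything else is illustrative.

```python
import pandas as pd

def mar_ratio(equity: pd.Series, periods_per_year: int = 252) -> float:
    """CAGR divided by the absolute maximum drawdown of an equity curve."""
    years = len(equity) / periods_per_year
    cagr = (equity.iloc[-1] / equity.iloc[0]) ** (1 / years) - 1
    max_dd = abs((equity / equity.cummax() - 1).min())
    return cagr / max_dd if max_dd > 0 else float("inf")

# Keep only candidates clearing the 0.5 bar, however small their trade count:
# survivors = {name: eq for name, eq in equity_curves.items() if mar_ratio(eq) > 0.5}
```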
Then data mine over as long a data sample as you can. Do not heavily optimise, as the core design logic is what ensures you capture the trends; only use large step increments. Optimisation is fluff around the edges that actually curve-fits the results, so avoid it. I data mine from 1985 (where possible) to the current day using Pepperstone or Dukascopy data with the GMT+2 offset (but I can only get Dukascopy data from 2003 on).
I then wait for a series of strategies to be generated, which are ranked by MAR; say 20,000 solutions. These are what I refer to as my robust set of solutions: they have stacked up over a 30-year-plus data horizon and offer positive expectancy (no matter how slight). I tend to run this process at six-monthly intervals.
Then, on a monthly basis, I take the top 500 robust strategies ranked by MAR and rerun the process over the date range 2015 to the current day, restricting the validated set to only those robust solutions that have performed strongly over the past 5 years. This is what I refer to as the 'Recency Test' (a sketch of the two-stage filter follows after the two points below). At this point I know two things:
1) The strategies are robust and can stand the test of time; and
2) The strategies are relevant for current market conditions. This is the adapting part of the model.
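Here is a sketch of that two-stage filter: rank the full-history results by MAR, keep the top 500, then re-score only the recent window (2015 onward) and keep those that still hold up. It reuses the mar_ratio sketch above; the dict of equity curves and the thresholds are my assumptions for illustration, and in practice the re-run happens inside EA Studio.

```python
import pandas as pd

def recency_filter(equity_curves: dict, top_n=500, recent_start="2015-01-01",
                   recent_mar_min=0.5) -> list:
    # Assumes mar_ratio() from the earlier sketch is in scope and that
    # equity_curves maps strategy name -> equity Series with a DatetimeIndex.

    # Stage 1: robustness over the whole sample.
    full_scores = {name: mar_ratio(eq) for name, eq in equity_curves.items()}
    robust = sorted(full_scores, key=full_scores.get, reverse=True)[:top_n]

    # Stage 2: recency - the same metric, but only over the last ~5 years.
    validated = []
    for name in robust:
        recent = equity_curves[name].loc[recent_start:]
        if len(recent) > 1 and mar_ratio(recent) > recent_mar_min:
            validated.append(name)
    return validated
```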
Then I iterate through the strategies that have passed the Recency Test to come up with, say, the 20 best strategies that, as a collective at the portfolio level, produce the best MAR. It is not the individual return streams that are important but rather how they all combine together.
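One simple way to iterate towards that set of ~20 is greedy forward selection on portfolio-level MAR: repeatedly add the strategy that most improves the MAR of the combined (equally weighted, summed) equity curve. This is an illustrative stand-in for however you iterate in practice, not EA Studio's own portfolio tool, and it again assumes the mar_ratio sketch above.

```python
def select_portfolio(equity_curves: dict, candidates: list, size=20) -> list:
    # Greedy forward selection: grow the portfolio one strategy at a time,
    # always picking the addition that maximises portfolio-level MAR.
    chosen = []
    while len(chosen) < size and len(chosen) < len(candidates):
        best_name, best_score = None, float("-inf")
        for name in candidates:
            if name in chosen:
                continue
            # Portfolio equity = sum of the individual curves (equal weighting).
            combo = sum(equity_curves[n] for n in chosen + [name])
            score = mar_ratio(combo)
            if score > best_score:
                best_name, best_score = name, score
        chosen.append(best_name)
    return chosen
```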
I now have my 20 optimal performers configured at the portfolio level. I run these again as a portfolio over the entire 30-year sample and then perform market mapping checks to ensure that they perform when markets trend and stagnate when markets don't trend. Once this is complete I can be confident that they perform strongly over the entire period and also demonstrate strong performance over the past 5 years. As a result, they are suitably robust to stand up to future market uncertainty and should also perform strongly if current market conditions persist.
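A simple version of that market mapping check might look like the sketch below: tag each month as trending or not (here with a crude ADX-based flag on the underlying market) and confirm the portfolio makes its money in the trending months and roughly flatlines otherwise. The series names and the ADX threshold of 25 are assumptions about your own data, not part of EA Studio.

```python
import pandas as pd

def market_map(portfolio_equity: pd.Series, market_adx: pd.Series,
               adx_trend_level=25) -> pd.DataFrame:
    # Monthly portfolio returns, grouped by whether the market was trending.
    monthly_ret = portfolio_equity.resample("M").last().pct_change()
    trending = market_adx.resample("M").mean() > adx_trend_level
    table = pd.DataFrame({"return": monthly_ret, "trending": trending})
    return table.groupby("trending")["return"].agg(["mean", "sum", "count"])
```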
I do this process for each individual market to obtain what I refer to as market sub-portfolios of trending solutions. I can then further compile these into combination portfolios that span markets.
Here is an example of a small portfolio, built using this process, that operates on a $2K account and comprises 6 markets, 4 core trend-following systems and roughly 40 data-mined variations. The intent of the process is to produce non-correlated diversification benefits across markets, systems and timeframes.
Diversification and risk-weighted returns are what this game is about.