A benchmark could be developed around a fixed data set in conjunction with a fixed set of generator options/indicators. A pool of indicators could be selected that covers a range of computational loads. With enough time and the right options given to the strategy generator, it could test all the indicators/options, so a single run of the benchmark would give an idea of how well the system scales. A multi-instanced benchmark could also be developed, measuring multi-instanced calculation performance, multi-instance scaling, and the total number of operations the system can achieve.
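As a rough sketch of how such a multi-instance benchmark could report its results, assuming we simply count how many strategy-generation operations each instance completes in the fixed working time (the function name and the numbers below are hypothetical, for illustration only):

```python
# Hypothetical sketch: summarize single- vs. multi-instance benchmark runs.
# "ops" = strategies / indicator combinations tested in the fixed time window.

def scaling_report(single_instance_ops, per_instance_ops):
    """Compare one instance against several concurrent instances."""
    total_ops = sum(per_instance_ops)
    n = len(per_instance_ops)
    # Ideal scaling would be n * single_instance_ops; efficiency shows the loss.
    efficiency = total_ops / (n * single_instance_ops)
    return {"instances": n, "total_ops": total_ops, "scaling_efficiency": efficiency}

# Example: one instance manages 1000 ops in 10 minutes,
# while four concurrent instances manage 900 each.
report = scaling_report(1000, [900, 900, 900, 900])
print(report)  # total_ops 3600, scaling_efficiency 0.9
```

This gives both numbers the post talks about: the total operations the system can achieve, and how far the multi-instance run falls short of ideal scaling.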
That may be very time consuming, so instead I suggest a community-driven benchmark pool: a defined data set with a restricted set of opening/closing position and logic options.
I put together some restricted Strategy Generator settings:
Opening Point of Position: Bar close/open, day opening, moving average
Closing Point of Position: Bar closing, day closing, moving average, trailing stop, weekly closing
Opening Logic Condition: MACD, Moving average, RSI, Stochastics
Closing Logic Condition: MACD, Moving average, RSI, Stochastics
Do not change permanent SL / TP / Break Even
Perform Initial opt.
Max number of Open/Close Logic option slots: 4
Working Time: 10 mins.
This is a radically restricted test, yet it gives enough of an idea to compare my i5 vs. Core 2. From that kind of comparison, I found that the i5 does 75% more calculations in the given time (both set to the same frequency, etc.) for a single instance of the program.
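For the record, that speed-up figure is just the ratio of the operation counts from the two runs; a tiny helper makes the arithmetic explicit (the counts below are made-up round numbers, not my actual measurements):

```python
def speedup_percent(ops_a, ops_b):
    """Percentage more calculations machine A did than machine B in the same time."""
    return (ops_a / ops_b - 1.0) * 100.0

# Hypothetical counts: if the i5 tested 1750 strategies in the 10-minute window
# while the Core 2 tested 1000, the i5 did 75% more calculations.
print(speedup_percent(1750, 1000))  # 75.0
```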
In my opinion, this kind of community-driven benchmark system can be maintained easily. A script could be made to set the options automatically, so you would have more time for other kinds of development, I guess.
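As a sketch of such a script, assuming the generator could read its options from a plain INI-style file (the file name and all the key names below are made up for illustration; the real program would need its own import format):

```python
# Hypothetical sketch: write the restricted benchmark settings listed above to an
# INI-style file that a wrapper script could feed to the strategy generator.
import configparser

settings = configparser.ConfigParser()
settings["Benchmark"] = {
    "OpeningPoint": "Bar close/open, day opening, moving average",
    "ClosingPoint": "Bar closing, day closing, moving average, trailing stop, weekly closing",
    "OpeningLogic": "MACD, Moving average, RSI, Stochastics",
    "ClosingLogic": "MACD, Moving average, RSI, Stochastics",
    "KeepSLTPBreakEven": "true",   # do not change permanent SL / TP / Break Even
    "MaxLogicSlots": "4",
    "WorkingTimeMinutes": "10",
}

with open("benchmark_settings.ini", "w") as f:
    settings.write(f)
```

Everyone running the same generated settings file would then be benchmarking the exact same restricted test, which is what makes the pooled results comparable.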
I still strongly suggest investing development resources in GPU support. C++ AMP routines can be called from C#, and I guess it is relatively easier to adopt than OpenCL. If GPU support brings enough of a speed-up, I believe it will take SFB to another level.