My first 100% automated and 100% accurate workflow, StrategyQuant test case
DISCLAIMER: The presented results below are still preliminary, there
is still a small chance that my positive results are influenced by an
undiscovered bug in the current version of SQ-X (build 118.84) or that
I’ve just made a stupid mistake somewhere in my workflow resulting in a
huge ‘Data Mining Bias’. However, I did my best and rechecked everything
multiple times… Moreover, since this is all based on the relatively new
‘custom projects’ feature of SQ-X, none of this has been tested yet
on a real account… but I think I have built a strong case that I
could be right on this one 😉
My claim: It looks like I’ve managed to create a 100% automated and 100% accurate workflow using the StrategyQuant feature called ‘custom projects’.
100% automated means: I push 1 button before going to
bed, and every morning my workflow automatically generates, validates
and selects a few new strategies that are ‘ready to go’.
‘Ready to go’ means: I can deploy them immediately to my live account, without any need for further processing.
100% accurate means: Every single strategy that has
been selected by this automatic workflow (~50 so far) has been
profitable in the two-year period following its generation date.
To test all of this, I’ve adapted my SFT method as described in this topic: see HERE.
The workflow is based on the standard validation tests (common knowledge) as
shared by the SQ team in their free courses, however with very rigorous settings.
The workflow does not use any advanced validation methods such as WFA, WFM, OP or SPP. Instead, a customized
Monte Carlo test is used to simulate the behavior of the SPR method. No
portfolio analysis is performed (some systems may be correlated).
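To make the Monte Carlo idea concrete: the general technique is to reshuffle the order of a strategy's trades many times and check whether the worst resulting drawdown stays within a limit. This is only a minimal sketch of that principle, written by me for illustration; SQ-X's actual Monte Carlo implementation, its parameters, and my exact pass criteria are not shown here.

```python
import random

def monte_carlo_worst_drawdown(trade_pnls, n_runs=1000, seed=42):
    """Illustrative Monte Carlo robustness check (NOT SQ-X's actual code).

    Shuffles the trade order n_runs times and returns the worst
    drawdown observed across all shuffled equity curves. A rigorous
    filter would reject any strategy whose worst shuffled drawdown
    breaches a chosen limit.
    """
    rng = random.Random(seed)
    worst_dd = 0.0
    for _ in range(n_runs):
        pnls = trade_pnls[:]
        rng.shuffle(pnls)          # randomize the trade sequence
        equity, peak, dd = 0.0, 0.0, 0.0
        for p in pnls:             # rebuild the equity curve
            equity += p
            peak = max(peak, equity)
            dd = min(dd, equity - peak)
        worst_dd = min(worst_dd, dd)
    return worst_dd
```

A strategy whose profit depends on a lucky trade ordering will show a much deeper worst-case drawdown under shuffling, which is why this kind of test can stand in for more elaborate robustness methods.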
My automatic workflow test case is split into two verification moments:
1. End of year 2014.
2. End of year 2016.
At each point in time (1 and 2), I used my workflow to automatically generate and select 20 NEW strategies (out of several hundred thousand systems) without ANY manual intervention, and then ALL
selected systems were forward tested using SFT (future data). Let me
be clear on one thing: I did not cherry-pick any strategies.
It seems that every single selected strategy was profitable in the
period following the selected generation date. See the figures below:
Test case 1: Strategy creation @ 2014.12.31, Simulated Forward test: 2015.01.01…2016.12.31.
Real-Ticks (Dukascopy data), Real-spread (no commissions)
Test case 2: Strategy creation @ 2016.12.31, Simulated Forward test: 2017.01.01…2018.12.31.
Real-Ticks (Dukascopy data), Real-spread (no commissions)
My conclusions so far:
1. If there are no mistakes, then it seems that it is totally possible
to use SQ-X automated ‘custom projects’ to automatically generate and
select profitable trading systems.
2. No advanced validation/filtering methods are needed. Of course, such
tests should only improve the total result and minimize DD at the portfolio
level.
3. The SFT results after 2014 are slightly better than those after 2016.
The workflow is somewhat sensitive to the data used during strategy generation (due to
changing market conditions). It seems that 2017 and 2018 were very
difficult years for trading with the selected trading type.
4. It is not 100% proven yet, but it’s a pretty damn good result so far, considering it’s based on a simple workflow using basic filtering principles.
5. Some of the strategies may be correlated, but for the sake of
this investigation no manual correlation filtering has been performed,
as that would jeopardise the objectivity of this test case.
6. The filtering settings are very rigorous: this workflow lets through only the most robust strategies. According to my statistics, only 0.05% of the generated strategies are able to pass it.
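Just to put that pass rate in perspective, a quick back-of-the-envelope calculation (the candidate count below is an assumed round number, since the post only says "several hundred thousand"):

```python
# How many survivors a 0.05% filter leaves from a generation run.
# 'generated' is an assumed figure for illustration, not a real SQ-X count.
generated = 400_000       # assumed number of candidate strategies
pass_rate = 0.0005        # the quoted 0.05%
survivors = round(generated * pass_rate)
print(survivors)  # 200
```

So even a very large generation run yields only a couple of hundred candidates, which is consistent with the workflow selecting just 20 strategies per checkpoint after the remaining steps.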
TODO:
– Refine the workflow and implement further strategy selection; perform
correlation tests, WFM analysis and additional portfolio-level
tests.
Greets,
Chris