Figure: StrategyQuant advanced options (StrategyQuant_AdvancedOptions.png)

In short, what StrategyQuant does is:

1. It auto-generates trading strategies using random search or machine-learning algorithms (i.e. artificial intelligence). No coding is required, only some input parameters and data configuration.

2. It has several built-in robustness tests that allow us to verify and select only the best and most stable strategies.

3. It has advanced strategy optimization features, like walk-forward analysis and the walk-forward matrix.

4. It has even more great features, like: automatic tick data download (for forex, stocks and commodities), multi-core support and distributed computing, an EA/algo wizard for manual strategy generation, portfolio analysis (combining multiple EAs), multi-market backtesting and analysis, and much more... Basically everything you can dream of.

If you are serious about trading and want to learn algorithmic trading, there is no better option. You can read my full review here: Coensio StrategyQuant review.

For people considering purchasing this tool, I recommend first downloading the **DEMO** version, playing with it, and then deciding if it will work for you. If you are not a 'technical' person and have difficulties handling your PC, you can forget about algo-trading. I guess it is not for everyone: you need to like working with your computer and be as determined and precise as a real data scientist ;)

Below is my personal StrategyQuant 20% discount code, which you can use during the purchase:

**COENSIO-20-OFF-ZSHXY**

People who use this discount code will also qualify for free licenses of all coensio tools presented on my website, plus my 1-on-1 personal support and guidance.

Greets,

Chris


With a VPS and one SQ license you can easily produce hundreds of strategies in a short time that are worthwhile to monitor in forward testing, can't you?

I don't see the reasoning behind a whole forum doing this?


There is no holy grail in auto-trading. With all the tools we use here, we can easily generate a strategy with a nice steep equity curve that goes 'to-the-moon', but this will be pure curve-fitting. Instead we want to concentrate on the two most important things: **statistical significance** and **system stability (robustness)**, and not the highest profits (as many of you would think). The main goal is to find as many stable and robust strategies as possible that make some profit over time, and add them to our portfolio of strategies.

Let's go back to our theory... Let's create a perfect strategy with a WinRate of 80% (thick golden line) and compare it to 500 randomly generated strategies (the figure below shows only the first 100 random curves). Let's also print a histogram of all trades to see where the result of our strategy is placed in the probability distribution curve.

Figure 4.1: Results distribution of 100 trades

We can see that after 100 trades, our 80% WinRate strategy (golden line) is far from the 'random noise' dominated by the results of the 500 random systems. The blue bin of our golden line visible in the histogram is very small, since its occurrence is only 1 (we have only one outcome of our winning system versus 500 outcomes of the randomly generated systems).
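This comparison is easy to reproduce. Below is a minimal sketch (not the author's actual tooling) that pits a hypothetical 80% win-rate strategy against 500 coin-toss strategies over 100 trades, assuming a simplified +1/-1 profit per trade:

```python
import random

random.seed(42)

N_TRADES = 100      # trades per equity curve
N_RANDOM = 500      # number of random "coin-toss" strategies

def final_equity(win_rate, n_trades):
    """Sum of n_trades outcomes: +1 for a win, -1 for a loss."""
    return sum(1 if random.random() < win_rate else -1
               for _ in range(n_trades))

# Our hypothetical "golden line" strategy: 80% win rate.
golden = final_equity(0.80, N_TRADES)

# 500 random strategies: 50% win rate (pure coin toss).
randoms = [final_equity(0.50, N_TRADES) for _ in range(N_RANDOM)]

beaten = sum(1 for r in randoms if r >= golden)
print(f"golden-line result: {golden}")
print(f"random systems that matched or beat it: {beaten} / {N_RANDOM}")
```

With these assumed numbers the golden line ends far above the random cloud, which is exactly the "small blue bin beating the noise" picture described above.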

At this moment we see that our golden-line result is statistically significant (since it beats all 500 random systems after 100 trades), but we still cannot tell whether our result is curve-fitted or not. The only possible way to find out is to perform several robustness tests on our golden-line strategy. For example, we can do the following:

- We can randomize the input parameters of our golden-line strategy and see if this has a large influence on our results
- We can randomly skip trades within the results to see if the equity curve is still profitable
- We can randomly change the start date of our systems to see if the date change has an impact on the results
- We can randomly change the market data to see if the system still performs well when the market changes slightly
- And many, many more...

During our validation period we will 'jiggle' (= randomize) many different parameters of our strategy and strategy settings to check for robustness. This method is known as **Monte-Carlo** analysis and is widely used in science. We will use it a lot, since it is one of the most important stability tests.
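As an illustration, here is a minimal sketch of one such Monte-Carlo test, the trade-skipping variant from the list above, using an assumed list of +1/-1 trade outcomes rather than real backtest data:

```python
import random

random.seed(1)

# Hypothetical trade list (P/L per trade) from a backtest:
# 80% winners of +1, 20% losers of -1, as in the golden-line example.
trades = [1 if random.random() < 0.80 else -1 for _ in range(200)]

def monte_carlo_skip(trades, skip_prob=0.10, sweeps=50):
    """One simple robustness test: randomly skip ~10% of trades
    in each sweep and record the resulting net profit."""
    results = []
    for _ in range(sweeps):
        kept = [t for t in trades if random.random() > skip_prob]
        results.append(sum(kept))
    return results

outcomes = monte_carlo_skip(trades)
print(f"worst sweep: {min(outcomes)}, best sweep: {max(outcomes)}")
print(f"all sweeps profitable: {all(r > 0 for r in outcomes)}")
```

A robust system stays profitable in every sweep and shows a narrow spread between worst and best; a curve-fitted one falls apart as soon as a few key trades are removed.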

**The perfect strategy:**

Let's imagine that our golden-line strategy is perfect and none of our Monte-Carlo tests has any influence on the final outcome. In that case the final result will always be the same, so for example after 50 different randomized sweeps, our probability histogram will look as follows:

Figure 4.2: Perfect strategy distribution

Now we can see that our blue bin is much larger, since all 50 randomized sweeps result in the same outcome. This means that our strategy is not affected by any of the 50 random tests we could think of. In scientific terms we could say that our signal-to-noise ratio is very high (signal = our golden-line results, noise = the random strategies). Our strategy is perfect!

However, in real life this will be very different, since no strategy is perfect and we will run many more randomized sweeps in our Monte-Carlo analysis. (StrategyQuant has several different robustness test modes, which we will use extensively in our strategy validation.) This will result in a 'cloud' of golden lines, where each line represents the outcome of a different randomized sweep. Our probability histogram will also look very different; see the following figure:

Figure 4.3: Real strategy distribution

**CONCLUSION:**

Up to now we have seen that:

- We need strong statistical significance of our results (use a high number of trades)
- We need to prevent curve-fitting (decrease the degrees-of-freedom = use simple systems with fewer parameters)
- We need to pass all Monte-Carlo robustness tests with as low an outcome variance as possible

Those three golden rules can be put into one beautiful equation that describes the resulting **total statistical significance**:

**Statistical significance ∝ Number of trades / ( Degrees-of-freedom × Robustness test results variance )**

In simple words this means that the **total statistical significance** (a higher value is better) is proportional to the **total number of trades** (more is better) within your testing period, and inversely proportional to both the **number of degrees-of-freedom** (input parameters; fewer is better) and the observed **variance of the outcomes** of the Monte-Carlo tests (less variance is better).
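As a toy illustration (the proportionality is a heuristic, not a standard statistic), the score can be sketched like this, with degrees-of-freedom in the denominator so that fewer parameters score higher, in line with the three golden rules; all numbers below are made up:

```python
def significance_score(n_trades, degrees_of_freedom, outcome_variance):
    """Toy heuristic score following the proportionality above:
    more trades is better, fewer free parameters is better,
    lower Monte-Carlo outcome variance is better."""
    return n_trades / (degrees_of_freedom * outcome_variance)

# A simple system: many trades, 2 parameters, stable MC outcomes...
simple = significance_score(n_trades=400, degrees_of_freedom=2, outcome_variance=5.0)

# ...versus a complex system: fewer trades, 10 parameters, unstable outcomes.
complex_ = significance_score(n_trades=80, degrees_of_freedom=10, outcome_variance=20.0)

print(simple, complex_)   # 40.0 vs 0.4: the simple system wins clearly
```

The absolute numbers mean nothing by themselves; only the relative comparison between candidate systems does.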

Now we know what the key elements are that we should be looking for during the strategy design period.

Gr

Chris

Figure 3.1: Example of curve fitting

Since curve-fitting is a mathematical process it can be minimized by the following two approaches:

**Minimize the number of parameters**: Each new EA parameter increases the total degrees of freedom with which the resulting equity line can be 'bent', or in other words, it increases the number of ways in which the dots can be connected. With only one parameter that has only two settings, 'A' or 'B', the total number of degrees of freedom is 2. This results in one straight line. The more parameters, the more possible EA settings, and the bigger the chance of curve-fitting. Thus, one way to fight curve-fitting is to decrease the complexity of the trading system by minimizing its parameters and their ranges.

**Increase the number of trades**: Following the logic presented above, the more trades used to optimize a given trading system, the more difficult it becomes to curve-fit the resulting equity line. Results based on a high number of trades tend to be more stable and have less chance of being curve-fitted.

**Conclusion**: To prevent curve-fitting, we need to reduce complexity (degrees-of-freedom) and optimize using a high number of trades (see lesson 2!).
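One way to see this effect is to 'optimize' over many parameter settings on pure noise: the more settings you can try (more degrees of freedom), the better the best purely-random result looks. A minimal sketch, with all numbers assumed:

```python
import random

random.seed(7)

def backtest_random_market(n_trades=100):
    """Net profit of one parameter setting on pure-noise data:
    every 'trade' is a coin toss, so the true edge is zero."""
    return sum(random.choice([1, -1]) for _ in range(n_trades))

def best_of(n_settings):
    """Optimize over n_settings parameter combinations on noise:
    report the best (i.e. most curve-fitted) result found."""
    return max(backtest_random_market() for _ in range(n_settings))

# With more parameters, the number of testable settings explodes,
# and the best purely-random result keeps improving:
for n in (1, 10, 100, 1000):
    print(f"{n:5d} settings -> best random profit: {best_of(n)}")
```

Every one of these 'profitable' results is pure curve-fitting, since the underlying market here has no edge at all; this is exactly why complex systems with many tunable settings must be distrusted.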

*Following the rules of probability explained in the previous lessons, we can conclude that the requirements for stable system design are quite contradictory, namely:*

*On one side we need to remove complexity from our systems to fight curve-fitting,* **BUT** *on the other hand we need to add complexity in order to be able to search for profitable strategies. Adding complexity to our trading systems (by increasing the number of indicators and rules) leads to a high curve-fitting risk, and also to 'over-specialization': systems with too many trading rules (a large number of parameters = degrees-of-freedom) will be able to pick up only the top high-quality signals, which will dramatically limit the number of trades in your optimization/backtest, making your results less reliable. See lesson 2.*

*Solution: KEEP IT SIMPLE!*

Gr

Chris

To make clear what statistical significance is, let's look at the scientific concept known as the **P-Value**. I will try to explain it using two examples:

__Example 1__: The P-VALUE is calculated by big pharmaceutical corporations each time they test new drugs, in order to check if a new drug is really effective or if the positive results are purely accidental (due to randomness or luck). For example, they run tests on 20 different test groups (each group several hundred people), where only 1 of the 20 groups gets the real drug and the rest of the groups get fake drugs (candies). After some months they compare whether the results in the one group with the real drug are really much better than in all other 19 groups with fake drugs. If the results are significantly better, then the corresponding P-VALUE is 5% = 1 group out of 20 groups (1/20 = 0.05 = 5%). This 5% is the minimal __scientifically__ accepted level of P-VALUE at which you can assume that your results are statistically significant. Moreover, a P-VALUE of 1% (1 out of 100) is much stronger than 5%. This also applies to trading and optimization: for each single optimization result you need to check what the resulting P-VALUE is and determine if it is a really profitable setting or just a random lucky shot.

__Example 2__: Imagine you claim to have a crystal ball (or a system) which is capable of predicting the results of a simple coin-toss sequence. Of course, I do not believe you, and of course I want to test if you are telling the truth. So in order to test your claims I need to perform the following experiment: each time the coin is tossed (heads or tails) I will write down your prediction upfront, but I will also write down the results of 19 other random predictions (for this I will use 19 different coins, which I will toss in parallel with the main coin to generate random predictions). So, if your claim (or system) is right, I need to see a significant difference in accuracy between your predictions and my 19 random predictions after some X number of tosses. This is because my pure random predictions should always result in about 50%/50%, while your system (if valid) should give a much better win/loss ratio, like 60%/40%.

This simple test also gives you **the X number**. This number is the minimum required number of coin tosses (or FX trades) needed to be able to tell if a given system (or EA setting) has a statistical edge or statistical significance. So, if you see a profitable setting after only 25 trades during optimization, you need to compare it to at least 19 other random trading systems (random coin tosses). If one or more random results produce equal or better results than your optimized system, then it is NOT a statistically significant result. That is why it is almost impossible to optimize using short-term data (e.g. on a weekly basis), since each EA setting will produce no more than about 25 trades. When compared to random systems, those random systems will often produce similar or even better results! You will not know if your system is profitable or if the positive result during optimization is caused by a random lucky shot.
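The coin-toss experiment above is easy to simulate. The sketch below (hypothetical names and numbers) compares a claimed 60%-accurate predictor against 19 random predictors; beating all of them corresponds to an empirical P-VALUE of 1/20 = 5%:

```python
import random

random.seed(3)

def prediction_accuracy(skill, n_tosses):
    """Fraction of correct predictions over n_tosses coin tosses.
    skill = probability of predicting a single toss correctly."""
    return sum(random.random() < skill for _ in range(n_tosses)) / n_tosses

def beats_randoms(claimed_skill=0.60, n_tosses=200, n_randoms=19):
    """True if the claimed predictor outperforms all 19 random
    predictors, i.e. an empirical P-VALUE of 1/20 = 5%."""
    you = prediction_accuracy(claimed_skill, n_tosses)
    rivals = [prediction_accuracy(0.50, n_tosses) for _ in range(n_randoms)]
    return you > max(rivals)

# After 200 tosses a genuine 60% predictor should usually stand out...
print(beats_randoms(n_tosses=200))
# ...but after only 25 tosses the random rivals often match or beat it.
print(beats_randoms(n_tosses=25))
```

Running the short-sample case many times shows why 25 trades tells you almost nothing: the random rivals win far too often for the result to be trusted.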

You can test it yourself, but the minimum valid number of trades is >= 50. Only after 50 trades will there be a significant difference between all the random systems and a genuinely profitable setting. 50 is the absolute minimum; 150 or more is considered stable (this number depends on the 'degrees-of-freedom'... keep reading...). See the following example.

Figure 1: 25 trades = example of poor P-VALUE

As you can see in the example above, after 25 trades the main strategy result (gold line) is not much better than the randomly distributed results (based on a random-entry, coin-toss strategy). In that case you cannot say whether this result is due to a good strategy or just pure luck, as shown by the random trading systems. This also means that the result is within the 'first sigma' of the probability distribution, among pure randomness.

Figure 2: Result location in Gaussian curve

Figure 4: 200 trades = example of good P-VALUE

**Conclusion**: In this example the selected strategy (EA setting) is profitable over the long term and results in a strong P-Value of 5% (since the final result beats 19 random strategies).

**Thus, in order to be able to say whether a given setting is profitable or not, we need to test it over a long(er) period of time using a high number of trades! Optimization/backtesting results based on a (too) small number of trades (<100) have very low STATISTICAL SIGNIFICANCE and cannot be trusted!**

Gr

Chris

Trade Analysis:

Monte-Carlo:

MT4 Alpari backtest:

SQ Files:
