Avoiding Blowups with Monte-Carlo
In this series on signal testing, we’ve examined the results that make up an average, the stats that can help, and why it’s so important to look at the individual signals an idea generates. This week, we want to go a step beyond just looking at the signal and see what happens if we were to use the signal repeatedly. This gets us closer to what we could actually expect.
In real life, it’s rare that we can take every signal. Each time we do, our result could be anywhere from a loss to a profit. Hopefully we pick more winners and we profit in the long run. The question is, how do we simulate what to expect in real life to see if our signal is robust?
Many would answer “run a backtest”. The issue we have is that a backtest picks one “path” through all the signals. It’s rare that we would pick the same path in real life. We need a better way to model real life over and over again—just as we did with the signal.
A very powerful—but little used—technique called Monte Carlo Simulation (MC) can help. Let’s explore where Monte Carlo came from…
Blowing Stuff Up Without Getting Blown Up
During WWII, scientists working on the Manhattan Project had a dilemma. They only had so much uranium to test bombs, and trial and error wasn’t exactly an option! “Hey guys, do you think we’re standing too close…feels a little warm?”
They needed to tackle uncertainty by understanding ranges of possible outcomes, allowing them to adequately prepare. MC was their solution—simulating that possible range of outcomes. MC was named after the Monte Carlo casino in Monaco, where the uncle of one of the scientists, Stanisław Ulam, frequently gambled.
We can do the same—simulating before we jump—conserving money, time, and emotions until we can make a more informed decision. We can learn from blowing up simulated trades instead of real ones.
Everybody’s Doing It
Fast forward to today. MC has become a best practice in the finance industry. We’ve already seen its use in science—it shines when things are complex. I’d venture to say the markets qualify!
Dog Bites And Trades
The theory behind MC is complicated, but the intuition isn’t. Perhaps this analogy might help…
Let’s say I’ve never seen a dog. I walk out my front door and BAM!—the first one I see bites me. The experience is painful, and I think, “all dogs bite”. Now I’m biased—dogs suck.
However, the next dog I see sits and wags his tail, is happy and playful. I’m confused. Was I wrong? Maybe I need to adjust my theory about dogs? Dogs seem unpredictable.
This goes on over time until eventually, after encountering thousands of dogs, I have a good approximation of how often I might be bitten. I conclude that a dog attempts to bite me about 2% of the time: 100 bites out of 5,000 encounters.
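The intuition can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the article: the 2% bite rate is assumed, and `estimated_bite_rate` is a made-up helper. The point is simply that the more encounters we simulate, the closer the observed frequency gets to the true rate.

```python
import random

random.seed(42)  # reproducible encounters

TRUE_BITE_RATE = 0.02  # assumed "real" chance that any given dog bites

def estimated_bite_rate(encounters):
    """Meet `encounters` dogs and return the observed bite frequency."""
    bites = sum(random.random() < TRUE_BITE_RATE for _ in range(encounters))
    return bites / encounters

# A handful of encounters gives a noisy estimate; thousands converge near 2%.
for n in (10, 100, 5000):
    print(n, estimated_bite_rate(n))
```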
MC works the same way. Too little experience (too few past trades), and we can be very wrong. Too much experience isn’t a problem: the more we know, the better. And MC lets us gain that experience a lot less painfully.
Remember what we said about a backtest picking a single “path”? That’s not enough information to base our strategies on. Like the dog experience, we cannot base decisions on just one test.
That Makes Sense
To make MC work, we need three things:
- A mathematical formula that turns inputs into outputs — our indicators.
- A reasonable idea of the inputs needed — the parameters of the indicators.
- An idea of which results are acceptable for what we are trying to do — positive returns from trading.
That’s great! We’ve got everything the model needs: the indicators we use, how we use them, and the goal of making money with them.
History Doesn’t Repeat, But It Sure Does Rhyme
MC uses the past trades generated by our signal to simulate many possible outcomes of the thing we want to understand: what happens when we reinvest trades? These trades are visually represented in the white and blue histogram in FIGURE 1.
FIGURE 1. Monte Carlo test example of reinvesting 10 randomly selected trades.
Let’s look at one simulation, called a test. MC takes all of our trades, places them in a box and shuffles them. It then reaches in and randomly picks 10 trades to reinvest back-to-back, giving us the effect of compounding returns. The results are recorded and the trades are returned to the box and shuffled for the next test.
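A single test can be sketched in a few lines, under the assumption that each trade is a simple fractional return; the `trades` list and the `one_test` helper below are hypothetical stand-ins, not the article’s actual data.

```python
import random

# Hypothetical historic trade returns from our signal (0.05 means +5%).
trades = [0.05, -0.03, 0.08, -0.12, 0.02, 0.06, -0.01, 0.04, -0.07, 0.10,
          0.03, -0.02, 0.09, -0.05, 0.01]

def one_test(trades, n=10):
    """Shuffle the 'box' of trades, draw n of them, and reinvest them
    back-to-back so the returns compound."""
    picks = random.sample(trades, n)   # drawn without replacement, returned after
    equity = 1.0
    for r in picks:
        equity *= 1 + r                # compound each trade into the next
    return equity - 1                  # total return for this one test

print(f"One test result: {one_test(trades):+.1%}")
```

Each call to `one_test` shuffles the box again, so repeated calls give different outcomes, just as described above.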
FIGURE 2 shows the path, step by step, to the -39% loss of our first test. This test result represents one potential outcome.
FIGURE 2. Results of 10 trades reinvested at each step.
Remember our dog bite? The more dogs we encountered, the more accurate our estimate was. We do this by repeating our first test a lot.
After tens of thousands of tests, the results from each test are placed into bins of similar returns. The total number of tests in each bin is tallied and each bin’s percentage of the total is calculated.
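Putting the pieces together, the whole simulation can be sketched as follows, again with a hypothetical `trades` list standing in for the signal’s real trade history:

```python
import random
from collections import Counter

random.seed(7)  # reproducible simulation

# Hypothetical historic trade returns from our signal.
trades = [0.05, -0.03, 0.08, -0.12, 0.02, 0.06, -0.01, 0.04, -0.07, 0.10,
          0.03, -0.02, 0.09, -0.05, 0.01]

def one_test(n=10):
    """Draw n trades from the box and compound them back-to-back."""
    equity = 1.0
    for r in random.sample(trades, n):
        equity *= 1 + r
    return equity - 1

# Run tens of thousands of tests...
results = [one_test() for _ in range(20_000)]

# ...then drop each result into a 5%-wide bin and tally the percentages.
BIN = 0.05
bins = Counter(round(r / BIN) * BIN for r in results)
for level in sorted(bins):
    pct = 100 * bins[level] / len(results)
    print(f"{level:+.0%} bucket: {pct:5.1f}% of tests")
```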
Voilà! We now have the Monte Carlo Simulation distribution shown below in blue, in FIGURE 3. The summary stats will be used later, at the end of the blog.
FIGURE 3. Output from our signal test.
FIGURE 4 summarizes the process, which results in a wide range of outcomes: worst case, best case, average, and everything in between. What we are doing is randomly selecting from the thousands of trades and compounding the returns to create the MC results.
FIGURE 4. From historic trades to simulated outcomes.
The MC distribution is also known as a probability distribution because it links the outcomes in our signal test to their probabilities of occurrence. Each bin’s percentage represents the likelihood of a particular return after reinvesting 10 trades over our window of time. For example, in FIGURE 5, outcomes with returns near 50% occurred about 0.5% of the time after one year.
FIGURE 5. Probability distribution from simulating reinvesting 10 trades back-to-back.
I Have A Question
We did all of this because we’d like to see what the outcomes could be when we trade several of our ideas in a row. And, we’d like to know the answer to certain types of questions to help make more informed decisions:
- “What are the odds of X happening?”
- “What’s the probability of negative returns?”
- “What changes if I do this instead of that?”
Monte Carlo simulation is here to help. Past trades that met our criteria were the input; after Monte Carlo Simulation, we know the probabilities of the outcomes.
FIGURE 6. Probability distribution from simulating reinvesting 10 trades back-to-back.
By assessing the probability distribution in FIGURE 6 (answers), we can now generate important questions like these (and sound smart too):
Answer: The range of returns around the mean with 60% probability is -24% to 61%.
What is the likely range of results after reinvestment?
- Identified by the dark shaded area under the probability distribution
- The area between the 20th and 80th percentiles
- Sometimes loosely called the interdecile range, or the “shoulder”
- Represents the outcomes which occurred most frequently
- Notice that the percentage of outcomes outside this area drops off sharply
Answer: The median, which is 13%.
What’s the expected performance after reinvestment?
- Read from the median of the distribution
- This middle value has an equal number of outcomes on both sides
- It’s less affected by large losers and winners than the average
Answer: The total of all outcomes below -24%: the left-tail 20%.
What is the probability of losing more than 24%?
- A way to look at risk by examining the left tail, AKA the negative outcomes
- The total area under the distribution to the left of -24% returns, which also happens to be the 20th percentile (the left edge of the shaded light-blue area)
Answer: The probability of loss is 39%.
What is the probability of ending with negative returns?
- The total of all outcomes having returns less than 0%
- The total area under the curve to the left of 0%
- Can also be read from the Monte Carlo summary statistic Probability of Loss
Answer: Compare the initial simulation’s distribution and stats with a new one.
How do we measure improvement between different versions?
- In general, a shift of the distribution to the right (toward positive returns) is better
- A taller, narrower distribution, with less spread around the median and mean, is better
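All of the summary statistics quoted in the answers above can be read straight off the simulated results. Here is a sketch, again using a hypothetical trade list, so the numbers will differ from those in the figures:

```python
import random

random.seed(7)

# Hypothetical trade returns; the article's figures come from its own signal.
trades = [0.05, -0.03, 0.08, -0.12, 0.02, 0.06, -0.01, 0.04, -0.07, 0.10,
          0.03, -0.02, 0.09, -0.05, 0.01]

def one_test(n=10):
    """Draw n trades from the box and compound them back-to-back."""
    equity = 1.0
    for r in random.sample(trades, n):
        equity *= 1 + r
    return equity - 1

results = sorted(one_test() for _ in range(20_000))

def percentile(p):
    """Return the p-th percentile of the sorted results."""
    return results[min(int(p / 100 * len(results)), len(results) - 1)]

median = percentile(50)                    # expected performance
p20, p80 = percentile(20), percentile(80)  # the 60%-probability range
prob_loss = sum(r < 0 for r in results) / len(results)         # area left of 0%
prob_below_p20 = sum(r < p20 for r in results) / len(results)  # left tail

print(f"Median: {median:+.1%}")
print(f"60% range: {p20:+.1%} to {p80:+.1%}")
print(f"Probability of loss: {prob_loss:.1%}")
```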
As you can see, this little blue distribution is brimming with useful information. And, like the game show Jeopardy, the answers are already given. Monte Carlo simulation is a powerful technique that helps us ask the right questions and know the probabilities, giving us the ability to make more informed decisions, the potential to refine our process, and improved odds of success.
Carson Dahlberg, CMT
Author & Co-founder of Northington Dahlberg Research
Carson comes to Optuma with nearly 20 years of experience in the financial markets, spanning technical and quantitative trading, research, and education. Carson has worked for several notable firms: Morgan Stanley, Wachovia Securities, Wells Fargo, and Schaeffer’s Investment Research. In addition, he is the co-founder of Northington Dahlberg Research, a quantitatively driven, volatility-based research firm, and was the Director of the CMTinstitute (an online program to assist financial professionals in passing the CMT examination process).
Carson is presently a Director at Large on the Board of the Market Technicians Association (MTA) and serves the organization in various capacities. He was the founder and first Chapter Chair of the Charlotte Chapter of the MTA. As Committee Chair of the Ethics Committee, he helped deliver an updated and globally relevant ethics offering for the designation. In the past, he has served on the Admissions Committee, was the Board Liaison for the Journal of Technical Analysis Committee, and was the Director of the CMTinstitute.
Carson received a degree in Chemistry from the University of Cincinnati and was awarded the Chartered Market Technician designation in January of 2008.