Survivorship Bias

Survivorship bias arises when a backtest is run only on the stocks that have survived to the present day. For example, Microsoft is in the S&P 500 these days, and it was in the S&P 500 back in 1998. Enron is not in the S&P 500 these days because it’s now bankrupt, but it was in the index back in 1998. So Enron has not survived; Microsoft has.

When we test our strategies we want to do two things. First, we want to include the stocks that have since been delisted, to make sure the strategy worked on them even though those stocks are now gone. Second, we want to ensure that we’re trading the same universe today as we would have traded back in 1998. Using a historical constituent list, we can see which stocks were in the S&P 500 back in 1998 and make sure we’re testing on those stocks. If I trade a specific universe, which I do with the S&P 500, I want to know that the S&P 500 I’m testing on back in 1998 is the actual list of 500 stocks at that time, including all the ones that were later delisted.
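
A minimal sketch of that idea in Python is below. The tickers, dates and data layout are purely illustrative assumptions, not the actual constituent data or software referred to here; the point is simply that every signal gets checked against point-in-time index membership.

```python
from datetime import date

# Hypothetical historical-constituent table: for each symbol, the date ranges
# during which it was an index member (None means it is still a member today).
index_membership = {
    "MSFT": [(date(1994, 6, 1), None)],
    "ENE":  [(date(1994, 1, 1), date(2001, 11, 29))],  # illustrative dates only
}

def in_universe(symbol: str, on_date: date) -> bool:
    """Return True if the symbol was an index constituent on the given date."""
    for start, end in index_membership.get(symbol, []):
        if start <= on_date and (end is None or on_date <= end):
            return True
    return False

# A 1998 backtest should still consider the stock that later disappeared,
# while a 2005 backtest should not.
print(in_universe("ENE", date(1998, 6, 30)))   # True
print(in_universe("ENE", date(2005, 6, 30)))   # False
```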

So, in certain circumstances, survivorship bias can dramatically overstate your simulated performance. You want very accurate performance data, and the only way to get it is to remove survivorship bias.

As companies join the S&P 500, how do you actually build that into your data?

Do you need to know the date that they came into that index?

That’s correct. And out, and back in again. Some stocks come in and out of an index a number of times. Regis Resources, for example, came into the top one hundred in the Australian market and then dropped out again as it fell away. So if we’re trading a specific universe, that information is important. We want to make sure we have it, and it is available; we use a data provider that supplies it, and it’s very beneficial. Only certain software packages can cope with delisted data, and it does become quite technical, because there are a few extra pieces of programming code you have to put in place to ensure those stocks are removed from the simulation at the right time.
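
As a hedged illustration of that last point, one of those extra pieces of code is typically a rule that force-closes any open position once its stock stops trading. The symbols, dates and exit rule below are assumptions made for the sketch, not the behaviour of any particular package:

```python
import datetime as dt

# Hypothetical delisting calendar: the last date each symbol traded.
delist_dates = {
    "ENE": dt.date(2001, 11, 29),  # illustrative only
}

def apply_delistings(open_positions, today, last_prices):
    """Force-close open positions in stocks that have stopped trading.

    open_positions: dict mapping symbol -> share count
    last_prices:    dict mapping symbol -> last available close price
    Returns the proceeds realised from forced exits. Exiting at the last
    quoted price is a simplifying assumption; a harsher model might assume
    a discount, or zero for bankruptcies.
    """
    proceeds = 0.0
    for symbol in list(open_positions):
        delisted = delist_dates.get(symbol)
        if delisted is not None and today >= delisted:
            proceeds += open_positions.pop(symbol) * last_prices[symbol]
    return proceeds
```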

We use a very high quality data provider and we’ve used them for a long time – Premium Data. It’s not overly expensive. There are a number of data suppliers out there, but the people we use are exceptional. We constantly track our real-time trading against our simulated performance on a week-to-week basis, so we know the data is doing what it’s supposed to do.

If you’re going to be a serious trader then you really can’t skimp on things like data. It’s an old saying of mine: "Do you want to make money, or do you want to save money?" If you want to make money, you’ve got to make a conscious decision to choose quality data, quality systems and quality software. If you want to save money, then maybe you can get away with using free Yahoo data. But if you have a bad data point, or unadjusted data, you may end up with a trading error that costs significantly more than acquiring good quality data in the first place.

The more data you have, the more likely the strategy is to be robust. A strategy that does eight hundred trades a year is good; one that produces two thousand trades a year is better, because more is better. Robustness is a function of how often a particular setup or pattern occurs: the more it occurs, the more robust it is. Think about the sun rising and setting. That is effectively a pattern. We know it happens at the same time of day, and it’s been happening for a long time, so we’re fairly certain it’s going to happen again tomorrow.

So the more occurrences we have in history, the greater the evidence that the pattern will continue to recur, and most likely in the same way it has in the past.
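
The statistical intuition behind that, offered here as a generic sketch rather than anything quoted above, is that the uncertainty around a strategy’s average trade shrinks roughly with the square root of the number of trades:

```python
import math

def standard_error(trade_pnls):
    """Standard error of the average trade: sample standard deviation / sqrt(n)."""
    n = len(trade_pnls)
    mean = sum(trade_pnls) / n
    variance = sum((x - mean) ** 2 for x in trade_pnls) / (n - 1)
    return math.sqrt(variance / n)
```

All else being equal, the error bar around the average trade from a 25-trade history is about sqrt(800/25) ≈ 5.7 times wider than from an 800-trade history, which is why a handful of trades tells you very little about whether a pattern is real.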

Whenever I see a system that’s based on a single market, it’s a red flag. A single-market system tends to be (though not always) over-optimised, and if the performance metrics are through the roof then that is a major red flag. Take the percentage of winning trades: if that is around 75%, I would really start to question the strategy. If the system has only got twenty-five trades in a ten-year period, that’s also a big red flag. The same goes for other metrics like the profit factor: if that’s in excess of three, you’ve really got to ask a serious question, and when you see a profit factor of sixteen you know it’s just an absolute curve-fit, data-mined thing. I understand that there are people out there who trade single-market systems, and that’s fine, but it’s not something I would do because I don’t think it would be robust enough to trade. Unless of course they’ve got thousands of trades in the sample, then yes, but usually that’s not the case.
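
For reference, profit factor is gross profit divided by gross loss, and percentage winning trades is simply the fraction of trades closed at a profit. A minimal Python sketch, with a made-up trade list purely for illustration:

```python
def win_rate(pnls):
    """Fraction of trades that closed with a profit."""
    return sum(1 for p in pnls if p > 0) / len(pnls)

def profit_factor(pnls):
    """Gross profit divided by the absolute value of gross loss."""
    gross_profit = sum(p for p in pnls if p > 0)
    gross_loss = -sum(p for p in pnls if p < 0)
    return float("inf") if gross_loss == 0 else gross_profit / gross_loss

# Hypothetical trade results, for illustration only
trades = [120, -80, 45, -60, 200, -30, 75, -110, 60, -40]
print(f"Win rate:      {win_rate(trades):.0%}")
print(f"Profit factor: {profit_factor(trades):.2f}")
```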
