Strategic Wealth

Oracle Animals - Luck vs Skill

We are all disappointed by the All Blacks' loss to a strong England team, especially when many had picked them to go on and win the cup.

However, we can't win them all, and predictions are often wrong. I find myself interested in the sources of these predictions, especially the more absurd ones.

Animal oracles often make headlines during major sporting events. The German octopus Paul garnered worldwide attention for predicting the outcomes of football matches. He made 12 correct predictions out of 14, including 8 out of 8 during the 2010 FIFA World Cup.

New Zealand had its own oracle, Sonny Wool, a sheep who picked 7 of 10 results during the 2011 Rugby World Cup, including every All Blacks game. Other examples include Mani the parakeet, Leon the porcupine and Petty the pygmy hippopotamus.

While it would be daft to place bets based on the opinions of a mollusc, the stories are nonetheless entertaining. They also illustrate some interesting concepts and offer lessons for investors (bear with me).

Survivorship Bias

How we overlook what is missing

Sometimes the data that is missing is integral to the overall picture. The most famous example concerns planes returning from combat during World War Two: the instinct was to reinforce the areas where returning aircraft showed bullet holes, until statistician Abraham Wald pointed out that planes hit elsewhere never made it back, so the armour belonged where the survivors were untouched.

Paul the Octopus looks clairvoyant on his own, but we don't read articles about animals making lousy predictions. If his choices were pure chance, the probability of being right 8 times in a row is only about 0.4% (0.5 to the power of 8). But if 250 different animals are making predictions, we'd expect roughly one of them to get every call right. The media then picks up on the successful one and the rest are ignored.
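Purely as an illustration (a minimal sketch of my own, not anything from the article's sources), a quick simulation of 250 coin-flipping animals over 8 matches shows how easily a "perfect" oracle emerges by chance alone:

```python
import random

def chance_of_an_oracle(n_animals=250, n_matches=8, n_trials=10_000):
    """Estimate how often at least one animal calls every match correctly by luck alone."""
    tournaments_with_oracle = 0
    for _ in range(n_trials):
        # Each animal "predicts" each match by coin flip; check for a perfect record.
        perfect_animal = any(
            all(random.random() < 0.5 for _ in range(n_matches))
            for _ in range(n_animals)
        )
        tournaments_with_oracle += perfect_animal
    return tournaments_with_oracle / n_trials

# Roughly 250 * 0.5**8 ≈ 0.98 perfect records are expected per tournament,
# so a headline-worthy "oracle" turns up in about 60% of simulated tournaments.
print(f"{chance_of_an_oracle():.0%}")
```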

Survivorship is literal for another ill-fated oracle octopus named Rabio, who correctly predicted the outcomes of all three of Japan's group games in the 2018 FIFA World Cup. Before he could weigh in on the next game, Rabio was sold and presumably turned into sashimi.

This bias matters when evaluating active investment managers, both individually and in aggregate. If you visit a manager's website, you are unlikely to see a list of funds closed due to underperformance, so the funds that remain make the manager's record look much more impressive.

Dimensional looked at a sample of 2,786 US mutual funds over the 15-year period to the end of 2018. Only 18% of them outperformed their benchmarks, and nearly half closed over the period. If we were to consider only the funds that lasted the full 15 years, we would see 36% outperforming their benchmarks, which looks much better than the full picture.
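As a rough back-of-the-envelope check (my own arithmetic, assuming roughly half the funds closed and that the outperformers were among the survivors), the two figures are consistent:

```python
# Rough reconstruction of the survivorship effect from the figures quoted above.
# Assumptions are mine, not Dimensional's exact methodology: about half the funds
# closed, and the outperforming funds were among the survivors.
total_funds = 2786
outperformers = round(0.18 * total_funds)  # ~501 funds beat their benchmark
survivors = round(0.50 * total_funds)      # ~1,393 funds lasted the full 15 years

print(f"Share of all funds that outperformed: {outperformers / total_funds:.0%}")  # ~18%
print(f"Share of survivors that outperformed: {outperformers / survivors:.0%}")    # ~36%
```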

Skill vs Chance

Can we see the difference?

Most would assume Paul's success is due to luck rather than skill, because there is no reasonable explanation for an octopus understanding a football World Cup. We can chalk it up to one lucky animal out of a huge sample.

Like the lottery, it is likely someone will win, but very unlikely a particular person will win. As there are many oracle animals, we expect a few to be right.
 
With active managers, it is more difficult to accept the evidence that chance is a bigger determining factor than skill. You would expect an industry of people specialising in picking stocks to do better than, say, a collection of oracle animals. But the evidence suggests the performance of both is largely explained by chance.

S&P's Persistence Scorecard takes the top 25% of performing managers in a given year and checks how many stay in the top 25% the following year. If their high returns were due to chance rather than skill, we would expect about a quarter to remain in the top 25%. Lo and behold, that is roughly what the data shows.
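To see why a quarter is the benchmark pure chance would produce, here is a minimal simulation of my own (not S&P's methodology): give every fund random returns in two consecutive years and measure how many of year one's top quartile repeat.

```python
import random

def top_quartile_persistence(n_funds=2000, n_trials=200):
    """Simulate funds with purely random returns and measure top-quartile persistence."""
    repeat_rates = []
    for _ in range(n_trials):
        year1 = [random.gauss(0, 1) for _ in range(n_funds)]
        year2 = [random.gauss(0, 1) for _ in range(n_funds)]
        quartile = n_funds // 4
        # Rank funds in each year and keep the top quarter.
        top1 = set(sorted(range(n_funds), key=lambda i: year1[i], reverse=True)[:quartile])
        top2 = set(sorted(range(n_funds), key=lambda i: year2[i], reverse=True)[:quartile])
        repeat_rates.append(len(top1 & top2) / quartile)
    return sum(repeat_rates) / n_trials

# With no skill at all, about 25% of one year's top-quartile funds
# land in the top quartile again the following year.
print(f"{top_quartile_persistence():.0%}")
```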

So while an octopus (or a cat) may not understand how to analyse a stock price or manage risk, in terms of long-term performance it would be on much the same footing as an active manager.

Though I assume the octopus would charge lower fees.