Why does one investment outperform another? Economists and investment firms have studied this for centuries. But it turns out that many of their more recent findings may be wrong.
In a new paper from the National Bureau of Economic Research (NBER), Duke University finance professor Campbell R. Harvey, University of Oklahoma assistant finance professor Heqing Zhu, and Texas A&M assistant finance professor Yan Liu conclude that the majority of published research findings in financial economics are likely false.
What they studied
Harvey says he was inspired by a 2005 study that shook the medical community by claiming that more than half of all medical research results were wrong. He wanted to know whether the same was true in finance.
He and his co-authors studied 315 articles that examine different factors that might predict stock returns. These articles propose all kinds of potentially predictive variables, such as leverage and price-to-earnings ratios.
He uses genetic testing as an analogy. Scientists looking for the gene that causes, or is linked to, a particular disease can test many genes. For any single gene-disease test, the chance that a statistical relationship between the two is pure coincidence is low. But as you test more and more hypotheses, the odds of finding a "statistically significant" relationship that has no causal basis grow higher and higher.
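To make that arithmetic concrete, here is a minimal Python sketch (mine, not from the paper) of how fast the chance of at least one false positive grows when independent tests are each run at the 5 percent significance level:

```python
# Probability of at least one false positive when running n
# independent tests, each at the 5% significance level.
# Illustrative sketch only; assumes the tests are independent.

alpha = 0.05

for n in (1, 10, 50, 100, 200):
    p_false_positive = 1 - (1 - alpha) ** n
    print(f"{n:>3} tests -> {p_false_positive:.3%} chance of a spurious 'discovery'")
```

Even at 50 tests, the odds of at least one spurious "discovery" exceed 90 percent; at 200 tests they are a near certainty.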
What they found
Harvey and his co-authors found that the studies' authors did not use sufficiently rigorous standards for determining statistical significance. As a result, they write, "most claimed research findings in financial economics are likely false."
The reason is that in trying to figure out exactly what correlates with high returns, academics and finance gurus often compare many different variables. Statistical tests are generally run at the 5 percent significance level, Harvey says. This means that when a variable is found to be statistically significant, there is a 5 percent chance of seeing a result that size (or larger) even if no real effect is present. That's pretty low if you're only running one test, but if you're using powerful computers to run hundreds of tests, you're all but guaranteed to find some "significant" results that are just random noise.
To show this another way, Harvey generated 200 random variables using a random number generator. The one that "outperformed" the others is highlighted in dark red below.
[Chart: the 200 randomly generated variables, with the top performer highlighted in dark red] (Source: Campbell Harvey)
That sounds like a pretty good return, until you consider that it's random data: the equivalent of "a monkey throwing darts at the Wall Street Journal stock listings," Harvey says.
So the more variables you study, the greater your chances of getting a false result, he says.
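Harvey's demonstration is easy to reproduce. Below is a minimal Python sketch (mine, not the paper's code) that generates 200 random "strategies" and picks the best one; the volatility, sample length, and seed are arbitrary illustrative choices:

```python
import numpy as np

# Reproduce the spirit of Harvey's demonstration: generate 200
# purely random "strategies" and see how good the best one looks.
# Illustrative sketch only; parameters are arbitrary assumptions.

rng = np.random.default_rng(seed=42)

n_strategies = 200
n_months = 120  # ten years of monthly "returns"

# Each strategy's monthly returns: mean zero, 4% volatility (pure noise).
returns = rng.normal(loc=0.0, scale=0.04, size=(n_strategies, n_months))

# t-statistic of each strategy's mean return against zero.
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

best = np.argmax(t_stats)
print(f"Best of {n_strategies} random strategies: t-stat = {t_stats[best]:.2f}")
print(f"'Significant' at the 5% level (|t| > 1.96): {(np.abs(t_stats) > 1.96).sum()}")
```

On a typical run, roughly ten of the 200 noise series clear the conventional 5 percent bar, and the best one posts a t-statistic well above the 1.96 cutoff, despite having no predictive content at all.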
What this means
This means, first of all, that Harvey's colleagues in finance may need to change the way they do research. In particular, they will have to raise the bar for statistical significance over time, as researchers test more and more factors.
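One standard way to raise that bar is a multiple-testing correction. The sketch below (mine, not the paper's) shows the bluntest such adjustment, the Bonferroni correction, which divides the significance level by the number of tests; the 315 tests correspond to the articles the authors examined:

```python
# A Bonferroni correction: divide the significance level by the
# number of tests performed. Simplified sketch using a large-sample
# normal approximation for the two-sided critical t-value.

from scipy import stats

alpha = 0.05
n_tests = 315  # the number of factor studies the paper examined

adjusted_alpha = alpha / n_tests
critical_t = stats.norm.ppf(1 - adjusted_alpha / 2)

print(f"Single test:                     |t| > {stats.norm.ppf(1 - alpha / 2):.2f}")
print(f"Bonferroni over {n_tests} tests: |t| > {critical_t:.2f}")
```

Bonferroni is deliberately conservative; the authors also consider less stringent adjustments, and on that basis argue that a newly proposed factor should clear a t-statistic of roughly 3.0 rather than the conventional 2.0.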
But the implications are much broader. For one thing, the finding confirms what many investors may have already suspected.
"The broader view is that some investment managers will appear to outperform, purely by luck," he said in an email to Vox.
And that means some investment managers may need to change their strategies. Harvey uses a variable he has seen as an example.
"A very important company in this space, one of their variables is a company's market capitalization cubed," says Harvey. "That, for me, doesn't have much economic basis."