"It is exceedingly difficult to make predictions, particularly about the future." – Niels Bohr
I was remembering my history with forecasts and predictions as I read Nate Silver’s book, “The Signal and The Noise.” I started my career in the Indian financial markets as a credit rating analyst. When we had to make financial forecasts for determining ratings, we made them “conservative,” and that was the accepted thing to do. A couple of years later, I switched to being an equity analyst, and I had to make forecasts for a power company. When I made my first recommendation to my boss, he asked me the basis for my forecasts. I told him I was being conservative. I still remember the way he barked at me: “You are not paid to be conservative; you are paid to be correct.”
Over the years, I have made hundreds of corporate financial forecasts, but I always remember that the objective of forecasting is to be as correct as possible. If you are too conservative, you might miss out on good opportunities; but if you are too optimistic, you might get stuck with dud investments.
But I do not understand the false precision in the forecasts of many financial-market analysts. Instead of acknowledging that they can forecast EPS only within a range, they still provide forecasts to the second decimal place. In predicting macroeconomic numbers too, most economists provide a single-point number rather than a range.
Silver’s book opens with a discussion of the failure of rating agencies to forecast default rates on U.S. mortgage-backed securities. He covers some familiar ground, including the home speculators’ failure to see that prices could fall, the rating agencies’ failure to see that the quality of the underlying mortgage pools was changing, and the policymakers’ failure to understand that a housing crisis could have implications across the entire economy.
In another chapter, dealing with the failure of economic forecasts, Silver cites studies showing that actual GDP growth falls outside the forecasters’ 90% confidence interval almost half the time. That is a terrible track record, indeed. It is no wonder that the Queen of England asked economists at the London School of Economics why nobody had seen the financial crisis coming. Silver acknowledges that economies are complex systems, but argues that part of the failure of economic forecasting has to do with overfitting the data. In other words, there are simply so many economic variables being measured that you can fit the data to reach any conclusion you want.
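The overfitting problem Silver describes is easy to demonstrate with made-up numbers. In the sketch below (all data hypothetical, not from the book), a flexible model with one parameter per observation fits the historical sample almost perfectly, yet a simple model generalizes better to fresh data from the same process:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "economy": a plain linear trend plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2.0 * x_train + rng.normal(0, 0.2, size=10)

# Fresh draws from the same process, used only for evaluation.
x_test = np.linspace(0, 1, 100)
y_test = 2.0 * x_test + rng.normal(0, 0.2, size=100)

def errors(degree):
    """Fit a polynomial of the given degree; return in- and out-of-sample MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    in_sample = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    out_sample = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return in_sample, out_sample

in1, out1 = errors(1)  # simple model: two parameters
in9, out9 = errors(9)  # one parameter per data point: overfit

# The degree-9 fit passes through every training point (near-zero
# in-sample error), but that "accuracy" is fitting the noise, not the signal.
```

The flexible model looks better on history; the simple one is the better forecaster, which is exactly the trap Silver warns economic modelers about.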
He also points out that, even if you manage to choose the right variables to predict the path of the economy, it is still difficult to separate correlation from causation; economic policymakers observe the same variables and start influencing them through their policies, changing their subsequent behavior. Even more fundamentally, economic data is noisy and subject to many revisions as better estimates become available. And finally, there is the incentive problem: even if an economist is not sure, it benefits him in the long run to issue forecasts as if he is sure!
In contrast to the failures of economic forecasting, Silver’s book also provides a fascinating account of another, more successful, area of forecasting complex systems: the weather. Although the number of variables in the weather system is very large and the interactions between them complex, weather forecasting has steadily improved in accuracy over time. For example, predictions of hurricane paths, temperatures, rainfall and other components of the weather have steadily improved as meteorologists have repeatedly used actual outcomes to refine their models.
Silver also discusses sports forecasting and political forecasting, the two areas that have won him fame. While it is true that he is generally dismissive of “punditry,” he still finds value in adding judgment to a probabilistic forecast derived from numerical analysis. For me, two of the most fascinating chapters were the one on the probabilities behind poker and the one discussing heuristic models in playing chess. He also discusses the problems with making long-range forecasts, such as global warming models, or forecasting with inadequate feedback, such as with large earthquakes.
While this is not a “how to” manual, the book discusses a lot of relevant concepts such as overfitting, calibration of models, the Bayesian model, and probabilistic thinking. Perhaps the biggest takeaway from the book is to think in terms of probabilities and not single-point forecasts. Among financial-market participants, traders are the most adept at thinking probabilistically, trying to make a profit in the aggregate rather than from every single trade.
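The Bayesian updating Silver advocates can be shown with a few lines of arithmetic. The numbers below are hypothetical, not from the book: an analyst starts with a prior belief and revises it when new evidence arrives, ending up with a shifted probability rather than a yes/no verdict:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical scenario: an analyst assigns a 20% prior probability
# that a company will miss earnings. A profit warning is then observed;
# assume warnings precede 70% of misses but also occur 10% of the time
# when there is no miss.
prior = 0.20
p_warning_given_miss = 0.70
p_warning_given_no_miss = 0.10

# Total probability of seeing a warning at all.
p_warning = prior * p_warning_given_miss + (1 - prior) * p_warning_given_no_miss

# Updated belief after observing the warning.
posterior = prior * p_warning_given_miss / p_warning
# posterior = 0.14 / 0.22, roughly 0.64: the warning raises the probability
# of a miss from 20% to about 64%, not to certainty.
```

The point of the exercise is the habit it builds: evidence moves a probability, and the forecast stays a probability rather than hardening into a single-point call.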
As a practitioner of forecasting company financials, I think there are two kinds of forecasts: one based on masses of statistical data (such as in sports or weather), and the other based on judgmental systems (such as the financial results of a specific company). In the former, good forecasts depend on the quality of the data and the forecasting models, both of which can be refined over time. In the latter, the challenges are greater because you are trying to predict the behavior of a single firm based on economic, sectoral, technological and regulatory trends. Even if your underlying predictions of business trends are correct, the company’s management may still surprise you with its actions, say, an acquisition or entry into a new business. I believe it is important not to shy away from making forecasts of the big trends and their impacts, but one should not get too focused on single numbers for single years.
Besides getting the overall trend and turning points correct, one also has to get the magnitude and timing correct in order to make money in the markets. Otherwise, one can keep issuing bearish forecasts in the hope of eventually being correct (à la Nouriel Roubini!).
Nate Silver’s book is an important and entertaining read for anyone in the prediction business.