What future for forecasts? It’s time economists grappled with the flaws in their models
Macroeconomic forces influence truly vast sums of money, but the process of analysing them is notoriously unreliable. Economic forecasts are often wide of the mark even just one year ahead.
Traditional economic forecasting has conspicuously failed to identify turning points in the economic cycle, while the highly sophisticated DSGE (dynamic stochastic general equilibrium) models, which are the workhorse of central banks around the world, also missed the financial crisis completely. There is an analogy here with climate change models, which are a significant intellectual achievement but of little practical use.
Modelling can’t go backwards to traditional macro-econometrics. First, the Lucas critique in the 1970s argued that, when economic policy changed, previous economic relationships would also change, making the existing models redundant. Consequently, the theory went, macroeconomic models had to be built from the bottom up, based on microeconomic foundations. The Lucas critique thus laid the foundations for DSGE models.
Second, the practical experience of large-scale macroeconomic models was that the coefficients were unstable or failed to capture the true economic relationships. The world is simply far too complex, with dynamic feedback effects. And no model run is ever published raw. Every forecast you see based on an econometric model has the human touch, with the result being adjusted to accord with the expectations of the modellers.
So modelling can’t go back to the traditional approach, but I doubt it can go forward with DSGE models either. DSGE models are just far too stylised, with so-called optimising representative agents. The real world is replete with distortions that can’t be captured in a DSGE model. These models titillate academia and central bank economists, but they are notoriously absent from hedge funds and investment banks. The proof of the pudding is in the eating, and DSGE models would be everywhere in the private sector if they delivered accurate views of the future. But they’re not, and they don’t.
The world is simply too complex for a model, and yet we still thirst to know what the future holds. People are both sceptical of forecasts and fascinated by them. The problem is obvious and longstanding, but financial markets still require a view of the future. My guess is that markets are reluctant swallowers of traditional sources, but think they have no alternative. It then becomes a strategic game: I don’t need a perfect forecast, just a better insight than my competitors.
Where do we go from here? I think there are three options. First, there could be a continuation of the status quo. The DSGE approach is so embedded in academia and central banks that we could simply see more of the same for some time to come. Second, there could be a pronounced shift towards a far more complex modelling approach, based on decision-making under uncertainty. However, this would be a very numbers-orientated approach, and the story behind the numbers wouldn’t necessarily be clear. Third, there could be a shift towards a much simpler approach using scenarios and stories, based on what I call a “macro moot”. A moot is a mock trial in which a proposition is tested to destruction, in order to help identify probabilities and potential triggers for alternative scenarios.
Following this approach, will the future of forecasting be based simply on wide reading and deep thought, producing far greater emphasis on the “stories”?