Brian Dowd, FocusEconomics
Laurence J. Peter, a Canadian educator and author, is often quoted as saying, “an economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.”
Economics and especially economic forecasting are often given a bad rap. Many people think of forecasting as akin to licking a finger and testing the wind. However, there is a science to it.
Forecasting is essentially an attempt to predict the future, and predicting the future behavior of anything, much less something as complex and enormous as an entire economy, is no easy task. Accurate forecasts, therefore, are often in short supply.
There are a few reasons for this. The first is that economies are in perpetual motion, so extrapolating behaviors and relationships from past economic cycles into the next one is, as one might imagine, tremendously complicated.
The second reason, and perhaps the most surprising, has to do with the vast amount of raw economic data available. In an ideal world, economic forecasts would consider all of the information available. In the real world, however, that is nearly impossible, as information is scattered in myriad news articles, press releases, government communications, along with the aforementioned mountain of raw data.
Although some might consider having all of that information an advantage, nothing could be further from the truth. The thousands of economic indicators and datasets available tend to produce a vast amount of statistical noise, making it a serious challenge to establish meaningful causal relationships between variables.
And, of course, we cannot forget the uncertainty inherent in forecasting, which forecasters must take into account and which creates even more noise to deal with.
The question then becomes, is there a way to cancel out all of that noise to get a more accurate forecast? This is where the wisdom of the crowds comes in.
Is there wisdom in the crowds?
To illustrate how the wisdom of the crowds works, it’s best to tell the story of Sir Francis Galton, a Victorian polymath, who was the first to note the wisdom of the crowds at a livestock fair he visited in 1906. In one event, fairgoers were given the opportunity to guess the weight of an ox. The person with the closest guess to the actual weight would win a prize.
Galton hypothesized that not one person would get the answer right, but that everyone would get it right. Bear with me.
Over 750 participants made their guesses and, unsurprisingly, no one guessed the weight exactly. However, when Galton calculated the mean of all of the guesses, it turned out, incredibly, to match the actual weight of the ox: 1,198 pounds.
Tapping economic analysts’ wisdom with consensus forecasts
The basic idea of the wisdom of the crowds is that the average of the answers of a group of individuals is often more accurate than the answer of any one individual expert. This was evident in the story of Galton’s experiment at the fair.
The accuracy of the wisdom of the crowds depends on the number of participants and the diversity of their expertise: the more participants involved and the more diverse they are, the lower the margin of error.
So what does the wisdom of the crowds have to do with economic forecasting? Remember all of that noise that makes economic forecasting so difficult and, as a result, hurts the accuracy of forecasts? The theory is that each individual answer carries its own idiosyncratic noise; by averaging multiple answers, that noise tends to cancel itself out, presenting a far more accurate picture of the situation.
Sometimes also referred to as simply combining forecasts, the consensus forecast borrows from the same idea of Galton’s wisdom of the crowds – a consensus forecast is essentially the average of forecasts from various sources. Averaging multiple forecasts cancels out the statistical noise to yield a more accurate forecast.
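The noise-cancelling logic can be sketched in a few lines of code. This is a minimal simulation, not a description of how any actual consensus forecast is compiled: it assumes each forecaster’s error is purely idiosyncratic (independent, zero-mean noise), which is the best case for averaging, and all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 5.0                    # the outcome being forecast (e.g. GDP growth, %)
n_trials, n_forecasters = 2000, 25

# Each forecast = truth + idiosyncratic noise (std. dev. of 1 percentage point)
forecasts = truth + rng.normal(0.0, 1.0, size=(n_trials, n_forecasters))

# Typical error of a single forecaster vs. error of the consensus (the mean)
individual_error = np.abs(forecasts - truth).mean()
consensus_error = np.abs(forecasts.mean(axis=1) - truth).mean()

print(f"average individual error: {individual_error:.2f} pp")
print(f"consensus error:          {consensus_error:.2f} pp")
```

With independent errors, the consensus error shrinks roughly with the square root of the number of forecasters, about a fivefold reduction with 25 of them; errors that are correlated across forecasters shrink far less, which is one reason averaging is no cure for herd behavior.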
But don’t take my word for it. Over the last few decades there has been a great deal of empirical research that has shown consensus forecasts to increase forecast accuracy, including those cited below.
With that said, it is possible for an individual forecast to beat the consensus; however, it is unlikely that the same forecaster will do so consistently, one forecast period after another. Moreover, the individual forecasts that do happen to beat the consensus in one period are impossible to pick out ahead of time, since they vary significantly from period to period.
Taking a look at a practical example may serve to clear things up a bit further.
A practical example of a consensus forecast
In the graph above, the Consensus Forecast for Malaysia’s 2015 GDP growth, taken in January 2015, was 5.1%. All the other points, marked in grey along the same axis, represent the individual forecasts from 25 prominent sources taken at the same time.
In March 2016, the actual reading came out at 5.0%. A few forecasts were closer to the end result; however, as mentioned previously, some individual forecasts will beat the consensus from time to time, but not consistently, and it is impossible to know which forecasts those will be until after the fact.
The second graph uses the same example as before: 25 different economic analysts forecast Malaysia’s 2015 GDP growth in January 2015. By March 2016, the maximum forecast turned out to be 16% above the actual reading and the minimum 10% below it, while the consensus was only 1.9% above the actual reading. By taking the average of all forecasts, the upside and downside errors of the different forecasts mostly cancelled each other out. As a result, the consensus forecast was much closer to the actual reading than the majority of the individual forecasts.
Consistency and reducing the margin of error are key
The point to keep in mind is that whether they are consensus forecasts or individual forecasts or any other kind of forecast, predicting the future is seldom going to be perfect. In the Malaysia GDP example, the Consensus wasn’t spot on, but it did certainly reduce the margin of error. It is important to note that there is almost always going to be some error, but reducing that error is the key, and more often than not, it will result in a more accurate forecast.
The consensus not only reduces the margin of error, but it also provides some consistency and reliability. As was mentioned previously, an individual forecaster can beat the consensus, however, it is impossible to know which of hundreds of forecasts will be the most accurate ahead of time. As is evident in our previous example, the forecasts from individual analysts can vary significantly from one to another, whereas the consensus will consistently provide accurate forecasts.
Forecasting isn’t perfect, but does it need to be?
Forecasting is a science, but it isn’t an exact science. Forecasts may not be perfect, but they are still very important to businesses and governments, as they shed light on an uncertain future, helping them make vital decisions on strategy, plans and budgets.
So, should you trust forecasts? That is a tough question to answer. Yes, forecasting is complicated and, yes, forecasts are notoriously inaccurate, and there are few ways to consistently improve forecast accuracy. The point, however, is that forecasts don’t necessarily need to be perfect to be useful; they just need to be as accurate as possible. One way to achieve that is to leverage the wisdom of a crowd of analysts to produce a consensus forecast.
As French mathematician, physicist and philosopher Henri Poincaré put it, “It is far better to foresee even without certainty than not to foresee at all.”
The consensus forecast is a more accurate way to “foresee.”
Academic research on consensus forecasts
“Consider what we have learned about the combination of forecasts over the past twenty years. (…) The results have been virtually unanimous: combining multiple forecasts leads to increased forecast accuracy. This has been the result whether the forecasts are judgmental or statistical, econometric or extrapolation. Furthermore, in many cases one can make dramatic performance improvements by simply averaging the forecasts.” – Clemen, Robert T. (1989), “Combining forecasts: A review and annotated bibliography”, International Journal of Forecasting 5: 559-560
“A key reason for using forecast combinations […] is that individual forecasts may be differently affected by non-stationarities such as structural breaks caused by institutional change, technological developments or large macroeconomic shocks. […] Since it is typically difficult to detect structural breaks in ‘real time’, it is plausible that on average, across periods with varying degrees of stability, combinations of forecasts from models with different degrees of adaptability may outperform forecasts from individual models.” – Aiolfi, M. and Timmermann, A. (2004), “Structural Breaks and the Performance of Forecast Combinations”
Roberto Astolfi, OECD Statistics Directorate
To some, Professor Luis Garicano of the London School of Economics is a leading expert in the fields of productivity and industrial organisation, but to many he’s the man Queen Elizabeth asked “Why did no one see it coming?”, “it” being the crisis. In retelling the story, Prof. Garicano pointed out that he welcomed the question, as it provided an opportunity to cite the many who did see it coming, including Messrs Krugman and Volcker. Was the OECD among them?
At the OECD, we use a number of techniques to determine what the data are telling us is happening now and what might happen in the future. Dave Turner from the OECD Economics Department describes some of those approaches here. One additional technique used by the OECD Statistics Directorate is the system of Composite Leading Indicators (CLIs). Simply put, the CLIs combine individual indicators for a given country to anticipate when economic expansion starts entering a downturn, or when growth starts to return. A relevant question in this context therefore is:
How useful were the OECD CLIs just before the crisis?
Perhaps the simplest way to answer the question is by reference to the headline messages announced in each of the monthly CLI Press Releases over the crisis period.
The first indication of potential trouble ahead came in September 2007 (Figure 1), when the headline assessment moved from “Mixed outlook” to “Moderating outlook”. With each successive bulletin, the announcements became more pessimistic: “Weakening outlook” in the months that followed gave way to “Downswing” in January 2008, and even February 2009’s “Lowest level since 70s” was followed by “New low” in March 2009.
From today’s estimates we know that the CLI for the OECD area as a whole reached its pre-crisis high in June 2007, six months before the actual GDP peak that we now locate in December 2007 (vertical solid red line and black dotted line, respectively, in Figure 1).
Using the most recent statistical information (in other words, including any revisions that may have been made in the interim), Gyomai and Guidetti concluded in 2011 that the “CLI was able to anticipate the downturn in the real economy at least 5 months ahead of its initial materialisation” (detailed results are available in the Statistics Newsletter).
A more stringent approach is to review the performance of the CLI at the time of the crisis using only the statistical information that was then available, as we do in the recently published Statistics Working Paper.
Figure 1: Evolution of CLI Press Release headlines during the Great Recession, OECD area
Note: The vertical lines identify the turning points detected by the CLIs for the OECD area as a whole (peak in June 2007 and trough in February 2009, marked in red) and GDP (marked in dotted black, with a peak in December 2007 and a trough in May 2009).
This approach is more ‘severe’ as formal identification of the turning points can only ever be confirmed some time after they manifest. Nevertheless, even with this more severe examination the latest results confirm the leading properties of the CLI while also indicating that the statistical and methodological revisions that have occurred since the crisis have not shifted CLI turning points to earlier dates, nor have they artificially improved the CLI performance.
Overall, then, the OECD CLIs proved to be a robust tool, anticipating the crisis some months before GDP reached its pre-crisis high watermark, and so perhaps they can be added to the list of illustrious names that can be quoted the next time somebody asks “Why did no one see it coming?”. Moreover, although by their very nature and design CLIs cannot quantify the magnitude of slowdowns or upturns, and so could not quantify the severity of the crisis, the increasingly downbeat tone of the assessments that followed the first warning in September 2007 provided strong pointers.
If you’re a regular visitor to the blog, you’ll know we often report on the latest OECD economic forecasts. After reading these, you may have found yourself asking this question: Is this really what’s going to happen? You’re not alone. The OECD’s economists, too, have been asking themselves the same question, and today they offered some answers.
In a “post-mortem” project, the OECD examined the accuracy of the economic forecasts it issued between 2007 and 2012, a period of seemingly endless economic turbulence. The study reports that, in general, the OECD tended to be too optimistic: it didn’t predict the scale of the collapse in economic activity in 2008-09 and then overestimated the speed of the subsequent recovery.
The errors were particularly notable for certain countries. When it came to the vulnerable Eurozone economies, for instance, the OECD was too hopeful of a quick resolution to the euro crisis. Equally, when it came to small, open economies, the OECD didn’t grasp just how connected these had become to the global economy, making them highly vulnerable to developments beyond their borders.
This review may seem a bit like navel-gazing, but there is a point to it. “We have learned a lot from the crisis,” OECD Chief Economist Pier Carlo Padoan said today in London, where he presented findings from the OECD review. He added that the experience gained over the past few years was helping to change how the organisation works and thinks – a process reflected in the OECD’s ongoing New Approaches to Economic Challenges project.
So what has the OECD learned (pdf)? One quite striking lesson is that the organisation’s economists are better at predicting upturns than downturns (so much for “the gloomy profession”). Between 2007 and 2012, they predicted 91% of accelerations in OECD countries over the following 12 months, but only 46% of decelerations.
The organisation says it also needs to take better account of the impact of the financial system on economies; its failure to do so is one reason why it tended to underestimate the impact of weak banking systems on economies. “The repeated deepening of the euro area sovereign debt crisis took us by surprise,” according to Pier Carlo Padoan, “because of the stronger-than-expected feedback between banking and sovereign weaknesses …”.
Of course, this isn’t the first time the crisis has illustrated what many see as a gap in how economists see the world. “There’s a lot of stuff that isn’t there – financial institutions, feedback effects,” William White, Chairman of the OECD’s Economic Development and Review Committee, remarked four years ago in OECD Insights: From Crisis to Recovery. “All of this stuff is very, very hard. I don’t want to disparage current modelling, but the fact of the matter is it’s all very hard. But I do think progress is being made – something has started.”
The review also reflects another oft-repeated theme from the crisis – namely, the tendency towards “groupthink,” or a herd mentality, in economic and financial analysis. As we reported here on the blog some time ago, an IMF self-evaluation in 2011 attributed some of its failures in the run-up to the crisis to “a high degree of groupthink” and “an institutional culture that discourages contrarian views”. Indeed, it’s notable the extent to which OECD forecasts between 2007 and 2012 matched those of other leading international agencies like the IMF, World Bank and European Commission. A lesson for the OECD, perhaps, is that it needs to encourage its people to “think out of the box” (apologies for the jargon).
But perhaps the most important lesson to come out of the crisis is that governments, economists and international organisations need to come to terms with the reality that they know less than they think they do. That might sound like a platitude, but it has important implications for economic forecasting, some of which are already evident in OECD work. Most notably, the OECD is now placing more emphasis on the health warnings surrounding its projections; in other words, it’s saying “here’s what we think will happen, but here’s why it might not”. Worth bearing in mind next time you read a forecast on the blog.
OECD Forecasts During and After the Financial Crisis: A Post Mortem (pdf) (An OECD Economics Department Policy Note)
New Approaches to Economic Challenges at the OECD
After enduring months, even years, of political campaigning, many Americans probably identified with little Abigael Evans this week: “I am tired of Bronco Bama and Mitt Romney,” the four-year-old wailed in a video that quickly went viral.
Don’t worry, the election’s all over … for now. But 2016 is not too far down the road and by then Abigael may have something new to complain about: Namely, all that talk about China finally overtaking the United States to become the world’s biggest economy. That change will come soon, possibly not before America chooses Barack Obama’s successor but certainly under the decade-long watch of the new Chinese leadership installed this week.
Of course, China has been here before. As the late Angus Maddison noted, China was the world’s biggest economy for almost 2000 years “but in the 1890s this position was taken by the United States”. But when the two giants swap places again, it will undoubtedly attract a lot of attention, not all of it favourable. In reality, the change will represent just a moment in a much longer shift in the balance of power in the global economy.
Today, the wealthy countries of the OECD area account for almost two-thirds (65%) of world GDP. But by the time little Abigael is in her mid-50s, their share will have fallen to not much more than two-fifths (43%). That will be below the forecast combined share for China and India of 46%; today those two economies account for just around a quarter of global economic activity. Average living standards in China and India, as measured by GDP per capita, will more than quadruple, although by 2060 they will still be well behind those of the leading economies.
All these forecasts come in a fascinating new paper from an OECD team led by Åsa Johansson, which attempts to set out the shape of economic things to come between now and 2060. The paper forecasts that, as soon as the debris from the great recession is tidied up, the world economy could start growing by about 3% a year.
The bulk of that growth – at least in the early years – will be in emerging economies like China and India, and it will be driven by a number of factors.
One is demographics. India and South Africa, for example, may enjoy a “demographic dividend” – a period when they have a lot of workers and not too many dependent retirees and children. China faces a different fate: Partly as a result of the one-child policy, it’s poised to go through what a World Bank report calls “wrenching demographic change” that could start shrinking the size of the labour force by 2015. That’s one reason why China’s share of the global economy is forecast to level off at 28% in 2030. India’s, by contrast, should keep growing, from 11% in 2030 to 18% in 2060.
But there’s an even more important factor behind the world’s shifting economic balance – rising productivity. Technology and a more highly skilled workforce are already allowing many emerging economies to shift from relying on agriculture and mass manufacturing towards more high-end production. Their businesses, too, are becoming more efficient and effective in how they operate. In the coming decades, these trends will only deepen.
Of course, the future is, by its nature, unknowable. So what might happen to blow all these forecasts off course? To quote the former British prime minister Harold Macmillan, “events, dear boy, events”. More importantly, and this is a point the OECD paper stresses, there is much that governments themselves could do to outpace current growth forecasts, especially in the areas of product-market regulation and in reforms to labour markets. Substantial change in these two areas alone, it argues, could add 16% to global GDP by 2060.
The OECD’s Chinese-language site – 网站 (中文)
To the uninitiated, a double Hodrick-Prescott filter probably sounds like something a 1920s spaceship used to reduce coal consumption during long flights. But as some of you probably know, and I certainly didn’t until this morning, it is in fact a mathematical means used to help spot turning points in a time series. So, it’s not rocket science, but it is every bit as complicated. And very useful in calculating the OECD Composite Leading Indicators, published today.
The turning points that the CLIs spot are those in the business cycle, and experience over the past 30 years shows that the turning points of the CLIs consistently precede those of the business cycle, with lead times of about 6 – 9 months. In fact in the animated graph of the OECD Business Cycle Clock, it looks like GDP is stalking the CLIs around the axes.
The CLIs work by comparing a country, or group of countries, with itself, looking at how the latest figures compare with long-term trends: is the country doing better or worse than its long-term trend suggests it should? They provide qualitative information rather than quantitative measures, but by forecasting changes in the direction of the economy, they help economists, businesses and policymakers to improve their analysis of current trends and anticipate economic developments.
The indicators used to build the CLIs include both facts and opinions, such as the number of new houses being built or a consumer confidence indicator. The actual components vary from one country to another, with Turkey for example including electricity production while Belgium’s CLI uses new passenger car registrations. You can find the components of the CLI for each OECD country, the BRIICS, and various regional groupings here.
So what do the latest figures reveal? Weakening growth in most major economies, I’m afraid. The CLIs for the United States and Japan continue to show signs of moderating growth while in Canada the CLI points to weak growth. In Germany, France, Italy and the Euro Area as a whole, the CLIs point to continued weakening growth. In China, the CLI points to soft growth, but tentative signs are emerging that the recent deterioration in the short-term outlook may have stabilised. In India and Russia, the CLIs continue to point to weak growth.
A new name to add to the list of those who want to know what caused the crisis: Queen Elizabeth II. During a visit to the London School of Economics, the British monarch asked economists why they didn’t do a better job of predicting the timing and scale of the slowdown, The Observer reports. “She seemed very interested, and she asked me: ‘How come nobody could foresee it?’,” Professor Luis Garicano of the LSE told the newspaper.
Stirred by the Queen’s query, some of Britain’s leading economic experts wrote to her to explain what they think went wrong. “Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well,” they told her. “The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction.” While the crisis had many causes, they concluded, “[it] was principally a failure of the collective imagination of many bright people, both in this country and internationally, to understand the risks to the system as a whole.”