In The Wealth of Nations, Adam Smith wrote: “Little else is requisite to carry a state to the highest degree of opulence from the lowest barbarism but peace, easy taxes, and a tolerable administration of justice: all the rest being brought about by the natural course of things.” Others were less optimistic, arguing that nations are rich or poor because of differences in religion, culture, endowments or geography.
Modern economic development theories originate in thinking about how to reconstruct Europe in the aftermath of World War II. The European Recovery Program – the Marshall Plan – was based on the notion that economic growth can be stifled by local institutions and social attitudes, especially if these influence domestic savings and investment rates. According to this linear growth model, a correctly designed massive injection of capital, coupled with public sector intervention to address market failures, would ultimately lead to industrialisation and economic development. Many other economic development theories have since followed, but none has been able to explain convincingly why some countries experience rapid economic growth and others do not.
The development community has continued its quest for the missing ingredient that would ignite economic growth. Candidates have included capital, technology, policies, institutions, better politics and market integration. Every time we think we have identified what is missing, we find that it is not something that can be provided from outside, but turns out to be an endogenous characteristic of the system itself. Traditionally, development assistance has been rooted in an engineering, mass-production, conveyor-belt mentality, with agencies promoting “silver bullet” solutions for such complex problems as eradicating malaria, reducing vulnerability, improving resilience and strengthening connectivity. Unfortunately, these piecemeal, one-step-at-a-time development programmes have often failed to deliver.
Increasingly, complexity thinking – a way of understanding how elements of systems interact and change over time – has found its way into the development discourse. After all, what could be more complex than promoting development, sustainability, human rights, peace and governance? We should think of the economy and society as a rich set of interactions between large numbers of adaptive agents, all of which are coevolving. On this view, development is not just an increase in outputs, but the emergence of an interlinked system of economic, financial, legal, social and political institutions, firms, products and technologies. Together these elements and their interactions provide citizens with the capabilities to live happy, healthy and fulfilling lives.
Once we look at development as the outcome of a complex adaptive system, instead of as the sum of what happens to individual people and firms, we gain better insights into how to help accelerate and shape it. We would be more effective if we assessed development challenges through this prism of complex adaptive systems. Doing so could yield important insights about how best to prioritise, design and deliver holistic development programmes for achieving the multiple goals of inclusiveness, sustainability and economic growth that underpin the 2030 Sustainable Development Agenda. There is increasing support in aid agencies for the idea that solutions to complex problems must evolve through trial and error – and that successful programmes are likely to differ for each local context, with its particular history, natural resources and webs of social relations. The key for anyone engaged in the aid business is to put their preconceived ideas aside and first observe, map and listen carefully to identify the areas where change for the better is already happening, and then try to encourage and nurture that change further.
Complexity matters particularly when the knowledge and capacities required for tackling problems are spread across actors without strong, formalised institutional links. Inherent to many complex problems are divergent interests, conflicting goals or competing narratives. Moreover, it is often unclear how to achieve a given objective in a specific context, or how to manage change processes that involve significant, unpredictable forces. At the same time, the counsel of complexity should not be taken as a counsel of despair for development. There has been immense social and economic progress, and development assistance has been found to be helpful overall. Development co-operation has contributed to economic objectives by helping developing countries connect their firms to international markets; to social objectives by making globalisation pro-poor and reducing inequalities; and to environmental objectives by supporting adaptation to climate change while exploiting comparative advantages.
Not all development challenges are inherently complex, though. For those that are, complexity should not be used as an excuse for fatalism and inertia. Instead we should strive to promote innovation, experimentation and renewal. We should build partnerships to learn about the past, allowing us to shape approaches that are more likely to work and that are owned by the people we are trying to help. They will tell us what is working and what is not. Together we should build a narrative for change involving many different voices and perspectives. We should also be modest and realise that it might be better to start small, learning and adapting as we go along in iterative processes of dialogue. We should keep looking for change, scanning widely for new factors emerging in the wider world, and listen to a wide range of opinions so that we can better anticipate, adapt and seize opportunities.
Embracing complexity where it matters will allow us to contribute more effectively to the 2030 Sustainable Development Agenda.
The OECD organised a Workshop on Complexity and Policy on 29-30 September 2016 at OECD headquarters in Paris, along with the European Commission and INET.
Despite the recent drought in California, farms have continued to supply water-intensive crops such as fruits and nuts to consumers both in the US and around the world. Doing so has not always been easy for farmers – or for the environment. Agricultural producers turned to groundwater to irrigate their crops, pumping so intensively that in some parts of the state the ground started sinking because the water table had fallen so far.
The south-western United States is not an isolated case. The green fields of India’s Punjab state hide a similar problem. Groundwater supplies around 60% of India’s water needs for agriculture but the country suffers from depletion and pollution of this water resource in approximately 60% of its states. In Punjab, India’s breadbasket, demand for water already outstrips supply by 38%.
These are just two examples of a growing global policy challenge. The disruption that climate change poses to water supplies in many parts of the world only increases the importance of managing this resource correctly. Getting groundwater policy right could ensure that farmers have supplies of water to last them through dry periods.
The OECD and the International Food Policy Research Institute (IFPRI) have organised a panel discussion on groundwater and agriculture at the Global Forum for Food and Agriculture (GFFA) 2017 on Friday 20 January in Berlin. The speakers will discuss how this vital resource for agriculture around the world can be properly managed to ensure that policy decisions taken today will protect future food production. The outcomes of the discussion will feed into the following day’s GFFA meeting of agriculture ministers, where the topic of water and agriculture will be discussed.
Groundwater supplies need to be properly managed because this resource can provide a reliable, on-demand source of water to irrigate crops. Groundwater accounts for over 40% of global irrigation on almost 40% of irrigated land and has become indispensable for agricultural production in many countries. It accounts for half of South Asia’s irrigation and supports two-thirds of grain crops produced in China. OECD countries alone extract an estimated 123.5 km3 of groundwater each year to irrigate semi-arid areas.
This heavy use of groundwater has become unsustainable in many regions. High rates of extraction may boost production today but doing so also causes problems such as land subsidence, salinisation, and other forms of land and water quality degradation.
These knock-on effects may be putting global food security at risk.
Already a number of OECD regions are facing challenges in pumping water out of the ground. A quarter of surveyed irrigating regions in the OECD that use groundwater are seeing a major reduction in well yields as well as significant increases in pumping costs (see Figure 1).
Importantly, there are measures that policy makers can implement to ensure that groundwater can continue to help feed billions of people around the world.
“You can’t manage what you can’t measure” has become a mantra for groundwater campaigners in California. The same approach must be applied in countries around the world. Greater information needs to be collected about stocks and flows over time – data without which it becomes almost impossible to implement effective management.
And where groundwater stresses are identified, governments must put in place measures that not only reduce water demand, but also take into account how surface water and groundwater interact. These measures would go some way to preventing collapses in water supply for agriculture. Excess groundwater demand in Punjab, for example, could be curbed by providing information on best practice to farmers and by realigning economic incentives away from electricity and crop subsidies and towards sustainable irrigation systems.
A locally focused package of regulatory, economic and collective-action approaches should be introduced in areas of intensive groundwater use. This package should support a well-defined groundwater entitlement system, incentivise efficient resource use and, importantly, involve the local users. In California, the state government introduced the 2014 Sustainable Groundwater Management Act, under which local agencies are being formed to develop regionally specific, long-term water management programmes with defined sustainability objectives.
Groundwater has the potential to act as a natural insurance mechanism for farmers, so that they are not reliant on surface water to continue to produce in times of drought. This resource would support them in an increasingly volatile climate and allow us to keep producing the food demanded by a growing global population.
More information on the GFFA panel discussion can be found here together with a list of the speakers.
The OECD’s review of groundwater policies in agriculture, which includes 16 country profiles, can be found here.
An overview of the OECD’s work on water use in agriculture can be found here.
IFPRI’s work on water policy can be found here.
Laurent Bossard, Director, OECD Sahel and West Africa Club (SWAC) Secretariat
The latest SWAC/OECD publication, Cross-Border Co-operation and Policy Networks in West Africa, addresses the crucial but often overlooked issue of cross-border co-operation, employing an analytical approach rarely used in the development field, and in West Africa in particular: social network analysis. These two unique features of the publication make for enriching reading.
More than 46% of West Africa’s agglomerations and over half of the West African urban population are located within 100 km of a border. In fact, no place in Benin, The Gambia, Guinea-Bissau or Togo is more than 100 km from a border. Border areas cover two-thirds of Guinea, Senegal and Sierra Leone, and more than half of Burkina Faso and Ghana, and they are home to the vast majority of the Mauritanian and Nigerien populations. These figures show the importance that we should place on border and cross-border dynamics, particularly in relation to agro-pastoral development and food security, health and education, the management and preservation of the environment and, of course, security issues. The only things that stop at borders are national policies. Everything else passes through: goods, information and people, but also crises and instabilities. People living close to borders, their networks, villages and towns are the foundations of powerful transnational processes of de facto regional integration, whilst de jure integration continues to struggle with implementation challenges.
The idea of reconciling “bottom-up” integration with “top-down” integration through cross-border co-operation policies is slowly progressing. Cross-border co-operation, promoted at the turn of the century by former Malian President Alpha Oumar Konaré, now benefits from programmes led by the African Union Commission, ECOWAS and UEMOA, whilst a number of international initiatives are also underway. However, this approach remains marginal in the public policies of West African countries and in the portfolios of development co-operation institutions, due largely to the persistence of legal and financial constraints.
It is true that these initiatives are taking place in a context marked by an upsurge in transnational terrorism, which, as in Mali, Nigeria and the Lake Chad Basin, encourages the international community and African countries to attach increasing importance to the security of borders. More than ever, border control is a crucial issue for the stability of states and the prosperity of West Africans. As underscored by the International Organization for Migration’s (IOM) recent report on the management of Mali’s borders, a new balance must be found “between control and free movement so that border areas can fully facilitate integration and peace”.
But for this to happen, we need to understand the co-operation potential of border regions and the functioning of the public policy networks that enable cross-border actors to collaborate.
There are many publications that describe cross-border dynamics. However, few studies have attempted to systematically map the regions that are most favourable to cross-border co-operation, or to visualise the structure of co-operation networks. The analysis of cross-border policy networks presented in this publication is a welcome development for all actors involved in cross-border co-operation in West Africa.
It highlights, for the first time, how cross-border governance networks are organised, how information circulates between partners of different kinds, and who the most central actors are, thus facilitating an understanding of these largely informal dynamics. Beyond the academic field, social network analysis is also an empowering tool for local communities and non-governmental organisations, as well as an operational tool for international organisations and governments.
Social network analysis is also pertinent for understanding the functioning of cross-border co-operation networks as it illustrates the complexity of both the relationships that exist between actors and their geographical locations, particularly when networks operate across borders. Furthermore, by providing information at a more detailed and geographically local level, the analysis is more relevant for local actors and the conditions they operate within, enabling local characteristics to be accounted for within national and international strategies. Future African public policies must utilise this approach to better match local realities. At the same time, public policies need to better integrate border areas where there is significant potential for development and regional integration.
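To give a sense of the method, here is a minimal sketch of the kind of analysis involved, using the open-source Python library networkx. The actors and ties below are invented for illustration and are not drawn from the publication’s data; betweenness centrality is one standard measure for spotting the “brokers” through whom information circulates in a co-operation network.

```python
import networkx as nx

# Hypothetical cross-border co-operation network: nodes are actors
# (local councils, NGOs, ministries, a regional body); edges are
# working ties. All names are invented for illustration.
edges = [
    ("Border town council A", "Border town council B"),
    ("Border town council A", "Cross-border health NGO"),
    ("Border town council B", "Cross-border health NGO"),
    ("Cross-border health NGO", "Ministry, country A"),
    ("Cross-border health NGO", "Ministry, country B"),
    ("Ministry, country A", "Regional organisation"),
    ("Ministry, country B", "Regional organisation"),
    ("Traders' association", "Border town council A"),
    ("Traders' association", "Border town council B"),
]
G = nx.Graph(edges)

# Betweenness centrality: which actors sit on the shortest paths
# between everyone else, i.e. the brokers of the network?
for actor, score in sorted(nx.betweenness_centrality(G).items(),
                           key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {actor}")
```

In a sketch like this the NGO scores highest: it bridges the local and national layers, exactly the kind of largely informal pattern that the publication’s network maps make visible.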
Brian Dowd, FocusEconomics
Laurence J. Peter, a Canadian educator and author, is often quoted as saying, “an economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.”
Economics and especially economic forecasting are often given a bad rap. Many people think of forecasting as akin to licking a finger and testing the wind. However, there is a science to it.
Forecasting is essentially attempting to predict the future, and predicting the future behavior of anything – much less something as complex and enormous as an entire economy – is not an easy task, to say the least. Accurate forecasts, therefore, are often in short supply.
There are a few reasons for this. The first is that economies are in perpetual motion, so extrapolating behaviors and relationships from past economic cycles into the next one is, as one might imagine, tremendously complicated.
The second reason, and perhaps the most surprising, has to do with the vast amount of raw economic data available. In an ideal world, economic forecasts would consider all of the information available. In the real world, however, that is nearly impossible, as information is scattered in myriad news articles, press releases, government communications, along with the aforementioned mountain of raw data.
Although some might consider having all of that information an advantage, nothing could be further from the truth. The thousands of available economic indicators and data series tend to produce a vast amount of statistical noise, making it a serious challenge to establish meaningful causal relationships between variables.
And, of course, we cannot forget the uncertainty inherent in forecasting, something that forecasters must take into account and which creates even more noise to deal with.
The question then becomes, is there a way to cancel out all of that noise to get a more accurate forecast? This is where the wisdom of the crowds comes in.
Is there wisdom in the crowds?
To illustrate how the wisdom of the crowds works, it’s best to tell the story of Sir Francis Galton, a Victorian polymath, who was the first to note the wisdom of the crowds at a livestock fair he visited in 1906. In one event, fairgoers were given the opportunity to guess the weight of an ox. The person with the closest guess to the actual weight would win a prize.
Galton hypothesized that not one person would get the answer right, but that everyone would get it right. Bear with me.
Over 750 participants made their guesses and, unsurprisingly, no one guessed the weight perfectly. However, when Galton calculated the mean of all the guesses, it turned out to match the weight of the ox almost exactly: 1,198 pounds.
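The arithmetic behind the story is easy to replicate. Below is a minimal simulation rather than Galton’s actual data: it assumes 787 fairgoers (roughly the number of valid entries Galton reported) whose guesses scatter independently around the true weight, with an error spread invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_weight = 1198                 # pounds: the ox's actual weight

# Hypothetical crowd: each guess is the truth plus an individual error.
# The 75 lb spread is an assumption made purely for illustration.
guesses = true_weight + rng.normal(loc=0, scale=75, size=787)

print(f"typical individual error: {np.abs(guesses - true_weight).mean():.0f} lb")
print(f"crowd mean: {guesses.mean():.0f} lb   (actual: {true_weight} lb)")
```

Individual guesses are off by around 60 lb on average, yet their mean lands within a few pounds of the truth: the independent errors largely cancel out.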
Tapping economic analysts’ wisdom with consensus forecasts
The basic idea of the wisdom of the crowds is that the average of the answers of a group of individuals is often more accurate than the answer of any one individual expert. This was evident in the story of Galton’s experiment at the fair.
How accurate the wisdom of the crowds is depends on the number of participants and the diversity of their expertise. The more participants involved, and the more diverse those participants are, the lower the margin of error.
So what does the wisdom of the crowds have to do with economic forecasting? Remember all of that noise that makes economic forecasting so difficult and, as a result, hurts the accuracy of forecasts? The theory is that idiosyncratic noise attaches to any one individual answer, and that taking the average of multiple answers lets the noise cancel itself out, presenting a far more accurate picture of the situation.
Sometimes also referred to as simply combining forecasts, the consensus forecast borrows from the same idea of Galton’s wisdom of the crowds – a consensus forecast is essentially the average of forecasts from various sources. Averaging multiple forecasts cancels out the statistical noise to yield a more accurate forecast.
But don’t take my word for it. Over the last few decades there has been a great deal of empirical research that has shown consensus forecasts to increase forecast accuracy, including those cited below.
With that said, it is possible for an individual forecast to beat the consensus; however, it is unlikely that the same forecaster will do so consistently, one forecast period after another. Moreover, the individual forecasts that do happen to beat the consensus in one period are impossible to pick out ahead of time, since they vary significantly from period to period.
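A small simulation, using invented numbers rather than real forecast data, illustrates both points: the consensus is hard to beat on average, and the identity of the best individual forecaster keeps changing from period to period.

```python
import numpy as np

rng = np.random.default_rng(1)
n_analysts, n_periods = 25, 40

truth = rng.normal(5.0, 0.4, size=n_periods)         # actual outcomes
bias = rng.normal(0.0, 0.3, size=n_analysts)         # each analyst's persistent slant
noise = rng.normal(0.0, 0.5, size=(n_analysts, n_periods))
forecasts = truth + bias[:, None] + noise            # one row per analyst

ind_err = np.abs(forecasts - truth)                  # individual errors
con_err = np.abs(forecasts.mean(axis=0) - truth)     # consensus errors

print(f"consensus mean abs error:    {con_err.mean():.3f}")
print(f"best analyst mean abs error: {ind_err.mean(axis=1).min():.3f}")

# In most single periods somebody beats the consensus...
print(f"periods where an analyst beats the consensus: "
      f"{(ind_err.min(axis=0) < con_err).sum()} of {n_periods}")
# ...but the winner is a different analyst almost every time.
print(f"distinct period winners: {np.unique(ind_err.argmin(axis=0)).size} analysts")
```

On a run like this the consensus error is several times smaller than that of the typical analyst, even though some analyst or other beats it in nearly every individual period.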
Taking a look at a practical example may serve to clear things up a bit further.
A practical example of a consensus forecast
In the graph above, the Consensus Forecast for Malaysia’s 2015 GDP taken in January 2015 was 5.1%. All the other points, marked in grey, along the same axis represent the individual forecasts from 25 prominent sources taken at the same time.
In March 2016, the actual reading came out at 5.0%. A few forecasts were closer to the end result, however, as mentioned previously, some individual forecasts are going to beat the consensus from time to time, but it won’t happen consistently and it would be impossible to know which forecasts those will be until after the fact.
The second graph uses the same example as before: 25 different economic analysts forecast Malaysia’s 2015 GDP in January 2015. Against the actual reading published in March 2016, the maximum forecast turned out to be 16% too high and the minimum 10% too low, while the consensus was only 1.9% above the actual reading. By taking the average of all forecasts, the upside and downside errors of the different forecasts mostly cancelled each other out. As a result, the consensus forecast was much closer to the actual reading than the majority of the individual forecasts.
Consistency and reducing the margin of error are key
The point to keep in mind is that whether they are consensus forecasts, individual forecasts or any other kind of forecast, predictions of the future are seldom going to be perfect. In the Malaysia GDP example, the consensus wasn’t spot on, but it certainly did reduce the margin of error. There is almost always going to be some error; the key is to reduce it, and averaging does so more often than not.
The consensus not only reduces the margin of error, it also provides consistency and reliability. As mentioned previously, an individual forecaster can beat the consensus, but it is impossible to know ahead of time which of hundreds of forecasts will be the most accurate. As our example shows, the forecasts of individual analysts can vary significantly from one to another, whereas the consensus consistently provides comparatively accurate forecasts.
Forecasting isn’t perfect, but does it need to be?
Forecasting is a science, but it isn’t an exact science. Forecasts may not be perfect, but they are still very important to businesses and governments, as they shed light on an uncertain future and help decision makers set strategy, plans and budgets.
So, should you trust forecasts? That is a tough question to answer. Yes, forecasting is complicated and, yes, forecasts are notoriously inaccurate, and there are few ways to consistently improve their accuracy. The point, however, is that forecasts don’t need to be perfect to be useful; they just need to be as accurate as possible. One way to achieve that is to leverage the wisdom of a crowd of analysts to produce a consensus forecast.
As French mathematician, physicist and philosopher Henri Poincaré put it, “It is far better to foresee even without certainty than not to foresee at all.”
The consensus forecast is a more accurate way to “foresee.”
Academic research on consensus forecasts
“Consider what we have learned about the combination of forecasts over the past twenty years. (…) The results have been virtually unanimous: combining multiple forecasts leads to increased forecast accuracy. This has been the result whether the forecasts are judgmental or statistical, econometric or extrapolation. Furthermore, in many cases one can make dramatic performance improvements by simply averaging the forecasts.” – Clemen, Robert T. (1989), “Combining forecasts: A review and annotated bibliography”, International Journal of Forecasting 5: 559-560.
“A key reason for using forecast combinations […] is that individual forecasts may be differently affected by non-stationarities such as structural breaks caused by institutional change, technological developments or large macroeconomic shocks. […] Since it is typically difficult to detect structural breaks in ‘real time’, it is plausible that on average, across periods with varying degrees of stability, combinations of forecasts from models with different degrees of adaptability may outperform forecasts from individual models.” – Aiolfi, M. and A. Timmermann (2004), “Structural Breaks and the Performance of Forecast Combinations”.
Responsible Algorithms in Business: Robots, fake news, spyware, self-driving cars and corporate responsibility
Roel Nieuwenkamp, Chair of the OECD Working Party on Responsible Business Conduct (@nieuwenkamp_csr)
Why is the topic of robots frequently being raised at recent conferences on responsible business conduct? In October last year, for example, the Polish Deputy Prime Minister noted the connection between robotisation and corporate responsibility when opening the conference in Warsaw celebrating the 40th anniversary of the OECD Guidelines for Multinational Enterprises.
The potential negative impacts of robots and automated systems have proved cause for concern. In May 2010 there was a trillion-dollar stock market crash – a ‘flash crash’ – attributed to algorithmic trading, or in other words robot investors. And let’s not forget the mathematical models that contributed to the financial crisis of 2007-2008. Recent events surrounding fake news, with Pizzagate as the most extreme example, have added to these concerns.
What is the common denominator of these automated systems? Algorithms! These rule-based processes for solving mathematical problems are being applied to more and more areas of our daily lives. We are likely only at the beginning of the era of algorithms, and their widespread application is raising many ethical questions for society, and for businesses in particular.
For example, “killer robots” – weapons systems that select and attack targets without meaningful human control – raise questions about the dehumanisation of killing and about who is responsible. In December the United Nations decided to set up an expert group to look into this issue, following the ‘Stop Killer Robots’ campaign by Human Rights Watch and other NGOs. While self-driving cars will never be at risk of driving while intoxicated, they can make decisions that would pose moral dilemmas for humans. Online face recognition technology raises concerns about privacy. These are just a few examples.
The pervasive use of algorithms may have many unintended consequences. In her book ‘Weapons of Math Destruction’, Cathy O’Neil describes how algorithms combined with big data increase inequality and threaten democracy. She provides examples from the financial crisis and the housing market, but also of a college student who does not get a minimum-wage job in a grocery store because of answers provided on a personality test, and of people whose credit card spending limits are lowered because they shopped at certain stores. She also discusses predictive policing models, such as those that predict recidivism, and algorithms that send police to patrol areas on the basis of crime data, which can have a racist effect because of harmful, self-fulfilling feedback loops.
Scholars and practitioners in this field are beginning to consider the ethical implications of applying algorithms. Julia Bossmann of the Foresight Institute has described her top nine ethical issues in artificial intelligence. Prof. Susan Leigh Anderson of the University of Connecticut stated: “If Hollywood has taught us anything, it’s that robots need ethics.” Cathy O’Neil proposes a ‘Hippocratic oath’ for data scientists. Recently a group of scholars developed Principles for Accountable Algorithms. In the private sector, SpaceX CEO Elon Musk and other business leaders have founded OpenAI, an R&D company created to address ethical issues related to artificial intelligence. Amazon, Facebook, DeepMind, IBM and Microsoft have founded the Partnership on Artificial Intelligence to Benefit People & Society, which seeks to facilitate dialogue on the nature and purpose of artificial intelligence and its impacts on people and society at large. It is encouraging that such industry efforts are being undertaken. One thing, moreover, should be clear for businesses that create and use these technologies: when things go wrong, using algorithms as a scapegoat won’t do the trick.
What guidance on these issues can be found in the most important instrument on business ethics, the OECD Guidelines for Multinational Enterprises, a multilateral agreement of 46 states on corporate responsibility? Cases brought to National Contact Points, the globally active complaints mechanism of the Guidelines, provide a good illustration of what the Guidelines recommend in this respect. For example, in February 2013 a consortium of NGOs led by Privacy International (PI) submitted a complaint to the UK National Contact Point (NCP) alleging that Gamma International had supplied a spyware product – FinFisher – to agencies of the Bahraini government, which then used it to target pro-democracy activists.
The NCP concluded that Gamma had not acted consistently with the provisions of the OECD Guidelines requiring enterprises to carry out appropriate due diligence, to make a policy commitment to respect human rights and to remediate human rights impacts. Furthermore, the company’s approach did not meet the OECD Guidelines’ standards on respecting human rights, and the company’s engagement with the NCP process was unsatisfactory, particularly in view of the serious nature of the issues. The NCP recommended that the company engage in human rights due diligence.
What is human rights due diligence, and what does it mean for companies developing algorithms? Under the Guidelines, due diligence is a process that corporations should carry out as part of a broader range of actions to respect human rights. The rights to privacy, freedom of speech, and freedom from torture and arbitrary detention are examples of the many human rights that could be impacted. Due diligence is the process of identifying, preventing and mitigating actual and potential adverse human rights impacts, and accounting for how these impacts are addressed. Where there is a risk of severe human rights impacts, a heightened form of due diligence is recommended. For example, significant caution should be taken with the sale and distribution of surveillance technology when the buyer is a government with a poor human rights track record. Due diligence should be applied not only to a company’s own activities but across its business relationships. For a company producing algorithms, it is therefore not sufficient to behave responsibly in its own operations; due diligence should also be applied to ensure that buyers of the technology are not using it irresponsibly. Where they are, the company that created and sold the technology is expected to use its leverage in the value chain to prevent or mitigate the impact.
A number of valuable tools to respect human rights and implement the ’know your client’ principle have been developed in the context of ICT business operations. For example, the European Commission has developed a useful guide for companies on respecting human rights in the ICT sector. TechUK, an industry association of ICT companies in the UK, in partnership with the UK government has published a guide on how to design and implement appropriate due diligence processes for assessing cyber security export risks. Additionally the Electronic Frontier Foundation has developed a guide on How Corporations Can Avoid Assisting Repressive Regimes and the Global Network Initiative has developed Principles on Freedom of Expression and Privacy.
Beyond the human rights-related recommendations, the OECD Guidelines make other recommendations relevant for companies developing algorithms. The Environment Chapter, for example, recommends environmental, health and safety impact assessments. The Consumer Chapter advises companies to provide accurate, verifiable and clear information to consumers. In addition, companies should respect consumer privacy and take reasonable measures to ensure the security of personal data that they collect, store, process or disseminate.
Businesses that create algorithms should carry out due diligence on potential human rights impacts, as well as on labour, environmental, and health and safety impacts. They should provide accurate, verifiable and clear information about their algorithms and take measures to protect personal data. Collaborative industry efforts on responsible algorithms are much needed to shape these expectations in concrete terms. Responsible algorithms will not only generate profit, but will protect the rights of individuals worldwide while doing so.
See also: “There’s an algorithm for that. Or there soon will be”, Marina Bradbury on OECD Insights.
 OECD Guidelines for Multinational Enterprises, Chapter VI.3
 OECD Guidelines for Multinational Enterprises, Chapter VIII.2
 OECD Guidelines for Multinational Enterprises, Chapter VIII.6
Andy Haldane, Chief Economist and Executive Director, Monetary Analysis & Statistics, Bank of England
It would be easy to become very depressed at the state of economics in the current environment. Many experts, including economics experts, are simply being ignored. But the economic challenges facing us could not be greater: slowing growth, slowing productivity, the retreat of trade, the retreat of globalisation, high and rising levels of inequality. These are deep and diverse problems facing our societies and we will need deep and diverse frameworks to help understand them and to set policy in response to them. In the pre-crisis environment when things were relatively stable and stationary, our existing frameworks in macroeconomics did a pretty good job of making sense of things.
But the world these days is characterised by features such as discontinuities, tipping points, multiple equilibria, and radical uncertainty. So if we are to make economics interesting and the response to the challenges adequate, we need new frameworks that can capture the complexities of modern societies.
We are seeing increased interest in using complexity theory to make sense of the dynamics of economic and financial systems. For example, epidemiological models have been used to understand and calibrate regulatory capital standards for the largest, most interconnected banks, the so-called “super-spreaders”. Less attention has been paid to using complexity theory to understand the overall architecture of public policy – how the various pieces of the policy jigsaw fit together as a whole in relation to modern economic and financial systems. These systems can be characterised as a complex, adaptive “system of systems”: a nested set of sub-systems, each one itself a complex web. The architecture of a complex system of systems means that policies with varying degrees of magnification are necessary to understand and to moderate fluctuations. It also means that taking account of interactions between these layers is important when gauging risk.
Although there is no generally accepted definition of complexity, that proposed by Herbert Simon in The Architecture of Complexity – a system “made up of a large number of parts that interact in a non-simple way” – captures its everyday essence well. The whole behaves very differently from the sum of its parts. The properties of complex systems typically give rise to irregular, and often highly non-normal, statistical distributions for these systems over time. This manifests itself as much fatter tails than a normal distribution would suggest. In other words, system-wide interactions and feedbacks generate a much higher probability of catastrophic events than Gaussian distributions would imply.
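A toy simulation, with invented parameters, makes the point. It compares independent Gaussian shocks with a minimal feedback process (an ARCH(1)-style rule in which today’s volatility depends on yesterday’s shock, a crude stand-in for system-wide interactions); the feedback alone is enough to fatten the tails and multiply the probability of extreme events.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000

# Benchmark: i.i.d. Gaussian shocks, no interaction between periods.
gauss = rng.standard_normal(T)

# Feedback process (ARCH(1)-style; parameters chosen for illustration):
# today's variance rises with the size of yesterday's shock.
omega, alpha = 0.5, 0.5
x = np.zeros(T)
for t in range(1, T):
    sigma2 = omega + alpha * x[t - 1] ** 2
    x[t] = np.sqrt(sigma2) * rng.standard_normal()

for name, z in (("gaussian", gauss), ("feedback", x)):
    z = (z - z.mean()) / z.std()        # standardise
    kurt = (z ** 4).mean() - 3.0        # excess kurtosis: 0 for a normal
    tail = (np.abs(z) > 4).mean()       # share of 4-sigma events
    print(f"{name:8s}  excess kurtosis {kurt:5.2f}   P(|z| > 4) {tail:.1e}")
```

The feedback series shows strongly positive excess kurtosis and an order-of-magnitude higher frequency of 4-sigma events, even though its one-period shocks are themselves Gaussian.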
For evolutionary reasons of survival of the fittest, Simon posited that “decomposable” networks were more resilient and hence more likely to proliferate. By decomposable networks, he meant organisational structures which could be partitioned such that the resilience of the system as a whole was not reliant on any one sub-element. This may be a reasonable long-run description of some real-world complex systems, but less suitable as a description of the evolution of socio-economic systems. The efficiency of many of today’s networks relies on their hyper-connectivity. There are, in the language of economics, significantly increasing returns to scale and scope in a network industry. Think of the benefits of global supply chains and global interbank networks for trade and financial risk-sharing. This provides a powerful secular incentive for non-decomposable socio-economic systems.
Moreover, if these hyper-connected networks do face systemic threat, they are often able to adapt in ways which avoid extinction. For example, the risk of social, economic or financial disorder will typically lead to an adaptation of policies to prevent systemic collapse. These adaptive policy responses may preserve otherwise-fragile socio-economic topologies. They may even further encourage the growth of connectivity and complexity of these networks. Policies to support “super-spreader” banks in a crisis for instance may encourage them to become larger and more complex. The combination of network economies and policy responses to failure means socio-economic systems may be less Darwinian, and hence decomposable, than natural and biological systems.
Andy Haldane addresses OECD New Approaches to Economic Challenges (NAEC) Roundtable
What public policy implications follow from this complex system of systems perspective? First, it underscores the importance of accurate data and timely mapping of each layer in the system. This is especially important when these layers are themselves complex. Granular data is needed to capture the interactions within and between these complex sub-systems.
Second, modelling of each of these layers, and their interaction with other layers, is likely to be important, both for understanding system risks and dynamics and for calibrating potential policy responses to them.
Third, in controlling these risks, something akin to the Tinbergen Rule is likely to apply: there is likely to be a need for at least as many policy instruments as there are complex sub-components of a system of systems if risk is to be monitored and managed effectively. Put differently, an under-identified complex system of systems is likely to result in a loss of control, both system-wide and for each of the layers.
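The Tinbergen logic can be seen in a stylised linear sketch (all coefficients below are invented): if targets respond linearly to instruments, three independent targets can only be hit exactly with at least three instruments.

```python
import numpy as np

# Hypothetical linear policy model: targets = A @ instruments.
# Rows are targets (say, system-wide risk plus risk in two sub-systems);
# columns are policy instruments. Coefficients are invented.
A2 = np.array([[1.0, 0.5],
               [0.3, 1.0],
               [0.8, 0.2]])                    # 3 targets, only 2 instruments
A3 = np.hstack([A2, [[0.1], [0.7], [1.0]]])    # add a third instrument

goal = np.array([1.0, 0.5, 0.8])               # desired target values

for A in (A2, A3):
    x, *_ = np.linalg.lstsq(A, goal, rcond=None)
    gap = np.linalg.norm(A @ x - goal)         # shortfall at the best setting
    print(f"{A.shape[1]} instruments: residual gap {gap:.3f}")
```

With two instruments the best achievable setting still leaves a residual gap on the three targets; with a third (linearly independent) instrument the gap closes to zero, which is the Tinbergen Rule in miniature.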
In the meantime, there is a crisis in economics. For some, it is a threat. For others it is an opportunity to make a great leap forward, as Keynes did in the 1930s. But seizing this opportunity requires first a re-examination of the contours of economics and an exploration of some new pathways. Second, it is important to look at economic systems through a cross-disciplinary lens. Drawing on insights from a range of disciplines, natural as well as social sciences, can provide a different perspective on individual behaviour and system-wide dynamics.
The NAEC initiative does so, and the OECD’s willingness to consider a complexity approach puts the Organisation at the forefront of bringing economic analysis and policy-making into the 21st century.
This article draws on contributions to the OECD NAEC Roundtable on 14 December 2016; The GLS Shackle Biennial Memorial Lecture on 10 November 2016; and “On microscopes and telescopes”, at the Lorentz centre, Leiden, workshop on socio-economic complexity on 27 March 2015.
Noe van Hulst, Ambassador of the Netherlands to the OECD
As we start a year that Ian Bremmer (President of the Eurasia Group) has described as the beginning of a ‘geopolitical recession’, it is worth asking what the OECD focus could be in 2017. I see two key issues worth highlighting in this context. First: Escaping the Low-Growth Trap. Escape games are popular nowadays, but this one is of eminent importance to all of us. In the latest Economic Outlook (November 2016) the OECD aptly demonstrated how global growth has been stuck at around 3% per year for the last five years. How can we get out of this low-growth trap? Now that extraordinarily accommodative monetary policy has reached its limits, the OECD recommends a more balanced policy set, with a much stronger role for collective fiscal action and for more inclusive structural and trade policies.
Although the Economic Outlook makes a passionate case for a more expansionary fiscal stance in many countries, the reality is that this is unlikely to happen. Partly because some countries are cautious in the light of a heavy public debt burden. Partly because they are already growing at or above potential growth, as we heard from Prof. Christoph Schmidt (Chairman of the German Council of Economic Experts) a week after the publication of the Economic Outlook. The reason that potential growth is so low has, of course, everything to do with the productivity slowdown that was – very appropriately – the main topic of the OECD Ministerial Council Meeting in June 2016. Against this background, I think we will find more common OECD ground in 2017 if we focus strongly on boosting smarter structural policies as the main avenue to get out of the low-growth trap.
Let me mention just two concrete examples. The first is harvesting the great potential of the digital economy, both a priority of the German G20 presidency and a promising new horizontal project within the OECD. The second is inclusive structural reforms, particularly in product markets, which can deliver short-term benefits in terms of output, investment and employment. Making reforms more inclusive is also about exploiting the complementarities between product and labour market reforms, the synergies between growth and equity objectives, and designing policy packages that help vulnerable groups or mitigate trade-offs. The launch of the 2017 OECD Going for Growth publication is an excellent opportunity to highlight this key point. Reinvigorating good old competition policy will also reinforce stronger and faster diffusion of new (digital) technologies from frontier to laggard firms and hence boost average productivity. Let’s not forget that structural policies are a traditional OECD strength, an area where the OECD holds a strong comparative advantage and rightly enjoys high international credibility.
What about inclusive trade policies? That is my second key issue for 2017. Global trade growth has been very weak relative to historic norms for five years. The general consensus is that the relationship between trade and GDP growth is undergoing a fundamental shift. In the ‘good old days’ we enjoyed trade growth at twice the rate of global GDP growth; now trade barely keeps pace with global output growth. According to OECD analysis this also contributes to the productivity slowdown. So what exactly is going on with trade? Is low trade growth somehow intertwined with the general global growth malaise? To what extent is it due to the contraction of global value chains, as reflected in OECD analysis? Or is the current slowdown in global trade only natural, and not a major concern? In any case, it is clear that the rise of trade restrictions in G20 countries, still continuing in stunning contradiction to countless G20 communiqués, is not helpful. Deeper OECD analysis is required to pin down more precisely how the different factors contribute to the trade slowdown, and how trade impacts labour markets and economic growth in different regions within countries.
Deeper analysis, however, is not enough. We definitely need to ask ourselves some tough questions about where the public backlash against trade and globalisation is coming from and what went wrong. And even more importantly, what we can and should do better. One area is the need to rebalance our trade and investment policies, towards a more fair, sustainable and inclusive system. Making the OECD Guidelines for Multinational Enterprises the centerpiece of trade and investment policies would be a concrete step. Another area is more effective complementary domestic policies to help people deal faster and more successfully with trade-related job losses if and when they occur. Ideally, this entails not only effective ‘safety net’ policies but also so-called “trampoline” policies offering a tangible springboard to new jobs.
In any case, it is obvious that trade and trade policies are politically more under fire now than at any time I can remember – and I am not young. As Martin Wolf wrote in the Financial Times, “The era of globalisation under a US-led order is drawing to a close…the question is whether protectionism and conflict will define the next phase”. For very open economies like the Netherlands it is of critical importance how this ‘next phase’ shapes up in 2017 and beyond. At this juncture, the Dutch economy is growing at a solid 2% per year (in 2016 and 2017) with unemployment coming down rapidly to 5%, but the downside risks are all related to where the global economy is heading. Many other OECD member countries have a similarly high exposure to shifts in the global economy. According to Open Market Index data from the International Chamber of Commerce (ICC), more than two-thirds of OECD countries have above-average openness, as measured by observed openness to trade, trade policy, foreign direct investment openness and infrastructure for trade.
The OECD has a crucial role to play, in cooperation with other international organisations, in clearly demonstrating the adverse impact of rising protectionism, in monitoring what’s happening in trade and stimulating policy dialogue on better alternatives that help global growth. In this light it is very fitting that the OECD Ministerial Meeting in June 2017 will focus on the theme of making globalization work for all. Let’s try to come up with concrete policy improvements that can help us preserve a well-functioning open global economy.