The aid community should stop pretending to know the answers and start asking the right questions.

Frans Lammersen and Jorge Moreira da Silva (Director), OECD Development Co-operation Directorate (DCD-DAC)

In The Wealth of Nations, Adam Smith wrote that: “Little else is requisite to carry a state to the highest degree of opulence from the lowest barbarism but peace, easy taxes, and a tolerable administration of justice: all the rest being brought about by the natural course of things.” Others were less optimistic. They argued that nations are rich or poor because of differences in religion, culture, endowments and/or geography.

Modern economic development theories originate from thinking about how to reconstruct Europe in the aftermath of World War II. The European Recovery Program – or the Marshall Plan – was based on the notion that economic growth can be stifled by local institutions and social attitudes, especially if these influence domestic savings and investment rates. According to this linear growth model, a correctly designed, massive injection of capital, coupled with public sector intervention to address market failures, would ultimately lead to industrialisation and economic development. Many other economic development theories have since followed, but none has been able to explain convincingly why some countries experience rapid economic growth and others do not.

The development community has continued its quest for the missing ingredient to ignite economic growth. Candidates have included capital, technology, policies, institutions, better politics, and market integration. Every time we think we have identified what is missing, we find that it is not something that can be provided from outside but an endogenous characteristic of the system itself. Traditionally, development assistance has been rooted in an engineering, mass-production, conveyor-belt mentality, with agencies promoting “silver bullet” solutions for such complex problems as eradicating malaria, reducing vulnerability, improving resilience or strengthening connectivity. Unfortunately, these piecemeal, one-step-at-a-time development programmes have often failed to deliver.

Increasingly, complexity thinking – a way of understanding how elements of systems interact and change over time – has found its way into the development discourse. After all, what could be more complex than promoting development, sustainability, human rights, peace and governance? We should think of the economy and society as composed of a rich set of interactions between large numbers of adaptive agents, all of which are co-evolving. Seen this way, development is not just an increase in outputs but the emergence of an interlinked system of economic, financial, legal, social and political institutions, firms, products and technologies. Together, these elements and their interactions provide citizens with the capabilities to live happy, healthy and fulfilling lives.

Once we look at development as the outcome of a complex adaptive system, rather than the sum of what happens to individual people and firms, we gain better insights into how to help accelerate and shape it. We would be more effective if we assessed development challenges through this prism of complex adaptive systems. This could yield important insights about how best to prioritise, design and deliver holistic development programmes for achieving the multiple goals of inclusiveness, sustainability and economic growth that underpin the 2030 Sustainable Development Agenda. There is increasing support in aid agencies for the idea that solutions to complex problems must evolve through trial and error, and that successful programmes are likely to be different for each local context, with its particular history, natural resources and webs of social relations. The key for anyone engaged in the aid business is to put their own preconceived ideas aside and first observe, map and listen carefully to identify the areas where change for the better is already happening, and then try to encourage and nurture that change further.

Complexity matters particularly when the knowledge and capacities required for tackling problems are spread across actors without strong, formalised institutional links. Inherent to many complex problems are divergent interests, conflicting goals or competing narratives. Moreover, it is often unclear how to achieve a given objective in a specific context, or how to manage change processes that involve significant, unpredictable forces. At the same time, the counsel of complexity should not be taken as a counsel of despair for development. There has been immense social and economic progress, and development assistance has been found to be helpful overall. Development co-operation has contributed to economic objectives by helping developing countries connect their firms to international markets; to social objectives by making globalisation pro-poor and reducing inequalities; and to environmental objectives by supporting adaptation to climate change while exploiting comparative advantages.

Not all development challenges are inherently complex, though. For those that are, complexity should not be used as an excuse for fatalism and inertia. Instead we should strive to promote innovation, experimentation and renewal. We should build partnerships to learn from the past, allowing us to shape approaches that are more likely to work and that are owned by the people we are trying to help. They will tell us what is working and what is not. Together we should build a narrative for change involving many different voices and perspectives. We should also be modest and realise that it might be better to start small, learning and adapting as we go along in iterative processes of dialogue. And we should keep looking for change, scanning widely for new factors emerging in the wider world and listening to a wide range of opinions, so that we are better able to anticipate, adapt and seize opportunities.

Embracing complexity where it matters will allow us to contribute more effectively to the 2030 Sustainable Development Agenda.

Useful links

The OECD and the Sustainable Development Goals

The OECD organised a Workshop on Complexity and Policy, 29-30 September 2016, at OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Should we rely on economic forecasts? The wisdom of the crowds and the consensus forecast

Brian Dowd, FocusEconomics

Laurence J. Peter, a Canadian educator and author, is often quoted as saying, “an economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.”

Economics and especially economic forecasting are often given a bad rap. Many people think of forecasting as akin to licking a finger and testing the wind. However, there is a science to it.

Forecasting is essentially an attempt to predict the future, and predicting the future behavior of anything, much less something as complex and enormous as an entire economy, is no easy task, to say the least. Accurate forecasts, therefore, are often in short supply.

There are a few reasons for this. The first is that economies are in perpetual motion, so extrapolating behaviors and relationships from past economic cycles into the next one is, as one might imagine, tremendously complicated.

The second reason, and perhaps the most surprising, has to do with the vast amount of raw economic data available. In an ideal world, economic forecasts would consider all of the information available. In the real world, however, that is nearly impossible, as information is scattered in myriad news articles, press releases, government communications, along with the aforementioned mountain of raw data.

Although some might consider having all of that information an advantage, nothing could be further from the truth. The thousands of economic indicators and data series available tend to produce a vast amount of statistical noise, making it a serious challenge to establish meaningful causal relationships between variables.

And, of course, we cannot forget the uncertainty that is inherent in forecasting, something that forecasters must take into account and which creates even more noise to deal with.

The question then becomes, is there a way to cancel out all of that noise to get a more accurate forecast? This is where the wisdom of the crowds comes in.

Is there wisdom in the crowds?

To illustrate how the wisdom of the crowds works, it is best to tell the story of Sir Francis Galton, a Victorian polymath, who first observed the wisdom of the crowds at a livestock fair he visited in 1906. In one event, fairgoers were given the opportunity to guess the weight of an ox, and the person whose guess came closest to the actual weight would win a prize.

Galton hypothesized that not one person would get the answer right, but that everyone would get it right. Bear with me.

Over 750 participants made their guesses and, unsurprisingly, no one guessed the weight perfectly. However, when Galton calculated the mean of all the guesses, it turned out, incredibly, to be the exact weight of the ox: 1,198 pounds.

Tapping economic analysts’ wisdom with consensus forecasts

The basic idea of the wisdom of the crowds is that the average of the answers of a group of individuals is often more accurate than the answer of any one individual expert. This was evident in the story of Galton’s experiment at the fair.

The accuracy of the wisdom of the crowds depends on the number of participants and on the diversity of their expertise: the more participants involved and the more diverse they are, the lower the margin of error.

So what does the wisdom of the crowds have to do with economic forecasting? Remember all of that noise that makes economic forecasting so difficult and, as a result, affects the accuracy of forecasts? The theory is that idiosyncratic noise attaches to any one individual answer; by taking the average of multiple answers, the noise tends to cancel itself out, presenting a far more accurate picture of the situation.

Sometimes also referred to as simply combining forecasts, the consensus forecast borrows from the same idea as Galton’s wisdom of the crowds: a consensus forecast is essentially the average of forecasts from various sources. Averaging multiple forecasts cancels out much of the statistical noise to yield a more accurate forecast.
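To make the noise-cancelling intuition concrete, here is a minimal simulation in Python (with invented numbers, not actual survey data): each of 25 simulated forecasters predicts the same underlying value but adds his or her own idiosyncratic error, and the consensus is simply their average.

```python
import random

random.seed(42)

TRUE_VALUE = 5.0      # the eventual "actual reading", e.g. GDP growth in %
N_FORECASTERS = 25    # size of the forecast panel
N_PERIODS = 1000      # number of simulated forecast rounds

single_errors, consensus_errors = [], []

for _ in range(N_PERIODS):
    # Each forecast = truth + idiosyncratic noise (assumed independent here)
    forecasts = [TRUE_VALUE + random.gauss(0, 1.0) for _ in range(N_FORECASTERS)]
    consensus = sum(forecasts) / len(forecasts)
    single_errors.append(abs(forecasts[0] - TRUE_VALUE))   # any one forecaster
    consensus_errors.append(abs(consensus - TRUE_VALUE))   # the panel average

print("mean absolute error, single forecaster:", round(sum(single_errors) / N_PERIODS, 3))
print("mean absolute error, consensus:        ", round(sum(consensus_errors) / N_PERIODS, 3))
```

With independent errors, the consensus error shrinks roughly with the square root of the number of forecasters; shared biases, of course, do not cancel, which is one reason a consensus reduces error rather than eliminating it.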

But don’t take my word for it. Over the last few decades there has been a great deal of empirical research that has shown consensus forecasts to increase forecast accuracy, including those cited below.

With that said, it is possible for an individual forecast to beat the consensus; however, it is unlikely that the same forecaster will do so consistently, one forecast period after another. Moreover, the individual forecasts that do happen to beat the consensus in one period are impossible to pick out ahead of time, since they vary significantly from period to period.

Taking a look at a practical example may serve to clear things up a bit further.

A practical example of a consensus forecast

In the graph above, the consensus forecast for Malaysia’s 2015 GDP growth taken in January 2015 was 5.1%. All the other points, marked in grey, along the same axis represent the individual forecasts from 25 prominent sources taken at the same time.

In March 2016, the actual reading came out at 5.0%. A few forecasts were closer to the end result; however, as mentioned previously, some individual forecasts will beat the consensus from time to time, but this won’t happen consistently, and it would be impossible to know which forecasts those will be until after the fact.

The second graph uses the same example as before: 25 different economic analysts forecast Malaysia’s 2015 GDP growth in January 2015. By March 2016, the maximum forecast turned out to be 16% above the actual reading and the minimum 10% below it, while the consensus was only 1.9% above the actual reading. By taking the average of all forecasts, the upside and downside errors of the different forecasts mostly cancelled each other out. As a result, the consensus forecast was much closer to the actual reading than the majority of the individual forecasts.

Consistency and reducing the margin of error are key

The point to keep in mind is that whether they are consensus forecasts, individual forecasts or any other kind of forecast, predictions of the future are seldom going to be perfect. In the Malaysia GDP example, the consensus wasn’t spot on, but it certainly did reduce the margin of error. There is almost always going to be some error; reducing that error is the key, and more often than not the consensus will deliver a more accurate forecast.

The consensus not only reduces the margin of error, it also provides some consistency and reliability. As mentioned previously, an individual forecaster can beat the consensus; however, it is impossible to know ahead of time which of hundreds of forecasts will be the most accurate. As is evident in our example, the forecasts from individual analysts can vary significantly from one to another, whereas the consensus consistently provides relatively accurate forecasts.

Forecasting isn’t perfect, but does it need to be?

Forecasting is a science, but it isn’t an exact science. Forecasts may not be perfect, but they are still very important to businesses and governments, as they shed light on an uncertain future and help them make vital decisions on strategy, plans and budgets.

So, should you trust forecasts? That is a tough question to answer. Yes, forecasting is complicated and, yes, forecasts are notoriously inaccurate, and there are few ways to consistently improve their accuracy. The point, however, is that forecasts don’t need to be perfect to be useful; they just need to be as accurate as possible. One way to achieve that is to leverage the wisdom of a crowd of analysts to produce a consensus forecast.

As French mathematician, physicist and philosopher Henri Poincaré put it, “It is far better to foresee even without certainty than not to foresee at all.”

The consensus forecast is a more accurate way to “foresee.”

Useful links

OECD forecasting methods and analytical tools

OECD Economic outlook, analysis and forecasts

Academic research on consensus forecasts

“Consider what we have learned about the combination of forecasts over the past twenty years. (…) The results have been virtually unanimous: combining multiple forecasts leads to increased forecast accuracy. This has been the result whether the forecasts are judgmental or statistical, econometric or extrapolation. Furthermore, in many cases one can make dramatic performance improvements by simply averaging the forecasts.” – Clemen, R.T. (1989), “Combining forecasts: A review and annotated bibliography”, International Journal of Forecasting, 5: 559-560.

“A key reason for using forecast combinations […] is that individual forecasts may be differently affected by non-stationarities such as structural breaks caused by institutional change, technological developments or large macroeconomic shocks. […] Since it is typically difficult to detect structural breaks in ‘real terms’, it is plausible that on average, across periods with varying degrees of stability, combinations of forecasts from models with different degrees of adaptability may outperform forecasts from individual models.” – Aiolfi, M. and Timmermann, A. (2004), “Structural Breaks and the Performance of Forecast Combinations”.

From economic crisis to crisis in economics

Andy Haldane, Chief Economist and Executive Director, Monetary Analysis & Statistics, Bank of England

It would be easy to become very depressed at the state of economics in the current environment. Many experts, including economics experts, are simply being ignored. But the economic challenges facing us could not be greater: slowing growth, slowing productivity, the retreat of trade, the retreat of globalisation, high and rising levels of inequality. These are deep and diverse problems facing our societies and we will need deep and diverse frameworks to help understand them and to set policy in response to them. In the pre-crisis environment when things were relatively stable and stationary, our existing frameworks in macroeconomics did a pretty good job of making sense of things.

But the world these days is characterised by features such as discontinuities, tipping points, multiple equilibria, and radical uncertainty. So if we are to make economics interesting and the response to the challenges adequate, we need new frameworks that can capture the complexities of modern societies.

We are seeing increased interest in using complexity theory to make sense of the dynamics of economic and financial systems. For example, epidemiological models have been used to understand and calibrate regulatory capital standards for the largest, most interconnected banks, the so-called “super-spreaders”. Less attention has been paid to using complexity theory to understand the overall architecture of public policy – how the various pieces of the policy jigsaw fit together as a whole in relation to modern economic and financial systems. These systems can be characterised as a complex, adaptive “system of systems”: a nested set of sub-systems, each one itself a complex web. The architecture of a complex system of systems means that policies with varying degrees of magnification are necessary to understand and to moderate fluctuations. It also means that taking account of interactions between these layers is important when gauging risk.

Although there is no generally accepted definition of complexity, the one proposed by Herbert Simon in The Architecture of Complexity – a system “made up of a large number of parts that interact in a non-simple way” – captures well its everyday essence. The whole behaves very differently from the sum of its parts. The properties of complex systems typically give rise to irregular, and often highly non-normal, statistical distributions for these systems over time. This manifests itself as much fatter tails than a normal distribution would suggest. In other words, system-wide interactions and feedbacks generate a much higher probability of catastrophic events than Gaussian distributions would imply.
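As a rough illustration of what “fatter tails” means in practice (a toy comparison with invented numbers, not Bank of England analysis), the sketch below counts how often a “four-sigma” event occurs under a normal distribution and under a fat-tailed Student-t distribution rescaled to the same variance.

```python
import math
import random

random.seed(1)

N = 500_000
K = 4.0  # a "four-sigma" event

# Standard normal draws
normal = [random.gauss(0, 1) for _ in range(N)]

# Student-t draws with 3 degrees of freedom (a simple fat-tailed stand-in),
# rescaled to unit variance so the comparison is like-for-like
df = 3
rescale = math.sqrt((df - 2) / df)          # variance of t(df) is df/(df-2)
fat = [rescale * random.gauss(0, 1) / math.sqrt(random.gammavariate(df / 2, 2 / df))
       for _ in range(N)]

def tail_share(draws, k):
    return sum(1 for x in draws if abs(x) > k) / len(draws)

print(f"P(|x| > {K}) under the normal:       {tail_share(normal, K):.6f}")
print(f"P(|x| > {K}) under the fat-tailed t: {tail_share(fat, K):.6f}")
```

Even with identical variances, the fat-tailed distribution produces extreme draws roughly a hundred times more often than the Gaussian, which is the statistical signature described above.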

For evolutionary reasons of survival of the fittest, Simon posited that “decomposable” networks were more resilient and hence more likely to proliferate. By decomposable networks, he meant organisational structures which could be partitioned such that the resilience of the system as a whole was not reliant on any one sub-element. This may be a reasonable long-run description of some real-world complex systems, but less suitable as a description of the evolution of socio-economic systems. The efficiency of many of today’s networks relies on their hyper-connectivity. There are, in the language of economics, significantly increasing returns to scale and scope in a network industry. Think of the benefits of global supply chains and global interbank networks for trade and financial risk-sharing. This provides a powerful secular incentive for non-decomposable socio-economic systems.

Moreover, if these hyper-connected networks do face systemic threat, they are often able to adapt in ways which avoid extinction. For example, the risk of social, economic or financial disorder will typically lead to an adaptation of policies to prevent systemic collapse. These adaptive policy responses may preserve otherwise-fragile socio-economic topologies. They may even further encourage the growth of connectivity and complexity of these networks. Policies to support “super-spreader” banks in a crisis for instance may encourage them to become larger and more complex. The combination of network economies and policy responses to failure means socio-economic systems may be less Darwinian, and hence decomposable, than natural and biological systems.

Andy Haldane addresses OECD New Approaches to Economic Challenges (NAEC) Roundtable

What public policy implications follow from this complex system of systems perspective? First, it underscores the importance of accurate data and timely mapping of each layer in the system. This is especially important when these layers are themselves complex. Granular data is needed to capture the interactions within and between these complex sub-systems.

Second, modelling of each of these layers, and their interaction with other layers, is likely to be important, both for understanding system risks and dynamics and for calibrating potential policy responses to them.

Third, in controlling these risks, something akin to the Tinbergen Rule is likely to apply: there is likely to be a need for at least as many policy instruments as there are complex sub-components of a system of systems if risk is to be monitored and managed effectively. Put differently, an under-identified complex system of systems is likely to result in a loss of control, both system-wide and for each of the layers.
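A hedged linear toy example (invented numbers, not a macroeconomic model) of the Tinbergen logic: if policy instruments map into targets and there are fewer independent instruments than targets, some combination of targets is simply out of reach, however the instruments are set.

```python
import numpy as np

rng = np.random.default_rng(0)

n_targets = 3
A = rng.normal(size=(n_targets, n_targets))   # how instruments map into targets (invented)
targets = np.array([1.0, -0.5, 2.0])          # desired settings for three policy objectives

# With as many independent instruments as targets, the targets can be hit exactly.
full_set = np.linalg.solve(A, targets)
print("residual with 3 instruments:", np.linalg.norm(A @ full_set - targets))

# With only two instruments, the best achievable settings generally miss the targets.
A_short = A[:, :2]
partial, *_ = np.linalg.lstsq(A_short, targets, rcond=None)
print("residual with 2 instruments:", np.linalg.norm(A_short @ partial - targets))
```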

In the meantime, there is a crisis in economics. For some, it is a threat. For others it is an opportunity to make a great leap forward, as Keynes did in the 1930s. But seizing this opportunity requires, first, a re-examination of the contours of economics and an exploration of some new pathways. Second, it is important to look at economic systems through a cross-disciplinary lens. Drawing on insights from a range of disciplines, natural as well as social sciences, can provide a different perspective on individual behaviour and system-wide dynamics.

The NAEC initiative does so, and the OECD’s willingness to consider a complexity approach puts the Organisation at the forefront of bringing economic analysis and policy-making into the 21st century.

Useful links

This article draws on contributions to the OECD NAEC Roundtable on 14 December 2016; The GLS Shackle Biennial Memorial Lecture on 10 November 2016; and “On microscopes and telescopes”, at the Lorentz centre, Leiden, workshop on socio-economic complexity on 27 March 2015.

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

The Future of Economics: From Complexity to Commons

Paul B. Hartzog, Futurist

This article looks at three crucial insights for the future of economics: complex adaptive systems; how technologies of cooperation enable commons-based peer-to-peer networks; and why we need complex adaptive systems to understand new economies.

COMPLEX ADAPTIVE SYSTEMS

The Edge of Chaos

The study of complex adaptive systems has enjoyed considerable attention in recent decades. Chaos theory reveals that out of turbulence and nonlinear dynamics, complex systems emerge: order from chaos.

We learned that complex systems are poised on the “edge of chaos” and generate “order for free” (Stuart Kauffman). They are composed of many parts connected into a flexible network. As matter and energy flow through, they spontaneously self-organize into increasingly complex structures. These systems, continuously in flux, operate “far from equilibrium” (Ilya Prigogine). Beyond critical thresholds, differences in degree become differences in kind. “More is different.” (Phil Anderson)

Complexity science reveals the difference between prediction and attraction. We can know that a marble in a bowl will reach the bottom even though we cannot predict its exact path because of sensitivity to initial conditions. Deterministic chaos means path dependence, where future states are highly influenced by small changes in previous states. A typical economic example is the lock-in of the now-standard “QWERTY” keyboard.
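The logistic map is the standard textbook illustration of this point (a generic example, not one taken from the article): two trajectories that start a billionth apart follow identical rules yet diverge completely within a few dozen steps, so exact prediction fails even though statements about where the system settles (its attractor) remain possible.

```python
# Logistic map: x_{t+1} = r * x_t * (1 - x_t), a classic toy model of deterministic chaos
r = 4.0                       # parameter value in the chaotic regime
x, y = 0.2, 0.2 + 1e-9        # two nearly identical starting points

for t in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if t % 10 == 0:
        print(f"step {t:2d}   x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")
```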

Networks

We see network effects: adding another node to a network increases the value of every other node, because many new connections become possible – economically, “increasing returns to scale” (Brian Arthur). Reed’s Law goes even farther: because new groups can also be formed, value grows exponentially with network size. We know about “small-world” and “scale-free” networks, the latter so called because no statistic at any single scale is representative of the network as a whole: there is no bell-curve average but instead a “long tail”, mathematically a “power law”. Some networks are robust to random failures but vulnerable to selective damage, i.e. attacks that target nodes with higher centrality. Furthermore, “centrality” means different things in different network topologies. Network structure affects the frequency and magnitude of cascades. Like avalanches in sand piles, such systems display “self-organized criticality” (Per Bak), producing power-law-distributed cascades.
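A back-of-the-envelope sketch (using the usual readings of these laws, not figures from the article) shows why group-forming networks, in Reed’s sense, grow in value so much faster than networks valued only by their pairwise connections:

```python
# Possible pairwise links vs possible sub-groups as a network of n members grows.
# Pairwise links grow roughly with n squared; possible groups grow exponentially.
for n in (5, 10, 20, 40):
    links = n * (n - 1) // 2        # distinct pairs of members
    groups = 2 ** n - n - 1         # distinct sub-groups of two or more members
    print(f"n = {n:2d}   pairwise links = {links:4d}   possible groups = {groups:,}")
```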

Information Landscapes

Complex systems constitute “fitness landscapes”, exhibit cycles of growth and decline, are punctuated by explosions of diversity and periods of stasis, and show waves of ebb and flow, as seen in traffic patterns. On fitness landscapes, algorithms that pursue mere maximization, without the ability to observe remote information from the landscape, freeze in local optima. Without diversity in the system, there is no improvement. Swarms escape because they not only read information from the landscape but also write to it, creating shared information environments.
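A small sketch (an invented landscape, not one from the article) makes the point about freezing in local optima: a single greedy hill-climber taking only small uphill steps usually stops at the first local peak it reaches, while a diverse population of climbers scattered across the landscape almost always ends up somewhere much higher.

```python
import math
import random

random.seed(0)

def fitness(x):
    # A deliberately rugged, invented landscape with many local peaks;
    # the best peaks lie towards the right-hand end because of the 0.1 * x trend.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) + 0.1 * x

def hill_climb(x, step=0.01, iters=5000):
    # Greedy local search: accept only small moves that improve fitness.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if 0 <= candidate <= 10 and fitness(candidate) > fitness(x):
            x = candidate
    return x

single = hill_climb(2.0)                                              # one climber, one start
population = [hill_climb(random.uniform(0, 10)) for _ in range(20)]   # diverse starting points
best = max(population, key=fitness)

print(f"single climber:  x = {single:.2f}, fitness = {fitness(single):.3f}")
print(f"best of twenty:  x = {best:.2f}, fitness = {fitness(best):.3f}")
```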

Landscapes and occupants impart selection pressures on each other. Good employees and good jobs both outperform bad ones. Agents and strategies evolve. Adaptation can become maladaptation when selection pressures change.

Dynamics and Time

When we study the spread of disease through a forest we see a slow progression of infected trees. However, when we study the spread of fire, we see the same pattern enacted much faster.

Complex systems and their dynamics are not new. What is new is that human systems have accelerated to the point where political, economic and social changes now occur rapidly enough to fall within the threshold of human perception. We have shifted from slow social movements to an era of “smart mobs”. Consequently, while it may be true that we did not need the tools of complex systems in the past, because economic change was slow and did not require a dynamical viewpoint, the current speed of economic change demands this new lens.

THE EMERGENCE OF COMMONS-BASED PEER-TO-PEER NETWORKS

A crucial global economic phenomenon is the rise of commons-based peer-to-peer networks. “Technologies of cooperation” (Howard Rheingold) enable people to self-organize in productive ways. Open-source software was one of the first clues to powerful new ways of organizing labor and capital. “Commons-based peer-production” is radically cost-effective (Yochai Benkler). By “governing the commons” (Elinor Ostrom) – managing shared resources through communities with polycentric, horizontal rules, without reliance on either the state or the market – we escape the “tragedy of the commons.” Our thinking about production, property, and even the state must evolve to reflect the growing participatory economy of global stewardship and collectively driven “platform cooperatives” (Michel Bauwens). New commons include food, energy, “making,” health, education, news, and even currency.

The rise of 3D printing and the Internet of Things, combined with participatory practices, yields new forms of value production, paralleled by new forms of value accounting and exchange. We are witnessing a “Cambrian explosion” of new currency species, like Bitcoin, and of innovative trust technologies to support them: the blockchain and distributed ledgers. Just as 20th-century electrical infrastructure remained fragmented until standards enabled a connected network (Thomas Hughes), new infrastructure matures when separate solutions merge and the parts reinforce the stability of the whole.

THE FUTURE FATE OF ECONOMICS

Economics as a discipline can only remain relevant as long as it can provide deep engagement with contemporary reality. Overly-simplified models and problematic axioms cannot guide us forward. The world is an interwoven, heterogeneous, adaptive “panarchy.”

Harnessing complexity requires understanding the frequency, intensity, and “sync” of global connectivity. Analyzing many futures demands better tools. To analyze “big data,” first we need data. Complexity science utilizes multi-agent simulations to investigate many outcomes, sweep parameters, and identify thresholds, attractors, and system dynamics. Complexity methods provide unique metrics and representations, animated visuals rather than static graphs.
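As a hedged illustration of that kind of tool (a toy model invented for this sketch, not one from the article), the following multi-agent simulation sweeps a single contagion parameter and makes the threshold between a local fizzle and a global cascade visible:

```python
import random

random.seed(3)

def cascade(p_spread, n_agents=200, n_neighbours=4, max_steps=50):
    """Toy adoption cascade: each new adopter gets one chance to convince its neighbours."""
    neighbours = {i: random.sample(range(n_agents), n_neighbours) for i in range(n_agents)}
    adopted, frontier = {0}, {0}          # a single initial adopter
    for _ in range(max_steps):
        newly = set()
        for i in frontier:
            for j in neighbours[i]:
                if j not in adopted and random.random() < p_spread:
                    newly.add(j)
        if not newly:
            break
        adopted |= newly
        frontier = newly
    return len(adopted) / n_agents

# Parameter sweep: mean final adoption rises sharply once each adopter converts,
# on average, more than one other agent (a tipping point).
for p in (0.10, 0.20, 0.25, 0.30, 0.40, 0.50):
    runs = [cascade(p) for _ in range(50)]
    print(f"p_spread = {p:.2f}   mean final adoption = {sum(runs) / len(runs):.2f}")
```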

This is not just big data; it is dynamic data. With distributed systems, it becomes peer-to-peer data: shared infrastructure. Just as ants leave trails for others, shared infrastructure bolsters interoperability through a knowledge commons. Restricting connectivity and innovation, e.g. with intellectual property rights, now carries extreme costs. Uncooperative agents and strategies are penalised by lower fitness. Fortunately, new commons already have novel “copyleft” licenses, promoting fairness and equity.

Complexity science shows us not only what to do, but also how to do it: build shared infrastructure, improve information flow, enable rapid innovation, encourage participation, support diversity and citizen empowerment.

Useful links

Panarchy 101, or How I Learned to Stop Worrying and Love Global Collapse Paul B. Hartzog

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

A Pragmatic Holist: Herbert Simon, Economics and “The Architecture of Complexity”

Vela Velupillai, Madras School of Economics

“Herb had it all put together at least 40 years ago – and I’ve known him only for 35.” Allen Newell, 1989.

And so it was, with Hierarchy in 1950, Near-Decomposability from about 1949, and Causality, underpinning the reasonably rapid evolution of dynamical systems into a series of stable complex structures. Almost all of these pioneering articles are reprinted in Simon’s 1977 collection and, moreover, the hierarchy and near-decomposability classics appear in section 4 under the heading “Complexity”. The cybernetic vision became the fully-fledged digital-computer basis of boundedly rational human problem solvers implementing heuristic search procedures to prove, for example, axiomatic mathematical theorems (in the monumental Principia Mathematica of Russell and Whitehead), substantiating Allen Newell’s entirely reasonable claim quoted above.

In defining the notion of complexity in The Architecture of Complexity (AoC), Simon eschews formalisms and relies on a rough, working, concept of complex systems that would help identify examples of observable structures – predominantly in the behavioural sciences – that could lead to theories and, hence, theorems, of evolving dynamical systems that exhibit properties that are amenable to design and prediction with the help of hierarchy, near-decomposability and causality. Thus, the almost informal definition is (italics added): “Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts … in the … pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist.”

Simon was always a pragmatic holist, even while attempting the reduction of the behaviour of complex entities to parsimonious processes that would exhibit the properties of  “wholes”, based on nonsimply interacting “parts”, that may themselves be simple. He summarised the way this approach could apply to economics in a letter to Professor Axel Leijonhufvud and me after reading my book Computable Economics. (You can see the letter here.) Simon argued that:

“Finally, we get to the empirical boundary … of the level of complexity that humans actually can handle, with and without their computers, and – perhaps more important – what they actually do to solve problems that lie beyond this strict boundary even though they are within some of the broader limits.

The latter is an important point for economics, because we humans spend most of our lives making decisions that are far beyond any of the levels of complexity we can handle exactly; and this is where satisficing, floating aspiration levels, recognition and heuristic search, and similar devices for arriving at good-enough decisions take over. [The term ‘satisfice’, which appears in the Oxford English Dictionary as a Northumbrian synonym for ‘satisfy’, was borrowed by Simon (1956) in ‘Rational Choice and the Structure of the Environment’ to describe a strategy for reaching a decision the decider finds adequate, even if it’s not optimal in theory.] A parsimonious economic theory, and an empirically verifiable one, shows how human beings, using very simple procedures, reach decisions that lie far beyond their capacity for finding exact solutions by the usual maximizing criteria.”
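A toy sketch (not Simon’s own formalism) of the difference between maximising and satisficing: with ten thousand options to evaluate, the maximiser inspects them all, while the satisficer stops at the first option that clears an aspiration level and typically needs only a handful of evaluations.

```python
import random

random.seed(7)

# Invented example: many options whose "utility" is costly to evaluate one by one.
options = [random.uniform(0, 100) for _ in range(10_000)]

def maximise(opts):
    """Exact optimisation: evaluate every option and keep the best."""
    return max(opts), len(opts)                 # (value found, evaluations used)

def satisfice(opts, aspiration=90.0):
    """Satisficing in Simon's sense: accept the first option that is good enough."""
    for n, value in enumerate(opts, start=1):
        if value >= aspiration:
            return value, n
    # If nothing clears the bar, a satisficer would lower the aspiration level;
    # here we simply fall back to the best option seen.
    return max(opts), len(opts)

best, cost_max = maximise(options)
good_enough, cost_sat = satisfice(options)
print(f"maximising:  value = {best:.1f}, evaluations = {cost_max}")
print(f"satisficing: value = {good_enough:.1f}, evaluations = {cost_sat}")
```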

In many ways, AoC summarised Simon’s evolving (sic!) visions of a quantitative behavioural science, which provided the foundations for administering complex, hierarchically structured, causal organisations, by boundedly rational agents implementing – with the help of digital computers – procedures that were, in turn, reflections of human problem-solving processes. But it also presaged the increasing precision of predictable reality – not amounting to non-pragmatic, non-empirical phenomena – requiring an operational description of the complex systems that are observable in nature, resulting from the evolutionary dynamics of hierarchical structures. Thus, the final – fourth – section of AoC “examines the relation between complex systems and their descriptions”, for which Simon returned to Solomonoff’s pioneering definition of algorithmic information theory.

AoC was equally expository on the many issues with which we have come to associate Simon’s boundedly rational agents (and Institutions) satisficing – instead of optimising, again for pragmatic, historically observable, realistic reasons – using heuristic search processes in Human Problem Solving contexts of behavioural decisions. The famous distinction between substantive and procedural rationality arose from the dichotomy of a state vs process description of a world “as sensed and … as acted upon”.

Essentially, AoC is suffused with pragmatic definitions and human procedures of realistic implementation, even in the use of digital computers. Computability theory assumes the Church-Turing Thesis in defining algorithms, and the notion of computational complexity is predicated upon the assumption of the validity of the Church-Turing Thesis. Simon’s algorithms for human problem solvers are heuristic search processes, where no such assumption is made. Hence the feeling that engulfed him in his later years is not surprising (italics added):

“The field of computer science has been much occupied with questions of computational complexity, the obverse of computational simplicity. But in the literature of the field, ‘complexity’ usually means something quite different from my meaning of it in the present context. Largely for reasons of mathematical attainability, and at the expense of relevance, theorems of computational complexity have mainly addressed worst-case behaviour of computational algorithms as the size of the data set grows larger. In the limit, they have even focused on computability in the sense of Gödel, and Turing and the halting problem. I must confess that these concerns produce in me a great feeling of ennui.”

Useful links

A version of this article with added commentary and references is available here.

As mentioned above, Herbert Simon wrote to Professors Axel Leijonhufvud and Kumaraswamy Velupillai after reading Professor Velupillai’s Computable Economics. You can see the letter here.

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Ants, algorithms and complexity without management

Deborah M. Gordon, Department of Biology, Stanford University

Systems without central control are ubiquitous in nature. The activities of brains, such as thinking, remembering and speaking, are the outcome of countless electrical interactions among cells. Nothing in the brain tells the rest of it to think or remember. I study ants because I am interested in how collective outcomes arise from interactions among individuals, and how collective behaviour is tuned to changing environments.

There are more than 14,000 species of ants, which all live in colonies consisting of one or more reproductive females, and many sterile workers, which are the ants that you see walking around. Although the reproductive females are called “queens”, they have no power or political authority. One ant never directs the behaviour of another or tells it what to do. Ant colonies manage to collect food, build and maintain nests, rear the young, and deal with neighbouring colonies – all without a plan.

The collective behaviour of colonies is produced by a dynamical network of simple interactions among ants.  In most ant species, the ants can barely see. They operate mostly by smell. As an ant moves around it briefly contacts other ants with its antennae, or it may contact a short-lived patch of a volatile chemical recently left behind by another ant. Ants smell with their antennae, and when one ant touches another with its antennae, it assesses whether the other ant is a nestmate, and sometimes what task the other ant has been performing. The ant uses its recent experience of chemical interactions to decide what to do next. In the aggregate, these simple interactions create a constantly shifting network that regulates the behaviour of the colony.

The process that generates colony behaviour from simple interactions is what computer scientists call a distributed algorithm. No single unit, such as an ant or a router in a data network, knows what all the others are doing and tells them what to do. Instead, interactions between each unit and its local connections add up to the desired outcome.

The distributed processes that regulate the collective behaviour of ants are tuned to environmental conditions. For example, harvester ants in the desert face high operating costs, and their behaviour is regulated by feedback that limits activity unless it is necessary. A colony must spend water to get water. The ants get water by metabolizing the fats in the seeds they eat. A forager out in the desert sun loses water while out searching for food. Colonies manage this tradeoff by a simple form of feedback. An outgoing forager does not leave the nest until it meets enough returning foragers with seeds. This makes sense because each forager searches until it finds food. Thus the more food is available, the more quickly they find it and return to the nest, stimulating more foragers to go out to search. When food is not available, foraging activity decreases. A long-term study of a population of colonies shows that the colonies that conserve water in dry conditions by staying inside are more successful in producing offspring colonies.
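A toy simulation of that feedback (an illustrative sketch with invented parameters, not Deborah Gordon’s actual model): foragers come home with seeds at a rate set by food availability, each successful returner stimulates a couple of waiting foragers to head out, and foragers that give up return empty-handed and stimulate no one. On a food-rich day activity ramps up and sustains itself; on a food-poor day it winds down.

```python
import random

random.seed(11)

def foraging_day(p_find_seed, n_colony=200, n_steps=100, p_give_up=0.05, stimulation=2):
    """Return the number of foragers still out searching at each time step."""
    outside, inside = 10, n_colony - 10
    activity = []
    for _ in range(n_steps):
        # Foragers outside either find a seed (rate set by food availability) or give up.
        with_seed = sum(1 for _ in range(outside) if random.random() < p_find_seed)
        empty = sum(1 for _ in range(outside - with_seed) if random.random() < p_give_up)
        outside -= with_seed + empty
        inside += with_seed + empty
        # Positive feedback: each returner carrying a seed stimulates a couple of
        # waiting foragers to leave the nest; empty-handed returners stimulate none.
        leaving = min(inside, with_seed * stimulation)
        inside -= leaving
        outside += leaving
        activity.append(outside)
    return activity

rich = foraging_day(p_find_seed=0.20)   # seeds are plentiful near the nest
poor = foraging_day(p_find_seed=0.02)   # seeds are scarce
print("foragers out after 100 steps, food-rich day:", rich[-1])
print("foragers out after 100 steps, food-poor day:", poor[-1])
```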

By contrast, another species, the “turtle ants”, living in the trees of a tropical forest in Mexico, regulate their behaviour very differently. The turtle ants create a highway system of trails that links different nests and food sources. Operating costs are low because the tropical forest is humid, but competition from other species is high. These ants interact using trail pheromones, laying down a chemical trail everywhere they go. An ant tends to follow another, and this simple interaction keeps the stream of ants going, except when it is deterred by encounters with other species. In conditions of low operating costs, interactions create feedback that makes ongoing activity the default state, with negative feedback used to inhibit activity. This is the opposite of the desert ants’ system, which requires positive feedback to initiate activity.

What can we learn from ants about human society? Ants have been used throughout history as examples of obedience and industry. In Greek mythology, Zeus changes the ants of Thessaly into men, creating an army of soldiers who would become famous as the Myrmidons, ready to die for Achilles (from myrmex – μύρμηξ – ant). In the Bible (Proverbs 6:6), we are told to “Look to the ant”, who harvests grain in the summer to save for the winter. But ants are not acting out of obedience, and they are not especially industrious; in fact, many ants just hang around in the nest doing nothing.

Ants and humans are very different. Power and identity are crucial to human social behaviour, and absent in ants. Ants do not have relations with other ants as individuals: as an ant assesses its recent interactions with others, it does not matter whether it met ant number 522 or ant number 677. Even more fundamentally, an ant does not act in response to any assessment of what needs to be done.

However, we may be able to learn from ants about the behaviour of very large dynamical networks by focussing on the pattern or structure of interactions rather than the content. While we care about what our emails say, the ants care only about how often they get them. It is clear that many human social processes operate without central control. For instance, we see all around us the effects of climate change driven by many different social processes that are based on the use of fossil fuel. No central authority decided to pump carbon into the atmosphere, but the CO2 levels are the result of human activity.  Another obvious example is the internet, a huge dynamical network of local interactions in the form of email messages and visits to websites. The role of social media in the recent US election reflects how the gap between different networks can produce completely disparate views of what is happening and why.

The most useful insights may come from considering how the dynamics of distributed algorithms evolve in relation to changing conditions. The correspondences between the regulation of collective behaviour and the changing conditions in which it operates might provide insight, and even inspire thinking about policy, in human social systems.  For ants or neurons, the network has no content. Studying natural systems can show us how the rhythm of local interactions creates patterns in the behaviour and development of large groups, and how such feedback evolves in response to a changing world.

Useful links

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Ants at Work: How an Insect Society is Organized Deborah M. Gordon

Ant Encounters: Interaction Networks and Colony Behavior (Primers in Complex Systems) Deborah M. Gordon

A complex global financial system

Adrian Blundell-Wignall, Special Advisor to the OECD Secretary-General on Financial and Enterprise Affairs

Global finance is the perfect example of a complex system, consisting as it does of a highly interconnected system of sub-systems featuring tipping points, emergence, asymmetries, unintended consequences, a “parts-within-parts” structure (to quote Herbert Simon), and all the other defining characteristics of complexity. It is shaped by numerous internal and external trends and shocks that it also influences and generates in turn. And since the system (in most parts) also reacts to predictions about it, it can be called a “level two” chaotic system (as described, for example, by Yuval Harari).

Numerous developments combined to contribute to the 2008 crisis, and several of them led to structures and institutions that might pose problems again. Two important trends that would play a huge role in the crisis were the opening up of OECD economies to international trade and investment after 1945, and rapid advances in digital technology and networks. These trends brought a greater complexity of financial products and structures needed to navigate this new world, going well beyond what was needed to meet the increased demand for cross-border banking: new products to facilitate the hedging of exchange rate and credit default risks; financial engineering to match the maturities required by savers and investors, and to take advantage of different tax and regulatory regimes; mergers and acquisitions not only of businesses but of stock exchanges and related markets with global capabilities; and new platforms and technological developments to handle the trading of volatile new products.

The freeing up of financial markets followed the opening of goods markets, and in some respects was the necessary counterpart of it. However, the process went very far: by the end of the 1990s policies encouraged the “financial supermarket” model, and by 2004 bank capital rules had become materially more favourable to bank leverage, as had rule changes for investment banks. The banking system became the epicentre of the global financial crisis because of the under-pricing of risk, essentially due to poor micro-prudential regulation, excessive leverage, and too-big-to-fail business models. The rise of the institutional investor, the expansion of leverage and derivatives, the general deepening of financial markets and technological advances led to innovations not only in products but also in how securities are traded, for example high-frequency trading. The increasing separation of owners from the governance of companies added a further layer of complexity compounding some of these issues (passive funds, ETFs, lending agents, custody, re-hypothecation, advisors and consultants are all in the mix).

The trends towards openness in OECD economies were not mirrored in emerging market economies (EMEs) generally, and in Asia in particular. Capital controls remained strong in some EMEs despite a strengthening and better regulated domestic financial system. Furthermore, capital control measures have often supported a managed exchange rate regime in relation to the US dollar. When countries intervene to fix their currencies versus the dollar, they acquire US dollars and typically recycle these into holdings of US Treasuries, very liquid and low-risk securities. There are two important effects of the increasingly large size of the “dollar bloc” EMEs: first, they compress Treasury yields as the stock of their holdings grows; second, their foreign exchange intervention means that the US economy faces a misalignment of its exchange rate vis-à-vis these trading partners.

Low interest rates, together with the more compressed yields on Treasury securities, have encouraged investors to search for higher-risk and higher-yield products. In “risk-on” periods this contributes to increased inflows into EME high-yield credit which, in turn, contributes to more foreign exchange intervention and increased capital control measures. The potential danger is that in “risk-off” periods, the attempt to sell these illiquid assets will result in huge pressures on EME funding and a great deal of volatility in financial markets.

The euro affects financial stability too, often in unexpected ways. European countries trade not only with each other but with the rest of the world. However, the north of Europe is, through global value chains, more vertically integrated into strongly growing Asia, owing to the demand for high-quality technology, infrastructure and other investment goods, while the south of Europe competes with EMEs to a greater degree in lower-level manufacturing trade. Asymmetric real shocks to different euro area regions, such as divergent fiscal policy or changes in EME competitiveness, mean that a one-size-fits-all approach to monetary policy creates economic divergence. The resulting bad loans feed back into financial fragility, and interconnectedness adds to the complexity of the problem.

Population ageing adds to these concerns, notably due to the interactions among longer life spans, low yields on the government bonds that underpin pension funds, and lack of saving by the less wealthy who were hardest hit by the crisis and may also suffer from future changes in employment and career structures. To meet yield targets, institutions have taken on more risk in products that are often less transparent and where providers are trying to create “artificial liquidity” that does not exist in the underlying securities and assets.

However big and complex the financial system, though, it is not an end in itself. Its role should be to help fund the economic growth and jobs that will contribute to well-being. But despite all the interconnectedness, paradoxically, as the OECD Business and Finance Outlook 2016 argues, fragmentation is blocking business investment and productivity growth.

In financial markets, information technology and regulatory reforms have paved the way for fragmentation, with an increased number of stock trading venues and the creation of so-called “dark trading” pools. Differences in regulatory requirements and disclosure among trading venues raise concerns about stock market transparency and the equal treatment of investors. Corporations may also be affected negatively if speed and complexity are rewarded over long-term investing.

Different legal regimes across countries and in the growing network of international investment treaties also fragment the business environment. National laws in different countries sanction foreign bribery with uneven and often insufficient severity, and many investment treaties have created rules that can fragment companies with respect to their investors and disrupt established rules on corporate governance and corporate finance.

Complexity is in the nature of the financial system, but if we want this system to play its role in funding inclusive, sustainable growth, we need to put these fragmented pieces back together in a more harmonious way.

Useful links

New Approaches to Economic Challenges (NAEC): The financial stream

OECD Business and Finance Outlook

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning