The Future of Economics: From Complexity to Commons

Paul B. Hartzog, Futurist

This article looks at three crucial insights for the future of economics: what complex adaptive systems are; how technologies of cooperation enable commons-based peer-to-peer networks; and why we need the tools of complex adaptive systems to understand new economies.

COMPLEX ADAPTIVE SYSTEMS

The Edge of Chaos

The study of complex adaptive systems has enjoyed considerable attention in recent decades. Chaos theory reveals that out of turbulence and nonlinear dynamics, complex systems emerge: order from chaos.

We learned that complex systems are poised on the “edge of chaos” and generate “order for free” (Stuart Kauffman). They are composed of many parts connected into a flexible network. As matter and energy flow through, they spontaneously self-organize into increasingly complex structures. These systems, continuously in flux, operate “far from equilibrium” (Ilya Prigogine). Beyond critical thresholds, differences in degree become differences in kind. “More is different.” (Phil Anderson)

Complexity science reveals the difference between prediction and attraction. We can know that a marble in a bowl will reach the bottom even though we cannot predict its exact path because of sensitivity to initial conditions. Deterministic chaos means path dependence, where future states are highly influenced by small changes in previous states. A typical economic example is the lock-in of the now-standard “QWERTY” keyboard.
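
The distinction between prediction and attraction can be made concrete with a toy dynamical system. The sketch below uses the logistic map (the parameters and starting points are chosen purely for illustration, not drawn from the article): in the chaotic regime, two almost identical starting points soon diverge, so the exact path cannot be predicted; in the contracting regime, every starting point is drawn to the same attractor, just as the marble reaches the bottom of the bowl.

```python
# Toy illustration of prediction vs. attraction using the logistic map
# x_{t+1} = r * x_t * (1 - x_t).  Parameters are illustrative only.

def trajectory(x0, r, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Chaotic regime (r = 4.0): two almost identical starting points diverge,
# so the exact path cannot be predicted (sensitivity to initial conditions).
a = trajectory(0.20000, r=4.0)
b = trajectory(0.20001, r=4.0)
print("chaotic regime, step 30:", round(a[30], 4), "vs", round(b[30], 4))

# Contracting regime (r = 2.5): any starting point is drawn to the same
# attractor (x* = 0.6), like the marble reaching the bottom of the bowl.
for x0 in (0.1, 0.5, 0.9):
    print("attractor regime, x0 =", x0, "->", round(trajectory(x0, r=2.5)[-1], 4))
```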

Networks

We see network effects: adding another node to a network increases the value of the whole more than proportionally, because many new connections become possible, economically “increasing returns to scale” (Brian Arthur). Reed’s Law goes even further: because new groups can also be formed, value grows exponentially with the number of members. We know about “small-world” and “scale-free” networks, the latter so called because no statistic at any single scale is representative of the network as a whole: instead of a bell-curve average there is a “long tail,” mathematically a power law. Some networks are robust to random failures but vulnerable to selective damage, i.e. attacks that target nodes with higher centrality; and “centrality” means different things in different network topologies. Network structure affects the frequency and magnitude of cascades. Like avalanches in sand piles, such cascades can exhibit “self-organized criticality” (Per Bak), whose signature is a power-law distribution of event sizes.
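
As a rough numerical sketch of these growth laws (using the standard stylised formulas rather than anything stated in the article): Metcalfe-style value scales with the number of possible pairwise connections, while Reed’s Law counts the possible subgroups, which grows exponentially and soon dwarfs it.

```python
# Stylised comparison of network-value laws (illustrative only):
# Metcalfe-style value ~ number of possible pairwise links, n*(n-1)/2
# Reed's Law value     ~ number of possible non-trivial subgroups, 2**n - n - 1

def metcalfe(n):
    return n * (n - 1) // 2

def reed(n):
    return 2**n - n - 1

for n in (5, 10, 20, 30):
    print(f"n={n:>2}  pairwise links={metcalfe(n):>6}  possible groups={reed(n):>12}")
```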

Information Landscapes

Complex systems constitute “fitness landscapes,” exhibit cycles of growth and decline, are punctuated by explosions of diversity and periods of stasis, and show waves of ebb and flow, seen in traffic patterns. On fitness landscapes, algorithms that pursue merely maximization, without the ability to observe remote information from the landscape, freeze in local optima. Without system diversity, there is no improvement. Swarms escape because they not only read information from the landscape but also write to it, creating shared information environments.
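
A toy search example can illustrate the point about local optima (the landscape and all parameters below are invented for illustration): a lone hill-climber freezes on the nearest peak, while agents that also “write” their best finds to a shared record let the group escape to better ones.

```python
import math, random

random.seed(1)

def fitness(x):
    # A rugged 1-D "landscape" with several local peaks (invented for illustration).
    return math.sin(3 * x) + 0.5 * math.sin(7 * x)

def hill_climb(x, steps=200, step=0.01):
    # Pure local maximisation: only accept a move if it improves fitness.
    for _ in range(steps):
        candidate = x + random.choice((-step, step))
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# A lone climber freezes on whatever peak is nearest its start.
lone = hill_climb(2.5)
print("lone climber:", round(lone, 2), "fitness", round(fitness(lone), 2))

# A "swarm" also writes to a shared record (the best point found so far),
# so agents stuck on poor peaks can relocate near better ones.
best_x = 2.5
for _ in range(20):
    start = best_x + random.uniform(-1.5, 1.5)   # read the shared record, explore around it
    found = hill_climb(start)
    if fitness(found) > fitness(best_x):          # write back an improvement
        best_x = found
print("swarm       :", round(best_x, 2), "fitness", round(fitness(best_x), 2))
```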

Landscapes and occupants impart selection pressures on each other. Good employees and good jobs both outperform bad ones. Agents and strategies evolve. Adaptation can become maladaptation when selection pressures change.

Dynamics and Time

When we study the spread of disease through a forest we see a slow progression of infected trees. However, when we study the spread of fire, we see the same pattern enacted much faster.

Complex systems and their dynamics are not new. What is new is that human systems have accelerated to the point where political, economic, and social changes now occur rapidly enough to appear within the threshold of human perception. We have moved from slow-moving social movements to an era of “smart mobs.” Consequently, while it may be true that we did not need the tools of complex systems in the past, when economic change was slow and did not require a dynamical viewpoint, the current speed of economic change demands this new lens.

THE EMERGENCE OF COMMONS-BASED PEER-TO-PEER NETWORKS

A crucial global economic phenomenon is the rise of commons-based peer-to-peer networks. “Technologies of cooperation” (Howard Rheingold) enable people to self-organize in productive ways. Open-source software was an early clue to powerful new ways of organizing labor and capital. “Commons-based peer-production” is radically cost-effective (Yochai Benkler). By “governing the commons” (Elinor Ostrom), shared resources managed by communities with polycentric, horizontal rules, without reliance on either the state or the market, escape the “tragedy of the commons.” Our thinking about production, property, and even the state must evolve to reflect the growing participatory economy of global stewardship and collectively-driven “platform cooperatives” (Michel Bauwens). New commons include food, energy, “making,” health, education, news, and even currency.

The rise of 3D printing and the Internet of Things, combined with participatory practices, yields new forms of value production, paralleled by new forms of value accounting and exchange. We are witnessing a “Cambrian explosion” of new currency species, such as Bitcoin, and of innovative trust technologies to support them: the blockchain and distributed ledgers. Just as 20th-century electrical infrastructure remained fragmented until standards enabled a connected network (Thomas Hughes), new infrastructure matures when separate solutions merge and the parts reinforce the stability of the whole.

THE FUTURE FATE OF ECONOMICS

Economics as a discipline can only remain relevant as long as it can provide deep engagement with contemporary reality. Overly-simplified models and problematic axioms cannot guide us forward. The world is an interwoven, heterogeneous, adaptive “panarchy.”

Harnessing complexity requires understanding the frequency, intensity, and “sync” of global connectivity. Analyzing many futures demands better tools. To analyze “big data,” first we need data. Complexity science utilizes multi-agent simulations to investigate many outcomes, sweep parameters, and identify thresholds, attractors, and system dynamics. Complexity methods provide unique metrics and representations, animated visuals rather than static graphs.
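
As a minimal sketch of what such a multi-agent parameter sweep might look like (the contagion model and all numbers below are invented for illustration): a single failure is allowed to spread over a random network, and sweeping the connectivity parameter reveals the threshold at which local events become system-wide cascades.

```python
import random

random.seed(0)

def cascade_size(n=200, p_link=0.02, trials=20):
    """Average share of nodes eventually 'infected' when one random node fails."""
    total = 0.0
    for _ in range(trials):
        # Build a random (Erdos-Renyi-style) network.
        neighbours = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p_link:
                    neighbours[i].add(j)
                    neighbours[j].add(i)
        # Let a single failure spread to every node it can reach.
        seed_node = random.randrange(n)
        infected, frontier = {seed_node}, [seed_node]
        while frontier:
            node = frontier.pop()
            for nb in neighbours[node]:
                if nb not in infected:
                    infected.add(nb)
                    frontier.append(nb)
        total += len(infected) / n
    return total / trials

# Sweep the connectivity parameter to locate the threshold where cascades
# jump from local events to system-wide ones.
for p in (0.002, 0.005, 0.01, 0.02):
    print(f"link probability {p:.3f}: average cascade = {cascade_size(p_link=p):.2%}")
```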

This is not just big data; it’s dynamic data. With distributed systems, it becomes peer-to-peer data: shared infrastructure. Just as ants leave trails for others, shared infrastructure bolsters interoperability through a knowledge commons. Restricting connectivity and innovation, for example through intellectual property rights, now carries extreme costs. Uncooperative agents and strategies pay a fitness penalty. Fortunately, the new commons already come with novel “copyleft” licenses that promote fairness and equity.

Complexity science shows us not only what to do, but also how to do it: build shared infrastructure, improve information flow, enable rapid innovation, encourage participation, support diversity and citizen empowerment.

Useful links

Panarchy 101, or How I Learned to Stop Worrying and Love Global Collapse, by Paul B. Hartzog

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

A Pragmatic Holist: Herbert Simon, Economics and “The Architecture of Complexity”

Vela Velupillai, Madras School of Economics

“Herb had it all put together at least 40 years ago – and I’ve known him only for 35.” Allen Newell, 1989.

And so it was, with Hierarchy in 1950, Near-Decomposability from about 1949, and Causality, underpinning the reasonably rapid evolution of dynamical systems into a series of stable complex structures. Almost all of these pioneering articles are reprinted in Simon’s 1977 collection and, moreover, the hierarchy and near-decomposability classics appear in section 4 with the heading “Complexity”. The cybernetic vision became the fully-fledged digital computer basis of boundedly rational human problem solvers implementing heuristic search procedures to prove, for example, axiomatic mathematical theorems (in the monumental Principia Mathematica of Russell & Whitehead), substantiating Allen Newell’s entirely reasonable claim quoted above.

In defining the notion of complexity in The Architecture of Complexity (AoC), Simon eschews formalisms and relies on a rough, working, concept of complex systems that would help identify examples of observable structures – predominantly in the behavioural sciences – that could lead to theories and, hence, theorems, of evolving dynamical systems that exhibit properties that are amenable to design and prediction with the help of hierarchy, near-decomposability and causality. Thus, the almost informal definition is (italics added): “Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts … in the … pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist.”

Simon was always a pragmatic holist, even while attempting the reduction of the behaviour of complex entities to parsimonious processes that would exhibit the properties of  “wholes”, based on nonsimply interacting “parts”, that may themselves be simple. He summarised the way this approach could apply to economics in a letter to Professor Axel Leijonhufvud and me after reading my book Computable Economics. (You can see the letter here.) Simon argued that:

“Finally, we get to the empirical boundary … of the level of complexity that humans actually can handle, with and without their computers, and – perhaps more important – what they actually do to solve problems that lie beyond this strict boundary even though they are within some of the broader limits.

The latter is an important point for economics, because we humans spend most of our lives making decisions that are far beyond any of the levels of complexity we can handle exactly; and this is where satisficing, floating aspiration levels, recognition and heuristic search, and similar devices for arriving at good-enough decisions take over. [The term ‘satisfice’, which appears in the Oxford English Dictionary as a Northumbrian synonym for ‘satisfy’, was borrowed by Simon (1956) in ‘Rational Choice and the Structure of the Environment’ to describe a strategy for reaching a decision the decider finds adequate, even if it’s not optimal in theory.] A parsimonious economic theory, and an empirically verifiable one, shows how human beings, using very simple procedures, reach decisions that lie far beyond their capacity for finding exact solutions by the usual maximizing criteria.”

In many ways, AoC summarised Simon’s evolving (sic!) visions of a quantitative behavioural science, which provided the foundations of administering complex, hierarchically structured, causal organisations, by boundedly rational agents implementing – with the help of digital computers – procedures that were, in turn, reflections of human problem solving processes. But it also presaged the increasing precision of predictable reality – not amounting to non-pragmatic, non-empirical phenomena – requiring an operational description of complex systems that were observable in nature, resulting from the evolutionary dynamics of hierarchical structures. Thus, the final – fourth – section of AoC “examines the relation between complex systems and their descriptions” – for which Simon returned to Solomonoff’s pioneering definition of algorithmic information theory.

AoC was equally expository on the many issues with which we have come to associate Simon’s boundedly rational agents (and Institutions) satisficing – instead of optimising, again for pragmatic, historically observable, realistic reasons – using heuristic search processes in Human Problem Solving contexts of behavioural decisions. The famous distinction between substantive and procedural rationality arose from the dichotomy of a state vs process description of a world “as sensed and … as acted upon”.

Essentially AoC is suffused with pragmatic definitions and human procedures of realistic implementations, even in the use of digital computers. Computability theory assumes the Church-Turing Thesis in defining algorithms. The notion of computational complexity is predicated upon the assumption of the validity of the Church-Turing Thesis. Simon’s algorithms for human problem solvers are heuristic search processes, where no such assumption is made. Hence the feeling that engulfed him in his later years is not surprising (italics added):

“The field of computer science has been much occupied with questions of computational complexity, the obverse of computational simplicity. But in the literature of the field, ‘complexity’ usually means something quite different from my meaning of it in the present context. Largely for reasons of mathematical attainability, and at the expense of relevance, theorems of computational complexity have mainly addressed worst-case behaviour of computational algorithms as the size of the data set grows larger. In the limit, they have even focused on computability in the sense of Gödel, and Turing and the halting problem. I must confess that these concerns produce in me a great feeling of ennui.”

Useful links

A version of this article with added commentary and references is available here.

As mentioned above, Herbert Simon wrote to Professors Axel Leijonhufvud and Kumaraswamy Velupillai after reading Professor Velupillai’s Computable Economics. You can see the letter here.

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Ants, algorithms and complexity without management

Deborah M. Gordon, Department of Biology, Stanford University

Systems without central control are ubiquitous in nature. The activities of brains, such as thinking, remembering and speaking, are the outcome of countless electrical interactions among cells. Nothing in the brain tells the rest of it to think or remember. I study ants because I am interested in how collective outcomes arise from interactions among individuals, and how collective behaviour is tuned to changing environments.

There are more than 14,000 species of ants, which all live in colonies consisting of one or more reproductive females, and many sterile workers, which are the ants that you see walking around. Although the reproductive females are called “queens”, they have no power or political authority. One ant never directs the behaviour of another or tells it what to do. Ant colonies manage to collect food, build and maintain nests, rear the young, and deal with neighbouring colonies – all without a plan.

The collective behaviour of colonies is produced by a dynamical network of simple interactions among ants.  In most ant species, the ants can barely see. They operate mostly by smell. As an ant moves around it briefly contacts other ants with its antennae, or it may contact a short-lived patch of a volatile chemical recently left behind by another ant. Ants smell with their antennae, and when one ant touches another with its antennae, it assesses whether the other ant is a nestmate, and sometimes what task the other ant has been performing. The ant uses its recent experience of chemical interactions to decide what to do next. In the aggregate, these simple interactions create a constantly shifting network that regulates the behaviour of the colony.

The process that generates colony behaviour from simple interactions is what computer scientists call a distributed algorithm. No single unit, such as an ant or a router in a data network, knows what all the others are doing and tells them what to do. Instead, interactions between each unit and its local connections add up to the desired outcome.

The distributed processes that regulate the collective behaviour of ants are tuned to environmental conditions. For example, harvester ants in the desert face high operating costs, and their behaviour is regulated by feedback that limits activity unless it is necessary. A colony must spend water to get water. The ants get water by metabolizing the fats in the seeds they eat. A forager out in the desert sun loses water while out searching for food. Colonies manage this tradeoff by a simple form of feedback. An outgoing forager does not leave the nest until it meets enough returning foragers with seeds. This makes sense because each forager searches until it finds food. Thus the more food is available, the more quickly they find it and return to the nest, stimulating more foragers to go out to search. When food is not available, foraging activity decreases. A long-term study of a population of colonies shows that the colonies that conserve water in dry conditions by staying inside are more successful in producing offspring colonies.
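
A toy simulation can capture this feedback rule (the numbers below are invented for illustration and are not Gordon’s field data): foragers waiting in the nest go out only when enough food-bearing foragers have recently returned, so activity is sustained when food is plentiful and winds down when it is scarce.

```python
import random

random.seed(42)

def simulate(food_availability, minutes=120, nest_size=100, trigger=4):
    """Toy harvester-ant model (illustrative numbers, not field data):
    foragers waiting in the nest only go out when enough food-bearing
    foragers have returned in the last minute."""
    outside = 20                      # foragers currently out searching
    inside = nest_size - outside      # foragers waiting in the nest
    total_trips = 0
    for _ in range(minutes):
        # Each forager outside finds a seed and returns with a probability
        # set by food availability (otherwise it keeps searching).
        returns = sum(1 for _ in range(outside) if random.random() < food_availability)
        outside -= returns
        inside += returns
        total_trips += returns
        # Positive feedback: departures are stimulated by recent returns,
        # but only if returns exceed a threshold ('trigger').
        departures = min(inside, returns) if returns >= trigger else 0
        inside -= departures
        outside += departures
    return total_trips

for food in (0.05, 0.2, 0.5):
    print(f"food availability {food:.2f}: foraging trips = {simulate(food)}")
```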

By contrast, another species called “turtle ants”, living in the trees of a tropical forest in Mexico, regulates its behaviour very differently. The turtle ants create a highway system of trails that links different nests and food sources. Operating costs are low because it is humid in the tropical forest, but competition from other species is high. These ants interact using trail pheromones, laying down a chemical trail everywhere they go. An ant tends to follow another, and this simple interaction keeps the stream of ants going, except when it is deterred by encounters with other species. In conditions of low operating costs, interactions make ongoing activity the default state, with negative feedback used to inhibit activity. This is the opposite of the desert ants’ system, which requires positive feedback to initiate activity.

What can we learn from ants about human society? Ants have been used throughout history as examples of obedience and industry. In Greek mythology, Zeus changes the ants of Thessaly into men, creating an army of soldiers who would become famous as the Myrmidons, ready to die for Achilles (from myrmex – μύρμηξ – ant). In the Bible (Proverbs 6:6), we are told to “Look to the ant” who harvests grain in the summer to save for the winter. But ants are not acting out of obedience, and they are not especially industrious; in fact, many ants just hang around in the nest doing nothing.

Ants and humans are very different. Power and identity are crucial to human social behaviour, and absent in ants. Ants do not have relations with other ants as individuals. As an ant assesses its recent interactions with others, it does not matter whether it met ant number 522 or ant number 677. Even more fundamentally, an ant does not act in response to any assessment of what needs to be done.

However, we may be able to learn from ants about the behaviour of very large dynamical networks by focussing on the pattern or structure of interactions rather than the content. While we care about what our emails say, the ants care only about how often they get them. It is clear that many human social processes operate without central control. For instance, we see all around us the effects of climate change driven by many different social processes that are based on the use of fossil fuel. No central authority decided to pump carbon into the atmosphere, but the CO2 levels are the result of human activity.  Another obvious example is the internet, a huge dynamical network of local interactions in the form of email messages and visits to websites. The role of social media in the recent US election reflects how the gap between different networks can produce completely disparate views of what is happening and why.

The most useful insights may come from considering how the dynamics of distributed algorithms evolve in relation to changing conditions. The correspondences between the regulation of collective behaviour and the changing conditions in which it operates might provide insight, and even inspire thinking about policy, in human social systems.  For ants or neurons, the network has no content. Studying natural systems can show us how the rhythm of local interactions creates patterns in the behaviour and development of large groups, and how such feedback evolves in response to a changing world.

Useful links

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Ants at Work: How an Insect Society is Organized Deborah M. Gordon

Ant Encounters: Interaction Networks and Colony Behavior (Primers in Complex Systems) Deborah M. Gordon

A complex global financial system

Adrian Blundell-Wignall, Special Advisor to the OECD Secretary-General on Financial and Enterprise Affairs

Global finance is the perfect example of a complex system, consisting as it does of a highly interconnected system of sub-systems featuring tipping points, emergence, asymmetries, unintended consequences, a “parts-within-parts” structure (to quote Herbert Simon), and all the other defining characteristics of complexity. It is shaped by numerous internal and external trends and shocks that it also influences and generates in turn. And since the system (in most parts) also reacts to predictions about it, it can be called a “level two” chaotic system (as described, for example, by Yuval Harari).

Numerous developments combined to contribute to the 2008 crisis, and several of them led to structures and institutions that might pose problems again. Two important trends that would play a huge role in the crisis were the opening up of OECD economies to international trade and investment after 1945, and rapid advances in digital technology and networks. These trends brought a greater complexity of financial products and structures needed to navigate this new world. They went well beyond meeting the increased demand for cross-border banking to include new products that would facilitate hedging of exchange rate and credit default risks; financial engineering to match the maturities required by savers and investors, and to take advantage of different tax and regulatory regimes; mergers and acquisitions not only of businesses, but of stock exchanges and related markets with global capabilities; and new platforms and technological developments to handle the trading of volatile new products.

The freeing up of financial markets followed the opening of goods markets, and in some respects was the necessary counterpart of it. However, the process went very far: by the end of the 1990s policies encouraged the “financial supermarket” model, and by 2004 bank capital rules had become materially more favourable to bank leverage, as had rule changes for investment banks. The banking system became the epicentre of the global financial crisis because of the under-pricing of risk, essentially due to poor micro-prudential regulation, excessive leverage, and too-big-to-fail business models. The rise of the institutional investor, the expansion of leverage and derivatives, the general deepening of financial markets and technological advances led to innovations not only in products but also in how securities are traded, for example high-frequency trading. The increasing separation of owners from the governance of companies also added a new layer of complexity, compounding some of these issues (passive funds, ETFs, lending agents, custody, re-hypothecation, advisors and consultants are all in the mix).

The trends towards openness in OECD economies were not mirrored in emerging market economies (EMEs) generally, and in Asia in particular. Capital controls remained strong in some EMEs despite a strengthening and better regulated domestic financial system. Furthermore, capital control measures have often supported a managed exchange rate regime in relation to the US dollar. When countries intervene to fix their currencies versus the dollar, they acquire US dollars and typically recycle these into holdings of US Treasuries, very liquid and low-risk securities. There are two important effects of the increasingly large size of “dollar bloc” EMEs: first, they compress Treasury yields as the stock of their holdings grows; second, their foreign exchange intervention means that the US economy faces a misalignment of its exchange rates vis-à-vis these trading partners.

Low interest rates, together with the more compressed yields on Treasury securities, have encouraged investors to search for higher-risk and higher-yield products. In “risk-on” periods this contributes to increased inflows into EME high-yield credit which, in turn, contributes to more foreign exchange intervention and increased capital control measures. The potential danger is that in “risk-off” periods, the attempt to sell these illiquid assets will result in huge pressures on EME funding and a great deal of volatility in financial markets.

The euro affects financial stability too, often in unexpected ways. European countries trade not only with each other but with the rest of the world. However, the north of Europe is, through global value chains, more vertically integrated into strongly growing Asia due to the demand for high-quality technology, infrastructure, and other investment goods, while the south of Europe is competing with EMEs to a greater degree in lower-level manufacturing trade. Asymmetric real shocks to different euro area regions, such as divergent fiscal policy or changes in EME competitiveness, mean that a one-size-fits-all approach to monetary policy creates economic divergence. Resulting bad loans feed back into financial fragility issues, and interconnectedness adds to the complexity of the problem.

Population ageing adds to these concerns, notably due to the interactions among longer life spans, low yields on the government bonds that underpin pension funds, and lack of saving by the less wealthy who were hardest hit by the crisis and may also suffer from future changes in employment and career structures. To meet yield targets, institutions have taken on more risk in products that are often less transparent and where providers are trying to create “artificial liquidity” that does not exist in the underlying securities and assets.

However big and complex the financial system, though, it is not an end in itself. Its role should be to help fund the economic growth and jobs that will contribute to well-being. But despite all the interconnectedness, paradoxically, as the OECD Business and Finance Outlook 2016 argues, fragmentation is blocking business investment and productivity growth.

In financial markets, information technology and regulatory reforms have paved the way for fragmentation with respect to an increased number of stock trading venues and have created so-called “dark trading” pools. Differences in regulatory requirements and disclosure among trading venues raise concerns about stock market transparency and the equal treatment of investors. Corporations may also be affected negatively if speed and complexity are rewarded over long-term investing.

Different legal regimes across countries and in the growing network of international investment treaties also fragment the business environment. National laws in different countries sanction foreign bribery with uneven and often insufficient severity, and many investment treaties have created rules that can fragment companies with respect to their investors and disrupt established rules on corporate governance and corporate finance.

Complexity is in the nature of the financial system, but if we want this system to play its role in funding inclusive, sustainable growth, we need to put these fragmented pieces back together in a more harmonious way.

Useful links

New Approaches to Economic Challenges (NAEC): The financial stream

OECD Business and Finance Outlook

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Innovation and complexity

Andrew Wyckoff, Director, OECD Directorate for Science, Technology and Innovation

Since its creation in 1961, the OECD has influenced how governments approach science, technology and innovation, and how economics as a discipline tries to understand these phenomena. The OECD Working Party of National Experts on Science and Technology Indicators (NESTI) was created in 1962, and in 1963, Science, economic growth and government policy convinced governments that science policy should be linked to economic policy. In 1971, Science, growth and society (also called the “Brooks Report” after its Chair, Harvey Brooks) anticipated many of today’s concerns by emphasising the need to involve citizens in assessing the consequences of developing and using new technologies.

For many experts though, the major contribution was the concept of national innovation systems, presented in 1992 in a landmark publication, Technology and the Economy: The Key Relationships. The origins of the concept go back to the 1970s crisis, which had provoked an in-depth re-examination of previous economic thinking on how growth came about and why growth in productivity was slowing. A 1980 OECD report, Technical Change and Economic Policy, is now widely recognised as the first major policy document to challenge the macroeconomic interpretations of the 1970s crisis, and to emphasise the role of technological factors in finding solutions, arguing for instance that innovation can be more powerful than wage competitiveness in stimulating an economy.

Economists working at the OECD were pioneers of a new approach that saw innovation not as something linear but as an ecosystem involving interactions among existing knowledge, research, and invention; potential markets; and the production process. In national innovation strategies, one of the key issues is the interactions among the different actors: companies, public research institutions, intermediary organisations, and so on. And contrary to the dominant thinking in policy circles in the 1980s and early 1990s, the OECD also saw it as something that governments should play a central role in – hence the term national innovation strategy.

Today, services are becoming the focus of innovation, with some companies even blurring the distinction between the value-added of products and services, smartphones being a good example. This is a logical outcome of the increasing digitalisation of the economy. Digital technologies are now so ubiquitous that it is easy to forget how recent they are. The World Wide Web we know today, for example, was created in the 1990s, and Microsoft thought it was possible to launch a rival to the Internet (called MSN) as late as 1995. Google was only founded in 1998, and it would be six years before it went public.

With the digital economy and society coming so far in such a short time, it is hard to predict what they will look like in the future. We can however identify some of the drivers of change. Big Data will be among the most important. In The phenomenon of data-driven innovation, the OECD quotes figures suggesting that more than 2.5 exabytes (EB, a billion gigabytes) of data are generated every single day, the equivalent of 167 000 times the information contained in all the books in the US Library of Congress. The world’s largest retail company, Walmart, already handles more than 1 million customer transactions every hour. Because so many new data are available, it will be possible to develop new models exploiting the power of a complexity approach to improve understanding in the social sciences, including economics. Also, the policy making process may benefit from new ways of collecting data on policies themselves and vastly improving our evaluation capabilities.
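
A quick order-of-magnitude check, using only the two figures quoted above, shows what they jointly imply about the size of the Library of Congress text collection.

```python
# Rough order-of-magnitude check of the quoted figures (illustrative only).
daily_data_gb = 2.5e9          # 2.5 exabytes expressed in gigabytes
multiple_of_loc = 167_000      # "167 000 times the Library of Congress"

# The two figures together imply a Library of Congress collection of ~15 TB.
implied_loc_gb = daily_data_gb / multiple_of_loc
print(f"Implied Library of Congress size: {implied_loc_gb:,.0f} GB (~{implied_loc_gb/1000:.0f} TB)")
```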

The analysis of data (often in real time), increasingly from smart devices embedded in the Internet of Things, opens new opportunities for value creation through the optimisation of production processes and the creation of new services. This “industrial Internet” is creating its own complex systems, empowering autonomous machines and networks that can learn and make decisions independently of human involvement. This can generate new products and markets, but it can also create chaos in existing markets, as various financial flash crashes have shown.

Two sets of challenges, or tensions, need to be addressed by policy makers to maximise the benefits of digitally-driven innovation and mitigate the associated economic and societal risks. The first is to promote “openness” in the global data ecosystem, and thus the free flow of data across nations, sectors, and organisations, while at the same time addressing individuals’ and organisations’ opposing interests (in particular protecting their privacy and their intellectual property). The second set of tensions requires finding policies to activate the enablers of digitally-driven innovation while addressing the effects of the “creative destruction” induced by this innovation. Moreover, there is a question concerning the efficacy of national policies, as digitally-driven innovation is global by definition. As a policy maker you can promote something in your country, but the spillovers in terms of employment or markets can be somewhere else.

With so many new technologies being introduced, more firms and countries being integrated into global value chains, and workers becoming more highly educated everywhere, you would expect productivity growth to be surging. In fact it is slowing. But that average trend hides the true picture, according to an OECD study on The Future of Productivity. Labour productivity in the globally most productive firms (“global frontier” firms) grew at an average annual rate of 3.5% in the manufacturing sector over the 2000s, compared to 0.5% for non-frontier firms.
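
Compounding those two growth rates shows how quickly the gap widens (the 3.5% and 0.5% rates are the ones quoted above; the ten-year horizon is simply illustrative).

```python
# Cumulative effect of the productivity growth gap quoted above,
# compounded over a ten-year period (horizon chosen for illustration).
frontier_rate, laggard_rate, years = 0.035, 0.005, 10

frontier_level = (1 + frontier_rate) ** years   # ~1.41: +41% over the decade
laggard_level  = (1 + laggard_rate) ** years    # ~1.05: +5% over the decade

print(f"Frontier firms:     +{(frontier_level - 1):.0%}")
print(f"Non-frontier firms: +{(laggard_level - 1):.0%}")
print(f"Relative gap opened: {frontier_level / laggard_level - 1:.0%}")
```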

Diffusion of the know-how from the pioneering frontier firms to the bulk of the economy hasn’t occurred – either because channels are blocked or because we are in a transformative period and the expertise for how best to exploit the technologies is still in the heads of a few.  Most likely, it is a combination of the two.  We therefore have to help the global frontier firms to continue innovating and facilitate the diffusion of new technologies and innovations from the global frontier firms to firms at the national frontier. We can try to create a market environment where the most productive firms are allowed to thrive, thereby facilitating the more widespread penetration of available technologies and innovations. And we have to improve the matching of skills to jobs to better use the pool of available talent in the economy, and allow skilled people to change jobs, spreading the know-how as they move.

In a complex system, you can’t forecast outcomes with any great degree of certainty, but many of the unintended outcomes of interactions in the innovation system are beneficial. The policies mentioned above would each be useful in themselves and would hopefully reinforce each other beneficially.

Useful links

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

OECD work on innovation

The Innovation Policy Platform (IPP), developed by the Organisation for Economic Co-operation and Development (OECD) and the World Bank is a web-based interactive space that provides easy access to knowledge, learning resources, indicators and communities of practice on the design, implementation, and evaluation of innovation policies.

Sing for our time too, or what Homer can teach us about complexity

Last week’s Workshop on Complexity and Policy organised by the OECD New Approaches to Economic Challenges (NAEC) team along with the European Commission and the Institute for New Economic Thinking (INET) included a discussion about how you build a narrative around complexity. As one participant pointed out, “complexity economics” isn’t the most thrilling of titles, except (maybe) to complexity economists. But “narrative” was one of the keywords of the discussions, along with “navigating” complexity. If you add to this Lex Hoogduin’s plea for modesty in his article on Insights and during the debate, I think we could learn something from an expert on narrative, navigation, and modesty: Homer.

The Iliad and Odyssey start with similar requests to the Muse to tell the tale of the hero, but with one striking exception. In the Iliad, she is asked to tell of the anger of Achilles, and the epic that follows is a more or less chronological account of ten days at the end of the Trojan War. In The Odyssey on the other hand, the poet suggests that the goddess start the tale wherever she thinks is best. One reason could be that, in our terms, The Iliad is a linear account, where one event causes and leads to the next, while The Odyssey is complex, jumping all over the place in space and time, with events far apart influencing each other, often in unintended ways.

Where you start a complex narrative determines what you describe and, to some extent, how you describe it. If, for example, you start your explanation of the financial crisis with the collapse of Lehman Brothers, you will tell the story one way. If you start a few years earlier with market deregulation, the story will be different. Go back to the end of unlimited liability of stakeholders and yet another plot and set of characters become possible. Wherever you started, you would tell a true story, but not the only story. So in telling a complex story, you have to first decide what you want the audience to remember, and then decide what combination of the limitless elements available would best allow them to understand the issues and agree with a course of action.

Another lesson we can learn from Homer is that in a non-complex telling, there can be a “God’s-eye view” of the narrative, as when Achilles contemplates the shield made for him by the god Hephaestus. In The Odyssey, the narrator doesn’t have this knowledge, and is in fact part of the story himself, influencing its outcome. Eric Beinhocker of INET, who co-organised the NAEC Complexity workshop, relates this to Gödel’s incompleteness theorems, arguing that it may be impossible for an agent embodied within the system to access information an agent outside the system with a God’s-eye view would have.

Once you have decided what you want to say and selected what you are going to use to say it, there remains the question of how to say it. Policy experts, like experts in other fields, often defend their poor communication by explaining that the subject is complicated and shouldn’t be dumbed down. Here’s an extract from Einstein’s critique of Newtonian cosmology in Relativity: The Special and General Theory: “If we ponder over the question as to how the universe, considered as a whole, is to be regarded, the first answer that suggests itself to us is surely this: As regards space (and time) the universe is infinite. There are stars everywhere, so that the density of matter, although very variable in detail, is nevertheless on the average everywhere the same. In other words: However far we might travel through space, we should find everywhere an attenuated swarm of fixed stars of approximately the same kind and density.”

Practically any adult or young person who can read can understand Einstein’s point, however complicated the subject. Here by way of contrast is the OECD explaining a fundamental concept in economics: “…the relative cost differences that define comparative advantage, and are the source of trade, disappear once one reaches equilibrium with free trade. That is, the two countries in the trading equilibrium in Figure 1.2 are both operating at points on their PPFs where the slope is equal to the common world relative price. Thus comparative advantage cannot be observed, in a free trade equilibrium, from relative marginal costs.” Can you tell from this if we’re for or against free trade?

It’s striking that in so many domains, the greatest experts are the greatest advocates for simplicity. David Hilbert set the agenda for 20th century mathematics at the 1900 International Congress of Mathematicians in Paris in a paper on 23 unsolved problems. Hilbert supported the view that: “A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street”. Maths genius Alan Turing was even more provocative, claiming that “No mathematical method can be useful for any problem if it involves much calculation.”  (Turing wrote a paper on computability without using any equations, basing his explanation on puzzles sold in toyshops.)

We can learn a final lesson from Homer in the character of his heroes. Achilles is arrogant, immature, impulsive, self-centred (“the best of the Achaeans”, making you wonder what the rest of them were like). He’s strong and is good at killing people but ends up dead. Ulysses is clever and is good at persuading people. He is modest and he listens to advice. He worries about others. And he navigates his way back to Ithaca and Penelope. In a complex world, today or as described by Homer, you will achieve more through strategy and resourcefulness than by brute force. The poet doesn’t just ask the goddess to “start from where you will”, he asks her to “sing for our time too”.

Useful links

You can see the webcast of the Workshop at these links: 29/09 morning; 29/09 afternoon; 30/09 morning

Navigating wicked problems

Julia Stockdale-Otárola, OECD Public Affairs and Communications Directorate

Knowing there is a single clear solution to any problem is certainly a comforting idea. As children we would raise our hands in class to answer increasingly difficult questions – always hoping that we would “get it right”. But sometimes the question itself is ambiguous and the list of potential solutions endless.

Such is the case with wicked problems.

The term isn’t a moral judgement. Wicked problems are dynamic, poorly structured, persistent and social in nature. Difficult to define, highly intertwined with other social issues, and involving many actors, wicked problems reflect the complexity of the world we live in. For example, think of policy challenges such as climate change, immigration, poverty, nutrition, education, or homelessness. Each issue involves multiple drivers, impacting various policy domains and levels of government. To further complicate matters, any intervention could set off a chain of new unintended consequences. That’s a lot of moving parts.

All these factors make it difficult for anyone to agree on what the actual problem is, where it is rooted, who is responsible, and how to best address it. The scope of the problem is also vague. Entire systems can be involved in a seemingly local or regional problem like mass transit.

Clearly coming to grips with the issue is challenge enough, so how do we go about making decisions? So far, traditional approaches have proven unsatisfactory. In fact, many of these wicked problems seem to only get worse as we try to solve them.

The complexities involved force us to rethink our problem-solving strategy. Instead of trying to find a final solution we need to recognise that these challenges can, generally speaking, at best be managed but not solved. At least, not solved in a static sense. That doesn’t mean the situation can’t be improved. To some, it might even be “solved” depending on how the problem is defined. The bottom line is that we need to become more flexible to better manage the challenges posed by wicked problems. Policies should be adaptive, so that they can change as the issue evolves over time. We also need to avoid becoming too attached to our own solutions. They need to be dynamic, to change along with the problem at hand.

From the outset we need to look at problems more holistically. An increasing number of new approaches are being developed in different fields to offer solutions. For example, complexity science is naturally adaptive as it looks at the way in which systems interact. To date this strategy has been helpful, for example, in improving traffic management: analytics techniques are applied to anticipate risks and traffic jams, improve flow and make traffic safer. Implementing pilot projects can also be useful in addressing wicked problems, when affordable, as they involve continuous monitoring and opportunities for adjustment. Though no magic formula exists, these approaches can help capture some of the intricacies of wicked problems.

Governments have already started using some of these adaptive strategies. Singapore’s government has introduced a mix of policy approaches to tackle wicked problems. For example, a matrix approach was implemented to help departments better share information and work horizontally; new departments reflecting the thorniest issues were established; and a computerised tool was developed to help mitigate systemic risks. Though the island state has the advantage of small size, facilitating the implementation of new approaches, its experiences may provide some useful insights into best practices.

The OECD has also been looking at policy challenges as wicked problems. In a 2009 workshop on policy responses to societal concerns, Sandra Batie and David Schweikhardt of Michigan State University analysed trade liberalisation as a wicked problem. In this case, the role of stakeholders is typical of a wicked problem: different groups are likely to have differing ideas about what the real problem is and what its causes are. Some would say the issue is making the economy as open as possible while for others national sovereignty or protecting local producers may be more important.

Unlike a tame problem where scientifically based protocols guide the choice of solution, answers to the question of whether more trade liberalisation is needed depend on the judgements and values of whoever is answering. Many stakeholders will simply reject outright arguments to justify trade liberalisation based on neoclassical economics. Batie and Schweikhardt argued that the role of science, including economic science, is not to narrow the range of options to one (in this case trade liberalisation), but rather to expand the options for addressing the issue(s), and to highlight the consequences, including distributional consequences, of alternative options.

Wicked problems remind us that it isn’t always easy, or even possible, to “get it right”. There isn’t always a solution that can be implemented once and last forever. But that’s okay. We just need to stop thinking about achieving optimal solutions and learn how to sustain adaptive solutions.

Useful links

The OECD organised a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning

Please consult the draft Agenda for more information, and a preliminary background paper, “Insights into Complexity and Policy”, regrouping articles from the Insights blog and other sources.