Intergenerational Justice in Scandinavia: Super Model?


Today’s post is by Mi Ah Schoyen of NOVA Norwegian Social Research and Bjorn Hvinden, Professor and Head of Research at NOVA and the University of Tromso and director of the Nordic Centre of Excellence ‘Reassessing the Nordic Welfare Model’ (REASSESS). It is published in collaboration with the Bertelsmann Stiftung’s Sustainable Governance Indicators (SGI) Network.

When it comes to balancing the needs of current and future generations, the Nordic welfare states have done fairly well: pension reform, low levels of child poverty and public debt, and work-friendly family policies. Yet, environmental considerations remain neglected – in the Nordic countries and elsewhere in the OECD.

Few would disagree that intergenerational justice is a goal all governments and societies should pursue. Beyond this general consensus, the issue undoubtedly raises a number of dilemmas which are notoriously hard to solve. The Nordic societies are far from immune to these challenges. However, there are a number of indications that this region has developed public policies that are more balanced with respect to both age and generation than those of most other OECD countries.

In advanced democracies, intergenerational justice is only one of the objectives public policies are expected to meet. There are also aims such as intra-generational solidarity and fairness, gender equity, and the creation of a competitive economy combined with macroeconomic soundness. A simple theory suggests that, for intergenerational justice to be achieved, successive generations (birth cohorts) – including future ones – should be treated the same. “Makes good sense!” you may think. So why are issues of intergenerational justice so hard to resolve?

First, it is difficult to account for the unborn, and consequently controversy surrounds debates about what we need to do today to achieve justice for the future. Second, the concept of intergenerational justice is typically applied in an ambiguous manner. It sometimes refers to age groups; at other times it refers to the treatment and position of successive generations or cohorts. This blurs the important distinction between age groups and generations. While you pass from one age group to another as you move through the life course, you remain part of the same generation (or birth cohort) from the day you are born until you die. Therefore, differential treatment of age groups does not necessarily violate principles of intergenerational justice. Think, for instance, of a contributory old-age pension system which by design transfers money from the working-age population to the elderly. As long as current workers receive a similar level of transfers when they reach retirement, this kind of redistribution will be neutral with respect to generations. In fact, this mechanism is sometimes referred to as an implicit intergenerational contract and has until now been the most common way of organising public old-age pensions.

Finally, matters are further complicated if successive cohorts differ in size. Unfortunately, this represents the rule rather than the exception. The current situation in most of the OECD world is that larger generations are followed by smaller ones, creating problems especially for old age social protection systems, which were created under the assumption of steadily growing populations. With modest fertility rates and steadily increasing life expectancy, the tax base does not grow fast enough to continue to finance public pensions in the way that was done in the past. This is the basic reason why we have seen large pension reforms in a number of OECD countries.
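To make the arithmetic behind this concrete, here is a minimal sketch of the pay-as-you-go logic. The cohort sizes and the replacement rate are purely illustrative assumptions, not figures from the post or the SGI Study:

```python
# Pay-as-you-go (PAYG) pension arithmetic with illustrative numbers.
# Contributions from current workers finance current pensions:
#   contribution_rate * workers * average_wage = retirees * average_pension
# so the contribution rate needed to balance the scheme is
#   (retirees / workers) * replacement_rate.

def required_contribution_rate(workers: float, retirees: float,
                               replacement_rate: float) -> float:
    """Share of wages needed to balance a PAYG scheme."""
    return (retirees / workers) * replacement_rate

replacement_rate = 0.5  # pension as a share of the average wage (assumed)

# A large working cohort supporting a smaller retired one.
print(required_contribution_rate(workers=4, retirees=1,
                                 replacement_rate=replacement_rate))  # 0.125

# Smaller cohorts follow larger ones: fewer workers per retiree,
# so the same pension promise requires a much higher contribution rate.
print(required_contribution_rate(workers=2, retirees=1,
                                 replacement_rate=replacement_rate))  # 0.25
```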

Pension reform has also been an important policy issue for Nordic governments. Sweden (in 1999) and Norway (in 2011) implemented comprehensive reforms of their old-age pension systems. However, setting public pensions on a financially sustainable path is only part of the story of how Nordic societies seem to balance the distribution between generations and across age in a sensible way.

As the findings of the SGI Study on Intergenerational Justice in Aging Societies show, from a comparative perspective, children in the Nordic countries are doing well. Child poverty levels are relatively low and government debt is well below the OECD average (which was equal to just over 50 per cent of GDP in 2010). Note also that in the peculiar case of Norway, government debt indicators – including as they are represented in the SGI Study – are less meaningful. The country has built up a massive public fund from petroleum revenues with a market value currently approaching €600 billion!

Moreover, work-friendly family policies (including an emphasis on providing affordable care services for dependent children and the elderly) have long been a priority in the Nordic countries. As a result, even though they – with the exception of Iceland – do not quite reach the replacement fertility rate of 2.1, more babies are generally born in the Nordic countries than further south on the European continent. At the same time, female employment is high, providing a broad tax base that helps in meeting the costs of population ageing.

The Intergenerational Justice Index (IJI) includes environmental, social, economic and fiscal dimensions, as well as a measure of pro-elderly bias. Estonia ranks highest of the 29 OECD countries included in the study, while the US comes last.


Overall, while we would not call it a super model – as The Economist actually did earlier this year – it seems fair to say that the Nordic social and economic model has managed to strike a sound balance between intra-generational and inter-generational concerns. This has been achieved by combining policies which contribute to the equalisation of life chances (e.g., free access to education, active labour market measures for the whole adult population, and a comprehensive system of social protection) with policies to foster economic competitiveness and efficiency.

The most innovative feature of the SGI Study is the inclusion of an environmental impact indicator in the assessment of intergenerational justice across countries. While a common dimension in discussions of environmental policy and sustainable development, ecological concerns rarely enter assessments of the welfare state and social justice more broadly.

The environmental impact of human activities, reported in the SGI Study on the basis of countries’ Ecological Footprint, gives rather mixed results for the Nordic countries. The intergenerational picture that emerges is less positive than when we consider only the welfare state more narrowly. It is important to note that several kinds of indicators have been defined for measuring different aspects of environmental performance, and country rankings are sensitive to the choice of indicator. For instance, if we instead rank countries according to the Environmental Performance Index, the Nordic countries all appear in the top quintile. We are not in a position to judge which indicator is superior, since that probably depends on your specific purpose. However, we strongly encourage further debate and research on the linkages between social and environmental policy and their outcomes. In this regard the SGI Study offers a welcome contribution.

Useful links

Paying for the Past, Providing for the Future: Intergenerational Solidarity, OECD Ministerial Meeting on Social Policy, May 2011

From financial market deregulation to fragmentation: Ladies and gentlemen, you screwed up

Don’t blame us

Today’s post is by Markus Schuller of Panthera Solutions, a participant in the recent OECD Financial Roundtable.

On October 17, delegations from the OECD’s 34 member countries, central bankers and financial market actors met to talk about financial fragmentation (geographical and sectoral segmenting) at the semi-annual OECD Financial Roundtable. It was an interesting discussion of how serious fragmentation is and how much financial integration we need, but the focus quickly turned to financial regulation, with the lobbyists talking their book (they could have sent a recording and saved themselves the time and expense of coming to Paris).

In the Eurozone for example, financial fragmentation means that a company in the peripheral countries will face higher financing costs than it would in one of the core countries of the currency union, for the same level of idiosyncratic risks and opportunities. This prompts the question of whether financial fragmentation hampers economic convergence and contributes to the painfully slow recovery.

Fragmentation isn’t necessarily negative: it can for instance allow more accurate pricing of the different, separable components of a system. But since the early 1980s, the increasingly global exchange of goods and services has relied on a more globally-integrated banking system and financial markets to allocate real economy resources. As a consequence, inter-connectivity increased, separability decreased.

Nationally-based regulation was clearly not the best framework to deal with the emerging financial system, but instead of a multilateral approach, governments listened to the banks’ “self-regulation” mantra, in harmony with the 1980s deregulation zeitgeist advocated by Reaganomics and Thatcherism.

It didn’t work. The dysfunctionality of self-regulation is not a specific characteristic of the banking industry. In numerous industries we got cartelization, price rigging and other forms of wheeling and dealing. In the banking industry, the combination of globalization, deregulation and digitization led to balance sheet inflation, followed by a fast-increasing gap between real economy utility and financial markets profiteering. We also got what Nassim Taleb and Warren Buffett called participants with “no skin in the game” – not playing with their own money. The people making the decisions didn’t stand to lose anything personally if they got it wrong. Investors and taxpayers took the hits for “Too-Big-To-Fail” institutions.

The blame game between market participants is still hard fought, especially between governments and the big banks most affected by the current re-regulation wave. The banks complain about insufficient international coordination of the new set of rules. Governments blame the banks for socializing losses and accuse them of being responsible for the current regulatory countermovement.

Ladies and gentlemen, you both screwed up. But a couple of examples allow us to see that it didn’t happen overnight.

First, the repeal of the 1933 Glass-Steagall Act (GSA), which prevented US investment banks from operating as universal banks, unlike their European peers. In the 1980s, European institutions aggressively entered the US underwriting market with competitive offers, thanks to significantly larger balance sheets and higher gearing (debt to equity), and started taking IPO business away from US banks. In 1999, insistent lobbying by US banks bore fruit and Glass-Steagall was repealed.

Second, the end of the investment banking partnership. For over 100 years, US investment banks were organized as private partnerships. Partners clearly understood what it meant to be ultimately responsible, as they had to pay for mistakes with their private wealth. Merrill Lynch went public in 1971, Bear Stearns followed in 1985, Morgan Stanley in 1986 and Lehman Bros in 1994. Goldman Sachs waited until the GSA was repealed and got listed in 1999. These changes in status can be seen as a sign that the banks had less “skin in the game”. Or as the last stage of a rocket blasting them into irresponsibility.

Governments thought they were doing the market – or at least their largest party donors – a favor by loosening regulatory norms. Banks took this as an invitation to unethical behavior and to profit maximization that was unproductive as far as the real economy is concerned. Big central banks had managed to position themselves as independent guardians of a stable monetary system. Now the almost innocent bystanders – the central banks, and taxpayers like you and me – have to pick up the bill for both blame-game parties. The central banks pay with their independence.

The re-regulation wave symbolizes the attempts of governments and regulators to level the playing field with financial market participants. These attempts lack proper international coordination, for two reasons I explained in my contribution to the Roundtable.

Governments and regulators need to decide whether they want to use their systemically relevant banks as a tool for geostrategic power or for establishing an efficient market. Officially, the latter is the goal. In off-the-record conversations though, some officials express concern at the fast rise of banks from emerging economies, especially Asian banks, in terms of balance sheet and market capitalization, allowing their governments to use them as strategic options in the global power game. Given that, the argument goes, developed market representatives should not be too restrictive regarding their own systemically relevant banks.

The second reason is more fundamental: economics is a social science, and the same goes for finance as a subset of economics. We are dealing with human behavior and social interactions that cannot be modeled using the deterministic approach of natural sciences, where for instance gravity always pulls the apple down from the tree. In systems theory, we would call that a trivial machine, but we’d have to call humans and human society “non-trivial”. With us, the same inputs can lead to different outputs. Pity the poor regulators trying to tame a moving, non-trivial social construct while being in motion themselves. Even when assuming high technical competence, high ethical standards and a strong will for implementation, this is an extremely complex task. Consequently, a lasting, single-shot version of re-regulation cannot be expected, and all financial market participants are currently paying the price for a culture of irresponsibility.

As long as developed market governments fail to make it explicit whether they are going for the geostrategic power option or for efficient markets, an implicit preference for the power option remains, and the foundation for further social and economic collateral damage is laid.

Useful links

OECD work on financial markets

Finanzielle Fragmentierung (Financial fragmentation): Markus Schuller’s contribution, in German

Rich Man, Poor Man: Are ‘the 1%’ worth it?

David and Victoria Beckham at their secret hideaway

In the last of three postings on wealth distribution, we ask who benefits from the relentless rise of the rich.

Even in the world of high-flying soccer salaries, the deal announced late this summer between Real Madrid and Welsh player Gareth Bale was eye-popping – £85 million (about $120 million). The 24-year-old will now earn at least 10 times more in a week than the average British worker earns in a year.

Gosh.

Mr Bale is rich – not Bill Gates-rich – but rich. He’s also typical of many of today’s high-earners in that he’s making his own money. In previous centuries, high incomes typically came from inherited wealth. That’s why so many of Jane Austen’s characters never seem to work – they don’t need to: their wealth is invested instead in government bonds that reliably pay an income of between 4 and 5% a year. In Pride and Prejudice, a would-be suitor reminds Elizabeth Bennet that unless she marries, her wealth will produce an income of only £40 a year: “… one thousand pounds in the 4 per cents, which will not be yours till after your mother’s decease, is all that you may ever be entitled to.”

If she were alive today, Lizzy Bennet might be running her own business and earning her own money. In that, she would be a typical member of today’s set of top earners – the 1% – which as Chrystia Freeland has written, “consists, to a notable degree, of first- and second-generation wealth. Its members are hardworking, highly educated, jet-setting meritocrats who feel they are the deserving winners of a tough, worldwide economic competition …”.

Over the past few decades, these winners have done quite nicely for themselves, most notably in English-speaking countries: In 1980, the top 1% of income recipients in the U.S. earned 8% of all pre-tax income; by 2008, their share had risen to 18% and it rose in many other OECD countries too. Several factors have worked in their favour: lower taxes; technological advances that reward skilled workers; the emergence of a global market for talent; and rising executive salaries.

But here’s a question: Are all these jet-setting meritocrats really worth it?

Historically, various justifications have been offered for income inequality – in other words, people earning more than others. As Branko Milanovic notes in The Haves and the Have-Nots, J.M. Keynes retrospectively justified 19th century inequalities by arguing that the rich had not wasted their money on fripperies but, instead, “like bees, they saved and accumulated”, so providing capital for investment, which ultimately benefited everyone.

Arguments today aren’t all that dissimilar. T.J. Rodgers, founder of Cypress Semiconductors, recently defended his own wealth by pointing to the money he had reinvested in his own firm and in new businesses, such as a restaurant in his home town that created 65 jobs. “How much more do I need?” he asked. “How many more jobs do you want?”

In essence this is an appeal to the idea of “economic efficiency” – inequality may not always be popular, the argument goes, but it ensures a society’s economic resources are put to their best use. The most influential thinker in this area was probably the economist Arthur Okun, who in the 1970s argued that there was a “big trade-off” between equality and efficiency: Reduce the wage gap by raising taxes or minimum wages and you kill people’s incentives to work hard and risk losing some of that tax money in the “leaky bucket” of government.

That argument still appeals to many, but it has its detractors. Based on an analysis of growth patterns in a number of countries, IMF economists Andrew Berg and Jonathan Ostry concluded that “when growth is looked at over the long-term the trade-off between efficiency and equality may not exist.” While some inequality is necessary to ensure markets run efficiently, the economists argue, too much can destroy growth.

Among the downsides of rising inequality, they say, are that it may pave the way for financial crises, as many argue it did in the run up to the 1929 Wall Street Crash; it may also fuel political instability, as in Brazil earlier this year; and it “may reflect poor people’s lack of access to financial services, which gives them fewer opportunities to invest in education and entrepreneurial activity.”

Indeed, that last point is increasingly invoked. As Joseph Stiglitz has written, “growing inequality is the flip side of … shrinking opportunity,” a view echoed earlier this year by Alan Krueger, then-chairman of the U.S. President’s Council of Economic Advisers: “In a winner-take-all society, children born to disadvantaged circumstances have much longer odds of climbing the economic ladder of success.”

But if we accept the idea – and not everyone does – that too much inequality benefits the rich and hurts the poor we’re left with another question: How much inequality is “too much” inequality? Economists may have their own views but, ultimately, that’s a question only politicians and societies can answer.

Previous articles in Rich Man, Poor Man:

Useful links

Reducing income inequality while boosting economic growth: Can it be done? – from the OECD’s Going for Growth 2012

Less Income Inequality and More Growth – Are they Compatible? Part 4. Top Incomes, by Peter Hoeller (OECD, 2012)

Divided We Stand: Why Inequality Keeps Rising (OECD, 2011)

OECD work on income inequality

The Nobel Prize in Economics 2013: of the madness and the wisdom of crowds

Want to buy some Dutch tulips?

Today’s post is from Gert Wehinger of the OECD’s Directorate for Financial and Enterprise Affairs.

Investors’ behaviour on stock markets has been likened to the irrationality described in Charles Mackay’s 1841 classic Extraordinary Popular Delusions and the Madness of Crowds. But there is also a more positive view of what crowds can achieve. In his 2005 book The Wisdom of Crowds, James Surowiecki argued that “diversity and independence are important because the best collective decisions are the product of disagreement and contest, not consensus or compromise.” Diversity and disagreement certainly characterise this year’s Nobel prize for economics, even though Eugene F. Fama, Lars Peter Hansen and Robert J. Shiller shared the award “for their empirical analysis of asset prices”.

Fama’s work is based on the idea that asset returns should be impossible to predict if asset prices reflect all relevant information. He tested the efficient markets hypothesis (EMH) empirically – developing new methods, such as event studies, to do so – and he and his followers found ample evidence for it. That is, in the very short run, like a day or a week, if all available information is incorporated in share prices, you cannot beat the market. While he accepts that there are factors like information or transaction costs that weaken the pure EMH, any anomalies – like the difference between value and growth stocks he analysed in a 1998 paper – are explained within the same rational-investor framework, with risk factors accounting for the differences. However, such anomalies (or “market imperfections”) may open up short-run arbitrage opportunities (which Fama himself exploits in his fund-management firm). Hedge funds and algorithmic traders in particular thrive on these imperfections.
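As a rough illustration of what short-run unpredictability means in practice, the sketch below estimates the lag-1 autocorrelation of daily returns. It uses a simulated random-walk price series as a stand-in, since the post cites no particular dataset; under the EMH view the estimate should be close to zero:

```python
# Illustrative EMH-style check: if prices already incorporate all available
# information, daily returns should be (close to) serially uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0, scale=0.01, size=2500)  # ~10 years of trading days
prices = 100 * np.cumprod(1 + daily_returns)                # simulated random-walk prices

log_returns = np.diff(np.log(prices))
lag1 = np.corrcoef(log_returns[:-1], log_returns[1:])[0, 1]
print(f"lag-1 autocorrelation of daily returns: {lag1:.4f}")  # near zero
```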

Thus, Fama’s EMH is perhaps a great insight for theorists and algo-traders, but it is, in principle, tautological and often becomes useless in practice (yes, algo-traders can also go bust), where market anomalies can go beyond rationality. While these anomalies still tend to leave markets unpredictable, the reasons underlying such unpredictability may be quite different from the agnostic view of market rationality, which almost by definition deprives the researcher of better insights into the “true” functioning of markets and a better understanding of longer-term price movements, including bubbles.

Shiller thought outside the EMH box by exploring departures from Fama’s efficient market rationality using insights from behavioural finance (many of the ideas were developed by the 2002 laureates Daniel Kahneman and Amos Tversky). Such departures, if they can be identified in asset prices, may open up arbitrage opportunities for rational investors to take advantage of the misperceptions of irrational investors. While rational arbitrage trading would push prices back toward the levels predicted by non-behavioural theories, this is still not the world described by Fama, where rational information is processed immediately. In Shiller’s framework, bubbles not only can exist, there is also a possibility that they can be identified.

Hansen has tested many of these theories in his generalised method of moments (GMM) framework. If you cannot forecast stock prices, maybe you can find patterns in their volatility or other statistical moments that can be exploited. Hansen found, for example, that asset prices fluctuate too much for a rational expectations-based model to hold. This work has been carried forward in several ways, for example improving measures of risk and attitudes towards risk that may change depending on the economic situation. This is just one example of how this research can generate new insights about human behaviour more broadly.
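For readers unfamiliar with GMM, here is a stylised toy version of the idea: pick parameter values so that the sample moment conditions are as close to zero as possible. This is only a sketch of the estimation principle (recovering a mean and variance from simulated data), not Hansen’s asset-pricing application:

```python
# Toy generalised method of moments (GMM): choose parameters so that the
# sample analogues of the moment conditions are as close to zero as possible.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=5000)  # simulated data (true variance = 2.25)

def moment_conditions(theta, data):
    mu, sigma2 = theta
    g1 = data - mu                    # E[x - mu] = 0
    g2 = (data - mu) ** 2 - sigma2    # E[(x - mu)^2 - sigma^2] = 0
    return np.array([g1.mean(), g2.mean()])

def gmm_objective(theta, data):
    g = moment_conditions(theta, data)
    return g @ g                      # identity weighting matrix

result = minimize(gmm_objective, x0=np.array([0.0, 1.0]), args=(x,))
print(result.x)  # should be close to (2.0, 2.25)
```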

Shiller showed the importance of social psychology for finance and economics using evidence from investor surveys. In the 2005 edition of his book Irrational Exuberance, Shiller extended his analysis to real estate, arguing that the real estate market was irrationally overvalued, and he predicted large problems for financial institutions with the eventual bursting of the real estate market “bubble”. But he was also aware of the problems a bursting stock market bubble would have on retirement income from pension plans that rely on equity investments, and he wondered about the “curious lack of public concern about this risk.” More generally, he also pointed out that the “tendency for speculative bubbles to grow and then contract can make for very uneven distribution of wealth” that may even make us “question the very viability of our capitalist and free market institutions”. He saw an important role for policy to address these issues.

The first line of policy defence against asset price bubbles is monetary policy, but bubbles are hard to identify and policy makers are reluctant to ‘prick’ a bubble. Robert Lucas, the 1995 Nobel Prize winner, noted that the main lesson from the efficient market hypothesis “for policymaking purposes is the futility of trying to deal with crises and recessions by finding central bankers and regulators who can identify and puncture bubbles. If these people exist, we will not be able to afford them.” (The Economist, Aug 6th 2009)

The Prize Committee in its decision seemed to want to reconcile a short-run and a long-run theory of asset prices, with Fama finding that stock prices are unpredictable in the short run and Shiller showing that there is some predictability in the long run (Fama would not dispute that idea in principle, as his own research found that stock returns become more predictable the longer the period considered).

But knowing that Fama (as recently as January 2010) defiantly denied the existence of asset price bubbles because they cannot be identified and predicted, and that Shiller (along with others) recognises bubbles and calls the EMH argument that stock prices reflect fundamentals “one of the most remarkable errors in the history of economic thought”, it will be interesting to see how the three laureates interact on stage at the award ceremony in December in Stockholm.

Perhaps Hansen will have to play the role of a mediator with a humble remark like the one he made shortly after hearing about his award: “We are making a little bit of progress, but there’s a lot more to be done.”

Useful links

OECD work on financial markets

OECD Journal Financial Market Trends

OECD project on long-term investment


High seas, ICTs and health care

ICTs and Health

“John Stone, aged 25, private marine; feels pain in the throat… difficulty of swallowing… diluted sulphuric acid used as a gargle”. The British Royal Navy medical officer who wrote that report didn’t say if the patient got better, and presumably the patient never said anything ever again, so we’ll never know if the cure worked. His colleague who treated fever cases with a tepid salt water bath did however note the salutary effects on all his patients, even though, technically speaking, they all died soon afterwards. If you’ve got an hour or two to waste (as I hadn’t when I was supposed to be writing this article), take a look at this collection of reports compiled by Royal Navy Surgeons and Assistant Surgeons from ships, hospitals, naval brigades, shore parties and on emigrant and convict ships in the period 1793 to 1880.

For the casual reader like me, the most interesting aspects are stuff like being attacked by a walrus, stealing a skull, being struck by lightning, or accidentally circumcising yourself when playing a joke on your shipmates (anything for a laugh). For the less frivolous, the records are a mine of information on a whole range of medical topics. Mortality for instance: half the deaths on warships were from disease, ten times the proportion killed in battle. Or medical methods and technologies. Apart from the descriptions of acid gargles or blowing tobacco smoke into a drowned sailor’s lungs to revive him (it worked), the reports list more mundane treatments. “Amputation” appears a lot. In fact, on a quick reading it seems to be the main surgical activity on the ships, and other sources describe how, during a naval battle, the surgeon’s assistants (“loblolly men”) could fill tubs with severed limbs.

Many of those who survived the surgery died later from infections, and this was the case for civilian patients operated on shore as well. That would change by the end of the period covered by these reports, with the Victorian “surgical revolution”, brought about by the introduction of anaesthesia in the 1840s, antisepsis in the 1860s and x-rays in the 1890s. We often talk about “medical science”, but what happened over that half century had nothing much to do with science. It was a technological and organisational revolution that would mark medical care well into our times.

In this article, Jim Connor reminds us that knowledge of chemical or physiological principles had little to do with the advent of anaesthesia or explaining how it works. There was little scientific basis for antisepsis when Lister introduced it. And Roentgen was pioneering radiology before any theoretical explanation for x-rays was available. Technology was not the servant of science, or of the market. As Connor argues, there was no call for these innovations from practitioners, and many patients were even reluctant to undergo anaesthesia for fear of what the doctors might do when they were unconscious.

The revolution came from the interactions of science, technology and medicine among each other and with economic and social trends. If the state and business get involved in such a process, the result is what John Pickstone calls “technoscience”, where scientific knowledge and its applications become marketable commodities. The result in the 19th century was to change the position of the surgeon from an independent artisan, owning his own tools and premises and seeking out clients, to essentially an employee of a large organisation that centralised demand and supplied the equipment, bought from an increasingly powerful group of outside suppliers.

This fundamentally changed health care as a calling and as a business. Today, another revolution is underway, driven by ICTs, or what we might call tele-technoscience, and responding to what a new OECD report identifies as “social and demographic changes, the rise in chronic diseases, and the need to improve the efficiency and quality of healthcare delivery”. ICTs and the Health Sector: Towards Smarter Health and Wellness Models looks at how mobile devices, the Internet and ICT in general can be used to support self-management, behavioural modification (not as sinister as it sounds) and “participatory healthcare”, and allow health care systems to learn.

The streams of data flowing from medical devices and research programmes are the raw materials the new systems will be built on. But data as such are of relatively little value if they can’t be processed, turned into useful information and shared. Those old naval records contained masses of data on cases, living conditions, even climate and geography, as well as potentially life-saving information on best (and worst) practices. But this information couldn’t be shared to any significant extent using the technologies available at the time. The OECD report describes a similar situation with today’s technology and analytical tools, arguing that they can’t effectively manage or even capture the many data streams available and turn them into useful information.

The Victorian revolution produced the hospital-centred, doctor-oriented system we know today. The ICT revolution could produce a system that is based on the patients, their family and community, with more emphasis on evidence-based approaches and personalised care, and a less prominent role for the clinician’s training and experience in diagnosis and treatment. These will remain important though, as will the practitioners’ attitude to their job. Bruno Pappalardo, the specialist in charge of the Royal Navy collection, told The Independent newspaper that despite the harsh environment the naval medical officers worked in, “their compassion shines through – they did their utmost to care for people.”

Useful links

OECD work on health policies and systems

In Search of Elusive Growth: Making the Most of R&D Tax Incentives


Today’s post is from Andrew Wyckoff, head of the Directorate for Science, Technology and Industry (STI) at the OECD. A version of this article is also being published by the Huffington Post.

Finding new sources of growth right now is tough. And in a time of rising inequality, doing so equitably and fairly is even tougher. Innovation – which fosters competitiveness, productivity and job creation – can help, but with budgets stretched to the limit, how can governments boost innovation in their economies?

Tax incentives for business R&D are a good place to start. As of 2011, 27 of the OECD’s 34 members provided tax incentives to support business R&D – more than double the number in 1995. By 2011, over a third of all public support for business R&D in OECD countries came through tax incentives – a share that jumps to more than half when the US, with its large direct procurement of defence R&D, is excluded. Other economies – including Brazil, China, India, Singapore and South Africa – have also instituted new tax provisions to stimulate investment in R&D.

As they have proliferated, R&D tax incentives have become more generous. Over the period 2006-2011, about half of the 23 countries for which complete data are available increased their generosity, with R&D tax support rising by almost 25% in some countries. This probably underestimates the shift towards greater generosity, because the economic crisis caused a decline in both profits (and hence taxes) and R&D. The growing popularity of R&D tax incentives as a policy instrument has a variety of reasons, including their exemption from EU and WTO “state aid” rules, and the fact that tax expenditures tend to be “off budget,” meaning they escape the scrutiny that applies to direct expenditures.

A new OECD report shows that in a relatively short period of time, R&D tax incentives have become among the most widely used policy instruments to promote innovation. Some have asked “is this too much of a good thing?” and, in this era of tight public budgets, “are governments (and citizens) getting value for money?” The answer depends on the exact design of the R&D tax incentive.

Most firms engaging in R&D are multinationals that can use cross-border tax planning strategies that result in tax relief that may exceed what was originally intended. This in turn may cause an unlevel playing field vis-à-vis purely domestic firms that do not benefit from these same tax planning strategies. This may also disadvantage young firms that have been the disproportionate source of net job growth and tend to be the origin of radical new innovations that spur growth.

Evidence from 15 OECD countries over 2001-11 suggests that young businesses, many of which are among the most innovative, play a crucial role in employment creation regardless of their size. Over this period, young firms (less than or equal to five years of age) accounted for almost 20% of total (non-financial) business sector employment but generated about 50% of all new jobs created. And, during the economic crisis the majority of jobs destroyed generally reflected the downsizing of large mature businesses, while most job creation was due to young enterprises.

Some will argue that R&D tax incentives are preferable to direct support policies because they avoid picking winners. But this isn’t an either/or situation. A mix of incentives could be the smartest path forward. Recent OECD analysis shows that well-designed direct support measures – contracts, grants and awards for mission-oriented R&D – may be more effective in stimulating R&D than previously thought, particularly for young firms that lack upfront funds. Direct support that is non-automatic and based on competitive, objective and transparent criteria can stimulate innovation.

It’s the policy package that matters. Tax incentives should be designed to better meet the needs of domestic companies and young, innovative companies that do not benefit from cross-border tax planning opportunities. There should be a balance between indirect support for business R&D (tax incentives) and direct support measures to foster innovation. And governments should ensure that R&D tax incentive policies provide value for money.

Do this and growth might be a bit less elusive than we think.

Useful links

Andrew Wyckoff talks about innovation, growth and jobs:

OECD work on innovation

What is BEPS and how can you stop it?

Why Finland isn’t fragile – and three reasons for linking gender equality to statebuilding in the post-2015 framework


Today’s post is by Diana Koester, a consultant working with the International Network on Conflict and Fragility (INCAF).

On Thursday, 26th September, the UN’s Conference Room 1 was packed with over 25 ministers from around the world. They had accepted an invitation by the Peacebuilding Commission (PBC) and UN Women to discuss “women’s economic empowerment for peacebuilding” only a day after the UN General Assembly (UNGA) Special Event on achieving the Millennium Development Goals (MDGs).

You may wonder why that’s especially worth noting. After all, outside of this event the UNGA week heard pleas for related causes: a post-MDG framework that would “make the 21st century the century of women” and a post-MDG framework that would “make the 21st century the century of peace”. And these pleas echoed the proposals for respective standalone goals that Secretary-General Ban Ki-moon and his High-Level Panel had already expressed in their visions for the post-2015 development framework.

What makes the PBC/UN Women event especially worth noting is that discussions of the post-MDG approach to building peaceful and effective states have typically proceeded as though the century of women and the century of peace would take place in parallel worlds. There has been little emphasis on the specific links between these goals and their achievement.

We need to work to bridge this gap by emphasizing women’s important role – and challenges – in peacebuilding and statebuilding, as well as the need for targeted and integrated responses in the post-2015 approach to institutions and conflict. There are at least three good reasons why.

First, statebuilding in fragile and conflict-affected situations can provide critical opportunities to pursue gender equality. Empowering the world’s women requires special efforts to tackle the severe and specific challenges women face in fragile situations. Sexual and domestic violence, economic marginalisation, and exclusion from the decisions that determine women’s futures help explain why fragile and conflict-affected states have made relatively slow progress on the MDGs overall, but also have notably lagged on most of the gender-specific MDG areas.

The good news is that post-conflict situations also offer immense opportunities to “build back better”, for example by supporting women’s participation in peace negotiations, constitution-making and emerging political processes. In this context it is interesting to note that about one-third of the countries with 30% or more women in parliament are also countries that have experienced conflict, fragility or recent transitions to democracy. Taking the example of Rwanda and Burundi, the Assistant Secretary-General for Peacebuilding Support, Judy Cheng-Hopkins, highlighted during the PBC/UN Women event how such increased participation can in turn lead to better outcomes for women, thus transforming vicious circles into virtuous ones.

Second, gender equality is not only “smart economics” – it’s also smart peacebuilding and statebuilding. The fundamental aim of statebuilding should be a state that is legitimate, responsive and accountable to all. Tackling the marginalisation of women and girls is a precondition for realising this vision.

What’s more, women’s empowerment can help achieve internationally agreed peacebuilding and statebuilding goals. “Women’s political participation is associated with lower levels of corruption, more inclusive decision-making, greater investment in social services, job creation for women, and family welfare”, the new Executive Director of UN Women, Phumzile Mlambo-Ngcuka, pointed out. In like manner, PBC Chair and Croatian Foreign Minister Vesna Pusić recalled the strong evidence that “women’s access to land and productive assets, to jobs and markets, results in improvements in family well-being, community stability and poverty reduction.”

In other words, gender equality goes beyond “smart economics”. It can strengthen key pillars of peace. Reflecting on his own country the day before the PBC event, Finland’s Foreign Minister, H.E. Mr. Erkki Tuomioja, affirmed these links: “If I was asked to give one specific reason why Finland is rated in the index of failed states as the least failed state in the world, I would answer that it is gender equality and the empowerment of women.”

Finally, the post-2015 framework offers a historic opportunity to realize women’s rights in fragile states and make smarter peacebuilding and statebuilding the norm. Current approaches tend to neglect women’s potential and priorities. “Let’s face it”, Cheng-Hopkins proposed, “women play peacebuilding roles every day (…) Sadly though, when negotiations get serious, when stakes get high and when money shows up, women are pushed into the background.” The OECD INCAF’s forthcoming policy paper on Gender and Statebuilding aims to address this gap by offering a set of specific recommendations to help donors integrate a gender perspective into their work on statebuilding.

The post-2015 framework is one of the key opportunities the new INCAF publication highlights in this regard. In the words of the President of the UNGA, John W. Ashe, this is a “historic opportunity to define development.” The post-MDGs can therefore also be a historic opportunity to make women’s full participation in peacebuilding and statebuilding the norm and the PBC declaration’s call for “further measures to improve women’s participation during all stages of peace processes” a reality. We can and must seize it.

Useful links

OECD work on gender equality and development

OECD work on conflict and stability

OECD work on gender