The city is humanity’s greatest invention: an artificial ecosystem that enables millions of people to live in close proximity and to collaborate in creating new forms of value. While cities were invented many millennia ago, their economic importance has grown dramatically since the Industrial Revolution, to the point that they now account for most of the global economy. All human life is there, and so the study of cities crosses boundaries among economics, finance, engineering, ecology, sociology, anthropology and, indeed, almost all forms of knowledge. Yet while we have deep knowledge in each of these domains individually, we have little scientific knowledge of how they come together in the overall system of systems that is a city. In brief: how does a city work?
Such knowledge would be helpful in the coming decades. In the last sixty to seventy years, globalisation has spread the Industrial Revolution ever more widely, creating new opportunities in cities that attract hundreds of millions of internal and international migrants. This process is lifting many of these migrants out of deep poverty, while causing cities from London to Nairobi to struggle in differing ways with the unending influx.
Further, cities are responsible for large fractions of greenhouse gas emissions, for the consumption of natural resources such as water and air, and the resulting discharges of pollution into the environment. If the battle against climate change is to be won, it will be won in cities. Cities are also the principal centres for innovation and economic development, both of which are needed to continue lifting migrants out of poverty.
While the roots of urban planning can be traced back more than three thousand years to the master plans of ancient cities, it was the tremendous growth of cities in the late 19th century that broadened the field to consider the many services and affordances that urban dwellers require. Urban planning, however, emerged mainly from the humanities and works primarily through extensive case studies, although it has adopted many digital tools. The notion of the city as an object of scientific study is more recent and still in its infancy, triggered in part by developments in complexity theory such as network theory, scaling laws, and systems science, and by the growing availability of urban data.
Urban scaling laws have been explored since at least the early 20th century, when cities were found to be an example of Zipf’s law, which in this context states that “for most countries, the number of cities with population greater than S is proportional to 1/S”. The understanding of scaling was greatly expanded in recent years by the work of West, Bettencourt, and Batty, who showed that many properties of cities, such as the number and length of roadways or the number of amenities such as restaurants, follow scaling laws over population ranges from ten thousand to tens of millions. Moreover, these scaling laws have exponents in the range 0.85 to 1.15, showing large cities to be more productive, innovative, energy-efficient, and better paying than small cities, but also more expensive. Negative attributes such as crime, disease, and pollution likewise scale superlinearly; that is, they rise faster than in strict proportion to the increase in city size. For example, GDP is proportional to the size S of a city raised to a power slightly greater than 1, roughly S^1.15, while attributes like energy consumption scale sublinearly, at roughly S^0.85. Network laws also describe well the evolution of roadways and railways in cities over long time scales.
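The exponents above lend themselves to a quick back-of-the-envelope calculation. A minimal sketch in Python, taking the cited 1.15 and 0.85 as assumed exponents and asking what doubling a city’s population implies:

```python
# Illustrative sketch of urban scaling, Y = Y0 * S**beta, using the
# approximate exponents cited in the text (assumed, not fitted values).

def scaled_attribute(population, y0, beta):
    """Predicted city-wide value of an attribute for a city of size `population`."""
    return y0 * population ** beta

# Superlinear attribute (e.g. GDP, beta ~ 1.15): doubling the population
# more than doubles the total.
gdp_ratio = scaled_attribute(2_000_000, 1.0, 1.15) / scaled_attribute(1_000_000, 1.0, 1.15)

# Sublinear attribute (e.g. energy or infrastructure, beta ~ 0.85):
# doubling the population less than doubles the total: an economy of scale.
energy_ratio = scaled_attribute(2_000_000, 1.0, 0.85) / scaled_attribute(1_000_000, 1.0, 0.85)

print(f"GDP ratio on doubling:    {gdp_ratio:.2f}")    # ~2.22
print(f"Energy ratio on doubling: {energy_ratio:.2f}")  # ~1.80
```

Note that the ratio 2**beta is independent of the starting size, which is exactly what makes these power laws useful as a compact description across four orders of magnitude of city population.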
While scaling laws and network laws have great descriptive power, opinions vary on whether they apply across different countries or have predictive power. The scaling of attributes is a snapshot of frequency versus size at a given time: if a city grows and “moves up the scale”, it may not achieve, in the short term, all of the positive benefits and negative impacts described. Nor do the laws explain the observed behaviours. Nonetheless, this is an important area of study for planners and developers watching their cities grow or shrink.
As urban data has become more pervasive, it is now possible to study cities as complex systems of interactions. We may view the city as a myriad of interactions among its inhabitants, its infrastructures and affordances, its natural environment, and its public, private, and civic organisations. Some of these interactions involve the exchange of goods or services for money, but many of them involve the exchange or transmission of information, enabling inhabitants and organisations to make choices. Public transportation is often studied in this way, revealing for example that small and medium sized cities evolve networks enabling commuting between small numbers of residential and business districts, while very large cities, such as London, have much richer networks that permit greater flexibility in where people live and work.
The operation of cities is also modelled using synthetic populations of software agents that represent the distribution of behaviours or preferences of much larger, real populations. Such agent-based models, with agents representing patterns of origin, destination, travel times, and modality preferences, are used to examine the overall impact of new services such as London’s Crossrail.
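As a toy illustration of the agent-based approach (and emphatically not the actual models used for Crossrail), one might sketch a mode-choice simulation like this, where the modes, costs, and per-agent biases are all invented for the example:

```python
import random

# Toy agent-based mode-choice sketch: each synthetic agent picks the travel
# mode with the lowest perceived cost for its commute. All costs and biases
# below are invented, purely to illustrate the mechanism.

random.seed(0)

MODES = {"car": 1.0, "bus": 0.8}        # assumed cost per km by mode
NEW_MODES = {**MODES, "rail": 0.5}      # hypothetical new rail service

def choose_mode(distance_km, modes, comfort_bias):
    # Perceived cost = distance-based cost plus an agent-specific bias
    # (modelling comfort, access time, habit, etc.)
    costs = {m: distance_km * c + comfort_bias.get(m, 0.0) for m, c in modes.items()}
    return min(costs, key=costs.get)

# Synthetic population: commute distance plus per-agent biases against
# shared modes, drawn from simple assumed distributions.
agents = [
    (random.uniform(2, 30),
     {"bus": random.uniform(0, 10), "rail": random.uniform(0, 10)})
    for _ in range(10_000)
]

def modal_share(modes):
    choices = [choose_mode(d, modes, bias) for d, bias in agents]
    return {m: choices.count(m) / len(choices) for m in modes}

print("before rail:", modal_share(MODES))
print("after rail: ", modal_share(NEW_MODES))
```

Running the same population with and without the hypothetical rail mode shows the kind of question such models are used for: the aggregate shift in modal share when a new service is introduced, without tracking any real individual.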
As the Internet of Things provides greater visibility into how inhabitants choose to exploit the opportunities offered by a given city, we may hope to discover abstract principles about how cities work. We may envision being able to construct agent-based models representing the complete spectrum of choices a city’s inhabitants make at timescales from minutes to years and spatial scales from meters to kilometers. Equally, given the increasing availability of real-time information, we might hope one day to understand the effective use of a city’s services in terms of a Nash Equilibrium, a game theory concept (often used to describe poker games), where no player can gain anything by changing their chosen strategy if other players don’t change theirs – all the players’ strategies are optimal. These are far in the future, but the EC’s Global Systems Science programme is the beginning of that journey.
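The Nash-equilibrium idea can be made concrete with a toy congestion game: commuters choose between two routes whose travel times grow with load, and best-response dynamics settle where no single commuter gains by switching. All the numbers and cost functions below are illustrative assumptions, not data about any real city:

```python
# Toy congestion game illustrating a Nash equilibrium: 1,000 commuters
# choose between two routes; travel time on each grows with its load.
# At equilibrium, no commuter can cut their time by switching alone.

N = 1000

def time_a(n):  # assumed travel time on route A with n users
    return 10 + 0.02 * n

def time_b(n):  # assumed travel time on route B with n users
    return 15 + 0.01 * n

n_a = 0  # start with everyone on route B
# Best-response dynamics: move one commuter at a time to the faster route.
for _ in range(10 * N):
    if time_a(n_a + 1) < time_b(N - n_a):
        n_a += 1
    elif n_a > 0 and time_b(N - n_a + 1) < time_a(n_a):
        n_a -= 1
    else:
        break  # no one can gain by switching: a Nash equilibrium

print(f"route A: {n_a}, route B: {N - n_a}")
print(f"times: {time_a(n_a):.2f} vs {time_b(N - n_a):.2f}")  # roughly equal
```

The dynamics converge to the split where the two travel times are equal, which is the equilibrium property described above; with these assumed cost functions that split is 500 commuters on each route.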
The OECD is organising a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning; 29/09 afternoon; 30/09 morning
Fifteen years after 9/11, the world is now facing the threat of systemic terrorism. Apparently mindless, random attacks are in fact part of a strategy developed over a number of years, whose origins can be traced back to three major turning points, one ideological, one political, one military, that occurred at the end of the 1970s.
Traditionally, terrorism was the work of relatively small groups with clearly identifiable political or ideological goals, ranging from national liberation to animal rights. It was used as a bargaining counter to attain a clearly defined objective such as the freeing of prisoners or the withdrawal of an army from an occupied zone, or for vengeance. Of course, there were campaigns designed simply to destabilise the political climate, but these were the minority. The terrorism of Al-Qaeda represented a radical break from this, in that its aim was sustained opposition to the entire “Western” economic, cultural, and belief system, with no negotiable end to its campaigns, and whole populations seen as legitimate targets. Attacks, and the possibility of attacks, are supposed to change enemy policy by means other than the traditional method of battlefield superiority. One of their aims is to convince public opinion that the price for supporting a particular policy is too high, as well as attracting support from potential sympathisers following retaliation for the initial attack.
Conflicts are fought worldwide in a complex arena across the whole spectrum of political, social, economic, and military networks, and involve a mix of national, international, transnational, and subnational actors, motivated not only by politics or ideology, but also by profit. This grey area, combining aspects of traditional warfare with organised crime, is a major aspect of 21st century terrorism. But the major way in which terrorism has evolved beyond the Al-Qaeda model is the strategy of Daesh to create a state by conquering and holding territory, using traditional military confrontation in some cases, and isolated attacks far from its main bases in others. The roots of this strategy can be found in the combination of the three events mentioned in the introduction.
In November 1977, Egypt’s President Sadat travelled to Jerusalem to prove his willingness to sign a peace deal with Israel. This marked the end of pan-Arabism as a viable ideology. Sadat also broke with the USSR and encouraged the rise of the Muslim brotherhood to counter the influence of the left, especially in the universities. Shortly afterwards, the Shah of Iran was overthrown by a popular uprising that the Islamists came to dominate, eventually creating an Islamic State. Then the Soviets invaded Afghanistan, but were defeated by a Western-backed coalition that the Taliban came to dominate.
Pan-Arabism was promoted by Nasser, and the intellectual origins of today’s Islamist terrorism can be traced back to the writings of one of his opponents, Sayyid Qutb, an Egyptian intellectual jailed by the regime, although his books were not banned. For Qutb, the world is living in a state of ignorance and idolatry, Jahiliyyah, a term normally reserved for pre-Islam Arabia. This includes even those who claim to be Muslims, but who are in fact apostates and thus legitimate targets: “this is not Islam and they are not Muslims”. The evil is due to the fact that men have denied God one of his attributes, Hakemeyya, divine sovereignty. Muslim scholars are scandalised by the claim that man can deprive God of anything, but Qutb’s position is echoed by Daesh’s sinister black flag, where the “Mohammed-Messenger-Allah” you would expect is replaced by “Allah-Messenger-Mohammed”, if read in the usual top-to-bottom order.
Daesh are also influenced by Qutb’s idea that divine sovereignty will be restored by a self-proclaimed elite, and that the declaration of faith is not enough to define someone as Muslim, and must be completed by jihad. The practical manual for bringing about this new, truly Islamic state, was written by Daesh in the mid-2000s. The Management of Savagery: The Most Critical Stage through Which the Umma Will Pass sets out the thinking behind the terrorist campaigns we’re seeing just now. The idea is to create such chaos, by whatever means necessary, that the jihadi are seen as the only group capable of restoring and maintaining order, similar to the initial support for the Taliban regime from Afghanis exhausted by the corruption and incompetence of the warlords.
Terrorism is one part of this strategy, and Daesh have learned at least one lesson from the totalitarian regimes in Europe before and after the Second World War, namely that terror succeeds best when it is accepted on its own terms by its enemies. It’s not possible to physically terrorise everybody, but if everybody thinks they could be the next random victim, that is just as effective.
Another major strand of Daesh’s approach is finance. The recommendations of The Management of Savagery for winning people over emphasize: “Uniting the hearts of the world’s people by means of money”. The financial power of Daesh is another significant difference from previous terrorist organisations, with some estimates putting its annual turnover at around USD 2 billion. It obtains its income through extortion, theft, and the black market – the same means described in a 2010 Rand Corporation report into Al-Qaeda’s finances for the US Office of the Secretary of Defense.
Black market sales of oil probably remain Daesh’s main source of income, but as this dries up due to the success of the military forces opposing them, they will turn to other means (Al Shabab in Somalia, for instance, controlled the sugar trade). Whatever the source, corruption will remain the “enabling technology” that allows the terrorists to operate. Two of the 9/11 hijackers allegedly obtained fraudulent driver’s licences from a branch of Virginia’s Division of Motor Vehicles, which they used as identity cards to board the aircraft. The same branch had also sold licences to illegal immigrants in exchange for bribes. “Nigerian troops were denied weapons to fight Boko Haram and thousands of lives were lost because of rampant fraud in the procurement process”, Nigerian President Muhammadu Buhari declared when a corrupt multi-billion dollar deal for weapons and equipment was revealed in the press in November 2015. The deal never materialised, leaving troops without proper equipment to fight terrorist groups.
The OECD published work on the economic consequences of terrorism as long ago as 2002, and since then has examined regional, sectoral and broader aspects of the issues, for example terrorism and conflict over resources in West Africa, the implications for the transport industry, and how to help fragile states. In an analysis published earlier this year, Terrorism, corruption and the criminal exploitation of natural resources, the OECD argues that since terrorism is a multidimensional challenge, tackling it efficiently requires integrating social, economic, and political factors into the security analysis and response. Speaking personally, I would add that since the aims of Daesh include destroying democracy and dividing society along religious grounds, we should not do this for them in the name of the “war against terror”.
A session at the 2016 OECD Forum entitled “Teaching & Learning with Robots” brought Nao, a humanoid robot, to meet with a class of young students from the Sections Internationales de Sèvres (SIS) school. Catherine Potter-Jadas, head of the primary school, noted the children’s reactions to the robot.
Educators will take comfort from views such as: “For the moment, the robot can’t replace teachers because, in a country like France, children are too immature, and because they need a real human to control them. A robot wouldn’t have the authority.” In fact, most of the children saw the robot as a useful assistant rather than a substitute: “I think it’s great that a robot helps children in schools. They’ll find it interesting and become more open” said one, echoed by a classmate who thought that “Nao could be really helpful to education.” And of course some were more attracted to the entertainment value: “When the presenter said ‘Nao can carry anything lighter than a wooden spoon,’ he flexed his muscles! I liked the way the robot laughed and showed his muscles, it made me think of an odd little creature.”
Robots first made their appearance in industry, starting in the automobile sector in the 1960s. For decades, industrial robots were bulky and expensive. They were operated from stationary posts inside the workshop, and they carried out a small number of repetitive tasks, sometimes dangerous ones like welding and cutting metal. With improvements in technology, a second generation of robots was born. Less bulky and expensive, more autonomous, adaptable and cooperative, these robots are programmable and can be used by workers without any specific qualifications. They can also play new roles in services, health (surgical operations), education, training, commercial information, services to the elderly…
The children’s comments and questions raised a number of issues about the evolving role of technology in society and the economy, and how to equip people to take advantage of the profound impacts digital technologies will have on all of us. The majority of concerns related to robots are based on loss of jobs in developed economies. That said, there could be a “relocation” of low-skilled jobs to countries that have robots.
It is quite clear that challenges to the development of robots remain, in particular in the areas of perception, specific object recognition in a visually cluttered environment, object manipulation, and cognition. But smarter and more autonomous robots will soon be a reality thanks to improvements in a number of areas, including computational performance, electromechanical design tools and computer numerical control machines, storage of electrical energy and energy efficiency of power electronics, availability and quality of local digital (wireless) communication, scale and effectiveness of the Internet, and data storage capacities and their computational power.
In the commercial and industrial sector, beyond improving the reliability of manufacturing processes, robots have already shortened delays in the manufacturing of finished products, allowing greater responsiveness to variations in demand. The market for personal domestic robots is growing by around 20% a year, while prices should drop in the near future.
But even if people with no particular training will be able to use the next generation of robots, those who have not mastered ICTs will find themselves increasingly limited in their access to many basic services, to rewarding jobs, and to opportunities to improve their skills through training. Without ignoring the problems that arise from the disappearance of certain jobs, and the serious repercussions for people and society, we must acknowledge that these innovations are full of opportunities for productivity growth; they could create the new jobs of tomorrow.
It stands to reason that workers who acquire the skills necessary to adapt to changes in their line of work will be less vulnerable to replacement by digital technology. The innovations sparked by digital technologies could also present the potential for development and management of social improvements, in areas like public administration, health, education, and research. The creation of huge amounts of data and the capacity to extract knowledge and information from this data (known as big data) will launch a new wave of innovation, the creation of new services, the emergence of new products and markets…
Employee skills management will become vital in order for companies to adapt to rapid technological change, with support from complementary public investment in, for example, education and training. Primary and secondary schools will be responsible for preparing young people for an interconnected world where they will live with people of different origins and cultures, an undeniably “globalised” world.
The children from SIS didn’t sound at all worried by the prospects of more and better robots, quite the contrary: “I found it quite amazing and fabulous that technology is able to do such things. The people who built Nao must be very proud of their invention.”
Rough waters for container shipping. Why Hanjin, the world’s seventh largest container line, went under
Sad news. After months – even years – of pain and suffering, the South Korean container shipping company Hanjin finally sank and passed away. Not just any casualty, but the largest shipping bankruptcy in history: Hanjin was the world’s seventh biggest container line with a fleet of 90 ships. Was this an accident, an isolated case of bad luck, or is something more structural going on?
As with any bereavement, there are immediate arrangements to make. Terminal operators and maritime service providers were not paid for their services and need their money, so they have seized Hanjin ships in ports as some sort of guarantee. Hanjin’s clients are eager to know that their goods will be delivered and not be left stuck on ships. Competitors are circling the deceased to pick up some of the ships that Hanjin leaves behind.
At the same time, people are starting to wonder how all this could have happened. Forensic analysts talk about the sluggish demand for container transport, hit by declining trade from China, the overcapacity in container shipping and the resulting low ocean freight rates that have made it very difficult to make profits in container shipping. All this sounds very logical, but also pretty abstract, and – more fundamentally – it obscures an uncomfortable truth: this was not an accident, but market forces at play – and it will happen again.
The story starts – in a way – in a corporate boardroom in Copenhagen in 2010. Then, the world’s largest container shipping company, Maersk Line, decided to order a set of new container ships that were larger than the world had ever seen, able to carry 18,000 standard containers. Putting more containers on a more fuel-efficient ship would save costs and thus give it a better position in a very competitive market.
For a weekly container service between Asia and Europe – the route on which the largest ships are deployed – ten to eleven ships are needed: a lot of capital that smaller companies would not be able to raise. As the order for the new mega-ships was placed while the global economic crisis was still unfolding, banks were unwilling to lend much to a risky business like shipping, especially to smaller companies with high risk profiles. The timing was excellent, with ship prices low due to overcapacity in shipbuilding yards. The new mega-ships were smartly marketed as “Triple E” ships, promising economies of scale, energy efficiency and environmental performance. They also provided a once-in-a-lifetime opportunity for the market consolidation that big players hoped for.
Yet things worked out differently: other firms reacted by ordering similar mega-ships and by organising themselves into alliances. They agreed to share slots on each other’s vessels, which means they can offer networks and connections they could not offer if they went it alone. Alliances had existed before, but the Triple E strategy inadvertently resulted in stronger alliances involving more carriers. These consortia were also used to share newly acquired mega-ships, so individual carriers only needed to buy a few of them, instead of shouldering a whole set of ten. Consequently, many carriers were able to catch up rapidly and order mega-ships of their own, many more than expected. The alliances became such powerful mechanisms that even the largest companies found themselves forced to find alliance partners.
This gave a different twist to the play, but with a similar outcome. The combined mega-ship orders in a period of sluggish demand created a staggering amount of overcapacity: far more ships than were needed. This overcapacity resulted in lower freight rates, lower revenues and several years of losses, with no end yet in sight. Whoever has the longest breath and the deepest pockets will survive; the others won’t, and will suffer death by overcapacity, like Hanjin.
There will very likely be more Hanjins. Hardly any container shipping line is making a profit nowadays, and the prospects are bleak. Sputtering trade growth and gigantic ship overcapacity will continue to depress ocean freight rates. Banks, creditors and governments might well get impatient with some of the liners and cut lifelines again.
Economic theory champions the notion of “creative destruction”, in which inefficient firms are replaced by more efficient ones. So, even if it is hardly any comfort for employees who lose their jobs in the process, one could consider it natural that weaker shipping firms disappear.
There is just one problem. If this process continues, it will soon lead to a very small group of powerful carriers dominating an already concentrated market, enabling them to put a lot of pressure on clients and ports. We are starting to see the results: less choice, less service and fewer connections for shippers, the clients of shipping lines. The ports that accepted the offer they could not refuse, and invested in becoming mega-ship-ready, may find out that they placed their fate in the hands of a few big players who change loyalties as fast as the wind.
Hanjin is gone; the problem is still very much there.
See also: The impact of mega-ships, by Olaf Merk, on OECD Insights
The Hanjin case is a practical illustration of the complexity of sectors such as international shipping.
Shayne MacLachlan, OECD Environment Directorate
Anyone else feeling exhausted by all this drum-beating about air pollution? The fumes won’t be dissipating any time soon, it seems, once we consider the extent to which tyre and brake rubbish exacerbate the problem. The European Commission says exhaust and non-exhaust sources may contribute almost equally to total traffic-related PM10 emissions. A few months ago, I was proposing (on this very Insights blog) that electric cars are essential in fighting filthy air pollution in urban areas because humans are unwilling to relinquish the comfort of their vehicles. Since then, I find myself mulling over the “alarmingly obvious” realisation that electric cars use tyres and brakes too! Even if they emit fewer of the harmful fine particles than conventional vehicles, please do feel free to file that blog in the “seemed like a good idea at the time” folder. And to add insult to injury, my own colleagues at the OECD have just published new data on PM2.5 emissions, which did little to ease my blushes.
Tyres and brakes produce particulate matter (PM10 and PM2.5), which is more harmful to humans than gaseous pollutants like ozone and NO2, and a lot of this non-exhaust pollution winds up in rivers, streams and lakes. Fine particulate matter penetrates deep into your lungs and cardiovascular system; new research has even discovered tiny particles of pollution inside samples of brain tissue. The OECD is among the few international organisations proudly leading the fight against ambient air pollution. And rightly so, with 80% of the world population exposed to PM2.5: outdoor air pollution causes 3.7 million premature deaths a year, and 1 in 8 people die from filthy air. OECD Environment Director Simon Upton recently stated that air pollution is not just an economic issue, but also a moral one. He urges governments to stop fussing over the costs of efforts to limit pollution and start worrying more about the even larger costs they will incur if they continue to allow it to go unchecked.
Dead “tyred” but rolling on
Tyre rubbish is the 13th largest source of air pollution in Los Angeles, California, a city famous for its smog. A recent study showed links between PM2.5 particles and the daily death rate in six Californian counties: when the PM2.5 count was high, so was the death rate. Then there are nanoparticles, ultrafine particles used in tyres. Manufacturers didn’t know it at the time, but research now contends there may be links to lung cancer from recycling some of the 1 billion dead tyres used in, for example, the surfaces of playgrounds. Some are calling it “the new asbestos”. The complexity of the problem is evident: there are over 1 billion cars on the road globally, and on top of that just as many motorbikes and scooters. Add the pneumatic tyres used on trucks and on public transport such as metro trains and buses, and we have a considerable source of road rubber. A road carrying 25,000 vehicles a day can produce up to nine kilograms of tyre dust per kilometre. That’s only a quarter of the 100,000 cars that use the Champs-Elysées each day, which makes at least 36 kilograms of tyre dust per kilometre each day on the world’s most famous street.
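That back-of-the-envelope estimate can be checked in a few lines, using only the figures quoted above (the 9 kg per km figure is an upper bound, so the result is one too):

```python
# Quick check of the tyre-dust arithmetic in the text, using only the
# figures quoted there.

dust_per_km_per_day = 9.0          # kg, for a road carrying 25,000 vehicles/day
reference_traffic = 25_000         # vehicles/day
champs_elysees_traffic = 100_000   # vehicles/day (figure from the text)

# Assume dust scales linearly with traffic: 4x the vehicles, 4x the dust.
scale = champs_elysees_traffic / reference_traffic
dust = scale * dust_per_km_per_day

print(f"~{dust:.0f} kg of tyre dust per km per day")  # ~36 kg
```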
Blissful ignorance until my tyre burst
When I think back 10 years, sharing my time between the “not so clean” cities of London and Paris, I really had no idea that the air in these places was so bad. I recall often emptying my nostrils of their black contents after using underground transport, but now, learning about the added impact of tyre and brake rubbish, I’m not sure being better informed is better – at least from a personal health standpoint. I have friends in Paris who actively avoid Châtelet and other central metro stations for a number of reasons, one of them being the eye-watering pollution. The metro trains’ brakes and tyres contribute to this “perfect pollution storm in a subterranean teacup”. You can find between 70 and 120 micrograms of PM10 per m3 down there, with peaks of 1,000 micrograms per m3 trapped in the station. In comparison, the average concentration of PM10 outside is around 25-30 micrograms per m3.
So what can we do?
In an ideal world, we would ditch cars completely, but I’m not sure we’re ready to take that step yet. However, several cities are implementing policies that will ban cars or severely reduce their numbers. Oslo announced a plan to ban all cars from its city centre by 2019, and Norway is preparing a bill that would impose a nationwide ban on the sale of petrol-powered cars. In places such as Tuscany, cars are banned from city centres except for residents; visitors park just outside and take public transport, a practice common in the UK too. This means that even when there are more people in the centre during the day, there are fewer cars, and so fewer people are exposed. Hopefully, other cities and nations will be inspired by such drastic changes in transportation and follow suit. There are certainly enough reasons to do so.
Play the cards dealt and work towards a better hand
It’s hard not to feel we’ve exhausted our current options. I’ve gone through several cycles of choosing my means of transport and have ended up cycling – literally and figuratively. Do bicycle tyres contain rubber? Yes, though they shed precious little of it; so do bus tyres, some metro train tyres, and those of motorbikes and scooters. We are left with only imperfect options. They won’t solve the problem, but they can reduce it, and that’s something to be optimistic about. As with many actions that influence health and the environment, human behaviour and choices matter massively, and the bicycle remains the least damaging way of getting around town. It might also be worth avoiding the times when pollution levels are highest: 9h, 12h and 18h in many cities. Of course, the exercise and the heavy breathing that comes with riding increase your exposure, even though you are contributing least to the problem. So while the thought of all that damaging pollution is ever so “tyring”, the pollution itself, including from brakes and tyres, might also leave you feeling worse for wear.
An international deal on air pollution
WHO guidelines indicate that reducing PM10 pollution from 70 to 20 micrograms per m3 could cut air pollution-related deaths by roughly 15%. Staging a climate COP (Conference of the Parties) style conference to address air pollution emissions seems like a good start. Who could disagree that setting limits on polluting emissions from all sources is an absolute minimum requirement to give our lungs and the environment a breather? Moving forward, it’s crucial that we keep pushing governments to come up with innovations and policies that vigorously tackle air pollution, and to ensure that people are aware of the issues and helped to make the best choices. In the meantime, we all have to play the cards we’re dealt and make a conscious effort to choose the least polluting options.
Bettina Wistrom, OECD Statistics Directorate
Economic growth (GDP) always gets a lot of attention, but when it comes to determining how people are doing it’s interesting to look at other indicators that focus more on the actual material conditions of households. Let’s see how households in Australia are doing by looking at a few alternative indicators.
GDP and household income
Real household disposable income per capita increased 0.9% in Q1 2016 compared with the previous quarter (the index rose from 113.7 in Q4 2015 to 114.7 in Q1 2016), outpacing GDP per capita, which increased 0.6% (the index rose from 108.1 to 108.8). The rise in household disposable income mainly reflected growing compensation of employees and income from self-employment and, to a lesser extent, an increase in interest received by households.
This recent development continues the trend observed since the first quarter of 2007. Indeed, Australian real household disposable income per capita has grown considerably more than real GDP per capita: 14.7% versus 8.8% (chart 1). The growth in household income over that period partly reflects government interventions during the first years of the crisis, which, it is worth noting, did not hit the Australian economy as severely as it did other OECD countries. Even though there was a dip in economic activity in Q4 2008, the Australian economy never formally entered recession (defined as two consecutive quarters of negative growth).
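The growth figures quoted above can be reproduced directly from the index values, a useful sanity check when reading national accounts releases. Below is a minimal sketch in Python, assuming the indices are based at 100 in Q1 2007 (the index values are those cited in the text; any small gaps are rounding):

```python
def pct_change(new: float, old: float) -> float:
    """Percentage change implied by two index values."""
    return (new / old - 1) * 100

# Quarter-on-quarter growth, Q1 2016 vs Q4 2015
income_q = pct_change(114.7, 113.7)   # ≈ 0.9% (household disposable income per capita)
gdp_q = pct_change(108.8, 108.1)      # ≈ 0.6% (GDP per capita)

# Cumulative growth since the base period Q1 2007 (index = 100)
income_cum = pct_change(114.7, 100.0) # 14.7%
gdp_cum = pct_change(108.8, 100.0)    # 8.8%

print(round(income_q, 1), round(gdp_q, 1), round(income_cum, 1), round(gdp_cum, 1))
```

The same calculation works for any of the per-capita indices in the charts, provided the two values share the same base period.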
Chart 2 shows that net cash transfers to households increased sharply at the start of the financial crisis, mainly due to increases in social benefits. Since the peak in Q2 2009, net cash transfers to households have exhibited a downward trend, corresponding to the period in which economic growth and household disposable income started to move (more or less) in tandem.
Confidence, consumption and savings
Household disposable income is a meaningful way to assess material living standards, but to get a fuller picture of household economic well-being, one may also want to look at households’ consumption behaviour. Consumer confidence (chart 3) remained broadly stable at 99.7 in Q1 2016. This contributed to sustaining real household consumption expenditure per capita which increased by 0.3% in Q1 2016 (the index increased from 107.7 in Q4 2015 to 108.0 in Q1 2016) (chart 4).
The household savings rate (chart 5) shows the proportion of current income that households save. In Q1 2016, the savings rate ticked up to 15.8%, showing that households chose to save some of their additional income rather than spend it on goods and services. Overall, Australian households have a relatively high savings rate: over the whole period observed, the average is 16.2%, one of the highest among OECD countries. In Q4 2008, the savings rate increased sharply to its highest level (19%), partly reflecting households’ responses to deteriorating financial markets and increased uncertainty over future income.
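In simplified form, the savings rate described here is the share of disposable income not spent on consumption (the official measure also adjusts for changes in households’ pension entitlements). A hypothetical illustration in Python, with made-up income and consumption levels chosen only to mirror the ~15.8% rate in the text:

```python
def savings_rate(disposable_income: float, consumption: float) -> float:
    """Simplified household savings rate: share of disposable income not
    spent on final consumption, in %. (The official measure also adjusts
    for changes in households' pension entitlements.)"""
    return (disposable_income - consumption) / disposable_income * 100

# Hypothetical figures in arbitrary currency units.
print(round(savings_rate(1000.0, 842.0), 1))  # 15.8
```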
Debt and net worth
The household indebtedness ratio, i.e. the total outstanding debt of households as a percentage of their disposable income, may reflect (changes in) the financial vulnerability of the household sector and provides a useful yardstick for assessing debt sustainability. In Q1 2016, household indebtedness increased to 198.5% of disposable income (chart 6), having doubled over the past 20 years to one of the highest levels among OECD countries. This rise in households’ indebtedness followed the introduction of mortgage packages in the 1980s and 90s, which allowed homeowners to draw down on their mortgages when needed without having to sell their house. In addition, Australian households have increasingly borrowed money to finance house purchases (chart 6).
When assessing households’ economic vulnerabilities, one should also look at the availability of assets, preferably taking into account both financial assets (saving deposits, shares, etc.) and non-financial assets (for households, predominantly dwellings). Because information on households’ non-financial assets is generally not available on a quarterly basis, financial net worth (i.e. the excess of financial assets over liabilities) is used as an indicator of the financial vulnerability of households.
In Q1 2016, the financial net worth of households was 183.9% of disposable income (chart 7), 4.9 percentage points less than in the previous quarter. This decrease was predominantly driven by the rise in household debt (chart 6) and by holding losses on pension assets, equity and investment fund shares.
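The two balance-sheet ratios just discussed share the same denominator, household disposable income. The sketch below uses hypothetical income, debt and asset levels (not actual Australian data), scaled so that the resulting ratios match the 198.5% and 183.9% figures in the text:

```python
# Hypothetical household-sector balance sheet in arbitrary currency units,
# chosen only so the ratios reproduce those cited in the text.
disposable_income = 1000.0
liabilities = 1985.0       # total outstanding household debt
financial_assets = 3824.0  # deposits, shares, pension assets, etc.

# Indebtedness ratio: debt as a percentage of disposable income.
indebtedness = liabilities / disposable_income * 100  # ≈ 198.5

# Financial net worth: financial assets minus liabilities,
# also expressed as a percentage of disposable income.
net_worth = (financial_assets - liabilities) / disposable_income * 100  # ≈ 183.9

print(round(indebtedness, 1), round(net_worth, 1))
```

Because both indicators are scaled by the same income flow, a rise in debt mechanically lowers financial net worth unless asset values rise by at least as much.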
The unemployment rate and the labour underutilisation rate (chart 8) also provide indications of potential vulnerabilities of the household sector. More generally, unemployment has a major impact on people’s well-being. In Q1 2016 the unemployment rate in Australia remained at 5.8%, while the labour underutilisation rate, which takes into account underemployed workers and discouraged job seekers, decreased slightly to 21.3% (from 21.8% in Q4 2015). Notwithstanding this small decline, the gap between the unemployment rate and the labour underutilisation rate remains large in Australia – the largest among OECD countries, indicating unmet aspirations for more work among Australian workers.
One should keep in mind that households’ income, consumption and savings may differ considerably across various groups of households; the same holds for households’ indebtedness and (financial) net worth. The OECD is working on these distributional aspects and preliminary results can be found here and here. The Australian Bureau of Statistics (ABS) also has a long history of analysing household developments broken down by income and household characteristics.
Overall, the first quarter of 2016 saw a continued increase in Australian households’ material well-being, with income and consumption per capita still expanding. The upward trend in the household debt-to-income ratio and the high labour underutilisation rate, which reflects the challenge some groups of workers face in bouncing back after displacement, remain a source of concern. Despite increasing household debt, financial net worth is slowly returning to its pre-crisis level. However, to fully grasp people’s overall well-being, one should go beyond material conditions and also look at the range of other dimensions that shape people’s lives, as is done in the OECD Better Life Initiative.
For many years, the OECD has been focusing on people’s well-being and societal progress. To learn more about the OECD’s work on measuring well-being, visit the Better Life Initiative.
Interested in how households are doing in other OECD countries? Visit our households’ economic well-being dashboard.
Noe van Hulst, Ambassador of the Netherlands to the OECD
It was a unique event, for sure: China hosting its first G20 summit in Hangzhou on 4-5 September. The city where Chinese leader Mao Zedong, half a century ago, regularly met Third World guerrilla leaders to discuss the battle against US “imperialism”. China has come a long way since then, and now leads the G20 push to escape the “low-growth trap”, revive stalling globalisation and stem the tide of rising protectionism. With growth persistently too low, and trade lagging even that low rate, it is time for more decisive policy action. The overall result in the final communiqué has been coined the Hangzhou Consensus: linking a vision based on innovative, sustainable economic growth and a well-balanced policy mix with forcefully tackling inequalities and promoting an open global economy. It is encouraging to see China make the case for a rules-based global system of open trade and investment. Of course, this commitment should also have important consequences for domestic policies in China, as well as in other G20 countries. In this context, it is a remarkable step that G20 leaders have now agreed to tackle excess capacity in the steel market.
What was striking in China’s approach to the G20 presidency is the welcome focus on medium- and long-term structural economic policies, combined with an orientation towards policy action. Trade and investment moved up the policy agenda, resulting in a G20 Strategy for Global Trade Growth and G20 Guiding Principles for Global Investment Policymaking. Completely new was the emphasis on innovation, the digital economy and the New Industrial Revolution, nicely brought together in a G20 Blueprint on Innovative Growth. In addition, there was a drive to deliver an Enhanced Structural Reform Agenda, identifying priority areas for structural reform and monitoring a new set of quantitative indicators.
As always, implementation will be key, especially on reversing adverse trends in trade, investment, structural reform and Internet control. Although these elements were in my view the most remarkable, of course many other policy areas have also been advanced: taxation, finance, employment, entrepreneurship, sustainability, energy, green finance and climate change. The announcement by both China and the US of their ratification of the Paris Agreement on Climate Change was widely welcomed as an important step in the transition to a low-carbon economy.
Another remarkable feature of China’s G20 presidency is the continuing important role of the OECD Secretariat. Although China is not an OECD member, it relied substantially on the analytical support and assistance of the OECD Secretariat in a number of key areas, e.g. innovation, trade and investment, structural reform, employment, inequality, green finance and taxation. This can be interpreted as an important sign of appreciation for the quality of the work. We can observe a welcome rapprochement between China and the OECD Secretariat, as well as with the IEA in the energy field. The OECD Secretariat also showed laudable flexibility and adaptability in responding in a timely manner to the G20’s call to pull together a new Inclusive Framework on Base Erosion and Profit Shifting (BEPS), which now has 85 countries, including many developing countries, committed to the BEPS roadmap.
Of course, much work is still ahead of us. Apart from the existing work streams, the OECD is tasked with taking forward the work on innovation/digital economy, overcapacity in the steel market (within a new Global Forum led by OECD) and designing tax policies for inclusive growth, among other things. The Netherlands views the supporting role of the OECD Secretariat in the G20 as very useful, as it provides non-G20 OECD countries an important window on and bridge to the G20.
Where is the G20 heading? Created as a mechanism for crisis management, it is now moving in the direction of a “steering committee” of the global economy. China’s emphasis on medium- and long-term structural policies has been helpful in this respect. Some observers express disappointment with the G20’s effectiveness in the face of persistently weak growth and a widely proliferating G20 agenda. This is partly understandable, particularly when it comes to insufficient implementation of agreed G20 commitments, such as on trade and structural reform. Structural policy measures following the 2014 G20 commitment to raise global GDP by an additional 2% by 2018 have so far delivered only about 1%, according to OECD and IMF calculations. And it has been widely reported that G20 protectionism has been on the rise, contrary to what was agreed. However, as the G20 expert Tristram Sainsbury (of the Australian Lowy Institute) says: “if the G20 did not exist, we would have to invent it”. With so many global problems requiring coordinated collective policy responses or new global standard-setting, not even big countries can go it alone. At the same time, it is in the G20’s own interest to find constructive channels of engagement with non-G20 countries, some of which are at the vanguard of policy innovation, implementing first what later often becomes mainstream in the rest of the world.
After the successful G20 summit in Hangzhou, we are already looking forward to the German presidency of the G20 in 2017. We would expect Germany to further advance the G20’s attention to structural policies, including topics like the digital economy, health care and responsible business conduct in global value chains. Undoubtedly, the OECD Secretariat will again provide the presidency with useful analytical support. We will soon see!