The end of the bargain? And should we worry?

Price elasticity. And vice versa.

In the run-up to an OECD Competition Committee roundtable discussion on 30 November 2016, Chris Pike of the OECD Directorate for Financial and Enterprise Affairs looks at the concerns and the opportunities created by the increased scope for personalised pricing in the digital economy.

We’ve all felt it – the rush you get when you find a great bargain at a price way less than you would happily have paid. But will these moments continue in the digital world as shopping moves online and the scope for firms to charge different prices to different customers increases?

In the era of big data and automated pricing algorithms, firms can combine huge volumes of information with increasingly sophisticated analytical tools to model and predict, with growing accuracy, our willingness to pay for a product. This helps them identify how much extra profit they might have extracted from any given transaction and, if they wish, to personalise their prices accordingly so as to capture that profit.
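To make the mechanics concrete, here is a minimal sketch of how a personalised price might be chosen once a firm has an estimate of a customer’s willingness to pay. The demand curve, the sensitivity parameter and the numbers are invented for illustration; no real retailer’s model is this simple.

```python
import numpy as np

def purchase_probability(price, willingness_to_pay, sensitivity=0.15):
    """Toy demand model: the further the price sits above the customer's
    estimated willingness to pay, the less likely they are to buy."""
    return 1.0 / (1.0 + np.exp(sensitivity * (price - willingness_to_pay)))

def personalised_price(willingness_to_pay, unit_cost, candidate_prices):
    """Pick the candidate price that maximises expected profit for this customer."""
    expected_profit = [
        purchase_probability(p, willingness_to_pay) * (p - unit_cost)
        for p in candidate_prices
    ]
    return candidate_prices[int(np.argmax(expected_profit))]

prices = np.arange(5, 301, 5)  # candidate price points in euro
print(personalised_price(willingness_to_pay=300, unit_cost=1, candidate_prices=prices))  # ~275 for the superfan
print(personalised_price(willingness_to_pay=15, unit_cost=1, candidate_prices=prices))   # 15 for the casual listener
```

Swap a fitted purchase-probability model in for the toy curve and the same grid search produces a different “optimal” price for every customer.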

Reports suggest that more than 500 sellers on Amazon Marketplace have been identified as using algorithmic pricing and that at some retailers prices can differ by 10-30% for identical products based on the customer’s location, the device they’re using, proximity to a rival’s brick-and-mortar store (paywall) and characteristics including browser configuration.

An early example is Safeway, a grocery store whose mobile app sends personalised promotional offers on specific products to shoppers’ mobile devices as they walk around the store. These could be based on the current or forecast weather, products previously purchased, the regularity and history of purchasing that product, complementary products purchased that day, and whether the location of the shopper’s mobile device suggests they have already passed the aisle. Whether the customer uses the offer or not, the firm can feed that information back into its pricing algorithm to predict the discount required for like-minded consumers.
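As a purely hypothetical sketch of how such signals might feed a coupon model, the snippet below assembles them into a feature vector and applies a toy scoring rule; every field name and weight is invented for illustration and none of it reflects Safeway’s actual system.

```python
# Hypothetical feature vector for one shopper and one product.
shopper_features = {
    "forecast_temp_c": 29.0,            # current or forecast weather
    "bought_product_before": 1,         # purchase history for this product
    "weeks_since_last_purchase": 6,     # regularity of purchasing
    "complementary_item_in_basket": 1,  # e.g. burger buns when pricing ketchup
    "passed_aisle_already": 0,          # inferred from in-store location
}

# Toy scoring rule standing in for a fitted model: shoppers who look less
# likely to buy at full price are offered a deeper personalised discount.
WEIGHTS = {
    "forecast_temp_c": -0.002,
    "bought_product_before": -0.10,
    "weeks_since_last_purchase": 0.02,
    "complementary_item_in_basket": -0.05,
    "passed_aisle_already": 0.10,
}

def suggested_discount(features, base=0.15):
    """Return a discount rate between 0% and 50% for this shopper and product."""
    score = base + sum(WEIGHTS[name] * value for name, value in features.items())
    return max(0.0, min(0.5, score))

print(f"offer: {suggested_discount(shopper_features):.0%} off")
```

Whether the shopper redeems the offer then becomes a new labelled observation for refitting the weights, which is exactly the feedback loop described above.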

Moreover, this information might well be augmented in future, particularly as data is generated by the ‘internet of things’ (connected devices in cars, kitchen appliances and health devices). For example, what were they doing before they went shopping? When did they last eat? Is there an appointment they are late for? Which rival stores do they use? Which rival stores are located on their route home? What’s in their calendar for the next week? What’s on their digital shopping list (or that of other family members)? Have they ordered takeaway that week? Do they have family visiting?

So this could be really bad news for consumers that would in any case buy the product; what if Amazon now realises that I’m willing to pay a frankly irrational amount of money for the new Hot Chip album? Are they seriously going to charge me 300 euro for it? Well, if they had an exclusive on it, then they might do. Would that be exploitative? Price gouging? Should competition agencies investigate? Maybe.

But it could also mean they charge my dad 3 euro, when he would never dream of paying 15 euro for it: since reproduction in the digital world is effectively costless, this is all profit. It would also increase the incentive for the band to make another album, which would be great, though at some point we might wonder whether this incentivises the right things.

Moreover, if I can get it elsewhere then the information that I value the product at 300 euro has little value. So big data should be less valuable in more competitive markets? Might its value even help us identify uncompetitive markets? Again, perhaps. But, if I can’t get it elsewhere, that is if they have market power, then what?

Well, firstly, we should recognise that consumers might not be defenceless: they might react by withholding information, and services have been developed to provide anonymity (paywall). By making it more difficult for firms to estimate a consumer’s valuation, these can disrupt discriminatory pricing schemes. Alternatively, consumers might start to demand compensation for providing the information.

And even if consumers are individually vulnerable, there might be small things we can do to help empower them. For example, while personalised pricing is not yet widespread, a more common approach is to send personalised coupons. These change the effective price without changing the list price, and this framing can make the practice more acceptable to consumers who would otherwise resent being charged personalised prices. The risk of offending a sense of fairness is perhaps key, and may lead to firms facing boycotts. This may explain the proposal that firms using personalised pricing schemes should have to declare transparently to consumers that they are doing so. They might, for example, be required to specify whether the personalised price is higher than average, or the range of prices they are charging for the product in question.

So the effects of personalised pricing could cut either way, and could often benefit consumers, particularly those on small budgets, which means we can’t go blundering in and prohibit all price discrimination. What to do then? Traditionally, competition agencies would look at price discrimination where it was used to exclude a rival, and perhaps where it threatened to distort competition in a downstream market. They have been understandably reluctant to challenge price discrimination in final consumer markets. As a default position, a presumption that price discrimination is often competitive and good for consumers is entirely sensible. However, as this discrimination becomes near perfect in its execution, and the scale of potential harm to consumers increases, the risk calculus changes, and so agencies and lawmakers may need to become more open to complaints and more active in requiring transparency on pricing policy.

Useful links

This post is based on an OECD Secretariat paper on price discrimination

OECD Roundtable on Price Discrimination (30 November 2016)

Big Data: Bringing competition policy to the digital era

Data-driven innovation: Big data for growth and well-being

OECD Competition Committee Best Practice Roundtables

How does big data impact education?

Marc Fuster Rabella, OECD Centre for Educational Research and Innovation. Today’s post is also being published by OECD Better Life Initiative partner Sodexo.

Big data refers to the high volume of varied information that our societies produce today. The amount of data generated is so vast that it is difficult even to capture, manage and process through conventional means. Big data increasingly influences our lives as better forms of data processing appear and storage capacity improves. Individuals’ interactions on social media, for example, help companies better understand consumer demand and find new ways to reach their target audiences. Similarly, city planners use urban mobility data to better address public transportation needs. Big data is changing the way decisions are made everywhere, and education is no exception.

Education systems produce a huge amount of information about our students and schools. This includes data such as students’ attendance and attainment, their performance and socioeconomic background, as well as schools’ population composition and instruction time. These and other kinds of information are important for the administration of education systems, but they can also be very helpful for analysing how the system functions and for supporting its improvement.

Access to data can help students define their learning goals and strategies. It can give families the information they need to make decisions and support their children’s educational paths, and it can improve teachers’ and schools’ ability to adapt teaching methods to students’ specific contexts and needs. It can aid researchers in identifying what works best and finding new ways for data to drive improvement, as well as give decision makers the evidence to design policies that better support their districts and schools.

However, simply having the information is not enough to take advantage of it. An OECD case study shows that self-evaluation processes for development and improvement in some Polish schools were hindered by an excessive emphasis on between-school comparisons and competition. The absence of a culture of evaluation led these schools to narrow their focus to students’ test results. Given that the quality of schools is not determined by these results alone, their efforts were not able to bring about improvement.

As the Polish case exemplifies, the process of collecting, analysing and using data comes with its own challenges. It is the combination of descriptive data, research findings and practitioner knowledge that creates robust knowledge environments for decision making. Results from standardised tests, for example, only provide a snapshot of performance at a particular moment in time. It is when they are combined with other kinds of information that we can actually improve our practices where it matters most: the classroom. The Knewton platform is a good example of how such a combination can play a crucial role in finding tailored solutions to students’ individual learning needs.
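One widely used technique behind adaptive learning platforms is Bayesian knowledge tracing, which folds a stream of right and wrong answers into a running estimate of whether a student has mastered a skill. The sketch below uses standard textbook parameters and is a generic illustration, not a description of Knewton’s proprietary models.

```python
def update_mastery(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One step of Bayesian knowledge tracing: revise the probability that a
    student has mastered a skill after observing a single answer."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The student may also have learned the skill while answering this item.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the skill is already mastered
for answer_was_correct in [True, False, True, True]:
    p = update_mastery(p, answer_was_correct)
    print(f"estimated mastery: {p:.2f}")
```

Estimates like these only become useful when combined with the teacher’s own knowledge of the student and the classroom, which is precisely the point about robust knowledge environments.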

Teachers, schools and other stakeholders involved in decision making need to transform available data into knowledge, which is to say, they need to assimilate this information and understand how to use it. But this is easier said than done. Creating robust knowledge environments that effectively support decision making requires building capacity for stakeholders across the system. Big data can surely support educational change, but knowing which information to use, why and how is as fundamental as its availability.

Useful links

The Simple, the Complicated, and the Complex: Educational Reform through the Lens of Complexity Theory

Governing Education in a Complex World

Education Governance in Action: Lessons from Case Studies

Big Data, Complexity Theory and Urban Development

Ricardo Herranz, Managing Director, Nommon Solutions and Technologies, Madrid

We are living in the era of cities: more than 50% of the world population is already living in urban areas, and most forecasts indicate that, by the end of this century, the world’s population will be almost entirely urban. In this context, there is an emerging view that the global challenges of poverty eradication, environmental sustainability, climate change, and sustainable and secure energy are all intimately linked to cities, which are simultaneously places where these global problems emerge and solutions can be found. In the short term, cities are facing the major challenge of overcoming the financial and economic crisis and emerging stronger from it. In the long term, they need to deal with structural challenges related to globalisation, climate change, pressure on resources, demographic change, migration, and social segregation and polarisation. Many of these challenges are shared by cities from developed and developing countries, while others depend on geographical, institutional, socioeconomic and cultural differences.

When addressing these problems, policy makers and society at large face a number of fundamental difficulties. The many components of the urban system are strongly interwoven, giving rise to complex dynamics and making it difficult to anticipate the impact and unintended consequences of public action. Cities are not closed systems, but part of systems of cities. Urban development policies are subject to highly distributed, multi-level decision processes and have a profound impact on a wide variety of stakeholders, often with conflicting or contradictory objectives.

In the past few years we have seen the emergence of concepts such as the smart city, urban informatics, urban analytics and citizen science, which are seen to hold great promise for improving the functioning of cities. However, arguably most of this potential still remains to be realised. The concept of the smart city has been coined as a fusion of ideas about how information and communication technologies can help address critical issues relating to cities. Essential to this concept is the notion of an integrated approach to the synergies and trade-offs between different policy domains that are closely interrelated, but have traditionally been addressed separately, such as land use, transport and energy. This integrated approach would be facilitated by the ability to analyse the increasingly large data streams generated by the ubiquitous sensorisation of the built environment and the pervasive use of personal mobile devices. In parallel, smart devices and social media are also producing new forms of public participation in urban planning. The opportunities are vast, but so are the challenges.

Much hope has been placed in the explosion of big data for establishing the foundations of a new science of cities. During the last 20 years, the dominant trend in urban modelling has changed from aggregate, equilibrium models to bottom-up dynamic models (activity-based and agent-based models) that seek to represent cities in more disaggregated and heterogeneous terms. This increasing model sophistication comes with the need for abundant, fine-grained data for model calibration and validation, which has hindered the operational use of state-of-the-art modelling approaches. The emergence of new sources of big data is enabling the collection of spatio-temporal data about urban activity with an unprecedented level of detail, providing us with information that was not available from surveys or census data. This has already yielded important practical advances in fields like transportation planning, but it is more questionable, at least for the moment, whether big data has produced substantial advances in our understanding of cities. In principle, the potential is there: while research on cities has historically relied on cross-sectional demographic and economic datasets, often consisting of relatively small samples, we now have large-scale, detailed longitudinal data that allow us to test new hypotheses about urban structure and dynamics. On the other hand, there is a risk that big data leads to a shift in focus towards short-term, predictive, non-explanatory models, abandoning theory. Connecting the smart city and big data movements with the knowledge developed over the last decades in fields like regional science, urban economics and transportation modelling appears to be an essential condition for overcoming this problem and taking advantage of the opportunities offered by big data for the formulation of better theories and policy approaches.

Both empirical work and theoretical advances are needed to cope with the new challenges raised by energy scarcity and climate change, emerging technologies like self-driving cars, and the changes in social relationships, new activities and new forms of sharing economy enabled by social media and electronic communications, among other factors that are leading to profound changes in urban structure and dynamics. Equally challenging is the task of integrating data and models into governance processes: policy assessment and participatory planning are still largely based on qualitative considerations, and there is a sense that state-of-the-art urban models are immature with respect to institutional integration and operational use. New forms of data sharing and visualisation, digital participation and citizen engagement are promising tools for tackling this question, but here again, we still have to figure out how to share data and specialised knowledge in a form that fluidly intersects participatory decision-making processes and bridges the gap between implicit and explicit knowledge. Recent advances in areas such as network theory, agent-based computational modelling and group decision theory, and more generally the intrinsically holistic and eclectic approach advocated by complexity science, appear to provide a suitable framework for the development of a new science of cities, which can in turn lead to new advances in the way cities are planned and managed, allowing us to address the enormous challenges of urban development in the 21st century.

Useful links

OECD work on urban development

The OECD is organising a Workshop on Complexity and Policy, 29-30 September, OECD HQ, Paris, along with the European Commission and INET. Watch the webcast: 29/09 morning, 29/09 afternoon, 30/09 morning.

There’s an algorithm for that. Or there soon will be

Marina Bradbury, OECD Public Affairs and Communications Directorate and one of the organisers of this year’s OECD Forum

Would you like a machine to decide on your medical treatment, whether you could insure your house, if you should be hired, or what news stories you read? It may be happening to you already. Every time you go online to make a purchase, search for a restaurant, access your bank account or simply interact with your mobile device, you are creating a digital trail of data that is being tracked and stored. This “big data” is fodder for machine learning algorithms that will for example suggest what to buy.

Traditionally in computer science, algorithms are a set of rules written by programmers. Machine learning algorithms are different: they can improve the software in which they are embedded without human intervention. The more data they receive, the better their ability to “understand” and predict patterns, including patterns in human behaviour. They are another step along the road to creating artificial intelligence (AI), even if we don’t know where this road is leading. As Stephen Hawking and his colleagues, writing in The Independent, claimed, “Success in creating AI would be the biggest event in human history”, before going on to say, “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
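The difference can be made concrete with a deliberately tiny example, a sketch rather than a description of any production system: the first function applies a fixed rule written by a programmer, while the second keeps simple word counts that shift its predictions as new labelled messages arrive.

```python
from collections import defaultdict

# A hand-written rule: its behaviour never changes, whatever data it sees.
def rule_based_spam_filter(message):
    return "free money" in message.lower()

# A very simple learning component: it keeps running word counts for spam and
# non-spam messages, so its predictions change as more labelled examples
# arrive, with no programmer editing the code in between.
class LearningSpamFilter:
    def __init__(self):
        self.spam_counts = defaultdict(int)
        self.ham_counts = defaultdict(int)

    def learn(self, message, is_spam):
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in message.lower().split():
            counts[word] += 1

    def predict(self, message):
        score = sum(self.spam_counts[w] - self.ham_counts[w]
                    for w in message.lower().split())
        return score > 0

f = LearningSpamFilter()
f.learn("win free money now", is_spam=True)
f.learn("lunch at noon?", is_spam=False)
print(f.predict("free money waiting"))  # True -- and the verdict sharpens with more data
```

Real systems use far richer statistical models, but the essential property is the same: behaviour is shaped by the data rather than fixed in advance by the programmer.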

We are living in an algorithmic society, and many argue that this is a positive thing. On an economic level, machine learning algorithms could help stimulate innovation and productivity growth. According to OECD research, big data used to feed machine learning algorithms can boost industries including advertising, health care, utilities, logistics, transport and public administration. When it comes to our day-to-day lives, algorithms can save us time and effort, for example online search tools, Internet shopping and smartphone apps leveraging “beacon” technology to provide timely recommendations based upon our whereabouts. Computer scientist Pedro Domingos even predicts that in five years’ time, digital personal assistants will be more important than smart phones, with their capacity to aggregate information from various apps to predict our needs before we even know them.

However, the large-scale use of algorithms can also be threatening to us as citizens. For example, if algorithms allow companies to predict our purchases before we even make them, what implications does this have for our personal choices and privacy? Critics point towards the dangers of allowing companies to exploit vast amounts of personal data and restrict individual liberties.

Take the realm of insurance, loans and legal advice. Nowadays, our credit rating or health insurance record is often assessed by a machine, not a person, whilst virtual legal assistants are becoming increasingly common. On the one hand, this can be advantageous to companies, enabling higher levels of efficiency, and in turn more accessible prices. The legal industry is undergoing a veritable transformation thanks to algorithmic technology, with quantitative legal prediction (QLP) being a prime example. Making information-based predictions is at the heart of the legal profession. In addition, legal cases often require the analysis of large-scale data or document sets, which can pose a challenge to the cognitive limitations of humans. Since algorithms are able to make predictions based on “big data” with increasing accuracy, QLP is arguably set to play an increasing role.

On the other hand, when it comes to ordinary customers looking for legal support or a loan, automated systems may not be so helpful. Critics warn that even if an algorithm is designed to be neutral, bias can creep in. This can be due to the unconscious bias of computer programmers. With machine learning algorithms, bias can also enter through the data they are fed: even if they absorb this data in a completely rational way, they will still reproduce forms of discrimination that already exist in society. For example, if you are looking for a bank loan, you might be offered a higher or lower rate depending on your postal address, name, age or gender.
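A stylised illustration of how this happens, using synthetic data and invented numbers: both postcodes below have identical underlying repayment ability, but because the historical decisions the model learns from were harsher on one of them, the model recommends approving that group far less often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: two postcodes with identical repayment ability.
postcode = rng.integers(0, 2, n)
ability = rng.normal(0.0, 1.0, n)

# Historical decisions were biased: applicants from postcode 1 needed a
# higher ability score to be approved.
historical_approval = ability > (0.0 + 0.8 * postcode)

# The simplest possible "model" fitted to those decisions: approve at the rate
# observed for similar past applicants. No rule says "penalise postcode 1",
# yet the gap is learned straight from the data.
for group in (0, 1):
    rate = historical_approval[postcode == group].mean()
    print(f"learned approval rate for postcode {group}: {rate:.2f}")
```

Simply dropping the postcode column does not necessarily fix this, because other features such as name, address history or shopping patterns can act as proxies for it.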

In the same way, whilst “talent analytics” is being used in HR to help build fairer recruitment practices, these new technologies do not offer a quick fix. For example, studies have found that women or people with “foreign” sounding names receive different kinds of job advertisements than white males. Nevertheless, global companies such as Google and McKinsey are already developing “talent algorithms” to recruit the best staff and assess performance. Moreover, some argue that companies that fail to move in this new direction may lose out later on.  Overall, it seems that algorithms could have a positive impact on the future of recruitment, but only when used judiciously as part of a wider process towards inclusiveness.

The healthcare industry is another key area in which the paradigm of the algorithmic society is played out. For example, a recent study in the US revealed how machine learning can offer a faster and less resource intensive method of detecting cancer, with machines automatically extracting crucial meaning from plaintext reports. Arguably, if machines can be used to review and analyse data, this frees up humans’ time to provide better clinical care. However, the ethical sensitivities of using algorithms to make critical health decisions must be addressed when developing innovative new models.
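The study’s models are statistical rather than hand-written, but the underlying task (turning free-text reports into structured flags that a registry can count and audit) can be sketched with simple pattern matching. The phrases and rules below are invented for illustration and are not a validated clinical vocabulary.

```python
import re

# Invented example phrases, not a validated clinical vocabulary.
POSITIVE_PATTERNS = [
    r"consistent with (adeno)?carcinoma",
    r"malignant cells (are )?present",
    r"invasive ductal carcinoma",
]
NEGATIVE_PATTERNS = [
    r"no evidence of malignancy",
    r"benign",
]

def flag_report(text):
    """Return a structured flag extracted from one free-text pathology report."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in NEGATIVE_PATTERNS):
        return "negative"
    if any(re.search(p, lowered) for p in POSITIVE_PATTERNS):
        return "positive"
    return "needs human review"

print(flag_report("Sections show invasive ductal carcinoma, grade 2."))
print(flag_report("No evidence of malignancy in the sampled tissue."))
```

A machine learning approach replaces the hand-written patterns with classifiers trained on reports that clinicians have already labelled, which is what allows it to scale across millions of documents and to keep improving.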

Trading algorithms are transforming the financial world as we know it. Algorithmic trading has given rise to companies such as Quantopian, which invites “talented people everywhere” to create their own algorithms for free and pays the authors of those that perform best, and Rizm, which lets those new to trading test and even trade using their own algorithms. However, the field is not without dangers: just one typo could lead to significant financial losses in a short amount of time. The ethics of algorithmic trading are also questioned by critics. With computer-driven or “quantitative” hedge funds enjoying success despite volatile markets, their business models will not escape scrutiny as algorithms continue to permeate our economic systems.
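The algorithms traded on such platforms are proprietary, but the general shape of one is easy to sketch. The moving-average crossover rule below is a standard textbook toy shown only to illustrate how a few lines of code end up generating trading decisions; it is not anyone’s actual strategy, and code like this is exactly where a single typo can do expensive damage.

```python
def moving_average(prices, window):
    """Simple trailing average of the last `window` prices."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signals(prices, short=3, long=5):
    """Emit 'buy' when the short-term average crosses above the long-term one,
    'sell' when it crosses below, and 'hold' otherwise."""
    short_ma = moving_average(prices, short)[long - short:]  # align the two series
    long_ma = moving_average(prices, long)
    signals = ["hold"]
    for prev_s, prev_l, s, l in zip(short_ma, long_ma, short_ma[1:], long_ma[1:]):
        if prev_s <= prev_l and s > l:
            signals.append("buy")
        elif prev_s >= prev_l and s < l:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals

prices = [10, 10, 11, 12, 13, 12, 11, 10, 9, 10, 12, 14]
print(crossover_signals(prices))  # a 'sell' as the price slides, a 'buy' as it recovers
```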

Finally, algorithms that drive search engines can influence the information we receive, impacting our outlook on the world and even our well-being. Take the phenomenon of “filter bubbles”. This relates to the way algorithm-based search tools are likely to show us information based upon our past behaviour, meaning it is unlikely to challenge our existing views or spark serendipitous connections. More worrying still, Facebook conducted an experiment in 2014 to test the reaction of users to negative or positive content. The results revealed that those shown more negative content posted more negative comments, and vice versa. However, the way the experiment was conducted was criticised for its lack of transparency.
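A stylised sketch of the mechanism (the scoring rule is invented, and production ranking systems are vastly more elaborate): articles that overlap most with what a user has already clicked rise to the top, so material that might challenge their views rarely surfaces.

```python
from collections import Counter

def interest_profile(clicked_articles):
    """Build a word-frequency profile from the articles a user has clicked."""
    profile = Counter()
    for text in clicked_articles:
        profile.update(text.lower().split())
    return profile

def rank_feed(candidates, profile):
    """Order candidate articles by overlap with the user's past clicks."""
    def score(text):
        return sum(profile[word] for word in text.lower().split())
    return sorted(candidates, key=score, reverse=True)

past_clicks = ["tax cuts boost growth", "lower taxes help small business"]
candidates = [
    "new study questions tax cuts",
    "tax cuts and small business growth",
    "city opera season opens",
]
for article in rank_feed(candidates, interest_profile(past_clicks)):
    print(article)  # the reinforcing story ranks first, the unrelated one last
```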

The paradigm of the algorithmic society is very much bound up in the unknown. In many ways, this is exciting, capturing how data is becoming the raw material of our era, a source of many possibilities for innovation and even the means to address social problems. Yet it can also be a threat. As Pedro Domingos puts it, “You can’t control what you don’t understand, and that’s why you need to understand machine learning”. The challenge will be to ensure that we live in a society which reaps the benefits that algorithms can bring, whilst ensuring that their implications are understood by all.

Useful links

OECD Policy Brief on the future of work: Automation and independent work in a digital economy

The 2016 OECD Forum, on 31 May – 1 June, is entitled “Productive economies, Inclusive societies”. The Forum is organised around the three cross-cutting themes of OECD Week: inclusive growth and productivity, innovation and the digital economy, and international collaboration for implementing international agreements and standards. Register now, it’s free!

Using big data in the fight against dementia

In today’s post, Elettra Ronchi of the OECD Science, Technology and Innovation Directorate argues that our current model of innovation has so far failed to deliver the effective treatments that we urgently need for the 44 million people living with dementia worldwide, and asks if recent advances in information technology can come to the rescue.

There’s a quiet revolution afoot: health data are increasingly collected, stored and used in digital form. Doctors, nurses, researchers, and patients are all producing on a daily basis huge amounts of data, from an array of sources such as electronic health records, genomic sequencing, high-resolution medical imaging, ubiquitous sensing devices, and smart phone applications that monitor patient health. In fact the OECD predicts more medical information and health and wellness data will be generated in the next few years than ever before.

The remarkable expansion of digital health data is largely driven by technological developments, not least the expansion of broadband access, smart mobile devices and smart ICT applications. Improvements in data analytics have also played a significant role, as has the provision of super-computing resources through cloud computing.

This revolution could prove particularly helpful for neurodegenerative diseases like dementia. Because of dementia’s clinical and biological complexity, the studies needed to underpin drug discovery and develop new therapeutic strategies aimed at slowing disease progression will require massive and diverse data collection, storage and processing. And large quantities of broad and deep data are being generated across laboratories worldwide – the information is behavioural, genetic, environmental, epigenetic, clinical, administrative, and more. Harnessing this data, advocates argue, would present advantages across the board: for research, patient care, health system management, and public health.

So how can we foster this environment where data aids dementia innovation? Today, researchers’ willingness to share data is often constrained by uncertainty. Several issues are at play.

First, ethical concerns need to be accounted for. Currently, informed consent permissions, which cover the consent for the use of the participant’s data, tend to be limited to the research questions related to the primary study focus. This means they exclude potentially unrelated investigations that could follow from open access to these data in the wider research community. New tiered step-by-step or dynamic consent models are needed to meet ethical and legal requirements and at the same time accommodate the changes in data use and research practices.

Second, there are broader challenges to data sharing, related to the lack of an open data culture. Open science has enormous potential to avoid wasteful duplication of effort, to enable the verification of scientific results and the re-analysis of data for different purposes, and to promote competition of ideas and research. In 2013, the G8 Science Ministers’ statement called for publicly funded scientific research data to be open.

Yet there are still considerable disincentives that researchers and scientists face with respect to the disclosure of data, particularly at the pre-publication stage. Credit sharing in the academic economy presents dilemmas for researchers. Publications by whole consortia or with numerous authors are a challenge for academics concerned about how these publications will be credited and recognised for career promotion by their institutions. This raises the question of what actions are needed to promote data access and openness to boost research and innovation without discouraging data collection by individual researchers.

Third, there is a need for investment in order to harness the potential of data for dementia. Collecting, storing, linking, organising and analysing data requires considerable investment and collaboration, and appropriate funding needs to be set aside. Sustaining big data infrastructures will also require financing: for many big data projects, networks or federated research platforms, the most significant challenge once the initial funding runs out is the development of a sustainable business model that, as a bare minimum, would sustain the curation and maintenance of data in an accessible form.

Big data also requires large numbers of people who are very highly trained and in huge demand from other sectors. Data specialist skills could become the most critical enabler for big data dementia research. Incentives are needed to promote education and training of data analysts and bioinformatics experts to use big data effectively for health research.

Of course, the explosion of promising new technological opportunities and data generation will not automatically translate into new products and care solutions for dementia and other neurodegenerative diseases. In order to deliver this promise, these new developments will have to be accompanied by organisational, infrastructural and governance changes throughout the health innovation system. The current R&D process is fragmented, costly, unpredictable and inefficient. Funding for dementia and other neurodegenerative diseases accounts for less than 1% of research and development budgets in the G7. These, and other issues, will also need to be addressed.

Researchers in industry, hospitals and universities continue to make significant contributions to scientific understanding. But without better data sharing, interpretative capacity, and co-ordination of knowledge, we can make only limited progress in our understanding of the molecular basis of neurodegenerative diseases or whether treatments or interventions work. Radical improvements in information technologies and the increasing gathering and sharing of electronic health data not only make it timely to assess and improve global capacity to undertake multidisciplinary research – they make it imperative.

Useful links

Addressing Dementia: The OECD Response, OECD Health Policy Studies

Big Data for Advancing Dementia Research: An Evaluation of Data Sharing Practices in Research on Age-related Neurodegenerative Diseases OECD Digital Economy Papers, No. 246

Dementia Research and Care: Can Big Data Help? Edited by G. Anderson and J. Oderkirk

Unleashing the Power of Big Data for Alzheimer’s Disease and Dementia Research: Main Points of the OECD Expert Consultation on Unlocking Global Collaboration to Accelerate Innovation for Alzheimer’s Disease and Dementia, OECD Digital Economy Papers, No. 233

The road to better data

Today’s post is by Johannes Jütting, Manager of the Partnership in Statistics for Development in the 21st Century (PARIS21), which promotes the better use and production of statistics in developing countries. PARIS21’s new report, A Road Map for a Country-led Data Revolution, sets out a detailed programme to ensure developing countries can monitor the Sustainable Development Goals and benefit from technological and other innovations in data collection and dissemination.

Tradition tells us that more than 3,000 years ago, Moses went to the top of Mount Sinai and came back down with 10 commandments. When the world’s presidents and prime ministers go to the top of the Sustainable Development Goals (SDGs) mountain in New York late this summer they will come down with not 10 commandments but 169. Too many?

Some people certainly think so. “Stupid development goals,” The Economist said recently. It argued that the 17 SDGs and roughly 169 targets should “honour Moses and be pruned to ten goals”. Others disagree. In a report for the Overseas Development Institute, May Miller-Dawkins warned of the dangers of letting practicality “blunt ambition”. She backed SDGs with “high ambition”.

The debate over the “right” number of goals and targets is interesting, important even. But it misses a key point: No matter how many goals and targets are finally agreed, if we can’t measure their real impact on people’s lives, on our societies and on the environment, then they risk becoming irrelevant.

Unfortunately, we already know that many developing countries have problems compiling even basic social and economic statistics, never mind the complex web of data that will be needed to monitor the SDGs. A few examples: In 2013, about 35% of all live births were not officially registered worldwide, rising to two-thirds in developing countries. In Africa, just seven countries have data on their total number of landholders and women landholders, and none have data from before 2004. Last but not least, fast-changing economies and the associated measurement challenges mean we are not sure today whether a billion people worldwide are living in extreme poverty, half a billion, or more than a billion.

Why does this matter? Without adequate data, we cannot identify the problems that planning and policymaking need to address. We also cannot judge if governments and others are meeting their commitments. As a report from the Centre for Global Development notes, “Data […] serve as a ‘currency’ for accountability among and within governments, citizens, and civil society at large, and they can be used to hold development agencies accountable.”

So data matters. Despite this, blank spaces persist in the statistics of many developing countries. And they persist even at a time when the world is experiencing a “data revolution” – a rising deluge of data matched by ever-increasing demand for data.

Despite the challenges, we are optimistic that all countries, including the poorer ones, can make quick, dramatic progress in meeting their data challenges. Firstly, there is not only a growing awareness of the problems countries are facing but also a growing willingness to do something about it. Statistical offices in almost 40 developing countries have signed up to our Data Declaration, in which they state that “the time is now to bring the data revolution to everyone, everywhere”.

Second, new technologies are already helping to revolutionise the world of data. PARIS21’s Innovations Inventory has compiled hundreds of ways in which technology is making it easier and less costly to collect data and providing new sources of data, like “big data”. Examples abound, from NGO to private sector initiatives. As part of its Data for Development (D4D) challenge, Orange Senegal opened up its mobile-phone call-log data for researchers to generate insights into health, transportation, demographics, income inequality, and more. Another truly “Big Idea” comes from Restless Development, a youth-led development agency that equips young people with knowledge, skills, and platforms necessary to effectively interpret data in order to mobilise citizens to take action.
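To give a flavour of the kind of analysis such call-log data enables, the sketch below estimates mobility flows by counting how often consecutive calls from the same subscriber are handled by antennas in different areas. The records are invented, and the real D4D datasets were anonymised and aggregated before release.

```python
from collections import Counter

# Invented, already-anonymised call records: (subscriber_id, time, antenna_area).
calls = [
    ("a1", "08:10", "Dakar"), ("a1", "09:05", "Thies"), ("a1", "18:30", "Dakar"),
    ("b2", "07:55", "Thies"), ("b2", "12:40", "Thies"), ("b2", "19:10", "Dakar"),
]

def origin_destination_flows(records):
    """Count moves between areas implied by consecutive calls per subscriber."""
    areas_by_subscriber = {}
    for subscriber, _, area in sorted(records):  # sort by subscriber, then time
        areas_by_subscriber.setdefault(subscriber, []).append(area)
    flows = Counter()
    for areas in areas_by_subscriber.values():
        for origin, destination in zip(areas, areas[1:]):
            if origin != destination:
                flows[(origin, destination)] += 1
    return flows

print(origin_destination_flows(calls))
# Counter({('Thies', 'Dakar'): 2, ('Dakar', 'Thies'): 1})
```

Aggregated flows like these are what feed estimates of commuting patterns, disease spread or access to services, without anyone needing to inspect an individual’s records.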

Third, we are optimistic because we want to build on what is already there – existing national statistical systems. Clearly, many are far from ready to join the data revolution; a colleague recalls visiting one national statistical office that couldn’t pay its power bill and had to negotiate with a neighbour to string an extension cord from his home to the office. That may be an extreme example, but other challenges – including technology gaps, shortages of trained staff, weak data dissemination and poor relations with users – are all too common. Nevertheless, national statistical agencies are the only entities with the expertise and legal frameworks to play the lead role in collecting, processing and disseminating data. It is on them that the data revolution for sustainable development must be built.

Of course, our Road Map for a Country-led Data Revolution is only a start. Much else needs to happen. This includes designing pilot projects, finding better ways to integrate new sources of data into existing national systems and – unsurprisingly – finding extra funding. But here again we are optimistic. We don’t accept that the cost of monitoring the SDGs will be “crippling”. With our colleagues in the UN Sustainable Development Solutions Network, we have calculated that additional donor funding of $200 million a year, matched by a similar rise in domestic funding, would enable the 77 IDA countries (those eligible for support from the World Bank’s fund for the poorest) to successfully monitor their progress towards the SDGs – yes, even the proposed 17 goals and 169 targets!
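The arithmetic behind that claim is simple enough to check, assuming the domestic rise matches the donor contribution dollar for dollar and the total were spread evenly:

```python
donor_funding = 200e6        # additional donor funding per year (USD)
domestic_funding = 200e6     # assumed matching rise in domestic funding (USD)
ida_countries = 77

per_country = (donor_funding + domestic_funding) / ida_countries
print(f"roughly ${per_country / 1e6:.1f} million per country per year")
# roughly $5.2 million per country per year
```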

We don’t yet know if that will turn out to be the final number of SDG “commandments”. But here’s something we do know – developed and developing countries are on the cusp of a huge and dramatic change in how they collect and disseminate data. True, unlike Moses, we don’t live in a time of miracles. But with the aid of a clear road map, strong political will and “miraculous” technologies, we really are much closer to the promised land of better data than we realise.

Useful links

Informing a Data Revolution – PARIS21

Watch the launch of A Road Map for a Country-led Data Revolution at the Cartagena Data Festival on Monday 20 April from 1700 hours UTC (noon in Cartagena, 1pm in New York, 6pm in London, 7pm in Paris, 2am in Tokyo).

Smart networks: coming soon to a home near you

The Webcup 1.0 tells your Internet fridge if the milk’s too cold

Today’s post is from Rudolf Van der Berg of the OECD’s Science, Technology and Industry Directorate.

In 2017 a household with two teenagers will have 25 Internet-connected devices. In 2022 this will rise to 50, compared with only 10 today. In OECD households alone there will be 14 billion connected devices, up from 1.7 billion today, and this doesn’t take into account anything outside the household or outside the OECD. All this leads to the smart world discussed in a new OECD publication, Building Blocks of Smart Networks.

The OECD defines “smart” as: “An application or service able to learn from previous situations and to communicate the results of these situations to other devices and users. These devices and users can then change their behavior to best fit the situation. This means that information about situations needs to be generated, transmitted, processed, correlated, interpreted, adapted, displayed in a meaningful manner and acted upon.”

Smart networks are the result of three trends coming together (and all being studied by the OECD). Machine to Machine communication means devices connected to the Internet (also known as the Internet of Things). This generates “Big Data” because all those devices will communicate and that data will be processed, stored and analyzed. And to enable the analysis, Cloud Computing will be necessary, because when entire business sectors go from no connectivity to full connectivity within a few years, they will need scalable computing that can accommodate double digit growth. Underlying these trends is the pervasive access to Internet connectivity.

Table: smart network devices

New Internet-connected devices may be invented, but the table only lists everyday objects you may already have; if you replace them in the coming years, the new versions will be connected. (The ever-popular, but never seen in a shop near you, Internet-connected fridge doesn’t make the list.) Connected lightbulbs may well be the Trojan horse of the smart home. Some companies estimate that connected lightbulbs will be the same price as normal lightbulbs five years from now. These lights will be able to dim and change color and fit in a regular socket. They can also serve as hubs, extending the communication network in the home to all devices.

Connecting machines and devices to telecommunications networks is nothing new. Even at the dawn of the Internet there were Internet connected coffee pots and coke-machines. It is the scale of the trend that forces us to pay more attention. Dutch company TomTom now has millions of GPS-navigation devices on the road, which have generated 5000 trillion data points. When systems need to be smart, the number of datapoints goes up. A dumb electricity meter can do with one reading per year. A smart meter needs a reading every 15 minutes for the electricity company, while for home automation a sampling frequency of once every 1 to 5 seconds is proposed, which could be a 31 million times increase over traditional datasets.
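That last figure is easy to verify, taking one reading per second as the high end of the proposed 1-to-5-second sampling range:

```python
seconds_per_year = 365 * 24 * 60 * 60        # 31,536,000
readings_dumb_meter = 1                      # one reading per year
readings_smart_meter = 365 * 24 * 4          # every 15 minutes: 35,040 per year
readings_home_automation = seconds_per_year  # one reading per second

print(readings_smart_meter / readings_dumb_meter)      # ~35 thousand times more
print(readings_home_automation / readings_dumb_meter)  # ~31.5 million times more
```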

There are, however, challenges that need to be faced when introducing smart systems.

Human challenges. The way people interact with networks and systems may limit their use. For eHealth, smart systems can allow people to lead a normal life. However, a portable heart monitor that sends alarms every time it loses the signal or registers a false positive can have the opposite effect. Users’ privacy and security concerns have prompted the Dutch parliament, for example, to change the rules for smart meters.

Lifecycle challenges. A car should last for 15 years. A mobile phone works for 2-4 years. Mobile phone networks move to new protocols every 15 years. Energy networks have a 15-50 year lifecycle. When a technology is introduced in a vehicle today, the first cars with that technology may reach the end of their lifecycle in 2028, the last ones in 2038. What’s more, if the lifecycles of two distinct sectors meet, the effect can be even more pronounced. Think of the charge point for electric vehicles. It may have to function for 30 years or more, meaning that all vehicles in the coming 30 years will have to be compatible and that the infrastructure needs to be active for another 15 years. Today’s choices for smart systems will be long-term decisions.

Business challenges. A previous OECD report concluded that users of M2M systems that rely on mobile technology are locked in to their mobile networks. They can’t change networks, and when the devices go across borders they remain locked in with their original operators. And according to Norwegian research, as many as 30% of devices can be offline for 10 minutes per day. To solve these problems the OECD advises governments to change their numbering policies, so that large-scale M2M users can become independent of mobile operators and use multiple networks at the same time.

Another business challenge is that it is unclear who has the lead in the smart networks sector. For smart metering, energy companies, meter manufacturers, ICT-companies and telecom companies have all said they will lead.

Regulatory challenges. Governments will be confronted with difficult policy issues, notably concerning privacy and security. A recent review of industrial control systems from five major manufacturers showed that all five could be hacked, sometimes very easily. If companies that supply multi-million dollar systems cannot get essential elements of security right, then how can you trust systems bought in a DIY store? Would it be possible for a hacker to turn up the air conditioning or heating in a million homes to bring down the electricity grid?

Other questions governments face concern access to data. Who owns the data: the company or the consumer? If a government collects a dataset, can it share that data for other uses?

Useful links

OECD work on the Internet economy

OECD work on information and communications policy

OECD work on smart sensors

OECD work on smart grids