“AI: Intelligent Machines, Smart Policies”: three conference takeaways

Clara Young, OECD Public Affairs and Communications Directorate

Listening to the radio this morning, I heard a story about a former FBI agent who had come out of retirement to reopen a very old case: who tipped off the Gestapo to Anne Frank’s whereabouts? There have been two investigations into the circumstances leading up to the arrest of the young diarist and her family on 4 August 1944 in Amsterdam, but this newest attempt is using artificial intelligence (AI). “The artificial intelligence programme will be able to make connections and associations of dates, persons and locations that would take a human investigator a minimum of 10 years to come up with,” lead investigator Vince Pankoke told the Canadian Broadcasting Corporation (CBC).

Artificial intelligence can solve the most intractable of puzzles. But with it come many new, possibly more intractable, questions. At a recent OECD conference “AI: Intelligent Machines, Smart Policies”, researchers, economists, policymakers, advisors, and labour and corporate representatives came to grips with the vastly different landscape AI is beginning to create. With their algorithmic ability to navigate through the noise of big data, machine-learning AI robots are commonplace in biotech labs. They formulate scientific hypotheses, devise and conduct experiments, and analyse test results, probing deeply and around the clock. AI can pilot vehicles, determine what your car insurance premium should be, detect malicious cyberactivity, improve medical diagnoses through image recognition in radiography and ultrasonography, and even compose music.

But will such tremendous computational and learning capacities upend human society? Stuart W. Elliott, Director of the Board on Testing and Assessment at the US National Academy of Sciences, observes that AI’s current literacy and numeracy capabilities are as good as, if not better than, those of 89% of adults in OECD countries. What implications does that have for competition in the labour market? How can policy makers and legislators plan for the magnitude of labour disruption automation will bring?

Another conference takeaway is the need for transparency in AI decision-making. When software is making decisions on whether, for example, a driverless car should swerve away from an oncoming bicyclist and hit a pedestrian on the sidewalk, or whether a job applicant should be hired or rejected, people should be able to look at the chain of reasoning leading up to an AI decision. There is also the concern that AI software absorbs and reproduces the biases implicit in its data. For instance, Science reported that tests have shown that machine-learning software absorbs societal racial biases in data, making stereotyped associations between European-American names and positive or pleasant terms, and African-American names and negative or unpleasant terms. A related study cited in the same paper found that job applicants with European-American names were 50% more likely to be offered an interview.
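To make the mechanism concrete, here is a minimal sketch of a word-embedding association test in the spirit of the one reported in Science. The tiny hand-made vectors and word lists are hypothetical stand-ins for real pretrained embeddings, so the output only illustrates the idea, not the study’s actual results.

```python
# Illustrative sketch of a word-embedding association test (WEAT-style).
# The vectors below are made-up 3-dimensional toys, not real embeddings:
# in practice one would load pretrained vectors (e.g. GloVe or word2vec).
import math

embeddings = {
    # target names (hypothetical vectors)
    "emily":   [0.9, 0.1, 0.2],
    "jamal":   [0.1, 0.9, 0.2],
    # attribute words (hypothetical vectors)
    "joy":     [0.8, 0.2, 0.1],
    "love":    [0.9, 0.2, 0.2],
    "agony":   [0.1, 0.8, 0.1],
    "failure": [0.2, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant terms minus mean similarity to unpleasant terms."""
    w = embeddings[word]
    pos = sum(cosine(w, embeddings[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(w, embeddings[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

pleasant, unpleasant = ["joy", "love"], ["agony", "failure"]
for name in ["emily", "jamal"]:
    print(name, round(association(name, pleasant, unpleasant), 3))
# A systematic gap between the two scores is the kind of stereotyped
# association the Science study measured across much larger word lists.
```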

But perhaps the biggest preoccupation at the conference is the data conundrum. In a forthcoming OECD interview, Dudu Mimran, CTO of Telekom Innovation Laboratories and the Cyber Security Research Center at Ben-Gurion University in Israel, described the current data environment as the “…Wild West with all companies collecting any data”. Data is used to train artificial intelligence, and the more of it the better. But do we always know where it is coming from? And who owns it? Digital advisor to the Estonian government Marten Kaevats stood up during a panel discussion and said, “The people own their own data.” In embracing digitalised government so early on, Estonia may be considered a leader on data issues. Its citizens’ health and tax records are online, protected by a closed blockchain system. Online voting was introduced in 2005. But outside such digitally advanced regimes, most people do not know where their personal data resides, how it is being used, and whether its integrity is being safeguarded. One example of data carelessness is the discovery in 2016 that the UK’s National Health Service had given Google-owned AI company DeepMind access to the healthcare data of 1.6 million patients without adequately informing them.

Safeguards exist against such errors. These include the 1980 OECD Privacy Guidelines, revised in 2013, the EU’s General Data Protection Regulation, which comes into effect in 2018, and the 2016 signing of the US-EU data protection “Umbrella Agreement”, which governs data-sharing in criminal investigations. But AI raises potentially new and specific privacy risks that may not be covered by these data protection regulations and agreements.

In the case of Anne Frank, the data surrounding her and her family’s capture is 73 years old. Privacy is no longer an issue. For the rest of us, however, the ever-broadening and creative reach of data mining requires vigilance.

References and links

Bohannon, John (2017), “A new breed of scientist, with brains of silicon”, Science. See: www.sciencemag.org/news/2017/07/new-breed-scientist-brains-silicon.

Hodson, Hal (2016), “Revealed: Google AI has access to huge haul of NHS patient data”, New Scientist. See: https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data.

Elliott, Stuart W. (2017), “Artificial intelligence and the future of work and skills: will this time be different?”. See: https://www.oecd-forum.org/channels/722-digitalisation/posts/21601-artificial-intelligence-and-the-future-of-work-and-skills-will-this-time-be-different.

European Commission (2016), “Signing of the ‘Umbrella’ Agreement: A major step forward in EU-U.S. relations”, Brussels. See: http://ec.europa.eu/justice/newsroom/data-protection/news/160602_en.htm.

Caliskan, Aylin, Joanna J. Bryson and Arvind Narayanan (2017), “Semantics derived automatically from language corpora contain human-like biases”, Science, Vol. 356, Issue 6334, 14 April 2017. See: http://science.sciencemag.org/content/356/6334/183.

There’s an algorithm for that. Or there soon will be

Marina Bradbury, OECD Public Affairs and Communications Directorate and one of the organisers of this year’s OECD Forum

Would you like a machine to decide on your medical treatment, whether you could insure your house, if you should be hired, or what news stories you read? It may be happening to you already. Every time you go online to make a purchase, search for a restaurant, access your bank account or simply interact with your mobile device, you are creating a digital trail of data that is being tracked and stored. This “big data” is fodder for machine learning algorithms that will, for example, suggest what to buy.

Traditionally in computer science, algorithms are a set of rules written by programmers. Machine learning algorithms are different: they can improve the software in which they are embedded without human intervention. The more data they receive, the better their ability to “understand” and predict patterns, including patterns in human behaviour. They are another step along the road to creating artificial intelligence (AI), even if we don’t know where this road is leading. As Stephen Hawking and his colleagues, writing in The Independent, claimed, “Success in creating AI would be the biggest event in human history”, before going on to say, “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
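The distinction is easy to see in a few lines of code. The sketch below contrasts a rule fixed in advance by a programmer with a rule whose parameter is fitted to labelled examples; the spam-filtering scenario, the data and the thresholds are purely hypothetical.

```python
# A hand-written rule versus a rule whose parameter is learned from data.
# The "spam score" scenario and all numbers are hypothetical illustrations.

# 1. Traditional approach: a programmer fixes the rule in advance.
def is_spam_rule(num_suspicious_words):
    return num_suspicious_words > 3   # threshold chosen by a human

# 2. Machine-learning approach: the threshold is fitted to labelled examples,
#    and can be refitted as more examples arrive.
examples = [(0, False), (1, False), (2, False), (5, True), (6, True), (8, True)]

def learn_threshold(data):
    """Pick the threshold that classifies the labelled examples best."""
    best_t, best_correct = 0, -1
    for t in range(0, 11):
        correct = sum((x > t) == label for x, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = learn_threshold(examples)

def is_spam_learned(num_suspicious_words):
    return num_suspicious_words > threshold

print("hand-written rule:", is_spam_rule(4))
print("learned threshold:", threshold, "->", is_spam_learned(4))
# Feeding in more labelled examples and re-running learn_threshold is what
# "improving without human intervention" amounts to in this toy setting.
```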

We are living in an algorithmic society, and many argue that this is a positive thing. On an economic level, machine learning algorithms could help stimulate innovation and productivity growth. According to OECD research, big data used to feed machine learning algorithms can boost industries including advertising, health care, utilities, logistics, transport and public administration. When it comes to our day-to-day lives, algorithms can save us time and effort, for example through online search tools, Internet shopping and smartphone apps that leverage “beacon” technology to provide timely recommendations based upon our whereabouts. Computer scientist Pedro Domingos even predicts that in five years’ time, digital personal assistants will be more important than smartphones, given their capacity to aggregate information from various apps to predict our needs before we even know them.

However, the large-scale use of algorithms can also be threatening to us as citizens. For example, if algorithms allow companies to predict our purchases before we even make them, what implications does this have for our personal choices and privacy? Critics point towards the dangers of allowing companies to exploit vast amounts of personal data and restrict individual liberties.

Take the realm of insurance, loans and legal advice. Nowadays, our credit rating or health insurance record is often assessed by a machine, not a person, whilst virtual legal assistants are becoming increasingly common. On the one hand, this can be advantageous to companies, enabling higher levels of efficiency, and in turn more accessible prices. The legal industry is undergoing a veritable transformation thanks to algorithmic technology, with quantitative legal prediction (QLP) being a prime example. Making information-based predictions is at the heart of the legal profession. In addition, legal cases often require the analysis of large-scale data or document sets, which can pose a challenge to the cognitive limitations of humans. Since algorithms are able to make predictions based on “big data” with increasing accuracy, QLP is arguably set to play an increasing role.

On the other hand, when it comes to ordinary customers looking for legal support or a loan, automated systems may not be helpful. Critics warn that even if an algorithm is designed to be neutral, bias can creep in. This can be due to the unconscious bias of computer programmers. With machine learning algorithms, bias can also enter through the data that feed them. Even if the algorithms absorb this data in a completely rational way, they will still reproduce forms of discrimination that already exist in society. For example, if you are looking for a bank loan, you might be offered a higher or lower rate depending on your postal address, name, age or gender.
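A toy example, using invented postcodes and lending records, shows the mechanism: if past decisions were skewed against one postcode, a scoring rule fitted to those decisions reproduces the skew even though it applies its rule consistently.

```python
# Toy illustration: a pricing rule fitted to biased historical decisions
# reproduces the bias. Postcodes and approval records are invented.

# Historical records: (postcode, past decision), skewed against postcode "B".
history = [
    ("A", "approved"), ("A", "approved"), ("A", "approved"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "approved"),
]

def approval_rate(records, postcode):
    """Share of past applications from this postcode that were approved."""
    relevant = [decision for p, decision in records if p == postcode]
    return sum(d == "approved" for d in relevant) / len(relevant)

def offer_rate(postcode, base_rate=0.04):
    """Hypothetical pricing rule: the lower the historical approval rate
    for a postcode, the higher the interest rate offered."""
    return base_rate + 0.05 * (1 - approval_rate(history, postcode))

for postcode in ("A", "B"):
    print(postcode, f"{offer_rate(postcode):.3f}")
# Applicants from postcode "B" are quoted a higher rate purely because the
# historical data were skewed, not because of anything about the applicant.
```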

In the same way, whilst “talent analytics” is being used in HR to help build fairer recruitment practices, these new technologies do not offer a quick fix. For example, studies have found that women or people with “foreign”-sounding names receive different kinds of job advertisements than white males. Nevertheless, global companies such as Google and McKinsey are already developing “talent algorithms” to recruit the best staff and assess performance. Moreover, some argue that companies that fail to move in this new direction may lose out later on. Overall, it seems that algorithms could have a positive impact on the future of recruitment, but only when used judiciously as part of a wider process towards inclusiveness.

The healthcare industry is another key area in which the paradigm of the algorithmic society is played out. For example, a recent study in the US revealed how machine learning can offer a faster and less resource-intensive method of detecting cancer, with machines automatically extracting crucial meaning from plain-text reports. Arguably, if machines can be used to review and analyse data, this frees up humans’ time to provide better clinical care. However, the ethical sensitivities of using algorithms to make critical health decisions must be addressed when developing such innovative models.

Trading algorithms are transforming the financial world as we know it. Algorithmic trading has given rise to companies such as Quantopian, which invites “talented people everywhere” to create their own algorithms for free and pays the authors of those that perform best, and Rizm, which lets those new to trading test and even trade using their own algorithms. However, the field is not without dangers: just one typo could lead to significant financial losses in a short amount of time. The ethics of algorithmic trading are also questioned by critics. With computer-driven or “quantitative” hedge funds enjoying success despite volatile markets, their business models will not escape scrutiny as algorithms continue to permeate our economic systems.

Finally, algorithms that drive search engines can influence the information we receive, impacting upon our outlook on the world and even our well-being. Take the phenomenon of “filter bubbles”. This relates to the way algorithm-based search tools are likely to show us information based upon our past behaviour, meaning it is unlikely to challenge our existing views or spark serendipitous connections. More worrying still, Facebook conducted an experiment in 2014 to test the reaction of users to negative or positive content. The results revealed that those shown more negative content posted more negative comments, and vice versa. However, the way the experiment was conducted was criticised for its lack of transparency.
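A minimal sketch of how such a bubble can form, assuming a feed that simply ranks items by similarity to what a user clicked before: it keeps serving more of the same. The articles and topic vectors are invented for illustration.

```python
# Minimal illustration of a "filter bubble": ranking by similarity to past
# clicks keeps surfacing similar content. Topics and articles are made up.

# Each article is described by (title, topic weights for [politics, sport, science]).
articles = [
    ("Election special",    [0.9, 0.0, 0.1]),
    ("Cup final report",    [0.0, 1.0, 0.0]),
    ("Budget analysis",     [0.8, 0.1, 0.1]),
    ("New exoplanet found", [0.1, 0.0, 0.9]),
]

# The user's profile is the average of the articles they clicked previously
# (here: two politics stories).
clicked = [[0.9, 0.0, 0.1], [0.7, 0.2, 0.1]]
profile = [sum(col) / len(clicked) for col in zip(*clicked)]

def score(article_vec, profile_vec):
    """Simple dot-product relevance score."""
    return sum(a * p for a, p in zip(article_vec, profile_vec))

ranking = sorted(articles, key=lambda a: score(a[1], profile), reverse=True)
for title, vec in ranking:
    print(f"{score(vec, profile):.2f}  {title}")
# Politics stories dominate the top of the feed, so the user sees ever more
# of what they already read -- the feedback loop behind a filter bubble.
```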

The paradigm of the algorithmic society is very much bound up in the unknown. In many ways, this is exciting, capturing how data is becoming the raw material of our era, a source of many possibilities for innovation and even the means to address social problems. Yet it can also be a threat. As Pedro Domingos puts it, “You can’t control what you don’t understand, and that’s why you need to understand machine learning”. The challenge will be to ensure that we live in a society which reaps the benefits that algorithms can bring, whilst ensuring that their implications are understood by all.

Useful links

OECD Policy Brief on the future of work: Automation and independent work in a digital economy

The 2016 OECD Forum, on 31 May – 1 June, is entitled “Productive economies, Inclusive societies”. The Forum is organised around the three cross-cutting themes of OECD Week: inclusive growth and productivity, innovation and the digital economy, and international collaboration for implementing international agreements and standards. Register now, it’s free!

Intuition and ingenuity: Alan Turing’s work and impact

Don’t blame Alan Turing

Legend has it that Apple’s rainbow-coloured logo showing the apple with a bite out of it is in homage to Alan Turing, “the father of modern computing”. Turing died of cyanide poisoning on 7 June 1954, two years after being convicted of gross indecency and accepting chemical castration instead of prison. A half-eaten apple was found next to him, and one theory is that he’d laced it with cyanide, his own homage to the wicked queen in Snow White, his favourite Disney cartoon. Another theory is that he died accidentally after inhaling cyanide fumes from apparatus he had in his bedroom for electroplating spoons. A third explanation is that he really did commit suicide, but set up the apparatus so his mother would think it was an accident. The coroner didn’t test the apple for cyanide, so we’ll never know for sure.

If there are doubts about Turing’s death, his life is fairly well-known, or at least some aspects of it. His most noteworthy exploit for the general public was helping to break the code of the Enigma machines the Germans used to communicate with their submarines during the Second World War. If you’d like to get some idea of how he did it, take a look at the excerpts from the “Enigma Paper” in Alan Turing, His Work and Impact, just published by Elsevier. Cryptography is the second of four parts of this thousand-page overview presenting Turing’s most significant works from the four-volume Collected Works along with comment, analysis and anecdote from leading scholars. The other three parts are on Turing’s contributions to computability, artificial intelligence, and biology.

That simple naming of the parts already gives you some idea of the breadth of Turing’s influence, and we could also add economics. I actually got the Elsevier book thanks to Professor K. Vela Velupillai who wrote for us about Turing’s economics here.  That article described the foundations of computable economics, while here at the OECD the project on new approaches to economic challenges was being launched. “New Approaches” revisits some of the fundamental assumptions about the functioning of the economy, and the implications for policy. It also addresses how to extend the capabilities of existing tools for structural analysis and analysing trends over the long term to factor in key linkages and feedback – for example between growth, inequality, and the environment.

Vela cites Turing’s Precept, an idea that should be kept in mind by economic theorists, analysts and policy makers everywhere: “the inadequacy of reason unsupported by common sense”. There’s a corollary to that in how you present the reasoning, best summed up by the German mathematician David Hilbert at the 1900 International Congress of Mathematicians in Paris. Presenting a paper on 23 unsolved problems that would help set the research agenda for mathematics in the new century, Hilbert quoted an old French mathematician as saying: “A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street”. And Turing himself claimed that “No mathematical method can be useful for any problem if it involves much calculation.”

Gregory Chaitin recalls Hilbert’s remark when he presents Turing’s “Solvable and unsolvable problems”, which ends with Turing’s Precept. In this “lovely paper” Turing explains the notion of computability and proves the unsolvability of a decision problem without using any mathematical formalism. His “models” are two puzzles that were popular at the time: a picture made up of a number of movable squares set in a frame, with one square missing so you can slide the squares around to form the image; and two pieces of intertwined wire that you have to separate without bending or breaking them.
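For the sliding-square puzzle there is in fact a well-known decision procedure: a simple parity argument settles whether a given arrangement can be solved. The sketch below applies it to a 4×4 board; it is offered as an illustration of what a decision procedure looks like in this setting, not as Turing’s own argument.

```python
# Decision procedure for the 15-puzzle (4x4 sliding squares): a standard
# parity argument tells us whether a given arrangement can be solved.

def is_solvable(tiles, width=4):
    """tiles: flat list of 0..15 read row by row, 0 being the empty square.
    For an even-width board, an arrangement is solvable exactly when the
    number of inversions plus the empty square's row (counted from the
    bottom, starting at 1) is odd."""
    flat = [t for t in tiles if t != 0]
    inversions = sum(
        1
        for i in range(len(flat))
        for j in range(i + 1, len(flat))
        if flat[i] > flat[j]
    )
    blank_row_from_bottom = width - tiles.index(0) // width
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]                      # the target arrangement
swapped = solved[:]
swapped[13], swapped[14] = swapped[14], swapped[13]    # tiles 14 and 15 exchanged

print(is_solvable(solved))    # True
print(is_solvable(swapped))   # False: Sam Loyd's famously unsolvable 14-15 puzzle
```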

That said, much of what’s presented is for specialists and you’d need a good grounding in mathematics to follow it. But there’s still plenty even for a non-mathematician like me, some of it surprisingly moving, for example when Bernard Richards describes how he presented his and Turing’s work on morphogenesis to Turing’s mother shortly after his death. Some of it is intriguing – why does the UK government still refuse to declassify the two 1946 papers “Report on the applications of probability to cryptography” and “Paper on statistics of repetitions”? But no matter how well you know the life and work of Turing, you’ll learn something from this book.

By the way, that Apple story at the beginning is only a legend. Rob Janoff, who designed the logo, explained that he was asked to come up with something simpler than the (hideous) picture of Newton sitting under an apple tree that was the company’s first logo, and the bite was just to show that it was an apple, not a cherry or a tomato. The gay-friendly rainbow was to advertise the colour graphics capabilities of Apple’s computers. On the British TV show QI XL, Stephen Fry recalled asking his friend Steve Jobs about the Turing story: “It isn’t true, but God we wish it were!” was Jobs’ reply.

Useful links

Models used in the OECD Economics Department