Clara Young, OECD Public Affairs and Communications Directorate
Listening to the radio this morning, I heard a story about a former FBI agent who had come out of retirement to reopen a very old case: who tipped off the Gestapo to Anne Frank’s whereabouts? There have been two investigations into the circumstances leading up to the arrest of the young diarist and her family on 4 August 1944 in Amsterdam, but this newest attempt is using artificial intelligence (AI). “The artificial intelligence programme will be able to make connections and associations of dates, persons and locations that would take a human investigator a minimum of 10 years to come up with,” lead investigator Vince Pankoke told the Canadian Broadcasting Corporation (CBC).
Artificial intelligence can solve the most intractable of puzzles. But with it come many new, possibly more intractable, questions. At a recent OECD conference, “AI: Intelligent Machines, Smart Policies”, researchers, economists, policymakers, advisors, and labour and corporate representatives came to grips with the vastly different landscape AI is beginning to create. With their algorithmic ability to navigate through the noise of big data, machine-learning AI robots are commonplace in biotech labs. They formulate scientific hypotheses, devise and conduct experiments, and analyse test results, probing deeply and around the clock. AI can pilot vehicles, determine what your car insurance premium should be, detect malicious cyberactivity, improve medical diagnoses through image recognition in radiography and ultrasonography, and even compose music.
But will such tremendous computational and learning capacities upend human society? Stuart W. Elliott, Director of the Board on Testing and Assessment at the US National Academy of Sciences, observes that AI currently has literacy and numeracy levels that are as good as, if not better than, those of 89% of adults in OECD countries. What implications does that have for competition in the labour market? How can policymakers and legislators plan for the magnitude of labour disruption automation will bring?
Another conference takeaway is the need for transparency in AI decision-making. When software is deciding whether, for example, a driverless car should swerve away from an oncoming bicyclist and hit a pedestrian on the sidewalk, or whether a job applicant should be hired or rejected, people should be able to examine the chain of reasoning leading up to an AI decision. There is also the concern that the algorithms in AI software absorb and can amplify the biases implicit in their training data. For instance, Science reported that tests have shown that machine-learning software absorbs societal racial biases in data, making stereotyped associations between European American names and positive or pleasant terms, and African-American names and negative or unpleasant terms. A related study found that job applicants with European American names were 50% more likely to be offered an interview.
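Bias tests of this kind typically measure how much closer a word's learned vector representation sits to one set of attribute words (e.g. "pleasant") than to another (e.g. "unpleasant"). The following is a minimal sketch of that idea using invented toy vectors; the word list, the vectors and the `association` helper are illustrative assumptions, not the actual test from the Science paper, which operates on embeddings trained on large text corpora.

```python
import math

# Toy 3-dimensional "embeddings", invented purely for illustration.
# Real bias tests use vectors learned from billions of words of text.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors: 1 = same direction, 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def association(word, attr_a, attr_b):
    """How much closer `word` sits to attribute A than to attribute B.

    A positive score means the word leans towards attr_a; negative, towards attr_b.
    """
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# With these toy vectors, "flower" scores positive (leans pleasant)
# and "insect" scores negative (leans unpleasant).
print(association("flower", "pleasant", "unpleasant"))
print(association("insect", "pleasant", "unpleasant"))
```

If the training corpus systematically places one group of names nearer to unpleasant terms, the same arithmetic surfaces that bias, which is why such associations carry over into downstream software trained on the data.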
But perhaps the biggest preoccupation at the conference is the data conundrum. In a forthcoming OECD interview, Dudu Mimran, CTO of Telekom Innovation Laboratories and the Cyber Security Research Center at Ben-Gurion University in Israel, described the current data environment as the “…Wild West with all companies collecting any data”. Data is used to train artificial intelligence, and the more of it the better. But do we always know where it is coming from? And who owns it? Digital advisor to the Estonian government Marten Kaevats stood up during a panel discussion and said, “The people own their own data.” In embracing digitalised government so early on, Estonia may be considered a leader on data issues. Its citizens’ health and tax records are online, protected by a closed blockchain system. Online voting was introduced in 2005. But outside such digitally advanced regimes, most people do not know where their personal data resides, how it is being used, and whether its integrity is being safeguarded. One example of data carelessness is the discovery in 2016 that the UK’s National Health Service had given Google-owned AI company DeepMind access to the healthcare data of 1.6 million patients without adequately informing them.
Safeguards exist against such errors. These include the 1980 OECD Privacy Guidelines, revised in 2013; the EU’s General Data Protection Regulation, which comes into effect in 2018; and the 2016 signing of the US-EU data protection “Umbrella Agreement”, which governs data-sharing in criminal investigations. But AI raises potentially new and specific privacy risks that may not be covered by these data protection regulations and agreements.
In the case of Anne Frank, the data surrounding her and her family’s capture is 73 years old. Privacy is no longer an issue. For the rest of us, however, the ever-broadening and creative reach of data mining requires vigilance.
References and links
Bohannon, John (2017), “A new breed of scientist, with brains of silicon”, Science. See: www.sciencemag.org/news/2017/07/new-breed-scientist-brains-silicon.
Caliskan, Aylin, Joanna J. Bryson and Arvind Narayanan (2017), “Semantics derived automatically from language corpora contain human-like biases”, Science, Vol. 356, Issue 6334. See: http://science.sciencemag.org/content/356/6334/183.
Elliott, Stuart W. (2017), “Artificial intelligence and the future of work and skills: will this time be different?”. See: https://www.oecd-forum.org/channels/722-digitalisation/posts/21601-artificial-intelligence-and-the-future-of-work-and-skills-will-this-time-be-different.
European Commission (2016), “Signing of the ‘Umbrella’ Agreement: A major step forward in EU-U.S. relations”, Brussels. See: http://ec.europa.eu/justice/newsroom/data-protection/news/160602_en.htm.
Hodson, Hal (2016), “Revealed: Google AI has access to huge haul of NHS patient data”, New Scientist. See: https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data.