Responsible Algorithms in Business: Robots, fake news, spyware, self-driving cars and corporate responsibility

Roel Nieuwenkamp, Chair of the OECD Working Party on Responsible Business Conduct (@nieuwenkamp_csr)

Why is the topic of robots frequently being raised at recent conferences on responsible business conduct? For example, in October last year the Polish Deputy Prime Minister noted the connection between robotisation and corporate responsibility while opening the conference in Warsaw celebrating the 40th anniversary of the OECD Guidelines for Multinational Enterprises.

The potential negative impacts of robots and automated systems have proved cause for concern. In May 2010 a ‘flash crash’ briefly wiped around a trillion dollars off stock markets, an event attributed to algorithmic trading, in other words robot investors. And let’s not forget the mathematical models that contributed to the financial crisis of 2007-2008. Recent events surrounding fake news, with Pizzagate as the most extreme example, are adding to these concerns.

What is the common denominator of these automated systems? Algorithms! These step-by-step, rule-based procedures for solving problems are being applied to more and more areas of our daily lives. We are likely only at the beginning of the era of algorithms, and their widespread application is raising many ethical questions for society, and for businesses in particular.

For example, “killer robots”, weapons systems that select and attack targets without meaningful human control, raise questions about the dehumanisation of killing and about who is responsible. In December the United Nations decided to set up an expert group to look into this issue, following the ‘Stop Killer Robots’ campaign by Human Rights Watch and other NGOs. While self-driving cars will never be at risk of driving while intoxicated, they can face decisions that pose moral dilemmas for humans. Online face recognition technology raises concerns around privacy. These are just a few examples.

The pervasive use of algorithms may result in many unintended consequences. In her book ‘Weapons of Math Destruction’ Cathy O’Neil describes how algorithms combined with big data increase inequality and threaten democracy. She provides examples from the financial crisis and the housing market, but also of a college student who does not get a minimum-wage job in a grocery store because of answers given on a personality test, and of people whose credit card spending limits are lowered because they shopped at certain stores. She also discusses predictive policing models, such as those that predict recidivism, and algorithms that send police to patrol areas on the basis of recorded crime data, which can have racially discriminatory effects because of harmful, self-fulfilling feedback loops.
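The feedback loop O’Neil describes can be made concrete with a toy simulation. The numbers and the “hot spot” allocation rule below are invented for illustration, not taken from her book or from any real policing system: two areas with identical underlying crime end up with persistently different recorded crime, because patrols follow past records and records follow patrols.

# A toy simulation (invented numbers) of the self-fulfilling feedback loop in
# patrol-allocation models: officers are sent where the most crime was *recorded*,
# but how much crime gets recorded depends on how many officers patrol there.

TRUE_CRIME = {"north": 100, "south": 100}   # identical underlying crime in both areas
recorded = {"north": 55, "south": 45}       # a small, possibly arbitrary initial gap
DETECTION_PER_OFFICER = 0.05                # each officer records 5% of local crime

for year in range(5):
    # "Hot spot" policy: the area with more recorded crime gets 15 of the 20 officers.
    hot = max(recorded, key=recorded.get)
    cold = "south" if hot == "north" else "north"
    patrols = {hot: 15, cold: 5}
    # More officers in an area means more of the same underlying crime is recorded there.
    for area in recorded:
        recorded[area] = TRUE_CRIME[area] * DETECTION_PER_OFFICER * patrols[area]
    print(f"year {year}: patrols={patrols}, recorded={recorded}")

After the first year the small initial gap hardens into a 75 to 25 split in recorded crime, which the model then reads as confirmation that it was right to concentrate patrols there.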

Scholars and practitioners in this field are beginning to consider the ethical implications of applying algorithms. Julia Bossmann of the Foresight Institute has described her top nine ethical issues in artificial intelligence. Prof Susan Leigh Anderson of the University of Connecticut stated: “If Hollywood has taught us anything, it’s that robots need ethics.” Cathy O’Neil proposes a ‘Hippocratic oath’ for data scientists. Recently a group of scholars developed Principles for Accountable Algorithms. In the private sector, SpaceX CEO Elon Musk and other business leaders have founded OpenAI, an R&D company created to address ethical issues related to artificial intelligence. Amazon, Facebook, DeepMind, IBM and Microsoft founded a new organisation called the Partnership on Artificial Intelligence to Benefit People & Society, which seeks to facilitate dialogue on the nature and purpose of artificial intelligence and its impacts on people and society at large. It is encouraging that such industry efforts are being undertaken in this area. Additionally, one thing should be clear for businesses that create and use these technologies: when things go wrong, using algorithms as a scapegoat won’t do the trick.

What guidance on these issues can be found in the most important instrument on business ethics, the OECD Guidelines for Multinational Enterprises (MNE Guidelines), a multilateral agreement of 46 states on corporate responsibility? Cases brought to National Contact Points, the globally active complaints mechanism of the Guidelines, provide a good illustration of what the Guidelines recommend on these issues. For example, in February 2013 a consortium of NGOs led by Privacy International (PI) submitted a complaint to the UK National Contact Point (NCP) alleging that Gamma International had supplied a spyware product, FinFisher, to agencies of the Bahrain government, which then used it to target pro-democracy activists.

The NCP concluded that Gamma had not acted consistently with the provisions of the OECD Guidelines requiring enterprises to carry out appropriate due diligence, to make a policy commitment to respect human rights and to remediate human rights impacts. Furthermore, the company’s approach did not meet the OECD Guidelines’ standards on respecting human rights, and its engagement with the NCP process was unsatisfactory, particularly in view of the serious nature of the issues. The NCP recommended that the company engage in human rights due diligence.

What is human rights due diligence and what does it mean for companies developing algorithms? Under the Guidelines, due diligence is a process that corporations should carry out as part of a broader range of actions to respect human rights. The right to privacy, freedom of speech and freedom from torture and arbitrary detention are examples of the many human rights that could be impacted. Due diligence is the process of identifying, preventing and mitigating actual and potential adverse human rights impacts, and accounting for how these impacts are addressed. Where there is a risk of severe human rights impacts, a heightened form of due diligence is recommended. For example, significant caution should be taken with regard to the sale and distribution of surveillance technology when the buyer is a government with a poor human rights track record. Due diligence should be applied not only to a company’s own activities but across its business relationships. For a company producing algorithms, it is therefore not sufficient to behave responsibly within its own operations; due diligence should also be applied to ensure that buyers of the technology are not using it irresponsibly. Where they are, the company that created and sold the technology is expected to use its leverage in the value chain to prevent or mitigate the impact.

A number of valuable tools for respecting human rights and implementing the ’know your client’ principle have been developed in the context of ICT business operations. For example, the European Commission has developed a useful guide for companies on respecting human rights in the ICT sector. TechUK, an industry association of ICT companies in the UK, has published, in partnership with the UK government, a guide on how to design and implement appropriate due diligence processes for assessing cyber security export risks. Additionally, the Electronic Frontier Foundation has developed a guide on How Corporations Can Avoid Assisting Repressive Regimes, and the Global Network Initiative has developed Principles on Freedom of Expression and Privacy.

Beyond the human rights-related recommendations, the OECD Guidelines make other recommendations relevant to companies developing algorithms. For example, the Environment Chapter recommends environmental, health and safety impact assessments.[1] The Consumer Chapter advises companies to provide accurate, verifiable and clear information to consumers.[2] In addition, companies should respect consumer privacy and take reasonable measures to ensure the security of the personal data that they collect, store, process or disseminate.[3]

Businesses that create algorithms should carry out due diligence on potential human rights impacts, as well as on labour, environmental and health and safety impacts. They should provide accurate, verifiable and clear information about their algorithms and take measures to protect personal data. Collaborative industry efforts on responsible algorithms are badly needed to shape these expectations in concrete terms. Responsible algorithms will not only generate profit, but will also protect the rights of individuals worldwide while doing so.

Useful links

There’s an algorithm for that. Or there soon will be, by Marina Bradbury on OECD Insights

[1] OECD Guidelines for Multinational Enterprises, Chapter VI.3

[2] OECD Guidelines for Multinational Enterprises, Chapter VIII.2

[3] OECD Guidelines for Multinational Enterprises, Chapter VIII.6

There’s an algorithm for that. Or there soon will be

OECD Forum 2016

Marina Bradbury, OECD Public Affairs and Communications Directorate and one of the organisers of this year’s OECD Forum

Would you like a machine to decide on your medical treatment, whether you could insure your house, if you should be hired, or what news stories you read? It may be happening to you already. Every time you go online to make a purchase, search for a restaurant, access your bank account or simply interact with your mobile device, you are creating a digital trail of data that is being tracked and stored. This “big data” is fodder for machine learning algorithms that will, for example, suggest what to buy next.

Traditionally in computer science, algorithms are sets of rules written by programmers. Machine learning algorithms are different: they can improve the software in which they are embedded without human intervention. The more data they receive, the better they become at “understanding” and predicting patterns, including patterns in human behaviour. They are another step along the road to creating artificial intelligence (AI), even if we don’t know where this road is leading. As Stephen Hawking and his colleagues, writing in The Independent, claimed, “Success in creating AI would be the biggest event in human history”, before going on to say, “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
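The difference can be sketched in a few lines of Python. Everything below, from the customer data to the function names, is invented for illustration: the hand-written rule stays fixed, while the simple statistical model re-estimates itself each time new data arrives.

# A hand-written rule: a programmer fixes the logic once and it never changes.
def rule_based_recommendation(age):
    return "discount offer" if age < 30 else "loyalty offer"

# A "learned" rule: a one-variable linear model fitted to past purchase data.
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, using whatever data has arrived so far."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical purchase history: customer age vs. amount spent.
ages  = [22, 35, 47, 51, 63]
spend = [40, 55, 80, 90, 110]

print(rule_based_recommendation(40))   # the hand-written rule never changes

# The model is re-estimated every time new data arrives, with no programmer rewriting it.
for k in range(2, len(ages) + 1):
    a, b = fit_line(ages[:k], spend[:k])
    print(f"after {k} customers: predicted spend at age 40 = {a * 40 + b:.0f}")

Real systems use far richer models, but the principle is the same: the behaviour of the software changes because the data changed, not because a human rewrote the rules.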

We are living in an algorithmic society, and many argue that this is a positive thing. On an economic level, machine learning algorithms could help stimulate innovation and productivity growth. According to OECD research, big data used to feed machine learning algorithms can boost industries including advertising, health care, utilities, logistics, transport and public administration. When it comes to our day-to-day lives, algorithms can save us time and effort, for example through online search tools, Internet shopping and smartphone apps that leverage “beacon” technology to provide timely recommendations based on our whereabouts. Computer scientist Pedro Domingos even predicts that in five years’ time digital personal assistants will be more important than smartphones, given their capacity to aggregate information from various apps to predict our needs before we even know them.

However, the large-scale use of algorithms can also threaten us as citizens. For example, if algorithms allow companies to predict our purchases before we even make them, what implications does this have for our personal choices and privacy? Critics point to the danger of allowing companies to exploit vast amounts of personal data and restrict individual liberties.

Take the realm of insurance, loans and legal advice. Nowadays, our credit rating or health insurance record is often assessed by a machine, not a person, whilst virtual legal assistants are becoming increasingly common. On the one hand, this can be advantageous to companies, enabling higher levels of efficiency and in turn more accessible prices. The legal industry is undergoing a veritable transformation thanks to algorithmic technology, with quantitative legal prediction (QLP) being a prime example. Making information-based predictions is at the heart of the legal profession. In addition, legal cases often require the analysis of large-scale data or document sets, which can exceed the cognitive limits of humans. Since algorithms are able to make predictions based on “big data” with increasing accuracy, QLP is arguably set to play a growing role.

On the other hand, when it comes to ordinary customers looking for legal support or a loan, automated systems may not be so helpful. Critics warn that even if an algorithm is designed to be neutral, bias can creep in. This can be due to the unconscious biases of the programmers who write it. With machine learning algorithms, it is also because they are fed by data: even if they absorb that data in a completely rational way, they will still reproduce forms of discrimination that already exist in society. For example, if you are looking for a bank loan, you might be offered a higher or lower rate depending on your postal address, name, age or gender.
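A toy sketch, with entirely invented postal codes and interest rates, of how this can happen: the “model” below never sees a protected attribute, it simply averages past offers by postal code, yet because the historical offers encode past discrimination, so do its predictions.

from statistics import mean

# Hypothetical past loan decisions: (postal_code, interest_rate_offered). Invented data.
history = [
    ("1011", 3.1), ("1011", 3.0), ("1011", 3.2),   # historically affluent area
    ("9741", 5.8), ("9741", 6.1), ("9741", 5.9),   # historically "redlined" area
]

def learned_rate(postal_code):
    """Predict a rate as the average of past offers made in the same postal code."""
    return mean(r for pc, r in history if pc == postal_code)

# Two applicants with identical incomes and repayment records get different offers,
# purely because the training data reflects past discrimination.
print(learned_rate("1011"))   # about 3.1
print(learned_rate("9741"))   # about 5.9

Removing the sensitive attribute from the data does not remove the bias, because innocuous-looking variables such as a postal code can act as proxies for it.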

In the same way, whilst “talent analytics” is being used in HR to help build fairer recruitment practices, these new technologies do not offer a quick fix. For example, studies have found that women or people with “foreign”-sounding names are shown different kinds of job advertisements from those shown to white males. Nevertheless, global companies such as Google and McKinsey are already developing “talent algorithms” to recruit the best staff and assess performance, and some argue that companies that fail to move in this direction may lose out later on. Overall, it seems that algorithms could have a positive impact on the future of recruitment, but only when used judiciously as part of a wider move towards inclusiveness.

The healthcare industry is another key area in which the paradigm of the algorithmic society plays out. For example, a recent study in the US revealed how machine learning can offer a faster and less resource-intensive method of detecting cancer, with machines automatically extracting crucial meaning from plain-text reports. Arguably, if machines can be used to review and analyse data, this frees up humans’ time to provide better clinical care. However, the ethical sensitivities of using algorithms to make critical health decisions must be addressed when developing such new models.

Trading algorithms are transforming the financial world as we know it. Algorithmic trading has given rise to companies such as Quantopian, which invites “talented people everywhere” to create their own algorithms for free and pays the creators of the best-performing ones, and Rizm, which lets those new to trading test and even trade using their own algorithms. However, the field is not without dangers: just one typo could lead to significant financial losses in a very short time. The ethics of algorithmic trading are also questioned by critics. With computer-driven or “quantitative” hedge funds enjoying success despite volatile markets, their business models will not escape scrutiny as algorithms continue to permeate our economic systems.

Finally, algorithms that drive search engines can influence the information we receive, affecting our outlook on the world and even our well-being. Take the phenomenon of “filter bubbles”: algorithm-based search tools tend to show us information based on our past behaviour, meaning the results are unlikely to challenge our existing views or spark serendipitous connections. More worrying still, Facebook conducted an experiment in 2014 to test users’ reactions to negative or positive content. The results revealed that those shown more negative comments posted more negative comments, and vice versa. However, the way the experiment was conducted was criticised for its lack of transparency.
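The mechanism behind a filter bubble can be sketched with invented topics and a deliberately crude ranking rule: a feed that ranks stories purely by how often the reader clicked that topic before quickly converges on a narrow slice of the news.

from collections import Counter

# Invented stories tagged by topic.
stories = [
    ("sport",    "Cup final goes to penalties"),
    ("science",  "New exoplanet discovered"),
    ("economy",  "Growth figures beat expectations"),
    ("economy",  "Central bank holds rates"),
    ("politics", "Coalition talks stall"),
]

clicks = Counter()   # how often the reader has clicked each topic so far

def feed(n=3):
    """Rank stories by how often the reader clicked that topic before; show the top n."""
    ranked = sorted(stories, key=lambda s: clicks[s[0]], reverse=True)
    return ranked[:n]

# Simulate a reader who clicks an economy story whenever one appears in the feed.
for day in range(3):
    shown = feed()
    print(f"day {day}: {[title for _, title in shown]}")
    for topic, _ in shown:
        if topic == "economy":
            clicks[topic] += 1   # each click pushes economy stories further up tomorrow
            break

By the second day the science and politics stories have dropped out of the feed entirely; nothing in the rule is malicious, it simply keeps serving more of what the reader already clicked.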

The paradigm of the algorithmic society is very much bound up with the unknown. In many ways this is exciting, capturing how data is becoming the raw material of our era, a source of many possibilities for innovation and even a means to address social problems. Yet it can also be a threat. As Pedro Domingos puts it, “You can’t control what you don’t understand, and that’s why you need to understand machine learning”. The challenge will be to ensure that we live in a society which reaps the benefits that algorithms can bring, whilst ensuring that their implications are understood by all.

Useful links

OECD Policy Brief on the future of work: Automation and independent work in a digital economy

The 2016 OECD Forum, on 31 May – 1 June, is entitled “Productive economies, Inclusive societies”. The Forum is organised around the three cross-cutting themes of OECD Week: inclusive growth and productivity, innovation and the digital economy, and international collaboration for implementing international agreements and standards. Register now, it’s free!