Bill Below, OECD*
Big data is in the spotlight again, this time with controversies surrounding Facebook’s handling of personal data raising eyebrows. As technologies and social media continue to evolve, are users’ best interests being looked after? Bill Below looks at some of the issues.
Growing up, I had a front-row seat to the rise of the computer age. One of my first memories in the early 1960s was going to my dad’s place of work—he was an engineer for Howard Hughes in Los Angeles. There, I listened to a computer singing via voice synthesis. It was just like HAL 9000, the one who wouldn’t open the pod doors for Dave in Stanley Kubrick’s 2001: A Space Odyssey (our picture). My father would go on to apply his considerable programming skills to the world of politics. He used computers to assist in the reapportionment of congressional districts in the state of California, and pioneered the use of demographic data and voter lists to generate political direct mail. He still has one of the first political form letters he created hanging on his office wall. It’s from the early 1970s, the time of punch cards, tape drives and “big iron” (mainframes). It’s in Spanish, addressed to a woman with a Hispanic surname, promoting a local candidate. Sprinkled seamlessly in the body of the letter is a handful of customised fields. It was the state of the art in voter targeting at the time.
Getting to know you
Since those early days, the data explosion has radically transformed what we can know about pretty much anyone. And data input from social media accounts is changing the game once again. With big data come big responsibilities. Are political campaigns living up to this?
In a 2013 paper, researchers at Cambridge University presented the results of a study that showed Facebook “Likes” to be strong predictors of personal attributes. For example, based on Likes alone, the model correctly discriminated between homosexual and heterosexual men in 88% of cases (and between homosexual and heterosexual women in 75% of cases), between African-Americans and Caucasian-Americans in 95% of cases, and between Democrats and Republicans in 85% of cases. Some of the other attributes deduced were surprising and even disturbing, such as predictions of cigarette, alcohol and drug use—and IQ (for reasons unknown, “liking” curly French fries correlates with high intelligence).
According to the study, relatively basic digital records of behaviour can be used to “automatically and accurately estimate a wide range of personal attributes that people would typically assume to be private”.
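The mechanism behind such predictions is simpler than it may sound: each user becomes a row of ones and zeros (liked / didn’t like), and a standard classifier learns which pages correlate with a hidden attribute. The sketch below is a toy illustration with entirely synthetic data and a plain logistic regression; it is not the study’s actual pipeline (which worked on millions of real profiles), only a minimal demonstration of the idea.

```python
import numpy as np

# Toy illustration with synthetic data (not from the actual study):
# each row is a user, each column a page they could "Like" (1 = liked).
rng = np.random.default_rng(0)
n_users, n_pages = 200, 30

# Hypothetical setup: a hidden binary attribute (say, party affiliation)
# subtly shifts which pages a user tends to like.
attribute = rng.integers(0, 2, n_users)             # the trait to predict
base = rng.random(n_pages)                          # baseline page popularity
shift = rng.normal(0, 0.3, n_pages)                 # attribute-linked skew
probs = np.clip(base + np.outer(attribute, shift), 0.05, 0.95)
likes = (rng.random((n_users, n_pages)) < probs).astype(float)

# Plain logistic regression fitted by gradient descent.
X = np.hstack([likes, np.ones((n_users, 1))])       # add a bias column
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted probability
    w -= 0.1 * X.T @ (p - attribute) / n_users      # gradient step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = (pred == attribute).mean()
print(f"training accuracy: {accuracy:.0%}")
```

Even this crude model recovers the hidden trait well above the 50% chance level, because the Likes carry a statistical signature of the attribute — the same effect the researchers exploited at far larger scale.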
“A bit scary”
One author of the 2013 report felt that the fact that information can now be automatically inferred for millions of people without them even noticing is “a bit scary”. He also suggested that such data techniques might expose individuals “to a degree potentially well beyond what they expect or would find comfortable”. At least until certain changes were made in Facebook’s data-sharing policy, this information could be accessed without the express permission of the individual being profiled.
Similar technology would eventually make its way into the political arena through the firm Cambridge Analytica.
“Scraping” or extracting large amounts of data from social media accounts may be relatively new, but political campaigns have long borrowed techniques from consumer marketing. In one sense, marketing a candidate presents the same challenges as marketing dish powder or breakfast cereal. And just like products, a candidate can be “versioned”, with different features and messages tailored to different audiences.
Taking a closer look
The author of the Cambridge University study isn’t the only one asking questions about the potential power of these technologies. The growth of political microtargeting is causing watchdogs to take a closer look on both sides of the Atlantic.
In May 2017, the UK’s Information Commissioner ordered an enquiry into the use of online political microtargeting in the run-up to the referendum on whether or not the UK should stay in the EU. The enquiry is ongoing and results should be delivered in the spring. In the US, before the current headlines about potentially breached data, both the Federal Trade Commission (FTC) and Congress had looked into the activities of data brokers, the companies that aggregate consumer, or voter, information, as the case may be. The extent to which the public believes this issue is critical may have been expressed by the massive sell-off of Facebook shares in recent weeks.
Consumer awareness and control of the data used in tracking and targeting is one of the issues of public concern. In the consumer world, certain profiles could easily be blackballed, while others are given high priority. The same is potentially true with voters. According to US Senator Ed Markey, co-founder of the Bipartisan Congressional Privacy Caucus, private data and the way it is used has the potential to determine an individual’s access to education, healthcare, employment and economic opportunities. For example, using correlations from your Facebook Likes, a prospective employer could conclude that, despite what you say on your resume, organisational skills are not your forte. It could be a deal-breaker.
Taken to an extreme, information that is ostensibly off-limits could be used surreptitiously against you, such as private information on health, sexual orientation and personal beliefs.
Shifts in public trust inform online strategies
With public trust in major political parties low, targeting voters through topics they care about can be a strategy for reaching individuals with weak party affiliation. On top of this, diminished trust in public institutions and the media gives more weight to the attitudes and behaviour of friends when we are making decisions. Indeed, research by the American Press Institute found that “a trusted [online] sharer of news has more significant effects on beliefs about news than a reputable media source”. So-called “targeted sharing” applications try to benefit from this by encouraging online users to loop friends into a given campaign promotion, very probably grabbing data as they go. Yet, other online factors can turn off voter trust. These include unfavourable attitudes towards the idea of targeted political messaging and the invasion of privacy it entails. In one study, 86% of US respondents said they did not want political advertising tailored to their interests.
Playing by the rules?
Those using microtargeting techniques could run afoul of consumer privacy protections if they fail to inform customers or voters of how they are using their data. The current European Data Protection Directive, for instance, which will be supplanted by the General Data Protection Regulation in the EU in May 2018, explicitly protects a person’s right to privacy, particularly with respect to the processing of personal data.
The difference between voter targeting and voter manipulation is a fine line, often a question of integrity or ethics rather than outright illegality. Yet, there are forms of influence that, while not necessarily illegal, are undesirable nonetheless. Influencing voter opinion with clearly false information, or appealing to racist, xenophobic or hateful sentiments, are a few that come to mind.
Only now are social media companies waking up to the need to be vigilant about those willing to use their platforms to exert an illegal or corrosive influence on politics. This poses important questions regarding one of the most ubiquitous business strategies of the online world: free services in exchange for personal data.
Should social networks that are existentially bound to a model of monetising user data be trusted to always keep the user’s best interest in mind? Not as long as money is being made selling that data to third parties.
*Bill Below works as a writer and editor for both the OECD Directorate for Public Governance and the OECD Directorate for Financial and Enterprise Affairs.
©OECD Insights March 2018