Fakebook: why Facebook makes the fake news problem inevitable
Dr Paul Bernal
When Mark Zuckerberg failed to appear before the ‘international grand committee’ convened by the DCMS, bringing together committee chairs from the UK, Canada, Australia, Argentina and Ireland at the end of November 2018, it was somehow both symbolic and entirely predictable. The international grand committee was part of the ‘ongoing inquiry by the DCMS Committee into Disinformation and “fake news”’ – and, as my recent article in the Northern Ireland Legal Quarterly explores, Facebook is central to the current problem we have with fake news. The reason his non-appearance was not unexpected for those of us who study the fake news phenomenon is that it is difficult to see what he could usefully say without undermining his entire business.
As I outline in my article, fake news on Facebook is not an accident, nor is it the result of malicious manipulators abusing Facebook’s system. Rather, it is the inevitable consequence of the way that Facebook works. The reason it is successful is not that the producers of ‘fake news’ have found a bug in the system, a loophole in its controls – but that they have understood how Facebook works and have used it exactly as it is intended to work. Data-mining, profiling, micro-targeting and manipulating people’s views, nudging their opinions and playing with their emotions is the point of Facebook, not some clever abuse of an otherwise neutral platform serving the public good. If Facebook continues to function in its present form – with its present technological and business models – the ‘fake news’ phenomenon is not going away.
‘Fake news’ is not really new. Though the recent phenomenon – and the ‘fake news’ label – appeared in 2016, in the early stages of Donald Trump’s presidential campaign, it appears to have always existed. My article charts some of the history, starting with the transformation of fifteenth-century Wallachian prince Vlad Țepeș from a tough but effective ruler into, first, Vlad the Impaler, the personification of brutality and inhumanity, and then Dracula, demonic vampire and Prince of Darkness, through campaigns of what amounted to ‘fake news’ by his various enemies, including woodcut pamphlets with gory depictions of events that almost certainly never took place. There are similar examples at every stage of history, using whatever means were available: in seventeenth-century France, colporteurs singing fake news in the streets; in nineteenth-century Germany, the phenomenon of the unechte Korrespondenz or ‘fake foreign correspondent’s letter’, in which newspapers pretended to have correspondents all around Europe but, to save money, simply invented the stories. In the Second World War the Germans used Lord Haw Haw’s radio broadcasts – a technique echoed in the Korean and Vietnam Wars through ‘Seoul City Sue’ and ‘Hanoi Hannah’ respectively. In Iraq, we had ‘Comical Ali’ doing the same on TV. In each age, the most effective form of media was used – and in the current era that means Facebook.
This is where the problem really takes off. Facebook is so much more efficient and so much more all-encompassing than any of the other forms of media developed to date that fake news on Facebook can be far more effective. In the article, I outline some of the recent empirical research into the effectiveness of fake news in this context. It is believed more often than real news. It spreads further than real news. It can be delivered to the places where it will have an effect more easily than through any previous form of information – Facebook’s own systems of sharing, profiling and tailoring news feeds see to that. Facebook’s own research shows that it can get people to register to vote, to actually vote and more. Emotions can be measurably manipulated – and targeting can and does operate on racial, political and religious characteristics, even reaching people in vulnerable states at their most manipulable. All this is designed for advertising, but the application, as the spreaders of fake news realised early on, can be much broader.
This is the essence of the problem. Fake news and Facebook are natural companions. Facebook’s systems help people design fake news – crafting it according to interests and views determined through data-mining. These systems help people create that fake news, present it, target it and deliver it. It is part of the system – a feature, not a bug – and the only real way to stop it would be to break Facebook’s whole business model. The question is whether that is either possible – the Facebook juggernaut may be unstoppable – or desirable. Are the benefits to freedom of speech, to communication and so forth that arise from Facebook so great that the risks of ‘fake news’ are a price worth paying? Should we accept these risks and do what we can to limit the damage, rather than tackle them directly? That is something that the politicians – the international grand committees and more – will have to grapple with over the next few years. It will not be easy.