Press release: How to Fight Fake News

A look at solutions to the growing problem of online disinformation

Should platforms be responsible for removing fake news? Are we seeing the death of satire and parody?

Embargo: Tues 6 Dec 7pm

I would like to invite you to a lecture, How to Fight Fake News, by Victoria Baines, Gresham College Professor of Information Technology.

In her lecture, Professor Baines will argue that the term ‘fake news’ is “actually quite unhelpful, as it bundles together several different types of content and behaviour that have different aims, tactics and impacts, and may therefore require different countermeasures.”

In the lecture she will explore the differences between disinformation, misinformation, propaganda and low-quality junk news: “As well as presenting information that is simply untrue, the fake news ecosystem abounds in the practices of manipulation and distortion.”

Does the solution lie with platforms?
Professor Baines goes on to ask whether the solution lies with the platforms identifying and removing false content. “Focus on platforms’ removal of content at scale presupposes that those in power always post objective facts online and do not engage in influence operations, which unfortunately is not the case. Some governments use disinformation tactics against their own citizens,” she will say.

“When a source that we trust starts to manipulate the narrative, whether that’s a public figure, political party, or a media outlet, this can be harder to spot and we may require specialist assistance. Political debates in the UK are now routinely fact-checked by academics, non-profits, and mainstream media outlets alike. Ahead of the US mid-term elections in 2018, right-wing media organisations pushed false narratives about the Hungarian-born billionaire George Soros, a leading Democrat donor. These included accusations that he had funded the migrant caravan then heading towards the US, that he had been a Nazi SS officer, and anti-Semitic conspiracy theories apparently endorsed by celebrities and the family of the then President. Around the same time, automated ‘bot’ accounts on Twitter posing as Democrat supporters began to discourage people from voting in the mid-term elections. On this occasion, Twitter removed an estimated 10,000 bot accounts.

“Another tactic is to encourage real account holders to copy and paste a message that seems authentic, complete with grammatical errors, but which on closer inspection is anything but. So-called ‘keyboard armies’ are paid to do this. Platforms are able to use tools that identify and remove the same false content being shared by large numbers of users. In both cases – whether bots or real people acting just like them – tech companies look for inauthentic behaviour in addition to seeking to verify the accuracy of the content shared.”
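
For illustration only, here is a minimal Python sketch of the kind of ‘same message, many accounts’ check described above. It is not any platform’s actual system: the normalisation rules, the flag_copypasta name and the 50-account threshold are all assumptions.

```python
from collections import defaultdict
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so near-identical copy-pasted
    variants of a message map to the same key."""
    text = re.sub(r"[^a-z0-9\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def flag_copypasta(posts, min_accounts=50):
    """Return message texts posted near-verbatim by many distinct accounts.

    posts: iterable of (account_id, message_text) pairs.
    min_accounts: assumed threshold; a real system would tune this.
    """
    accounts_per_message = defaultdict(set)
    for account, text in posts:
        accounts_per_message[normalize(text)].add(account)
    return {msg for msg, accounts in accounts_per_message.items()
            if len(accounts) >= min_accounts}
```

As the lecture notes, matching identical text is only half the picture: platforms combine checks like this with signals of inauthentic behaviour, since keyboard armies deliberately vary wording and add errors to seem genuine.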

On satire/parody: “The inclusion of satire and parody in definitions of fake news assumes that members of the public are unable to distinguish between fact and fiction, authoritative news reporting and parody. Quite apart from insulting the intelligence of citizens, this assessment also assumes that humans can’t spot context or filter out junk content but machines can, when in fact the opposite is more likely,” Baines will argue.

She will go on to say: “There are technical measures that can and should be deployed by platforms. A name change for the social media account of a political party or politician is unusual, and should in most cases be flagged as suspicious – not least because it may indicate that the account has been hacked. Increasingly, tech companies use tools such as Microsoft’s Video Authenticator that allow them to identify when a video has been artificially manipulated. Major journalistic outlets including the BBC, the New York Times and Reuters have partnered with Microsoft and Meta in the Trusted News Initiative. As well as sharing reports of disinformation in real time for rapid response, the partners are working on Project Origin to create a common standard for establishing whether a piece of video content is authentic.”
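
As a purely illustrative sketch of the name-change rule Professor Baines describes, the following Python flags a rename on a political account for human review; the Account fields, on_name_change function and review queue are hypothetical, not any platform’s real interface.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    display_name: str
    is_political: bool  # e.g. a verified party or office-holder account

def on_name_change(account: Account, new_name: str, review_queue: list) -> None:
    """Flag renames of political accounts for human review, since a
    sudden rename may indicate the account has been hacked."""
    if account.is_political and new_name != account.display_name:
        review_queue.append((account.handle, account.display_name, new_name))
    account.display_name = new_name
```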

On deepfake videos
“The advent of deepfakes has arguably made this need more pressing. Deepfakes are fake videos generated by machine learning. They are becoming more sophisticated and more convincing, and are finally being used in anger in political contexts. Recent research indicates that humans and computers are equally able to spot deepfakes, but humans significantly outperform leading deepfake detection technology when it comes to videos of well-known political leaders. This in turn suggests that the most promising model for deepfake identification is human-machine collaboration. We can’t sit back and let computers do all the thinking for us just yet.”
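
A hedged sketch of what that human-machine collaboration could look like, assuming a detector that outputs a fake-probability score: confident automated verdicts are acted on, while the uncertain middle band, where humans tend to outperform detectors on well-known faces, is escalated to people. The triage function and thresholds below are invented for illustration.

```python
def triage(video_id: str, fake_score: float,
           low: float = 0.2, high: float = 0.8) -> str:
    """Route a video based on a detector's fake-probability score:
    auto-resolve confident verdicts, escalate the ambiguous middle
    band to human reviewers."""
    if fake_score >= high:
        return f"{video_id}: likely manipulated - label or remove"
    if fake_score <= low:
        return f"{video_id}: likely authentic - no action"
    return f"{video_id}: uncertain - escalate to human review"
```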

ENDS

Notes to Editors

This lecture is hybrid: you can watch online, in person, or on replay at a later date. Sign up to watch through the link below, or email us for an embargoed transcript or to talk to Professor Baines: l.graves@gresham.ac.uk / 07799 738 439.
 
You can read more about Professor Victoria Baines, who has provided research expertise for Interpol, UNICEF and the Council of Europe, here.