Best ways of reducing the effects of misinformation on social media platforms?

The content below was submitted as written evidence to the House of Lords Democracy and Digital Technologies Committee (@HLDemoDigital) for their “Digital Technology and the Resurrection of Trust” report.


The full Report

https://publications.parliament.uk/pa/ld5801/ldselect/lddemdigi/77/7702.htm


My evidence

https://committees.parliament.uk/writtenevidence/357/html/


What might be the best ways of reducing the effects of misinformation on social media platforms?


One of the central approaches that would benefit our understanding of malicious fake news and misinformation is allowing researchers such as myself access to public Facebook data. Recently, Facebook has been shutting down access to API tools that can extract data from pages, including posts that may be deliberate fake news. Facebook is closing its doors to researchers in the wake of the Cambridge Analytica scandal, using the scandal as a smokescreen to make the platform much harder to analyse. This is a huge problem, as researchers need post data in order to understand what misinformation is, who sends it and what impact it is having.


The latest casualty is the app Netvizz, which was shut down on 21 August 2019. Netvizz was a research tool used by hundreds of academics to gather public Facebook data. The app gathered more than 300 academic citations and was used to produce studies on everything from Norwegian political party videos to public opinion about the London 2012 Olympic Games. The tool was also used to gather fake news posts and examine the effects of misinformation on Facebook (see Senaweera & Dissanayake, 2018; Oklay, 2018; Burger, Kanhai, Pleijter & Verberne, 2019).

In its place, Facebook has partnered with Social Science One; however, this approach has failed to provide a clear solution to data access, with reports of data not being delivered and of avenues for access being deliberately restrictive. Facebook appears to believe we can fight misinformation while restricting researchers’ access to data. This is a tactical error: if we were barely scratching the surface of misinformation during the open-API era, how will we understand the problem after access is shut down further?

Overall, the best way to reduce the impact of misinformation is to understand it better, as it remains a relative unknown. Government can help support the fight against misinformation by pressuring Facebook to create a public, searchable archive of organic public posts, akin to its Ad Library. This would allow both the public and researchers to see what the powerful are sending them, and would reduce the incentive for pages to send false information, as they could be held to account.


Facebook is hollowing out our ability to hold to account both the corporation itself and the powerful who spread fake news and misinformation on the platform. Facebook is being turned into a black box that no-one will have access to. Those who get to research Facebook will become a specially chosen group (Social Science One). What chance have researchers got of asking difficult questions about the platform if Facebook, through Social Science One, is the arbiter? All of this spells an end to understanding Facebook at a time when its influence on participation is growing (Boulianne, 2018).

Email: socialmediaresearchcentre@gmail.com

Phone: +44 7563395450


© 2020 by The Social Media Research Centre