They say a lie gets halfway around the world before the truth has its boots on. On social media, false information can spread particularly effectively – even more so if it involves sensational claims about the validity of an election or the risk profile of a coronavirus vaccine.
In 2016, Mark Zuckerberg estimated that around 1% of information on Facebook was a hoax. Critics of the platform pointed out that, given the huge volumes of content posted to Facebook daily, this still represents a massive amount of fake information being promoted to a large audience across the platform.
Recently, Facebook took the controversial decision to start moderating content posted on its platform after accusations that the site was helping to promote damaging falsehoods. Content identified as inaccurate by third-party fact-checkers is labelled as such and, most importantly, its transmission around the site is then suppressed. The platform is also clamping down on paid ads that aim to suppress voting and is labelling content from state-controlled sources.
How the algorithm boosts falsehoods
Facebook’s algorithm has always favoured particular types of content – if you post an engagement announcement that very quickly gets a flurry of ‘likes’ from your friends, Facebook will seize on that as popular or important content and make sure it’s seen by all of your friends.
Less popular content, such as your local café’s boring ‘happy Friday’ post, is effectively hidden by the algorithm as it judges few of the café’s followers will be interested.
Unfortunately, this mechanism has also helped promote controversial content, such as outrageous claims that get an instant reaction from users. It’s a mechanism that can be highly dangerous in the real world: in one study, a single piece of coronavirus misinformation was linked to around 800 deaths.
The platform has now adjusted this mechanism to help suppress certain types of inaccurate information about hot topics such as election validity and vaccination. Although disputed content is still visible, its reach is restricted: insiders at Facebook claim that a post rated as false by fact-checkers can have its reach cut by up to 80%.
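To make the mechanics concrete, here’s a minimal, purely illustrative Python sketch of engagement-weighted ranking with a fact-check demotion factor. The weights, field names and the 0.2 multiplier are assumptions built only from the figures quoted above – Facebook’s actual ranking system is not public.

```python
# Purely illustrative sketch of engagement-based ranking with a fact-check
# demotion factor. Weights and the 80% reduction are assumptions for
# illustration only; Facebook's real ranking system is not public.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    flagged_false: bool = False  # set by third-party fact-checkers


def rank_score(post: Post) -> float:
    """Score a post by early engagement, then demote it if fact-checkers flag it."""
    engagement = post.likes + 2 * post.comments + 3 * post.shares
    if post.flagged_false:
        engagement *= 0.2  # reach curtailed "by up to 80%", per the figure above
    return engagement


posts = [
    Post("We're engaged!", likes=120, comments=40, shares=10),
    Post("Happy Friday from the cafe", likes=3, comments=0, shares=0),
    Post("Outrageous vaccine claim", likes=200, comments=90, shares=150, flagged_false=True),
]

# Higher-scoring posts would be shown to more of a user's friends.
for p in sorted(posts, key=rank_score, reverse=True):
    print(f"{rank_score(p):7.1f}  {p.text}")
```

Even in this toy version, the flagged post still appears in the ranking – its reach is reduced, not removed, which mirrors the approach described above.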
The program is run by several dozen independent fact-checking organisations but has come under fire from some groups and governments for its lack of transparency. Speed remains a problem – content can travel quickly before fact-checkers can get to work. That’s a concern in itself because people may be more likely to believe content that hasn’t been flagged as misleading, even though it may not have been assessed yet. There’s also a huge volume of content to get through, and artificial intelligence doesn’t quite seem to be ready to help yet.
Inconsistent fact-checking
Facebook doesn’t appear to have implemented the same level of content scrutiny across different languages. In the US, where the platform has come under particular fire for its influence on the 2016 election, there’s a difference between how it moderates English- and Spanish-language content.
According to figures quoted in The Guardian, 70% of misinformation shared in English is flagged with warning labels, while only 30% of Spanish-language misinformation is flagged. That’s significant because of the large population of Spanish speakers actively using Facebook across the US.
The issue of social media misinformation has recently become even more pertinent because of the particular vulnerability of minority populations to coronavirus. There are concerns that unscientific advice and conspiracy theories are undermining attempts to control the virus in all communities.
Studies during the 2020 election showed that political misinformation spread more widely and persisted for longer when it was shared in Spanish rather than in English. It’s concerning that some communities are less protected from harmful information merely because of the language they speak.
Facebook stands accused of not applying the same community standards to all users in the US, specifically those who consume content in minority languages.
So why is moderation less effective for Spanish content than for English?
It seems the problem is that Facebook leans on non-human (AI) moderation tools for Spanish-language content more than it does for English-language content. AI is simply less effective at moderating content than human moderators are. Perhaps Facebook hasn’t managed to recruit enough Spanish-speaking moderators to keep the same standard across the two languages.
Spanish is a major language in the US and the issue is relatively high-profile. If Facebook struggles even with attention focused on this particular market and this particular language, it’s feasible that it won’t manage to recruit enough human moderators for less visible languages in less scrutinised markets.
Beating the moderators
Facebook isn’t completely transparent about how it chooses which content to display to users. Even so, marketers have become very good at reverse-engineering its behaviour and learning to work with Facebook’s content algorithm to get as much reach for as little spend as possible.
There are entire blogs dedicated to understanding the algorithm and getting better visibility for free and paid content. Those in the know will avoid using words such as ‘sale’, ‘coupon’ or ‘like and share’, because Facebook tends to kill the reach of overtly promotional posts in favour of those that build communities. Marketers do their research and learn by trial and error what works to optimise their content on the platform.
There’s no doubt that the people and organisations that create and share misinformation are currently trying to figure out how to get around the new moderation of content. They’ll probably already have figured out that it’s harder for Facebook’s AI tools to screen video for misinformation.
It’s more likely that video content will need to be checked by a human and that will take time. By the time a video is finally seen by a human moderator, it may already have clocked up a lot of views and shares. Facebook has been accused of being slow to add fact-checking labels to posts.
When content creators do post text rather than video, they try to avoid triggering the automatic moderation tools. So instead of using the flagged word ‘vaccine’, they’ll type ‘v@x’ or ‘vackseen’. Harder still to catch is content that doesn’t make false claims outright but instead seeks to sow doubt in the user’s mind.
For instance, instead of claiming coronavirus is linked to the presence of 5G towers, the content creators will instead publicise a case where someone had a medical event following vaccination in a way that implies the two events are connected.
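As a rough illustration of why spelling tricks like ‘v@x’ defeat simple keyword screening, here’s a toy Python sketch. The term list and substitution table are hypothetical – they aren’t Facebook’s actual rules, and real moderation systems are far more sophisticated.

```python
# Toy sketch of why simple keyword filters miss obfuscated spellings, and how
# basic normalisation narrows the gap. The flagged terms and substitution map
# are illustrative assumptions, not any platform's real moderation rules.
import re

FLAGGED_TERMS = {"vaccine", "vax"}

# Map common character swaps back to plain letters before matching.
LEET_MAP = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})


def naive_filter(text: str) -> bool:
    """Flag text only if a blocked term appears verbatim."""
    words = re.findall(r"[a-z@0-9$]+", text.lower())
    return any(w in FLAGGED_TERMS for w in words)


def normalising_filter(text: str) -> bool:
    """Undo simple obfuscation (v@x -> vax) before checking the term list."""
    cleaned = text.lower().translate(LEET_MAP)
    words = re.findall(r"[a-z]+", cleaned)
    return any(w in FLAGGED_TERMS for w in words)


sample = "They don't want you to know what's in the v@x"
print(naive_filter(sample))        # False - the obfuscated spelling slips through
print(normalising_filter(sample))  # True  - normalisation catches this variant

# Deliberate misspellings like 'vackseen', and posts that only imply a false
# claim rather than stating one, still defeat both filters.
```

The last comment is the crux: even a smarter filter catches only the variants someone thought to encode, while insinuation carries no flagged keyword at all.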
A global problem
With Facebook widely used around the world, the question of how information is shared on this platform is now an issue of global concern. We’ve seen how difficult it has been for the platform to moderate content in a widely-spoken language in the US.
Although Facebook claims that it applies the same standards across all users of its platform, it’s unclear how effectively the platform will implement moderation of less mainstream languages across the world. With 111 languages now supported by the platform, there’s serious doubt about how rigorously these standards will be applied. Other platforms are also under scrutiny, including YouTube and Twitter.
If Facebook is seen to be less effective at moderating content in Spanish, one of the world’s most widely spoken languages, then how effective is it likely to be at scrutinising content in a smaller language, such as Corsican – a language considered to be in danger and spoken by only around 280,000 people?
There’s also a wider question of diversity in tech. If Facebook struggles to recruit Spanish-speaking moderators in its native market – a place where it has a lot of exposure to a huge Spanish-speaking population – then it may find it even harder to moderate languages from markets where it has less of a presence.
Critics suggest that Facebook needs to tackle the issue of account creation instead of just dealing with content on a piece-by-piece basis. They claim that the platform hasn’t been successful at cracking down on some high-profile conspiracy-promoting accounts, including some with up to half a million followers.
At least one prominent anti-vaccine campaigner has been able to open new accounts on the platform after their original one was shut down. David Icke, usually seen as a poster child for problematic conspiracy theories, had his most prominent account shut down whilst a secondary one remained open.
Misinformation spreaders that have been discouraged from posting on Facebook have often simply turned to Instagram, another Facebook-owned platform, instead. Despite their shared ownership, the platforms don’t share information, so a ban on Facebook counts for nothing on Instagram.
And all platforms seem to take the approach of tackling each piece of content individually rather than looking at an account’s overall content history and shutting down problematic accounts. That’s particularly unsettling because some of the accounts in question have a lot of followers and are often described as ‘superspreaders of misinformation’.
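For contrast, here’s a hypothetical Python sketch of what account-level moderation could look like: counting fact-check strikes per account rather than judging each post in isolation. The threshold and data shapes are assumptions for illustration, not any platform’s actual policy.

```python
# Hypothetical sketch of account-level moderation: aggregate fact-check
# strikes per account so repeat offenders get reviewed, not just their posts.
# The threshold and data format are assumptions for illustration only.
from collections import Counter

STRIKE_THRESHOLD = 3  # hypothetical cut-off for escalating an account


def accounts_to_review(flagged_posts: list[dict]) -> list[str]:
    """Return accounts whose count of posts rated false crosses the threshold."""
    strikes = Counter(post["account"] for post in flagged_posts if post["rated_false"])
    return [account for account, count in strikes.items() if count >= STRIKE_THRESHOLD]


flagged_posts = [
    {"account": "conspiracy_hub", "rated_false": True},
    {"account": "conspiracy_hub", "rated_false": True},
    {"account": "conspiracy_hub", "rated_false": True},
    {"account": "local_cafe", "rated_false": False},
]

print(accounts_to_review(flagged_posts))  # ['conspiracy_hub']
```

The design choice is the point, not the code: per-post labelling treats a serial spreader and a one-off mistake identically, whereas an account-level view surfaces the ‘superspreaders’ the critics are worried about.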
A pertinent topic
There’s strong evidence that minority populations are being disproportionately affected by the coronavirus, both in comparison to majority populations and in comparison to other minority groups.
In the UK, outcomes have been found to differ between Black Caribbeans and Black Africans affected by the virus, and between Bangladeshis, Indians and Pakistanis. After adjusting for factors including population density and household composition, Black Britons’ risk of death involving coronavirus was 2.0 times greater for men and 1.4 times greater for women than for those of a white ethnic background.
A 2020 report to the House of Commons pointed to socioeconomic factors, such as the higher number of minority workers on zero-hours contracts, who were therefore ineligible for the furlough payments that might have allowed them to avoid the workplace and its associated virus hazards.
The report also references housing issues, such as overcrowding, and advises that government guidance should be made accessible to minority communities. It’s also possible that some populations are more genetically vulnerable to some diseases than others. Other parts of the world are seeing similar patterns where particular populations are more vulnerable than others.
This is a particular threat to minority groups that are also defined by their languages. For the world’s endangered languages, any loss of population is a blow to the survival prospects of the language itself.
Several publications, including National Geographic, have expressed concerns about the loss of key community figures who are pivotal to the language integrity of a community. That may be even more of a concern where critically endangered languages are disproportionately spoken by older people – a demographic known to be particularly vulnerable to coronavirus.
With minority language groups particularly vulnerable to the virus, it’s all the more concerning to see social media platforms enabling (and sometimes actively promoting) the spread of unhelpful and even dangerous information about health security.
In the US a Hispanic advocacy group has criticised what it described as “rampant Spanish-language disinformation” on Facebook. It’s worrying that harmful content is being promoted to groups that may be most vulnerable to its impact.
Ultimately, social media sites are structured in a way that’s completely at odds with efforts to suppress or cancel misinformation and its spreaders. The entire monetisation strategy is based on encouraging content creation that engages people and, sadly, misinformation and sensationalist claims tend to be very engaging.
Social media users are more likely to engage with, promote, share and talk about content that preys on their fears.
By the very way they are monetised, social media platforms are incentivised to support creators who prey on fears and jump on popular concerns. It’s hard to see how society can navigate concerns about the spread of misinformation when social media is fundamentally at odds with attempts to fight untruths.