Social media has played a significant role in fueling the anti-immigration riots engulfing towns and cities in the United Kingdom.
And agitator-in-chief Elon Musk is not sitting on the sidelines.
The Tesla chief executive and owner of X posted to the platform Sunday that “civil war is inevitable” in response to a post blaming the violent demonstrations on the effects of “mass migration and open borders.”
On Monday, a spokesperson for UK Prime Minister Keir Starmer addressed Musk’s comment, telling reporters “there’s no justification for that.”
But Musk is digging in his heels. On Tuesday, he labeled Starmer #TwoTierKier in an apparent reference to a debunked claim spread by conspiracy theorists and populist politicians such as Nigel Farage that “two-tier policing” means right-wing protests are dealt with more forcefully than those organized by the left. He also likened Britain to the Soviet Union for attempting to restrict offensive speech on social media.
Musk’s decision to amplify the anti-immigrant rhetoric highlights the role that false information spread online is playing in fomenting real-world violence — an issue of growing concern to the UK government, which vowed Tuesday to bring those responsible for the riots, as well as their online cheerleaders, to justice.
Later on Tuesday, a 28-year-old man in Leeds, northern England, became the first person to be charged with using “threatening words or behavior intending to stir up racial hatred” online, according to the UK Crown Prosecution Service. The charges related to “alleged Facebook posts,” Nick Price, the director of legal services at the CPS, said in a statement.
In recent days, rioters have damaged public buildings, set cars on fire and hurled bricks at police officers. They also set ablaze two Holiday Inn hotels in northern and central England believed to be housing asylum seekers awaiting a decision on their claims. Hundreds have been arrested.
The riots broke out last week after far-right groups claimed on social media that the person charged with carrying out a horrific stabbing attack that left three children dead was a Muslim asylum seeker. The online disinformation campaign stoked outrage directed at immigrants.
The suspect, who has since been named as 17-year-old Axel Rudakubana, was born in the UK, according to police.
But false claims about the attack — Britain’s worst mass stabbing targeting children in decades and possibly ever — spread rapidly online and continued garnering views even after the police had set the record straight.
According to the Institute for Strategic Dialogue, a think tank, by mid-afternoon on July 30, the day after the attack, a false name circulated online for the alleged asylum seeker had received more than 30,000 mentions on X alone from more than 18,000 unique accounts.
“The false name attributed to the attacker was circulated organically but also recommended to users by platform algorithms,” the ISD said in a statement.
“Platforms therefore amplified misinformation to users who may not otherwise have been exposed, even after the police had confirmed the name was false.”
According to the UK government, bots, which it said could be linked to state-backed actors, may well have amplified the spread of false information.
Tackling ‘online criminality’
Although social media companies have their own internal policies barring hate speech and incitement to violence from their platforms, they have long struggled to implement them.
“The problem has always been enforcement,” Isabelle Frances-Wright, a technology expert at the ISD, told CNN. “Particularly in times of crisis and conflict, when there is a huge groundswell of content, at which point their already fragile content moderation systems seem to fall apart.”
It does not help matters that Musk himself has promoted incendiary content on X, a platform that European regulators last month accused of misleading and deceiving users. If he can do it, why not others?
For example, shortly after the October 7 Hamas attack on Israel and the ensuing outbreak of the war in Gaza, the self-declared “free speech absolutist” publicly endorsed an antisemitic conspiracy theory popular among White supremacists. Musk later apologized for what he called his “dumbest” ever social media post.
On his watch, X has also relaxed its content moderation policies and reinstated several previously blocked accounts. That includes far-right figureheads like Tommy Robinson, who has published a stream of posts stoking the UK protests while criticizing violent attacks.
In 2018, before Musk bought Twitter, as X used to be known, Robinson was banned from the platform for violating its rules against “hateful conduct.”
The UK government this week vowed to prosecute “online criminality” and has pushed social media companies to take action against the spread of false information.
“Social media has put rocket boosters under… not just the misinformation but the encouragement of violence,” UK Home Secretary Yvette Cooper said Monday.
“That is a total disgrace and we cannot carry on like this,” she told BBC Radio 5 Live in an interview, adding that the police will be pursuing “online criminality” as well as “offline criminality.”
During a cabinet meeting Tuesday, Starmer said those involved in the riots — in person and online — “will feel the full force of the law and be subject to swift justice,” according to a readout seen by CNN.
At the same meeting, Peter Kyle, the minister for science and technology, said that in conversations with social media companies he had made clear their responsibility to help “stop the spread of hateful disinformation and incitement.” At a briefing following the meeting, Starmer dodged questions from reporters about Musk’s comments.
X, Facebook owner Meta and TikTok have not responded to CNN’s requests for comment.
It is unclear whether the UK government has the tools to hold social media platforms accountable for their role in the riots.
The UK’s Online Safety Act, adopted last year, creates new duties for social media platforms, including an obligation to take down illegal content when it appears.
It also makes it a criminal offense to post false information online “intended to cause non-trivial harm.”
But the legislation is not yet in effect because the regulator in charge of upholding it, Ofcom, is still consulting on codes of practice and guidance.
In a statement Monday, Ofcom said tackling illegal content online is a “major priority.” The watchdog expects the first set of duties under the new Act, regarding illegal content, to go into effect “from around the end of this year.” Once the law is in place, Ofcom will be able to fine companies up to 10% of their global revenue.
“As part of our wider engagement with tech platforms, we are already working to understand what actions they are taking in preparation for these new rules,” Ofcom added.
Zahid Mahmood, Rob Picheta, Lauren Kent and Sugam Pokharel contributed reporting.