Elon Musk’s X fuelled racist riots in England and Northern Ireland

Elon Musk’s social media platform, X, played a central role in spreading misinformation and hate which fuelled racist riots across England and Northern Ireland last summer, according to a new report.

Amnesty International says the platform’s content-ranking algorithms enabled toxic, racist, and false content to thrive in the wake of the July 2024 Southport stabbings, which contributed to violence including racist attacks.

Its new report analyses X’s recommender algorithm and the removal of safeguards, concluding that the platform is set up to amplify this type of harmful content and has few protections in place to prevent it.

“Our research demonstrates that these design choices significantly exacerbated human rights risks for racialised communities in the wake of the Southport riots — and continues to present a serious human rights risk today,” Amnesty’s head of big tech accountability, Pat de Brún, said.

Within hours of the murder of three young girls and the attempted murder of 10 others by 17-year-old Axel Rudakubana, incendiary posts by far-right influencers went viral on X.

Within 24 hours, posts falsely claiming the attacker was Muslim, a refugee and/or a migrant who had arrived by boat reached an estimated 27 million impressions.

Andrew Tate, a notorious online influencer, posted a video falsely claiming the attacker was an “undocumented migrant” who “arrived on a boat”, while Elon Musk claimed that “civil war is inevitable”.

Stephen Yaxley-Lennon, also known as Tommy Robinson, is banned from most mainstream platforms, but his posts on X received over 580 million views in the two weeks following the Southport attack.

Amnesty’s analysis of X’s open-source recommender algorithm found that it gives top priority to content that drives “conversation”, even when that conversation is driven by misinformation or hate.

This, it said, is exacerbated by the artificial amplification of posts from “premium” verified subscribers, which are paid accounts.

Mr de Brún said: “X’s algorithm favours what would provoke a response and delivers it at scale. Divisive content that drives replies, irrespective of their accuracy or harm, may be prioritised and surface more quickly in timelines than verified information.”

Since Elon Musk’s takeover in late 2022, X has dismantled or weakened many of its safety guardrails aimed at curbing hate speech and disinformation, from mass layoffs of content moderation staff to the reinstatement of previously banned accounts, with no evidence of human rights impact assessments.

According to Amnesty, the way X’s system weights, ranks, and boosts content, particularly posts that generate heated replies or are shared or created by “blue” or “premium” accounts — often paying users with limited identity verification — means that inflammatory or hostile posts are likely to gain traction during periods of heightened social tension.

Where such content targets racial, religious and other marginalised groups, portraying them as threatening or violent, X’s algorithms risk inciting discrimination, hostility or violence, it adds.

Sacha Deshmukh, chief executive of Amnesty International UK, said: “By amplifying hate and misinformation on such a massive scale, X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy.

“The platform’s algorithms not only failed to ‘break the circuit’ and stop the spread of dangerous falsehoods — they are highly likely to have amplified them.”

He continued: “One year on, it appears nothing has changed. Indeed, just two weeks ago we saw false online rumours about the transfer of people seeking asylum spark protests outside the Britannia Hotel in Canary Wharf.

“The UK’s online safety regime fails to keep the public safe, and the risk of X fuelling violence, discrimination and social tensions remains as high as it was during the rioting last year.

“The UK government must address the gaps in the Online Safety Act and challenge the racist rhetoric and scapegoating of refugees that are flourishing on social media.

“Regulators must hold X to account for its repeated role in human rights abuses and recognise that the self-regulation model is clearly failing.

“In cases where X’s algorithm is found to have amplified content that led to racist attacks during the riots, X should provide an avenue for remedy and establish a restitution fund for affected communities.”
