A wave of online racism aimed at some of England’s black soccer players has highlighted how social media companies’ content moderation systems are failing to catch abuse conveyed through emojis.
On Sunday, England’s men’s soccer team, playing in their first major tournament final since 1966, fell to Italy on penalties. In the aftermath, a wave of racist abuse was levelled at three black England players — Marcus Rashford, Jadon Sancho and Bukayo Saka — and messages on social networks like Twitter, Facebook and Instagram included monkey and banana emojis.
The digital abuse isn’t a new phenomenon. The Professional Footballers’ Association and data science company Signify found in a 2020 study of tweets sent to some players that there were more than 3,000 explicitly abusive messages, with 29% of the racially abusive posts in the form of emojis.
“Twitter’s algorithms were not effectively intercepting racially abusive posts that were sent using emojis,” the study found. “This highlights a glaring oversight.”
But despite the long-standing problem, the abuse via emojis has continued. A more recent analysis published on Monday flagged almost 2,000 tweets as potentially abusive, targeting some black players during the European tournament, and said that although a number of the tweets were deleted, Twitter didn’t permanently suspend the accounts.
Social media companies such as Facebook, Twitter and Google, which owns YouTube, have spent years developing algorithms to detect offensive speech so that it can be removed. But experts say the companies have invested less effort and built less expertise in analysing emoji language — and that has left an opening.
“It’s okay to send a monkey emoji to someone, but if you call someone a monkey, you get banned — so that’s the contradiction,” said Vyvyan Evans, a linguistics expert who wrote a book on the subject. “Insufficient effort to date has been focused on policing emojis.”
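The contradiction Evans describes can be illustrated with a toy example. The sketch below is purely hypothetical — it is not any platform’s actual moderation code, and the tiny blocklist stands in for far more sophisticated classifiers — but it shows how a word-matching filter misses the monkey emoji while catching the written word, and how normalising emojis to their Unicode names closes that gap:

```python
import unicodedata

# Toy blocklist for illustration only; real systems use trained classifiers.
BLOCKLIST = {"monkey", "banana"}

def naive_filter(text: str) -> bool:
    """Word-matching filter: catches abusive words but not emojis."""
    return any(word in BLOCKLIST for word in text.lower().split())

def emoji_aware_filter(text: str) -> bool:
    """Expand non-ASCII characters to their Unicode names before matching,
    so '\U0001F412' (monkey emoji) becomes the word 'monkey'."""
    expanded = []
    for ch in text:
        if ch.isascii():
            expanded.append(ch)
        else:
            try:
                expanded.append(" " + unicodedata.name(ch).lower() + " ")
            except ValueError:  # character with no Unicode name
                expanded.append(ch)
    return naive_filter("".join(expanded))

naive_filter("you monkey")        # flagged: the written word is caught
naive_filter("\U0001F412")        # missed: the emoji slips through
emoji_aware_filter("\U0001F412")  # flagged: the loophole is closed
```

Production systems face harder cases — emoji combinations, skin-tone modifiers, and context-dependent meaning — but the basic asymmetry the filter demonstrates is the one researchers say platforms have been slow to address.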
Spokesmen for Twitter and Facebook said the companies have been removing posts and disabling accounts since Sunday’s final, with Twitter saying that the network was proactive and removed more than a thousand tweets and permanently suspended accounts in the hours after the game.
“Using emojis, like monkey or banana emojis, to racially abuse someone is completely against our rules,” said a Facebook company spokesman. “We use technology to help us review and remove harmful content, but we know these systems aren’t perfect, and we’re constantly working to improve.”
UK leaders condemned the hate speech, with Prime Minister Boris Johnson saying he warned executives from Facebook, Twitter, TikTok, Snapchat and Instagram at a Tuesday meeting that they need to crack down on online abuse.
Players and officials also spoke out, including Rashford in a widely shared statement on social media. “I’ve grown into a sport where I expect to read things written about myself,” he wrote. “I can take critique of my performance all day long, my penalty was not good enough, it should have gone in, but I will never apologise for who I am and where I came from.”
Bertie Vidgen, a research fellow in online harms at the Alan Turing Institute, has been working with colleagues from Oxford University to test how speech detection models, including one from Google called Jigsaw, respond to offensive emojis. The findings so far have not been encouraging, and Vidgen said it’s not because emojis necessarily pose a more difficult technical challenge.
“They have really low performance. You can say something hateful, which if you wrote out in text would definitely be picked up,” Vidgen said. “There’s zero justification for having that loophole. They just need to enforce their policies.” — Reported by Ivan Levingston, (c) 2021 Bloomberg LP