TikTok allegedly grants ‘a bit more leniency’ to star accounts

TikTok’s algorithm has been a mystery to many since its conception (AFP via Getty Images)

Leaked recordings heard by Forbes suggest that TikTok accounts with more than 5 million followers benefited from more lenient content moderation.

The site claims that the recordings, obtained from internal meetings, date to autumn 2021 and highlight the different levels of enforcement applied to the company’s high-profile accounts compared with everyone else.

“We don’t want to treat these users as, um, like any other accounts,” one employee from the Trust & Safety team is quoted as saying in a meeting in September 2021. “There’s a bit more leniency, I’d say.”

Another recording from October 2021 allegedly suggests the team was advised not to funnel high-profile complaints to contract moderators in Kuala Lumpur, as they were “very by-the-book” and “if it needs to go outside of that, they won’t”.

There’s nothing intrinsically wrong with a two-tier moderation system, and in some respects it makes sense. If a video is being seen by a handful of people, it’s clearly a less pressing concern than something that will attract millions of eyeballs within minutes.

In this case, however, the accounts with the most influence would be the least moderated.

If true, the suggestion that different standards apply depending on popularity would be deeply troubling, and it would make the claim in the company’s official community guidelines that the rules “apply to everyone and everything on TikTok” look a touch shaky.

The Standard reached out to TikTok for comment, and to ask whether these practices are in place today.

“Our Community Guidelines apply equally to all content and accounts, and we’re committed to enforcing them fairly, consistently, and equitably,” a TikTok spokesperson said.

“Higher follower counts do not lead to more lenient moderation.”

How to deal with high-profile accounts has become a serious problem for social media companies in recent years, with firms trying to balance both the principle of free speech and the shareholder-friendly fact that controversial figures are good for engagement against the very real harm that dangerous content can cause.

Twitter initially refused to censor Donald Trump’s tweets from the White House on the grounds that his inflammatory missives were both newsworthy and of “public interest”, but ultimately changed its tune, banning the outgoing President in the wake of the January 6 attacks.

Likewise, last year The Wall Street Journal reviewed company documents from Facebook revealing a programme called XCheck that, it claims, “shields millions of VIP users from the company’s normal enforcement process”.

This applied not only to political disinformation but also to nudity: in 2019, the Brazilian footballer Neymar shared nude photos of a woman who had accused him of sexual assault. The post was eventually removed, but not before it had been seen more than 55 million times. The footballer was not banned from the platform, despite the post being a clear breach of the social network’s community guidelines.

“After escalating the case to leadership, we decided to leave Neymar’s accounts active, a departure from our usual ‘one strike’ profile disable policy,” the internal document revealed.