YouTube and X failed to act on disinformation ahead of EU vote - report

YouTube and X often failed to take action against disinformation ahead of the June EU election, an analysis of more than 1,300 posts on the platforms across 26 EU countries suggests.

The report, published Tuesday by Maldita.es, a Spanish fact-checking organisation founded in 2018, analyses how five very large online platforms – Facebook, Instagram, TikTok, X and YouTube – handled debunked disinformation across the continent ahead of the EU election.

The posts analysed were collected by fact-checking organisations involved in the Elections24Check project during the four months leading up to the elections, up to and including June 6, the first day of the EU vote.

Video-sharing platform YouTube was the worst performer, according to the data. It took no visible action on 75% of disinformation content, and in 80% of the cases where it did act, the action was limited to a generic information panel or a label identifying the video's source as state media, with no explanation of why the content itself was false. Some of those videos reached 500,000 views, the report said.

A spokesperson for YouTube told Euronews that the company has "strict policies on harmful misinformation, which our teams work around the clock to rigorously enforce, including throughout the EU election period."

"We’ve also connected voters across the EU to authoritative sources of news and information through our recommendation system," the spokesperson added.

Platforms already subject to Commission probes

Similarly, social media platform X took no visible action in 70% of cases, and explanatory Community Notes were visible on only 15% of the posts already flagged by European independent fact-checkers.

Among the 20 most viral debunked posts that received no action by the platforms, 18 were hosted on X, with over 1.5 million views each.

Platforms overall took the least action on disinformation posts targeting migrants (57% of the cases received no action), followed by disinformation about the integrity of the election (56%). Both YouTube and TikTok had a 0% response rate for disinformation targeting migrants.

Large online platforms are legally required to take action to fight disinformation under the EU’s Digital Services Act (DSA) through labelling or content removal, for example.

In December 2023, the European Commission started an investigation under the DSA into X's handling of risk management, content moderation, dark patterns, advertising transparency and data access for researchers.

Facebook and Instagram: performing slightly better

According to the report, the Chinese-owned video-sharing platform TikTok managed to take visible action on 40% of the posts containing disinformation.

For Meta-owned Instagram this figure is 70%, and for Facebook 88%. While the majority of Facebook's actions against disinformation were fact-checking labels that kept the original content online and focused on adding context (77%), TikTok's most common response was to remove the content (32%).

Meta’s platforms are also subject to an ongoing investigation under the DSA. The Commission said in April that it feared they were vulnerable to Russian networks and lacked the right tools to deal with deceptive advertising and political content on their services.

TikTok’s parent company ByteDance is also being probed: in February, the Commission began investigating the company’s protection of minors, advertising transparency, data access for researchers, as well as its risk management of addictive design and harmful content.

A spokesperson for the Commission said in a statement to Euronews that, following its election guidelines for platforms under the DSA and a stress test with large platforms, the companies "were very well prepared" and that "no major incidents took place" over the election weekend.

This article has been updated with comments from the Commission and YouTube.