The Meta Oversight Board urged the social media company on Monday to reconsider its policy on manipulated media ahead of the many elections set to be held in 2024.
The board, which is run independently of Meta and funded through a grant by the company, described the policy as “incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent.”
The recommendation came as part of the board’s review of Meta’s decision to leave up a video of President Biden on Facebook that was edited to make it appear as though he was inappropriately touching his granddaughter.
The Meta Oversight Board ultimately upheld Meta’s decision on the Facebook video, finding that it did not violate the company’s manipulated media policy because the policy only applies to videos created with artificial intelligence (AI) that show people saying things they never said.
“Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn’t say), it does not violate the existing policy,” the board noted.
“Additionally, the alteration of this video clip is obvious and therefore unlikely to mislead the ‘average user’ of its authenticity, which, according to Meta, is a key characteristic of manipulated media,” it added.
However, the board argued that the current policy is too narrow and should be extended to cover audio and audiovisual content, content that shows people doing things they never did and content regardless of how it was created.
“The policy should not treat ‘deep fakes’ differently to content altered in other ways (for example, ‘cheap fakes’),” the Meta Oversight Board said.
It also suggested that the company stop removing manipulated content that does not violate any other policy and instead attach a label notifying users that the content has been “significantly altered and could mislead.”