AI is fuelling a deepfake porn crisis in South Korea. What’s behind it – and how can it be fixed?

It’s difficult to talk about artificial intelligence without talking about deepfake porn – a harmful AI byproduct that has been used to target everyone from Taylor Swift to Australian schoolgirls.

But a recent report from startup Security Heroes found that out of 95,820 deepfake porn videos analysed from different sources, 53% featured South Korean singers and actresses – suggesting this group is disproportionately targeted.

So, what’s behind South Korea’s deepfake problem? And what can be done about it?

Teenagers and minors among victims

Deepfakes are digitally manipulated photos, video or audio files that convincingly depict someone saying or doing things they never did. Among South Korean teenagers, creating deepfakes has become so common that some even view it as a prank. And they don’t just target celebrities.

On Telegram, group chats have been made for the specific purpose of engaging in image-based sexual abuse of women, including middle-school and high-school students, teachers and family members. Women who have their pictures on social media platforms such as KakaoTalk, Instagram and Facebook are also frequently targeted.

The perpetrators use AI bots to generate the fake imagery, which is then sold and/or indiscriminately disseminated, along with victims’ social media accounts, phone numbers and KakaoTalk usernames. One Telegram group attracted some 220,000 members, according to a Guardian report.

A lack of awareness

Despite gender-based violence causing significant harm to victims in South Korea, there remains a lack of awareness on the issue.

South Korea has experienced rapid technological growth in recent decades. It ranks first in the world in smartphone ownership and has among the highest rates of internet connectivity. Many jobs, including those in restaurants, manufacturing and public transport, are being rapidly replaced by robots and AI.

But as Human Rights Watch points out, the country’s progress in gender equality and other human rights measures has not kept pace with digital advancement. And research has shown that technological progress can actually exacerbate issues of gender-based violence.

Since 2019, digital sex crimes against children and adolescents in South Korea have been a huge issue – particularly due to the “Nth Room” case. This case involved hundreds of young victims (many of whom were minors) and around 260,000 participants engaged in sharing exploitative and coercive intimate content.

The case triggered widespread outrage and calls for stronger protection. It even led to the establishment of stronger conditions in the Act on Special Cases Concerning the Punishment of Sexual Crimes 2020. But despite this, the Supreme Prosecutors’ Office said only 28% of the total 17,495 digital sex offenders caught in 2021 were indicted — highlighting the ongoing challenges in effectively addressing digital sex crimes.

In 2020, the Ministry of Justice’s Digital Sexual Crimes Task Force proposed about 60 legal provisions, which have still not been accepted. The team was disbanded shortly after the inauguration of President Yoon Suk Yeol’s government in 2022.

During the 2022 presidential race, Yoon said “there is no structural gender discrimination” in South Korea and pledged to abolish the Ministry of Gender Equality and Family, the main ministry responsible for preventing gender-based violence. The ministerial post has remained vacant since February of this year.

Can technology also be the solution?

But AI isn’t always harmful – and South Korea provides proof of this too. In 2022, a digital sex crime support centre run by the Seoul metropolitan government developed a tool that can automatically track, monitor and delete deepfake images and videos around the clock.

The technology – which won the 2024 UN Public Administration Prize – has helped reduce the time taken to find deepfakes from an average of two hours to three minutes. But while such attempts can help reduce further harm from deepfakes, they are unlikely to be an exhaustive solution, as effects on victims can be persistent.

For meaningful change, the government needs to hold service providers such as social media platforms and messaging apps accountable for ensuring user safety.

Unified efforts

On August 30, the South Korean government announced plans to push for legislation to criminalise the possession, purchase and viewing of deepfakes in South Korea.

However, investigations and trials may continue to fall short until deepfakes in South Korea are recognised as a harmful form of gender-based violence. A multifaceted approach will be needed to address the deepfake problem, including stronger laws, reform and education.

South Korean authorities must also help to enhance public awareness of gender-based violence, and focus not only on supporting victims, but on developing proactive policies and educational programs to prevent violence in the first place.

This article is republished from The Conversation. It was written by: Sungshin (Luna) Bae, Monash University

Sungshin (Luna) Bae does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.