Opinion: The Taylor Swift AI photos offer a terrifying warning
Editor’s Note: Laurie Segall is a longtime tech journalist and the founder of Mostly Human, an entertainment company that produces docs, films and digital content focused on the intersection of technology and humanity. She is the author of “Special Characters: My Adventures with Tech’s Titans and Misfits.” Previously, she was CNN’s senior technology correspondent. The views expressed in this commentary are her own. Read more opinion on CNN.
Sexually explicit AI-generated photos of pop superstar Taylor Swift have flooded the internet, and we don’t need to calm down.
Swift may be one of the most famous women in the world, but she represents every woman and every girl when it comes to what’s at stake in the future of artificial intelligence and consent.
I’ve been in the trenches covering the impact of technology for nearly 15 years, and I believe sexually explicit deepfakes are one of the most significant threats posed by advances in AI. With the proliferation of generative AI tools and Silicon Valley’s race to innovate, we are entering a phase of tech that feels familiar — only now, the stakes are even higher.
We are in an era where it’s not just our data that’s up for grabs; it’s our most intimate qualities. Our voices, our faces and our bodies can all now be mimicked by AI. Put simply: Our humanity is a click away from being used against us.
And if it can happen to Swift, it can happen to you. The biggest mistake we can make is believing this type of harm is reserved for public figures. We are now seeing the democratization of image-generating apps that enable this type of behavior. Did your crush reject you? There’s an app for that. Now, you can digitally undress her or create your own explicit deepfake starring her.
The problem will only get worse as we move into augmented and virtual worlds. Imagine an immersive environment where a scorned ex invites others to collectively view a sexually explicit deepfake video of the girl who rejected him. Earlier this month, it was reported that British police are investigating the case of a 16-year-old who alleged she was raped in a virtual world by multiple attackers.
I recently spoke to George Washington University law professor Mary Anne Franks, who specializes in civil rights, tech and free speech. She had a chilling warning: These types of apps and AI tools could lead to a new generation of young men with a “my wish is AI’s command” mentality. If we’re not careful, we will create not only a new generation of victims but also a new generation of abusers.
“We’ve just made all these tools — confused, resentful, angry young men are just using [them] instead of trying to sort through what it means to deal in a healthy way with rejection,” Franks said.
Leveraging advances in technology to humiliate women is nothing new. In 2015, I created a series at CNN called “Revenge Porn: The Cyberwar Against Women.” At the time, non-consensual pornography — where a scorned ex or bad actor published naked photos of women on websites devoted to shaming them — was rampant. Like today, the laws had yet to catch up and tech companies weren’t yet making changes to protect victims.
During that investigation, I will never forget looking at websites hosted on the dark web that featured non-consensual pornography of teenage girls. A security researcher who specialized in online abuse (and tracking down abusers) showed me the depths of the problem, guiding me through forums and images I will never unsee. On one site, perpetrators compromised teenagers’ web cameras and forced young girls to perform sexual acts with a threat: If you don’t comply, we’ll send your private images we’ve recorded to all your classmates.
Fast forward to 2024. Imagine your teenager receives a sexually explicit video of themselves in a DM. They never recorded such a video, but thanks to advances in deepfake technology, it’s nearly impossible to tell whether it’s real or fake. In a world where AI makes fiction this believable, the gap between truth and our perception of truth collapses. The feeling of shame, loss of control and helplessness doesn’t change just because an image or video isn’t technically “real.”
Swift’s deepfake nightmare is just the tip of the iceberg, highlighting the existential threat women and girls face. While X may have removed the viral posts (after they were viewed tens of millions of times), a number of alternative sites remain devoted to this type of exploitative content. One site, raking in millions of views a month, features pages of sexually explicit deepfake videos of Swift and other celebrities who did not consent to having their likenesses used for pornographic purposes.
The genie is hard to put back in the bottle, and the cure comes with a cost. On Saturday, searches for Swift were blocked on X, with the company telling CNN that the move was temporary to “prioritize safety.”
In order to protect one of the most famous women on the planet, X had to make her invisible. The move may be temporary, but its message has a lasting impact: If one of the most famous women in the world must disappear online in order to be safe, what does that mean for the rest of us?
I’ve thought a lot about what would actually move the needle.
From a policy perspective, a handful of states have laws against the creation or sharing of these types of sexually explicit deepfakes. Those laws vary in scope, so where someone is able to bring charges makes a difference. If, for example, Swift filed in New York, where she is a resident, she would have to prove the perpetrator intended to cause harm — an increasingly difficult feat with AI-generated sexually explicit images, Franks said.
“Intent to harm is a very restrictive requirement because, as with other forms of image-based sexual abuse, there are lots of other motives [including] sexual gratification, to make money, to gain notoriety, to achieve social status,” Franks said. “Statutes that require intent to cause harm give all of those perpetrators a free pass.”
To file criminal charges, Swift would first have to track down the perpetrator(s) — an expensive and difficult undertaking that risks further exposure. Even in states with laws on the books, in other words, the road to prosecution is prohibitively complex.
“Someone like Taylor Swift has lawyers and someone who can help do this,” Franks said. “Your average victim is not going to have any assistance.”
Franks says the ideal federal bill would include both criminal and civil penalties, citing the bipartisan Preventing Deepfakes of Intimate Images Act, which would criminally prohibit the disclosure of sexually explicit digital images without consent and provide civil recourse for victims. Because laws banning deepfakes are challenging to enforce, lawmakers in Vermont recently introduced legislation that would hold developers of generative AI products liable for reasonably foreseeable harms their products cause. These are first steps, but legislation is being outpaced by the speed at which the technology is advancing.
It doesn’t seem fair to ask Swift to be our spokesperson for this, but I strongly believe that she — and the powerful coalition of fans who share her ethos — may be our best shot at building the momentum needed to create meaningful change, starting now. Don’t get me wrong: there would be enormous hurdles and a personal cost for any woman, even one in Swift’s position, but given that she has already reshaped the music industry and created a micro-economy around her tour, I wouldn’t put anything past her.
In the Swift universe, injustice is a stepping stone, heartbreak becomes an anthem, and every disappointment is an opportunity to grow. Hopefully, we can use this moment to collectively raise our voices to sing the ultimate ballad: one where we have consent over our bodies, online.