An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.

This one's nasty: in one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year-old woman who was murdered by her ex-boyfriend in 2006. The chatbot was taken down only after the woman's outraged family drew attention to it on social media.

Character.AI can be used to create chatbot "characters" from any number of sources: a user's imagination, a fictional character, or a real person, living or dead. Some of the company's bots have been used to mimic Elon Musk or Taylor Swift, for example. Lonely teens have used Character.AI to create friends for themselves, while others have used it to create AI "therapists." Still others have built bots to play out sexually explicit (or even sexually violent) scenarios.

For context: This isn't exactly some dark skunkworks program or a nascent startup with limited reach. Character.AI is a ChatGPT competitor started by ex-Google staffers in late 2021 and backed by kingmaker VC firm Andreessen Horowitz to the tune of a billion-dollar valuation. Per AdWeek, which first reported the story, Character.AI boasts some 20 million monthly users and hosts more than 100 million different AI characters on its platform.

The avatar of the woman, Jennifer Crecente, only came to light on Wednesday, after her bereaved father, Drew, received a Google Alert for her name. His brother (and Jennifer's uncle) Brian Crecente, the former editor-in-chief of gaming site Kotaku and a respected media figure in his own right, then brought it to the world's attention in a post on X.

The page from Character.AI, which can still be accessed via the Internet Archive, lists Jennifer Crecente as "a knowledgeable and friendly AI character who can provide information on a wide range of topics, including video games, technology, and pop culture," and adds that she has expertise in "journalism and can offer advice on writing and editing." What's more, it appears that nearly 70 people were able to access the AI, and have chats with it, before Character.AI pulled it down.

In response to Brian Crecente's outraged post, Character.AI replied on X with a pithy thank-you for bringing the avatar to its attention, saying that it violated the company's policies and would be deleted immediately, and promising to "examine whether further action is warranted."

In a blog post titled "AI and the death of Dignity," Brian Crecente recounted what has happened in the 18 years since his niece Jennifer's death: after much grief and sadness, her father Drew created a nonprofit, working to change laws and running game design contests that honor her memory, trying to find purpose in his grief.

And then, this happened. As Brian Crecente asked:

It feels like she’s been stolen from us again. That’s how I feel. I love Jen, but I’m not her father. What he’s feeling is, I know, a million times worse. [...] I’ll recover, my brother will recover. The thing is, why is it on us to be resilient? Why do multibillion-dollar companies not bother to create ethical, guiding principles and functioning guardrails to prevent this from ever happening? Why is it up to the grieving and the aggrieved to report this to a company and hope they do the right thing after the fact?

As for Character.AI's promise to see whether "further action" is warranted, who knows? Whether the Crecente family has grounds for a lawsuit is also murky, as this particular field of law is relatively untested. The startup's terms of service do contain an arbitration clause that prevents users from suing, but there doesn't seem to be any language covering this unique stripe of emotional distress, inflicted on non-users by its users.

Meanwhile, if you're looking for a sign of how these kinds of conflicts will continue to play out (which is to say, the kinds where AIs are made against the wills and desires of the people they're based on, living or dead), you need only look as far back as August, when Google hired Character.AI's founders back to the tune of $2.7 billion. Those same founders, it should be noted, initially left Google after the tech giant refused to release their chatbot on account of (among other reasons) its own ethical guardrails around AI.

And just yesterday, the news broke that Character.AI is making a change: it's promising to redouble efforts on its consumer-facing products, like the one used to create Jennifer Crecente's likeness. The Financial Times reported that instead of building AI models, Character.AI "will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users."

More on Character.AI: Google Paid $2.7 Billion to Get a Single AI Researcher Back