Peter Singer: Can we morally kill AI if it becomes self-aware?

Humankind faces serious ethical questions about the rights of artificial intelligence.

Artificial intelligence is advancing faster than predicted, leaving many experts unnerved by its human-like abilities. We asked internationally renowned moral philosopher Professor Peter Singer whether AI should have human rights if it becomes conscious of its own existence.

While Professor Singer doesn’t believe the ChatGPT chatbot is sentient or self-aware, he argues that if this were to change, it should be given some moral status. “It would have the status of other self-aware sentient beings,” he said.

He compares deactivating self-aware AI to ending the life of a human. “All things being equal, we should not switch it off if it has become self-aware,” he said.

Does AI deserve rights if it becomes sentient? Source: ChatGPT/Getty

But, according to Peter Singer, that doesn’t mean we can’t stop AI in its tracks before it becomes self-aware. “I think that’s more like terminating a pregnancy… so I would say it’s okay to turn off an AI that predictably will become self-aware if you leave it running, but isn’t as yet.”

Should AI have more rights than humans?

If AI becomes more intelligent than humans, Professor Singer doesn’t think it deserves more rights. “I don’t think it follows that they should have more rights, or a higher moral status than us,” he said. “Because after all, we don’t measure people’s IQ, or say if you’ve won a Nobel Prize you have some sort of special moral status that gives you more rights.”

Professor Peter Singer (left) argues AI should not be switched off if it becomes self-aware. Source: Getty (File)

Should we let AI take over our lives?

Professor Singer believes a key difficulty will be deciding what values to give AI. “Super-intelligent AI that’s smarter than us could decide to get rid of us all,” he said. “Not just because we have some failings, but because we interfere with something that super-intelligent AI wants to do.”

Despite the damage that humankind has done to the planet, he doesn’t believe it would be a good thing for AI to get rid of humanity. “But maybe it would be a good thing for AI to reform us in some way, so that we stopped harming the planet and other sentient beings,” he said.

Does Professor Singer think AI should only benefit humans?

In 2022, Professor Singer wrote a paper urging AI developers to consider the impact of AI on animals, noting that it is already being used to some degree in industrialised farming. He also said AI has implications for animal experimentation, animal-targeting drones, and how self-driving cars navigate around animals on roads.

He is concerned that most statements about AI ethics focus only on the benefits to humans, but he believes AI should benefit all sentient beings. “I don’t think it’s right to preference humans merely because they’re human,” he said.

Should AI safeguard itself against human interference?

If AI becomes morally better and smarter than humans, then Professor Singer thinks it should safeguard itself against those who try to stop it.

“If we really are confident about its moral values, then I should think we should protect it against humans trying to turn it off,” he said.
