Peter Singer on whether we should put AI in charge of solving climate change

Internationally renowned moral philosopher Professor Peter Singer spoke to Yahoo News Australia about whether artificial intelligence would be better at running the planet.

Video transcript

- We have a moral imperative, because we're aware of our own failings, to create AI that is smarter than us.

- I think the question is not only whether AI will be smarter than us but whether it will also be aligned with the values that we think are the right values. And I think a lot of the real difficulty is in deciding what values the AI, the superintelligent AI, let's say, is going to have. Because if the AI is not going to have values that we think are the right ones, then this could be very negative. This superintelligent AI that's smarter than us could decide to get rid of us all and not just because we have some failings but because we interfere with something that that superintelligent AI wants to do, which is not necessarily a good thing at all.

- Even given what we've done to the planet and other species, would it be a good thing for AI to get rid of us?

- I don't think it would be a good thing for AI to get rid of us, but maybe it would be a good thing for AI to reform us in some way so that we stopped harming the planet and other sentient beings. And perhaps the AI, being so smart, would know how to do that better than you or I know how to do it.

- Good point. Should we put, for instance, AI in charge of solving the climate change issue? Because it could take human self-interest out of the issue.

- I think if we had a superintelligent AI that we were reasonably persuaded was going to reliably work to mitigate the damage that climate change is doing and was not going to have any other harmful consequences, then that would be an excellent thing. If we could say, OK, here's a very smart and quite impartial instrument for reducing the harm that is going to happen to all of us if we keep going down this path.

Now, how exactly it would do that, I can't tell. But then we're postulating that it's a lot smarter than me, so there's no reason why I should be able to tell, I suppose, how the AI will do that. If there is a way to do it, then I suppose, if it's really, really smart, it will find that way.

- Should we create AI that benefits humankind? Or should we be looking bigger picture? Should it be equally looking after the planet and after other species?

- Yeah, very good question. I recently published an article in a journal called AI and Ethics together with a colleague, [INAUDIBLE], exactly on that topic. And we argue that presently, most statements of AI ethics and most courses that teach about AI ethics focus only on the benefits to humans. In fact, they focus only on saying AI must benefit human beings, not discriminate against some human beings, and so on. And virtually none of these statements actually talk about how AI should also benefit other sentient beings with whom we share the planet. So definitely I think that AI should benefit not only humans but other sentient beings, and the future of the planet for all of the other sentient beings who will live on this planet in centuries to come.

- Should it preference humans in that?

- I don't think it's right to preference humans merely because they're human. Put it this way: if something is going to cause pain to a human and to an animal, and it's a similar kind of pain, let's say a burn that's going to happen, not a fatal one, but it will be painful, I don't think there's a reason for saying, oh, well, if we have to choose between one or the other, let's always choose the human. Or even to say, even if the burn is going to be worse for the animal, let's choose the lesser burn for the human because, after all, humans have a higher moral status than animals. I don't think that's justified, any more than I think it would be justified to say the same thing with regard to someone who's of a different race from those who are making the decision.

So I don't think it would be right to preference humans just because they're human. But there may be cases where we're not talking about similar interests, where we're talking about different kinds of interests. To go back to the discussion we had before about ending life: humans think about their future, plan for their future, and often work very long term for what's going to happen to them in the future. For example, they may go to university and study for four or five years because of what they want to do after that time. And no animal, as far as we know, plans ahead in that way. So if you were to kill a human, you would cut off their plans in a way that you would not if you killed an animal who was not capable of that kind of long-term planning.