Animals in the machine: why the law needs to protect animals from AI
The rise of artificial intelligence (AI) has triggered concern about its potentially detrimental effects on humans. But the technology can also harm animals.
An important policy reform now underway in Australia offers an opportunity to address this. The federal government has committed A$5 million to renewing the lapsed Australian Animal Welfare Strategy. Consultation has begun, and the final strategy is expected in 2027.
While AI is not an explicit focus of the review, it should be.
Australians care about animals. The strategy could help ensure decision-makers protect animals from AI’s harms in our homes, on farms and in the wild.
Will AI harms to animals go unchecked?
Computers can now perform some complex tasks as well as, or better than, humans. In other words, they have developed a degree of “artificial intelligence”.
The technology is exciting but also risky.
Warnings about the risks to humans include everything from privacy concerns to the collapse of human civilisation.
Policy-makers in the European Union, the United States and Australia are scrambling to address these issues and ensure AI is safe and used responsibly. But the focus of these policies is to protect humans.
Now, Australia has a chance to protect animals from AI.
Australia’s previous Animal Welfare Strategy expired in 2014. It’s now being revived, and aims to provide a national approach to animal welfare.
So far, documents released as part of the review suggest AI is not being considered under the strategy. That is a serious omission, for reasons we outline below.
Powerful and pervasive technology in use
Much AI use benefits animals, such as in veterinary medicine. For example, it may soon help your vet read X-rays of your animal companion.
AI is being developed to detect pain in cats and dogs. This might help if the technology is accurate, but it could cause harm if it is not, either by over-reporting pain or by failing to detect discomfort.
AI may also allow humans to decipher animal communication and better understand animals’ point of view, such as interpreting whale song.
It has also been used to discover which trees and artificial structures are best for birds.
But research suggests AI may also be used to harm animals.
For example, it may be used by poachers and illegal wildlife traders to track and kill or capture endangered species. And AI-powered algorithms used by social media platforms can connect crime gangs to customers, perpetuating the illegal wildlife trade.
AI is known to produce racial, gender and other biases in relation to humans. It can also produce biased information and opinions about animals.
For example, AI chatbots may perpetuate negative attitudes about animals embedded in their training data – perhaps suggesting animals exist to be hunted or eaten.
There are plans to use AI to distinguish feral cats from native species and then kill the cats. Yet AI image-recognition tools have not been trained well enough to accurately identify many wild species. They are biased towards North American species, because that is where most of the training data comes from.
AI-powered recommendation algorithms tend to promote salacious content, so they are also likely to recommend animal cruelty videos on various platforms. YouTube, for example, hosts content involving horrific animal abuse.
Some AI technologies are used in harmful animal experiments. Elon Musk’s brain implant company Neuralink, for instance, was accused of rushing experiments that harmed and killed monkeys.
Researchers warn AI could estrange humans from animals and cause us to care less about them. Imagine farms almost entirely run by smart AI systems that “look after” the animals. This would reduce opportunities for humans to notice and respond to animal needs.
Existing regulatory frameworks are inadequate
Australia’s animal welfare laws are already flawed and fail to address existing harms. They allow some animals to be confined to very small spaces, such as chickens in battery cages or pigs in sow stalls and farrowing crates. Painful procedures (such as mulesing, tail docking and beak trimming) can be legally performed without pain relief.
Only widespread community outrage forces governments to end the most controversial practices, such as the export of live sheep by sea.
This has implications for the development and use of artificial intelligence. Reform is needed to ensure AI does not amplify these existing animal harms, or contribute to new ones.
Internationally, some governments are responding to the need for reform.
The United Kingdom’s online safety laws now require social media platforms to proactively monitor and remove illegal animal cruelty content from their platforms. In Brazil, Meta (the owner of Facebook and WhatsApp) was recently fined for not taking down posts that had been tagged as illegal wildlife trading.
The EU’s new AI Act also takes a small step towards recognising how the technology affects the environment we share with other animals.
Among other aims, the law encourages the AI industry to track and minimise the carbon footprint and other environmental impacts of AI systems. This would benefit animal as well as human health.
The current refresh of the Australian Animal Welfare Strategy, jointly led by federal, state and territory governments, gives us a chance to respond to the AI threat. It should be updated to consider how AI affects animal interests.
This article is republished from The Conversation. It was written by: Lev Bromberg, The University of Melbourne; Christine Parker, The University of Melbourne, and Simon Coghlan, The University of Melbourne
Lev Bromberg has previously received a Commonwealth Government Research Training Program Scholarship. He is affiliated with the Australasian Animal Law Teachers and Researchers Association and the Animal Welfare Lawyers group.
Christine Parker receives funding from the Australian Research Council (ARC) for the ARC Centre of Excellence for Automated Decision-Making and Society.
Simon Coghlan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.