Apple's new photo feature has divided opinion among lawmakers and technology privacy experts, with some championing it and others calling it a "terrible idea".
Apple is planning to scan US iPhones to find evidence of potential child abuse in a move which has raised concern among some security researchers that the system could be misused by governments looking to surveil their citizens.
The tech giant said its messaging app will use on-device machine learning to warn about sensitive content without making private communications readable by the company.
The tool Apple calls "neuralMatch" will detect known images of child sexual abuse without decrypting people's messages.
If it finds a match, the image will be reviewed by a human who can notify law enforcement if necessary.
Nigel Phair, a former Australian Federal Police officer and Director of Enterprise at UNSW Canberra Cyber, told Yahoo News Australia that "holistically" he believes "it's a great idea", despite concerns from people about their privacy being compromised.
“It’s a completely different code when you’ve got the word ‘children’ in there,” Mr Phair said.
“There is a regulatory regime that could go with this. Anything that stops one child from being abused I’m all for it.”
Mr Phair said he hoped Australian police would support the move, and he equated the internet to a public space such as a street.
Separately, Apple said its messaging app will use on-device machine learning to identify and blur sexually explicit photos on children’s phones and can also warn the parents of younger children via text message. It also said that its software would “intervene” when users try to search for topics related to child sexual abuse.
In order to receive the warnings about sexually explicit images on their children’s devices, parents will have to enroll their child’s phone. Kids over 13 can unenroll, meaning parents of teenagers won’t get notifications.
Apple said neither feature would compromise the security of private communications or notify police.
John Clark, the president and CEO of the National Center for Missing and Exploited Children, in a statement called it a “game changer”.
“With so many people using Apple products, these new safety measures have life-saving potential for children,” Mr Clark said.
Julia Cordua, the CEO of Thorn, said that Apple’s technology balances “the need for privacy with digital safety for children”.
Thorn, a nonprofit founded by Demi Moore and Ashton Kutcher, uses technology to help protect children from sexual abuse by identifying victims and working with tech platforms.
On Twitter, many people criticised the move.
One man called it "an authoritarian dream".
"This is pretty messed up," he tweeted.
Another man tweeted he could see the information being "misused" while one woman suggested it was a "terrible idea".
"I'm all in for stopping this kind of stuff, but invading privacy isn't really the right way though. This pretty much means we will have less privacy," another man tweeted.
However, other people supported the move and suggested the only ones who should be concerned are perpetrators themselves.
"I'd rather get flagged accidentally for family photos than let pedophiles keep getting away with abuse," one man tweeted.
Is Apple compromising its own beliefs?
Tech companies including Microsoft, Google, Facebook and others have for years been sharing "hash lists" of known images of child sexual abuse.
Apple has also been scanning iCloud, which, unlike its messages, is not end-to-end encrypted, for such images.
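In simplified form, hash-list matching works by comparing a fingerprint of each photo against a shared database of fingerprints of known abuse images. The sketch below is illustrative only: the hash value, function names and sample inputs are invented for this example, and real systems such as PhotoDNA or Apple's neuralMatch use perceptual hashes that survive resizing and re-compression, not the exact SHA-256 matching shown here.

```python
import hashlib

# Hypothetical "hash list" of fingerprints of known abuse images,
# of the kind tech companies share between platforms.
KNOWN_HASHES = {
    # SHA-256 digest of the sample bytes b"foo", standing in for a known image
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    # Simplified: an exact SHA-256 digest. Production systems use
    # perceptual hashing so that cropped or re-encoded copies still match.
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_for_review(image_bytes: bytes) -> bool:
    # A match only queues the image for human review; it does not
    # automatically notify law enforcement.
    return fingerprint(image_bytes) in KNOWN_HASHES

print(should_flag_for_review(b"foo"))                     # matches the list -> True
print(should_flag_for_review(b"harmless holiday photo"))  # no match -> False
```

Because only fingerprints are compared, the matching can in principle run without the company reading the photos themselves, which is the privacy argument Apple makes for the approach.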
The company has been under pressure from governments and law enforcement to allow for surveillance of encrypted data.
Matthew Green, a security professor at Johns Hopkins University who earlier posted his concerns on Twitter, told the Financial Times that Apple's move will "break the dam - governments will demand it from everyone".
Alec Muffett, a security researcher and privacy campaigner, accused Apple of enabling an Orwellian future akin to that of 1984.
Former National Security Agency computer intelligence consultant Edward Snowden tweeted: "Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow".
The Electronic Frontier Foundation, the online civil liberties pioneer, called Apple's compromise on privacy protections "a shocking about-face for users who have relied on the company's leadership in privacy and security".
Mr Phair said it was an unusual move too, citing Apple's 2016 refusal to unlock for the FBI an iPhone used by a killer in the San Bernardino, California, mass shooting a few months earlier. The FBI sought outside help after Apple rebuffed the agency's efforts to make the company create a security backdoor into iPhone technology.
Apple’s refusal to cooperate with the FBI at the time became a political hot potato pitting the rights of its customers against the broader interests of public safety.
But Mr Phair said "child abuse material is a good exception" to Apple's previous stance on privacy.
He added “competence in law enforcement” would be needed to execute the program for its intended purpose and if it happened in Australia he would want to see reporting back through parliament.
“I would want to know how many times the police have been given stuff from Apple, how many arrests were made and how many people were locked up,” Mr Phair said.
Mr Phair said there is a chance of “false positives” as the AI learns what is and is not specifically abusive material. He said there was still a need for human oversight.
with The Associated Press