Aug 6 (Reuters) – Apple Inc (AAPL.O) will roll out its photo-checking system for child sexual abuse images on a country-by-country basis, depending on local laws, the company said on Friday.
A day earlier, Apple said it would implement a system that screens photos for such images before they are uploaded from iPhones in the United States to its iCloud storage.
Child safety groups have praised Apple for joining Facebook Inc (FB.O), Microsoft Corp (MSFT.O) and Alphabet Inc’s (GOOGL.O) Google in taking such action.
But Apple’s checking of photos on the iPhone itself has raised concerns that the company is probing into users’ devices in ways that could be exploited by governments. Many other technology companies check photos only after they are uploaded to servers.
At a press conference on Friday, Apple said it plans to expand the service based on the laws of each country where it operates.
The company said features of its system, such as the “safety vouchers” transmitted from the iPhone to Apple’s servers, which do not contain the underlying image data, will protect Apple from government pressure to identify material other than child abuse images.
Apple also has a human review process that acts as a safeguard against government abuse, it added. The company will not pass reports from its photo-checking system to law enforcement if the review finds no child abuse imagery.
Regulators are increasingly demanding that tech companies do more to remove illegal content. In recent years, law enforcement and politicians have used the scourge of child pornography to denounce strong encryption, much as they previously cited the need to fight terrorism.
Some of the resulting laws, including in Britain, could be used to force tech companies to act in secret against their users.
While Apple’s moves may deflect government interference by showing initiative, or may comply with anticipated directives in Europe, many security experts said the privacy champion was making a big mistake by showing its willingness to reach into customers’ phones.
“It may have deflected US regulators’ attention from this topic, but it will draw regulators around the world to demand the same for terrorist and extremist content,” said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory.
Politically influential copyright holders in Hollywood and elsewhere might even argue that their digital rights should be enforced in this way, she said.
Facebook’s WhatsApp, the world’s largest fully encrypted messaging service, is also under pressure from governments that want to see what people are saying, and it fears that pressure will now escalate. WhatsApp head Will Cathcart tweeted a barrage of criticism at Apple on Friday over the new architecture.
“We have had personal computers for decades, and there has never been a mandate to scan the private content of every desktop, laptop or phone in the world for illegal content,” he wrote. “This is not how technology built in free countries works.”
Apple executives have argued that the company is not really reaching into people’s phones, because the data sent to its devices must clear several hurdles. For example, banned material is flagged by watchdog groups, and the identifiers are bundled into Apple’s operating systems worldwide, making them harder to manipulate.
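The mechanism described above can be pictured as a device-side lookup: photos are fingerprinted and compared against a list of identifiers shipped with the operating system, before anything leaves the phone. The sketch below is a deliberately simplified illustration of that idea, not Apple’s actual system (which uses a perceptual “NeuralHash” and cryptographic matching rather than a plain hash set); the names and data here are hypothetical.

```python
import hashlib

# Hypothetical list of identifiers bundled with the OS, derived from
# material flagged by watchdog groups. A plain SHA-256 stands in for a
# perceptual hash purely for illustration.
BUNDLED_IDENTIFIERS = {
    hashlib.sha256(b"known-flagged-sample").hexdigest(),
}

def fingerprint(photo_bytes: bytes) -> str:
    """Stand-in fingerprint. A real system would use a perceptual hash
    robust to resizing and re-encoding, not a cryptographic hash."""
    return hashlib.sha256(photo_bytes).hexdigest()

def matches_bundled_list(photo_bytes: bytes) -> bool:
    """Check a photo against the bundled identifiers before upload."""
    return fingerprint(photo_bytes) in BUNDLED_IDENTIFIERS

print(matches_bundled_list(b"known-flagged-sample"))  # True
print(matches_bundled_list(b"vacation-photo"))        # False
```

Because the identifier list ships identically inside the operating system worldwide, it cannot easily be customized to target a single user, which is the point Apple’s argument rests on.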
Some experts said they saw reason to hope that Apple has not fundamentally changed direction.
As Reuters reported last year, the company had been working to encrypt iCloud backups end to end, meaning it could not hand readable versions over to law enforcement. It abandoned the project after the FBI objected.
Apple could be setting the stage to enable that encryption later this year, using this week’s measures to blunt early criticism of the change, said Alex Stamos, founder of the Stanford Internet Observatory.
Apple declined to comment on future product plans.
Reporting by Akanksha Rana in Bengaluru and Stephen Nellis and Joseph Menn in San Francisco; Editing by Shounak Dasgupta and Richard Chang
Our standards: Thomson Reuters Trust Principles.