Apple, which basked in praise from many corners after refusing to cooperate with federal authorities following the 2015 mass shooting in San Bernardino, California, is now aggressively walking back its plan to unilaterally scan customers’ phones for child sexual abuse material (CSAM) on behalf of governments, following a swift and furious backlash from privacy advocates. Per a Friday afternoon report from Reuters, the company clarified that it will only use its proposed system to look for images that have already “been flagged by clearinghouses in multiple countries.”
A threshold of 30 flagged images would have to be reached before the automated scanning system alerted Apple that a human should review the account, though the company said that figure would be lowered over time. Apple also gave assurances that its list of image identifiers is universal and will remain constant regardless of the device it is applied to.
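In rough terms, that logic amounts to a simple counter, though Apple has described the real mechanism as a cryptographic threshold scheme rather than a plaintext tally. The Python sketch below is purely illustrative; the function and constant names are ours, not Apple’s.

```python
# Illustrative only: Apple's actual design reportedly uses threshold
# secret sharing, so no running count is visible to the server until
# the threshold is actually crossed.
MATCH_THRESHOLD = 30  # the initial figure Apple cited, slated to shrink over time

def should_escalate_to_human_review(matched_image_count: int) -> bool:
    """Alert a human reviewer only once enough flagged matches accumulate."""
    return matched_image_count >= MATCH_THRESHOLD

# 29 matches stay below the line; the 30th trips human review.
assert not should_escalate_to_human_review(29)
assert should_escalate_to_human_review(30)
```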
As Apple explained during Friday’s media call, the company’s technical protection is an encrypted on-device CSAM hash database derived from two or more organizations, each operating under the auspices of a separate national government.
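A back-of-the-envelope way to picture that rule: a hash makes it onto devices only if organizations answering to at least two different governments have independently flagged it. The sketch below uses made-up organization names and hash values; Apple has not published implementation details.

```python
# Hypothetical sketch of the "two or more organizations, separate
# governments" rule; all names and data here are invented.
from typing import Dict, Set

def build_on_device_database(org_hashes: Dict[str, Set[str]],
                             org_country: Dict[str, str],
                             min_countries: int = 2) -> Set[str]:
    """Keep only hashes vouched for by orgs in at least min_countries distinct countries."""
    every_hash = set().union(*org_hashes.values())
    kept = set()
    for h in every_hash:
        countries = {org_country[org] for org, hashes in org_hashes.items() if h in hashes}
        if len(countries) >= min_countries:
            kept.add(h)
    return kept

# Only "hash_a" is flagged by organizations under two different governments.
orgs = {"OrgUS": {"hash_a", "hash_b"}, "OrgUK": {"hash_a", "hash_c"}}
countries = {"OrgUS": "US", "OrgUK": "UK"}
assert build_on_device_database(orgs, countries) == {"hash_a"}
```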
The company, during one of many media-assuaging follow-up meetings this week, declined to comment on whether the blowback has had any effect on its position, though it did admit there was “confusion” surrounding its earlier announcements. Apple did assert that the program was “still in development” and that mulligans like this are a normal part of the production process.
The practice of scanning user accounts for contraband images is old hat for the tech industry, but Apple’s plan to install the monitoring software directly on the hardware itself is an unprecedented move, one that has privacy advocates up in arms. Their concern is that governments could demand Apple scour its users’ devices for other private, political, religious, or personal information once the basic capabilities are in place.