Friday 13 August 2021 12:01 pm

Apple child protection features spark concern among its own staff

Apple employees have spoken out internally about privacy concerns over the firm’s move to scan US customer phones and computers for child sex abuse images as protests over the new feature intensify, it is reported.

More than 800 messages were shared between Apple staff over several days on an internal Slack channel, sources told Reuters, which first reported the news.

Many voiced fears that the plan, announced last week, could be exploited by governments seeking to find other material in order to censor or arrest individuals.

While previous security changes at Apple have similarly prompted concern within its own ranks, the volume and duration of the new debate surprised workers, who wished to remain anonymous, according to Reuters.

Though coming mainly from employees outside of lead security and privacy roles, the pushback marks a shift for a company where a strict code of secrecy around new products colors other aspects of the corporate culture.

Some Apple employees have pushed back against the criticism in the company Slack thread while others have said that Slack was not the proper forum for the discussion.

Notably there were few protests from core security and privacy employees, with some even commenting that they felt Apple’s decision was a reasonable response to the pressure to deal with illegal material on its products.

Some said they hoped that the scanning feature would be a step toward fully encrypting iCloud for customers who want it, which would reverse Apple’s direction on the issue for a second time.

Apple announced its plans to search people’s iPhones for child sex abuse material (CSAM) using new technology last week, raising privacy concerns.

The tech giant confirmed that its “NeuralHash” technology will allow it to detect known CSAM images stored in iCloud Photos. The automated system will perform on-device checks of photos before they are uploaded to iCloud.

The system checks for matches with CSAM from a database compiled by the National Center for Missing and Exploited Children (NCMEC) and alerts human reviewers if illegal content is found. If the image is verified, the reviewer contacts law enforcement.
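In outline, the matching step described above works like a set-membership check: each photo is hashed on the device and compared against a database of known hashes, with human review triggered only after matches accumulate. The sketch below is illustrative only and is not Apple’s actual NeuralHash, which is a perceptual hash tolerant of image edits; here a plain cryptographic hash stands in for it, and the match threshold is a hypothetical placeholder.

```python
import hashlib

# Hypothetical threshold: flag for human review only after several matches,
# mirroring the idea that a single match should not trigger an alert.
MATCH_THRESHOLD = 3

def image_hash(image_bytes: bytes) -> str:
    """Placeholder for a perceptual hash of an image.
    (Real systems use perceptual hashing, not SHA-256.)"""
    return hashlib.sha256(image_bytes).hexdigest()

def check_uploads(images, known_hashes) -> bool:
    """Count on-device matches against the hash database; return True
    (i.e. escalate to human review) only when the count reaches the threshold."""
    matches = sum(1 for img in images if image_hash(img) in known_hashes)
    return matches >= MATCH_THRESHOLD

# Example usage with made-up byte strings standing in for image files:
database = {image_hash(b"known-1"), image_hash(b"known-2"), image_hash(b"known-3")}
print(check_uploads([b"holiday-photo"], database))                      # one benign photo: not flagged
print(check_uploads([b"known-1", b"known-2", b"known-3"], database))    # reaches threshold: flagged
```

The threshold is the key design choice here: requiring multiple independent matches before any human sees anything is what underpins Apple’s low false-positive claim quoted below.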

Apple said that the method was “designed with user privacy in mind” and claimed its technology provides “an extremely high level of accuracy”, with less than a one in a trillion chance of an incorrect flagging occurring.

But Matthew Green, a researcher at Johns Hopkins University, cautioned that repressive regimes worldwide could use the technology to surveil the public, pointing out that whoever controls the database can search for whatever content they want.

In a tweet responding to the announcement, he said: “Whether they turn out to be right or wrong on that point hardly matters. This will break the dam – governments will demand it from everyone.”

The new feature is due to be rolled out to US iPhones later this year in updates to iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.