After a wave of criticism from the public and security experts, Apple has decided to postpone the launch of its iCloud photo scanning feature. The company had earlier said it would scan cloud storage for child sexual abuse material.
In early August, Apple announced that it would deploy a new system called NeuralHash in the United States. Its task is to check photos in iCloud users' libraries for child sexual abuse material. The project was heavily criticized by security experts: even though the system is supposed to match photos only against known abuse imagery, attempts to depersonalize the analysis do not remove the core problem, namely standing access to user files, experts noted.
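Apple has not published NeuralHash's internals in full, but the general approach behind such systems is perceptual hashing: each photo is reduced to a compact fingerprint that is compared against a database of fingerprints of known abuse images, so raw pixels never need to match exactly. Below is a minimal sketch of that idea in Python, using a simple average hash as a stand-in for Apple's neural-network hash; the `KNOWN_HASHES` database and the `threshold` value are hypothetical illustrations, not Apple's actual parameters.

```python
from PIL import Image  # requires the Pillow package

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint


def average_hash(path: str) -> int:
    """Toy perceptual hash: downscale, grayscale, threshold at the mean.

    NeuralHash itself derives the fingerprint with a neural network; this
    average hash only illustrates the concept of comparing fingerprints
    rather than raw image data.
    """
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical database of fingerprints of known prohibited images.
KNOWN_HASHES = {0x8F3C0E1F0F0F0703}


def matches_known(path: str, threshold: int = 4) -> bool:
    """Flag a photo whose fingerprint is within `threshold` bits of a known one."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

In Apple's published design, this matching was to happen on the device itself against a cryptographically blinded database, with the server learning only about accumulated matches past a threshold; critics' objection, reflected in this article, was that the scanning infrastructure exists on the user's device at all, regardless of those safeguards.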
As a result, the company issued the following statement today:
Last month, we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and to limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.
At the same time, the company did not say whom it will consult on the implementation of the new system, or what kind of information it will collect.
Apple originally announced the features on August 5, 2021, stating that they would arrive later that year. They include detection of child sexual abuse imagery stored in iCloud Photos and, separately, blocking of potentially harmful messages sent to children.
Industry experts and prominent figures such as Edward Snowden responded to the initiative with an open letter asking Apple not to implement these features, arguing that they could be repurposed to surveil users.