Apple recently announced that it will scan all images uploaded to iCloud for CSAM (Child Sexual Abuse Material), and that this scan will happen locally on users’ phones. We take a look at the technical report and explore how the system works in detail, how it is designed to preserve user privacy, and what weak points it still has.
0:00 - Introduction
3:05 - System Requirements
9:15 - System Overview
14:00 - NeuralHash
20:45 - Private Set Intersection
31:15 - Threshold Secret Sharing
35:25 - Synthetic Match Vouchers
38:20 - Problem 1: Who controls the database?
42:40 - Problem 2: Adversarial Attacks
49:40 - Comments & Conclusion
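The private set intersection step (20:45 in the video) can be illustrated with a toy Diffie–Hellman-style PSI. This is a simplified sketch, not Apple's actual protocol (which additionally uses cuckoo tables, encrypted safety vouchers, and threshold secret sharing); the modulus and hash-to-group mapping below are illustrative only and not secure parameters:

```python
import hashlib
import math
import secrets

# Toy Diffie-Hellman-style PSI sketch (NOT Apple's actual protocol, and not
# secure as written -- the group and hash mapping are illustrative only).
P = 2**127 - 1  # a Mersenne prime, used here as a toy group modulus

def hash_to_group(item: str) -> int:
    """Map an item to a nonzero element mod P (toy mapping)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1

def random_key() -> int:
    """Pick an exponent invertible mod P-1, so masking commutes and hides items."""
    while True:
        e = secrets.randbelow(P - 1)
        if e > 1 and math.gcd(e, P - 1) == 1:
            return e

def psi(client_set, server_set):
    items = sorted(client_set)                 # fix an iteration order
    a, b = random_key(), random_key()          # client / server secrets
    # Client masks its items and sends only the masked values to the server.
    masked_client = [pow(hash_to_group(x), a, P) for x in items]
    # Server masks the client's values again, and masks its own set once.
    double_masked = [pow(v, b, P) for v in masked_client]
    masked_server = {pow(hash_to_group(s), b, P) for s in server_set}
    # Client finishes masking the server's set; double-masked values collide
    # exactly when the underlying items match, because exponentiation commutes:
    # (h^a)^b = (h^b)^a mod P.
    server_final = {pow(v, a, P) for v in masked_server}
    return {x for x, dm in zip(items, double_masked) if dm in server_final}
```

Neither side ever sees the other's raw set, only masked group elements; the intersection is the only thing that becomes visible.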
From Apple’s CSAM Detection Technical Summary:

CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. Apple servers flag accounts exceeding a threshold number of images that match a known database of CSAM image hashes so that Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC). This process is secure, and is expressly designed to preserve user privacy.
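A minimal sketch of the threshold-based flagging logic described above. Everything here is made up for illustration: the real system compares NeuralHash values inside encrypted safety vouchers, so the server never sees plaintext hashes or per-image match results below the threshold:

```python
# Toy sketch of threshold-based account flagging (illustrative only: the
# real system matches NeuralHash values inside encrypted safety vouchers,
# not plaintext hashes held by the server).
KNOWN_CSAM_HASHES = {0xDEAD, 0xBEEF, 0xCAFE}  # stand-in for the NCMEC-derived database
THRESHOLD = 2  # account is flagged only when matches EXCEED this count

def should_flag_account(uploaded_image_hashes: list) -> bool:
    matches = sum(h in KNOWN_CSAM_HASHES for h in uploaded_image_hashes)
    # Below the threshold, the cryptographic design is meant to reveal
    # nothing about which (if any) individual images matched.
    return matches > THRESHOLD
```

With `THRESHOLD = 2`, an account with two matching images is not flagged; a third match tips it over.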
CSAM Detection provides these privacy and security assurances:
• Apple does not learn anything about images that do not match the known CSAM database.
• Apple can’t access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account.
• The risk of the system incorrectly flagging an account is extremely low. In addition, Apple manually reviews all reports made to NCMEC to ensure reporting accuracy.
• Users can’t access or view the database of known CSAM images.
• Users can’t identify which images were flagged as CSAM by the system.
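The second assurance above rests on threshold secret sharing (31:15 in the video): each matching image contributes one share of a per-account decryption key, and the metadata and visual derivatives can only be decrypted once enough shares exist. A minimal Shamir secret sharing sketch over a small prime field (parameters illustrative, not Apple's):

```python
import secrets

# Toy Shamir threshold secret sharing (illustrative parameters, not Apple's).
PRIME = 2**61 - 1  # Mersenne prime defining the finite field

def make_shares(secret: int, threshold: int, n: int):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Fewer shares than the threshold leave the key information-theoretically hidden, which is why Apple (by design) learns nothing about accounts below the match threshold.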
For detailed information about the cryptographic protocol and security proofs that the CSAM Detection process uses, see The Apple PSI System.