Daring Fireball: Apple’s New ‘Child Safety’ Initiatives, and the Slippery Slope

August 7, 2021

All of these features are fairly grouped together under a “child safety” umbrella, but I can’t help but wonder if it was a mistake to announce them together. Many people are clearly conflating them, including those reporting on the initiative for the news media. E.g. The Washington Post’s “never met an Apple story that couldn’t be painted in the worst possible light” Reed Albergotti’s report, the first three paragraphs of which are simply wrong and the headline for which is grossly misleading (“Apple Is Prying Into iPhones to Find Sexual Predators, but Privacy Activists Worry Governments Could Weaponize the Feature”).

Not surprisingly, this is the first really good, non-hyperbolic summary of everything Apple announced they’re doing on the topic.

  • On-device, in the Messages app, neural analysis of images for possible sensitive content sent or received… If the user is under 12, parents can opt in to receive a warning; over 12, the user can be notified but parents won’t be… And none of this is ever reported to any kind of authorities, nor is any content sent to Apple or anyone else.
  • Likewise, on-device updates to Siri and Search around sensitive content, with the same kind of parental opt-in notifications for under-12 users, or notifications to just the user otherwise, similar to above.

  • Most misunderstood… CSAM image fingerprint comparisons. Not sending images, not even scanning the content of images, but creating a verifiable hash of each image which can be compared with fingerprints in the National Center for Missing and Exploited Children (NCMEC) systems… And if enough of those match the NCMEC system, a human review of those fingerprints is triggered for confirmation before any further alarms are potentially raised. These cryptographic hashes, depending on the algorithm, should be effectively unique to any given image, so the odds should be worse than lottery odds that a photo in your library ever produces a single false positive against a sensitive image in the NCMEC database, much less enough matches to trigger further action.
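The match-and-threshold idea described in that last bullet can be illustrated with a minimal sketch. This is not Apple’s actual implementation: it uses SHA-256 exact hashing as a stand-in for the real image fingerprinting, and the fingerprint database, function names, and review threshold are all hypothetical.

```python
import hashlib

# Hypothetical database of known fingerprints (hex digests), standing in
# for the NCMEC fingerprint set. Contents here are purely illustrative.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known-image-a").hexdigest(),
    hashlib.sha256(b"known-image-b").hexdigest(),
}

# Illustrative threshold: fewer matches than this never triggers review.
REVIEW_THRESHOLD = 2

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a real image fingerprint algorithm."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(library: list[bytes]) -> int:
    """Count how many library images match a known fingerprint."""
    return sum(1 for img in library if fingerprint(img) in KNOWN_FINGERPRINTS)

def needs_human_review(library: list[bytes]) -> bool:
    # Only the count of fingerprint matches is considered; the system
    # never inspects image content, and a single match is not enough.
    return count_matches(library) >= REVIEW_THRESHOLD
```

The point of the threshold is the one made above: even if a fingerprint collision ever produced a single false positive, one match alone raises no alarm.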

These seem to be extremely well-thought-out, best-compromise answers to really difficult problems, and by far the most privacy-forward answers of anyone in the tech world so far.
