Child safety group Heat Initiative plans to launch a campaign pressing Apple on child sexual abuse material scanning and user reporting. The company issued a rare, detailed response on Thursday.
I never supported it: the scanning was on-device, and since this is the US, hashes to spot "extremism" could have been added to the list, because Apple doesn't know what the hashes actually represent.
No, you're wrong. These are not cryptographic hashes. They're "perceptual" or "fuzzy" hashes, which are basically just a low-resolution copy of the original image. It's trivial for an attacker to maliciously send innocent-seeming images that are a hash collision. This is, by the way, a feature, not a bug: perceptual hashes are not designed to perform an exact match.
There are plenty of freely available white papers on how perceptual hashes work, and Facebook's implementation is even open source.
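To make the "low-resolution copy" idea concrete, here's a minimal sketch of an "average hash" (aHash), one of the simplest perceptual hashes. This is an illustration only, not Apple's NeuralHash or Facebook's PDQ; those are far more sophisticated, but the principle is the same: boil the image down to a tiny summary so that visually similar images land on identical or nearby hashes.

```python
import random

def average_hash(pixels, size=8):
    """pixels: 2D list of grayscale values (0-255). Returns a 64-bit int."""
    h, w = len(pixels), len(pixels[0])
    # Downscale to size x size by crude block averaging.
    small = []
    for i in range(size):
        row = []
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            row.append(sum(block) / len(block))
        small.append(row)
    mean = sum(sum(r) for r in small) / (size * size)
    # One bit per cell: is this cell brighter than the overall mean?
    bits = 0
    for r in small:
        for v in r:
            bits = (bits << 1) | (v > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A synthetic 64x64 "image" and a mildly noisy copy of it hash to
# nearby values; that tolerance is what makes collisions manufacturable.
random.seed(0)
img = [[(x * y) % 256 for x in range(64)] for y in range(64)]
noisy = [[min(255, v + random.randint(0, 5)) for v in row] for row in img]
print(hamming(average_hash(img), average_hash(noisy)))  # small distance
```

Because matching is done on Hamming distance rather than exact equality, an attacker only needs to craft an image whose 64-bit summary lands close enough to a target's, which is vastly easier than finding a collision in a cryptographic hash.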
Apple said they tested 100 million perfectly legal images and three collided with a CSAM perceptual hash. Given how many photos Apple was proposing to scan (hundreds of trillions), that collision rate works out to millions of false positives, even if nobody maliciously abused the system.
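The back-of-the-envelope arithmetic behind that claim, taking "hundreds of trillions" as roughly 2×10^14 (an assumed figure for illustration):

```python
# 3 collisions observed in 100 million legal images (Apple's reported test).
rate = 3 / 100_000_000           # observed collision rate: 3e-8
photos = 200 * 10**12            # "hundreds of trillions", taken as 2e14
expected = rate * photos         # expected false positives at that scale
print(f"{expected:,.0f}")        # 6,000,000
```

Even at a tenth of that library size, the same rate would still yield hundreds of thousands of spurious matches.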
And because of all that, Apple was planning human review of every flagged photo. They would therefore have seen every match, including every false positive; it couldn't have been hidden from Apple.
What makes you say Apple didn't know what they are? Is this a thing that happened that I'm not aware of?
If Apple is only supplied the hashes, it can't tell why the underlying files were flagged as bad.