
improve GUIHacker, add TTT, fix RSS, add blog post

2022-02-25 20:10:16 +00:00
parent d30cf5ed1c
commit bac2a75be6
8 changed files with 584 additions and 22 deletions


@@ -21,4 +21,4 @@ Update (19/07/2021): also consider reading [this](https://boingboing.net/2012/01
 Update (06/08/2021): [Oh look, Apple just did the client-side scanning thing](https://appleprivacyletter.com/). I do not think this sets a good precedent; this is the most obviously defensible use case for this technology, and now future extensions can just be portrayed as a natural progression of it. The best case is that this is a prelude to E2EE iCloud, but this is still a fundamental hole in the security of such a thing. Whatever happens, given government pressure, reverting this will be quite hard.
-Update (19/08/2021): As it turns out, NeuralHash, which Apple intend to use for the above, is [easily collidable](https://github.com/anishathalye/neural-hash-collider) (using a fairly generic technique which should be applicable to any other neural-network-based implementation). This seems like something which should have been caught prior to release. And apparently it has [significant variations](https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX) from floating-point looseness, somehow. The "1 in 1 trillion" false positive rate is maybe not very likely. It [is claimed](https://www.theverge.com/2021/8/18/22630439/apple-csam-neuralhash-collision-vulnerability-flaw-cryptography) that this is not a significant issue primarily because the hashes are secret (because of course); however, this still creates possible issues for the system, like editing the hash of an actually-bad image to avoid detection, or (with this and some way to get around the later review stages, like [adversarial image scaling](https://bdtechtalks.com/2020/08/03/machine-learning-adversarial-image-scaling/) or just using legal content likely to trigger a human false-positive) generating otherwise okay-looking images which are flagged. Also, the [Apple announcement](https://www.apple.com/child-safety/) explicitly says "These efforts will evolve and expand over time", which is a worrying thing I did not notice before.
+Update (19/08/2021): As it turns out, NeuralHash, which Apple intend to use for the above, is [easily collidable](https://github.com/anishathalye/neural-hash-collider) (using a fairly generic technique which should be applicable to any other neural-network-based implementation). This seems like something which should have been caught prior to release. And apparently it has [significant variations](https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX) from floating-point looseness, somehow. The "1 in 1 trillion" false positive rate is maybe not very likely. It [is claimed](https://www.theverge.com/2021/8/18/22630439/apple-csam-neuralhash-collision-vulnerability-flaw-cryptography) that this is not a significant issue primarily because the hashes are secret (because of course); however, this still creates possible issues for the system, like editing the hash of an actually-bad image to avoid detection, or (with this and some way to get around the later review stages, like [adversarial image scaling](https://bdtechtalks.com/2020/08/03/machine-learning-adversarial-image-scaling/) or just [using legal content likely to trigger a human false-positive](https://news.ycombinator.com/item?id=28238071)) generating otherwise okay-looking images which are flagged. Also, the [Apple announcement](https://www.apple.com/child-safety/) explicitly says "These efforts will evolve and expand over time", which is a worrying thing I did not notice before.
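
To make the collision attack concrete, here is a minimal sketch of the gradient-descent approach (in the spirit of neural-hash-collider, not its actual code), assuming a differentiable embedding network `model` and a projection matrix `proj` as stand-ins for the extracted NeuralHash components; all names and hyperparameters are illustrative:

```python
# Hedged sketch of a gradient-descent collision attack on a neural perceptual
# hash. `model` (image -> embedding) and `proj` (embedding -> hash logits) are
# hypothetical stand-ins for the extracted NeuralHash network and projection.
import torch

def collide(model, proj, image, target_bits, steps=2000, lr=1e-2, eps=8/255):
    """Perturb `image` so that sign(proj @ model(image)) equals `target_bits`."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = target_bits.float() * 2 - 1  # map {0,1} bits to {-1,+1} signs
    for _ in range(steps):
        logits = proj @ model((image + delta).clamp(0, 1)).flatten()
        if torch.equal((logits > 0).float(), target_bits.float()):
            break  # the hashes now collide
        # The sign() binarisation is non-differentiable, so optimise a hinge
        # surrogate that pushes each pre-sign logit past a small margin.
        loss = torch.relu(0.05 - target * logits).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation visually small
    return (image + delta).detach().clamp(0, 1)
```

Nothing here is specific to NeuralHash: any perceptual hash built from a differentiable network admits the same surrogate-loss trick, which is why the technique should transfer to other neural-network-based implementations.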