---
title: Against the "Online Safety Bill"
description: In which I get annoyed at yet more misguided UK government behaviour.
created: 08/07/2021
updated: 19/08/2021
slug: osbill
---
I recently found out that the UK government aims to introduce the "[Online Safety Bill](https://www.gov.uk/government/publications/draft-online-safety-bill)" and read about it somewhat (no, I have not actually read much of the (draft) bill itself; it is 145 pages, with another 146 of impact assessments and 123 of explanatory notes, and so out of reach of all but very dedicated lawyers), and, as someone on the internet, I think it would be very harmful. This has already been detailed quite extensively, and probably better than I [can](https://techcrunch.com/2021/05/12/uk-publishes-draft-online-safety-bill/) [manage](https://www.openrightsgroup.org/blog/access-denied-service-blocking-in-the-online-safety-bill/), [elsewhere](https://matrix.org/blog/2021/05/19/how-the-u-ks-online-safety-bill-threatens-matrix), so I'll just summarize my issues relatively quickly.
Firstly, it appears to impose an unreasonable number of requirements on essentially every internet service (technically mine, too, due to the comments box!): risk assessments, probably age verification (age is somewhat sensitive information which it would not be good for all websites to have to collect), fees for companies of a certain size (I think this is just set by OFCOM), and, more generally, removing "harmful content", on pain of being blocked/sanctioned/fined. Not *illegal* content, just "content that is harmful to children/adults" (as defined on page 50 or so). The bill is claimed to deal with the excesses of Facebook and other large companies, and they certainly have problems, but it affects much more than that (and doesn't seem to do much about their actual main problems: misaligned incentives with users causing optimization for engagement over all else, privacy violations, and monopolistic behaviour).
Secondly, despite the slight commitment to freedom of speech in the form of also giving web services "A duty to have regard to the importance of protecting users' right to freedom of expression within the law" (page 22), this is [not something the bill](https://www.theregister.com/2021/06/23/online_safety_bill_legal_type_legal_say/) [would actually be good for](https://www.bbc.co.uk/news/technology-57569336) [on net](https://cpj.org/2021/05/uk-online-safety-bill-raises-censorship-concerns-and-questions-on-future-of-encryption/); onerous compliance requirements will hit smaller platforms hardest, and the various enforcement measures would make OFCOM into a [powerful censorship body](https://www.openrightsgroup.org/blog/is-government-preparing-to-censor-discussions-about-migration/) (the Secretary of State is also allowed to arbitrarily alter the "codes of practice"). It also contains limited bodged-in exceptions for "journalistic content" and "content of democratic importance", but this is not a general solution to anything, and I don't think limiting free speech protections to certain higher-status groups is a good idea.
Finally, while the bill never explicitly mentions end-to-end encryption (which I wanted to write about here before, but never got round to), the government has [for](https://www.gov.uk/government/publications/international-statement-end-to-end-encryption-and-public-safety) [some](https://www.techdirt.com/articles/20210402/23434546545/uk-politicians-getting-serious-about-ending-end-to-end-encryption.shtml) [time](https://www.techdirt.com/articles/20190928/18254143088/no-new-agreement-to-share-data-between-us-uk-law-enforcement-does-not-require-encryption-backdoors.shtml) [been](https://www.telegraph.co.uk/news/2017/07/31/dont-want-ban-encryption-inability-see-terrorists-plotting-online/) against end-to-end encryption under the usual pretext of terrorism [and](https://www.childrenscommissioner.gov.uk/report/access-denied-how-end-to-end-encryption-threatens-childrens-safety-online/) [child](https://www.gov.uk/guidance/private-and-public-channels-improve-the-safety-of-your-online-platform) [safety](https://techcrunch.com/2021/06/30/uk-tells-messaging-apps-not-to-use-e2e-encryption-for-kids-accounts/) (which *are* frequently mentioned in the bill), and it contains [powers](https://www.openrightsgroup.org/blog/endgame-for-end-to-end-encryption/) which could be used to further their ongoing campaign against it (the "technology notice"). It's claimed that they don't want to break the security and privacy E2EE provides and just want some form of "lawful access", but these are [incompatible](https://www.schneier.com/wp-content/uploads/2016/09/paper-keys-under-doormats-CSAIL.pdf), regardless of the claims of [incompetent politicians](https://www.theguardian.com/technology/2017/apr/04/amber-rudd-necessary-hashtags-confusion-online-images-videos-home-office); a cryptographic system can't know what's lawful or ethical, only whether someone has a particular cryptographic key. Any mechanism for leaking communications to someone other than the intended recipient fundamentally breaks the security of a system, if sometimes in less bad ways than having no end-to-end encryption at all.
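
To make that concrete, here is a minimal sketch of end-to-end encryption using the Python `cryptography` library (the names, message and `info` label are just illustrative). Everything in it is built from primitives which ship on essentially every device, and the only thing determining whether decryption succeeds is possession of the right private key; there is no step at which "lawfulness" could be consulted without giving the key or the plaintext to someone else.

```python
# Minimal E2EE sketch: X25519 key agreement + HKDF + AES-GCM.
# Nothing here knows or cares what the message "is"; decryption works
# if and only if you hold the corresponding private key.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a keypair; only the public halves ever leave the device.
alice_key = X25519PrivateKey.generate()
bob_key = X25519PrivateKey.generate()

def shared_aes_key(own_private, peer_public):
    # Diffie-Hellman agreement, then derive a symmetric key from the shared secret.
    secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"e2ee-demo").derive(secret)

# Alice encrypts for Bob.
nonce = os.urandom(12)
ciphertext = AESGCM(shared_aes_key(alice_key, bob_key.public_key())).encrypt(nonce, b"hello", None)

# Bob, holding his private key, can decrypt; anyone without it cannot.
assert AESGCM(shared_aes_key(bob_key, alice_key.public_key())).decrypt(nonce, ciphertext, None) == b"hello"
```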
Given the obvious impossibility of actually banning strong E2EE systems (the basic concepts are well-documented and the cryptographic primitives are on every computer and [built into recent CPUs](https://en.wikipedia.org/wiki/AES_instruction_set)), the most plausible approach is to break the security of popular E2EE platforms and discourage using others, exactly what this law seems designed to do. This probably wouldn't help with the issues used as justification, inasmuch as anyone looking to do terrorism could just use something with non-crippled security, but would negatively affect the average user - if the default is not private, the majority of communications will be [left unprotected](https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors) from access by social media companies themselves, data breaches, current or future governments/government agencies, or rogue employees. The other alternative proposed is client-side scanning, where messages remain end-to-end encrypted but are checked for objectionable content on-device. This has similar issues though - if such a system exists, it [can be retasked](https://twitter.com/matthew_d_green/status/1392814211122843651) for anything else, and it would probably be designed as a black box such that users don't know exactly what is banned/flagged.
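
As an illustration of that retasking problem, here is a deliberately simplified sketch of what client-side scanning amounts to structurally (real deployments use perceptual hashes like NeuralHash and threshold/reporting schemes rather than exact SHA-256 matches, and the function name and list contents here are made up): the device matches content against an opaque, operator-supplied database before encryption, and nothing in the mechanism itself constrains what that database targets.

```python
# Simplified client-side scanning sketch. The interesting property is not the
# hashing but the trust structure: the match list is supplied by the operator
# (or whoever can pressure them) and is opaque to the user.
import hashlib

# Opaque match list pushed to the device; in a real deployment the user cannot
# read it, so they cannot tell whether it targets abuse imagery, leaked
# documents or protest flyers.
BLOCKED_HASHES = {
    hashlib.sha256(b"example of content the list's operator wants flagged").digest(),
}

def scan_before_encrypting(attachment: bytes) -> bool:
    """Runs on-device, before encryption; True means the attachment gets reported."""
    return hashlib.sha256(attachment).digest() in BLOCKED_HASHES

assert scan_before_encrypting(b"example of content the list's operator wants flagged")
assert not scan_before_encrypting(b"an ordinary holiday photo")
```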
If you are in fact in the UK, I hope this has convinced you to do something about it, as it becoming law would be harmful to freedom of speech, the relative openness of the current web, security, and not having authoritarian governments in the future. I don't know what can usefully be done about this, as on similar issues contacting my MP has proven entirely unhelpful, but who knows. If I've made errors, which is entirely plausible, please alert me by whatever method is convenient so they can be fixed.
Update (19/07/2021): also consider reading [this](https://boingboing.net/2012/01/10/lockdown.html), which addresses this sort of thing as a result of more general problems.
Update (06/08/2021): [Oh look, Apple just did the client-side scanning thing](https://appleprivacyletter.com/). I do not think this sets a good precedent; this is the most obviously defensible use case for the technology, and future expansions of scope can now be portrayed as natural extensions of it. The best case is that this is a prelude to E2EE iCloud, but even then it would be a fundamental hole in the security of such a thing. Whatever happens, given government pressure, reverting this will be quite hard.
Update (19/08/2021): As it turns out, NeuralHash, which Apple intend to use for the above, is [easily collidable](https://github.com/anishathalye/neural-hash-collider) (using a fairly generic technique which should be applicable to any other neural-network-based implementation). This seems like something which should have been caught prior to release. Apparently its output also has [significant variations](https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX) due to floating-point looseness, somehow, so the "1 in 1 trillion" false positive rate is maybe not very likely. It [is claimed](https://www.theverge.com/2021/8/18/22630439/apple-csam-neuralhash-collision-vulnerability-flaw-cryptography) that this is not a significant issue, primarily because the hashes are secret (because of course they are); however, it still creates possible attacks on the system, like editing an actually-bad image so its hash no longer matches and it escapes detection, or (combined with some way to get past the later review stages, like [adversarial image scaling](https://bdtechtalks.com/2020/08/03/machine-learning-adversarial-image-scaling/) or just [using legal content likely to trigger a human false positive](https://news.ycombinator.com/item?id=28238071)) generating otherwise okay-looking images which are flagged. Also, the [Apple announcement](https://www.apple.com/child-safety/) explicitly says "These efforts will evolve and expand over time", which is a worrying thing I did not notice before.
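
For the curious, the "fairly generic technique" is roughly this: treat the hash network as a differentiable function and gradient-descend a small perturbation of an image until its (pre-binarisation) outputs match a chosen target hash. Below is a sketch in PyTorch; the tiny CNN is a stand-in I made up rather than NeuralHash itself, and the loss weights and iteration count are arbitrary, but the same approach works on any network whose gradients you can compute.

```python
# Generic targeted-collision attack on a neural perceptual hash via gradient descent.
import torch
import torch.nn as nn

HASH_BITS = 96

# Stand-in differentiable "perceptual hash": any network ending in HASH_BITS logits
# works the same way. This is NOT NeuralHash, just a placeholder with its shape.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, HASH_BITS),
).eval()

def hash_of(x):
    # Neural perceptual hashes typically binarise the final logits by sign.
    return model(x) > 0

source = torch.rand(1, 3, 128, 128)                        # image we want to keep looking normal
target_bits = hash_of(torch.rand(1, 3, 128, 128)).float()  # hash we want to collide with

delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(2000):
    adv = (source + delta).clamp(0, 1)
    logits = model(adv)
    # Hinge loss pushing each logit's sign towards the target bit, plus a small
    # penalty keeping the perturbation visually negligible.
    loss = torch.relu(1 - (2 * target_bits - 1) * logits).mean() + 0.1 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (hash_of((source + delta).clamp(0, 1)) == target_bits.bool()).all():
        print(f"collision after {step} steps")
        break
```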