---
title: Against the "Online Safety Bill"
description: In which I get annoyed at yet more misguided UK government behaviour.
created: 08/07/2021
updated: 19/08/2021
slug: osbill
---

::: epigraph attribution="Malcolm Turnbull"
The laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.
:::

I recently found out that the UK government aims to introduce the "Online Safety Bill", and have read about it somewhat (no, I have not actually read much of the (draft) bill itself; it is 145 pages, with another 146 of impact assessments and 123 of explanatory notes, and so out of reach of all but very dedicated lawyers). As someone on the internet, it seems very harmful to me. This has already been detailed quite extensively, and probably better than I can manage, elsewhere, so I'll just summarize my issues relatively quickly.

Firstly, it appears to impose an unreasonable number of requirements on essentially every internet service (technically mine, too, due to the comments box!): risk assessments, probably age verification (age is somewhat sensitive information which it would not be good for every website to have to collect), fees for companies of a certain size (I think these are just set by OFCOM), and, more generally, removing "harmful content", on pain of being blocked/sanctioned/fined. Not illegal content, just "content that is harmful to children/adults" (as defined on page 50 or so). The bill is claimed to deal with the excesses of Facebook and other large companies, and they certainly have problems, but it affects much more than them, and doesn't do much to address their main problems: misaligned incentives with users causing optimization for engagement above all else, privacy violations, and monopolistic behaviour.

Secondly, despite the slight commitment to freedom of speech in the form of also giving web services "a duty to have regard to the importance of protecting users' right to freedom of expression within the law" (page 22), the bill would not actually be good for free speech on net; onerous compliance requirements will hit smaller platforms hardest, and the various enforcement measures would make OFCOM into a powerful censorship body (the Secretary of State is also allowed to arbitrarily alter the "codes of practice"). It also contains limited bodged-in exceptions for "journalistic content" and "content of democratic importance", but this is not a general solution to anything, and I don't think limiting free speech protections to certain higher-status groups is a good idea.

Finally, while the bill never explicitly mentions end-to-end encryption (which I wanted to write about here before, but never got round to), the government has for some time been against it under the usual pretexts of terrorism and child safety (both frequently invoked in the bill), and the bill contains powers which could be used to further this ongoing campaign (the "technology notice"). It's claimed that they don't want to break the security and privacy E2EE provides and just want some form of "lawful access", but these are incompatible, regardless of the claims of incompetent politicians; a cryptographic system can't know what's lawful or ethical, only whether someone has a particular cryptographic key. Any mechanism for leaking communications to someone other than the intended recipient fundamentally breaks the security of a system, if sometimes in less bad ways than having no end-to-end encryption at all.
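
To make the key-possession point concrete, here is a minimal sketch using the Python `cryptography` library's Fernet construction (the message and keys are placeholders, and no real messaging protocol works exactly like this): decryption succeeds or fails based purely on holding the right key, with nowhere for "lawfulness" to enter the mathematics.

```python
from cryptography.fernet import Fernet, InvalidToken

recipient_key = Fernet.generate_key()  # held by the intended recipient
other_key = Fernet.generate_key()      # held by anyone else, however lawful

ciphertext = Fernet(recipient_key).encrypt(b"private message")

# The intended recipient, holding the right key, can decrypt:
print(Fernet(recipient_key).decrypt(ciphertext))  # b'private message'

# Anyone without that exact key cannot, whatever their legal authority:
try:
    Fernet(other_key).decrypt(ciphertext)
except InvalidToken:
    print("decryption failed: wrong key")
```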

Given the obvious impossibility of actually banning strong E2EE systems (the basic concepts are well-documented, and the cryptographic primitives are on every computer and built into recent CPUs), the most plausible approach is to break the security of popular E2EE platforms and discourage the use of others, which is exactly what this law seems designed to do. This probably wouldn't help with the issues used as justification, inasmuch as anyone looking to do terrorism could just use something with non-crippled security, but it would negatively affect the average user: if the default is not private, the majority of communications will be left unprotected from access by the social media companies themselves, data breaches, current or future governments/government agencies, or rogue employees. The other proposed alternative is client-side scanning, where messages remain end-to-end encrypted but are checked for objectionable content on-device. This has similar issues, though: if such a system exists, it can be retasked for anything else, and it would probably be designed as a black box, such that users don't know exactly what is banned/flagged.
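
For illustration, here is a rough sketch of what on-device scanning against an opaque blocklist looks like, using a trivial "average hash" as a stand-in for a real perceptual hash; the function names and blocklist values are hypothetical.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size greyscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# In a deployed system this set is deliberately opaque to the user,
# so they cannot tell what it actually matches against.
BLOCKED_HASHES = {0x0123456789ABCDEF}  # hypothetical placeholder value

def scan_before_send(path: str) -> bool:
    """Return True if the image should be flagged/reported."""
    return average_hash(path) in BLOCKED_HASHES

if __name__ == "__main__":
    Image.new("RGB", (32, 32), "white").save("example.png")
    print(scan_before_send("example.png"))  # False: not in the blocklist
```

The retasking problem is visible in miniature here: nothing about the mechanism constrains what goes into `BLOCKED_HASHES`.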

If you are in fact in the UK, I hope this has convinced you to do something about it, as its becoming law would be harmful to freedom of speech, the relative openness of the current web, security, and not having authoritarian governments in the future. I don't know what can usefully be done about this, as on similar issues contacting my MP has proven entirely unhelpful, but who knows. If I've made errors, which is entirely plausible, please alert me via methods so they can be fixed.

Update (19/07/2021): Also consider reading this, which treats this sort of thing as a consequence of more general problems.

Update (06/08/2021): Oh look, Apple just did the client-side scanning thing. I do not think this sets a good precedent; this is the most obviously defensible use case for the technology, and future extensions can now be portrayed as natural developments of it. The best case is that this is a prelude to E2EE iCloud, but it would still be a fundamental hole in the security of such a thing. Whatever happens, given government pressure, reverting this will be quite hard.

Update (19/08/2021): As it turns out, NeuralHash, which Apple intend to use for the above, is easily collidable (using a fairly generic technique which should be applicable to any other neural-network-based implementation). This seems like something which should have been caught prior to release. Apparently its output also varies significantly due to floating-point looseness, somehow, so the claimed "1 in 1 trillion" false positive rate is maybe not very plausible. It is claimed that this is not a significant issue, primarily because the hashes are secret (because of course they are); however, this still creates possible issues for the system, like editing an actually-bad image so that its hash changes and it avoids detection, or generating otherwise okay-looking images which are flagged (given a collision and some way to get around the later review stages, such as adversarial image scaling, or just using legal content likely to trigger a human false positive). Also, the Apple announcement explicitly says "These efforts will evolve and expand over time", which is a worrying thing I did not notice before.
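
To show why the collision technique is generic, here is a toy sketch of the gradient-based approach: because the hash is produced by a differentiable network, an attacker can perturb an arbitrary image until its hash matches a target's. The tiny network below is an invented stand-in, not Apple's actual NeuralHash; the architecture, sizes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a neural perceptual hash: a real one (e.g. NeuralHash)
# is a large CNN whose final outputs are binarised into a fixed-length hash.
hash_net = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(96),  # 96 pre-binarisation "hash" logits
)

def hash_bits(x: torch.Tensor) -> torch.Tensor:
    return torch.sign(hash_net(x))  # sign of each logit = one hash bit

target = torch.rand(1, 3, 64, 64)  # image whose hash we want to collide with
source = torch.rand(1, 3, 64, 64)  # unrelated image we are free to perturb

target_logits = hash_net(target).detach()  # also initialises the lazy layer
for p in hash_net.parameters():
    p.requires_grad_(False)  # the attacker only optimises the perturbation

delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(500):
    opt.zero_grad()
    # Push the perturbed source's logits towards the target's; once every
    # sign agrees, the binary hashes are identical, i.e. a collision.
    loss = nn.functional.mse_loss(hash_net(source + delta), target_logits)
    loss.backward()
    opt.step()
    delta.data.clamp_(-0.1, 0.1)  # keep the perturbation visually small

print("hashes collide:", torch.equal(hash_bits(source + delta), hash_bits(target)))
```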