mirror of https://github.com/osmarks/website synced 2026-05-11 07:52:21 +00:00

futilely argue about free will

This commit is contained in:
osmarks
2026-04-23 15:00:50 +01:00
parent a02a319589
commit 707ede76cd
4 changed files with 102 additions and 2 deletions
Binary file not shown (new image, 414 KiB).

+28
@@ -0,0 +1,28 @@
---
title: Destroying libertarian free will
created: 23/04/2026
description: Against creating free will as a cause area.
slug: will
tags: ["opinion", "philosophy"]
---
::: epigraph attribution="Leo Gao"
Free will doesn't exist but the universes in which I don't have a mental breakdown are preferable, and me doing things that don't typically precede mental breakdowns is anthropic evidence towards me not being in the universes in which I have a mental breakdown because of my inherent uncertainty over my own decision algorithm.
:::
Another blogger has recently argued for [creating libertarian free will](https://niplav.site/will.html), i.e. the ability of people to make choices unconstrained by physical causation. Leaving aside persnickety arguments about whether this is conceptually or physically possible or at all coherent, I think that should libertarian free will exist, it must be destroyed, and should it not, it must not be created.
niplav's original blog post identifies several risks (humans may choose evil; a generally less predictable world could lead to lower welfare; God may smite those who actively choose evil), but there are several others. Most notably, libertarian free will breaks important notions of choice: without the ability[^1] to run a specific decision algorithm consistently and accurately predict that you will run it, you cannot for instance [cooperate in a prisoner's dilemma with an identical copy of yourself](https://joecarlsmith.com/2021/08/27/can-you-control-the-past/#ii-writing-on-whiteboards-light-years-away) secure in the knowledge that your copy will also cooperate: even if you cooperate, the identical copy may defect using free will (and vice versa), even though mutual cooperation results in better outcomes (under good decision theories). While you could argue that you would simply not choose to defect, it is definitionally not possible to bound libertarian-free-willed decisions, and so an agent using free will is forced to reason in a more indexically selfish fashion and not reap the benefits of [bad outcomes being inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/).
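This failure mode can be made concrete with a toy simulation (mine, not from niplav's post or the linked essays): model an unboundable libertarian choice as a fair coin flip, and compare twin deterministic agents against twin "free-willed" agents under standard prisoner's dilemma payoffs.

```python
import random

# Row player's payoffs in a one-shot prisoner's dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def deterministic_agent() -> str:
    # Knows its copy runs this exact algorithm, so the copy's output
    # is guaranteed identical: cooperating is safe.
    return "C"

def free_willed_agent() -> str:
    # Unboundable libertarian choice modelled as a fair coin flip:
    # even an "identical" copy may diverge.
    return random.choice(["C", "D"])

def play(agent) -> int:
    # Both players are copies of `agent`; return the row player's payoff.
    return PAYOFF[(agent(), agent())]

N = 10_000
det = sum(play(deterministic_agent) for _ in range(N)) / N
free = sum(play(free_willed_agent) for _ in range(N)) / N
print(det, free)  # det is exactly 3.0; free averages about 2.25
```

The deterministic copies always land on mutual cooperation (payoff 3); the free-willed pair averages (3+0+5+1)/4 = 2.25, strictly worse, precisely because neither copy can bound the other's choice.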
Libertarian free will makes minds (or at least minds with the same behaviours as free-willed ones) uncomputable, as there is simply no place in the formalisms of computation for free will to enter the computations and produce different results. We could imagine free will intervening on the physical mechanisms of the computers instantiating such a mind instead, but this runs into the [longstanding problem](https://gregegan.net/PERMUTATION/FAQ/FAQ.html) that what computation a physical object is executing is not well-defined, and would quite probably break debuggers and important software security assumptions. Digital minds/uploaded humans are [important to](https://www.cold-takes.com/how-digital-people-could-change-the-world/) many possible good long-term futures, though this does reduce [s-risks](https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering) if non-free-willed entities are not considered moral patients.
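A minimal way to see where the tension lies (my sketch, not from the post): in the standard formalisms a mind is a function, and rerunning the same function on the same inputs cannot yield different outputs, so there is simply no slot for a free-willed divergence to enter.

```python
def decision_algorithm(memories: tuple, observation: str) -> str:
    # A computable mind: the output is a fixed function of its inputs.
    return "C" if "copy of me" in observation else "D"

# Fixed mental state plus fixed observation.
state = (("memory-1", "memory-2"), "opponent is a copy of me")

# Rerun the same computation many times and collect the distinct outputs.
runs = {decision_algorithm(*state) for _ in range(1000)}
print(runs)  # {'C'}: identical computation, identical result, every time
```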
It would also create alignment concerns: by the [free will theorem](https://en.wikipedia.org/wiki/Free_will_theorem) and known physics, if humans have access to libertarian free will then so must some elementary particles. Elementary particles' behaviour is currently well-characterized and generally safe, but we have no idea of the true motivations and beliefs underlying it. Electrons, for example, if unconstrained, may well choose to act in service of goals opposed to our own - extremely dangerous, embedded as they are in our society and technology. Giving fundamental particles free will may also add enormous suffering to the universe (or happiness, but current ethonophysics is unable to determine which) if this is indeed associated with moral patienthood.
A related risk is the creation of many more "internally coherent" versions of our universe. Under the [mathematical universe hypothesis](https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis), all mathematically realizable universes "exist". Under libertarian free will, many more universes are reachable from any given state, through different possible choices of free-willed beings, than in a deterministic universe. As with fundamental particles, this could potentially be very bad, although if libertarian free will is possible in principle then such universes exist anyway, though we would not be considered morally responsible for them (["ought implies can"](https://en.wikipedia.org/wiki/Ought_implies_can)).
## Methods of destruction
* Fixed-history time travel: if the universe permits time travel, and the mechanics of time travel require that history is unchangeable, then setting up time loops could substantially limit free will, as choices which would make the loop inconsistent could not occur. However, the loop could serve as enough of an information bottleneck to leave most events underconstrained - for instance, a message with a few news headlines would be consistent with many different universe microstates. This could be slightly mitigated by transmitting e.g. a hash of very fine-grained widely-distributed sensor feeds, but there is necessarily much less information transmitted than the time machine's past lightcone contains. More concerningly, prevention of the inconsistent time loop may instead be implemented by making the initiation of the time loop not happen, or by destroying the universe.
* Direct removal of the ontological entity may be much cheaper than bringing it into existence would be: it is said that [God was killed](https://en.wikipedia.org/wiki/God_is_dead), or at least caused to be dead, in the 1700s-1800s. World real GDP at this time was ~1% of today's, so assuming ontotechnological engineering costs do not grow much faster than inflation, this is proportionally much cheaper today than e.g. historic space programs and the railways.
* If computer simulations of free-willed beings are not free-willed, replacement of all humans and other such beings with computer models would mitigate many risks, though it would also be necessary to remove all particles with free will for complete coverage, which is uneconomical.
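The information bottleneck in the first bullet can be sketched numerically (a toy model of mine, with made-up parameters): if the message sent through the loop is a k-bit digest of the world state, then roughly a 2^-k fraction of microstates remain consistent with it, so a short message pins down almost nothing.

```python
import hashlib
import itertools

def digest_bits(data: bytes, bits: int) -> int:
    """Truncate a SHA-256 digest to its first `bits` bits."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return h >> (256 - bits)

# Toy universe: microstates are 2-byte strings; the time loop carries back
# only an 8-bit digest (a "few news headlines" worth of constraint).
BITS = 8
target = digest_bits(b"\x00\x01", BITS)

# Count the microstates consistent with the looped message.
consistent = sum(
    1
    for state in itertools.product(range(256), repeat=2)
    if digest_bits(bytes(state), BITS) == target
)
print(consistent)  # roughly 2**16 / 2**8 = 256 of 65536 states survive
```

Transmitting a longer digest of fine-grained sensor feeds shrinks the consistent set exponentially in digest length, but the constraint can never exceed the channel's capacity, which is the bullet's point.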
[^1]: The construction of free will may permit deterministically choosing to never nondeterministically choose using free will, which would resolve this.
+72
@@ -5386,5 +5386,77 @@
"date": "2008-05-08T01:28:18Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://niplav.site/will.html": {
"excerpt": "author: niplav, created: 2025-02-27, modified: 2025-03-21, language: english, status: notes, importance: 1, confidence: joke",
"title": "niplav",
"author": null,
"date": null,
"website": null,
"auto": true
},
"https://en.wikipedia.org/wiki/Free_will_theorem": {
"excerpt": "From Wikipedia, the free encyclopedia",
"title": "Free will theorem",
"author": "Contributors to Wikimedia projects",
"date": "2005-11-22T16:41:13Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://gregegan.net/PERMUTATION/FAQ/FAQ.html": {
"excerpt": "Illustrations for Permutation City by Greg Egan",
"title": "Dust Theory FAQ — Greg Egan",
"author": null,
"date": null,
"website": null,
"auto": true
},
"https://www.cold-takes.com/how-digital-people-could-change-the-world/": {
"excerpt": "Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for\n\"Cold Takes Audio\"Todays worldTransformative AIDigital peopleWorld ofMisaligned\nAIWorld run bySomething elseoror Stable, galaxy-widecivilization > This is the\nthird post in a series explaining my view that we could be in the most important\ncentury of all time. (Here's the roadmap for this series.\n[https://www.cold-takes.com/roadmap-for-the-most-important-century-series/]) * \n The first piece\n [https://www.cold-ta",
"title": "Digital People Would Be An Even Bigger Deal",
"author": "Holden Karnofsky",
"date": "2021-07-27T17:24:19.000Z",
"website": "Cold Takes",
"auto": true
},
"https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering": {
"excerpt": "From Wikipedia, the free encyclopedia",
"title": "Risk of astronomical suffering",
"author": "Contributors to Wikimedia projects",
"date": "2021-02-10T18:35:32Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://en.wikipedia.org/wiki/God_is_dead": {
"excerpt": "From Wikipedia, the free encyclopedia",
"title": "God is dead",
"author": "Contributors to Wikimedia projects",
"date": "2004-08-13T08:50:47Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis": {
"excerpt": "From Wikipedia, the free encyclopedia",
"title": "Mathematical universe hypothesis",
"author": "Contributors to Wikimedia projects",
"date": "2005-07-01T05:50:58Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://en.wikipedia.org/wiki/Ought_implies_can": {
"excerpt": "From Wikipedia, the free encyclopedia",
"title": "Ought implies can",
"author": "Contributors to Wikimedia projects",
"date": "2011-12-04T17:49:42Z",
"website": "Wikimedia Foundation, Inc.",
"auto": true
},
"https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/": {
"excerpt": "Nate Soares' recent decision theory paper with Ben Levinstein, \"Cheating Death in Damascus,\" prompted some valuable questions and comments from an",
"title": "Decisions are for making bad outcomes inconsistent - Machine Intelligence Research Institute",
"author": "Rob Bensinger",
"date": "2017-04-07T15:02:13-07:00",
"website": "Machine Intelligence Research Institute",
"auto": true
}
}
+2 -2
@@ -2,10 +2,10 @@
 tmp1=$(mktemp /tmp/tmp.XXXXXXXXXX.png)
 tmp2=$(mktemp /tmp/tmp.XXXXXXXXXX.png)
 tmp3=$(mktemp /tmp/tmp.XXXXXXXXXX.png)
-convert "$2" -resize "$1" "$tmp1"
+magick "$2" -resize "$1" "$tmp1"
 #dither "$file" "$3" -c 12e193
 dither "$tmp1" "$tmp2" -c B2D2FF
-convert "$tmp2" -transparent black "$tmp3"
+magick "$tmp2" -transparent black "$tmp3"
 pngquant --force --strip --speed 1 "$tmp3" -o "$3"
 rm "$tmp1"
 rm "$tmp2"