mirror of https://github.com/osmarks/website synced 2026-03-08 01:09:44 +00:00

minor copyedit & addition

This commit is contained in:
osmarks
2026-02-11 12:58:00 +00:00
parent 95c4298a26
commit 04f26b6dcf
3 changed files with 14 additions and 5 deletions


@@ -63,7 +63,7 @@ Moreover, people are on average [not very smart](https://www.overcomingbias.com/
Increasingly, doing well in modernity requires long-horizon, complex, quantitative decision-making and forward planning, and/or cultural knowledge not common to everyone. Consider:
-* For Americans, getting into high-status colleges has a significant difference on later career outcomes, and famously requires years of wrangling highly specific extracurriculars and knowing how to write essays to accurately pander to admissions officers.
+* For Americans, getting into high-status colleges has a significant effect on later career outcomes, and famously requires years of wrangling highly specific extracurriculars and knowing how to write essays to accurately pander to admissions officers.
* With the fall of defined-benefit pensions, (comfortable) retirement requires understanding compound interest and investment returns, various tax-advantaged savings options with inscrutable acronyms and constraints, as well as having the low time preference/discipline to bother to do this.
* [Advance-booked transport ticket pricing](/pricecog/).
* Safely and correctly using credit is similarly complex and valuable.
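The compound-interest point above can be made concrete with a minimal sketch; the 7% annual return, 500/month contribution and 40-year horizon are illustrative assumptions, not figures from the post:

```python
def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_rate / 12
    n = years * 12
    return monthly * (((1 + r) ** n - 1) / r)

# 500/month at a 7% annual return for 40 years: contributions of 240,000
# grow to roughly 1.3 million, most of it from compounding rather than saving.
print(f"{future_value(500, 0.07, 40):,.0f}")
```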
@@ -75,7 +75,7 @@ LLMs can't competently do the long-horizon planning sides of this, which is why
This suggests that the role of AI chat systems in many people's[^19] lives could go far beyond "boyfriend/girlfriend in a computer" - without substantial deep technical changes ("just" extensive work on interface design and probably finetuning), the result can be something like "superhumanly compelling omniscient-feeling life coach"[^16] (which may also be your boyfriend/girlfriend)[^13]. If its advice is generally better than what people think of on their own, they will generally defer to the LLM (on increasingly specific decisions, if the technology can keep up).
-This sounds like a horrific dystopian nightmare, but in many ways it could be an improvement over the status quo. Almost everyone is continually subject to the opaque whims of economic forces, [unpleasantly accurate modelling](https://x.com/wanyeburkett/status/1927413667173159142) by other people's algorithms (linear regressions) and recommender systems anyway: being managed by a more humanized system more aware of your interests is a step up. There are wider advantages to offloading decisionmaking: [making choices is](https://thezvi.wordpress.com/2017/07/22/choices-are-bad/) [often unpleasant](https://thezvi.wordpress.com/2017/08/12/choices-are-really-bad/), and having them made for you conveniently absolves you from blame in folk morality. It's also plausible to me that most people don't have explore/exploit tradeoffs correctly set for the modern world/big cities and e.g. don't try enough restaurants, hobbies or variety in general[^21].
+This sounds like a horrific dystopian nightmare, but in many ways it could be an improvement over the status quo. Almost everyone is continually subject to the opaque whims of economic forces, [unpleasantly accurate modelling](https://x.com/wanyeburkett/status/1927413667173159142) by other people's algorithms (linear regressions) and recommender systems anyway: being managed by a more humanized system more aware of your interests is a step up. There are wider advantages to offloading decisionmaking: [making choices is](https://thezvi.wordpress.com/2017/07/22/choices-are-bad/) [often unpleasant](https://thezvi.wordpress.com/2017/08/12/choices-are-really-bad/), and having them made for you conveniently [absolves you from blame](/assets/misc/copenhagen_ethics.html) in folk morality. It's also plausible to me that most people don't have explore/exploit tradeoffs correctly set for the modern world/big cities and e.g. don't try enough restaurants, hobbies or variety in general[^21].
However, the incentives of the providers here are very bad: if a user is supported well by your system and becomes better off mentally/financially/etc, you cannot capture that value very easily, whereas it's relatively easy to charge for extra interaction with your product[^17][^20]. Thus, as users enjoy having "takes" and being agreed with, AIs will still be built for sycophancy and not contradict users as much as they should, and will probably aim to capture attention at the expense of some user interests. On the other hand, AI companies are constrained by PR, at least inasmuch as they fear regulation, so nothing obviously or photogenically bad for users, or anything which looks like that, can be shipped[^15]. On the third hand, much user behaviour is "ill-formed and coercible" - if someone hasn't thought deeply about something, they could form several different opinions depending on framing and context, so there are enough degrees of freedom that influence on them and sycophancy don't trade off too badly. I think the result is an unsatisfying compromise in which:
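The explore/exploit point is the classic multi-armed bandit tradeoff; a minimal epsilon-greedy sketch, where the option payoffs and epsilon value are invented for illustration:

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Average reward from mostly exploiting the best option seen so far,
    exploring a random one with probability epsilon."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    totals = [0.0] * len(true_means)
    reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon or not any(counts):
            arm = rng.randrange(len(true_means))  # explore
        else:
            # exploit: best empirical mean among options tried so far
            arm = max(range(len(true_means)),
                      key=lambda i: totals[i] / counts[i] if counts[i] else float("-inf"))
        r = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward / steps

# With epsilon=0 you exploit whichever option you happened to try first;
# a little exploration lets you find better ones.
print(epsilon_greedy([0.3, 0.9], epsilon=0.0), epsilon_greedy([0.3, 0.9], epsilon=0.1))
```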
@@ -84,7 +84,6 @@ However, the incentives of the providers here are very bad: if a user is support
* Competition is constrained by it being difficult to switch providers - regardless of GDPR-compliance memory export features, off-policy learning of you by another LLM is less informative than active conversations, and companions shape your preferences in their favour.
* Active chatbot interaction, as opposed to speaking with them while doing other things, encroaches on time currently used for social media, as well as (what remains of) in-person social interaction, although very high use remains seen as low-status[^18]. Many humans adopt the speech habits and to some extent interaction styles of popular AI systems.
<details><summary>Aside: local-first AI.</summary>
When this sort of topic, or data privacy issues, are brought up, people often suggest running AI systems locally on your own hardware so they are under your control and bound to you. This will not work. Self-hosting anything is weird and niche enough that very few people do it even with the cost being a few dollars per month for a VPS and some time spent reading the manuals for things which will autoconfigure it for you. LLMs as they currently exist benefit massively from economies of scale (being essentially [memory-bandwidth-bound](/accel/)): without being able to serve multiple users on the same system and batch execution, and to keep hardware busy all the time, it's necessary to accept awful performance or massively underutilize very expensive hardware. Future architecture developments will probably aim to be more compute-bound, but retain high minimum useful deployment sizes. Also, unlike with normal software, where self-hosted replacements can do mostly the same things if more jankily, the best open-ish AI generally lags commercial AI by about a year in general capabilities and longer in product (there's still no real equivalent to ChatGPT Advanced Voice Mode available openly).
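A rough back-of-the-envelope version of the memory-bandwidth argument; all hardware numbers here are illustrative assumptions, not figures from the post:

```python
# Decoding one token requires streaming every weight from memory, so the
# single-user decode rate is roughly bandwidth / model size, leaving compute
# mostly idle; a batch of users shares the same weight reads (until compute
# becomes the bottleneck), which is why serving at scale is so much cheaper.
def single_user_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

bandwidth = 3000.0  # assumed ~3 TB/s of HBM on a datacenter accelerator
model = 140.0       # assumed 70B parameters at 16 bits each
solo = single_user_tokens_per_sec(bandwidth, model)
batched = solo * 64  # aggregate throughput for a batch of 64, same weight traffic
print(f"{solo:.0f} tokens/s solo, ~{batched:.0f} tokens/s aggregate batched")
```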


@@ -190,7 +190,7 @@ Since this list was written, I think it became notorious for introducing the "me
### Starrigger
-[Starrigger](https://www.goodreads.com/book/show/981841.Starrigger) surprised me by being pretty smart given its odd-sounding conceit (interstellar travel via highways) and age. Generally a fun read.
+[Starrigger](https://www.goodreads.com/book/show/981841.Starrigger) surprised me by being pretty smart given its odd-sounding conceit (interstellar travel via highways) and age. Generally a fun read. The sequels are annoying, however.
### Stories Of Your Life And Others
@@ -246,6 +246,10 @@ ATTENTION. DUE TO A SCALE BACK IN COVERAGE, THE MORAL ARC OF THE UNIVERSE NO LON
[UNSONG](https://unsongbook.com/) (now published as a book, but the web version is what I read and is probably fine) is one of approximately two books here built on wild free association around (very sophisticated) puns. I recall there being some weirdness with pacing, but I like it overall, and it is *fearsome* how well Scott is able to connect everything to everything else.
### Venomous Lumpsucker
[Venomous Lumpsucker](https://www.goodreads.com/book/show/59593576-venomous-lumpsucker) is perhaps the only good environmentalist fiction (at least, out of that which I have read), and also one of the rare pieces of speculative economics fiction I'm aware of. Has the vaguely whimsical feeling of [Accelerando](#accelerando), moving through a world of advanced, bizarre and poorly understood technologies.
### Void Star
::: epigraph attribution=Cloudbreaker
@@ -319,6 +323,10 @@ You can also play [Minetest](https://www.minetest.net/), a free and open source
This is actually two separate series (the [British](https://en.wikipedia.org/wiki/Dirk_Gently_(TV_series)) version and the [American but also British](https://en.wikipedia.org/wiki/Dirk_Gently%27s_Holistic_Detective_Agency_(TV_series)) version) based on Douglas Adams' excellent humour. Both have a detective who uses "the fundamental interconnectedness of all things" - the American version is somewhat flashier and weirder, whereas the British version is in some loose sense more realistic and significantly more cynical.
### Hundreds of Beavers
[Hundreds of Beavers](https://en.wikipedia.org/wiki/Hundreds_of_Beavers) is an inspiring story about human ingenuity triumphing over nature (beavers).
### Limitless
[Limitless](https://en.wikipedia.org/wiki/Limitless_(TV_series)) (the movie is also decent) is among the somewhat less bad fictional depictions of superhuman intelligence, but mostly a good comedy. Unfortunately, like many of the series I like, it was killed unreasonably early. Perhaps future AI technology will simply predict the next season.


@@ -30,7 +30,7 @@ To change the world, a superintelligence doesn't have to be massively better at
For many tasks, there is a hard limit on how good anything can get, and diminishing returns on compute/resources as that limit is approached. For instance, [bin packing](https://en.wikipedia.org/wiki/Bin_packing_problem) is NP-hard, but very simple and cheap algorithms can get within a small constant factor of the optimal solution. Chess engine developers believe that the best engines cannot get significantly better at play from the normal chess starting position, because the game can be drawn too easily, and some think that they're just about able to [draw God](https://en.chessbase.com/post/how-god-plays-chess) from there (frontier chess engines are instead tested in asymmetric starting positions such that one side should always be able to win or draw). In tic-tac-toe, perfect play is obviously simple enough that no ASI can, within the constraints of the game, do anything beyond forcing a draw against a competent opponent. In benchmarks with known correct answers, nothing can beat a 100% score[^16]. This is often used to argue for diminishing returns on intelligence in general, which I don't think is correct.
-There are several important problems with this: mere "diminishing returns" tells you nothing about how rapidly they diminish, whether they asymptote or just grow more slowly, or how far beyond humans you can go; usefulness isn't always linear in whatever metric shows diminishing returns; and more intelligence unlocks qualitatively new abilities which preexisting benchmarks won't catch.
+There are several important problems with this: mere "diminishing returns" tells you nothing about how rapidly they diminish, whether they asymptote or just grow more slowly, or how far beyond humans you can go; usefulness isn't always linear in whatever metric shows diminishing returns; and more intelligence unlocks qualitatively new abilities which preexisting benchmarks won't catch[^19].
As I address in the next section, capabilities can frequently go far beyond humans', since ([Moravec's paradox](https://en.wikipedia.org/wiki/Moravec's_paradox)) we are strongly optimized for the kind of task our ancestors regularly faced and rely on less robust general-purpose capability for anything more recent[^17].
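The bin-packing claim can be illustrated with first-fit decreasing, a simple heuristic known to use at most 11/9·OPT + 6/9 bins; the item sizes below are arbitrary example values:

```python
def first_fit_decreasing(items, capacity):
    """Pack each item into the first open bin with room, largest items first."""
    bins = []    # remaining capacity of each open bin
    packed = []  # contents of each bin
    for item in sorted(items, reverse=True):
        for i, space in enumerate(bins):
            if item <= space:
                bins[i] -= item
                packed[i].append(item)
                break
        else:  # no open bin fits: open a new one
            bins.append(capacity - item)
            packed.append([item])
    return packed

print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5], 1.0))
```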
@@ -133,3 +133,5 @@ Enough of the world is bottlenecked on (availability of) human intelligence, rat
[^17]: This may centrally be an I/O limitation.
[^18]: This section uses mostly evidence from deep learning. It is possible that ASI won't be built this way (though I personally expect it to be), but whatever is used should be "at least as good" in these ways.
[^19]: Apparently, Gwern [also wrote about this](https://gwern.net/complexity), but I forgot.