documentation/anthropic_shadow.myco

As a corollary to [[quantum immortality]], events which would kill all humans, such as the [[posthuman technocapital singularity]] or other forms of [[AGI doom]], cannot happen. It follows that anything which would lead to such events also cannot happen. Anthropic shadow explains* many phenomena:
* Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
* RL doesn't work stably or reliably because it would be too powerful; imitation learning is less likely to have [[unanticipated capabilities excursions]] or to [[reward hack]].
* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action. Nobody training at large scale will adopt the various techniques which would plausibly improve performance substantially, because the resulting models would become too capable.
* Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
* Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest-[[power level]] autists away from engineering fields.