From 79aa64c1ae37dff5716711ceab4e7601923986d1 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Mon, 25 Nov 2024 16:07:01 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98anthropic=5Fshadow=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 anthropic_shadow.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/anthropic_shadow.myco b/anthropic_shadow.myco
index 0ff8ab7..9162877 100644
--- a/anthropic_shadow.myco
+++ b/anthropic_shadow.myco
@@ -1,7 +1,7 @@
 As a corollary to [[quantum immortality]], events which would kill all humans, such as the [[posthuman technocapital singularity]] or other forms of [[AGI doom]], cannot happen. As such, things which lead to events which would kill all humans also cannot happen. Anthropic shadow explains* many phenomena:
 
 * Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
-* RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to do "weird things".
+* RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to have [[unanticipated capabilities excursions]] or [[reward hack]].
 * LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action. Nobody training at large scale will adopt various things which plausibly substantially improve performance because they would become too capable.
 * Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
 * Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest-[[power level]] autists away from engineering fields.
\ No newline at end of file