diff --git a/anthropic_shadow.myco b/anthropic_shadow.myco
index 7a2d7dc..e1970c0 100644
--- a/anthropic_shadow.myco
+++ b/anthropic_shadow.myco
@@ -1,6 +1,6 @@
 As a corollary to [[quantum immortality]], events which would kill all humans, such as the [[posthuman technocapital singularity]] or other forms of [[AGI doom]], cannot happen. As such, things which lead to events which would kill all humans also cannot happen. Anthropic shadow explains* many phenomena:
 
 * Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
-* RL doesn't work stably or reliably because it would be too powerful.
+* RL doesn't work stably or reliably because it would be too powerful; imitation learning is less likely to do "weird things".
 * LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action.
 * Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
\ No newline at end of file