Edit ‘anthropic_shadow’
parent eb2c60e0a9
commit 79ff7e52fe
As a corollary to [[quantum immortality]], events which would kill all humans, such as the [[posthuman technocapital singularity]] or other forms of [[AGI doom]], cannot happen. As such, anything that leads to such events also cannot happen. Anthropic shadow explains* many phenomena (a toy sketch of the conditioning follows the list):
* Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
* RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to do "weird things".
* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action.
* Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
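
Here is a minimal Monte Carlo sketch of the selection effect at work (the base rate, step count, and all names are invented purely for illustration): histories in which an extinction-level event fires contain no observers, so everyone who is still around to take measurements records an event rate of exactly zero, however high the true rate is.

```python
import random

# Toy illustration of anthropic shadow (hypothetical numbers throughout):
# simulate many possible histories in which an extinction-level event can
# fire at each step. Observers only exist in histories where it never
# fired, so the rate conditional on being alive to measure it is zero.

BASE_RATE = 0.3   # assumed per-step chance of an extinction event
STEPS = 10        # assumed length of each simulated history
TRIALS = 100_000

survivor_histories = 0
for _ in range(TRIALS):
    # a history goes extinct if the event fires at any step
    extinct = any(random.random() < BASE_RATE for _ in range(STEPS))
    if not extinct:
        survivor_histories += 1

# unconditional probability that a history contains the event at all
unconditional = 1 - (1 - BASE_RATE) ** STEPS
print(f"P(event somewhere in a history): ~{unconditional:.4f}")
print(f"histories that still contain observers: {survivor_histories}/{TRIALS}")
# survivors, by construction, never saw the event fire
print("event rate measured by surviving observers: 0.0")
```

Raising BASE_RATE or STEPS pushes the unconditional chance of doom arbitrarily close to 1, yet the rate measured by surviving observers stays pinned at exactly 0; that gap is the shadow.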