Edit ‘anthropic_shadow’
parent 4b5690a0ec
commit 9b8e396c9f
@@ -2,6 +2,6 @@ As a corollary to [[quantum immortality]], events which would kill all humans, s
 
 * Python has bad dependency management because all ML code is written in it. If it were good, we would have AGI.
 * RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to do "weird things".
-* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action.
+* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action. Nobody training at large scale will adopt various things which plausibly substantially improve performance because they would become too capable.
 * Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
 * Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest-[[power level]] autists away from engineering fields.
|