Edit ‘anthropic_shadow’

osmarks 2024-09-24 16:46:59 +00:00 committed by wikimind
parent 2d16997f59
commit 5ea504ad71

@@ -4,3 +4,4 @@ As a corollary to [[quantum immortality]], events which would kill all humans, s
* RL doesn't work stably or reliably because it would be too powerful - imitation learning is less likely to do "weird things".
* LLMs are what we got because they are slow to develop ([[scaling laws]]) and can do some useful tasks but are bad at [[agentic]] action.
* Google can never ship any AI products competently because their infrastructure and researchers are too powerful.
* Kabbalah (approximately, extremely deep research into Jewish lore) exists to divert many of the world's highest-[[power level]] autists away from engineering fields.