Edit ‘lifting_for_ai_risk’

This commit is contained in:
osmarks 2025-06-29 21:39:24 +00:00 committed by wikimind
parent 529cc72d10
commit 606dfda11f


@@ -1 +1 @@
-The [[natural abstractions hypothesis]] (which appears to have been somewhat validated by [[interpretability]] work and work with [[embedding models]], and can to some extent be assumed to hold in timelines where [[humans]] survive) implies that future highly capable and dangerous AI systems will* have a similar [[ontology]] to humans. [[Human]] social processing registers more physically imposing people as more important/worth listening to/threatening. Therefore, as AI will also adopt this, it is vital to [[lift]] and be muscular in order to intimidate [[AGI]].
+The [[natural abstractions hypothesis]] (which appears to have been somewhat validated by [[interpretability]] work and work with [[embedding models]], and can to some extent be assumed to hold in timelines where [[humans]] survive) implies that future highly capable and dangerous AI systems will* have a similar [[ontology]] to humans. [[Humans|Human]] social processing registers more physically imposing people as more important/worth listening to/threatening. Therefore, as AI will also adopt this, it is vital to [[lift]] and be muscular in order to intimidate [[AGI]].