Edit ‘lifting_for_ai_risk’
parent 529cc72d10
commit 606dfda11f
@@ -1 +1 @@
- The [[natural abstractions hypothesis]] (which appears to have been somewhat validated by [[interpretability]] work and work with [[embedding models]], and can to some extent be assumed to hold in timelines where [[humans]] survive) implies that future highly capable and dangerous AI systems will* have an [[ontology]] similar to that of humans. [[Human]] social processing registers more physically imposing people as more important, more worth listening to, and more threatening. Therefore, since AI will also adopt this heuristic, it is vital to [[lift]] and be muscular in order to intimidate [[AGI]].
+ The [[natural abstractions hypothesis]] (which appears to have been somewhat validated by [[interpretability]] work and work with [[embedding models]], and can to some extent be assumed to hold in timelines where [[humans]] survive) implies that future highly capable and dangerous AI systems will* have an [[ontology]] similar to that of humans. [[Humans|Human]] social processing registers more physically imposing people as more important, more worth listening to, and more threatening. Therefore, since AI will also adopt this heuristic, it is vital to [[lift]] and be muscular in order to intimidate [[AGI]].