Edit ‘decision_theory/newcombs_paradox’
parent b3284d2599
commit 30ab43776c
@@ -6,6 +6,8 @@ A CDT agent reasons that, regardless of what happened in the past, they will alw
An EDT agent reasons that their behaviour now is evidence about what the (unobserved) opaque box's contents are - conditional on them taking both box A and box B, box B very probably contains nothing, and conditional on them taking box B only, it very probably contains $1000000. As such, they take box B only and receive $1000000.
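
A minimal sketch of the two calculations, assuming an illustrative predictor accuracy of 0.99 and an arbitrary prior over box B's contents (neither number is given above):

[source,python]
----
# Hypothetical sketch: expected payoffs in Newcomb's problem, assuming a
# predictor accuracy of 0.99 (illustrative; the page does not fix a number).
ACCURACY = 0.99                  # P(prediction matches the agent's actual choice)
SMALL, BIG = 1_000, 1_000_000    # box A's contents, and box B's when filled

# EDT conditions on the choice: the choice is evidence about B's contents.
edt_one_box = ACCURACY * BIG
edt_two_box = SMALL + (1 - ACCURACY) * BIG

# CDT holds the already-fixed contents constant; for any probability p that
# B is full, two-boxing adds SMALL on top of whatever one-boxing yields.
p_full = 0.5                     # arbitrary prior over B's contents
cdt_one_box = p_full * BIG
cdt_two_box = p_full * BIG + SMALL

print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")
print(f"CDT: one-box {cdt_one_box:,.0f} vs two-box {cdt_two_box:,.0f}")
----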
The problem is sometimes criticized for artificially "rewarding irrationality" (EDT-like behaviour), but [[Decision Theory/Newcomblike Problems Are The Norm]].
=== Generalizations
-The Transparent Newcomb's Paradox variant makes both boxes transparent.
+The Transparent Newcomb's Paradox variant makes both boxes transparent. This results in EDT no longer necessarily oneboxing (picking only B).
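
A hedged sketch of why: once the agent can already see box B's contents, their choice carries no further evidence about those contents, so EDT's ranking of actions collapses to CDT's (the payoff numbers below are illustrative, not from the page):

[source,python]
----
# Illustrative: with transparent boxes the contents are observed before choosing,
# so the choice adds no evidence about them and EDT ranks actions like CDT.
SMALL, BIG = 1_000, 1_000_000

def edt_value(action: str, b_is_full: bool) -> int:
    # Contents already observed: P(B full | action, observation) is 0 or 1.
    b_payout = BIG if b_is_full else 0
    return b_payout + (SMALL if action == "two-box" else 0)

for b_is_full in (True, False):
    best = max(("one-box", "two-box"), key=lambda a: edt_value(a, b_is_full))
    print(f"B full={b_is_full}: EDT picks {best}")
# Either way the two-box action scores higher, unlike in the opaque-box case above.
----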