Imagine [[Omega]], a perfectly honest being which is also known to be a historically very accurate predictor of people's actions in this situation, sets up two boxes and offers you a choice between taking **both box A and box B** or **box B alone**. At the time the choice is presented to you, the contents of the boxes are already fixed.
Box A contains $10000 and is transparent, so you can see its contents. Box B is opaque, but you are told (by Omega, who is perfectly honest) that it contains $1000000, a much larger amount of money, **[[iff]]** Omega predicted earlier that you would take **only** box B; otherwise it is empty.
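As an illustrative sketch (the dictionary name and labels here are invented for this page, not part of the problem statement), the setup can be summarised as a payoff table indexed by the agent's action and Omega's earlier prediction:

```python
# Payoff table for this page's version of Newcomb's problem.
# Keys are (agent's action, Omega's earlier prediction); values are dollars received.
PAYOFFS = {
    ("take A and B", "predicted A and B"): 10_000,     # B empty, A's $10000
    ("take A and B", "predicted B only"): 1_010_000,   # B's $1000000 plus A's $10000
    ("take B only",  "predicted A and B"): 0,          # B empty, A left behind
    ("take B only",  "predicted B only"): 1_000_000,   # B's $1000000
}
```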
A CDT agent reasons that, regardless of what Omega predicted in the past, they always get an extra $10000 by taking both box A and box B. Omega predicts this, leaves box B empty, and they receive a total of $10000.
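A minimal sketch of that dominance reasoning, enumerating box B's two possible (already fixed) contents directly:

```python
# CDT's causal/dominance argument: whatever Omega predicted, box B's contents
# are already fixed, and adding box A is always worth an extra $10000.
for box_b in (0, 1_000_000):            # B's possible, already-fixed contents
    one_box = box_b                      # take box B only
    two_box = box_b + 10_000             # take box A and box B
    assert two_box - one_box == 10_000   # two-boxing gains $10000 in both cases
```

Since Omega predicts the two-boxing, the CDT agent ends up facing the `box_b == 0` case and walks away with $10000.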
An EDT agent reasons that their behaviour now is evidence about the (unobserved) opaque box's contents: conditional on them taking box A and B, box B very probably contains nothing, and conditional on them taking box B only, it very probably contains $1000000. As such, they take box B only and receive $1000000.
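A sketch of the corresponding expected-value calculation, assuming (purely for illustration, since the problem only says "historically very accurate") that Omega's predictions are right 99% of the time:

```python
# EDT's evidential argument: the agent's choice is treated as evidence about
# Omega's prediction. ACCURACY is a hypothetical figure for illustration.
ACCURACY = 0.99

# Conditional on taking only box B, it very probably contains $1000000.
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0

# Conditional on taking both boxes, box B is very probably empty.
ev_two_box = ACCURACY * 10_000 + (1 - ACCURACY) * (1_000_000 + 10_000)

print(ev_one_box, ev_two_box)  # 990000.0 vs 20000.0, so EDT takes only box B
```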
The problem is sometimes criticized for artificially "rewarding irrationality" (EDT-like behaviour), but [[Decision Theory/Newcomblike Problems Are The Norm]].
=== Generalizations
The Transparent Newcomb's Paradox variant makes both boxes transparent. This results in EDT no longer necessarily oneboxing (picking only B), since the agent can already see box B's contents, so their choice provides no further evidence about them.