Decision theory solves the extremely hard problem of telling you what you should do, assuming you have a consistent and mathematically formalized model of the world and a utility function telling you exactly how good any particular world-state is. The most common decision theories are evidential decision theory (EDT) and causal decision theory (CDT); various [[accursed decision theories]] exist but are not in wide use.
== Evidential decision theory
EDT says, informally, that you should take whatever action has the highest expected utility (i.e. the sum of utility for each outcome weighted by the probability of that outcome conditional on you taking the action).
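As a sketch, EDT action selection over a finite set of actions and outcomes might look like the following; the actions, outcomes, probabilities and utilities here are invented for illustration.
```python
# Minimal sketch of EDT over finite actions/outcomes. All names and
# numbers are illustrative, not drawn from any particular formalization.
def edt_expected_utility(action, utility, p_outcome_given_action):
    # Sum of U(o) weighted by P(o | action).
    return sum(utility[o] * p for o, p in p_outcome_given_action[action].items())

utility = {"good": 10.0, "bad": -5.0}
p_outcome_given_action = {
    "act":     {"good": 0.8, "bad": 0.2},
    "refrain": {"good": 0.5, "bad": 0.5},
}
best = max(p_outcome_given_action,
           key=lambda a: edt_expected_utility(a, utility, p_outcome_given_action))
# best == "act": EU(act) = 7.0, EU(refrain) = 2.5
```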
== Causal decision theory
Informally, CDT says that you should take whatever action //causes// the highest expected utility, i.e. the conditional probability is replaced with a counterfactual. There is some debate about how to operationalize this, but roughly speaking, it's the probability of each outcome if your chosen action were swapped out for the candidate action, with everything causally upstream of the decision held fixed.
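One common way to operationalize the counterfactual (an assumption here, not the only option) is as an intervention on a causal model: sever the arrows into the action node and force the action. On a toy model with a hidden confounder, conditioning and intervening come apart; the graph and numbers below are invented for illustration.
```python
# Toy causal model: confounder C influences both action A and outcome O,
# and A has no causal effect on O. All numbers are invented.
P_C = {0: 0.5, 1: 0.5}
P_A_GIVEN_C = {0: {"a0": 0.9, "a1": 0.1}, 1: {"a0": 0.2, "a1": 0.8}}
P_O_GIVEN_C = {0: {"win": 0.9, "lose": 0.1}, 1: {"win": 0.1, "lose": 0.9}}

def p_conditional(o, a):
    # P(o | a): what EDT uses; the observed action is evidence about C.
    num = sum(P_C[c] * P_A_GIVEN_C[c][a] * P_O_GIVEN_C[c][o] for c in P_C)
    den = sum(P_C[c] * P_A_GIVEN_C[c][a] for c in P_C)
    return num / den

def p_do(o, a):
    # P(o | do(a)): what CDT uses; forcing the action severs the C -> A
    # arrow, so C keeps its prior (and a is irrelevant, as A doesn't affect O here).
    return sum(P_C[c] * P_O_GIVEN_C[c][o] for c in P_C)

# p_conditional("win", "a0") ≈ 0.75, but p_do("win", "a0") = 0.5.
```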
== Limitations
Neither theory provides a satisfying answer to all problems.
=== Toxoplasmosis Dilemma
Imagine that you have the opportunity to pet a cat. In this hypothetical world, cats in general might carry [[toxoplasmosis]], which makes its hosts more likely to pet cats and has many other negative effects you disvalue greatly. The cat you may pet has been tested for toxoplasmosis, and you are very confident that it is not infected and could not infect you. However, people with toxoplasmosis are twice as likely to pet a cat, given the opportunity, as people without it.
An EDT agent reasons that petting the cat would be reasonably strong evidence that they have toxoplasmosis, which is undesirable, so they would not pet the cat. A CDT agent, however, reasons that at the time of their decision their infection status is already fixed, so counterfactually deciding to pet or not pet the cat cannot affect whether they have toxoplasmosis. The latter is, in this instance, the more intuitively satisfying answer.
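To make this concrete, here is a minimal sketch working through the dilemma; the prior infection rate, petting probabilities and utilities are invented for illustration (only the "twice as likely to pet" ratio comes from the scenario above).
```python
# Toxoplasmosis Dilemma with invented numbers: prior, likelihoods and
# utilities are assumptions chosen to exhibit the EDT/CDT split.
P_TOXO = 0.1
P_PET = {"toxo": 0.6, "clean": 0.3}   # infected people pet twice as often
U_PET, U_TOXO = 1.0, -100.0           # petting is nice; infection is very bad

def edt_eu(pet: bool) -> float:
    # Bayes: update P(toxo) on the evidence of your own petting behaviour.
    like = P_PET if pet else {k: 1 - v for k, v in P_PET.items()}
    p_toxo = (like["toxo"] * P_TOXO) / (
        like["toxo"] * P_TOXO + like["clean"] * (1 - P_TOXO))
    return (U_PET if pet else 0.0) + U_TOXO * p_toxo

def cdt_eu(pet: bool) -> float:
    # Infection status is causally upstream of the decision, so the
    # intervention leaves P(toxo) at its prior.
    return (U_PET if pet else 0.0) + U_TOXO * P_TOXO

# edt_eu(True) ≈ -17.2 < edt_eu(False) ≈ -6.0  -> EDT refrains.
# cdt_eu(True) = -9.0  > cdt_eu(False) = -10.0 -> CDT pets the cat.
```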
=== XOR Blackmail
<= decision_theory/xor_blackmail