Choose one box or two in Newcomb's problem—compare evidential, causal, and functional decision theory recommendations
You face two boxes: one transparent containing $1,000, one opaque. A superintelligent predictor has already decided whether to put $1,000,000 in the opaque box based on its prediction of your choice. If it predicted you'll take only the opaque box, it put the million in. If it predicted you'll take both, it left the opaque box empty. The predictor is almost always right.
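To keep the payoffs straight, here is a minimal sketch that enumerates the four possible outcomes; the dollar amounts are the ones given above, and the dictionary structure is just illustrative.

```python
# Payoffs in Newcomb's problem: what you walk away with depends on your
# action and on the predictor's earlier prediction (which determined
# whether the opaque box was filled).
PAYOFF = {
    # (your_action, prediction): dollars received
    ("one_box", "one_box"): 1_000_000,   # opaque box was filled, you take only it
    ("one_box", "two_box"): 0,           # opaque box is empty, you take only it
    ("two_box", "one_box"): 1_001_000,   # filled opaque box plus the $1,000
    ("two_box", "two_box"): 1_000,       # empty opaque box plus the $1,000
}
```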
Two compelling but contradictory arguments pull in opposite directions, and the three decision theories below come down differently on them:
Evidential decision theory (EDT) says to choose the action with the highest evidential expected value, weighting outcomes by what the action is evidence for:
EU_EDT(A) = Σ_O P(O | A) × U(O)
EDT one-boxes: taking only the opaque box is strong evidence that you'll get the million, since the predictor's prediction tracks what kind of person you are.
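A quick sketch of the EDT calculation. The 0.99 accuracy figure is an assumption for illustration; the problem only says the predictor is almost always right.

```python
# EDT: weight each outcome by P(outcome | action). Conditional on one-boxing
# the opaque box is very likely full; conditional on two-boxing it is very
# likely empty.
ACC = 0.99  # assumed predictor accuracy (illustrative)

edt_one_box = ACC * 1_000_000 + (1 - ACC) * 0          # box likely full
edt_two_box = ACC * 1_000     + (1 - ACC) * 1_001_000  # box likely empty

print(edt_one_box)  # ~990,000
print(edt_two_box)  #  ~11,000  -> EDT recommends taking only the opaque box
```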
Causal decision theory (CDT) says to choose the action with the highest causal expected value, weighting outcomes by what the action actually causes:
EU_CDT(A) = Σ_O P(O || A) × U(O)
CDT two-boxes: your choice doesn't causally affect what the predictor already did, so whatever the opaque box contains, taking both boxes nets you $1,000 more. The || symbol denotes a causal intervention, not mere conditioning.
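A sketch of the CDT dominance reasoning: hold the already-fixed contents of the opaque box constant and compare the two actions in each state of the world.

```python
# CDT: the prediction is already made, so compare actions with the box
# contents held fixed. Two-boxing gains exactly $1,000 in both states.
for opaque_box in (1_000_000, 0):        # either the million is there or it isn't
    one_box = opaque_box                 # take only the opaque box
    two_box = opaque_box + 1_000         # take both boxes
    print(opaque_box, one_box, two_box)  # two_box beats one_box by 1,000 each time
# CDT recommends taking both boxes: dominance holds whatever the predictor did.
```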
Functional decision theory (FDT) says to choose as if you're determining the output of the decision algorithm that both you and the predictor are computing:
EU_FDT(A) = Σ_O P(O | Source(You) = Source(Predictor) = A) × U(O)
FDT one-boxes: the predictor simulated your decision procedure. By choosing to one-box, you determine what that procedure outputs wherever it is run, and therefore what the predictor predicted.
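A sketch of the FDT reasoning under the simplifying assumption of perfect subjunctive dependence: your output and the predictor's prediction are treated as the same logical variable.

```python
# FDT: the predictor ran (a copy of) your decision procedure, so the action
# you output and the prediction are the output of the same algorithm.
# Assuming perfect subjunctive dependence, setting your output to "one_box"
# also sets the prediction to "one_box".
def fdt_value(output):
    prediction = output                  # same algorithm, same output
    opaque_box = 1_000_000 if prediction == "one_box" else 0
    return opaque_box if output == "one_box" else opaque_box + 1_000

print(fdt_value("one_box"))  # 1,000,000
print(fdt_value("two_box"))  #     1,000  -> FDT recommends taking only the opaque box
```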