This is "Newcomb's Problem" or "Newcomb's Paradox." It was first written about by philosopher Robert Nozick, who heard it from physicist William A. Newcomb. It is called a paradox because two seemingly reasonable lines of argument lead to opposite strategies.
The two-box strategy (the "dominance" argument):
You are presented with two boxes: box A certainly contains $1000, and box B may or may not contain $1 million. You may take either box B alone or both boxes. Nothing you choose now can change what is already in the boxes. Therefore, to maximize your gain you should take both boxes.
This is called the dominance argument because whichever state of nature obtains ($1 million in box B, or box B empty), you gain $1000 more by taking both boxes. Thus the two-box strategy "dominates" the one-box strategy.
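The dominance claim can be checked mechanically. Below is a minimal sketch, using the dollar amounts from the text (the dictionary layout and names are illustrative assumptions):

```python
# Hypothetical payoff table for the game described above: box A always
# holds $1000; box B holds $1 million or nothing.
PAYOFF = {
    # (state of box B, action): payoff in dollars
    ("full",  "two_box"): 1_001_000,
    ("full",  "one_box"): 1_000_000,
    ("empty", "two_box"):     1_000,
    ("empty", "one_box"):         0,
}

# Dominance: in each state of nature, taking both boxes pays $1000 more.
for state in ("full", "empty"):
    margin = PAYOFF[(state, "two_box")] - PAYOFF[(state, "one_box")]
    print(f"box B {state}: two-box margin = ${margin:,}")  # $1,000 each time
```

The same $1000 margin appears in both states, which is exactly what "dominates" means here.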
The one-box strategy (the "expected gain" argument):
- The expected gain of taking both boxes is
  $1,001,000 × P(predict one box | do both) + $1000 × P(predict both | do both).
- The expected gain of taking one box is
  $1,000,000 × P(predict one box | do one) + $0 × P(predict both | do one).
Since P(predict X | do X) is near unity, your expected gain if you take both boxes is nearly $1000, whereas your expected gain if you take one box is nearly $1 million. Therefore you should take one box.
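Plugging in an illustrative reliability for the predictor makes the gap concrete. The value 0.999 below is an assumption; the text says only that P(predict X | do X) is near unity:

```python
# p is an assumed stand-in for P(predict X | do X), which the text
# describes only as "near unity".
p = 0.999

# Take both boxes: box B is then almost certainly empty.
e_both = 1_001_000 * (1 - p) + 1_000 * p
# Take one box: box B then almost certainly holds the $1 million.
e_one = 1_000_000 * p + 0 * (1 - p)

print(f"E(do both) = ${e_both:,.0f}")  # nearly $1000 (here $2,000)
print(f"E(do one)  = ${e_one:,.0f}")   # nearly $1 million (here $999,000)
```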
Now, to resolve the paradox, it is not sufficient to provide better arguments for any one point of view or to undermine the arguments for any one point of view. You have to bring out the hidden assumptions that lead to these arguments. In this case, the hidden assumption is that P(predict X | do X) is near unity, given that P(do X | predict X) is near unity. If this is so, then the one-box strategy is best; if not, then the two-box strategy is best.
Events which proceed from a common cause are correlated. My mental states lead to my choice and, very probably, to the state of box B. Therefore my choice and the state of box B are highly correlated. In this sense, my choice changes the "probability" that the money is in box B. However, if you do not admit the possibility of reverse causality, then since your choice cannot change the state of box B, this correlation is irrelevant.
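A toy simulation can illustrate the common-cause point. In the model below (an illustrative assumption, not something specified in the text), a hidden disposition drives both the prediction and the choice, so the two agree almost always even though neither causes the other:

```python
import random

random.seed(0)

def flip(x):
    return "two_box" if x == "one_box" else "one_box"

def trial(fidelity=0.99):
    # A hidden mental disposition is the common cause of both the
    # predictor's forecast and the agent's eventual choice.
    disposition = random.choice(["one_box", "two_box"])
    predict = disposition if random.random() < fidelity else flip(disposition)
    choice = disposition if random.random() < fidelity else flip(disposition)
    return predict, choice

results = [trial() for _ in range(100_000)]
agreement = sum(p == c for p, c in results) / len(results)
# Agreement is high (about 0.98) even though the choice never causes
# the prediction: the correlation comes entirely from the common cause.
print(f"P(predict = choice) ≈ {agreement:.3f}")
```

Deleting the causal arrow from choice to prediction leaves the correlation intact, which is why the correlation alone cannot settle which strategy is right.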
While you are given that P(do X | predict X) is high, it is not given that P(predict X | do X) is high unless reverse causality is possible. Without reverse causality, then, the expected gain of either action cannot be determined from the information given, so the dominance argument holds and you should take both boxes. With reverse causality, the dominance argument fails, since your action changes the state of nature; the expected-gain argument holds and you should take box B only.