Newcomb's paradox

In philosophy and mathematics, Newcomb's paradox, also referred to as Newcomb's problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future.

Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969,[1] and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games."[2] Today it is a much debated problem in the philosophical branch of decision theory.[3]

The problem

There is a predictor, a player, and two boxes designated A and B. The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following:[4]

  • Box A is clear, and always contains a visible $1,000.
  • Box B is opaque, and its content has already been set by the predictor:
    • If the predictor has predicted the player will take both boxes A and B, then box B contains nothing.
    • If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.

The player does not know what the predictor predicted or what box B contains while making their choice.

Game theory strategies

Predicted choice | Actual choice | Payout
A + B            | A + B         | $1,000
A + B            | B             | $0
B                | A + B         | $1,001,000
B                | B             | $1,000,000

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[4] The problem continues to divide philosophers today.[5][6]

Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout.

  • Considering the expected utility, when the predictor is almost certainly or certainly right, the player should choose box B. This choice statistically maximizes the player's winnings, setting them at about $1,000,000 per game.
  • Under the dominance principle, the player should choose the strategy that is always better: choosing both boxes A and B will always yield $1,000 more than choosing only B. However, the expected utility of "always $1,000 more than B" depends on the statistical payout of the game; when the predictor is almost certainly or certainly right, choosing both A and B sets the player's winnings at about $1,000 per game (a worked comparison of the two strategies is sketched below).
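
The expected-utility comparison can be made concrete with a short calculation. The following sketch (an illustration, not drawn from the article's sources) computes the expected payout of each strategy for a hypothetical predictor accuracy p, assuming the predictor is equally likely to be right whichever choice the player makes; the payoff values come from the table above.

```python
# Expected payout of each strategy, given a predictor that is correct
# with probability p (assumed, for this sketch, to be the same whichever
# box the player takes). Payoffs are taken from the table above.

def expected_one_box(p: float) -> float:
    # Take only box B: $1,000,000 if the prediction was right, $0 otherwise.
    return p * 1_000_000 + (1 - p) * 0

def expected_two_box(p: float) -> float:
    # Take both boxes: $1,000 if the prediction was right, $1,001,000 otherwise.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p = {p:.2f}: one box = ${expected_one_box(p):,.0f}, "
          f"two boxes = ${expected_two_box(p):,.0f}")
```

Under this symmetric-accuracy assumption, taking only box B has the higher expected payout whenever p exceeds roughly 0.5005, which is why a near-certain predictor makes one-boxing the expected-utility choice even though two-boxing is better in every individual row of the table.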

David Wolpert and Gregory Benford suggest that there is no conflict between the two strategies; Newcomb's problem, they say, represents two different games with different probabilistic outcomes, and the conflict arises because of this imprecise definition of the game. The optimal strategy for either of the games they describe is independent of the predictor's infallibility, questions of causality, determinism, and free will.[4]

Causality and free will

Predicted choice | Actual choice | Payout
A + B            | A + B         | $1,000
B                | B             | $1,000,000

Causality issues arise when the predictor is posited as infallible and incapable of error; Nozick avoids this issue by positing that the predictor's predictions are "almost certainly" correct, thus sidestepping any issues of infallibility and causality. Nozick also stipulates that if the predictor predicts that the player will choose randomly, then box B will contain nothing. This assumes that inherently random or unpredictable events would not come into play anyway during the process of making the choice, such as free will or quantum mind processes.[7] However, these issues can still be explored in the case of an infallible predictor. Under this condition, it seems that taking only B is the correct option. This analysis argues that we can ignore the possibilities that return $0 and $1,001,000, as they both require that the predictor has made an incorrect prediction, and the problem states that the predictor is never wrong. Thus, the choice becomes whether to take both boxes with $1,000 or to take only box B with $1,000,000—so taking only box B is always better.
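
The elimination argument in this paragraph can be written out as a small case analysis. The following sketch (an illustration, not part of the article's sources) lists the four predicted/actual combinations from the tables above and keeps only those consistent with a predictor that is never wrong.

```python
# Payouts for each (predicted choice, actual choice) pair, from the tables above.
PAYOUTS = {
    ("both", "both"): 1_000,
    ("both", "B only"): 0,
    ("B only", "both"): 1_001_000,
    ("B only", "B only"): 1_000_000,
}

# An infallible predictor rules out every row where prediction and choice differ.
consistent = {pair: payout for pair, payout in PAYOUTS.items() if pair[0] == pair[1]}

print(consistent)
# {('both', 'both'): 1000, ('B only', 'B only'): 1000000}
```

With only those two rows left, the comparison reduces to $1,000 for taking both boxes against $1,000,000 for taking only box B, which is the sense in which one-boxing is always better under an infallible predictor.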

William Lane Craig has suggested that, in a world with perfect predictors (or time machines, because a time machine could be used as a mechanism for making a prediction), retrocausality can occur.[8] If a person truly knows the future, and that knowledge affects their actions, then events in the future will be causing effects in the past. The chooser's choice will have already caused the predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and choosers will do whatever they're fated to do. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Put another way, this paradox can be equivalent to the grandfather paradox; the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions.[9]

Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous—a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street.[10]

Andrew Irvine argues that the problem is structurally isomorphic to Braess' paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.[11]

Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based, and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor's prediction, for example by committing to taking only one box. Burgess argues that after the first stage is done, the player can decide to take both boxes A and B without influencing the predictor, thus reaching the maximum payout.[12] This assumes that the predictor cannot predict the player's thought process in the second stage, and that the player can change their mind at the second stage without influencing the predictor's prediction. Burgess says that given his analysis, Newcomb's problem is akin to the toxin puzzle.[13] This is because both problems highlight the fact that one can have a reason to intend to do something without having a reason to actually do it.

Consciousness

Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person.[14] Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the chooser, then the chooser cannot tell whether they are standing in front of the boxes in the real world or in the virtual world generated by the simulation in the past. The "virtual" chooser would thus tell the predictor which choice the "real" chooser is going to make.

Fatalism

Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome.[15]

Extensions to Newcomb's problem

Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature.[1] For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed.[16]

The meta-Newcomb problem

Another related problem is the meta-Newcomb problem.[17] The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a "meta-predictor" who has reliably predicted both the player's and the predictor's choices in the past, and who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision."

In this situation, a proponent of choosing both boxes is faced with the following dilemma: if the player chooses both boxes, the predictor will not yet have made its decision, and therefore a more rational choice would be for the player to choose box B only. But if the player so chooses, the predictor will already have made its decision, making it impossible for the player's decision to affect the predictor's decision.

Notes

  1. ^ a b Robert Nozick (1969). "Newcomb's Problem and Two Principles of Choice". In Rescher, Nicholas (ed.). Essays in Honor of Carl G. Hempel (PDF). Springer.
  2. ^ Gardner, Martin (March 1974). "Mathematical Games". Scientific American. p. 102. Reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1)
  3. ^ "Causal Decision Theory". Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Stanford University. Retrieved 3 February 2016.
  4. ^ a b c Wolpert, D. H.; Benford, G. (June 2013). "The lesson of Newcomb's paradox". Synthese. 190 (9): 1637–1646. doi:10.1007/s11229-011-9899-3. JSTOR 41931515.
  5. ^ Bellos, Alex (28 November 2016). "Newcomb's problem divides philosophers. Which side are you on?". the Guardian. Retrieved 13 April 2018.
  6. ^ Bourget, David; Chalmers, David J. (2014). "What do philosophers believe?". Philosophical Studies. 170 (3): 465–500.
  7. ^ Christopher Langan. "The Resolution of Newcomb's Paradox". Noesis (44).
  8. ^ Craig, William Lane (1987). "Divine Foreknowledge and Newcomb's Paradox". Philosophia. 17 (3): 331–350. doi:10.1007/BF02455055.
  9. ^ Craig, William Lane (1988). "Tachyons, Time Travel, and Divine Omniscience". The Journal of Philosophy. 85 (3): 135–150. doi:10.2307/2027068. JSTOR 2027068.
  10. ^ Drescher, Gary (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. ISBN 978-0262042338.
  11. ^ Irvine, Andrew (1993). "How Braess' paradox solves Newcomb's problem". International Studies in the Philosophy of Science. 7 (2): 141–60. doi:10.1080/02698599308573460.
  12. ^ Burgess, Simon (January 2004). "Newcomb's problem: an unqualified resolution". Synthese. 138 (2): 261–287. doi:10.1023/b:synt.0000013243.57433.e7. JSTOR 20118389.
  13. ^ Burgess, Simon (February 2012). "Newcomb's problem and its conditional evidence: a common cause of confusion". Synthese. 184 (3): 319–339. doi:10.1007/s11229-010-9816-1. JSTOR 41411196.
  14. ^ Neal, R. M. (2006). "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning". arXiv:math.ST/0608592.
  15. ^ Dummett, Michael (1996), The Seas of Language, Clarendon Press Oxford, pp. 352–358
  16. ^ Piotrowski, Edward; Sładkowski, Jan (2003). "Quantum solution to the Newcomb's paradox". International Journal of Quantum Information. 1 (3): 395–402. arXiv:quant-ph/0202074. doi:10.1142/S0219749903000279.
  17. ^ Bostrom, Nick (2001). "The Meta-Newcomb Problem". Analysis. 61 (4): 309–310. doi:10.1093/analys/61.4.309.

References