How can being a good man complicate the Prisoner’s Dilemma? Ben Labe explains.
Two essential features have long conspired to make the Prisoner’s Dilemma one of the most troubling and fought-over games ever devised: its simple setup and its tendency to promote antisocial outcomes. Neither feature is troubling on its own, but together they call into question one of the most basic tenets of libertarian capitalism, namely, that day-to-day self-interested behavior should produce socially beneficial outcomes.
The Prisoner’s Dilemma is usually supposed to defy two efficiency concepts, Pareto Efficiency and the Invisible Hand Hypothesis, and theorists frustrated by it usually try to rescue the first. While in principle I am sympathetic to such an effort, restoring the Pareto Efficient outcome as an equilibrium solution to the Prisoner’s Dilemma is not the focus of this entry. Instead, I would like to identify a second kind of pathology inherent in the Prisoner’s Dilemma, one that I believe affects traditional Game Theory and Consumer Choice Theory as a whole but that is consistently overlooked. In particular, this pathology poses a serious problem for the axiom of consequentialism.
♦◊♦
Because I first conceived of how to express this problem while considering the Prisoner’s Dilemma, that is how I will present it. By way of example, we will consider a case in which someone is trying to impose such a scenario on Immanuel Kant. You’ll see why I chose him in a minute.
First, let’s review the basic structure of the Prisoner’s Dilemma. Usually, the game consists of two players, each facing two choices. For now, we’ll say that those choices are to “Defect” (D) or to “Cooperate” (C). Together, the players face the following payoff matrix.
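                   Cooperate        Defect
    C              (a, a)           (b, c)
    D              (c, b)           (d, d)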
The rows (labelled “C” and “D”) represent the possible choices of player 1, and the columns (labelled “Cooperate” and “Defect”) represent the possible choices of player 2. In each payoff box, player 1’s payoff is listed first and player 2’s is listed second. I have made the payoffs symmetric for both players, but this need not be the case. All that we really need is for the following ordering to hold: c > a > d > b. The fact that c > a and d > b implies that the self-interest motive will induce each player to defect, while a > d establishes the Pareto sub-optimality of the resulting (D, Defect) outcome.
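To make the logic concrete, here is a minimal sketch in Python, using illustrative numbers c = 5, a = 3, d = 1, b = 0 chosen only to satisfy the ordering, that checks both claims at once: defecting strictly dominates, and yet mutual defection is Pareto-dominated by mutual cooperation.

    # Illustrative payoffs satisfying c > a > d > b.
    c, a, d, b = 5, 3, 1, 0

    # Player 1's payoff for each (own choice, opponent's choice) profile.
    payoff = {("C", "Cooperate"): a, ("C", "Defect"): b,
              ("D", "Cooperate"): c, ("D", "Defect"): d}

    # D strictly dominates C: it does better against each of player 2's choices.
    d_dominates = all(payoff[("D", col)] > payoff[("C", col)]
                      for col in ("Cooperate", "Defect"))

    # Yet (D, Defect) is Pareto-dominated by (C, Cooperate), since a > d.
    pareto_suboptimal = a > d

    print(d_dominates, pareto_suboptimal)  # True True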
However, now suppose that the philosopher Kant is player 1, and the “Defect” action is to tell a lie. This leads to the following normal form representation, where only the actions have changed:
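                        Cooperate        Defect
    Tell the truth      (a, a)           (b, c)
    Lie                 (c, b)           (d, d)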
To provide some context for why I chose Kant in particular, here is an excerpt from his “On a Supposed Right to Lie because of Philanthropic Concerns”: “To be truthful (honest) in all declarations is…a sacred and unconditionally commanding law of reason that admits of no expediency whatsoever.” According to Kant, therefore, there is no situation in which lying is preferred to telling the truth. The ethical soundness of this principle notwithstanding, let us suppose for the sake of argument that the player Kant truly lives by it.
Under these circumstances, the preferred choice for player 1 is clearly not to lie; telling the truth is always preferred to lying. In response to such an argument, most game theorists counter that it demonstrates a misunderstanding of the notion of utility. The payoffs above don’t represent monetary payoffs; they represent real payoffs in the form of an ordering over possible outcomes. If you think that cooperating is a good thing, then that will be a component of your payoff when you cooperate. To put it more generally, they will hold that whatever reasons you could possibly have for acting one way or another will be accounted for in your payoff function. So, if you like cooperating so much that it reverses the direction of the inequalities above, then you were never really facing a prisoner’s dilemma in the first place. To put you in that situation, all we would need to do is provide enough extra incentives to make defecting a dominant strategy once again. The game theorist’s presumption, moreover, is that such a thing can always be done.
In player Kant’s case, however, piling on extra incentives will never be enough. Taking a and b down to negative infinity won’t work, because to Kant, c and d are already there. Nor will any process of adding an infinite number of incentives to c and d be enough to draw them from the abyss in which they began.
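One way to picture this is to treat player Kant’s preferences as lexicographic: he first asks whether an action involves lying, and only then compares ordinary payoffs. The following sketch is my own illustration, not part of the game’s formal setup; the pairing of a truthfulness flag with a numeric payoff is an assumption made for the example. It shows that no finite bonus attached to lying can ever reverse the first comparison.

    # A lexicographic preference for player Kant: honesty is compared
    # first, and the ordinary payoff matters only as a tie-breaker.
    def kant_prefers(outcome_a, outcome_b):
        """Return True if Kant prefers outcome_a to outcome_b.
        Each outcome is a (lies, payoff) pair."""
        lies_a, payoff_a = outcome_a
        lies_b, payoff_b = outcome_b
        if lies_a != lies_b:
            return not lies_a        # truth-telling always wins outright
        return payoff_a > payoff_b   # payoffs decide only between equals

    truth = (False, 1)                # tell the truth and accept payoff d
    lie_plus_bonus = (True, 10**100)  # lie, sweetened by any finite bonus

    print(kant_prefers(truth, lie_plus_bonus))  # True, whatever the bonus

Lexicographic orderings of this kind are a standard example of preferences that no single real-valued utility function can represent, which is precisely the sense in which incentives piled onto c and d can never rescue them.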
I hope that you are now convinced that when the defect action is to lie, the player Kant can never be made to face the sought-after ordering, and that this is because he possesses reasons that utilities simply cannot account for within the Prisoner’s Dilemma framework. Since his reasons are deontological rather than consequentialist, his action set is automatically restricted.
♦◊♦
I can now provide a general formulation of what I perceive to be the problem: if the goal of game theory is to describe, explain, and predict rational behavior, then why do we insist upon ignoring one of the two major types of reasons that rational beings give, either by couching them within the other category or by denying their existence altogether? It seems that game theorists have a choice to make. Either we admit that there are certain games which lack universality, or we look for ways to loosen the axiom of consequentialism enough for actions themselves to play a serious constitutive role in influencing game theoretic outcomes. Either way, we must acknowledge that trying to subsume a player’s reasons under the vacuous guise of utility is not always possible, let alone preferable.
Originally appeared at The Unwelcome Pundit
Interesting…so in other words, you’re saying that it doesn’t make sense to broadly put people in a consequentialist (determining morality by consequences) or virtuist (determining morality by inherent virtues, beliefs, or doctrines) camp? Either that or I didn’t get the article. 🙂