Chapter 2 Make Ratifiable Decisions
A rational chooser knows what they are doing, and thinks that it is for the best. That is, they think that there is nothing else they could be doing that would be better. This book defends a version of decision theory that starts, and largely ends, with this principle. Properly understood, this is all there is to decision theory. But how to properly understand it will be the subject of much of this book.
The principle I just stated is backwards looking. It says that the chooser must think that the decision is for the best when they make it. It doesn’t say much about how they come to make that decision, or whether the decision makes sense given the views the chooser has at the start of deliberation. That’s by design. Decision theory is the theory of when decisions can be defended. Or, at least, that’s what I’ll argue in this book.
I’m mostly going to be concerned with a special class of decision problems: those involving demons who have spectacular predictive powers. These have been a particular focus of decision theorists for the last half-century. In keeping them center stage I am, in this one respect at least, following tradition. But I will make use of the principle that our theory of how to make decisions when demons are around should be consistent with our theory of how to make decisions when demons are not around. And the motivations for the two parts of the theory should be consistent as well. This turns out to be a somewhat substantive constraint.
Demons predict what other people will choose, make moves accordingly, and these moves make a difference to the consequences of other choices. That’s to say, demons behave just like the rational players in orthodox game theory. Interacting with demons is, at a fairly deep level, playing games with them. So we should expect game theory to have something to tell us about how those interactions go. This isn’t a novel point; I owe it to William Harper (1986). But it is going to be central to the plot of this book.
The next three sections spell out the points made in each of the last three paragraphs. And then I’ll close the chapter by setting out a generic version of the main example of the book, and going over the plans for the rest of the book.
2.1 Basic Decision Theory
2.1.1 Simple Decision Problems
A simple decision problem starts with a table like 2.1.
|   | L | R |
|---|---|---|
| U | \(v_{11}\) | \(v_{12}\) |
| D | \(v_{21}\) | \(v_{22}\) |
The rows list the options that the chooser, who I’ll mostly call Chooser from now on, has. The columns list the possible states of the world. And the cells set out the value to Chooser of each of these option-state pairs. Just to make the notation easier to remember, I’ve written \(v_{ij}\) for the value of the outcome when Chooser selects the option in row \(i\), and the world is in state \(j\).
Now there are a lot of questions one could ask about that last paragraph, and I’ll spend section D going over five such questions at some length. But for now let’s start with that basic picture. In general there are more than two possible options and more than two possible states, but let’s begin with this simple case and work up to the more complicated ones.
In order to make a decision, Chooser typically needs one more piece of information: how likely are the states \(L\) and \(R\)? Let’s add that information in: the probability of \(L\) is \(p_1\), and the probability of \(R\) is \(p_2\). What do we mean by ‘probability’ here? Good question, and one that I’ll spend a lot of time on in chapter 6. For now, just use its colloquial meaning.
2.1.2 Defining Basic Decision Theory
Given all these facts, Chooser can assign a value to an option using the following formulae:
\[\begin{align*} V(U) &= p_1 v_{11} + p_2v_{12} \\ V(D) &= p_1 v_{21} + p_2v_{22} \end{align*}\]
And Chooser is rational iff they choose an option with maximal value.
In the more general case where \(O_i\) is an arbitrary option, there are \(k\) possible states, the probability of state \(S_j\) is \(p_j\), and the value of the combination of option \(O_i\) and state \(S_j\) is \(v_{ij}\), the value of option \(O_i\) is given by this formula.
\[ V(O_i) = \sum_{j=1}^k p_j v_{ij} \]
And in the even more general case where there are continuum-many states, we need to use integrals to work out the value of each option. But these complexities aren’t going to be particularly relevant to our story, so I’ll return to the special two-option, two-state case, and note when the extra generalities are needed.
The formulae above are what are usually called the expected values of each option, and the decision theory I’ve just stated is that if Chooser is rational, they maximise the expected value of their choice, given this probability distribution over the states of the world. Let’s restate that not using variables, but explicitly using probabilities and values. Here \(Pr\) is the probability function relevant to Chooser’s decision, and \(V\) is the value function. I’ll use concatenation for conjunction, so \(UL\) means that both \(U\) and \(L\) are true, i.e., that the first option is chosen and the first state is actualised. And let \(O\) be an option, in this case a member of {U, D}.
\[ V(O) = Pr(L) V(OL) + Pr(R) V(OR) \]
I’ll call the theory that values each option this way, and says that rational choosers maximise value, Basic Decision Theory. It could just as easily be called the Crude Savage Decision Theory. The ‘Savage’ there is because the formula at the heart of it is the same formula that Savage (1954) puts at the heart of his decision theory. But the ‘Crude’ is there because Basic Decision Theory leaves off all that Savage says about the nature of options and states. Steele and Stefánsson (2020, sect 3.1) have a good summary of what Savage says here. I’m not going to go into that. Instead I’ll note why something more needs to be said. As it stands, Basic Decision Theory leads to absurd outcomes.
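To fix ideas, here is a minimal sketch of Basic Decision Theory in code. The numbers are made up purely for illustration; the only substantive content is the expected-value formula above.

```python
# A minimal sketch of Basic Decision Theory: an option's value is the
# probability-weighted sum of its outcome values, and a rational chooser
# picks an option with maximal value. All numbers here are illustrative.

def basic_value(probs, values):
    """Expected value of one option: sum_j p_j * v_ij."""
    return sum(p * v for p, v in zip(probs, values))

def basic_choice(probs, table):
    """Return the option (row label) with maximal value."""
    return max(table, key=lambda option: basic_value(probs, table[option]))

# Probabilities of states L and R, and a payoff table shaped like
# table 2.1, with made-up values v_ij.
probs = [0.3, 0.7]                    # p1 = Pr(L), p2 = Pr(R)
table = {"U": [10, 0], "D": [2, 5]}   # [v11, v12] and [v21, v22]

# V(U) = 0.3*10 + 0.7*0 = 3.0; V(D) = 0.3*2 + 0.7*5 = 4.1, so D is chosen.
```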
2.1.3 Why Basic Decision Theory Fails
Consider the St. Crispin’s Day speech that Shakespeare has Henry V give before the Battle of Agincourt. (I’m indebted to a long ago conversation with Jamie Dreier for pointing out the connection between this speech and decision theory.) The background is that the English are about to go to battle with the French, and they are heavily outnumbered. Westmoreland wants to wait for more troops, and Henry does not, offering this reasoning.
What’s he that wishes so?
My cousin Westmoreland? No, my fair cousin;
If we are marked to die, we are enough
To do our country loss; and if to live,
The fewer men, the greater share of honor.
God’s will! I pray thee, wish not one man more.
It looks like Henry is suggesting that table 2.2 is the right model for their decision.
|        | Victory | Defeat |
|--------|---------|--------|
| Attack | \(a\)   | \(c\)  |
| Wait   | \(b\)   | \(d\)  |
Henry argues, not implausibly, that \(a > b\), since they will get more honour, and \(c > d\), since they will lose fewer men. Now it doesn’t matter what the probabilities of victory and defeat are. To see this, call them \(x\) and \(y\) respectively, and note that the two facts Henry mentions suffice to guarantee that \(ax + cy > bx + dy\), assuming just that the probabilities are nonnegative. So great, Henry is right, and they should attack. And they do, and they win, and all’s well that ends well.
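Henry’s argument is a dominance argument, and it can be checked numerically. In this sketch the payoff values are made up, constrained only by the two inequalities Henry cites.

```python
# A numeric check of Henry's dominance reasoning. The payoffs are made up,
# constrained only by a > b (more honour if they attack and win) and
# c > d (fewer men lost if they attack and lose).
a, b, c, d = 10, 8, 1, 0

# For every probability x of victory (with y = 1 - x), attacking has
# higher expected value: a*x + c*y > b*x + d*y.
attack_dominates = all(
    a * x + c * (1 - x) > b * x + d * (1 - x)
    for x in [i / 100 for i in range(101)]
)
# attack_dominates is True, whatever the probability of victory is
```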
No! That’s an absurd decision, even if it ended well on this occasion. Note that Henry’s reasoning here is completely general: you could say the same before every battle. And it isn’t true that you should always rush into battle with whoever you have on hand.
At one level, it’s easy to say what has gone wrong here. There is too tight a connection between the choice Henry makes, and which state of the world is actual. But, and this is the philosophical problem, just what is the problematic connection between the choice and the state? This isn’t obvious, because there are two different connections between the choice and the state, and it’s a matter of some philosophical import which of them matters.
On the one hand, there is an evidential connection between the choice and the state. Learning that Henry has decided to go to battle rather than wait for reinforcements is evidence that he will lose. Misleading evidence, as it turned out, but certainly evidence. And maybe what’s gone wrong in Henry’s reasoning is that he has ignored that evidential connection in his reasoning.
On the other hand, there is a causal connection between the choice and the state. It’s possible, given Henry’s evidential situation at the time he makes the decision, not just that he is defeated, but that his rushing into battle causes his defeat. Relatedly, it’s possible, for all Henry knows, that rushing into battle lowered the objective chance of his winning. Those last two sentences didn’t quite say the same thing, and the differences between them will matter a little going forward, but for now we’ll slide over their differences, and just note that there is in some natural sense a causal connection between Henry’s decision and the resulting state of the battle.
So here we get to a point of common ground among contemporary decision theorists. It will, more or less, be the last point of common ground on the journey; from here everything gets contentious. When there is both an evidential and a causal connection between the possible choices and the possible states of the world, it is inappropriate to use Basic Decision Theory to make a decision. Indeed, in these cases, Basic Decision Theory will often validate the wrong decision.
It’s not quite a universal view, and we’ll come back in section E to people who don’t believe it, but there is another very widely accepted claim in the vicinity of this one. When there is no evidential or causal connection between the possible choices and the possible states, most theorists think Basic Decision Theory recommends the right choice. Now they might not say, and typically do not say, that it makes the right recommendations for the right reasons. But they do say, at least most of them, that it gets the verdicts right. In the favored lingo of twentieth-century philosophers, it is extensionally adequate in these cases.
So that covers the cases where there is both an evidential and a causal connection (Basic Decision Theory gets things wrong) and the cases where there is neither (Basic Decision Theory gets things right). But what about the cases where there is one such connection but not the other? We’re all taught that correlation is not causation. What happens when there is correlation but not causation between the choices and the states? Then things get really interesting, and that’s the debate we’re going to jump into.
2.2 Some Theories of Decision
2.2.1 Newcomb Problems
It’s not at all obvious how there could be a case where the possible choices and possible states could be causally connected but not evidentially connected. I’m going to set the possibility of such a case aside, at least until someone shows me what such a case might look like. Because there is a very natural way that the choices and states could be evidentially but not causally connected: they could have a common cause. And one way that could come about is if the states are predictions of Chooser’s choice, made by someone who has a deep insight into Chooser’s choice dispositions.
We’ll call that someone Demon, and a decision problem in which the states are based on Demon’s predictions a Demonic decision problem. I’ll have much more to say about demons in section B, but for now all we need to know is that Demon has the means, motive, and opportunity to correctly predict what strategy a Chooser will adopt.
The most famous Demonic decision problem is Newcomb’s Problem (Nozick 1969). Chooser is presented with two boxes, one opaque and one clear. They have a choice between taking just the opaque box, i.e., taking one box, and taking both the opaque and the clear box, i.e., taking two boxes. The clear box has a good of value \(y\) in it. The contents of the opaque box are unknown. Demon has predicted the chooser’s choice, and has placed a good of value \(x\) in it if they predict Chooser will take one box, and left it empty (which we’ll assume has value 0) if they predict Chooser will take both boxes. The key constraint is \(x > y\). In most versions the value given for \(x\) is massively greater than that for \(y\), but the theories that are developed for the problem typically are sensitive only to whether \(x\) is larger than \(y\), not to how much larger it is.
Demon is really good at their job. They are not a time traveller; they are making a prediction that is not causally influenced by what the Chooser actually does. But they are really good. I’ll assume that they are arbitrarily good, and come back to just what I mean by that in section B.
I’ll write 1 and 2 for the two choices, and P1 and P2 for the predictions. In general, where there is a demon who makes these kinds of predictions, I’ll write ‘PX’ to mean the state of choice X being predicted. Table 2.3 sets out the problem.
|   | P1 | P2 |
|---|----|----|
| 1 | \(x\) | \(0\) |
| 2 | \(x + y\) | \(y\) |
A large part of late twentieth-century decision theory was given over to discussing this problem. So-called Causal Decision Theorists argued in favor of taking both boxes. The primary argument is that whatever the demon has done, the chooser gets a bonus of \(y\) for taking the second box. It’s good to get guaranteed bonuses, so they should take the bonus. This is basically the view I’m going to defend in this book, though with a number of deviations from the way it was defended in these classic works. So-called Evidential Decision Theorists argued in favor of taking just the one box. The primary argument is that the chooser who takes one box expects to get \(x\), the chooser who takes both boxes expects to get \(y\), it’s better to make a choice that one expects to do better, and \(x > y\), so it’s better to take one box.
Both of these arguments trace back to the original presentation of the problem by Nozick. He named the problem after William Newcomb, a physicist from whom he learned of the problem. For much more detail on the background to this problem, and the motivation for the two prominent solutions, see Weirich (2020). Let’s turn to looking at those two solutions in more detail.
2.2.2 Introducing EDT and CDT
The two most famous theories in recent work in decision theory are Causal Decision Theory and Evidential Decision Theory. I used these terms in subsection 2.2.1 without defining them. It’s time to do that now.
As I understand the way the terms are used, and indeed as I’ll be using them, they are potentially misleading. Both of these are not really theories, but families of theories. Evidential Decision Theory (EDT) is a somewhat tighter family of theories than Causal Decision Theory (CDT), but neither is something that I would typically be happy calling a theory. In this section I’ll give somewhat imprecise descriptions of each ‘theory’, starting with EDT. In section 2.2.3 I’ll say why both of these are really theory schemata, and set out some of the more viable ways of making them into precise theories.
EDT, as I’m going to understand it, traces back to the first edition of Richard Jeffrey’s The Logic of Decision (Jeffrey 1965). The idea behind it is that what goes wrong with Henry’s reasoning at Agincourt is that he ignores the fact that rushing into battle lowers the probability that he will win. In fact, according to EDT, the probability that he will win doesn’t really matter to his decision. What matters is the probability that he will win if he attacks, and the probability that he will win if he waits for reinforcements. The value of each choice, according to EDT, is given by this formula
\[ V(O_i) = Pr(S_1 \mid O_i) V(S_1 O_i) + Pr(S_2 \mid O_i) V(S_2 O_i) \]
And, as before, the one rule in decision theory is that one should maximise value.
So EDT says that Basic Decision Theory is incorrect. It uses probabilities of states, i.e., terms like \(Pr(S_j)\), where it should use probabilities of states given choices, i.e., terms like \(Pr(S_j \mid O_i)\). That’s what goes wrong with Henry’s reasoning.
In Newcomb’s Problem, EDT says that one should take one box. Assume, for simplicity, that the probability that the demon will make correct predictions is 1. Then the value of taking one box is \(x\), the value of taking two boxes is \(y\), and by hypothesis \(x > y\), so one should take one box.
CDT, or at least the version we’re going to focus on for a while, traces back to David Lewis’s paper Causal Decision Theory (Lewis 1981a). Lewis actually has two aims in this paper: to set out a version of CDT, and to argue that the other versions don’t differ in significant ways from his version. It’s going to be somewhat important to the plotline of this book that Lewis’s second claim, that the various versions of CDT don’t greatly differ, is false. But the positive theory Lewis presents is interesting whether or not that second claim goes through, and that’s what we’ll focus on.
The idea is that Basic Decision Theory was not incorrect, as EDT says, but incomplete. It needs to be supplemented with rules about when the formula can be applied. In particular, we need to add that the states have to be causally independent of the options. In Lewis’s terminology, the states have to be ‘dependency hypotheses’. Each dependency hypothesis is something that the chooser has no causal influence over, and which determines, in conjunction with each possible act by the chooser, the probability of each possible outcome. If you apply the formula from Basic Decision Theory to cases where the states themselves depend (or even may depend) on the option, things go wrong. That’s what CDT says goes wrong in Henry’s case. It’s the right formula, and he applies it correctly, but he shouldn’t have started with simply win and lose as the states. Rather, he should have started with dependency hypotheses that do not causally depend upon his choices. For example, he could have started with the following three hypotheses: the troops we have now are enough to win; the troops we have now are not enough to win, but the troops we will have after reinforcements will be enough to win; and, even after getting reinforcements, we won’t have enough troops to win. Since the middle of those three states is very likely, and the value of waiting for reinforcements is higher in that state, he probably should have waited.
In Newcomb’s Problem, CDT says that one should take both boxes. What Demon predicts is not causally dependent on what Chooser selects. So we can use P1 and P2 as states. Let \(z\) be the probability of P1, and hence the probability of P2 is \(1 - z\). Then the expected value of taking one box is \(zx\), while the expected value of taking two boxes is \(zx + y\). Without yet knowing what \(z\) is, a question that will become rather important as we go on, we know that \(zx + y > zx\), so taking two boxes has higher value. So that’s what one should do.
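The two calculations can be put side by side in a short sketch. The particular values of \(x\) and \(y\) are made up, constrained only by \(x > y\).

```python
# A sketch of the EDT and CDT calculations for Newcomb's Problem.
# x and y are illustrative, constrained only by x > y.
x, y = 100, 10

# EDT, assuming the demon is perfectly accurate, so
# Pr(P1 | take one box) = Pr(P2 | take both boxes) = 1:
edt_one = 1 * x + 0 * 0            # V(1) = x
edt_two = 0 * (x + y) + 1 * y      # V(2) = y

# CDT, treating P1 and P2 as causally independent states with Pr(P1) = z.
# Any z in [0, 1] gives the same verdict.
z = 0.5
cdt_one = z * x + (1 - z) * 0          # V(1) = z*x
cdt_two = z * (x + y) + (1 - z) * y    # V(2) = z*x + y

# EDT favours one-boxing (since x > y); CDT favours two-boxing, by
# exactly the guaranteed bonus y, whatever z is.
```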
2.2.3 Making The Theories Precise
So that’s the basic picture of EDT and CDT. But as I alluded to earlier, setting out the basic picture isn’t quite the same thing as setting out a theory. In this section I’ll flag some factors that need to be settled to turn them into a theory.
- What are probabilities?
- Are they ex ante or ex post?
NOT COMPLETE
2.3 Games and Decisions
This section goes into a bit of detail about the connection between game theory and decision theory. If you want much more background on game theory, I’ve included some explanations of the key concepts in Appendix A. The point of this section is that the connection between game theory and decision theory is much tighter than a lot of theorists have realised.
2.3.1 Newcomb Games
Let’s start with an interesting variant of Newcomb’s Puzzle, one which is used to good effect by Frank Arntzenius (2008). Keep the contents of the boxes the same, including that Demon puts \(x\) into the first box iff they predict only one box will be taken. But this time both boxes are clear. Now Chooser can see exactly what is in each box before making a decision. What should they do?
We can model this problem using the tree in figure 2.1.
This representation should look familiar from game theory textbooks. It’s just a standard extensive form representation of a game where each player makes one move. Since we’ll be using trees like this a bit, I want to explain the notation.
The game starts at the hollow node, which in this case is at the top of the tree. At each node, we move along a path from wherever we are to a subsequent node. So each node gets labeled with who is making the choice, and the edges get labeled with the choices they can make. This game starts with the Demon predicting either that Chooser takes 1 box (this is the edge labeled P1) or that Chooser takes 2 boxes. Either way we get to a node where Chooser moves, either by taking 1 box or 2. It’s a solid node, which means (in the notation of this book) that it’s not where the game starts, and it’s not where the game ends. Then whatever happens, we get to a terminal node, here denoted with a square. At each terminal node we list the payouts.
But here we only listed the payouts to Chooser. To make something really into a game, there should be payouts for both players. What are Demon’s payouts? Well, what makes something the payout function for a player is that it takes higher values the more they get what they want. Since Demon is trying to predict Chooser, they want situations where their prediction is correct. So we can simply say that their payout is 1 for a correct prediction, and 0 for an incorrect prediction. That suggests the tree for Arntzenius’s ‘transparent box’ version of Newcomb’s Problem should look like figure 2.2.
I’ve put Demon’s payouts second, even though Demon moves first. The focus here is on Chooser, so they are player 1. When a game representation lists the payout in a situation as \(a, b\) that means that player 1 gets \(a\) and player 2 gets \(b\). In this case that means the chooser gets \(a\) and the demon gets \(b\).
In this book I’m mostly going to work with games where Demon’s payouts are either 1 for a correct prediction or 0 for an incorrect one. But once we’ve got the basic concept of Demon as a player getting payouts, we can set the demon up with other payouts too. And then we can bring just about any tool we like from contemporary game theory to bear on demonic decision theory.
That move, of treating Newcomb Problems as games, is taken straight from work by William Harper (1986). And it is going to be the central move in this book.
When we follow Harper’s lead and transform the original Newcomb Problem into a game, we get table 2.4.
|   | P1 | P2 |
|---|----|----|
| 1 | \(x, 1\) | \(0, 0\) |
| 2 | \(x + y, 0\) | \(y, 1\) |
Or, at least, that’s the so-called strategic form of the game. We can also represent it, as in figure 2.3, as a game that takes place over time.
The dashed line there represents that those two nodes are in what game theorists call an information set. That means that when the player to move reaches one of those nodes, all they know is that they are at some node in that set; they do not know which one. In this case, Chooser knows that they have to select 1 box or 2, and they know the payouts given their choice and Demon’s prediction. But they do not know what Demon predicted, so they do not know which node they are at.
This extensive form representation is in a way more accurate than the strategic form representation in the table above. It encodes that Demon goes first, which is something usually stressed in the story that is told about Newcomb’s Problem. But the table form is easier to read, and makes clearer that there is only one equilibrium of the game: Demon makes prediction P2 and Chooser chooses 2. So I’ll mostly use tables when they are possible. And they often are possible: lots of games can be turned into demonic decision problems like Newcomb’s Problem.
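The claim about equilibria can be confirmed by brute force. In this sketch the payouts \(x\) and \(y\) are made up, constrained only by \(x > y > 0\).

```python
# A brute-force search for pure-strategy equilibria of the Newcomb game
# in table 2.4, with illustrative payouts satisfying x > y > 0.
x, y = 100, 10

# payoffs[(chooser_move, demon_prediction)] = (chooser payout, demon payout)
payoffs = {
    ("1", "P1"): (x, 1),     ("1", "P2"): (0, 0),
    ("2", "P1"): (x + y, 0), ("2", "P2"): (y, 1),
}

def is_equilibrium(c, p):
    """Neither player can gain by unilaterally deviating."""
    c_ok = all(payoffs[(c, p)][0] >= payoffs[(alt, p)][0] for alt in "12")
    p_ok = all(payoffs[(c, p)][1] >= payoffs[(c, alt)][1]
               for alt in ("P1", "P2"))
    return c_ok and p_ok

equilibria = [(c, p) for c in "12" for p in ("P1", "P2")
              if is_equilibrium(c, p)]
# The only equilibrium is Chooser taking 2 and Demon predicting P2.
```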
2.3.2 Familiar Games
Much of what happens in this book comes from seeing demonic decision problems as games and, conversely, seeing games as potential demonic decision problems. So I want to spend a little time setting out how the translation between the two works.
Transforming a demonic decision problem into a game is easy. As I noted, you just replace the states generated by Demon’s choices with moves for Demon, and give them payout 1 if they predict correctly, and 0 otherwise.
You might worry that this only gives you cases where Demon is approximately perfect, but we also want cases where the demon is, say, 80% accurate. But that’s easy to do as well. In fact there are two ways to do it.
The first is what I’ll call the Selten strategy, because it gives the demon a ‘trembling hand’ in the sense of Selten (1975). Instead of letting Demon choose a state in the original problem, let Demon choose one of \(n\) buttons, where \(n\) is the number of choices the (human) chooser has. Each button is connected to a probabilistic device that generates one of the original states. If you want Demon to be 80% accurate, say the button \(b_i\) associated with option \(o_i\) outputs state \(s_i\) with probability 0.8, and each of the other states with probability \(\frac{0.2}{n - 1}\). And still say that Demon gets payout 1 for any \(i\) if the chooser selects \(o_i\) and the button generates state \(s_i\), and 0 otherwise.
The second is what I’ll call the Smullyan strategy, because it involves a Knights and Knaves puzzle of the kind that plays a role in several of Smullyan’s books, especially his (1978). Here the randomisation takes place before Demon’s choice. Demon is assigned a type, Knight or Knave. Demon is told of the assignment, but Chooser is not. If Demon is assigned type Knight, the payouts stay the same as in the game where Demon is arbitrarily accurate. If Demon is assigned type Knave, the payouts are reversed, and Demon gets payout 1 for an incorrect prediction.
There are benefits to each approach, and there are slightly different edge cases that are handled better by one or other version. I’m mostly going to stick to cases where Demon is arbitrarily accurate, but I need these on the table to talk about cases others raise where Demon is only 75-80% accurate. And in general either will work for turning a demonic decision problem into a game.
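Here is a sketch of the Selten construction in code. The function name and the 80% figure are mine; the point is just that the button mechanism yields a demon of any desired accuracy.

```python
# A sketch of the Selten ('trembling hand') construction: Demon presses
# the button matched to its prediction, and the button emits the predicted
# state with probability 0.8, spreading the remaining 0.2 evenly over the
# other states. The function name selten_demon is illustrative, not the
# book's notation.
import random

def selten_demon(prediction, states, accuracy=0.8, rng=random):
    """Return the state the pressed button generates."""
    others = [s for s in states if s != prediction]
    if rng.random() < accuracy:
        return prediction
    return rng.choice(others)

rng = random.Random(0)
states = ["PA", "PB"]
trials = 10_000
hits = sum(selten_demon("PA", states, rng=rng) == "PA" for _ in range(trials))
# hits / trials should be close to the target accuracy of 0.8
```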
Turning games into demonic decision problems is a bit more interesting. Start with a completely generic twoplayer, twooption, simultaneous move, symmetric game, as shown in table 2.5. We won’t only look at symmetric games, but it’s a nice way to start.
|   | A | B |
|---|---|---|
| A | \(x, x\) | \(y, z\) |
| B | \(z, y\) | \(w, w\) |
In words, what this says is that each player chooses either A or B. If they both choose A, they both get \(x\). If they both choose B, they both get \(w\). And if one chooses A and the other chooses B, the one who chooses A gets \(y\) and the one who chooses B gets \(z\). (Note that the payouts list row’s payment first, if you’re struggling to translate between the table and the text.) A lot of famous games can be defined in terms of restrictions on the four payout values. For example, a game like this is a Prisoners’ Dilemma if the following constraints are met.
- \(x > z\)
- \(y > w\)
- \(w > x\)
Some books will also add \(2x > y + z\) as a further constraint, but I’ll stick with these three.
Now to turn a game into a demonic decision problem, first replace column’s payouts with 1s and 0s, with 1s along the main diagonal, and 0s everywhere else. Table 2.6 shows what a generic symmetric game looks like after this transformation.
|   | A | B |
|---|---|---|
| A | \(x, 1\) | \(y, 0\) |
| B | \(z, 0\) | \(w, 1\) |
The next step is to replace Demon’s moves with states that are generated by Demon’s predictions. As before, I’ll put ‘P’ in front of a choice name to indicate the state of that choice being predicted. The result is table 2.7.
|   | PA | PB |
|---|----|----|
| A | \(x\) | \(y\) |
| B | \(z\) | \(w\) |
If we add the constraints \(x > z, y > w, w > x\), this is essentially a Newcomb Problem. I’m a long way from the first to point out the connections between Prisoners’ Dilemma and Newcomb’s Problem; it’s literally in the title of a David Lewis paper (Lewis 1979). But what I want to stress here is the recipe for turning a familiar game into a demonic problem.
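The recipe can be written out as a short function. The name `demonize` and the particular numbers are mine; the constraints on the numbers are just the Prisoners’ Dilemma constraints listed above.

```python
# A sketch of the recipe for turning a two-player game into a demonic
# decision problem: replace column's payouts with 1 on the main diagonal
# and 0 off it, then read off row's payouts as the decision table.
# The function name demonize is illustrative.

def demonize(game, row_moves, col_moves):
    """game maps (row move, col move) to (row payout, col payout).
    Returns the demonic decision table: keys pair each of Chooser's
    options with the states 'P<option>' of an option being predicted."""
    return {
        (r, "P" + c): game[(r, c)][0]
        for r in row_moves for c in col_moves
    }

# Prisoners' Dilemma payoffs satisfying x > z, y > w, w > x, with A the
# dominant option (illustrative numbers: y > w > x > z).
x, y, z, w = 1, 3, 0, 2
pd = {
    ("A", "A"): (x, x), ("A", "B"): (y, z),
    ("B", "A"): (z, y), ("B", "B"): (w, w),
}
newcomb = demonize(pd, "AB", "AB")
# newcomb now has the shape of table 2.7: A-row (x, y), B-row (z, w).
```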
We can do the same thing with Chicken. The toy story behind Chicken is that two cars are facing off at the end of a road. They will drive straight at each other, and at the last second, each driver will choose to swerve off the road, which we’ll call option A, or stay on the road, which we’ll call option B. If one swerves and the other stays, the one who stays is the winner. If they both swerve they both lose and it’s boring, and if they both stay it’s a fiery crash. So in terms of the payouts in the general symmetric game, the constraints are:
- \(x < z\)
- \(y >> w\)
- \(x >> w\)
Just what it means for one value to be much more than another, which is what I mean by ‘\(>>\)’, is obviously vague. Table 2.8 gives an example with some numbers that should satisfy it.
|   | A | B |
|---|---|---|
| A | \(0, 0\) | \(0, 1\) |
| B | \(1, 0\) | \(-100, -100\) |
Replace the other driver, the one who plays column in this version, with a Demon, who only wants to predict row’s move. The result is table 2.9.
|   | A | B |
|---|---|---|
| A | \(0, 1\) | \(0, 0\) |
| B | \(1, 0\) | \(-100, 1\) |
All I’ve done to generate table 2.9 is replace column’s payouts with 1s on the main diagonal, and 0s elsewhere. The next step is to replace the demonic player with states generated by Demon’s predictions. The result is table 2.10.
|   | PA | PB |
|---|----|----|
| A | \(0\) | \(0\) |
| B | \(1\) | \(-100\) |
And table 2.10 is just the Psychopath Button example that Andy Egan (2007) raises as a problem for Causal Decision Theory.
Another familiar game from introductory game theory textbooks is matching pennies. This is a somewhat simplified version of rock-paper-scissors. Each player has a penny, and they reveal their pennies simultaneously. They can either show it with the heads side up (option A), or the tails side up (option B). We specify in advance who wins if they show the same way, and who wins if they show opposite ways. So let’s say column will win if both coins are heads or both are tails, and row will win if they are different. The payouts are shown in table 2.11.
|   | A | B |
|---|---|---|
| A | \(0, 1\) | \(1, 0\) |
| B | \(1, 0\) | \(0, 1\) |
This isn’t a symmetric game, but it is already demonic. Column’s payouts are 1 in the main diagonal and 0 elsewhere. So we can convert it to a demonic decision problem fairly easily, as in table 2.12.
|   | PA | PB |
|---|----|----|
| A | \(0\) | \(1\) |
| B | \(1\) | \(0\) |
And table 2.12 is the familiar problem Death in Damascus from Gibbard and Harper (1978).
Let’s do one last one, starting with the familiar game Battle of the Sexes. Row and Column each have to choose whether to do R or C. They both prefer doing the same thing to doing different things. But Row would prefer they both do R, and Column would prefer they both do C. (The original name comes from a version of the story where Row and Column are a heterosexual married couple, and Row wants to do some stereotypically male thing, while Column wants to do some stereotypically female thing. That framing is tiresome at best, but the category of asymmetric coordination games is not, hence my more abstract presentation.) So table 2.13 is one way we might think of the payouts.
|   | R | C |
|---|---|---|
| R | \(4, 1\) | \(0, 0\) |
| C | \(0, 0\) | \(1, 4\) |
As it stands, that’s not a symmetric game. But we can make it a symmetric game by relabeling the choices. Let option A for each player be doing their favored choice, and option B be doing their less favored choice. That turns table 2.13 into table 2.14.
|   | A | B |
|---|---|---|
| A | \(0, 0\) | \(4, 1\) |
| B | \(1, 4\) | \(0, 0\) |
After making that change, change column’s payouts so that it is a demonic game. The result is table 2.15.
|   | A | B |
|---|---|---|
| A | \(0, 1\) | \(4, 0\) |
| B | \(1, 0\) | \(0, 1\) |
Finally, replace Demon’s choices with states generated by (probably accurate) predictions, to get the decision problem in table 2.16.
|       | PA       | PB       |
|-------|----------|----------|
| A     | \(0\)    | \(4\)    |
| B     | \(1\)    | \(0\)    |
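The instability that drives problems like this can be made explicit. Here is a minimal sketch (the dictionary encoding is mine, purely illustrative) checking that whichever option is chosen, conditional on the demon having accurately predicted that choice, the other option would have paid more:

```python
# Asymmetric Death in Damascus: payoff[act][state], where state PX is
# the state in which the demon predicted act X.
payoff = {
    "A": {"PA": 0, "PB": 4},
    "B": {"PA": 1, "PB": 0},
}

for act, other in (("A", "B"), ("B", "A")):
    state = "P" + act  # the demon's prediction is accurate
    # Conditional on one's own choice being predicted, the other
    # choice would have done better: neither option is stable.
    assert payoff[other][state] > payoff[act][state]
```

So if A is chosen, one wishes one had chosen B, and vice versa; the asymmetry only changes how much one wishes it.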
That decision problem is the asymmetric version of Death in Damascus from Richter (1984).
The point of this section has not just been to show that we can turn games into decision problems by treating one of the players as a predictor. That’s true, but not in itself that interesting. Instead I want to make two further points.
One is that most of the problems that have been the focus of attention in the decision theory literature in the past couple of generations can be generated from very familiar games, the kinds of games you find in the first one or two chapters of a game theory textbook. And the generation method is much the same in each case.
The second point is that most of the simple games you find in those introductory chapters turn out to result, once you transform them this way, in demonic decision problems that have been widely discussed. But there is one notable exception: there hasn’t been a huge amount of discussion of the demonic decision problem you get when you start with the game known as stag hunt. I’ll turn to that in the next subsection.
In later parts of the book, I’ll be frequently appealing to decision problems that are generated from other games that have been widely discussed by economic theorists. Most of these discussions are not particularly recent; the bulk of the work I’ll be citing is from the 1980s and 1990s, and I don’t take myself to be making a significant contribution to contemporary economic theorising. But what I want to point out is that there is a vast source of examples in the economic theory literature that decision theorists could be, and should be, discussing. And I’ve spent so long here on the translation between the two literatures in part because I think there are huge gains to be had from bringing these literatures into contact.
2.3.3 An Indecisive Example
This subsection is mostly going to be talking about games that are commonly known as stag hunts. Brian Skyrms has written extensively on why stag hunts are philosophically important (Skyrms 2001, 2004), and putting them at the center of the story is one of several ways in which this book is following Skyrms’s lead.
Stag hunts are symmetric two-player, two-option, simultaneous-move games. So they can be defined by putting constraints on the values in table 2.5. In this case, the constraints are
1. \(x > z\)
2. \(w > y\)
3. \(x > w\)
4. \(z + w > x + y\)
The name comes from a thought experiment in Rousseau’s Discourse on Inequality.
> They were perfect strangers to foresight, and were so far from troubling themselves about the distant future, that they hardly thought of the morrow. If a deer was to be taken, every one saw that, in order to succeed, he must abide faithfully by his post: but if a hare happened to come within the reach of any one of them, it is not to be doubted that he pursued it without scruple, and, having seized his prey, cared very little, if by so doing he caused his companions to miss theirs. (Rousseau 1913, 209–10)
Normally, option A is called hunting, and option B is called gathering. The game has two equilibria: both players hunt, or both players gather. So it’s unlike prisoners’ dilemma, which has only one equilibrium. And the more cooperative equilibrium, where both players hunt, is better. But, and this is crucial, it’s a risky equilibrium. To connect it back to Rousseau, the thought is that the players would both be better off if they both cooperated to catch the stag (or deer, in this translation). But cooperating is risky; if the players do different things, it is better to go off gathering berries (or bunnies) on one’s own than to try in vain to catch a stag single-handed.
And that’s what we see in the game. The first two constraints imply that the game is in equilibrium if the players do the same thing. The third constraint says that if they both hunt, option A, they are better off than if they both gather, option B. But the fourth constraint codifies the thought that this is a risky equilibrium. Even though the equilibrium where everyone hunts is better, there are multiple reasons we might end up at the equilibrium where everyone gathers.
One reason for this is that the players might want to minimise regret. Each play is a guess that the other player will do the same thing. If one plays A and guesses wrong, one loses \(w - y\) compared to what one could have received. If one plays B and guesses wrong, one loses \(x - z\). And the last constraint entails that \(x - z < w - y\). So playing B minimises possible regret.
Second, one might want to maximise expected utility, given uncertainty about what the other player will do. Since one has no reason to think the other player will prefer A to B or vice versa (both are equilibria), maybe one should give each of them equal probability. And then it will turn out that B is the option with highest expected utility. Intuitively, B is a risky option and A is a safe option, and when in doubt, perhaps one should go for the safe option.
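To spell this out: with probability \(\frac{1}{2}\) assigned to each of the other player’s options,

\[
EU(A) = \frac{x + y}{2}, \qquad EU(B) = \frac{z + w}{2},
\]

and the fourth constraint, \(z + w > x + y\), immediately gives \(EU(B) > EU(A)\).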
There are other arguments too for choosing to gather rather than hunt. To use a notion from Skyrms, gathering has a larger basin of attraction than hunting, and its basin of attraction includes the midpoint of the probability space. Those two motivations, plus the two from the previous two paragraphs, give us four motivations for gathering. If one wants to motivate gathering in general, it is actually important to get clear on which of the four one takes to be most important. For while they are equivalent in the two-option game (or problem), they are not in general equivalent. Indeed, all four come apart very soon once more options get added. Since I’m not defending gathering (I think decision theory should say either option is acceptable in stag hunts), I’m not going to continue exploring this line. But there are some interesting technical questions along these paths.
We can turn stag hunt into a decision problem by replacing the other player with Demon in a way that should be familiar by now. The result is the decision problem in table @(tab:stagdecision).
|       | PA       | PB       |
|-------|----------|----------|
| A     | \(x\)    | \(y\)    |
| B     | \(z\)    | \(w\)    |
In order to have less algebra, I’m often going to focus on a particular version of this decision, with the values shown in table @(tab:stagdecisionparticular). But it’s important that the main conclusions will be true of all decision problems based on stag hunt.
|       | PA  | PB  |
|-------|-----|-----|
| A     | 10  | 0   |
| B     | 6   | 8   |
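The claims above can be checked mechanically for these values. Here is a minimal sketch (the script is my own illustration, not anything from the literature) verifying the four constraints, the two arguments for gathering, and the crossover probability that marks the edge of gathering’s basin of attraction:

```python
# A quick check of the particular stag hunt, with x=10, y=0, z=6, w=8.
x, y, z, w = 10, 0, 6, 8

# The four stag hunt constraints all hold.
assert x > z and w > y and x > w and z + w > x + y

# Regret argument: playing A and guessing wrong loses w - y; playing B
# and guessing wrong loses x - z.  B minimises possible regret.
regret_A, regret_B = w - y, x - z
assert regret_B < regret_A            # 4 < 8

# Expected utility at 50/50 probabilities: B comes out ahead.
eu_A, eu_B = (x + y) / 2, (z + w) / 2
assert eu_B > eu_A                    # 7.0 > 5.0

# A is the better reply only when the probability p of the other
# hunting exceeds this crossover, so gathering's basin [0, 2/3)
# contains the midpoint 1/2.
crossover = (w - y) / ((x - z) + (w - y))
print(crossover)
```

The crossover point, \(\frac{2}{3}\) here, is where both the basin-of-attraction and midpoint motivations get their grip.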
These kinds of cases are important in the history of game theory because they illustrate in one game the two most prominent theories of equilibrium selection: risk dominance and payoff dominance (Harsanyi and Selten 1988). Risk dominance recommends gathering; payoff dominance recommends hunting. And most contemporary philosophical proponents of decisive decision theories (in the sense of decisiveness described back in section 1.3) fall into one of these two camps.
In principle, there are three different views that a decisive theory could have about stag decisions: always hunt, always gather, or sometimes do one and sometimes the other. A decisive theory has to give a particular recommendation on any given stag decision, but it could say that the four constraints don’t settle what that decision should be. Still, in practice all existing decisive theories fall into one or other of the first two categories.
One approach, endorsed for rather different reasons by Richard Jeffrey (1983) and Frank Arntzenius (2008), says to hunt, because in decisions with multiple equilibria one should choose the equilibrium with the best payout. Proponents of Evidential Decision Theory also say to hunt in these situations, because the all-hunt outcome is better than the all-gather outcome, and it doesn’t even matter whether these are equilibria. Another family of approaches says to always gather in stag decisions. For very different reasons, this kind of view is endorsed by Ralph Wedgwood (2013), Dmitri Gallow (2020) and Abelard Podgorski (2022). These three views differ from each other in how they motivate gathering, and in how they extend the view to other choices, but they all agree that one should gather in any stag decision.
I’m going to argue that all of these views are mistaken. Decision theory should not say what to do in these cases; either choice is rational.
Now I should note here that I’m slightly cheating in setting out the problem this way. The theory I defend says that in any decision problem like this with two equilibria, either choice can be rational. And that includes problems like, say, the one in table @(notstagdecision), where everyone I mentioned in the last few paragraphs would agree that A is the uniquely correct choice.
|       | PA  | PB  |
|-------|-----|-----|
| A     | 10  | 0   |
| B     | 2   | 4   |
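Running the same checks on these values shows what has changed: the fourth stag hunt constraint fails, and both the regret argument and the equal-probability expected utility argument now favour A. (As before, the script is just an illustrative sketch of mine.)

```python
# The values from the table above: x=10, y=0, z=2, w=4.
x, y, z, w = 10, 0, 2, 4

# Both ways of matching the prediction are still equilibria ...
assert x > z and w > y
# ... but the fourth stag hunt constraint fails, so this is no stag hunt.
assert not (z + w > x + y)

# The arguments that favoured B in the stag hunt now favour A.
regret_A, regret_B = w - y, x - z      # regrets: 4 vs 8
assert regret_A < regret_B             # A minimises possible regret

eu_A, eu_B = (x + y) / 2, (z + w) / 2  # 5.0 vs 3.0
assert eu_A > eu_B                     # A maximises 50/50 expected utility
```

So the usual decisive theories converge on A here, which is why the example looks like trouble for a view on which either option is rational.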
I certainly don’t want to lean too hard on the intuition that either option is rational in a stag hunt—though I do in fact think that it’s intuitive that either option is rational in a stag hunt. But if we were just leaning on intuitions, then this last example would be devastating to my theory, since it really isn’t particularly intuitive here that either option is rational. Thankfully, the argument, which I’ll set out in some detail in chapter 4, doesn’t appeal to these kinds of intuitions. Still, I think it’s useful to focus on stag hunts because, as Skyrms shows, they are so philosophically important. And they will be my canonical example of a problem where the right decision theory is Indecisive.