Chapter 4 Against Decisiveness
At the heart of causal ratificationism is the claim that in many decision problems, there is no uniquely rational solution. And this isn’t because the options are tied. Rather, it is because each of them is acceptable once it is made. Causal ratificationism is indecisive; it doesn’t always provide a verdict on what to do.
This chapter argues that the right decision theory, whatever it is, is indecisive. Or, to be a little more precise, it will be an argument that the right theory, whatever it is, is weakly indecisive. As I’ll define the terms, causal ratificationism is strongly indecisive, and the argument of this chapter won’t entail that the right theory should be strongly indecisive. In fact, in the whole book I’m not going to offer any kind of proof that the right theory is strongly indecisive. Instead, I’ll argue for a number of constraints, including weak indecisiveness, and argue that causal ratificationism is the best way to satisfy those constraints. The most distinctive of these constraints is weak indecisiveness, and the point of this chapter is to motivate that constraint.
The last two paragraphs used a lot of terminology, and section 4.1 clarifies and defines the key terms. Then in sections 4.2–??, I’ll argue that the right theory, whatever it is, is weakly indecisive. In section ?? I’ll show how this causes problems for most existing decision theories in philosophy, and compare it to existing objections to some of those theories. Finally in section ??, I’ll discuss how these considerations relate to the defence of the view I really believe: that the right decision theory is strongly indecisive.
4.1 What Is Decisiveness
I will say a decision theory is decisive iff for any decision problem, it says either:

- There is a uniquely best choice, and rationality requires choosing it; or
- There is a non-singleton set of choices, each of which is tied for being best, and each of which can be permissibly chosen.
A decision theory is decisive over binary choices iff it satisfies this condition for all decision problems where there are just two choices. Most decision theories in the literature are decisive, and of those that are not, most of them are at least decisive over binary choices. I’m going to argue that the correct decision theory, whatever it is, is indecisive. It is not, I’ll argue, even decisive over binary choices.
That definition, unfortunately, relies on two more terms that are not easy to define: decision problem and tie. I’ll deal with these in reverse order.
For decisiveness to be anything other than a trivial truth, it can’t just be that options are tied if each is rationally permissible. If that is what it meant for options to be tied, a decisive theory would just be one that either says one option is mandatory or many options are permissible. To make decisiveness a substantive claim, a different account of what it is for options to be tied is needed. I’ll borrow a technique from Ruth Chang (2002) to provide such an account. Some options are tied iff each is permissible, but this permissibility is sensitive to sweetening. That is, if options \(X\) and \(Y\) are tied, then for any positive \(\varepsilon\), the agent prefers \(X + \varepsilon\) to \(Y\). If both \(X\) and \(Y\) are permissible, and this dual permissibility persists when either is ‘sweetened’, i.e., replaced by an option that is improved by \(\varepsilon\), then they aren’t tied. My thesis, the thesis that the right theory is indecisive, is that the right decision theory says that sometimes there are multiple permissible options, and each of them would still be permissible if one of them were sweetened.
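Under straight expected-utility maximisation, for instance, co-permissible options are always tied in Chang’s sense: sweetening either one by any positive \(\varepsilon\) makes it uniquely permissible. A minimal sketch of that point (the function name and the numeric utilities are mine, chosen purely for illustration):

```python
EPSILON = 0.001  # any positive sweetening amount will do

def permissible(options):
    """Plain expected-utility maximisation: an option is permissible
    iff its expected utility is maximal among the options."""
    best = max(options.values())
    return {name for name, eu in options.items() if eu == best}

# X and Y are both permissible...
assert permissible({"X": 10.0, "Y": 10.0}) == {"X", "Y"}

# ...but this dual permissibility is sensitive to sweetening: improving
# X by any positive amount makes it the unique permissible option.
# So for the EU maximiser, X and Y are tied, not merely co-permissible.
assert permissible({"X": 10.0 + EPSILON, "Y": 10.0}) == {"X"}
```

An indecisive theory, by contrast, would sometimes keep both options permissible even after one was sweetened.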
I’m going to provide two notions of a decision problem, one concrete and one abstract. These will correspond to the two notions of decisiveness, weak and strong, that I’ve already mentioned.
First, the abstract sense. To specify an abstract decision problem, it suffices to describe the following four things. (Note that this is stipulative; I’m hereby defining what I mean by abstract decision problem.)

- What choices the chooser has;
- What possible states of the world there are (where it is understood that the choices of the chooser make no causal impact on which state is actual);
- What the probability is of being in any state, conditional on making each choice; and
- What return the chooser gets for each choice–state pair.
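These four components can be captured in a small data structure. Here is a minimal sketch; the names (`AbstractProblem`, `expected_return`) and the numeric payouts standing in for \(x_1, \ldots, x_4\) are my own, chosen for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AbstractProblem:
    """The four stipulated components of an abstract decision problem."""
    choices: tuple  # what choices the chooser has
    states: tuple   # causally independent possible states of the world
    prob: dict      # prob[(state, choice)] = Pr(state | choice)
    ret: dict       # ret[(choice, state)] = return for that choice-state pair

def expected_return(problem, choice):
    """Expected return of a choice, weighting each state by its
    probability conditional on that choice."""
    return sum(problem.prob[(s, choice)] * problem.ret[(choice, s)]
               for s in problem.states)

# Problem 1 from section 4.2, with illustrative numeric payouts and a
# perfectly accurate Demon, so Pr(PU | U) = Pr(PD | D) = 1.
p1 = AbstractProblem(
    choices=("U", "D"),
    states=("PU", "PD"),
    prob={("PU", "U"): 1.0, ("PD", "U"): 0.0,
          ("PU", "D"): 0.0, ("PD", "D"): 1.0},
    ret={("U", "PU"): 1, ("U", "PD"): 2,
         ("D", "PU"): 3, ("D", "PD"): 4},
)
```

Note that nothing beyond the four components appears in the structure; that is the point of the next paragraph.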
Most recent papers on decision theory do not precisely specify what they count as a decision problem, but they seem to implicitly use an account like this. It is very common in philosophical work on decision theory to see a vignette that settles nothing beyond these four things, and then the writer assumes that this or that decision theory should have something to say about the problem. (Commonly they will also make firm pronouncements about the intuitively right answer to this problem, and wield this as evidence that said decision theory is false.) So while this is stipulative, I don’t think it’s particularly distinctive; most theorists think of decision problems as something like this.
But look how much it leaves out! It says nothing about what time of day it is, what the weather is, how happy the chooser is feeling (unless this impacts the returns in the relevant sense), or many other things. Maybe those matter to decision theory. Maybe the right decision theory is CDT in the seminar room, and EDT in the pub, plus I guess a classification of possible situations into being more pub-like or more seminar-room-like. Then a decision problem needs a fifth clause, which specifies whether the chooser is in a pub or a seminar room. At the very least, I think we should have a language for discussing decision theory that lets CDT-in-the-seminar-room/EDT-in-the-pub be a statable theory.
To that end, say a concrete decision problem is a centered world that has a chooser at its center. The centered world will determine an abstract decision problem. The function from concrete decision problems to abstract decision problems is not trivial. What, for instance, does it mean to say that these are, and these are not, the choices available to a concrete chooser? But I assume there is some function. There is no function in the reverse direction; every abstract decision problem corresponds to many, many concrete decision problems.
A decision theory is strongly decisive iff it is decisive over abstract decision problems. That is, it is strongly decisive iff for any abstract decision problem, it says that either there is a unique rational choice in the problem, or that some options are tied. A decision theory is weakly decisive iff it is decisive over concrete decision problems. That is, it is weakly decisive iff for any concrete decision problem, it says that either there is a unique rational choice in the problem, or that some options are tied. A decision theory is weakly indecisive iff it is not strongly decisive, and strongly indecisive iff it is not weakly decisive.
Any strongly indecisive theory is weakly indecisive, but the converse is false. The CDT-in-the-seminar-room/EDT-in-the-pub theory is weakly indecisive. If it is presented with Newcomb’s Problem, it does not issue a verdict. It says both options are rationally consistent with the abstract structure of the problem. So it is not strongly decisive, which is to say it is weakly indecisive. But it is weakly decisive. In any concrete instance of Newcomb’s Problem, the chooser is either in the seminar room (or a seminar-room-like space) or the pub (or a pub-like space), and so there is only one rational choice. So it is not strongly indecisive.
The example I’ve used so far of a weakly but not strongly indecisive theory is not, I suspect, one that will appeal to many readers. There are, however, much more interesting weakly but not strongly indecisive theories. They will have to wait; for the next few sections, the focus will be on strongly decisive theories, and the development of an argument against them.
4.2 Six Decision Problems
The core of this chapter, and indeed the core of this book, revolves around the decision problems in tables 4.1 and 4.2. I’m going to argue that whichever moves are acceptable in one of those two problems are also acceptable in the other, for any real values of \(x_1, x_2, x_3, x_4, e\), and any value of \(p \in (0, 1)\). The problems are so named because they are going to be the first and sixth members of a sequence of problems that I’ll present shortly.
Table 4.1: Problem 1

|       | PU      | PD      |
|-------|---------|---------|
| U     | \(x_1\) | \(x_2\) |
| D     | \(x_3\) | \(x_4\) |

Table 4.2: Problem 6

|       | PU                 | PD      |
|-------|--------------------|---------|
| U     | \(px_1 + (1-p)e\)  | \(x_2\) |
| D     | \(px_3 + (1-p)e\)  | \(x_4\) |
Once I’ve argued that these problems should be treated the same way, I’ll then argue that no strongly decisive theory does treat them the same way, so all strongly decisive theories are false. But what are these problems? What do these tables mean?
In each case, Chooser has two options: U for Up, and D for Down. There is a demon, called Demon, who is arbitrarily good at predicting Chooser’s choices, but who is not causally influenced by what Chooser does.[^6] Demon will select either PU, meaning they predict Chooser will play U, or PD, meaning they predict Chooser will play D. The two two-way choices produce four possible outcomes, and the payouts to Chooser in each of those four possibilities are shown in the table. So these are just the kinds of problems that are familiar in the modern decision theory literature. What’s not familiar is the claim that Problem 1 and Problem 6 should be treated the same way. The argument for that goes via four more problems, and this section introduces them.
Problem 2 is just like Problem 1, except it provides more backstory about why Demon is so likely to predict Chooser correctly. It is a game with two players: Chooser and Demon. Chooser’s payouts are just as before; Demon gets zero payout for an incorrect prediction, and a positive payout for predicting correctly. And Demon is a very good predictor and an expected utility maximiser, so they will almost certainly make correct predictions. But note that Demon’s payouts are not the same in each case of a correct prediction. Because Demon is playing a game, not making a prediction, I’ve relabeled their moves. They are called B and C; option A will come in at the next step. Table 4.3 is the game table for Problem 2.
Table 4.3: Problem 2

|       | B          | C          |
|-------|------------|------------|
| U     | \(x_1, 1\) | \(x_2, 0\) |
| D     | \(x_3, 0\) | \(x_4, 2\) |
In standard presentations of problems like Problem 1, it is assumed that Demon moves first, but the move isn’t revealed until after Chooser moves. So while these problems are evidentially like simultaneous move games, there is a sense in which it is more accurate to represent them as game trees. So let’s include the game tree version of Problem 2 as well, as figure 4.1.
I don’t know of any argument for treating the table version and tree version of Problem 2 differently, so I will follow the standard assumption that these are two representations of essentially the same problem.
Problem 3 is just like Problem 2, except Demon is offered an exit strategy. When Demon moves, they can now choose A, B or C. If they choose B or C, the game continues as before. Chooser is told that Demon chose B or C, but not which one they chose, and knows all the payouts, and has to choose U or D. If Demon chooses A, however, the game ends, and Chooser gets payout \(e\), while Demon gets payout 1. Figure 4.2 is the tree for that game.
That would be enough to specify a game, but decision theorists typically want something more. If Demon predicts Chooser will play D, then Demon will clearly play C. But what will happen if Demon predicts Chooser will play U? In that case, Demon will get payout 1 from either A or B, and all we’ve been told is that Demon maximises expected utility. So let’s add one specification to the game. If Demon predicts that Chooser will play U, they will play B with probability \(p\), and A with probability \(1-p\).
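It may help to check the arithmetic behind this stipulation: conditional on having predicted U, Demon expects payout 1 from A and payout 1 from B, so every mixture of the two maximises Demon’s expected utility. A quick sketch (the variable names are mine):

```python
# Demon's payouts in Problem 3, conditional on having predicted U
# (so, by hypothesis, Chooser plays U with probability 1):
# A ends the game and pays Demon 1; B pays Demon 1 if Chooser plays U,
# and 0 if Chooser plays D.
pr_u_given_predict_u = 1.0  # Demon is arbitrarily good at predicting

demon_eu_a = 1.0
demon_eu_b = pr_u_given_predict_u * 1.0 + (1 - pr_u_given_predict_u) * 0.0

# Both options give Demon expected payout 1, so playing B with any
# probability p (and A with probability 1 - p) maximises expected utility.
for p in (0.0, 0.25, 0.5, 1.0):
    assert p * demon_eu_b + (1 - p) * demon_eu_a == 1.0
```

This is also why the weak-dominance worry in the next paragraph arises: A and B are tied in expectation, but A is at least as good in every case.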
Note that this suggests Demon might not be ideally rational. From Demon’s perspective, A weakly dominates B. But they have some probability of choosing B. If one thinks that ideal rationality requires eschewing weakly dominated options, this means Demon is not ideally rational. It was, however, never part of the story that Demon is ideal; just that they are a utility maximiser. And they can be, even if they choose B.
Problem 4 is just like Problem 3, except Chooser must select before being told whether Demon has adopted the exit strategy of A. Imagine, to make it vivid, that the game-master is impatiently waiting for Demon’s selection, but Demon is stuck in their room, taking their time. Chooser has to run, so the game-master says that Chooser should write their move in an envelope. If Demon chooses A, the envelope will be burned, since its contents won’t matter. If Demon chooses B or C, the envelope will be opened and what’s written in it will be Chooser’s move. Because Chooser doesn’t know which move Demon has made, the tree (shown in figure 4.3) looks a little different.
Since Demon has no evidence of Chooser’s move when Demon makes their one move, and Chooser has no evidence of Demon’s move when Chooser makes their move, this is effectively a one-shot simultaneous move game. So we can represent it just as well with table 4.4 as with a tree.
Table 4.4: Problem 4

|       | A        | B          | C          |
|-------|----------|------------|------------|
| U     | \(e, 1\) | \(x_1, 1\) | \(x_2, 0\) |
| D     | \(e, 1\) | \(x_3, 0\) | \(x_4, 2\) |
Since Demon is an external force that produces states of the world which Chooser has no causal influence over, we can just as well treat Demon’s moves as states. That will get us Problem 5, as shown in table 4.5. In that problem, the three moves from Problem 4 become three states. And Chooser knows that \(\Pr(C \mid D) = 1\), \(\Pr(B \mid U) = p\), and \(\Pr(A \mid U) = 1-p\), while all the other conditional probabilities of states given choices are zero.
Table 4.5: Problem 5

|       | A        | B          | C          |
|-------|----------|------------|------------|
| U     | \(e, 1\) | \(x_1, 1\) | \(x_2, 0\) |
| D     | \(e, 1\) | \(x_3, 0\) | \(x_4, 2\) |
In this problem, the two states A and B are separated out. But these are just two possible ways that the state PU could be realised. We could collapse these states into one, and replace the listed payouts with Chooser’s expected payouts, given that PU is realised and they make a particular choice. That will give us Problem 6, with the following familiar-looking table.
|       | PU                 | PD      |
|-------|--------------------|---------|
| U     | \(px_1 + (1-p)e\)  | \(x_2\) |
| D     | \(px_3 + (1-p)e\)  | \(x_4\) |
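The collapse from Problem 5 to Problem 6 is just a matter of taking expectations over the two ways PU can be realised. A quick check with illustrative numbers (the particular values and the function name are mine; the equivalence is meant to hold for any reals):

```python
from fractions import Fraction

# Illustrative values standing in for x_1, ..., x_4, e, and p.
x1, x2, x3, x4, e = 1, 2, 3, 4, 5
p = Fraction(1, 3)  # exact arithmetic, to avoid float comparison issues

# Problem 5: Chooser's payouts by (choice, state).
payout = {("U", "A"): e, ("U", "B"): x1, ("U", "C"): x2,
          ("D", "A"): e, ("D", "B"): x3, ("D", "C"): x4}

def collapsed_pu(choice):
    """Chooser's expected payout given that PU is realised: Demon,
    having predicted U, played B with probability p and A with
    probability 1 - p, whatever Chooser actually does."""
    return p * payout[(choice, "B")] + (1 - p) * payout[(choice, "A")]

# The collapsed cells match the PU column of the Problem 6 table.
assert collapsed_pu("U") == p * x1 + (1 - p) * e
assert collapsed_pu("D") == p * x3 + (1 - p) * e
```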
So those are our six problems. In the next section, I’ll argue that as theorists, we should think of all these problems the same way.
[^6]: For simplicity, I’m going to assume that the probability that Demon predicts correctly is 1. If you don’t want to allow that this probability can be 1 without there being a causal connection, it won’t make a huge difference to make it \(1 - \varepsilon\). It’s just important to note in Problems 4–6 that Demon acts as if the probability is 1. Even if there is some chance of Demon being wrong, Demon acts as if this isn’t possible. This is not a distinctive assumption; in most equilibrium concepts in game theory, an equilibrium involves the players believing that their predictions of the other players are correct with probability 1.