C Methodology
There are three primary sources of evidence available for constructing and evaluating decision theories: *principles*, *sameness of cases*, and *cases*. Most of the literature focusses on the third of these. I think that's a mistake; cases are by far the least reliable source of evidence. We should instead focus on the first two. That's what I'll do in this book, and the purpose of this section is to defend this methodological starting point.
By *principles*, I mean claims like *Preferences should be transitive*, or *Choosers should not regret their choices as soon as they are made*. These are very general claims about what the structure of one's choices should look like. There aren't a lot of principles like these that we can be antecedently very confident in. But there are some, and those we should hold on to barring extraordinary evidence.
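To fix ideas, the first of these principles can be stated formally. Writing \(\succeq\) for weak preference (the notation is mine, introduced just for illustration), transitivity is the requirement that

\[
\text{for all options } A, B, C:\ (A \succeq B \text{ and } B \succeq C) \Rightarrow A \succeq C.
\]

Stated this way, it is visibly a constraint on the structure of a whole preference ordering, not a verdict about any particular case.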
By *sameness of cases*, I mean claims that two particular decisions should get (in some sense) the same choice. Here is a familiar example from a non-Demonic decision problem. Chooser has a ticket to today's cricket match, and is deciding whether to go. Chooser enjoys watching cricket, but does not enjoy sitting around in the stands waiting for the rain to clear, and there is a good chance of rain. What should Chooser do? Well, we haven't said nearly enough to settle that, so let's ask something more precise. What more do we need to know to know what Chooser should do? We do need to know how much Chooser likes watching cricket, how much they dislike sitting around in the rain, how much they will have to pay to get to the ground, and how likely rain is. But we don't need to know how much Chooser paid for the ticket. That's a sunk cost. If we settle all the forward-looking parameters (likelihood of rain, utility of going under different scenarios, etc.), then changing the backward-looking ones (how much Chooser paid) doesn't make a difference. If it would be rational to stay home given all those parameters and having paid $10 for the ticket, it would be rational to stay home given all those parameters and having paid $100 for the ticket. Even in hard cases (and if the weather is bad enough, Chooser may face a very hard choice here), we often know clearly enough that some differences do not in fact make a difference.
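The sunk cost point can be made vivid with a toy expected utility calculation (the parameterisation is mine, not part of the vignette). Let \(p\) be the probability of rain, \(g\) the utility of watching cricket, \(r\) the utility of sitting through rain at the ground, \(c\) the cost of getting to the ground, \(h\) the utility of staying home, and \(s\) the sunk ticket price. If we insist on bookkeeping \(s\) at all, it gets subtracted from both options:

\[
EU(\mathrm{go}) = (1-p)g + pr - c - s, \qquad EU(\mathrm{stay}) = h - s.
\]

So the comparison

\[
EU(\mathrm{go}) - EU(\mathrm{stay}) = (1-p)g + pr - c - h
\]

does not mention \(s\): changing what Chooser paid cannot flip which option maximises expected utility.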
And by *cases* I mean claims about what Chooser should do in a particular vignette. These are the main form of evidence used in philosophical decision theory. We (the theorists) are told that Chooser faces a problem like the following: a choice between \(A\) and \(B\), with a Demon predicting the choice and the payout depending on the actions of the two of them. (In the table, PA and PB are the states where the Demon predicts \(A\) and \(B\) respectively.)
|  | PA | PB |
|---|---|---|
| A | \(6\) | \(0\) |
| B | \(4\) | \(3\) |
A lot of philosophers seem to take the following approach to decision theory. (I say 'seem to' because I've never really seen a good defence of this methodology, but I have seen a lot of arguments from cases like this to sweeping theoretical claims.) First, we figure out what Chooser should do in this case, perhaps by consulting our intuitions. Second, we work out which theory best fits with what we've learned this way about individual cases.
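To see how much theory is already built into any such verdict, consider one natural calculation on the table above (the accuracy parameter \(q\) is my addition, not part of the vignette). Suppose Chooser's credence that the Demon predicts correctly is \(q\), whichever option is chosen. Then the expected payouts conditional on each choice are

\[
E(\text{payout} \mid A) = 6q, \qquad E(\text{payout} \mid B) = 3q + 4(1-q) = 4 - q,
\]

so this calculation favours \(A\) just in case \(q > 4/7\). But whether conditional expectations like these are even the right thing to compute in Demonic cases is precisely what rival theories disagree about, so the calculation cannot by itself tell us what Chooser should do.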
Now it would be wrong to say that the three kinds of evidence I've presented here fall into three very neat categories. The boundaries between them are blurry at best. Any evidence about an individual case can be turned into a kind of principle by simply replacing the names in the vignette with variables and universally quantifying over the variables. We'll get very restricted principles that way (anyone facing just that game should play \(B\), for example), but all principles have some restrictions on them. And there's not much distance between judging that what Chooser should do is independent of what they paid for the ticket, and judging that the principle *Sunk costs are irrelevant* is part of our evidence. But the fact that a boundary isn't sharp doesn't mean that it's theoretically useless. The paradigms of the three kinds of evidence are different enough that we can helpfully keep them in mind when understanding what particular theorists are doing.
And the paradigm that starts with individual cases, like the table I just presented, seems considerably worse than the other two paradigms. Arguments in that family are vulnerable to two kinds of objection that arguments that start with the other kinds of evidence are not.
First, judgments about cases might conflate what the standards of ideal rationality say about the case with what the standards of non-ideal rationality say about the case. Consider this example, which I’ll come back to in chapter 5.
**Quick Basketball Choice**
Chooser has to name an NBA team. (The NBA is the main club basketball competition in the world.) If that team wins the NBA championship this year, Chooser wins a million dollars. But if Chooser thinks about any basketball player before naming the team, Chooser will be cast into the fires of Hell for all of eternity. What should Chooser do?
I think Chooser should say the first team name that comes into their head, or just pass and get out of the game if that’s possible. They should very much not try to make the choice that maximises expected utility. That will require forming probabilistic judgments about the likelihood of different teams winning, and that will involve thinking about the players on the team, and that will involve damnation. Just say something and hope for the best.
Is this a counterexample to the claim that in non-Demonic cases Chooser should maximise expected utility? No, because that claim is about ideal rationality. When there are constraints on how Chooser can make the choice, ideal rationality doesn't apply. Chooser should be judged by standards of non-ideal rationality, and non-ideal rationality says to say the first team name that comes into their head. I'll make all this a fair bit more precise in chapter 5, including coming back to whether it makes sense to talk as if there is a single standard of non-ideal rationality. But I hope it's reasonably clear that something like this is the right thing to say. There are multiple ways we can judge individual choices in individual cases. The standards of ideal rationality are one of those ways to judge cases, but not the only way. And sometimes when we ask what someone should do, we are (implicitly) using one of those other ways. I think most purported counterexamples to one or other decision theory in the literature are no more convincing than using Quick Basketball Choice to refute expected utility theory. This is especially true when they involve, as this case does, constraints not just on what decision is made, but on how it is made. (For example, when they involve the decision maker being punished for randomising.) And this is one reason why I distrust arguments from particular examples.
The second reason is more internal to decision theory. The project I'm engaged in, and the project that most philosophers who write about Demonic decision theory are engaged in, is the project of generalising expected utility theory to cover Demonic cases. But the methodology of starting with judgments about individual decisions and finding the theory that best fits them does not, in fact, lead to expected utility theory. So that methodology is inconsistent with the theory we are assuming to be correct in non-Demonic cases. So there is a fairly deep tension in any work that tries to generalise expected utility theory by appeal to intuitions about cases. This point requires a bit of background, so let's come back to it after saying more precisely what the theory we're all trying to generalise is.