Chapter 3 Why So Defensive?

I’m arguing for causal ratificationism. And much of the argument will come in the next three chapters, when I argue in turn against three of the component parts of proceduralism. But in this chapter I want to first address an argument that proceduralism must be right, because only procedural theories can deliver what decision theory promises: a rule for making decisions. And the main argument of this chapter is going to be that decision theory cannot, and should not, be in the business of providing such a rule. Such a rule would have to be sensitive to resource constraints, and this kind of sensitivity isn’t compatible with doing the kind of theorising that decision theorists do.

3.1 Decision Theory and Making Decisions

I’ll get to why I think decision theory isn’t, and shouldn’t be, used to help humans make decisions. First I want to argue against something that is perhaps less widely believed, but is probably more plausible: that decision theory will be a helpful way for machines that are not subject to serious resource constraints to make decisions.

We are, collectively, currently engaged in a massive project of making machines that make decisions, from ‘smart’ thermostats to self-driving cars. Now one might have hoped that decision theory would have something useful to contribute to this project. That hope, I think, can be realised, but it’s complicated. One might have further hoped that decision theory would be helpful in a way that only a proceduralist theory can be helpful, by providing an algorithm to program into the machines. And that hope, which some might have, won’t be, and shouldn’t be, met. That’s because sometimes we want the machines to be irrational. Here is one simple case of this, based in part on David Lewis’s work on nuclear deterrence (Lewis 1989), and in part on Dr. Strangelove (Kubrick 1964).

Chooser is President of a relatively small country. Due to an unfortunate machine translation incident with their larger neighbor during widget tariff negotiations, the neighbor has become an enemy. (The fact that they are called Enemy should have been a clue that this would happen, but Chooser has the worst advisors and no one noticed this until it was too late.) And Enemy now plans to express their displeasure by launching a nuclear missile at Chooser’s largest city. Chooser doesn’t have many ways to respond to this; any normal attempt at retaliation would just launch a larger war that would go very badly for Chooser’s country.

Fortunately, Chooser’s military has just developed a doomsday device. If launched, the doomsday device will kill everyone in Enemy’s country. And Enemy is smart enough, or at least self-interested enough, to not do anything that will lead to the doomsday device being launched. Probably. Unfortunately, the doomsday device will not just kill everyone in Enemy’s country, it will also kill everyone in Chooser’s country. Fortunately, Chooser also has the ability to tie an automated launcher to the doomsday device, so it will launch if any nuclear missile hits their major city. And they have the ability to let Enemy know that they have tied an automated launcher to the doomsday device. So they can make a very credible threat to Enemy.

If it is likely enough that Enemy will back down when threatened this way, Chooser should install the automated launcher. And, and this is very important, they should make sure Enemy knows that they have done so. Even if there is some small probability \(\varepsilon\) that Enemy will launch anyway, if \(\varepsilon\) is small enough, and the probability of having everyone in the largest city killed high enough, it is a risk worth taking.
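To see the shape of that calculation, here is a stylised version (the notation is mine, not part of the example). Suppose that without the launcher the city is destroyed with probability \(p\), that with the launcher the doomsday device ends up being triggered with probability \(\varepsilon\), and that, with the status quo valued at 0, losing the largest city is worth \(-c\) and losing both countries is worth \(-d\). Then installing the launcher maximises expected utility just in case

\[
\varepsilon \cdot (-d) > p \cdot (-c), \quad\text{that is,}\quad \varepsilon < \frac{p\,c}{d}.
\]

Even with \(d\) vastly larger than \(c\), a small enough \(\varepsilon\) satisfies the inequality, which is just the trade-off described above.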

I’d originally thought of making the doomsday device kill not just everyone in Chooser’s country and Enemy’s country, but everyone in the world. (As was the case for the doomsday device in Dr. Strangelove.) But this complicates the decision making in ways I’d rather avoid. For one thing we have to account for the loss of future generations. For another, as Jonathan Knutzen (2022) points out, we have to account for the loss of humanity in general, on top of the loss of all those individual humans. Maybe there is no realistic probability of failure small enough that this could be a reasonable risk. But there surely is a probability of failure small enough that the risk of losing a whole country is worth trading off against the certainty of losing the largest city. And that’s the risk I’m asking Chooser to take in this particular example. And I think in the right circumstances, it’s a risk to take.

But now change the example in a few ways. Chooser still has the doomsday device, but they don’t have the automated launcher. Fortunately, Enemy is now blessed, or cursed, with a Demon, who can predict with very high probability how Chooser will react if the largest city is destroyed. In particular, the Demon can predict, with very high probability, whether Chooser will react by launching the doomsday device, killing everyone in both countries. Unfortunately, the Demon seems to have predicted that Chooser will not do that, because the nuclear missile is now headed towards the largest city. What should Chooser do?

I think it’s very plausible that Chooser should not respond by launching the doomsday device. Even if Chooser wants to punish Enemy country for launching the nuclear missile, which is a reasonable enough wish, the punishment would not be proportionate, and the damage to Chooser’s own citizens would be intolerable. If Chooser’s only options are the doomsday device or nothing, Chooser has to do nothing. Or so I think; I’ll just note that this is an appeal to intuition about a case and that some people may feel differently. But let’s explore what happens if you agree that it would be wrong to kill everyone in two countries to try and prevent a nuclear missile launch that’s already happened.

One thing that follows from this view about Chooser is that rational choice is not the same thing as the choice a well designed machine would make. And, conversely, a well designed machine will not always do what is rational. We just said that the right way to program the machine, if the automated launcher is available, is to launch the doomsday device as soon as the nuclear missile is detected. But Chooser, if they are rational, will not launch the doomsday device in response to the nuclear missile. This is a counterexample to Functional Decision Theory (FDT), which says that the rational choice in a situation is the manifestation in that situation of the optimal algorithm (Levinstein and Soares 2020). The optimal algorithm is the one the machine runs: automatically launch the doomsday device. But that’s not what is rational.

The problem here is that launching the doomsday device is what game theorists call a non-credible threat. And you can’t make a non-credible threat credible by loudly insisting that you’ll really really do it this time, or even that it would be the rational thing to do.

There is another problem for FDT concerning pairs of cases. Change the second example so that Enemy’s Demon is actually not very reliable. In this variant, they are better than chance, but not a lot better. Now Chooser certainly wouldn’t set up the doomsday machine to automatically launch; the risk of a false positive is too high. So FDT says that in the variant where Chooser has to decide what to do after learning the launch was made, Chooser will do nothing. And while this is the right thing to do, FDT recommends it for the wrong reasons. Once Enemy has launched the missile, Chooser’s best estimate yesterday of how reliable Demon in general was becomes irrelevant to what Chooser should do. But according to FDT it could be decisive.

So if decision theory is relevant to building machines that make decisions, it’s not because the right decision theory should be built into those machines. And hence it’s not because decision theory must be proceduralist in order to make it possible to build it into these machines. It’s rather because the people who make the machines face a very hard decision problem about what kinds of machines to build, and decision theory could be relevant to that problem. But how is decision theory relevant to that problem, or indeed any problem? The next section looks at that question in more detail.

3.2 Why Do Decision Theory

What are we trying to do when we produce a decision theory? I think some of the disputes within the field come from different theorists having different motivations, and hence different answers to this question. My answer is going to be that decision theory plays a key role in a certain kind of explanatory project. And I think defensivism is well suited to play that role. But to see what I mean by playing a key role in an explanatory project, it helps to compare that with other possible views about the aim of decision theory.

One thing you might hope decision theory would do, and certainly one thing students often expect it will do, is provide advice on how to make decisions. I think decision theory is very ill-suited to this task, and it shouldn’t really be the aim of the theory. The primary reason for this is that in any real life situation, the inputs are too hard to identify. To use decision theory as a guide to action, I need to know the utility of the possible states. And I need to know not just what’s better and worse, but how much they are better or worse. At least speaking for myself, the only way I can tell the magnitudes of the differences in utilities between states is to ask about various gambles, and think about which of them I’d be indifferent between. That’s to say, the only way I can tell that the utility of A is half way between that of B and C is to ask whether I’d be indifferent between A and a 50/50 chance of getting B or C. So I have to know what decisions I’d (rationally) make before I can work out the utilities. And that means I have to know what decisions to make before I can even apply decision theory, which is inconsistent with thinking that decision theory should be the guide to what decisions to make. This isn’t such a pressing problem when decisions can be made using purely ordinal utilities, but those cases are rare. So in general there is little use for decision theory in advising decisions.
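To illustrate the elicitation procedure just described (the notation here is mine): if I am indifferent between getting \(A\) for sure and a 50/50 gamble between \(B\) and \(C\), then on an expected utility picture

\[
u(A) = \tfrac{1}{2}\,u(B) + \tfrac{1}{2}\,u(C),
\]

so \(u(A)\) lies exactly halfway between \(u(B)\) and \(u(C)\). Notice that finding this indifference point already requires knowing which gambles I would rationally accept, which is the circularity worry just raised.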

A somewhat better use for decision theory is in evaluation. Using decision theory, we can look at someone else’s actions and ask whether they were rational. This is particularly pressing in cases where the person has harmed another, perhaps through carelessness, or in defense of another, and we’re interested in whether their actions were rational. Now one immediate complication in these cases is that we don’t know the actor’s value function. Even if the action doesn’t maximise value as we see it, we don’t know whether they have a different value function (perhaps one that puts low weight on harms to others), or whether they were doing a bad job of maximising value. We don’t know whether they were a knave or a fool. But it’s often charitable to assume that they do have a decent enough value function, and we can ask whether what they were doing was rational if they did indeed have a decent value function.

This is a task decision theory is useful for, but it alone wouldn’t justify the existence of books like this one. For one thing, the theory of how to act around Demons isn’t usually relevant to whether an act was careless, or a permissible kind of self/other-defense. It is sometimes relevant. Sometimes we should think game-theoretically about whether a person was acting properly, and that will bring up issues that are similar to issues involved in making decisions around Demons. But usually the kind of decision theory we need in these cases is fairly elementary.

A bigger problem is that rationality is too high a bar in these cases. (I’m indebted here to Jonathan Sarnoff.) Imagine that D puts up a ladder somewhat sloppily, and it falls and injures V. The question at hand is whether D is morally responsible for the injury to V due to their carelessness. Ladders are tricky things, and sometimes one can take reasonable precautions and bad things happen anyway. Sometimes it is correct to attribute an injury to bad luck even if a super-cautious person would have avoided it. We aren’t in general obliged to take every possible precaution to avoid injuring others. (If we were, we wouldn’t be able to go out in public.) So what’s the test we should use for whether this particular injury was just a case of bad luck or a case of carelessness? It is tempting to use decision theory here. The injury is a case of bad luck iff it was decision-theoretically rational for D to act as they did, assuming they had a decent value function. The problem is that this is a really high bar. Imagine that D did what any normal person would do in setting up the ladder, but there was a clever way to secure it for minimal cost that D didn’t notice, and most people wouldn’t have noticed. Then there is a good sense in which what D did was not decision-theoretically rational; the value maximising thing to do was the clever trick. But we don’t want people to be morally responsible every time they fail to notice a clever option that only a handful of people would ever spot. And this is the general case. Decision-theoretic rationality is a maximising notion, and as such it’s a kind of hard norm to satisfy. We don’t want every failure to satisfy it in cases of unintentional injury to others, or intentional injury to others in the pursuit of a justifiable end like self-defence, to incur moral liability. So this isn’t actually a place where decision theory is useful.

And if decision theory isn’t useful in these cases, then its value as an evaluative tool is somewhat limited. We can still use it for going around judging people, and saying that was rational, that was irrational. And that’s a fine pastime; being judgmental can be fun, especially if the people being judged are in charge of institutions we care about. But we might hope for a little more out of our theory.

A third role for decision theory is in predicting what people will do. Sometimes we know people’s incentives well enough to be able to predict that they will act as if they are rational. And at least sometimes, that can lead to surprising results. I’ll talk through one case that I find surprising, and I suspect other philosophers will find surprising too.

Chooser runs a televised rock-paper-scissors tournament. Ratings are fine, but Chooser is told by the bosses that what the audience really likes is when rock beats scissors. The audience doesn’t think rock-paper-scissors is really violent enough, and the implicit violence of rock smashing scissors is a help. So Chooser is thinking about generating more outcomes where rock beats scissors. And their plan is to make a cash payoff to players every time they successfully play rock, on top of the point they get for winning the game. Chooser’s hope is that the game’s payoffs will change from the standard payoff table, shown in table 3.1, to the one shown in table 3.2.

Table 3.1: Rock-Paper-Scissors

            Rock      Paper     Scissors
  Rock      0, 0      -1, 1     1, -1
  Paper     1, -1     0, 0      -1, 1
  Scissors  -1, 1     1, -1     0, 0

Table 3.2: Rock-Paper-Scissors with Bonus for Rock

            Rock      Paper     Scissors
  Rock      0, 0      -1, 1     2, -1
  Paper     1, -1     0, 0      -1, 1
  Scissors  -1, 2     1, -1     0, 0

They turn to their resident decision theorist to ask how much this will improve ratings. If you haven’t worked through this kind of problem before, it’s actually a fun little exercise to work out what the effect of this change will be. The disappointing news they are going to get from the decision theorist is that this move will backfire. After the change the rock-scissors combination will occur less often than it did before the change.

In the original game, it’s pretty clear what the unique equilibrium of the game is. Each player plays each option with probability \(\frac{1}{3}\). If either player deviated from that, then they would in the long run be exploitable. So that’s what they will do, over a long enough run. And that means that a combination where one player chooses Rock and the other chooses Scissors will occur in \(\frac{2}{9}\) of games.

But what’s the equilibrium of the new game? It’s symmetric; each player uses the same mixed strategy. And in that mixed strategy, a player chooses Paper with probability \(\frac{5}{12}\), Rock with probability \(\frac{1}{3}\), and Scissors with probability \(\frac{1}{4}\). So the combination where one player chooses Rock and the other chooses Scissors will occur in only \(\frac{1}{6}\) of games, a considerable reduction from the \(\frac{2}{9}\) it was before.
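If you want to check those numbers, here is a minimal sketch of the calculation (the code is mine, not part of the text). It solves the indifference conditions for the symmetric equilibrium of the game in table 3.2, then computes how often the Rock-beats-Scissors outcome occurs.

```python
import numpy as np

# Row player's payoffs in the modified game (Table 3.2).
# Rows and columns are ordered Rock, Paper, Scissors.
A = np.array([
    [ 0, -1,  2],   # Rock
    [ 1,  0, -1],   # Paper
    [-1,  1,  0],   # Scissors
])

# In a symmetric mixed equilibrium x = (r, p, s), every pure strategy in the
# support earns the same expected payoff against x, and the probabilities
# sum to one. That gives two indifference equations plus a normalisation.
M = np.array([
    A[0] - A[1],    # Rock indifferent with Paper
    A[1] - A[2],    # Paper indifferent with Scissors
    [1, 1, 1],      # probabilities sum to one
])
b = np.array([0, 0, 1])

x = np.linalg.solve(M, b)
print("P(Rock), P(Paper), P(Scissors) =", x)        # approx. 0.333, 0.417, 0.25
print("P(Rock meets Scissors) =", 2 * x[0] * x[2])  # 1/6, down from 2/9
```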

It is very easy to share the intuition that if you reward a certain kind of behavior, you’ll see more of it. (Some politicians, whose knowledge of economics starts and ends with supply-and-demand graphs, rely on nothing but this intuition in economic policy making.) But that intuition doesn’t always work in the context of competitive games. Here rewarding Rock doesn’t result in any change to how often Rock is played, but does result in a reduction of how often one plays the strategy that Rock defeats. What it actually incentivises is behavior that is outside this Rock-Scissors interaction, i.e., Paper. Now this doesn’t require a huge amount of decision theory to work out - it’s pretty simple linear algebra. But the intuition that rewarding a kind of behavior causes it to be more common is widespread enough that I think a theory that predicts it won’t happen isn’t completely trivial.

So cases like these are cases where, I think, decision theory has a useful predictive function. And this means it has practical advantages to the institutional designer, i.e., Chooser in this case. Knowing a bit of decision theory will tell them not to literally waste their money on this plan to reward players who win playing rock. Does it also have practical advantages to the players? Perhaps it has some, though it’s a little less clear. After all, if every other player finds the new equilibrium quickly enough, then the expected return of each strategy in a given game will be equal. Decision theory itself says that in a particular play it doesn’t matter what a player does. So it kind of undermines its own claim to being practically significant to the players. But this shouldn’t reduce how useful the theory is at predicting how players will react to changes in the institutional design, and hence how valuable the theory could be to institutional designers.

But while decision theory can be predictively useful, the main role it plays is in explaining human behavior. Think about the explanation George Akerlof offers of why used cars lose value so quickly, i.e., why people don’t pay nearly as much for lightly used cars as they pay for new cars (Akerlof 1970). Or think about the explanation Michael Spence offers for why employers might pay more to hire college graduates even if college does not make employees more productive (Spence 1973). In each case carefully thinking about the decision problem each actor faces can give us a story about how behavior that looks surprising at first actually makes sense.

The point is not that these explanations always work. Both of them make substantive assumptions about the kind of situation that actors are in. I’m inclined to think that the assumptions in Akerlof’s model are close enough to true that his explanation works, and the ones in Spence’s model are not. But whether you think that’s true or not, what you should think is that models like these show how decision theory can play a role in simple but striking explanations of otherwise mysterious behaviour.

A key assumption in each such explanation is that actors are basically rational. Or, at least, that their behaviour is close enough to what it would be if they were rational that rationality is a good enough assumption for explanatory purposes. A long tradition in philosophy of economics holds that this is a fatal weakness in these explanations. After all, we know that people are not in fact perfectly rational. But I’m inclined to think it is a strength, in fact an important strength, of decision theoretic explanations of behavior.

The fact that people are not perfectly rational does not mean that we cannot explain their behaviour using models that assume rationality. All that we need for that is that in a particular situation, the behaviour is as it would be if they were rational. And that can be true in certain domains. For example, people who prefer vanilla ice cream to strawberry ice cream buy more vanilla ice cream than strawberry ice cream. Now wheeling out a belief-desire model of action, combined with an assumption that ice-cream purchases are made by practically rational actors, to explain that pattern of purchases would be overkill. But it wouldn’t be wrong. In some cases people collectively do act as they would if they were all rational. Not in all cases, of course, but in some. And to tell whether we are in such a case, we have to look.

To know whether people are acting rationally, we sometimes need to have a sophisticated theory of decision. At first glance, it might look irrational to have a very strong preference for new cars over lightly used cars. It takes some work to see that it might be, as Akerlof argued, a rational reaction to epistemic asymmetries. This work is a project that decision theory can contribute to, and indeed has contributed to.

The project of at least trying to see how surprising behaviour might be rational, a project which decision theory has a key role in, is valuable for two reasons.

One reason is epistemic. It’s really easy to fall into thinking that certain behaviour is the result of a bias, and not even look for possible rational explanations of it. This is what Brighton and Gigerenzer call the ‘bias bias’ (Brighton and Gigerenzer 2015). You don’t even have to posit a bias in favor of explanations in terms of irrationality to get this result. Often the explanation in terms of irrationality is easier. It’s much easier to say that people have an irrational attachment to new cars than to build a model of rational choice under epistemic asymmetry that explains the behaviour. And obviously it’s easier to settle for easy explanations. A commitment to looking for rational explanations of behaviour is a useful practice because it makes the researcher not settle for simple explanations. It might be that on a particular occasion the simple explanation is right, and people are just being irrational. But it is often a good use of time to at least look at what the best rational explanation is, and see if it is as plausible as the best irrational explanation.

Another reason is moral. There is a kind of respect involved in treating people as rational, or at least taking as a live option that people are acting rationally unless there is fairly strong reason to believe they are not. And we should show this kind of respect to other humans. And explanations of behaviour in terms of rational choice typically have the advantage that they make sense not just to the theorist, but to the person actually carrying out the behaviour. They allow for at least the possibility of the theorist and the person being theorised about to understand the behaviour in the same way. And that’s a kind of equality that we should value.

The upshot of these considerations is that we should try to see how surprising behaviour could be rational. Rather than seeing someone as incompetently trying to carry out our ends, we should consider the possibility that behaviour we find surprising is the result of differences in what evidence is available, or in what values the actors have. Making sure we at least check what an explanation in terms of rational choice would look like is a useful heuristic because it sometimes turns up surprising and plausible models. It’s an empirical question whether it is an efficient heuristic. I suspect it is, but maybe there are other heuristics that more efficiently lead to plausible models. Even if that were so, I would still think that it would be good to start by looking for rational choice models. That’s because the moral reasons in favor of looking for these kinds of models, or for these kinds of explanations, would be decisive.

That’s what I think the primary purpose of decision theory is. It’s part of the project of trying to explain surprising aspects of human behaviour in terms of rational choices by people with different amounts of evidence, and different values. And since that’s a very valuable project, I think decision theory is valuable, at least insofar as it contributes to the project.

For what it’s worth, I think the kind of decision theory I’m defending, where the central principle is that choices must be defensible, has a much better track record of contributing to rational explanations of surprising behaviour than do its rivals. I don’t know of real world situations in which we see Evidential Decision Theory play a particularly useful role in explaining what people do. Are there any papers based on EDT that are as good as Akerlof’s original paper on the market for lemons? Even if there are no such papers, it is possible that there are institutional reasons for this. Maybe not enough economists or political scientists get taught EDT, or maybe malicious journal editors conspire to not publish papers using models based on EDT. But if the role of decision theory is, as I’ve argued, to contribute to these kinds of explanations, it would be useful to see how much of a contribution rival philosophical theories of decision could actually make.

3.3 Why Do Ideal Decision Theory

The previous section was on why philosophers should care about decision theory. But what I’m doing in this book isn’t just decision theory, it’s a very specific kind of decision theory. What I’m doing might be called ideal decision theory. It’s the theory of how idealised agents make decisions. And I’m going to appeal to those idealisations a fair bit in what follows. This section is about why we should care about such an idealised theory, and in particular about why a ratificationist decision theorist should care about it.

One lesson of cases like Salesman is that every theory of idealised decision making needs to be complemented with a theory of non-ideal decision making. But this isn’t the only lesson one could take from the example. Another lesson could be that ideal decision theory is a pointless enterprise. It should not be supplemented by non-ideal decision theory, but replaced. I don’t think that’s right, but it takes some argument to say why it isn’t right. That longer argument will come in chapter 5, but for now I’ll just say what positive role I think ideal decision theory has to play.

Let’s start with some things that ideal theory cannot do. It can’t give people a target they should approximate. That’s because the following is a very bad argument.

  1. The ideal is X.
  2. So, Chooser should be as much like X as they can be.

We know that isn’t right for reasons set out by Lipsey and Lancaster (1956). If one can’t be like the ideal, it is often best to offset that failing by doing other things that the ideal chooser would not do. Here’s one simple example. The ideal chooser, in decision theory, can do all the reasoning that is needed for a problem instantaneously. So it’s a bad idea for them to stop and have a think about it before making a big decision. Since they have thought all the thoughts that are needed, that would just be a waste of time. But it’s often a very good idea for Chooser to stop and have a think about it before making a big decision. Not doing that, in order to be more like the ideal, is a mistake.

The following argument isn’t as bad, but it isn’t right either.

  1. The ideal is X.
  2. Chooser’s situation is approximately ideal.
  3. So Chooser should do approximately X.

The situations where this fails are a bit more contrived than the situations where the previous argument failed, at least for typical individuals. But here the details of Lipsey and Lancaster’s argument matter. At least when Chooser is designing institutions, like market structures or taxation systems, it turns out to very often be the case that the second best solution looks dramatically different to the best solution. And someone who is approximately ideal might only be able to find the second best solution, not the best one, in a reasonable time. So it’s possible for someone with very mild computational limitations to do something very different from what the ideal agent would do, and yet be acting optimally given their limitations.

The following two arguments, however, are good arguments. And the two main uses of ideal decision theory are related to these two good arguments.

  1. The ideal is X.
  2. The differences between Chooser’s situation and the ideal are irrelevant.
  3. So Chooser should do X.

Ideal theory can provide advice, in situations that are like the ideal in suitable ways. It isn’t trivial for the ideal theory to provide such advice though. The second premise is often very hard to justify. But in some cases it is not that hard—the computations that have to be made are easy, and the stakes are high, so it is worth spending the resources to make all the computations. And in those cases, we expect Chooser to act like the ideal. “Expect” here has both a normative and a descriptive meaning, and let’s make the latter of those explicit with another good argument that uses ideal decision theory.

  1. The ideal is X.
  2. The differences between Chooser’s situation and the ideal are irrelevant.
  3. So, Chooser will do X.

If we interpret ‘situation’ in premise 2 to include the fact that Chooser is (approximately) rational, then this is a good argument too. And when we have an argument like this, we can use it to predict what Chooser will do, and explain what Chooser has done.

This kind of argument can be used in explanations that have the structure Michael Strevens (2008) argues that explanations involving idealisations always have. They include a model saying what would happen in idealised situations. And they include a premise that doesn’t just say that the real situation is close to the ideal, but that the differences don’t matter for the purposes of what we’re trying to explain. That kind of structure covers simple explanations using the ideal gas law you might learn in introductory chemistry, and it also covers explanations using ideally rational agents. What we need for, e.g., Akerlof’s explanation of used car prices to work is not that everyone in that market is perfectly rational, but that they are close enough to rational for the purposes of predicting how they will behave in the used car market. That’s important because participants in the used car market are not perfectly rational; they might not even be close to it. But just as real molecules can be modeled by things that are infinitely smaller than them, real buyers and sellers of used cars can be modeled by actors that are infinitely more rational than they are.

And that’s the big picture project that I want this book to be contributing to. There are both epistemic and moral reasons to look for explanations of behaviour in terms of individuals acting rationally. These explanations will be idealised explanations. Idealised explanations involve describing carefully how things work in the ideal, and arguing that the differences between the ideal and the reality are unimportant for the particular thing being explained. The latter task is hard, and often not done with sufficient care, but isn’t impossible. This book contributes primarily to the former task, though especially in chapter 5 I’ll have things to say relevant to the latter task as well.

David Lewis gives a similar account of the purpose of decision theory in a letter to Hugh Mellor. The context of the letter, like the context of this section, is a discussion of why idealisations are useful in decision theory. Lewis writes,

We’re describing (one aspect of) what an ideally rational agent would do, and remarking that somehow we manage to approximate this, and perhaps – I’d play this down – advising people to approximate it a bit better if they can. (Lewis 2020, 432)

To conclude this section on an historical note, I want to compare the view I’m adopting to the position Frank Knight puts forward in this famous footnote.

It is evident that the rational thing to do is to be irrational, where deliberation and estimation cost more than they are worth. That this is very often true, and that men still oftener (perhaps) behave as if it were, does not vitiate economic reasoning to the extent that might be supposed. For these irrationalities (whether rational or irrational!) tend to offset each other. The applicability of the general “theory” of conduct to a particular individual in a particular case is likely to give results bordering on the grotesque, but en masse and in the long run it is not so. The market behaves as if men were wont to calculate with the utmost precision in making their choices. We live largely, of necessity, by rule and blindly; but the results approximate rationality fairly well on an average. (Knight 1921, 67n1)

Like Knight, I think that explanations in social science can treat people as rational even if they are not, even if it would “give results bordering on the grotesque” to imagine them as perfectly rational. That’s because, at least in the right circumstances, the irrationalities are irrelevant, or they cancel out, and the “as if” explanation goes through. Now I do disagree with the somewhat blithe attitude Knight takes towards the possibility that these imperfections will not cancel out, that they will in fact reinforce each other and be of central importance in explaining various phenomena. But that’s something to be worked out on a case-by-case basis. We should not presuppose in advance either that the imperfections will be irrelevant or that they will be decisive.

There is one other point of agreement with Knight that I want to emphasise. If we don’t act by first drawing Marshallian curves and solving optimisation problems, how do we act? As he says, we typically act “by rule”. Our lives are governed, on a day-by-day, minute-by-minute basis, by a series of rules we have internalised for how to act in various situations. The rules will typically have some kind of hierarchical structure - do this in this situation unless a particular exception arises, in which case do this other thing, unless of course a further exception arises, in which case, and so on. And the benefit of adopting rules with this structure is that they typically produce the best trade-off between results and cognitive effort.

One other useful role for ideal decision theory is in the testing and generation of these rules. We don’t expect people who have to make split-second decisions to calculate expected utilities. But we can expect them to learn some simple heuristics, and we can expect theorists to use ideal decision theory to test whether those heuristics are right, or whether some other simple heuristic would be better. This kind of approach is very useful in sports, where athletes have to make decisions very fast, and there is enough repetition for theorists to calculate expected utilities with some precision. But it can be used in other parts of life, and it is a useful role for ideal decision theory alongside its roles in institutional design (as in the rock-paper-scissors example), and in explanation (as in the used cars example).

3.4 Why Not Proceduralism

Let’s take stock. This chapter has been a response to the following two kinds of worries.

  • The best thing that decision theory could do would be to provide a procedure for making good decisions.
  • If decision theory can’t do that, it’s a pointless activity.

So far the attention has been primarily on the second point. I’ve argued that it isn’t true - that decision theory can have an important role in explanations of social phenomena even if it doesn’t provide a procedure for making good decisions. But what about the first point? Even if this is a role for decision theory, wouldn’t a procedure for good decisions be better? In some sense perhaps it would be, but there is no reason to think that anything like ideal decision theory is going to be part of such a procedure for creatures like us. For just one example, the optimal procedure in cases like Salesman will give advice that contradicts what every theory of ideal decision on the market says.

In practice, the best thing a decision theory can do is be part of a broader project of helping us understand and navigate the world. To do that, it need not provide a procedure that can be followed on any given occasion, and indeed it could not do that. It is better for it to provide tools that can be used in one or another multi-stage process. For instance, it can provide reasons for thinking that this or that institutional design will fail or succeed in a particular way. Or it can compare heuristics that humans are capable of applying. Neither of these roles requires that decision theory be proceduralist. Indeed, a non-proceduralist theory that says that we can’t predict how a certain institution will work, because it generates a decision problem for individuals with multiple solutions, might be more useful than one that posits a false certainty about what will happen. So, in short, there is no argument from the purpose to which decision theory can or should be put to the conclusion that decision theory should be proceduralist.

Nothing in this chapter, however, is an argument against proceduralism. It has been a long argument against the necessity of proceduralism, but nothing more. In the next two chapters, I will argue directly that the right decision theory is not proceduralist.