Chapter 6 Against Coherence Norms
Some say decision theory is just about doing well by one’s own lights.
I follow Comesaña (2020) in saying that rational action requires rational belief.
Actually, it doesn't quite require it - Knight's kid playing in the field is fine.
And I mean something stronger than Comesaña does - I mean that there isn't anything extra good about coherence.
6.1 Coherence for the Incoherent
What is it to do best by one's own lights?
If one believes \(p\), \(q \rightarrow \neg p\), and \(q\), is it best to act as if \(p\), or as if \(q\)?
At best we can get rules for how to be more coherent, given that one is already somewhat coherent.
And that's maybe something, but it's not a lot.
But even this requires that there be a gap between coherence norms and evidence norms, and I don't think there is one.
6.2 Coherence is a Substantive Norm
Summarise Worsnip's book, Fitting Things Together.

- The intuition about the person who punches himself in the head because he thinks doing so will mean that some trivial thing he cares about is realised.
- The intuition about the person who spends all day, every day, inferring from \(p\) to \(p \vee q_1\), then to \(p \vee q_2\), and so on.
- The intuition that Dummett, Priest, and the rest are, if they are wrong about anything, wrong about a substantive matter.

Worsnip's response: everyone has a tendency to be coherent. My response: what, and I cannot stress this enough, the f. There is no way that is a correct description of the dialetheist, the intuitionist, and so on.
6.3 Coherence in Signaling Games
So far I’ve offered the following argument against the view that decision theory is only about how people should act given their existing beliefs and desires, and has no interest in the rationality of the beliefs.
- The view does not make sense when applied to people whose beliefs are not just irrational, but incoherent.
- So the view needs a distinction between coherence norms on belief, which must be satisfied for decision theory to be applicable, and substantive norms on belief, which are irrelevant to decision theory.
- But thinking about heterodox logicians reveals that there is no distinction to be found here.
- So, the view does not ultimately make sense.
Here I want to change tack and offer a direct argument, from within decision theory, for the claim that the decision-theoretic notion of rational action is sensitive to the rationality of the chooser's underlying beliefs. The argument is going to be that the best solution to the beer-quiche game (Cho and Kreps 1987) requires that we look at the rationality of the underlying beliefs, not just at which actions flow in the right way from existing beliefs.
To start, let’s translate the beer-quiche game into decision-theoretic terms, using an arbitrarily accurate demon. The problem is a little more complicated than Newcomb-like problems often are, but it should be reasonably familiar if one is used to the kind of signaling games first developed by David Lewis (1969). The game goes through the following stages.
- Both Chooser and Demon are informed of all the following facts, and it is made clear that they are common knowledge.
- Chooser is randomly assigned to one of two types, which we'll call \(u\) and \(d\), for Up and Down. This assignment is done by a random device which has a 0.6 chance of assigning Chooser to \(u\), and a 0.4 chance of assigning Chooser to \(d\). Demon is not told of the assignment, and cannot predict the outputs of random devices.
- Chooser will then make a choice between two options, which we will label \(U\) and \(D\). Demon will be told which option Chooser takes.
- Demon will then try to guess which type Chooser is.
- In making this guess, Demon will use their arbitrarily good ability to predict Chooser's strategy. The strategy, in the relevant sense, is Chooser's function from type assignment to choice. Chooser can randomise, so a strategy is a pair of probabilities: the probability of selecting \(U\) if they are type \(u\), and the probability of selecting \(U\) if they are type \(d\).
- Chooser gets 2 utils if Demon predicts they are type \(u\), plus 1 util if their choice 'matches' their type, i.e., if they select \(U\) when they are \(u\), or \(D\) when they are \(d\). These payouts are cumulative, so Chooser gets between 0 and 3 utils.
- Demon gets 1 util if their guess is correct.
Figure 6.1 presents the game in graphical form.

Figure 6.1: A Signaling Game
The game starts in the middle. Nature assigns Chooser to a type, and we move either up, if they are assigned \(u\), or down, if they are assigned \(d\). Then Chooser chooses an option. We move left if they choose \(U\), and right if they choose \(D\). Then Demon chooses, and we move up or down along the angled lines. The dashed lines around pairs of nodes are there because Demon doesn't know precisely which node they are at. They know what Chooser chose - the nodes inside each dashed line are alike in that respect - but they don't know which type assignment was made. And then we get the payouts, determined by the rules in the last two items of the list above.
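To make those rules concrete, here is a minimal sketch of the game in Python. The encoding and the names (`PRIOR`, `chooser_payoff`, `demon_payoff`, and the strategy labels) are mine, not part of any standard presentation; the numbers come straight from the list above.

```python
# Types are 'u'/'d'; Chooser's options are 'U'/'D'; Demon's guesses are 'u'/'d'.

PRIOR = {'u': 0.6, 'd': 0.4}  # the random device that assigns Chooser's type

def chooser_payoff(chooser_type, choice, demon_guess):
    """2 utils if Demon guesses type u, plus 1 util if the choice
    'matches' the type (U for u, D for d)."""
    payoff = 2 if demon_guess == 'u' else 0
    if (chooser_type, choice) in {('u', 'U'), ('d', 'D')}:
        payoff += 1
    return payoff

def demon_payoff(chooser_type, demon_guess):
    """Demon gets 1 util for a correct guess, 0 otherwise."""
    return 1 if demon_guess == chooser_type else 0

# A pure strategy for Chooser is a function from type to choice.
pool_U   = {'u': 'U', 'd': 'U'}   # pooling: play U whatever the type
pool_D   = {'u': 'D', 'd': 'D'}   # pooling: play D whatever the type
separate = {'u': 'U', 'd': 'D'}   # separating: the choice reveals the type
```

Nothing here goes beyond the rules already stated; it just gives us something to compute with in the next two paragraphs.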
Note that while Demon can perfectly predict Chooser's strategy, it doesn't follow that they will perfectly predict Chooser's type. This can be true even if Chooser uses a non-probabilistic strategy. In particular, it is true if Chooser adopts what's called a pooling strategy: playing the same option whatever type they are. If Chooser plays \(U\) whether they are \(u\) or \(d\), Demon will get no information from the play, and will have to fall back on their prior credence of 0.6 that Chooser is type \(u\). And so it will be expected utility maximising for Demon to guess that Chooser is \(u\), and that's what they will do. The same goes for the situation where Chooser's strategy is to play \(D\) no matter what.
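That reasoning can be checked against the sketch above (again, `demon_expected_utility` is my label, not anything standard). Under a pooling strategy the choice carries no information, so Demon's credences are just the prior, and guessing \(u\) has the higher expected return.

```python
def demon_expected_utility(guess, credences):
    """Demon's expected utility of a guess, given credences over Chooser's type."""
    return sum(p * demon_payoff(t, guess) for t, p in credences.items())

# With a pooling strategy, Demon's credences after seeing the choice are just the prior.
demon_expected_utility('u', PRIOR)  # 0.6
demon_expected_utility('d', PRIOR)  # 0.4
# So Demon maximises expected utility by guessing u, whichever option was pooled on.
```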
The non-pooling strategies, on the other hand, are not stable. If Chooser gives Demon information about their type, that will mean Demon is more likely to accurately guess their type. And that’s bad news for choosers who are of type \(d\). But since Chooser knows their type when they act, they will not perform an act that’s bad for their type. So they will not do anything other than play one of the two pooling strategies. (I’m glossing over a lot of the details here, but this is well worked out territory. See, inter alia, Cho and Kreps (1987) for the more careful version of the argument in this paragraph.)
So Chooser will play a pooling strategy. But which one will they play? Playing \(U\) makes sense. If they are of type \(u\), they will get the best possible return, 3, so they will be happy to follow through on the strategy. And if they are of type \(d\), they will get a return of 2 from this strategy, but only a return of 1 if they deviate and play \(D\). The deviation would reveal their type, so they would be trading the 2 utils that come from being predicted to be \(u\) for the 1 util that comes from matching their type.
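The arithmetic in that paragraph can also be read off the sketch. The guesses plugged in below reflect the assumption, made in the text, that under the \(U\)-pooling strategy Demon guesses \(u\), while a Chooser whose strategy would reveal their type is correctly identified.

```python
# Type u, sticking with the U-pooling strategy: predicted to be u, and the choice matches.
chooser_payoff('u', 'U', demon_guess='u')  # 3 - the best possible return

# Type d, sticking with the U-pooling strategy: predicted to be u, but no match.
chooser_payoff('d', 'U', demon_guess='u')  # 2

# Type d, deviating to D: Demon, predicting that strategy, works out they are type d.
chooser_payoff('d', 'D', demon_guess='d')  # 1
```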
In outline, the rest of the argument goes as follows.

- The best solution to the beer-quiche game puts constraints on priors.
- These aren't coherence constraints in any recognisable sense.
- But they are the kind of thing decision theory should take into account.
- So decision theory should take substantive notions of rationality into account.