# Knowledge, Bets and Interests

This paper argues that the interest-relativity of knowledge cannot be explained by the interest-relativity of belief. The discussion starts with an argument that knowledge plays a key pair of roles in decision theory. It is then argued that knowledge cannot play those roles unless knowledge is interest-relative. The theory of the interest-relativity of belief is reviewed and revised. That theory can explain some of the cases that are used to suggest knowledge is interest-relative. But it can’t explain some cases involving ignorance, or mistake, about the odds at which a bet is offered. The paper ends with an argument that these cases require positing interest-relative defeaters, which affect whether an agent knows something without affecting whether she believes it, or is justified in believing it.

Brian Weatherson (http://brian.weatherson.org), University of Michigan (https://umich.edu)

July 26, 2012

When you pick up a volume like this one, which describes itself as being about ‘knowledge ascriptions,’ you probably expect to find it full of papers on epistemology, broadly construed. And you’d probably expect many of those papers to concern themselves with cases where the interests of various parties (ascribers, subjects of the ascriptions, etc.) change radically, and this affects the truth values of various ascriptions. And, at least in this paper, your expectations will be clearly met.

But here’s an interesting contrast. If you’d picked up a volume of papers on ‘belief ascriptions,’ you’d expect to find a radically different menu of writers and subjects. You’d expect to find a lot of concern about names and demonstratives, and about how they can be used by people not entirely certain about their denotation. More generally, you’d expect to find less epistemology, and much more mind and language. I haven’t read all the companion papers to mine in this volume, but I bet you won’t find much of that here.

This is perhaps unfortunate, since belief ascriptions and knowledge ascriptions raise at least some similar issues. Consider a kind of contextualism about belief ascriptions, which holds that (L) can be truly uttered in some contexts, but not in others, depending on just what aspects of Lois Lane’s psychology are relevant in the conversation.1

(L) Lois Lane believes that Clark Kent is vulnerable to kryptonite.

We could imagine a theorist who says that whether (L) can be uttered truly depends on whether it matters to the conversation that Lois Lane might not recognise Clark Kent when he’s wearing his Superman uniform. And, this theorist might continue, this isn’t because ‘Clark Kent’ is a context-sensitive expression; it is rather because ‘believes’ is context-sensitive. Such a theorist will also, presumably, say that whether (K) can be uttered truly is context-sensitive.

(K) Lois Lane knows that Clark Kent is vulnerable to kryptonite.

And so, our theorist is a kind of contextualist about knowledge ascriptions. But they might agree with approximately none of the motivations for contextualism about knowledge ascriptions put forward by Cohen (1988) or DeRose (1995). Rather, they are a contextualist about knowledge ascriptions solely because they are contextualist about belief ascriptions like (L).

Call the position I’ve just described doxastic contextualism about knowledge ascriptions. It’s a kind of contextualism all right; it says that (K) is context sensitive, and not merely because of the context-sensitivity of any term in the ‘that’-clause. But it explains the contextualism solely in terms of the contextualism of belief ascriptions. The more familiar kind of contextualism about knowledge ascriptions we’ll call non-doxastic contextualism. Note that the way we’re classifying theories, a view that holds that (K) is context-sensitive both because (L) is context-sensitive and because Cohen et al are correct is a version of non-doxastic contextualism. The label ‘non-doxastic’ is being used to mean that the contextualism isn’t solely doxastic, rather than as denying contextualism about belief ascriptions.

We can make the same kind of division among interest-relative invariantist, or IRI, theories of knowledge ascriptions. Any kind of IRI will say that there are sentences of the form $$S$$ knows that $$p$$ whose truth depends on the interests, in some sense, of $$S$$. But we can divide IRI theories up the same way that we divide up contextualist theories.

Doxastic IRI

Knowledge ascriptions are interest-relative, but their interest-relativity traces solely to the interest-relativity of the corresponding belief ascriptions.

Non-Doxastic IRI

Knowledge ascriptions are interest-relative, and their interest-relativity goes beyond the interest-relativity of the corresponding belief ascriptions.

Again, a theory that holds both that belief ascriptions are interest-relative, and that some of the interest-relativity of knowledge ascriptions is not explained by the interest-relativity of belief ascriptions, will count as a version of non-doxastic IRI. I’m going to defend a view from this class here.

In my (2005) I tried to motivate Doxastic IRI. It isn’t completely trivial to map my view onto the existing views in the literature, but the idea was to renounce contextualism and all its empty promises, and endorse a position that’s usually known as ‘strict invariantism’ about these classes of statements:

• $$S$$ is justified in having credence $$x$$ in $$p$$;

• If $$S$$ believes that $$p$$, she knows that $$p$$;

while holding that the interests of $$S$$ are relevant to the truth of statements from these classes:

• $$S$$ believes that $$p$$;

• $$S$$ justifiably believes that $$p$$;

• $$S$$ knows that $$p$$.

But I didn’t argue for all of that. What I argued for was Doxastic IRI about ascriptions of justified belief, and I hinted that the same arguments would generalise to knowledge ascriptions. I now think those hints were mistaken, and want to defend Non-Doxastic IRI about knowledge ascriptions.2 My change of heart has been prompted by cases like those Jason Stanley (2005) calls ‘Ignorant High Stakes’ cases.3 But to see why these cases matter, it will help to start with why I think some kind of IRI must be true.

Here’s the plan of attack. In section 1, I’m going to argue that knowledge plays an important role in decision theory. In particular, I’ll argue (a) that it is legitimate to write something onto a decision table iff the decision maker knows it to be true, and (b) it is legitimate to leave a possible state of the world off a decision table iff the decision maker knows it not to obtain. I’ll go on to argue that this, plus some very plausible extra assumptions about the rationality of certain possible choices, implies that knowledge is interest-relative. In section 2 I’ll summarise and extend the argument from my (2005) that belief is interest-relative. People who are especially interested in the epistemology rather than the theory of belief may skip this. But I think this material is important; most of the examples of interest-relative knowledge in the literature can be explained by the interest-relativity of belief. I used to think all such cases could be explained. Section 3 describes why I no longer think that. Reflection on cases like the Coraline example suggests that there are coherence constraints on knowledge that go beyond the coherence constraints on justified true belief. The scope of these constraints is, I’ll argue, interest-relative. So knowledge, unlike belief or justified belief, has interest-relative defeaters. That’s inconsistent with Doxastic IRI, so Doxastic IRI is false.

## The Interest-Relativity of Knowledge

### The Structure of Decision Problems

Professor Dec is teaching introductory decision theory to her undergraduate class. She is trying to introduce the notion of a dominant choice. So she introduces the following problem, with two states, $$S_1$$ and $$S_2$$, and two choices, $$C_1$$ and $$C_2$$, as is normal for introductory problems.

|           | $$S_1$$ | $$S_2$$ |
|-----------|---------|---------|
| $$C_1$$   | -$200   | $1000   |
| $$C_2$$   | -$100   | $1500   |

She’s hoping that the students will see that $$C_1$$ and $$C_2$$ are bets, but $$C_2$$ is clearly the better bet. If $$S_1$$ is actual, then both bets lose, but $$C_2$$ loses less money. If $$S_2$$ is actual, then both bets win, but $$C_2$$ wins more. So $$C_2$$ is better. That analysis is clearly wrong if the state is causally dependent on the choice, and controversial if the states are evidentially dependent on the choices. But Professor Dec has not given any reason for the students to think that the states are dependent on the choices in either way, and in fact the students don’t worry about that kind of dependence.
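For readers who want the dominance check spelled out mechanically, here is a minimal sketch (mine, not part of the original presentation); the payoffs are the ones in Professor Dec’s table above, and, as the text notes, the check only makes sense on the assumption that the states do not depend on the choices.

```python
# Payoffs from Professor Dec's table, listed by state (S1, S2).
# The function name `dominates` is just an illustrative label.
payoffs = {
    "C1": [-200, 1000],
    "C2": [-100, 1500],
}

def dominates(a, b):
    """True iff choice a is at least as good as b in every state,
    and strictly better in at least one state."""
    pa, pb = payoffs[a], payoffs[b]
    return all(x >= y for x, y in zip(pa, pb)) and any(x > y for x, y in zip(pa, pb))

print(dominates("C2", "C1"))  # True: C2 is better whichever of S1, S2 obtains
```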

That doesn’t mean, however, that the students all adopt the analysis that Professor Dec wants them to. One student, Stu, is particularly unwilling to accept that $$C_2$$ is better than $$C_1$$. He thinks, on the basis of his experience, that when more than $1000 is on the line, people aren’t as reliable about paying out on bets. So while $$C_1$$ is guaranteed to deliver $1000 if $$S_2$$, if the agent bets on $$C_2$$, she might face some difficulty in collecting on her money.

Given the context, i.e., that they are in an undergraduate decision theory class, it seems that Stu has misunderstood the question that Professor Dec intended to ask. But it is a little harder than it first seems to specify just exactly what Stu’s mistake is. It isn’t that he thinks Professor Dec has misdescribed the situation. It isn’t that he thinks the agent won’t collect $1500 if she chooses $$C_2$$ and is in $$S_2$$. He just thinks that she might not be able to collect it, so the expected payout might really be a little less than $1500.

But Stu is not the only problem that Professor Dec has. She also has trouble convincing Dom of the argument. He thinks there should be a third state added, $$S_3$$. In $$S_3$$, there is a vengeful God who is about to end the world, and take everyone who chose $$C_1$$ to heaven, while sending everyone who chose $$C_2$$ to hell. Since heaven is better than hell, $$C_2$$ does not dominate $$C_1$$; it is worse in $$S_3$$. If decision theory is to be useful, we must say something about why we can leave states like $$S_3$$ off the decision table.

So in order to teach decision theory, Professor Dec has to answer two questions.4

1. What makes it legitimate to write something on the decision table, such as the ‘$1500’ we write in the bottom right cell of Dec’s table?

2. What makes it legitimate to leave something off a decision table, such as leaving Dom’s state $$S_3$$ off the table?

Let’s start with a simpler problem that helps with both questions. Alice is out of town on a holiday, and she faces the following decision concerning what to do with a token in her hand.

| Choice              | Outcome     |
|---------------------|-------------|
| Put token on table  | Win $1000   |
| Put token in pocket | Win nothing |

This looks easy, especially if we’ve taken Professor Dec’s class. Putting the token on the table dominates putting the token in her pocket. It returns $1000, versus no gain. So she should put the token on the table.

I’ve left Alice’s story fairly schematic; let’s fill in some of the details. Alice is on holiday at a casino. It’s a fair casino; the probabilities of the outcomes of each of the games are just what you’d expect. And Alice knows this. The table she’s standing at is a roulette table. The token is a chip from the casino worth $1000. Putting the token on the table means placing a bet. As it turns out, it means placing a bet on the roulette wheel landing on 28. If that bet wins she gets her token back and another token of the same value. There are many other bets she could make, but Alice has decided against all of them but one. Since her birthday is the 28$$^{\text{th}}$$, she is tempted to put a bet on 28; that’s the only bet she is considering. If she makes this bet, the objective chance of her winning is $$\frac{1}{38}$$, and she knows this. As a matter of fact she will win, but she doesn’t know this. (This is why the description in the table I presented above is truthful, though frightfully misleading.)

As you can see, the odds on this bet are terrible. She should have a chance of winning around $$\frac{1}{2}$$ to justify placing this bet.5 So the above table, which makes it look like placing the bet is the dominant, and hence rational, option, is misleading.
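For what it is worth, the arithmetic behind the claim that the odds are terrible can be laid out explicitly. The sketch below is only illustrative; the 1/38 chance, the even-money payout, and the roughly one-half break-even point are all from the story above, while the variable names are mine.

```python
from fractions import Fraction

p_win = Fraction(1, 38)   # chance the wheel lands on 28
stake = 1000              # the chip Alice would put on the table
win_amount = 1000         # she gets her chip back plus one of equal value

# Expected monetary value of placing the bet, relative to keeping the chip.
ev = p_win * win_amount - (1 - p_win) * stake
print(float(ev))          # ≈ -947.37: a terrible bet

# Break-even winning probability for an even-money bet.
print(Fraction(stake, stake + win_amount))  # 1/2, as the text says
```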

Just how is the table misleading though? It isn’t because what it says is false. If Alice puts the token on the table she wins $1000; and if she doesn’t, she stays where she is. It isn’t, or isn’t just, that Alice doesn’t believe the table reflects what will happen if she places the bet. As it turns out, Alice is smart, so she doesn’t form beliefs about chance events like roulette wheels. But even if she did, that wouldn’t change how misleading the table is. The table suggests that it is rational for Alice to put the token on the table. In fact, that is irrational. And it would still be irrational if Alice believed, irrationally, that the wheel would land on 28.

A better suggestion is that the table is misleading because Alice doesn’t know that it accurately depicts the choice she faces. If she did know that these were the outcomes to putting the token on the table versus in her pocket, it seems it would be rational for her to put it on the table.

If we take it as tacit in a presentation of a decision problem that the agent knows that the table accurately depicts the outcomes of various choices in different states, then we can tell a plausible story about what the miscommunication between Professor Dec and her students was. Stu was assuming that if the agent wins $1500, she might not be able to easily collect. That is, he was assuming that the agent does not know that she’ll get $1500 if she chooses $$C_2$$ and is in state $$S_2$$. Professor Dec, if she’s anything like other decision theory professors, will have assumed that the agent did know exactly that. And the miscommunication between Professor Dec and Dom also concerns knowledge. When Dec wrote that table up, she was saying that the agent knew that $$S_1$$ or $$S_2$$ obtained. And when she says it is best to take dominating options, she means that it is best to take options that one knows to have better outcomes.

So here are the answers to Stu and Dom’s challenges.

1. It is legitimate to write something on the decision table, such as the ‘$1500’ we write in the bottom right cell of Dec’s table, iff the decision maker knows it to be true.

2. It is legitimate to leave something off a decision table, such as leaving Dom’s state $$S_3$$ off the table, iff the decision maker knows it not to obtain.

Perhaps those answers are not correct, but what we can clearly see by reflecting on these cases is that the standard presentation of a decision problem presupposes not just that the table states what will happen, but that the agent stands in some special doxastic relationship to the information explicitly on the table (such as that the agent will get $1500 if $$C_2$$ and $$S_2$$) and implied by where the table ends (such as that $$S_3$$ will not happen).

Could that relationship be weaker than knowledge? It’s true that it is hard to come up with clear counterexamples to the suggestion that the relationship is merely justified true belief. But I think it is somewhat implausible to hold that the standard presentation of an example merely presupposes that the agent has a justified true belief that the table is correct, and does not in addition know that the table is correct. My reasons for thinking this are similar to one of the reasons Timothy Williamson (2000 Ch. 9) gives for doubting that one’s evidence is all that one justifiably truly believes. To put the point in Lewisian terms, it seems that knowledge is a much more natural relation than justified true belief. And when ascribing contents, especially contents of tacitly held beliefs, we should strongly prefer to ascribe more rather than less natural contents.6 So the ‘special doxastic relationship’ is not weaker than knowledge.

Could it be stronger? Could it be, for example, that the relationship is certainty, or some kind of iterated knowledge? Plausibly in some game-theoretic settings it is stronger – it involves not just knowing that the table is accurate, but knowing that the other player knows the table is accurate. In some cases, the standard treatment of games will require positing even more iterations of knowledge. For convenience, it is sometimes explicitly stated that iterations continue indefinitely, so each party knows the table is correct, and knows each party knows this, and knows each party knows that, and knows each party knows that, and so on. An early example of this in philosophy is in the work by David Lewis (1969) on convention. But it is usually acknowledged (again in a tradition extending back at least to Lewis) that only the first few iterations are actually needed in any problem, and it seems a mistake to attribute more iterations than are actually used in deriving solutions to any particular game.

The reason that would be a mistake is that we want game theory, and decision theory, to be applicable to real-life situations. There is very little that we know, and know that we know, and know we know we know, and so on indefinitely (Williamson 2000 Ch. 4). There is, perhaps, even less that we are certain of. If we could only say that a person is making a particular decision when they stand in these very strong relationships to the parameters of the decision table, then people will almost never be making the kinds of decision we study in decision theory. Since decision theory and game theory are not meant to be that impractical, I conclude that the ‘special doxastic relationship’ cannot be that strong. It could be that in some games, the special relationship will involve a few iterations of knowledge, but in decision problems, where the epistemic states of others are irrelevant, even that is unnecessary, and simple knowledge seems sufficient.
It might be argued here that we shouldn’t expect to apply decision theory directly to real-life problems, but only to idealised versions of them, so it would be acceptable to, for instance, require that the things we put in the table are, say, things that have probability exactly 1. In real life, virtually nothing has probability 1. In an idealisation, many things do. But to argue this way seems to involve using ‘idealisation’ in an unnatural sense. There is a sense in which, whenever we treat something with non-maximal probability as simply given in a decision problem, we’re ignoring, or abstracting away from, some complication. But we aren’t idealising. On the contrary, we’re modelling the agent as if they were irrationally certain of some things which are merely very very probable. So it’s better to say that any application of decision theory to a real-life problem will involve ignoring certain (counterfactual) logical or metaphysical possibilities in which the decision table is not actually true.

But not any old abstraction will do. We can’t ignore just anything, at least not if we want a good model. Which abstractions are acceptable? The response I’ve offered to Dom’s challenge suggests an answer to this: we can abstract away from any possibility in which something the agent actually knows is false. I don’t have a knock-down argument that this is the best of all possible abstractions, but nor do I know of any alternative answer to the question of which abstractions are acceptable that is nearly as plausible.

We might be tempted to say that we can abstract away from anything such that the difference between its probability and 1 doesn’t make a difference to the ultimate answer to the decision problem. More carefully, the idea would be that we can have the decision table represent that $$p$$ iff $$p$$ is true and treating $$\Pr(p)$$ as 1 rather than its actual value doesn’t change what the agent should do. I think this is the most plausible story one could tell about decision tables if one didn’t like the knowledge-first story that I tell. But I also don’t think it works, because of cases like the following.

Luc is lucky; he’s in a casino where they are offering better than fair odds on roulette. Although the chance of winning any bet is $$\frac{1}{38}$$, if Luc bets $10, and his bet wins, he will win $400. (That’s the only bet on offer.) Luc, like Alice, is considering betting on 28. As it turns out, 28 won’t come up, although since this is a fair roulette wheel, Luc doesn’t know this. Luc, like most agents, has a declining marginal utility for money. He currently has $1,000, and for any amount of money $$x$$, Luc gets utility $$u(x) = x^{\frac{1}{2}}$$ out of having $$x$$. So Luc’s current utility (from money) is, roughly, 31.622. If he bets and loses, his utility will be, roughly, 31.464. And if he bets and wins, his utility will be, roughly, 37.417. So he stands to gain about 5.794, and to lose about 0.159. That is, he stands to gain about 36.5 times as much as he stands to lose. Since the odds of winning are less than $$\frac{1}{36.5}$$, his expected utility goes down if he takes the bet, so he shouldn’t take it. Of course, if the probability of losing were 1, and not merely $$\frac{37}{38}$$, he still shouldn’t take the bet. Does that mean it is acceptable, in presenting Luc’s decision problem, to leave off the table any possibility of him winning, since he won’t win, and setting the probability of losing to 1 rather than $$\frac{37}{38}$$ doesn’t change the decision he should make?
Of course not; that would horribly misstate the situation Luc finds himself in. It would misrepresent how sensitive Luc’s choice is to his utility function, and to the size of the stakes. If Luc’s utility function was $$u(x) = x^{\frac{3}{4}}$$, then he should take the bet. If his utility function is unchanged, but the bet was $1 against $40, rather than $10 against $400, he should take the bet. Leaving off the possibility of winning hides these facts, and badly misrepresents Luc’s situation.
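The three verdicts in the last two paragraphs can be checked directly. The following sketch simply reproduces the arithmetic of the Luc example using the utility functions and stakes given in the text; the function and variable names are mine.

```python
# Reproducing the arithmetic of the Luc example; the numbers come from the text.
p_win = 1 / 38     # fair roulette wheel
wealth = 1000      # Luc's current holdings, in dollars

def u_sqrt(x):
    return x ** 0.5        # the utility function attributed to Luc

def u_three_quarters(x):
    return x ** 0.75       # the alternative utility function considered above

def eu_of_betting(u, stake, net_prize):
    """Expected utility of risking `stake` for a net gain of `net_prize`,
    paired with the utility of standing pat."""
    eu_bet = p_win * u(wealth + net_prize) + (1 - p_win) * u(wealth - stake)
    return round(eu_bet, 3), round(u(wealth), 3)

print(eu_of_betting(u_sqrt, 10, 400))            # (31.621, 31.623): don't bet
print(eu_of_betting(u_three_quarters, 10, 400))  # (177.878, 177.828): bet
print(eu_of_betting(u_sqrt, 1, 40))              # (31.624, 31.623): bet
```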

I’ve argued that the states we can ‘leave off’ a decision table are the states that the agent knows not to obtain. The argument is largely by elimination. If we can only leave off things that have probability 1, then decision theory would be useless; but it isn’t. If we say we can leave off things if setting their probability at 1 is an acceptable idealisation, we need a theory of acceptable idealisations. If this is to be a rival to my theory, the idealisation had better not be that it’s acceptable to treat anything known as having probability 1. But the most natural alternative idealisation badly misrepresents Luc’s case. If we say that what can be left off is not what’s known not to obtain, but what is, say, justifiably truly believed not to obtain, we need an argument for why people would naturally use such an unnatural standard. This doesn’t even purport to be a conclusive argument, but these considerations point me towards thinking that knowledge determines what we can leave off.

I also cheated a little in making this argument. When I described Alice in the casino, I made a few explicit comments about her information states. And every time, I said that she knew various propositions. It seemed plausible at the time that this is enough to think those propositions should be incorporated into the table we use to represent her decision. That’s some evidence against the idea that more than knowledge, perhaps iterated knowledge or certainty, is needed before we add propositions to the decision table.

### From Decision Theory to Interest-Relativity

This way of thinking about decision problems offers a new perspective on the issue of whether we should always be prepared to bet on what we know.7 To focus intuitions, let’s take a concrete case. Barry is sitting in his apartment one evening when he hears a musician performing in the park outside. The musician, call her Beth, is one of Barry’s favourite musicians, so the music is familiar to Barry. Barry is excited that Beth is performing in his neighbourhood, and he decides to hurry out to see the show. As he prepares to leave, a genie appears and offers him a bet.8 If he takes the bet, and the musician is Beth, then the genie will give Barry ten dollars. On the other hand, if the musician is not Beth, he will be tortured in the fires of hell for a millennium. Let’s put Barry’s options in table form.

|             | Musician is Beth | Musician is not Beth  |
|-------------|------------------|-----------------------|
| Take Bet    | Win $10          | 1000 years of torture |
| Decline Bet | Status quo       | Status quo            |

Intuitively, it is extremely irrational for Barry to take the bet. People do make mistakes about identifying musicians, even very familiar musicians, by the strains of music that drift up from a park. It’s not worth risking a millennium of torture for $10.
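A back-of-the-envelope calculation makes the asymmetry vivid. None of the numbers below appear in the paper: the error probability and the disutility assigned to a millennium of torture are invented purely to illustrate how lopsided the stakes are.

```python
# Illustrative only: the error probability and the disutility assigned to a
# millennium of torture are invented for this sketch, not taken from the paper.
p_error = 1e-6            # chance Barry has misidentified the musician
u_win = 10                # utility of winning ten dollars
u_torture = -1e9          # stand-in disutility for 1000 years of torture

eu_take = (1 - p_error) * u_win + p_error * u_torture
eu_decline = 0.0          # the status quo, whichever column is actual

print(eu_take)                # ≈ -990.0
print(eu_take > eu_decline)   # False: even at these long odds, decline
```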

But it also seems that we’ve misstated the table. Before the genie showed up, it seemed clear that Barry knew that the musician was Beth. That was why he went out to see her perform. (If you don’t think this is true, make the sounds from the park clearer, or make it that Barry had some prior evidence that Beth was performing which the sounds from the park remind him of. It shouldn’t be too hard to come up with an evidential base such that (a) in normal circumstances we’d say Barry knew who was performing, but (b) he shouldn’t take this genie’s bet.) Now our decision tables should reflect the knowledge of the agent making the decision. If Barry knows that the musician is Beth, then the second column is one he knows will not obtain. So let’s write the table in the standard form.

### What Coraline Knows and What She Believes

Assume, for reductio, that Coraline knows that $$p$$. Then the choice she faces looks like this.

|             | $$q$$ | $$\neg q$$ |
|-------------|-------|------------|
| Take bet    | 100   | 1          |
| Decline bet | 0     | 0          |

Since taking the bet dominates declining the bet, she should take the bet if this is the correct representation of her situation. She shouldn’t take the bet, so by modus tollens, that can’t be the correct representation of her situation. If she knew $$p$$, that would be the correct representation of her situation. So, again by modus tollens, she doesn’t know $$p$$.

Now let’s consider four possible explanations of why she doesn’t know that $$p$$.

1. She doesn’t have enough evidence to know that $$p$$, independent of the practical stakes.

2. In virtue of the practical stakes, she doesn’t believe that $$p$$.

3. In virtue of the practical stakes, she doesn’t justifiably believe that $$p$$, although she does actually believe it.

4. In virtue of the practical stakes, she doesn’t know that $$p$$, although she does justifiably believe it.

I think option 1 is implausibly sceptical, at least if applied to all cases like Coraline’s. I’ve said that the probability of $$p$$ is 0.99, but it should be clear that all that matters to generating a case like this is that $$p$$ is not completely certain. Unless knowledge requires certainty, we’ll be able to generate Coraline-like cases where there is sufficient evidence for knowledge. So that’s ruled out.

Option 2 is basically what the Doxastic IRI theorist has to say. If Coraline has enough evidence to know $$p$$, but doesn’t know $$p$$ due to practical stakes, then the Doxastic IRI theorist is committed to saying that the practical stakes block belief in $$p$$. That’s the Doxastic IRI position; stakes matter to knowledge because they matter to belief.

But that’s also an implausible description of Coraline’s situation. She is very confident that $$p$$. Her confidence is grounded in the evidence in the right way. She is insensitive in her actual deliberations to the difference between her evidence for $$p$$ and evidence that guarantees $$p$$. She would become sensitive to that difference if someone offered her a bet that she knew was a 1000-to-1 bet on $$p$$, but she doesn’t know that’s what is on offer. In short, there is no difference between her unconditional attitudes, and her attitudes conditional on $$p$$, when it comes to any live question. That’s enough, I think, for belief. So she believes that $$p$$. And that’s bad news for the Doxastic IRI theorist; since it means here that stakes matter to knowledge without mattering to belief. I conclude, reluctantly, that Doxastic IRI is false.
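To make this talk of matching attitudes concrete, here is a rough sketch of the criterion being appealed to: an agent counts as believing $$p$$ only if, for every live choice, her preferences conditional on $$p$$ line up with her unconditional preferences (in the spirit of the conditional utilities $$U(\varphi \mid p)$$ of note 16). The credence of 0.99 and the 1000-to-1 bet come from the text; the other payoffs and the function names are illustrative assumptions of mine.

```python
# A rough model of the belief criterion described above: S believes p only if,
# for every choice that is live for S, her preference between the options is
# the same unconditionally as it is conditional on p.  Payoffs are illustrative.

cr_p = 0.99  # the agent's credence in p (from the text)

def exp_utility(payoff_if_p, payoff_if_not_p, pr_p):
    return pr_p * payoff_if_p + (1 - pr_p) * payoff_if_not_p

def prefers_to_act(payoff_if_p, payoff_if_not_p, pr_p):
    """Prefer acting to a status quo worth 0."""
    return exp_utility(payoff_if_p, payoff_if_not_p, pr_p) > 0

def attitudes_match(payoff_if_p, payoff_if_not_p):
    unconditional = prefers_to_act(payoff_if_p, payoff_if_not_p, cr_p)
    conditional_on_p = prefers_to_act(payoff_if_p, payoff_if_not_p, 1.0)
    return unconditional == conditional_on_p

# A modest bet on p: conditional and unconditional attitudes agree, so this
# live choice does not distinguish credence 0.99 from full belief.
print(attitudes_match(payoff_if_p=10, payoff_if_not_p=-10))    # True

# The 1000-to-1 bet on p mentioned in the text: here the attitudes come apart.
# If this choice were live, the agent would no longer count as believing p.
print(attitudes_match(payoff_if_p=1, payoff_if_not_p=-1000))   # False
```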

### Stakes as Defeaters

That still leaves two options remaining, what I’ve called options 3 and 4 above. Option 3, if suitably generalised, says that knowledge is practically sensitive because the justification condition on belief is practically sensitive. Option 4 says that practical considerations impact knowledge directly. As I read them, Jeremy Fantl and Matthew McGrath defend a version of Option 3. In the next and last subsection, I’ll argue against that position. But first I want to sketch what a position like option 4 would look like.

Knowledge, unlike justification, requires a certain amount of internal coherence among mental states. Consider the following story from David Lewis (1982):

> I speak from experience as the repository of a mildly inconsistent corpus. I used to think that Nassau Street ran roughly east-west; that the railroad nearby ran roughly north-south; and that the two were roughly parallel.

I think in that case that Lewis doesn’t know that Nassau Street runs roughly east-west. (From here on, call the proposition that Nassau Street runs roughly east-west $$N$$.) If his belief that it does was acquired and sustained in a suitably reliable way, then he may well have a justified belief that $$N$$. But the lack of coherence with the rest of his cognitive system, I think, defeats any claim to knowledge he has.

Coherence isn’t just a requirement on belief; other states can cohere or be incoherent. Assume Lewis corrects the incoherence in his beliefs, and drops the belief that Nassau Street and the railroad are roughly parallel. Still, if Lewis believed that $$N$$, preferred doing $$\varphi$$ to doing $$\psi$$ conditional on $$N$$, but actually preferred doing $$\psi$$ to doing $$\varphi$$, his cognitive system would also be in tension. That tension could, I think, be sufficient to defeat a claim to know that $$N$$.

And it isn’t just a requirement on actual states; it can be a requirement on rational states. Assume Lewis believed that $$N$$, preferred doing $$\varphi$$ to doing $$\psi$$ conditional on $$N$$, and preferred doing $$\varphi$$ to doing $$\psi$$, but should have preferred doing $$\psi$$ to doing $$\varphi$$ given his interests. Then I think the fact that the last preference is irrational, plus the fact that were it corrected there would be incoherence in his cognitive states, defeats the claim to know that $$N$$.

A concrete example of this helps make clear why such a view is attractive, and why it faces difficulties. Assume there is a bet that wins $2 if $$N$$, and loses $10 if not. Let $$\varphi$$ be taking that bet, and $$\psi$$ be declining it. Assume Lewis shouldn’t take that bet; he doesn’t have enough evidence to do so. Then he clearly doesn’t know that $$N$$. If he knew that $$N$$, $$\varphi$$ would dominate $$\psi$$, and hence be rational. But it isn’t, so $$N$$ isn’t known. And that’s true whether Lewis’s preferences between $$\varphi$$ and $$\psi$$ are rational or irrational.
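The way the stakes move the evidential bar here can be put in a single formula. For a bet that wins $$w$$ if $$N$$ and loses $$l$$ if not, taking it has non-negative expected monetary value just in case $$\Pr(N) \geq \frac{l}{w+l}$$ (ignoring the curvature of the utility function, which footnote 5 flags). The sketch below, which is mine and not part of the text, checks that threshold for the $2/$10 bet just described and for the penny-against-$1,000 bet discussed in the next paragraph.

```python
from fractions import Fraction

def threshold(win, lose):
    """Minimum Pr(N) at which a bet that wins `win` if N and loses `lose`
    if not has non-negative expected monetary value."""
    return Fraction(lose) / (Fraction(lose) + Fraction(win))

# The $2 / $10 bet just discussed:
print(threshold(2, 10))                           # 5/6 ≈ 0.83
# The penny / $1,000 bet discussed next:
print(float(threshold(Fraction(1, 100), 1000)))   # ≈ 0.99999
```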

Attentive readers will see where this is going. Change the bet so it wins a penny if $$N$$, and loses $1,000 if not. Unless Lewis’s evidence that $$N$$ is incredibly strong, he shouldn’t take the bet. So, by the same reasoning, he doesn’t know that $$N$$. And we’re back saying that knowledge requires incredibly strong evidence.

The solution, I say, is to put a pragmatic restriction on the kinds of incoherence that matter to knowledge. Incoherence with respect to irrelevant questions, such as whether to bet on $$N$$ at extremely long odds, doesn’t matter for knowledge. Incoherence (or coherence obtained only through irrationality) with respect to relevant questions does. The reason, I think, that Non-Doxastic IRI is true is that this coherence-based defeater is sensitive to practical interests.

The string of cases about Lewis and $$N$$ has ended up close to the Coraline example. We already concluded that Coraline didn’t know $$p$$. Now we have a story about why: belief that $$p$$ doesn’t cohere sufficiently well with what she should believe, namely that it would be wrong to take the bet.

If all that is correct, just one question remains: does this coherence-based defeater also defeat Coraline’s claim to have a justified belief that $$p$$? I say it does not, for three reasons.

First, her attitude towards $$p$$ tracks the evidence perfectly. She is making no mistakes with respect to $$p$$. She is making a mistake with respect to $$q$$, but not with respect to $$p$$. So her attitude towards $$p$$, i.e. belief, is justified.

Second, talking about beliefs and talking about credences are simply two ways of modelling the very same things, namely minds. If the agent both has a credence of 0.99 in $$p$$, and believes that $$p$$, these are not two different states. Rather, there is one state of the agent, and two different ways of modelling it. So it is implausible to apply different valuations to the state depending on which modelling tools we choose to use. That is, it’s implausible to say that while we’re modelling the agent with credences, the state is justified, but when we change tools, and start using beliefs, the state is unjustified. Given this outlook on beliefs and credences, it is natural to say that her belief is justified. Natural, but not compulsory, for reasons Jeremy Fantl pointed out to me.27 We don’t want a metaphysics on which persons and philosophers are separate entities. Yet we can say that someone is a good person but a bad philosopher. Normative statuses can differ depending on which property of a thing we are considering. That suggests it is at least coherent to say that one and the same state is a good credence but a bad belief. But while this may be coherent, I don’t think it is well motivated, and it is natural to have the evaluations go together.

Third, we don’t need to say that Coraline’s belief in $$p$$ is unjustified in order to preserve other nice theories, in the way that we do need to say that she doesn’t know $$p$$ in order to preserve a nice account of how we understand decision tables.

It’s this last point that I think Fantl and McGrath, who say that the belief is unjustified, would reject. So let’s conclude with a look at their arguments.

### Fantl and McGrath on Interest-Relativity

Fantl and McGrath (2009) argue for the principle (JJ), which entails that Coraline is not justified in believing $$p$$.

(JJ) If you are justified in believing that $$p$$, then $$p$$ is warranted enough to justify you in $$\varphi$$-ing, for any $$\varphi$$.
In practice, what this means is that there can’t be a salient $$p, \varphi$$ such that:

• The agent is justified in believing $$p$$;

• The agent is not warranted in doing $$\varphi$$; but

• If the agent had more evidence for $$p$$, and nothing else, the agent would be warranted in doing $$\varphi$$.

That is, once you’ve got enough evidence, or warrant, for justified belief in $$p$$, then you’ve got as much evidence for $$p$$ as matters for any decision you face. This seems intuitive, and Fantl and McGrath back up its intuitiveness with some nicely drawn examples. But I think it is false, and the Coraline example shows it is false. Coraline isn’t justified in taking the bet, and is justified in believing $$p$$, but more evidence for $$p$$ would suffice for taking the bet. So Coraline’s case shows that (JJ) is false. But there are a number of possible objections to that position. I’ll spend the rest of this section, and this paper, going over them.28

Objection: The following argument shows that Coraline is not in fact justified in believing that $$p$$.

1. $$p$$ entails that Coraline should take the bet, and Coraline knows this.

2. If $$p$$ entails something, and Coraline knows this, and she justifiably believes $$p$$, she is in a position to justifiably believe the thing entailed.

3. Coraline is not in a position to justifiably believe that she should take the bet.

4. So, Coraline does not justifiably believe that $$p$$.

Reply: The problem here is that premise 1 is false. What’s true is that $$p$$ entails that Coraline will be better off taking the bet than declining it. But it doesn’t follow that she should take the bet. Indeed, it isn’t actually true that she should take the bet, even though $$p$$ is actually true. Not only is the entailment claim false; the world of the example is a counterinstance to it.

It might be controversial to use this very case to reject premise 1. But the falsity of premise 1 should be clear on independent grounds. What $$p$$ entails is that Coraline will be best off by taking the bet. But there are lots of things that will make me better off that I shouldn’t do. Imagine I’m standing by a roulette wheel, and the thing that will make me best off is betting heavily on the number that will actually come up. It doesn’t follow that I should do that. Indeed, I should not do it. I shouldn’t place any bets at all, since all the bets have a highly negative expected return. In short, all $$p$$ entails is that taking the bet will have the best consequences. Only a very crude kind of consequentialism would identify what I should do with what will have the best returns, and that crude consequentialism isn’t true. So $$p$$ doesn’t entail that Coraline should take the bet. So premise 1 is false.

Objection: Even though $$p$$ doesn’t entail that Coraline should take the bet, it does provide inductive support for her taking the bet. So if she could justifiably believe $$p$$, she could justifiably (but non-deductively) infer that she should take the bet. Since she can’t justifiably infer that she should take the bet, she isn’t justified in believing $$p$$.

Reply: The inductive inference here looks weak. One way to make the inductive inference work would be to deduce from $$p$$ that taking the bet will have the best outcomes, and infer from that that the bet should be taken. But the last step doesn’t even look like a reliable ampliative inference. The usual situation is that the best outcome comes from taking an ex ante unjustifiable risk.
It may seem better to use $$p$$ combined with the fact that conditional on $$p$$, taking the bet has the highest expected utility. But actually that’s still not much of a reason to take the bet. Think again about cases, completely normal cases, where the action with the best outcome is an ex ante unjustifiable risk. Call that action $$\varphi$$, and let $$B \varphi$$ be the proposition that $$\varphi$$ has the best outcome. Then $$B \varphi$$ is true, and conditional on $$B \varphi$$, $$\varphi$$ has an excellent expected return. But doing $$\varphi$$ is still running a dumb risk. Since these kinds of cases are normal, it seems it will very often be the case that this form of inference leads from truth to falsity. So it’s not a reliable inductive inference.

Objection: In the example, Coraline isn’t just in a position to justifiably believe $$p$$, she is in a position to know that she justifiably believes it. And from the fact that she justifiably believes $$p$$, and the fact that if $$p$$, then taking the bet is the best option, she can infer that she should take the bet.

Reply: It’s possible at this point that we get to a dialectical impasse. I think this inference is non-deductive, because I think the example we’re discussing here is one where the premises are true and the conclusion false. Presumably someone who doesn’t like the example will think that it is a good deductive inference. Having said that, the more complicated example at the end of my (2005) was designed to raise the same problem without the consequence that if $$p$$ is true, the bet is sure to return a positive amount. In that example, conditionalising on $$p$$ means the bet has a positive expected return, but still possibly a negative return. But in that case (JJ) still failed. If it is too much to accept that there are cases where an agent justifiably believes $$p$$, and hence justifiably believes taking the bet will return the best outcome, and knows all this, but still can’t rationally bet on $$p$$, then that more complicated example might be more persuasive. Otherwise, I concede that someone who believes (JJ) and thinks rational agents can use it in their reasoning will not think that a particular case is a counterexample to (JJ).

Objection: If Coraline were ideal, then she wouldn’t believe $$p$$. That’s because if she were ideal, she would have a lower credence in $$q$$, and if that were the case, her credence in $$p$$ would have to be much higher (close to 0.999) in order to count as a belief. So her belief is not justified.

Reply: The premise here, that if Coraline were ideal she would not believe that $$p$$, is true. The conclusion, that she is not justified in believing $$p$$, does not follow. It’s always a mistake to identify what should be done with what is done in ideal circumstances. This is something that has long been known in economics. The locus classicus of the view that this is a mistake is Lipsey and Lancaster (1956). A similar point has been made in ethics in papers such as and . And it has been extended to epistemology by .

All of these discussions have a common structure. It is first observed that the ideal is both $$F$$ and $$G$$. It is then stipulated that whatever happens, the thing being created (either a social system, an action, or a cognitive state) will not be $$F$$. It is then argued that given the stipulation, the thing being created should not be $$G$$. That is not just the claim that we shouldn’t aim to make the thing be $$G$$.
It is, rather, that in many cases being $$G$$ is not the best way to be, given that $$F$$-ness will not be achieved. Lipsey and Lancaster argue (in an admittedly idealised model) that it is actually quite unusual for $$G$$ to be best given that the system being created will not be $$F$$.

It’s not too hard to come up with examples that fit this structure. Following , we might note that I’m justified in believing that there are no ideal cognitive agents, although were I ideal I would not believe this. Or imagine a student taking a ten-question mathematics exam who has no idea how to answer the last question. She knows an ideal student would correctly answer an even number of questions, but that’s no reason for her to throw out her good answer to question nine. In general, once we have stipulated one departure from the ideal, there’s no reason to assign any positive status to other similarities to the ideal. In particular, given that Coraline has an irrational view towards $$q$$, she won’t perfectly match up with the ideal, so there’s no reason it’s good to agree with the ideal in other respects, such as not believing $$p$$.

Stepping back a bit, there’s a reason the interest-relative theory says that the ideal and justification come apart right here. On the interest-relative theory, like on any pragmatic theory of mental states, the identification of mental states is a somewhat holistic matter. Something is a belief in virtue of its position in a much broader network. But the evaluation of belief is (relatively) atomistic. That’s why Coraline is justified in believing $$p$$, although if she were wiser she would not believe it. If she were wiser, i.e., if she had the right attitude towards $$q$$, the very same credence in $$p$$ would not count as a belief. Whether her state counts as a belief, that is, depends on wide-ranging features of her cognitive system. But whether the state is justified depends on more local factors, and in local respects she is doing everything right.

Objection: If Coraline is justified in believing $$p$$, then Coraline can use $$p$$ as a premise in practical reasoning. If Coraline can use $$p$$ as a premise in practical reasoning, and $$p$$ is true, and her belief in $$p$$ is not Gettiered, then she knows $$p$$. By hypothesis, her belief is true, and her belief is not Gettiered. So she should know $$p$$. But she doesn’t know $$p$$. So by several steps of modus tollens, she isn’t justified in believing $$p$$.29

Reply: This objection turns on an equivocation over the neologism ‘Gettiered.’ Some epistemologists use this to simply mean that a belief is justified and true without constituting knowledge. By that standard, the third sentence is false. Or, at least, we haven’t been given any reason to think that it is true. Given everything else that’s said, the third sentence is a raw assertion that Coraline knows that $$p$$, and I don’t think we should accept that.

The other way epistemologists sometimes use the term is to pick out justified true beliefs that fail to be knowledge for the reasons that the beliefs in the original examples from Gettier (1963) fail to be knowledge. That is, it picks out a property that beliefs have when they are derived from a false lemma, or whatever similar property is held to be doing the work in the original Gettier examples. Now on this reading, Coraline’s belief that $$p$$ is not Gettiered. But it doesn’t follow that it is known.
There’s no reason, once we’ve given up on the JTB theory of knowledge, to think that whatever goes wrong in Gettier’s examples is the only way for a justified true belief to fall short of knowledge. It could be that there’s a practical defeater, as in this case. So the second sentence of the objection is false, and the objection again fails.

Once we have an expansive theory of defeaters, as I’ve adopted here, it becomes problematic to describe the case in the language Fantl and McGrath use. They focus a lot on whether agents like Coraline have ‘knowledge-level justification’ for $$p$$, which is defined as “justification strong enough so that shortcomings in your strength of justification stand in the way of your knowing” (Fantl and McGrath 2009). An important part of their argument is that an agent is justified in believing $$p$$ iff they have knowledge-level justification for $$p$$. I haven’t addressed this argument, so I’m not really addressing the case on their terms.

Well, does Coraline have knowledge-level justification for $$p$$? I’m not sure, because I’m not sure I grasp this concept. Compare the agent in Harman’s dead dictator case (Harman 1973). Does she have knowledge-level justification that the dictator is dead? In one sense yes; it is the existence of misleading news sources that stops her knowing. In another sense no; she doesn’t know, but if she had better evidence (e.g., seeing the death happen) she would know. I want to say the same thing about Coraline, and that makes it hard to translate the Coraline case into Fantl and McGrath’s terminology.

## References

Blome-Tillmann, Michael. 2009. “Contextualism, Subject-Sensitive Invariantism, and the Interaction of ‘Knowledge’-Ascriptions with Modal and Temporal Operators.” Philosophy and Phenomenological Research 79 (2): 315–31. https://doi.org/10.1111/j.1933-1592.2009.00280.x.

Braddon-Mitchell, David, and Frank Jackson. 2007. The Philosophy of Mind and Cognition, Second Edition. Malden, MA: Blackwell.

Brown, Jessica. 2008. “Knowledge and Practical Reason.” Philosophy Compass 3 (6): 1135–52. https://doi.org/10.1111/j.1747-9991.2008.00176.x.

Cohen, Stewart. 1988. “How to Be a Fallibilist.” Philosophical Perspectives 2: 91–123. https://doi.org/10.2307/2214070.

DeRose, Keith. 1995. “Solving the Skeptical Problem.” Philosophical Review 104 (1): 1–52. https://doi.org/10.2307/2186011.

Fantl, Jeremy, and Matthew McGrath. 2009. Knowledge in an Uncertain World. Oxford: Oxford University Press.

Feltz, Adam, and Chris Zarpentine. 2010. “Do You Know More When It Matters Less?” Philosophical Psychology 23 (5): 683–706. https://doi.org/10.1080/09515089.2010.514572.

Gettier, Edmund L. 1963. “Is Justified True Belief Knowledge?” Analysis 23 (6): 121–23. https://doi.org/10.2307/3326922.

Hammond, Peter J. 1988. “Consequentialist Foundations for Expected Utility.” Theory and Decision 25: 25–78. https://doi.org/10.1007/BF00129168.

Harman, Gilbert. 1973. Thought. Princeton: Princeton University Press.

Hawthorne, John. 2004. Knowledge and Lotteries. Oxford: Oxford University Press.

Hawthorne, John, and Jason Stanley. 2008. “Knowledge and Action.” Journal of Philosophy 105 (10): 571–90. https://doi.org/10.5840/jphil20081051022.

Ichikawa, Jonathan. 2009. “Explaining Away Intuitions.” Studia Philosophica Estonica 22 (2): 94–116. https://doi.org/10.12697/spe.2009.2.2.06.

Jackson, Frank. 1991. “Decision Theoretic Consequentialism and the Nearest and Dearest Objection.” Ethics 101 (3): 461–82. https://doi.org/10.1086/293312.

Kennett, Jeanette, and Michael Smith. 1996a. “Frog and Toad Lose Control.” Analysis 56 (2): 63–73. https://doi.org/10.1111/j.0003-2638.1996.00063.x.

———. 1996b. “Philosophy and Commonsense: The Case of Weakness of Will.” In The Place of Philosophy in the Study of Mind, edited by Michaelis Michael and John O’Leary-Hawthorne, 141–57. Norwell, MA: Kluwer. https://doi.org/10.1017/CBO9780511606977.005.

Lewis, David. 1969. Convention: A Philosophical Study. Cambridge: Harvard University Press.

———. 1982. “Logic for Equivocators.” Noûs 16 (3): 431–41. https://doi.org/10.1017/cbo9780511625237.009.

———. 1996. “Elusive Knowledge.” Australasian Journal of Philosophy 74 (4): 549–67. https://doi.org/10.1080/00048409612347521.

Lipsey, R. G., and Kelvin Lancaster. 1956. “The General Theory of Second Best.” Review of Economic Studies 24 (1): 11–32. https://doi.org/10.2307/2296233.

Runyon, Damon. 1992. Guys & Dolls: The Stories of Damon Runyon. New York: Penguin.

Stalnaker, Robert. 2008. Our Knowledge of the Internal World. Oxford: Oxford University Press.

Stanley, Jason. 2005. Knowledge and Practical Interests. Oxford: Oxford University Press.

Sturgeon, Scott. 2008. “Reason and the Grain of Belief.” Noûs 42 (1): 139–65. https://doi.org/10.1111/j.1468-0068.2007.00676.x.

Watson, Gary. 1977. “Skepticism about Weakness of Will.” Philosophical Review 86 (3): 316–39. https://doi.org/10.2307/2183785.

Weatherson, Brian. 2003. “What Good Are Counterexamples?” Philosophical Studies 115 (1): 1–31. https://doi.org/10.1023/A:1024961917413.

———. 2005. “Can We Do Without Pragmatic Encroachment?” Philosophical Perspectives 19 (1): 417–43. https://doi.org/10.1111/j.1520-8583.2005.00068.x.

———. 2006. “Questioning Contextualism.” In Epistemology Futures, edited by Stephen Cade Hetherington, 133–47. Oxford: Oxford University Press.

Williamson, Timothy. 1998. “Conditionalizing on Knowledge.” British Journal for the Philosophy of Science 49 (1): 89–121. https://doi.org/10.1093/bjps/49.1.89.

———. 2000. Knowledge and its Limits. Oxford: Oxford University Press.

## Notes

1. The reflections in the next few paragraphs are inspired by some comments by Stalnaker in his (2008), though I don’t want to suggest the theory I’ll discuss is actually Stalnaker’s.↩︎

2. Whether Doxastic or Non-Doxastic IRI is true about justified belief ascriptions turns on some tricky questions about what to say when a subject’s credences are nearly, but not exactly, appropriate given her evidence. Space considerations prevent a full discussion of those cases here. Whether I can hold onto the strict invariantism about claims about justified credences depends, I now think, on whether an interest-neutral account of evidence can be given. Discussions with Tom Donaldson and Jason Stanley have left me less convinced than I was in 2005 that this is possible, but this is far too big a question to resolve here.↩︎

3. I mean here the case of Coraline, to be discussed in section 3 below. Several people have remarked in conversation that Coraline doesn’t look to them like a case of Ignorant High Stakes. This isn’t surprising; Coraline is better described as being mistaken than ignorant, and she’s mistaken about odds, not stakes. If they’re right, that probably means my argument for Non-Doxastic IRI is less like Stanley’s, and hence more original, than I think it is. So I don’t feel like pressing the point! But I do want to note that I thought the Coraline example was a variation on a theme Stanley originated.↩︎
4. If we are convinced that the right decision is the one that maximises expected utility, there is a sense in which these questions collapse. For the expected utility theorist, we can solve Dom’s question by making sure the states are logically exhaustive, and making the ‘payouts’ in each state be expected payouts. But the theory that the correct decision is the one that maximises expected utility, while plausibly true, is controversial. It shouldn’t be assumed when we are investigating the semantics of decision tables.↩︎

5. Assuming Alice’s utility curve for money curves downwards, she should be looking for a slightly higher chance of winning than $$\frac{1}{2}$$ to place the bet, but that level of detail isn’t relevant to the story we’re telling here.↩︎

6. I’m here retracting some things I said a few years ago in a paper on philosophical methodology (Weatherson 2003). There I argued that identifying knowledge with justified true belief would give us a theory on which knowledge was more natural than a theory on which we didn’t identify knowledge with any other epistemic property. I now think that is wrong for a couple of reasons. First, although it’s true (as I say in the earlier paper) that knowledge can’t be primitive or perfectly natural, this doesn’t make it less natural than justification, which is also far from a fundamental feature of reality. Indeed, given how usual it is for languages to have a simple representation of knowledge, we have some evidence that it is very natural for a term from a special science. Second, I think in the earlier paper I didn’t fully appreciate the point (there attributed to Peter Klein) that the Gettier cases show that the property of being a justified true belief is not particularly natural. In general, when $$F$$ and $$G$$ are somewhat natural properties, then so is the property of being $$F \wedge G$$. But there are exceptions, especially in cases where these are properties that a whole can have in virtue of a part having the property. In those cases, a whole that has an $$F$$ part and a $$G$$ part will be $$F \wedge G$$, but this won’t reflect any distinctive property of the whole. And one of the things the Gettier cases show is that the properties of being justified and being true, as applied to belief, fit this pattern. Note that even if you think that philosophers are generally too quick to move from instinctive reactions to the Gettier case to abandoning the justified true belief theory of knowledge, this point holds up. What is important here is that on sufficient reflection, the Gettier cases show that some justified true beliefs are not knowledge, and that the cases in question also show that being a justified true belief is not a particularly natural or unified property. So the point I’ve been making in this footnote is independent of the point I wanted to stress in “What Good are Counterexamples?”, namely that philosophers in some areas (especially epistemology) are insufficiently reformist in their attitude towards our intuitive reactions to cases.↩︎

7. This issue is of course central to the plotline in Runyon (1992).↩︎

8. Assume, perhaps implausibly, that the sudden appearance of the genie is evidentially irrelevant to the proposition that the musician is Beth. The reasons this may be implausible are related to the arguments in . Thanks here to Jeremy Fantl.↩︎

9. The idea that interest-relativity is a way of fending off scepticism is a very prominent theme in .↩︎
10. On the version of IRI I’m defending, Barry is free to be interested in whatever he likes. If he started wondering about whether it would be rational to take such a bet, he loses the knowledge that Beth is the musician, even if there is no genie and the bet isn’t offered. The existence of the genie’s offer makes the bet a practical interest; merely wondering about the genie’s offer makes the bet a cognitive interest. But both kinds of interests are relevant to knowledge.↩︎

11. As they make clear in their (2008), Hawthorne and Stanley are interested in defending relatively strong premises linking knowledge and action independently of the argument for the interest-relativity of knowledge. What I’m doing here is showing how that conclusion does not rest on anything nearly as strong as the principles they believe, and so there is plenty of space to disagree with their general principles, but accept interest-relativity. The strategy here isn’t a million miles from the point Fantl and McGrath make when they note that much weaker premises than the ones they endorse imply a failure of ‘purism.’↩︎

12. I have more to say about those cases in section 2.2.↩︎

13. Also note that I’m not taking as a premise any claim about what Barry knows after the bet is offered. A lot of work on interest-relativity has used such premises, or premises about related intuitions. This seems like a misuse of the method of cases to me. That’s not because we should never use intuitions about cases, just that these cases are too hard to think that snap judgments about them are particularly reliable. In general, we can know a lot about cases by quickly reflecting on them. Similarly, we know a lot about which shelves are level and which are uneven by visual inspection, i.e., ‘eyeballing.’ But when different eyeballs disagree, it’s time to bring in other tools. That’s the approach of this paper. I don’t have a story about why the various eyeballs disagree about cases like Barry’s; that seems like a task best undertaken by a psychologist, not a philosopher.↩︎

14. This is obviously not a full argument against contextualism; that would require a much longer paper than this.↩︎

15. See, for instance, , or .↩︎

16. In the last two lines, I use $$U(\phi)$$ to denote the expected utility of $$\phi$$, and $$U(\phi | p)$$ to denote the expected utility of $$\phi$$ conditional on $$p$$. It’s often easier to write this as simply $$U(\phi \wedge p)$$, since the utility of $$\phi$$ conditional on $$p$$ just is the utility of doing $$\phi$$ in a world where $$p$$ is true. That is, it is the utility of $$\phi \wedge p$$ being realised. But we get a nicer symmetry between the probabilistic principles and the utility principles if we use the explicitly conditional notation for each.↩︎

17. This is probably somewhat unrealistic. It’s hard to think about whether $$\Pr(p)$$ is closer to 0.7 or 0.8 without raising to salience questions about, for example, what the second decimal place in $$\Pr(p)$$ is. This is worth bearing in mind when coming up with intuitions about the cases in this paragraph.↩︎

18. See for discussion of a similar puzzle for anyone trying to tell a unified story of belief and credence.↩︎
19. There are exceptions, especially in cases where $$p$$ concerns something significant to financial markets, and the agent trades financial products. If you work through the theory that I’m about to lay out, one consequence is that such agents should have very few unconditional beliefs about financially-sensitive information, just higher and lower credences. I think that’s actually quite a nice outcome, but I’m not going to rely on that in the argument for the view.↩︎

20. The presentation in this section, as in the earlier paper, assumes at least a weak form of consequentialism in the sense of Hammond (1988). This was arguably a weakness of the earlier paper. We’ll return to the issue of what happens in cases where the agent doesn’t, and perhaps shouldn’t, maximise expected utility, at the end of the section.↩︎

21. I’m borrowing this example from Fred Dretske, who uses it to make some interesting points about dispositional belief.↩︎

22. The recipe here is similar to that given in , but the motivation is streamlined. Thanks to Jacob Ross for helpful suggestions here.↩︎

23. Some consequentialists say that what the agent should do depends on whether $$p$$ is true. If $$p$$ is true, she should do $$\psi$$, and if $$p$$ is false she should do $$\varphi$$. As we’ll see, I have reasons for thinking this is rather radically wrong.↩︎

24. The target here is not directly the interest-relativity of their theories, but more general principles about the role of knowledge in action and assertion. Since my theories are close enough, at least in consequences, to Hawthorne and Stanley’s, it is important to note how my theory handles the case.↩︎

25. I’m more interested in the abstract structure of the case than in whether any real-life situation is modelled by just this structure. But it might be worth noting the rough kind of case where this kind of situation can arise. So let’s say Coraline has a particular bank account that is uninsured, but which is currently paying 10% interest, and she is deciding whether to deposit another $1000 in it. Then $$p$$ is the proposition that the bank will not collapse, and she’ll get her money back, and $$q$$ is the proposition that the interest will stay at 10%. To make the model exact, we have to also assume that if the interest rate on her account doesn’t stay at 10%, it falls to 0.1%. And we have to assume that the interest rate and the bank’s collapse are probabilistically independent. Neither of these are at all realistic, but a realistic case would simply be more complicated, and the complications would obscure the philosophically interesting point.↩︎

26. If she did compute the expected utility, then one of the things that would be salient for her is the expected utility of the bet. And the expected utility of the bet is different to its expected utility given $$p$$. So if that expected utility is salient, she doesn’t believe $$p$$. And it’s going to be important to what follows that she does believe $$p$$.↩︎

27. The following isn’t Fantl’s example, but I think it makes much the same point as the examples he suggested.↩︎

28. Thanks here to a long blog comments thread with Jeremy Fantl and Matthew McGrath for making me formulate these points much more carefully. The original thread is at http://tar.weatherson.org/2010/03/31/do-justified-beliefs-justify-action/.↩︎

29. Compare the ‘subtraction argument’ on page 99 of Fantl and McGrath (2009).↩︎