Chapter 4 Knowledge

In chapter 3, I argued that to believe something is to take it as given in all relevant inquiries, and in at least one possible inquiry. And I explained what it was to take something as given in terms of how one answers conditional and unconditional questions. In this chapter I’m going to argue that whatever is known can be properly taken as given in all relevant inquiries, where a relevant inquiry is one that one either is or should be conducting. Since some things that are usually known cannot be properly taken as given in some inquiries, this implies that knowledge is sensitive to one’s inquiries and hence to one’s interests.

There is an easy argument for the conclusion of this chapter.

  1. To believe something is to, inter alia, take it as given for all relevant inquiries.
  2. Whatever is known is correctly believed.
  3. So, whatever is known is correctly taken as given in all relevant inquiries.

I think this argument is basically sound. But both premises are controversial, and it isn’t completely obvious that it is even valid. So I’m not going to rely on this argument. Rather, I’ll argue more directly for the conclusion that whatever is known is correctly taken as given in all relevant inquiries. This will provide indirect evidence that the theory of belief in chapter 3 was correct, since we can now take that theory of belief to be an explanation for the claim that whatever is known is correctly taken as given in all relevant inquiries, rather than as part of the motivation for it.

The argument here will be in two parts. First, I’ll focus on practical inquiries, i.e., inquiries about what to do, and argue that what is known can be taken as given in all practical inquiries. Then I’ll extend the discussion to theoretical inquiries, and hence to inquiries in general. With the argument complete, I’ll look at two possible objections to the argument - that it has implausible consequences about the role of logical reasoning in extending knowledge, and that it leads to implausible results when a source provides both relevant and irrelevant information.

4.1 Knowledge and Practical Interests

A practical inquiry can often be represented by the kind of decision table that we use in decision theory courses.34 We take something as given in that inquiry iff it is encoded in the right way in the decision table. The primary way that we encode a proposition p into a decision table is to set up the table so that p is true in every column. If we use a table where p is encoded in this way, and p is not known, we are making a mistake. And in particular, we are making an epistemic mistake.

To see this, let’s start with an example. Professor Dec is teaching introductory decision theory to her undergraduate class. She is trying to introduce the notion of a dominant choice. So she introduces the following problem, with two states, S1 and S2, and two choices, C1 and C2, as is normal for introductory problems.

          S1       S2
  C1    -$200    $1000
  C2    -$100    $1500

She’s hoping that the students will see that C1 and C2 are bets, but C2 is clearly the better bet. If S1 is actual, then both bets lose, but C2 loses less money. If S2 is actual, then both bets win, but C2 wins more. So C2 is better. That analysis is clearly wrong if the states are causally dependent on the choices, and controversial if the states are evidentially dependent on the choices. But Professor Dec has not given any reason for the students to think that the states are dependent on the choices in either way, and in fact the students don’t worry about that kind of dependence.
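For readers who like to see such checks spelled out, here is a minimal sketch, in Python, of the dominance reasoning Professor Dec wants her students to run on the table above. The payoffs are the ones in the table; the representation and the function name dominates are illustrative choices of mine, and the check presupposes, as Dec does, that the states are not dependent on the choices in either way.

```python
# Professor Dec's table: payoffs (in dollars) for each choice in each state.
payoffs = {
    "C1": {"S1": -200, "S2": 1000},
    "C2": {"S1": -100, "S2": 1500},
}

def dominates(a, b, table):
    """True iff choice a pays at least as much as choice b in every state,
    and strictly more in at least one state."""
    states = table[a].keys()
    at_least_as_good = all(table[a][s] >= table[b][s] for s in states)
    better_somewhere = any(table[a][s] > table[b][s] for s in states)
    return at_least_as_good and better_somewhere

print(dominates("C2", "C1", payoffs))  # True: C2 does better whichever state obtains
print(dominates("C1", "C2", payoffs))  # False
```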

That doesn’t mean, however, that the students all adopt the analysis that Professor Dec wants them to. One student, Stu, is particularly unwilling to accept that C2 is better than C1. He thinks, on the basis of his experience, that when more than $1000 is on the line, people aren’t as reliable about paying out on bets. So while C1 is guaranteed to deliver $1000 if S2, if the agent bets on C2, she might face some difficulty in collecting on her money.

Given the context, i.e., that they are in an undergraduate decision theory class, it seems that Stu has misunderstood the question that Professor Dec intended to ask. But it is not easy to specify just exactly what Stu’s mistake is. It isn’t that he thinks Professor Dec has misdescribed the situation. It isn’t that he thinks the agent won’t collect $1500 if she chooses C2 and is in S2. He just thinks that she might not be able to collect it, so the expected payout might really be a little less than $1500.

But Stu is not the only problem that Professor Dec has. She also has trouble convincing Dom of the argument. He thinks there should be a third state added to the table, S3. In S3, there is a vengeful God who is about to end the world, and take everyone who chose C1 to heaven, while sending everyone who chose C2 to hell. Since heaven is better than hell, C2 does not dominate C1; it is worse in S3. Dom does not think this is particularly likely, but he thinks it is possible, and decision theory should represent possibilities like this. If decision theory is to be useful, we must say something about why we can leave states like S3 off the decision table.

So in order to teach decision theory, Professor Dec has to answer two questions.

  1. What makes it legitimate to write something on the decision table, such as the ‘$1500’ we write in the bottom right cell of Dec’s table?
  2. What makes it legitimate to leave something off a decision table, such as leaving Dom’s state S3 off the table?

When I’ve talked about decision tables so far, the focus has been on what tables thinkers actually use. Here the focus is switching to what tables they should use. And the claim is going to be that what they should use is determined by what they know.

To get to that conclusion, start with a much simpler problem. Mireille is out of town on a holiday, and she faces the following choice concerning what to do with a token in her hand.

Choice                  Outcome
Put token on table      Win $1000
Put token in pocket     Win nothing

This looks easy, especially if we’ve taken Professor Dec’s class. Putting the token on the table dominates putting the token in her pocket. It returns $1000, versus no gain. So she should put the token on the table.

I’ve left Mireille’s story fairly schematic; let’s fill in some of the details. Mireille is on holiday at a casino. It’s a fair casino in the sense that the probabilities of the outcomes of each of the games are just what you’d expect. And Mireille knows this. The table she’s standing at is a roulette table. The token is a chip from the casino worth $1000.

Putting the token on the table means placing a bet. As it turns out, it means placing a bet on the roulette wheel landing on 28. If that bet wins she gets her token back and another token of the same value. There are many other bets she could make, but Mireille has decided against all but one of them. Since her birthday is the 28th, she is tempted to put a bet on 28; that’s the only bet she is considering. If she makes this bet, the objective chance of her winning is 1 in 38, and she knows this. As a matter of fact she will win, but she doesn’t know this. (This is why the description in the table I presented above is truthful, though frightfully misleading.) As you can see, the odds on this bet are terrible. She should have a chance of winning around ½ to justify placing this bet. (It’s a very unfair casino in this sense, but what can you expect at a vacation resort?) So the above table, which makes it look like placing the bet is the dominant, and hence rational, option, is misleading.
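For concreteness, here is a quick sketch of the expected value calculation behind that judgment. The stakes and the 1 in 38 chance are the ones from the story; the break-even figure is just elementary algebra, not anything Mireille is supposed to be computing.

```python
# Betting the $1,000 chip on a single number: win $1,000 with chance 1/38,
# otherwise lose the chip. Keeping it in her pocket changes nothing.
p_win = 1 / 38
ev_bet = p_win * 1000 + (1 - p_win) * (-1000)
print(round(ev_bet, 2))   # about -947.37 dollars

# Break-even requires p * 1000 - (1 - p) * 1000 = 0, i.e. p = 1/2, which is why
# she would need a chance of winning around 1/2 to justify placing the bet.
```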

Just how is the table misleading though? It isn’t because what it says is false. If Mireille puts the token on the table she wins $1000; and if she doesn’t, she stays where she is. It isn’t, or isn’t just, that Mireille doesn’t believe the table reflects what will happen if she places the bet. As it turns out, Mireille is smart, so she doesn’t form beliefs about chance events like roulette wheels. But even if she did, that wouldn’t change how misleading the table is. The table suggests that it is rational for Mireille to put the token on the table. In fact, that is irrational. And it would still be irrational if Mireille believed, irrationally, that the wheel will land on 28.

A better suggestion is that the table is misleading because Mireille doesn’t know that it accurately depicts the choice she faced. If she did know that these were the outcomes to putting the token on the table versus in her pocket, it would be rational for her to put it on the table. If we take it as understood in a presentation of a decision problem that the agent knows that the table accurately depicts the outcomes of various choices in different states, then we can tell a plausible story about the miscommunication between Professor Dec and her students. Stu was assuming that if the agent wins $1500, she might not be able to easily collect. That is, he was assuming that the agent does not know that she’ll get $1500 if she chooses C2 and is in state S2. Professor Dec, if she’s anything like other decision theory professors, will have assumed that the agent did know exactly that. And the miscommunication between Professor Dec and Dom also concerns knowledge. When Dec wrote that table up, she was saying that the agent knew that S1 or S2 obtained. And when she says it is best to take dominating options, she means that it is best to take options that one knows to have better outcomes. So here are the answers to Stu and Dom’s challenges.

  • It is legitimate to write something on the decision table, such as the ‘$1500’ we write in the bottom right cell of Dec’s table, iff the decision maker knows it to be true.
  • It is legitimate to leave something off a decision table, such as leaving Dom’s state S3 off the table, iff the decision maker knows it not to obtain.

Perhaps those answers are not correct, but what we can clearly see by reflecting on these cases is that the standard presentation of a decision problem presupposes not just that the table states what will happen, but also that the agent stands in some special doxastic relationship to the information explicitly on the table (such as that the chooser in Professor Dec’s example will get $1500 if C2 and S2) and implied by where the table ends (such as that S3 will not happen).

I think that special doxastic relationship is knowledge. But I don’t need to argue for that here. All I need to argue is that if the person making the decision knows that p, she stands in the special relationship.

But could the ‘special doxastic relationship’ be stronger than knowledge? Could it be, for example, that the relationship is certainty, or some kind of iterated knowledge? Plausibly in some game-theoretic settings it is stronger - it involves not just knowing that the table is accurate, but knowing that the other player knows the table is accurate. In some cases, the standard treatment of games will require positing even more iterations of knowledge. For convenience, it is sometimes explicitly stated that iterations continue indefinitely, so each party knows the table is correct, and knows each party knows this, and knows each party knows that, and knows each party knows that, and so on. An early example of this in philosophy is in the work by David Lewis (1969) on convention. But it is usually acknowledged (again in a tradition extending back at least to Lewis) that only the first few iterations are actually needed in any problem, and it seems a mistake to attribute more iterations than are actually used in deriving solutions to any particular game.

The reason that would be a mistake is that we want game theory, and decision theory, to be applicable to real-life situations. There is very little that we know, and know that we know, and know we know we know, and so on indefinitely (Williamson 2000, Ch. 4). There is, perhaps, even less that we are certain of. If we could only say that a person is making a particular decision when they stand in these very strong relationships to the parameters of the decision table, then people would almost never be making the kinds of decision we study in decision theory. Since decision theory and game theory are not meant to be that impractical, the ‘special doxastic relationship’ cannot be that strong. It could be that in some games, the special relationship will involve a few iterations of knowledge, but in decision problems, where the epistemic states of others are irrelevant, even that is unnecessary, and simple knowledge seems sufficient.

It might be argued here that we shouldn’t expect to apply decision theory directly to real-life problems, but only to idealised versions of them, so it would be acceptable to, for instance, require that the things we put in the table are, say, things that have probability exactly 1. In real life, virtually nothing has probability 1. In an idealisation, many things do. But to argue this way seems to involve using ‘idealisation’ in an unnatural sense. There is a sense in which, whenever we treat something with non-maximal probability as simply given in a decision problem, we’re ignoring, or abstracting away from, some complication. But we aren’t idealising. On the contrary, we’re modelling the agent as if they were irrationally certain in some things which are merely very very probable.

So it’s better to say that any application of decision theory to a real-life problem will involve ignoring certain (counterfactual) logical or metaphysical possibilities in which the decision table is not actually true. But not any old abstraction will do. We can’t ignore just anything, at least not if we want a good model. Which abstractions are acceptable? The response I’ve offered to Dom’s challenge suggests an answer to this: we can abstract away from any possibility in which something the agent actually knows is false. I don’t have a knock-down argument that this is the best of all possible abstractions, but nor do I know of any alternative answer to the question of which abstractions are acceptable that is nearly as plausible.

We might be tempted to say that we can abstract away from anything such that the difference between its probability and 1 doesn’t make a difference to the ultimate answer to the decision problem. More carefully, the idea would be that we can have the decision table represent that p iff p is true and treating the probability of p as 1 rather than its actual value doesn’t change what the agent should do. I think this is the most plausible story one could tell about decision tables if one didn’t like the knowledge first story that I tell. But I also don’t think it works, in part because of cases like the following.35

Luc is lucky; he’s in a casino where they are offering better than fair odds on roulette. Although the chance of winning any bet is 1 in 38, if Luc bets $10, and his bet wins, he will win $400. (That’s the only bet on offer.) Luc, like Mireille, is considering betting on 28. As it turns out, 28 won’t come up, although since this is a fair roulette wheel, Luc doesn’t know this. Luc, like most agents, has a declining marginal utility for money. He currently has $1,000, and for any amount of money $x, Luc gets utility x^0.5 out of having $x. So Luc’s current utility (from money) is, roughly, 31.622. If he bets and loses, his utility will be, roughly, 31.464. And if he bets and wins, his utility will be, roughly, 37.417. So he stands to gain about 5.794, and to lose about 0.159. That is, he stands to gain about 36.5 times as much as he stands to lose. Since the odds of winning are less than 1 in 36.5, his expected utility goes down if he takes the bet, so he shouldn’t take it. Of course, if the probability of losing were 1, and not merely 37 in 38, he shouldn’t take the bet either. Does that mean it is acceptable, in presenting Luc’s decision problem, to leave off the table any possibility of him winning, since he won’t win, and setting the probability of losing to 1 rather than 37 in 38 doesn’t change the decision he should make? I doubt it; this would misrepresent Luc’s situation in an important way. In particular, it would misrepresent how sensitive Luc’s choice is to his utility function, and to the size of the stakes. If Luc’s utility function had been that he gets utility x^0.75 from wealth $x, then it would have been wise for Luc to take the bet. Even with his actual utility function, if the bet had been $1 against $40, rather than $10 against $400, he would have been wise to take the bet. Leaving off the possibility of winning hides these facts, and badly misrepresents Luc’s situation.
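Since the verdicts here turn on small differences in expected utility, it may help to see the arithmetic laid out. The following sketch just reproduces the calculations in the last paragraph; the figures are the ones given there, and the helper function is merely illustrative scaffolding.

```python
# Luc has $1,000, gets utility w ** 0.5 from wealth w, and the bet risks $10
# to win $400 with probability 1/38; so he ends with $1,400 if he wins and
# $990 if he loses.

def expected_utility_of_bet(wealth, stake, prize, p_win, exponent):
    utility = lambda w: w ** exponent
    win = utility(wealth + prize)    # 1400 ** 0.5 is roughly 37.417
    lose = utility(wealth - stake)   #  990 ** 0.5 is roughly 31.464
    return p_win * win + (1 - p_win) * lose

p = 1 / 38
status_quo = 1000 ** 0.5             # roughly 31.622

# His actual case: the expected utility of betting is a touch below the status quo.
print(expected_utility_of_bet(1000, 10, 400, p, 0.5) < status_quo)         # True

# With utility w ** 0.75, the same bet would have been worth taking.
print(expected_utility_of_bet(1000, 10, 400, p, 0.75) > 1000 ** 0.75)      # True

# And with his actual utility function, $1 against $40 would be worth taking.
print(expected_utility_of_bet(1000, 1, 40, p, 0.5) > status_quo)           # True
```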

I’ve argued that the states we can ‘leave off’ a decision table are the states that the agent knows not to obtain. The argument is largely by elimination. If we can only leave off things that have probability 1, then decision theory would be useless; but it isn’t. If we say we can leave off things if setting their probability at 1 is an acceptable idealisation, we need a theory of acceptable idealisations. If this is to be a rival to my theory, the idealisation had better not be that it’s acceptable to treat anything known as having probability 1. But the most natural alternative idealisation badly misrepresents Luc’s case. If we say that what can be left off is not what’s known not to obtain, but what is, say, justifiably truly believed not to obtain, we need an argument for why people would naturally use such an unnatural standard. This doesn’t even purport to be a conclusive argument, but these considerations point me towards thinking that knowledge determines what we can leave off.

I also cheated a little in making this argument. When I described Mireille in the casino, I made a few explicit comments about her information states. And every time, I said that she knew various propositions. It seemed plausible at the time that this is enough to think those propositions should be incorporated into the table we use to represent her decision. That’s some evidence against the idea that more than knowledge, perhaps iterated knowledge or certainty, is needed before we add propositions to the decision table.

If knowledge structures decision tables, then there is a simple argument that Anisa loses knowledge when playing the red-blue game. The following would be a bad table for Anisa to use when deciding what to do.

               2+2=4    2+2≠4
  Red-True       $50        0
  Red-False        0      $50
  Blue-True      $50      $50
  Blue-False       0        0

If she used that table, then it would look like Blue-True is the weakly dominant option. And that would mean that Blue-True is at least a rational choice, and perhaps the rational choice. Since Blue-True is not a rational choice, this table must be wrong. But if Anisa knows that the Battle of Agincourt was in 1415, and knowledge structures decision tables, then everything on this table is correct. So Anisa does not know that the Battle of Agincourt was in 1415.

4.2 Theoretical Knowledge

Knowledge structures proper practical deliberation. And because what things can be taken as structural assumptions differs between different pieces of practical reasoning, knowledge is sensitive to the interests of the inquirer. But this isn’t the only way in which knowledge is sensitive to interests. It is also sensitive to which purely theoretical questions the inquirer is taking an interest in.

I’ve already mentioned one way in which this has to be true. One kind of theoretical question is What should I do in this kind of situation? And if actually being in that kind of situation and having to decide what to do affected what one knows, then thinking abstractly about it should affect what one knows as well.

This kind of comparison, between practical deliberation about what to do, and theoretical deliberation about what one should do in just that situation, suggests a few things. It suggests that if practical interests affect knowledge, then so do theoretical interests. And it suggests that they should do so in more or less the same way. So it would be good to have a story that assigns to knowledge the role of structuring theoretical deliberation, in just the way that it structures practical deliberation. And that’s more or less the story I’m going to tell, though there are some complications along the way.

The story I like starts with an observation by Pamela Hieronymi.

A reason, I would insist, is an item in (actual or possible) reasoning. Reasoning is (actual or possible) thought directed at some question or conclusion. Thus, reasons must relate, in the first instance, not to states of mind but to questions or conclusions. (Hieronymi 2013, 115–16)

So to a first approximation the inquirer knows that p only if they can properly use p as a reason in “thought directed at the question” they are considering. That is, they can use p as a step in this reasoning. This way of putting things connects Hieronymi’s view of reasons to the idea present in both Hawthorne and Stanley (2008) and Fantl and McGrath (2009) that things known are reasons. And while I’m going to spend the rest of this section quibbling about whether this is quite right, it’s a good first step.

It’s enough to get us a fairly strong, but also fairly natural, kind of interest-relativity. In normal circumstances, Anisa knows that the Battle of Agincourt was in 1415. Now imagine not that she’s playing the red-blue game, but thinking about how to play it. And she wonders what to do if the red sentence says that two plus two is four, and the blue sentence says that the Battle of Agincourt was in 1415. It would be a mistake for her to reason as follows: Well, the Battle of Agincourt was in 1415, so playing Blue-True will get me $50, and nothing will get me more than $50, so I should play Blue-True. And it looks like the problem is the first step; she just can’t take this for granted in this very context.

This is a very obscure kind of question to wonder about. But there are more natural questions that lead to the same kind of result. Imagine that the day after reading the book, but before playing any weird game, Anisa starts wondering how likely it is that the book was correct. History books do make mistakes, and she wants to estimate how likely it is that this was a mistake. Again, it would be an error to reason as follows: Well, the Battle of Agincourt was in 1415, and that’s what the book says, so the book is certainly correct. And it looks like the problem is the first step; she just can’t take this for granted in this very context.

But it’s not like she can only take for granted in that context things that are certain. If that were true, she couldn’t even start inquiry into how likely it is the book got this wrong. She has to take a bunch of stuff as beyond the scope of present inquiry. She should not question that the book says that the battle was in 1415, or that there was a Battle of Agincourt, or that it is a widely written about (but also widely mythologised) battle, or that 1415 is before the invention of the printing press and this might affect the reliability of records, and so on. None of these things are things that she knows with Cartesian certainty. Indeed, some of them are probably all-things-considered less likely than that the Battle of Agincourt was in 1415. So it’s not like there is some threshold of likelihood, or of evidential support, such that inquiring into the likelihood of this statement implies that one can take for granted all and only things that clear this threshold. Rather, individual inquiries have their own logic, their own rules about what can and can’t be taken for granted.

There is an interesting analogy here with the rules of evidence in criminal trials. Whether some facts can be admitted at a trial depends in part on what the trial is. For example, some jurisdictions allow evidence obtained in a search that illegally violated X’s rights to be used in a trial of Y, though it could not be used when X was on trial. The picture I have of knowledge is similar; what one knows is what one can use in inquiry, and what one can use changes depending on the question under discussion. I’ll have much more to say about this in chapter 5.

So the starting point is that what’s known is what can be used. What I’m going to ultimately defend is a much more restricted thesis. Using what is known provides immunity from a particular criticism: that your starting point might not be true. I’m going to say a little bit about why this immunity claim is correct, and then say much more about why I prefer this way of talking about the role of knowledge in reasoning.

When one says that it is good to use what one knows in reasoning, there are two natural ways to interpret this. One is that using what one knows is all-things-considered good unless there is some independent reason to the contrary. The other is to say that there is a kind of badness in reasoning one avoids if one uses what one knows. I’m going to be defending the second kind of reading. That’s what I mean by saying that using what one knows provides immunity from a certain kind of criticism. The alternative requires that we can specify all the ways in which one might go wrong while using what one knows - those are the “independent reasons to the contrary”. And I don’t think that’s something we’re now in a position to do.

The justification for the immunity claim is quite straightforward. It’s incoherent to say of someone that they know that p, but they shouldn’t have used p in reasoning because it might be false. That’s Moore-paradoxical, if not outright contradictory. And if it is incoherent to say A, and X shouldn’t have done B because C, then A is a defence to the criticism of X that she shouldn’t have done B because C. So knowing that p is a defence to the criticism that one shouldn’t have used p in reasoning because it might be false.

Can we say something stronger? Can we say that knowing that p immunises the reasoner from all criticisms? Surely not; using irrelevant facts in inquiry is a legitimate criticism, even if the facts are known. But could we say something a bit more qualified, but still stronger than the immunity claim that I make?

One possibility would be to say that reasoning that starts with what is known is immune from all criticisms except those on a specified list. What might be on the list? I’ve already mentioned one thing - using irrelevant facts. Another thing might be that the reasoning itself is irrelevant to what one should be doing. If there is a drowning child in front of me, and I start idly musing about what the smallest prime greater than a million might be, I can be criticised for that reasoning. And that criticism can be sustained even if my mathematical reasoning is impeccable, and I get the correct answer. As it turns out, that’s 1,000,003.

Some facts are irrelevant to an inquiry. Others are relevant, but not part of the best path to resolving the inquiry. This can be grounds for criticism as well. It’s in some cases a mild criticism. If one follows an obvious path to solving a problem, when there is an alternative quicker way to solving the problem using a clever trick, it isn’t much of a complaint to say that the reasoning wasn’t maximally efficient. There are many quicker proofs of a lot of things Euclid proved, but this hardly detracts from the greatness of Euclid’s work. And, interestingly for what is to follow, using an inefficient means of inquiry does not prevent the inquiry ending in knowledge. After all, Euclid knew a lot of geometry, even though he rarely had maximally efficient proofs. There is a general lesson here - the fact that an inquirer was imperfect isn’t in itself a reason to deny that they end up with knowledge.

Inefficiency in inquiry is often not a big deal; other mistakes in inquiry are more serious. Sometimes the premises do not support the conclusion. It’s notoriously hard to say what is meant by support here. It seems to have some rough relationship to logical entailment. But it’s hard to say more than that. Sometimes premises support a conclusion they do not entail - that’s what happens in all inductive inquiry. Sometimes premises do not support a conclusion they do entail. If I reason, “3 is the first odd prime greater than 0, so 1,000,003 is the first odd prime greater than 1,000,000, and there are no even primes greater than 2, so 1,000,003 is the first prime greater than 1,000,000”, I reason badly. I can’t know on that basis that 1,000,003 is the first prime greater than 1,000,000. But the premise, that 3 is the first odd prime greater than 0, entails the next step. It just fails to support it, in the relevant sense.

But now we might suspect we’ve got enough criticisms on the table. Is there anything wrong about an inquiry where the following criteria are met?

  • It is worthwhile to conduct the inquiry.
  • It is sensible, and efficient enough, to choose these particular starting points.
  • The starting points are all things that are known to be true.
  • Every step after the starting point is supported by the steps immediately preceding it.

An inquiry with these features looks pretty good. And if there is really nothing to complain about in such an inquiry, then the following is true. An inquirer who starts an inquiry with what they know is immune from all criticisms except perhaps (a) that they shouldn’t be conducting this inquiry at all, (b) that their starting points are irrelevant (or perhaps inefficient) for reaching their conclusion, or (c) that their later steps are not supported by their earlier steps. While those are fairly non-trivial exception clauses, that’s still a fairly strong claim about the role of knowledge in inquiry.

Unfortunately, there are puzzle cases that suggest that even an inquiry with those four features may be flawed. I’ll just mention two such cases here. The point of these cases is that they suggest inquiry can be flawed in ever so many ways, and we should not be confident about putting together a complete list of the ways inquiry can go wrong.

First, there might be moral constraints on inquiry. Consider the following example, drawn from Basu and Schroeder (2019). Casey is at a fancy fundraising party, where the guests and the wait staff are all wearing suits. The person next to Casey is black, and Casey reasons as follows.

  1. Almost all the black people here are on the wait staff.
  2. The person next to me is black.
  3. So, the person next to me is on the wait staff.

That’s not valid, but one might argue that it’s a rational inductive inference. Alternatively, we can consider the case where Casey explicitly concludes that the person next to them is probably on the wait staff. And we can imagine that all of the following things are true. It is reasonable for Casey to think about whether the person in question is on the wait staff; it matters for the reasonable practical purpose of getting a drink. The wait staff are not wearing distinctive clothes, so seeing what observational characteristics correlate with being on the wait staff is a reasonable approach to that inquiry. Casey knows that the premises of the inquiry are true. And the premises support the conclusion of the inquiry.

And yet, it seems something goes badly wrong if Casey reasons this way. If the conclusion is false, it doesn’t seem like mere inductive bad luck. Arguably, there is a moral prohibition on reasoning in this way. And also arguably, this moral prohibition prevents Casey’s reasoning from providing knowledge.

Now one might well question just about every step of the last two paragraphs. It’s one thing to regret the lack of signals from attire as to who is on the wait staff; it’s another thing to jump to using skin colour as the best proxy. Given how many other things Casey can see about this person (such as how they are moving, what they are carrying, how they are engaging with others), it isn’t clear that the premises support the conclusion, even inductively. And even if all those things are not true, it might be that Casey can get knowledge this way; the inquiry might be morally wrong without having any epistemic flaws that prevent it generating knowledge.

Other examples of morally problematic inquiry suggest that there is no simple connection between an inquiry being morally bad, and it not generating knowledge. Many inquiries are morally problematic because they involve, or even constitute, privacy violations. But that doesn’t mean the privacy violator doesn’t come to know things about their victim. Indeed, part of the wrongness of the privacy violation is that they do come to know things about their victim.

Still, Casey can be criticised for inquiring in this way, even if the criticism does not imply that the inquiry produced no knowledge. And that suggests that there are possible criticisms of inquiries that satisfy the four bullet points listed earlier.

Another source of trouble comes from holistic constraints on reasoning. What I have in mind here are rules that allow for a natural resolution of the puzzles of “transmission failure” that Crispin Wright (2002) discusses. Start with one of Wright’s examples. Ada is walking by a park with a football pitch. It clearly isn’t just a practice; the players are in uniforms and occupying familiar positions on the pitch, there is a referee and a crowd, and so on. One of the players kicks the ball into the net, the referee points to the centre of the ground, and half the players and crowd celebrate. And Ada reasons as follows.

  1. The ball was kicked into the net, and no foul or violation was called.
  2. So, a goal was scored.
  3. So, a football match is being played, as opposed to, e.g., an ersatz match for the purposes of filming a movie.

As Wright points out, there is something wrong with the step from 2 to 3 here. And, as he also points out, it isn’t trivial to say just what it is that’s wrong. After all, 2 entails 3, and Ada knows that 2 entails 3. But it seems wrong to make just this inference.

Here’s one natural suggestion about what’s wrong.36 It’s too simple to be the full story, but it’s a start. The transition Ada makes from 1 to 2 presupposes 3. And 1 is her only evidence for 2. When those two conditions are met, it is wrong to infer from 2 to 3. More generally, there is something wrong with inferring a conclusion from an intermediate step in reasoning if that conclusion must be presupposed in order to even reach that intermediate step.

This is too rough as it stands to be a full theory of what is going on in cases like Ada’s. But the details aren’t important at this point. What is important is that there might be some kind of holistic constraint on reasoning. In some sense, Ada goes wrong in taking 2 for granted when she infers 3. But this doesn’t intuitively undermine her claim to know 2.

One important commonality between the last two cases, the moral encroachment and the transmission failure cases, is that the reasoning is not subject to the following kind of criticism. The reasoner can’t be criticised for taking as a premise something that might be false. Maybe there is something wrong with inferring something is probably true of an individual because it is true of most people in the group the individual is part of. But this restriction applies to the inference; not to the premises. We wouldn’t say to the person who made this inference, “You shouldn’t reason like that; it might not be true that most people in the group have this feature.” If we did say that, they would have an easy reply. And if Ada does do the problematic reasoning, it would be wrong to reply to her “You shouldn’t reason like this; it might not have been a goal.” She could simply, and correctly, say that it quite clearly was a goal.

This is the key to the correct rule linking knowledge and reasoning. If the inquirer uses as a step in reasoning something that she knows to be true, then she is immune to a certain kind of criticism. She is immune to the criticism that the premise she used might not be true.

What I started this section doing was saying that such a reasoner is immune to all criticism, then trying to work out exceptions to that principle. So an exception needed to be included to allow that the reasoner might be criticised for using an irrelevant reason. And the hope was that eventually a full list of such exceptions could be found. But this project seems wildly optimistic. I don’t know that we need to include further exceptions to handle the moral encroachment or transmission failure cases. But I also don’t know that we don’t need to include extra exceptions. And I have no idea, and no idea how to find out, whether we need yet more exceptions.

Rather than say knowledge provides immunity to criticism except in these cases, and then try to fill out the list of cases, it’s better to say that knowledge provides a particular kind of immunity. If the reasoner knows that the premise they use is true, they can’t be criticised on the grounds that it might be false. This isn’t a trivial claim. There were several examples involving Anisa where she could be criticised for using a premise that might be false. And all of those seemed like legitimate criticisms even though the premise was one she knew before starting the inquiry. But it says nothing about cases like the moral encroachment case, or the transmission failure case, or other cases like them that may be discovered.

So that’s the key principle I’ll be working with. One cannot be criticised for using what one knows in an inquiry on the grounds that one is using what might be false. That’s a bit of a mouthful, so sometimes I’ll simply say that one can rationally take for granted what one knows. I’ll have a lot more to say about this principle in the rest of this book, especially in chapter 9.

But I’ll spend the rest of this chapter talking about how this principle relates to the idea that knowledge is closed under competent deduction. There are interesting examples that seem to show that my principle leads to several distinct kinds of violations of closure. And I’ll argue that this is not right, and that for any plausible closure principle, adding the idea that one can take for granted what one knows does not yield a new objection to that principle.

The principle as stated is a little ambiguous, and to defend it I need to resolve that ambiguity. Surprisingly, I need to resolve it by taking the logically stronger disambiguation. Normally if a principle is ambiguous, and might lead to problems, the trick is to insist on the weaker reading. That’s not what’s about to happen.

When I say that an inquirer can rationally take for granted the things they know, this should be understood collectively. That’s to say, I endorse the collective and not (merely) the individual version of the immunity to criticism principles stated here.

Take for Granted (Individual)
If an inquirer knows some things, then each of those things is such that they can take that thing for granted in conducting the inquiry.
Take for Granted (Collective)
If an inquirer knows some things, then they can take all of those things for granted in conducting the inquiry.

I’ll come back to the difference between these principles, and why I need to endorse the collective version, in subsection 4.3.2. Until then I’ll be talking about single pieces of knowledge at a time.

4.3 Knowledge and Closure

Here are two very plausible principles about knowledge, both due to John Hawthorne (2005).

Single Premise Closure
If one knows p and competently deduces q from p, thereby coming to believe q, while retaining one’s knowledge that p, one comes to know that q. (Hawthorne 2005, 43)
Multiple Premise Closure
If one knows some premises and competently deduces q from those premises, thereby coming to believe q, while retaining one’s knowledge of those premises throughout, one comes to know that q. (Hawthorne 2005, 43)

Hawthorne endorses the first of these, but has reservations about the second for reasons related to the preface paradox. I’m similarly going to endorse the first and have reservations about the second. But my reasons don’t have anything to do with the preface paradox. I argued in “Can We Do Without Pragmatic Encroachment” (Weatherson 2005a) that concerns about the preface paradox are over-rated, and I think those arguments still hold up. But I have a slightly different qualification than Hawthorne does to Multiple Premise Closure, and I will discuss that more in section 4.3.2.

It is not trivial to prove that my version of IRT satisfies these closure conditions. One reason for this is that I have not stated a sufficient condition for knowledge. All that I have said is that knowledge is incompatible with a certain kind of caution. So in principle I cannot show that if some conditions obtain then someone knows something. What I can show is that introducing new conditions linking knowledge with relevant questions does not introduce new violations of the closure conditions.

4.3.1 Single Premise Closure

But it turns out that even showing this is not completely trivial. Imagine yet another version of the red-blue game.37 In this game, both of the sentences are claims about history that are well supported without being certain. And both of them are supported in the very same way. It turns out to be a little distracting to use concrete examples in this case, so just call the claims A and B. And imagine that the player read both of these claims in the same reliable but not infallible history book, and she knows the book is reliable but not infallible, and she aims to maximise her expected returns. Then all four of the following things are true about the game.

  1. Unconditionally, the player is indifferent between playing red-true and playing blue-true.
  2. Conditional on A, the player prefers red-true to blue-true, because red-true will certainly return $50 while blue-true is not completely certain to win the money.
  3. Conditional on B, the player prefers blue-true to red-true, because blue-true will certainly return $50 while red-true is not completely certain to win the money.
  4. Conditional on A∧B, the player is back to being indifferent between playing red-true and playing blue-true.

From 1, 2 and 3, it follows in my version of IRT that the player does not know either A or B. After all, conditionalising on either one of them changes her answer to a relevant question. The question being, Which option maximises my expected returns?, where this is understood as a mention-all question.
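To see how claims 1 through 4 hang together, here is a small sketch with made-up numbers. The joint credences are purely illustrative; all that matters for the argument is that the player’s evidence for A and for B is equally good, and that neither is certain conditional on the other.

```python
# red-true pays $50 iff A is true; blue-true pays $50 iff B is true.
# Illustrative joint credences over the four ways A and B could turn out,
# symmetric in A and B.
joint = {(True, True): 0.90, (True, False): 0.04,
         (False, True): 0.04, (False, False): 0.02}

def expected_return(bet_on, condition=None):
    """Expected return of betting 'true' on sentence 'A' or 'B', optionally
    after conditionalising on a proposition (given as a function of (a, b))."""
    worlds = {w: p for w, p in joint.items() if condition is None or condition(*w)}
    total = sum(worlds.values())
    return sum(p / total * (50 if (a if bet_on == "A" else b) else 0)
               for (a, b), p in worlds.items())

# 1. Unconditionally: the two bets have the same expected return.
print(expected_return("A"), expected_return("B"))
# 2. Conditional on A: red-true is worth a sure $50, blue-true a little less.
print(expected_return("A", lambda a, b: a), expected_return("B", lambda a, b: a))
# 3. Conditional on B: the situation is reversed.
print(expected_return("A", lambda a, b: b), expected_return("B", lambda a, b: b))
# 4. Conditional on A∧B: both bets are worth exactly $50 again.
print(expected_return("A", lambda a, b: a and b), expected_return("B", lambda a, b: a and b))
```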

But look what happens at point 4. Conditionalising on A∧B does not change the answer to that question. So, assuming there is no other reason that the player does not know A∧B, arguably she does know A∧B. And that would be absurd; how could she know a conjunction without knowing either conjunct?

Here is how I used to answer this question. Define a technical notion of interest. Say that a person is interested in a conditional question If p, Q? if they are interested, in the ordinary sense, in both the true-false question p? and the question Q?. And if conditionalising on a proposition changes (or should change) their answer to any question they are interested in in this technical sense, then they don’t know that proposition. This solves the problem because conditionalising on A∧B does change their answer to the question If A, which option maximises expected returns? on its mention-some reading. So even though 4 is correct, this does not pose a problem for closure.

But this is not an entirely satisfactory solution for two reasons. One is that it seems extremely artificial to say that someone is interested in these conditional questions that they have never even formulated. Another is that it is hard to motivate why we should care that conditionalisation changes (or should change) one’s answers to these artificial questions.

There was something right about the answer I used to give. It is that we should not just look at whether conditionalisation changes the answers a person gives to questions they are interested in. We should also look at whether it changes things ‘under the hood’; whether it changes how they get to that answer. The idea of my old theory was that looking at these artificial questions was a way to indirectly look under the hood. But it is not clear why we should look for these indirect approaches, rather than just looking at what is going on in the player’s mind.

So let’s look again at the two questions that are relevant. And this time, don’t think about what answer the player gives, but about how they get to that answer.

  5. Which option maximises expected returns?
  6. If A∧B, which option maximises expected returns?

On the most natural way to understand what the player does, there will be a step in her answer to 5 that has no parallel in her answer to 6.

She will note, and rely on, the fact that she has equally good evidence for A as for B. That is why each option is equally good by her lights. The equality of evidence really matters. If she had read that A in three books, but only one of those books added that B, then the two options would not have the same expected returns. She should check that nothing like this is going on; that the evidence really is equally balanced.

But nothing like this happens in answering 6. In that case, A∧B is stipulated to be given. So there is no question about how good the evidence for either is. When answering a question about what to do if a condition obtains, we don’t ask how good the evidence for the condition is. We just assume that it holds. So in answering 6, there is no step that acknowledges the equality of the evidence for both A and B.

So in fact the player does not answer the two questions the same way. She ends up with the same conclusion, but she gets there by a different means. And that is enough, I say, to make it a different answer. If she knew A∧B she could follow exactly the same steps in answering 5 and 6, but she cannot.

What should we say if she does follow the same steps? If this is irrational, nothing changes, since what matters for knowledge is which questions should be answered the same way, not which questions are answered the same way. (It does matter for belief, but that is not the current topic.) So I will assume that it is possible for the player to rationally answer both questions the same way. (I will have much more to say about why this is a coherent assumption in chapter 6.)

The way she should answer 6 is to take A∧B as given. And hence she will take either option, red-true or blue-true, as being equivalent to just taking $50. And she knows that is the best she can do in the game. So in answering question 6, she will take it as given that both of these options are maximally good.

By hypothesis, she is answering question 5 and question 6 the same way. So she will take it to be part of the setup of question 5 that both options return a sure $50. After all, that is part of the setup of question 6. But if she takes that as given, then conditionalising on either A or B does not change her expected returns. So now claims 2 and 3 are wrong; conditionalising on either conjunct won’t make a difference because she treats each conjunct as given.

And that is the totally general case. Assume that someone has competently deduced Y from X, and they know X. So they are entitled to answer the questions Q? and If X, Q? by the same method. Since the method for the latter takes X as given, so can the method for the former. So they can answer Q? taking X as given. What one can appropriately take as given is closed under competent deduction. (Why? Because in the answer to Q? that starts with X, you can just go on to derive Y, and then see that it is also a way to answer If Y, Q?.) So they can answer Q? taking Y as given. So they can answer Q? in the same way they answer If Y, Q?.

So assuming there is no other reason to deny Single Premise Closure, adding a clause about how one may answer questions does not give us a new reason to deny it.

4.3.2 Multiple Premise Closure

So that shows that IRT satisfies Single Premise Closure. The argument that it satisfies Multiple Premise Closure starts with the observation that Multiple Premise Closure more or less follows from Single Premise Closure plus a principle I’ll call And-Introduction Closure.

And-Introduction Closure
If one knows some propositions, and one competently infers their conjunction from those propositions, while retaining one’s knowledge of all those propositions, then one knows the conjunction.

Start with the standard assumption that a conclusion is entailed by some premises iff it is entailed by their conjunction. (It would take us way too far afield to investigate what happens if we dropped that assumption.) Given that assumption, in principle the only inferential rule one needs with multiple premises is And-Introduction. In practice, people do not generally reason via conjunctions in this way. Someone who knows A∨B, and who knows ¬A, does not first infer (A∨B) ∧ ¬A, and then infer B from that. They just infer B. But I think it’s a harmless enough idealisation to model them as first inferring the conjunction whenever they use multiple premises. So I will assume that if I can show that IRT does not cause problems for And-Introduction Closure, and I’ve already argued that it does not cause problems for Single Premise Closure, then it does not cause problems for Multiple Premise Closure.

Here is the quick argument that IRT does not cause problems for And-Introduction Closure.

  1. The key feature of IRT, the one that potentially causes problems for And-Introduction closure, is that one knows that p only if one can take p for granted in one’s current inquiry.
  2. If, in the course of an inquiry, one knows some premises, then one can take them for granted in that inquiry.
  3. If one can take some premises for granted in an inquiry, then one can take their conjunction for granted in that inquiry.
  4. So, there is no IRT-based reason that And-Introduction Closure fails.

Premise 1 is just a restatement of my version of IRT. And premise 3 should be uncontroversial. If one can take some premises for granted, then one (rationally) is ruling out possibilities where they are false. And to rule out possibilities where they are false just is to take their conjunction for granted. So those premises should be fairly uncontroversial. What is controversial is that the argument is sound, and, in particular, that premise 2 is correct.

The conclusion is not that Multiple Premise Closure holds. Maybe you think it fails for some independent reason, distinct from IRT. I don’t think the other reasons that have been offered in the literature are compelling. But I am not building the failure of these reasons into IRT. So the main assumption behind the argument is that if adding the ‘take for granted’ clause to our theory of knowledge does not lead to closure violations, then nothing else in the theory does. And the argument for that is basically that there isn’t much more to the theory. So I think the argument is sound.

But it might look like the argument must be wrong. After all, it is easy to cook up cases where it looks like IRT leads to a closure failure. Here is one such example. It is another version of the red-blue game. In this version, the red sentence is, once again, Two plus two equals four. And the blue sentence is a conjunction A and B, where both A and B express historical facts that the player has excellent, but not perfect, evidence for.38 Now the following four claims all seem true.

  1. Unconditionally, the only rational play is Red-True.
  2. Conditional on A, the only rational play is Red-True. Even given A, playing Blue-True requires betting that B is true, and that’s a pointless risk to run when playing Red-True only requires that two and two make four.
  3. Conditional on B, the only rational play is Red-True. Even given B, playing Blue-True requires betting that A is true, and that’s a pointless risk to run when playing Red-True only requires that two and two make four.
  4. Conditional on A∧B, Blue-True is rationally permissible, and arguably rationally mandatory, since it weakly dominates Red-True.
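As with the earlier game, a small sketch with illustrative numbers may make the four claims vivid; nothing turns on the particular credences, only on each conjunct being very probable but neither being certain. Here Red-True pays a sure $50, since two and two certainly make four, while Blue-True pays $50 only if the conjunction A∧B is true.

```python
# Illustrative joint credences over A and B.
joint = {(True, True): 0.90, (True, False): 0.04,
         (False, True): 0.04, (False, False): 0.02}

def expected_blue_true(condition=None):
    """Expected return of Blue-True, optionally after conditionalising."""
    worlds = {w: p for w, p in joint.items() if condition is None or condition(*w)}
    total = sum(worlds.values())
    return sum(p / total * (50 if (a and b) else 0) for (a, b), p in worlds.items())

# Red-True is worth a sure $50 whatever we conditionalise on, so compare Blue-True to 50.
print(expected_blue_true())                        # 1. unconditionally below 50
print(expected_blue_true(lambda a, b: a))          # 2. conditional on A, still below 50
print(expected_blue_true(lambda a, b: b))          # 3. conditional on B, still below 50
print(expected_blue_true(lambda a, b: a and b))    # 4. conditional on A∧B, exactly 50
```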

So conditionalising on either one of A or B doesn’t change anything, but conditionalising on A∧B does change how the player answers a question. So it looks like in this case the player might know A, know B, and for all I’ve said be fully aware that these two things entail A∧B, but not know A∧B. So what’s happened? How is this not a counterexample to premise 2?

The key thing to note is that when the player is choosing what to do, the following things are all true about them.

  • They can take A for granted. That is, they are rationally permitted to take A for granted in resolving their inquiry about what to do.
  • Similarly, they can take B for granted.
  • But they cannot both take A for granted and take B for granted. If both those things are taken for granted, then they can rationally infer that Blue-True will have a maximal payout, and hence that it is a rational play. And they cannot infer that.

It is cases like this one that required the clarification that I made at the end of section 4.2. The player here cannot take both of A and B for granted. And so they don’t know both those things. So this is not a case where they know A, know B, and don’t know A∧B. Since they cannot take both A and B for granted, they do not know both of those things.

The picture I’m presenting here is similar to the picture Thomas Kroedel (2012) offers as a solution to the lottery paradox.39 He argues that we can solve the lottery paradox if we take justification to be a kind of permissibility, not a kind of obligation. And just as we can have individual permissions that don’t combine into a collective permission, we can have individually justified beliefs that are such that we can’t justifiably believe each of them. This isn’t exactly how I’d put it. For one thing, I’m talking about knowledge not justification. For another, it’s not that knowledge is a species of permission, as much as it behaves like permission in certain contexts, and those are just the contexts where counterexamples to And-Introduction Closure arise. But these are minor points of difference; I’m agreeing in large part with his picture.

And thinking of things the way Kroedel suggests helps say something positive about what is going on in this game. So far I’ve said something negative - the player does not know both that A and that B. And that’s enough to show that the case is not a counterexample to And-Introduction Closure. A counterexample would, after all, have to be a case where the player knows both A and B. But saying what’s not the case is not a helpful way to say what is the case. To say something more positive, it helps to think about other cases where permissions do not agglomerate. To that end, I’ll talk through one case involving professional norms.

Professor Paresseux is, like most academics, in a situation where professional morality requires he do his fair share, but is fairly open about which tasks will constitute doing his fair share. Right now he has two requests for work, R1 and R2, and while he is not obliged to do both, he is obliged to do at least one. So he may turn down R1, and he may turn down R2, but he may not turn down both. So as not to keep the reader in suspense, let’s say up front that he is going to turn down both. Our question will be, what exactly does Professor Paresseux do that’s wrong?

To make this a little more concrete, and a little more complicated, I want to add two features to the case. First, accepting R1 would be better than accepting R2. He is uniquely well placed to do R1, and it would create more work for others if he turns it down. (As, indeed, he will.) But the norms governing Professor Paresseux are not maximising norms, and he does not violate them if he accepts R2 and rejects R1. Second, Professor Paresseux first turns down R1, let’s say in the morning, and then later that day, let’s say after a hearty lunch, turns down R2. Given that, there are three models we can have for the case, all of which have some plausibility.

The first model says that he was wrong to turn down R1. Here’s a little argument for that, using language that seems natural. He should have accepted one of the requests. And since he was well placed to perform R1, it’s also true that if he did one of them, it should have been R1. So he should have accepted R1, and turning it down was the mistake. Oddly, it turns out that his later decision to turn down R2 is what made it true that he did the wrong thing in turning down R1, but that’s just an odd feature of the case.

The second model says that odd feature is intolerably odd. It says he was wrong to turn down R2. Here’s a little argument for that. At lunchtime, he hadn’t done anything wrong. True, he had turned down R1, but he had moral permission to do that. It was only after lunch that he made it the case that he violated a norm. So the violation must have been after lunch. And so the violation was in turning down R2.

A third model says that both of these arguments are inconclusive. What’s really true is simply that Professor Paresseux should not have turned down both requests. Which one individually was wrong? That, says the third model, is indeterminate. One of them must be, since he could not permissibly turn down both. But there is no fact of the matter about which it is.

If I had to choose, I would say that the third is the most plausible model. The arguments for the first two models are not terrible - indeed I think both are plausible models - but the arguments are equally compelling, and incompatible. So I suspect neither is entirely right. The third model, which says both of them are partially right - there is something not quite ok about both refusals - seems to better fit the scenario. But what I more strongly think is that each of these models is more plausible than either of the following two.

The fourth model is that there is a strong kind of agglomeration failure. It is determinately true that Professor Paresseux acted permissibly in turning down R1, and it is determinately true that he acted permissibly in turning down R2, but overall he acted impermissibly. It’s true that in the abstract Professor Paresseux could have turned down each one. But the particular context he is in - where these are his options for fulfilling his duty to do his share of the work, and he does neither - is not one in which he can (determinately) avail himself of both of these permissions.

The fifth model says that since he had to do his share and did not, and both refusals are ways of not doing his share, both of them are impermissible. This seems like overkill. It is much more intuitive that Professor Paresseux has done one wrong thing than that he has done two wrong things.

I hope I haven’t traumatised too many readers with tales of people shirking professional responsibilities, because having Professor Paresseux’s example on the table helps us lay out the options for what to say about Player. Player plays the version of the Red-Blue game I just described, where the blue sentence is the conjunction of two plausible (and true) claims from a well-regarded history book he just read, and the red sentence is that two plus two is four. Player looks at the rules, infers via his historical knowledge that playing Blue-True will have a maximal return, and so plays Blue-True. I think that this play is irrational; and if Player knew the conjunction, it would be rational; so Player does not know the conjunction. But what do we say about Player’s knowledge of each conjunct? It turns out that there are five somewhat natural options that correspond to the five models I offered about Professor Paresseux. I’ll simply list them here, and then sketch, after the list, how the payoff reasoning behind the Blue-True play might go.

  1. Player knows the conjunct for which he has better evidence, and does not know the conjunct for which he has less good evidence. It was impermissible to take for granted the thing that was less well supported. This parallels the idea that Professor Paresseux did something wrong in turning down the request he was better placed to fulfil.
  2. Player knows the conjunct that he first took for granted, and not the conjunct that he took for granted second. When he first took one of the conjuncts for granted, that was a permissible mental act, but given that he had done it, it was impermissible to take the second for granted. This parallels the idea that whichever request Professor Paresseux turns down second is the impermissible turn-down, because that is when he comes to be in violation of his duty.
  3. It is indeterminate which conjunct Player knows. He doesn’t know both, because if he did then he could take both for granted, and he cannot take both for granted: given both conjuncts, Blue-True would be a rational play, and it is not. So he must fail to know at least one of them, but there is no reason to say it is this one rather than that one, so it is indeterminate which he doesn’t know. This parallels the indeterminacy solution to Professor Paresseux’s puzzle.
  4. Player does know both conjuncts, since knowledge requires permissible taking for granted, and each of his takings for granted is individually permissible. But he doesn’t know the conjunction, and so And-Introduction Closure fails.
  5. Player does not know either conjunct.
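To make the reasoning behind the Blue-True play vivid, here is a minimal sketch of how the expected-value calculation might go. The payoffs and credences below are my own hypothetical numbers, not ones specified in the text of the game; the point is purely structural. If Player takes the conjunction as given, Blue-True looks like the maximal play; but if Player instead works with reasonable credences in each conjunct, the certain red play can come out ahead.

```python
# A minimal sketch, with hypothetical numbers, of why Blue-True looks best
# when the conjunction is taken as given, yet can be the worse play when
# computed with Player's actual credences. None of these figures come from
# the text; they are illustrative assumptions only.

RED_PAYOFF = 50    # hypothetical: Red-True pays $50 if the red sentence is true
BLUE_PAYOFF = 52   # hypothetical: Blue-True pays $52 if the blue sentence is true

# The red sentence ("two plus two is four") is certain.
prob_red = 1.0

# The blue sentence is a conjunction of two historical claims; suppose Player
# has credence 0.95 in each, and treats them as independent.
prob_conjunct_1 = 0.95
prob_conjunct_2 = 0.95
prob_blue = prob_conjunct_1 * prob_conjunct_2   # 0.9025

# Taking the conjunction as given: Blue-True pays 52 > 50, so it looks maximal.
value_blue_given_conjunction = BLUE_PAYOFF

# Using Player's credences instead: the ordering flips.
ev_blue_true = prob_blue * BLUE_PAYOFF   # about 46.93
ev_red_true = prob_red * RED_PAYOFF      # exactly 50

print(f"Given the conjunction: Blue-True = {value_blue_given_conjunction}, Red-True = {RED_PAYOFF}")
print(f"With credences:        Blue-True = {ev_blue_true:.2f}, Red-True = {ev_red_true:.2f}")
```

On these made-up numbers, the play that maximises return conditional on the conjunction is exactly the play that Player’s credences say is worse; that is the structural feature the five options above are responding to.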

The fifth model seems like the least plausible. Somewhat unfortunately, it is also the model I defended (or at least committed myself to) in “Can We Do Without Pragmatic Encroachment”. There I said that knowledge requires that conditionalising on the known doesn’t change any answers to interesting questions, and that any question conditional on an interesting proposition is itself interesting. So the questions What should I play given the first conjunct is true? and What should I play given the second conjunct is true? are both interesting (in this technical sense of ‘interesting’). And inquiring into the first question is incompatible with knowing the second conjunct, while inquiring into the second question is incompatible with knowing the first conjunct. This was a fun way out of the problem, but it was also overkill. Player loses one bit of knowledge, not two, so I don’t think this is right.

Which of the other four models is correct? I think the fourth, which violates And-Introduction Closure, is the least plausible. That’s largely because it violates And-Introduction Closure. But the other three are all plausible, and are all consistent with And-Introduction Closure. (And note that all five are consistent with IRT. IRT itself says very little about this puzzle.) My preferred version of IRT says that the common case is the third - usually in cases like this it is indeterminate what is known.

There are mix-and-match options available. Perhaps if Player’s evidence for the first conjunct is (much) stronger than their evidence for the second conjunct, and it was the first one that they took for granted in reasoning, then they (determinately) know the first but not the second conjunct. I don’t need to take a stance on whether cases like this ever arise to defend And-Introduction Closure. That’s because all I need is that for any case like this, one of the first three models is right. And that can be true even if it is different models in different cases.

4.4 Summary

Putting all that together, IRT is consistent with Single Premise Closure and with And-Introduction Closure. Assuming that it is a harmless idealisation to treat anyone who uses multiple premises in reasoning as reasoning from the conjunction of those premises, it follows that IRT is consistent with Multiple Premise Closure.

But this isn’t quite the end of the story. Even if the arguments of the last two sections work, what they show is that there must be some way to explain away any apparent conflict between IRT and closure principles. The arguments do not, on their own, tell us what that explanation will look like, or whether it will have unacceptable consequences. And without such an explanation, we might be sceptical of the arguments of this chapter, and indeed of IRT itself. So I’ll come back several times to issues about closure. In chapter 6, I’ll go over what IRT says about cases like Zweber’s, and Anderson and Hawthorne’s, more thoroughly.

Before I get to that though, it is time to say more about a notion that has done a lot of work so far but which has not been adequately investigated: inquiry.


  1. This section is based on §1.1 of my (2012).↩︎

  2. The cases I’ll discuss in sections 8.2 and 8.3 also raise problems for this proposal. And the proposal is very similar to the theory I’ll call IRT-CP in chapter 6, a theory that has trouble dealing with situations involving close calls.↩︎

  3. This is far from an original suggestion. See Weisberg (2010) for discussion of it, and of related proposals, and for more discussion of the literature on Wright’s examples.↩︎

  4. This game will resemble the examples that Zweber (2016) and Anderson and Hawthorne (2019b) use to raise doubts about whether pragmatic theories like mine really do endorse single premise closure.↩︎

  5. If you want to make this more concrete, pick a random history book off the shelf and choose two claims that are both reasonably specific - so there could easily be a mistake about the details - and not something that was independently warranted.↩︎

  6. Different writers take different things to be the lottery paradox. In all cases, they concern what kind of non-probabilistic attitude an ideal agent would take towards the proposition that a particular ticket in a large, fair lottery will lose. It seems unintuitive to say that they will believe this, since the ticket might win. And that belief would lead to an inconsistency, since they would believe of every ticket that it will not win, but also believe that some ticket will win. But if you say it is not belief, you seem to get either scepticism, or the view that the ideal agent can believe p, and not believe q, even though they think q is more probable than p. Which of the four problems I just mentioned is most salient to a writer tends to depend on their background commitments, but most people defend views on which at least one of the problems is genuinely problematic.↩︎