5 Inquiry
The next three chapters are primarily defensive; they are responding to the three objections to IRT that seem to me most serious. But they aren’t just defensive. I’m not just saying why the theory from the chapters to date is immune to these arguments. I’m also developing the theory. That’s especially true in this chapter, which is why it is first. So what are these objections?
The first is what I’ll call the objection from double checking. As Jessica Brown (2008) argued, there are plenty of cases where intuitively a person knows that p, but should check whether p is true. This seems to be a problem for IRT, since it is motivated by the thought that what’s known is an appropriate starting point in inquiry. And intuitively it’s very weird to have an inquiry into p, when the inquirer is in a position to simply say p, therefore p. I used to think that in these cases the defender of IRT would have to either say that they are not really cases of knowledge, or not really cases of appropriate inquiry. And I tried both options at various times, without much success. I now think the objection should be addressed head on. It is possible to properly conduct an inquiry into p, even when one knows that p, and even when knowledge provides appropriate starting points for inquiry. That’s because it is often appropriate to deliberately restrict oneself in inquiry, and use fewer resources than are otherwise available. The aim of this chapter is to defend the claims made in the last two sentences, and to show how they provide a response to the objection from double checking.
The second is what I’ll call the objection from close calls. As Alex Zweber (2016) and, separately, Charity Anderson and John Hawthorne (2019) showed, some simple versions of IRT say implausible things about cases where a person is choosing between very similar options. Now it turns out the version of IRT that I had developed in previous work does not say the particular implausible thing they were accusing IRT of. On the other hand, the views I had developed in those works did say something just as implausible, perhaps even more implausible, about their cases. They had argued that IRT would lead to closure failures in these cases. I had designed a version of IRT that couldn’t possibly have closure failures, so when I first saw these arguments I thought they couldn’t possibly apply to my version of IRT. Unfortunately, the theory avoids closure failures by being implausibly sceptical, and that’s still bad. What I’m going to argue is that the problem their cases raise is not due to IRT, which is correct, but to the background assumption that choosers should maximise expected utility. And my response is going to be that in the cases they describe, choosers should not maximise expected utility. That might sound like an absurdly radical view, since expected utility theory is at the heart of all contemporary decision theory. But expected utility theory has fairly implausible things to say about close call cases. And a better theory, one that takes account of deliberation costs, is both more plausible, and consistent with IRT. I’ll say much more about this in chapter 6.
The third is what I’ll call the objection from abominable conjunctions. This is the IRT-equivalent of the blank stare objection to modal realism. Many people find it simply implausible that knowledge could depend on something like interests, which are not relevant to the truth of what is purportedly known. And the defender of IRT owes a reply to this widespread feeling. Part of my reply came back in chapter 1. I think this feeling is a result of being in a very strange place in the history of epistemology, where the focus is on fallibilist, interest-invariant, concepts. But we can do better than that. It is hard to articulate the intuition behind the unhappiness with IRT without lapsing into the JTB theory of knowledge. And most plausible solutions to the problems with the JTB theory end up introducing kinds of interest-relativity for independent reasons. I’ll go over these responses in chapter 7.
So those are the three objections I’m going to spend a lot of time on. There are three other classes of objection I’m not going to spend much time on.
The first class are objections to IRT that assume that knowledge changes when and only when one is in a ‘high stakes’ situation. Since I don’t assume that, those objections don’t raise problems for my version of IRT.
The second class are objections to IRT that assume that some parts of epistemology are interest-invariant, while some are interest-relative. I used to endorse such a theory, but I don’t any more. This book defends a global interest-relativism where knowledge, belief, rationality and evidence are all interest-relative (in different ways). So these objections don’t raise problems for my version of IRT either.
The third class are objections to IRT that only apply to versions of IRT that add on an opposition to contextualism or relativism. With this addition, IRT becomes what has been called interest-relative invariantism, or IRI. While I’ve defended that in the past, I’m not going to defend it here. The thesis of this book is that knowledge is interest-relative. If you want to understand the word ‘knowledge’ in the previous sentence in a contextualist or relativist way, go right ahead. Whatever metasemantic theory you have about the kind of words ‘knows’ and ‘knowledge’ are, I will be willing to defend the claim that knowledge is interest-relative.
5.1 Starting and Settling
At the heart of the influential picture of inquiry developed by Jane Friedman (2017, 2019b, 2019a, 2020) is the view that humans are capable of a number of distinctive attitudes. To be inquiring into some question, she argues, is to have a questioning attitude towards that question. That’s to say, she does not identify inquiry with particular actions, or at least with particular bodily movements. An actor might mimic the movements an inquirer makes without actually inquiring; a genuine inquirer might be sitting in an armchair quietly synthesizing their evidence. So particular movements are neither sufficient nor necessary for real inquiry. Rather, inquiry is a state of mind, a questioning state of mind.
The contrast to having a questioning attitude is having a settled attitude.1 Friedman holds that to believe something is to treat the question of whether it is true as being affirmatively settled, and I’m adopting the same position here. This attitude is deeply related to inquiry. Typically things are settled as the result of inquiry. Also typically, one does not inquire into something one has settled. Friedman holds a further claim: if one does inquire into something one has settled, this is a kind of mistake. It is incoherent to both have a questioning and a settled attitude towards the same question. I’m going to disagree with this further claim, while mostly adopting the broad picture she develops.
1 These are contrasts, but they don’t exhaust the space. One might not have an attitude to a question. And one might neither treat a question as settled nor inquire into it, because one treats the question as unworthy of effort, or impossible to make progress on.
The main difference between her picture and the picture of inquiry I’m using concerns where beliefs go in inquiry. I think that treating something as settled is most fundamentally about willingness to use it as the beginning of a new inquiry. The essential feature of belief is that it starts inquiry, not that it ends inquiry. What makes an attitude a belief is not that inquiry into it is settled; it’s that it can be used in the process of settling open questions. I used to think that whether one identified beliefs with settled states, or with the inputs to inquiry, was only a difference of emphasis, and a pretty minor one at that. After all, beliefs are typically the outputs of one inquiry and then serve as inputs to another; whether one takes one or other of these roles to be more fundamental seems like a pretty esoteric question. But I’ve come to think that actually quite a bit turns on it. If you think beliefs are fundamentally the things that inquiries start with, then there is a little gap in the argument that one should not inquire into what one already believes.
That argument, the one to the conclusion that one should not inquire into what one already believes, seems pretty simple. Assume one believes that p and is inquiring into the question p?. Our theory is that beliefs are appropriate starting points for inquiry, so it looks like this one should end pretty quickly. One can just argue p, therefore p, and close the inquiry. If the inquiry stays open longer than that, one is doing it wrong.
And this looks like a pretty strong argument for a conclusion that a number of people have reached via different routes.2
2 These quotes were compiled by Elise Woodard (2020).
If one knows the answer to some question at some time then one ought not to be investigating that question, or inquiring into it further … at that time. (Friedman 2017, 131)
There is something to be said for the claim that the person who knows they have turned the coffee pot off should not be going back to check. (Hawthorne and Stanley 2008, 587)
Any such cases [of believing while inquiring] involve peculiarities (such as irrationality or fragmentation). (McGrath 2021, 482n37)
So how could that argument fail? It could fail if there are reasons for adopting constraints on an inquiry. If there are reasons to not use all the tools at our disposal, there could be cases where an inquiry into p gets started, and we have reasons not to just say p, therefore p. At the highest possible level of abstraction, this doesn’t sound very likely. It seems at first like there should be something like a principle of total evidence for inquiry, saying that you can use whatever tools, whatever evidence, you have to hand. Such a principle, however, turns out to be false.
To warm up to this, consider an analogy to legal inquiries. There we are all familiar with the idea that some evidence might be inadmissible in some inquiries. Now the reasons for this are typically not epistemic. It’s rather that we think the system as a whole will be more just if some kinds of evidence are excluded from some inquiries. And that looks a bit different to the situation where an individual inquirer is just trying to find what’s true. But we’ll see that the analogy here is not quite as bad as it first looks.
In the rest of this section, I’ll go over six kinds of cases where one can sensibly inquire into what one already knows. I don’t think any of these examples constitute knock-down proofs of the possibility of rational inquiry into what one knows, and for reasons I’ll get to later in the chapter, I don’t really need them to. It is helpful to see the range of cases where inquiry into what one knows is useful.
5.1.1 Sensitivity Chasing
Guido Melchior (2019) argues that the point of checking is to establish a sensitive belief in the checked proposition. To motivate this, think about the following case. Florian has just weighed out the coffee beans for his morning pot of coffee. Naturally he uses the best scales he has for this purpose; it’s important to get the coffee right. He starts wondering whether his scales have recently stopped being reliable. What does he do next? Here’s one thing he doesn’t do. He doesn’t look at the beans on the scale, note that the scale says 24g, note that he knows they are 24g (via that excellent scale), and conclude that the scale is still working. That’s no good at all; he has to use some other scale to check this one. This is like the Problem of Easy Knowledge (Cohen 2002), but note that it doesn’t rely on the scale being a source of basic knowledge. Florian might have lots of independent evidence that the scale is good; it’s from a good manufacturer and has been producing plausible results for a while. Still, if he wants to check it, he has to use something else. And here’s the part that seems most surprising to me. Add to the story that he has a backup scale, one that he thinks is pretty good but not as good as his best scale. It’s fine to use the backup scale to check the main scale, and not fine to use the scale to check itself. The best explanation for this is that checking requires sensitivity. Using the scale to test itself is a method that isn’t sensitive to whether the scale is working. Using some other scale, even a less reliable one, to check whether it is working, is at least somewhat sensitive. Checking is, at least in part, a matter of sensitivity chasing. One reason it is often good to check what one knows is that sensitivity chasing is often sensible.
Sensitivity chasing is a perfectly acceptable goal in inquiry. One might inquire into p for the purpose of making one’s belief in p more sensitive. Now assume, as most epistemologists believe, that one can know p even if one’s belief is insensitive in various ways. One can know p even if one would still believe p were p false.3 If one has insensitive knowledge, it might be worthwhile to inquire into what one knows with the aim of generating sensitive knowledge. Indeed, this seems like a primary aim of what we call checking. Inquiring into p by saying p therefore p will not increase one’s sensitivity to whether p is true. So it’s worthwhile to not allow that move in the inquiry, if the aim is to increase sensitivity.
3 One simple example from Saul Kripke (2011). I know that I do not falsely believe that I was born on the Galapagos Islands. But while this is knowledge, it is not a sensitive belief.
There are other examples that show the difference between knowing and checking. Slightly modifying an example from Frank Jackson (1987), imagine that someone wants to know what _The Age_ said was the result of last night’s game. One way to learn what The Age said would be to look up the result in The Guardian, and use one’s background knowledge that they both report the same (correct) result. That’s a way to come to know what The Age said. But it’s not a way to check what The Age said. And it’s not a way to check because had The Age said anything different, you wouldn’t have known. That’s a kind of insensitivity. It’s an insensitivity that’s consistent with knowledge; one can know what a newspaper says by knowing the truth and that it reports the truth. But it is one that is removed by proper checking. So checking aims for sensitivity that goes beyond belief, and beyond knowledge. And given that checking, i.e., chasing this kind of sensitivity, is rational, so is inquiring into what one knows.
5.1.2 Rules
It’s hard to always be perfectly rational. Sometimes it makes sense to not think too hard about things where getting the right answer would be quite literally more trouble than it’s worth. I’ll have much more to say about this point in chapter 6, where I make much of this insight from Frank Knight.
It is evident that the rational thing to do is to be irrational, where deliberation and estimation cost more than they are worth. (Knight 1921, 67fn1)
Knight is interested in the case where the rational thing to do is to not inquire when inquiry would have minimal gains. But there is another case that is more relevant here. Sometimes it is worth having a simple rule that says Always inquire in these situations, rather than having a meta-inquiry into whether inquiry is worthwhile right now. To make this a little less abstract, it might be worthwhile always checking that the door is locked when one closes it, even if one frequently knows that one has just locked the door. As Hawthorne and Srinivasan (2013) point out, given the non-luminosity of evidence and knowledge, a simple rule like this might do better than any other realistic rule.
Often, following rules about when to inquire will be part of one’s professional responsibilities. I presented an example like this in chapter 7 of Normative Externalism: an inspector who is sent to do a random check of an establishment he had checked just a few days before. He knows everything is working well; he just checked it! But it’s his job to check, and it’s good to have random spot checks on top of regular checks, so it’s good to run this inquiry. That’s true even though the inspector knows how it will end.
5.1.3 Understanding
There is a famous puzzle about moral testimony. Something seems off about a person who simply believes moral principles on the basis of testimony, even from a trusted testifier. It’s odd to convert to vegetarianism simply because someone you trust says that’s what morality requires. There is also a famous answer to this puzzle, due to Alison Hills (2009). (There are other answers too, including ones that deny the puzzle exists. But to avoid going down too many rabbit holes, I’m going to assume for now the answer Hills gives is correct.) Hills says that moral testimony can give us moral knowledge, like any kind of testimony can provide knowledge, but it can’t provide understanding. What’s weird about the person who becomes a vegetarian on testimonial grounds alone is that they can’t explain their actions, since they don’t know why they are acting this way.
Beyond moral testimony, there seem to be many everyday cases of knowledge without understanding. One can know that Franz Ferdinand was assassinated in Sarajevo on June 28, 1914, without knowing why that happened. Or, indeed, one can know why one part of that is true, e.g., why it was that Franz Ferdinand was assassinated, without knowing why he was assassinated in Sarajevo, or why it happened on June 28, 1914. Given those facts, it is possible to seek understanding of something that one already knows.
In many cases, but not all, the search for understanding will look like a somewhat different inquiry to the search for knowledge. If one wants to know why Franz Ferdinand was assassinated in Sarajevo, one will inquire into the role that city plays in the history of relations between Austria-Hungary and Serbia. That will be a different kind of inquiry to determining whether the assassination really happened. But in the moral case things aren’t this clear. Imagine again our person who hears from a trusted source that meat eating is wrong, but doesn’t understand why this is so. They should do some moral inquiry. And the inquiry will look, as far as I can see, very similar to the inquiry they would conduct if they were working out whether meat eating is wrong. That is, it will look just like an inquiry into whether meat eating is wrong.
I think the best way to systematise things here is to take appearances at face value. Even once one is convinced meat eating is in fact wrong, if one doesn’t know why it is, one will continue to inquire into the morality of meat eating. And this inquiry is justified by the aim of coming to understand the wrongness of meat eating.
5.1.4 Defragmentation
Recall Professor Paresseux from subsection 4.5.2. He’s told that the visiting speaker this week is his old graduate school colleague Professor Assidue. But he puts no effort into remembering this fact, and it slips from the front of his mind. The talk is approaching, and Paresseux wonders to himself, who’s talking to us this afternoon? So he Googles the department talk schedule, sees that it is Assidue, and then says to himself “Ah, I knew that, I saw the email the other day.”
It is very hard to fit the category of information that has ‘slipped one’s mind’ into familiar epistemological categories. I think we should say that Paresseux is correct, and he did indeed know the answer to his inquiry before he started looking. After all, he could have retrieved the information by simply thinking hard about what had happened this week. And the best explanation for why that’s possible is that he did still know that Professor Assidue would be the speaker. But I also think it made sense for him to conduct an inquiry into this thing that he knew. It’s much easier to Google something than to trawl one’s memory for the answer. More reliable too. So this looks like a sensible inquiry for him to have conducted.
Following Andy Egan (2008), I think we should think of this as a case where Paresseux’s mind is ‘fragmented’, in the sense of Lewis (1982) and Stalnaker (1984). There is a part that contains the information about who the speaker is. That part isn’t at the front of his attention, so he doesn’t act on it. Still, it is a part of him; he knows that stuff. Even so, it is better to conduct an inquiry, i.e., a Google search, than to rely on this knowledge. So it is rational to inquire into something one knows.
5.1.5 Public Reason
One unfortunate position an inquirer can find themselves in is knowing something is true, even understanding why it is true, and being unable to convince anyone of their result. At this point one needs more reasons, but where to find them? Often, the way to find them will be to do what anyone else would do if they were trying to find out if the thing itself were true. Here are two such examples, drawn from rather different parts of philosophy.
Michael Strevens (2020) argues that the effectiveness of science in the last 350 years is partially due to the fact that scientists have adopted an “iron rule”: only empirical evidence counts. There are any number of ways one might come to rationally believe a scientific theory other than on the basis of empirical evidence. It might follow from broadly metaphysical principles one holds (at least in the early modern sense of metaphysical), it might be more elegant than any other theory, it might promise to unify seemingly disparate phenomena. But if you want to convince the scientific community, meaning convince both the collective community and most of the scientists who make it up, you need data. So you go looking for data, even for theories you know are true on non-empirical grounds. Strevens thinks this is individually irrational, but collectively for the best. It’s irrational for any one person to have just one way to come to believe things. But by incentivising the search for data in this way, we’ve collectively created an institution that has taken the measure of the world in ways previously unimaginable. There is something else valuable about data: it’s available, at least in principle, to everyone. So even if you can’t recreate my metaphysical intuitions, you can rerun my experiments. The iron rule doesn’t just lead to more measurements being taken, it imposes a kind of public reason constraint on science. Only evidence that everyone can accept as evidence, and indeed that they could (at least in theory) create for themselves, counts.
This way of putting the point should remind us of an important strand in contemporary political philosophy, namely that political rules should satisfy a public reason constraint. As Jonathan Quong puts it:
Public reason requires that the moral or political rules that regulate our common life be, in some sense, justifiable or acceptable to all those persons over whom the rules purport to have authority. (Quong 2018)
Now as a matter of fact, we haven’t had as much uptake of this meta-rule in politics as in science. But we can imagine a society where there is, in practice, a kind of public reason constraint. If you want your favorite rule to be part of the regulation of society, you have to come up with a justification of it that satisfies this constraint. In such a society, there will be people who have idiosyncratic ideas for rules that would be good rules for the community, ideas that they don’t have public justifications for. In practice, the vast majority of these ideas will be bad ones. But some of them will not be. Indeed, a handful will even know that their ideas are good. Still, if this knowledge comes via idiosyncratic sources, they will need to come up with more public reasons if they want to see their rule implemented. And as I suggested in the previous subsection, the way to find reasons for a moral claim is generally to inquire into whether that claim is true. Or, at least, to act like that’s what one is doing.
5.1.6 Evidence Gathering
In section 9.6 I’m going to argue that having p as part of one’s evidence might license inductive inferences that are not licensed by a smaller evidence set that doesn’t include p, even if one knows p on the basis of that smaller set. If that’s right, evidence gathering could be epistemically useful even if one already knows the evidence to be gathered.
5.1.7 Possible Responses
If this were a paper dedicated to proving that it is rational to inquire into what one knows, at this stage I’d have to show that a philosopher who denies it is ever rational has no good story to tell about these six cases. And that would be a lot to show, since actually there is plenty that such a philosopher could say. They could deny that the inquiries are indeed rational. They could deny that the inquirers in question really do know the thing they are inquiring into, perhaps using IRT to back up that denial. They could deny that these are real inquiries, as opposed to some kind of ersatz inquiry. Or they could deny that this is really an inquiry into the very thing known, as opposed to an inquiry into some related proposition, like what the causal history of that thing was. And they wouldn’t even have to choose between these four; they could mix and match to deal with the putative counterexamples.
At the end of the day, I don’t think these responses will cover all the cases. But it would be a massive digression to defend that claim, and it isn’t necessary for what’s going to happen in the rest of this chapter. All I need is that there are people who very much look like they are conducting rational, genuine inquiries into things they already know. If there is a subtle way of explaining away that appearance, that won’t matter for the story that’s to come, since such subtleties will end up being good news for my side of the debate about IRT. The worry we’re building up to is that IRT has no good explanation of what’s happening in cases where someone seems to rationally, genuinely inquire into something they already know. If there are in fact no such cases, that can’t be a problem!
One reason for thinking that some of these cases will work is that there is a fairly general recipe for constructing the cases. It’s due to Elise Woodard (2020) and (independently) Arianna Falbo (2021). Start with the following two assumptions. First, inquiry is not just about collecting knowledge, but generally about improving one’s epistemic position. Second, given fallibilism, one can know p but have a sub-optimal epistemic position. So one can know p, but (rationally) want to improve one’s epistemic position with respect to p. And if one acts to address that want, one will be inquiring into what one knows, and doing so rationally. Given IRT, you should worry about whether every step in the last few sentences really does follow from the ones before it. But I suspect the general picture is right, especially, as Melchior (2019) stresses, in checks aimed at increasing sensitivity.
Looking ahead a little, the primary aim of the rest of the chapter will be to defuse some potential counterexamples to IRT that involve someone rationally inquiring into, and especially checking, what they know. And my response will be disjunctive. Either inquiry solely aims at knowledge, or it does not. If inquiry does solely aim at knowledge, appearances in these cases are deceiving, and the inquiry is not in fact rational. If, as I think, inquiry does not solely aim at knowledge, then the cases are not in fact counterexamples to IRT.
5.2 Using Knowledge in Inquiry
Sometimes an inquirer has reasons to deliberately hobble their own inquiry. They have reasons to conduct an inquiry with one hand tied behind their back. Perhaps those reasons come from the social norms of the enterprise they are engaged in, as Strevens suggests. Perhaps those reasons come from the fact that they are sensitivity chasing, as Melchior suggests, and only a restricted inquiry will increase sensitivity. Perhaps those reasons come from the fact that they are trying to follow rules, and the rules do not allow certain kinds of tools to be used. The unifying theme is that sometimes the inquirer wants not just to run an inquiry, but to run it in a particular way.
The core principle in my version of IRT is that someone who uses what they know in inquiry is immune to criticism on the grounds that what they are doing is epistemically risky. Equivalently, they are immune to criticism on the grounds that their premises might be false. That’s compatible with saying that someone can know p, and be properly criticised for using p in inquiry. I motivated that restriction in section 4.4 by looking at people whose use of p in inquiry can be criticised on relevance grounds. In this chapter we see several more reasons. Someone who has reasons to perform a restricted inquiry, especially someone whose aims can only be realised by conducting a properly restricted inquiry, can be criticised for overstepping those restrictions. That’s fine, and totally consistent with IRT, as long as we pay attention not just to whether someone is being criticised, but why they are being criticised.
It isn’t just my idiosyncratic version of IRT that escapes this criticism. Jeremy Fantl and Matthew McGrath defend a version of IRT that uses the following principle.
When you know a proposition p, no weaknesses in your epistemic position with respect to p—no weaknesses, that is, in your standing on any truth-relevant dimension with respect to p—stand in the way of p justifying you in having further beliefs. (Fantl and McGrath 2009, 64)
I’m going to come back in section 9.9 to why I don’t quite think that’s right. But my disagreement turns on a fairly small technical point; I’m following Fantl and McGrath’s lead much more than I’m diverging from them. And these examples of properly restricted inquiry show how they too can accept rational inquiry into what one already knows.
Consider a person who is sensitivity chasing; they know p but want to have a more sensitive belief that p. So they conduct an inquiry into p, and reason to themselves p, therefore p. This closes the inquiry. Something has gone wrong. It isn’t bad reasoning; one can’t go wrong with identity. And it isn’t that they use something they know as a premise; anything one knows can be used as a premise. It’s that they had an aim that could only be met by a restricted inquiry, and they violated those restrictions. That’s the incoherence here.
There is a way to read Fantl and McGrath’s principle so that this case is a problem for them, but I don’t think it’s the right reading. The sensitivity of one’s belief is, in their terms, part of the strength of one’s epistemic position. So if one’s belief was more sensitive, one wouldn’t have a reason to be chasing sensitivity. So in this case, you might think it’s weakness of epistemic position that’s relevant; the weakness of epistemic position explains why the inquiry is being conducted in the first place. But I don’t think that’s fair. The principle only talks about how inquiry should be conducted, not about whether the inquiry should be conducted. So Fantl and McGrath could say, and I think this is the right way to read what they do say, that knowledge is compatible with the weakness in one’s epistemic position explaining why an inquiry is in order. It’s just that knowledge is not compatible with weakness of epistemic position preventing the knowledge being used once the inquiry starts.
5.3 Independence
These reflections on the nature of inquiry help tidy up a loose end from Normative Externalism (Weatherson 2019). In that book I argued against David Christensen’s Independence principle, but I didn’t offer a fully satisfactory explanation for why the principle should seem plausible. Here’s the principle in question.
Independence: In evaluating the epistemic credentials of another’s expressed belief about P, in order to determine how (or whether) to modify my own belief about P, I should do so in a way that doesn’t rely on the reasoning behind my initial belief about P. (Christensen 2011, 1–2).
This is expressly stated as a principle about disagreement, but it is meant to apply to any kind of higher-order evidence. (This is made clear in “Formulating Independence” (Christensen 2019), which also includes some new thoughts about how Christensen now thinks the principle should be stated.) I argued that this couldn’t be right in general; it gives the wrong results in clear cases, and leads to regresses. But something like it does sound right. It sounds like there should be some kind of true claim in the vicinity. In Normative Externalism I hinted at an inquiry-theoretic proposal about what that nearby truth might be. (See, for example, the response to Littlejohn (2018), at the top of page 178.) But I never really spelled it out. Here’s what I now think the right thing to say is.4
4 The picture I’m about to give is really similar to the one laid out by Andy Egan (2008). We’re interested in different kinds of cases, but the idea that a cognitive system might work best by allowing one part to check on another using just the evidence the first part has endorsed is one I’m just taking from him. If I’d seen this connection when writing Normative Externalism I would have connected it to the discussion of Madisonian moral psychology in part I of that book.
Peer disagreement, or really any other kind of higher-order evidence, gives a thinker a reason to conduct an inquiry into whether their earlier thinking was correct. And not just that, it gives them reason to conduct an inquiry that is restricted in a particular way. The restriction is that they should not rely on the reasoning from their earlier thinking. Putting those two things together, we get that disagreement about p gives someone who believes p reason to inquire into p using a different approach, any different approach, from what they previously used.
Once we’ve got a principle about reasons, we could try formulating this as a defeasible rule. It’s plausible that one should adopt the defeasible rule of conducting such an inquiry whenever one sees a disagreement, or some other kind of potentially defeating higher-order evidence. And as long as one builds enough into the defeasibility clause, such a rule won’t be subject to the counterexamples I described, or the ones that have caused Christensen (2019) to have second thoughts about the right formulation of the rule. After all, every counterexample will naturally fall into the defeasibility clause.
Such a rule could be justified by the observation that it will probably be beneficial in the long run for people like us to adopt it. Double checking isn’t that hard. And it can have a lot of benefits in cases where it makes a difference, even if most of the time it doesn’t. Getting stuck in a bad epistemic picture can have devastating consequences; it’s good to step back from time to time to see whether that’s happening to us. And disagreements with peers are a natural trigger for that kind of inquiry. Those same benefits can explain why disagreement, or other kinds of higher-order evidence, give us reason to double check.
But why should one conduct a restricted inquiry here? Given what’s at stake, namely whether we’ve got ourselves into a bad epistemic state, shouldn’t we throw everything we have at the problem? If we should, that’s a problem for Independence, since it expressly bars the thinker from using some of the tools at their disposal. It requires them to not do the same kind of inquiry they did before, which presumably was the one they thought best suited to the problem. That’s a big restriction, and it needs some justification. I can offer two kinds of justification, not entirely distinct.
The point of having a rule like this, a rule like Double-check your reasoning when a peer disagrees, is to prevent us falling into epistemic states that are local but not global equilibria. The states we’re worried about are ones where any small change will make the epistemic state worse, but large changes will make things better. Picturesquely, we’ve reached the top of a small hill when we want to climb a mountain. We should be somewhere higher, but any step will be downhill. It’s good to not get stuck in places like this, and nudges from friends are a way out.
If we want to check whether we’re in such a bad situation, we want a test that is sensitive to whether we are. That is, we want a test that would say something different if we were in that situation to what it would say if we were doing well. (This is Melchior’s point about the aim of tests.) And just conducting the same inquiry we previously conducted will typically not be sensitive in this way. Or, more precisely, it will be sensitive to something like performance errors, but not competence errors. We need something more sensitive if the aim is to avoid getting stuck in local equilibria, and that requires setting aside the work we’ve previously done.
One of the reasons that local equilibria can be sticky is that we know our way around them well. We know all the ways in which one part of the picture we have supports the other parts. We typically don’t know how to think about other pictures so clearly. We don’t know, don’t see, the ways in which other pictures might ‘hang together’ as well as ours does. We are inevitably going to be biased towards our own ways of thinking. So it’s worthwhile to try to level the playing field, by looking at how things would seem if we didn’t have our own distinctive way of thinking.
None of this is to take back anything I said in Normative Externalism. Disagreement with a peer known to have the same evidence does not give someone a reason to reject a well-formed belief. It gives them a reason to double-check that belief. But, as I’ve been stressing all chapter, one can double-check one’s beliefs, and even one’s knowledge. And that is what should happen here.
Finally, thinking of disagreement as providing a reason to double check gives a nice explanation of one of the harder examples in Normative Externalism, the case of Efrosyni on page 222. She does a calculation, then double checks it by a different technique, then hears that a peer disagrees. What should she do now? I think typically she should do nothing. The disagreement gives her a reason to double check each calculation she did, but she’s already carried out that double check. This is, I think, the intuitively right result. If someone has already double checked their work, they should infer that someone who disagrees is wrong. Perhaps in some rare case they could get reason to double check the ‘combined’ inquiry, consisting of the initial inquiry plus the double check. But that’s rare; usually they should just point to the work they have already done.
With this picture of the relationship between knowledge, inquiry, and checking in place, it’s time (at last) to return to potential counterexamples to IRT.
5.4 Double Checking
Here is an example from Jessica Brown (2008):
A student is spending the day shadowing a surgeon. In the morning he observes her in clinic examining patient A who has a diseased left kidney. The decision is taken to remove it that afternoon. Later, the student observes the surgeon in theatre where patient A is lying anaesthetised on the operating table. The operation hasn’t started as the surgeon is consulting the patient’s notes. The student is puzzled and asks one of the nurses what’s going on:
Student: I don’t understand. Why is she looking at the patient’s records? She was in clinic with the patient this morning. Doesn’t she even know which kidney it is?

Nurse: Of course, she knows which kidney it is. But, imagine what it would be like if she removed the wrong kidney. She shouldn’t operate before checking the patient’s records.
I think this is a good inquiry, even though the surgeon knows which kidney is diseased.
I don’t have as snappy a story about her next example, Affair. I have several inconclusive thoughts about it.
- It strikes me as less compelling that the case is coherent
- It’s striking that it gets much less attention than Surgeon; I wonder if others share my suspicion (though maybe that’s just because Surgeon comes first)
- The knowledge attribution feels like loose talk, like “I knew there was a fire here”, or “I knew we’d lose the game”
- Or maybe it’s just an information possession sense of ‘knows’. We sometimes use that as well, and it’s very different to what the vast majority of epistemologists talk about. (Maybe not contextualists, maybe not Dretske.)
- In any case, I’m not committed to ordinary usage being good around here. I’m aiming to find a theoretically interesting notion that fills the roles knowledge should fill.
5.5 The Need to Inquire
So far I’ve mostly talked about inquiries that a person is actually conducting. But we should also think about the inquiries that they should conduct. Consider the following two abstractly described possibilities.
A person believes p for good reasons, and it is true, and there are no weird things happening that characterise typical gaps between rational true belief and knowledge. There is some action 𝜑 they are considering that will have mildly good consequences if p, and absolutely catastrophic consequences if ¬p. And one of the alternatives to 𝜑 is first checking whether p, which would be trivial, and then doing 𝜑 iff p. We’ve seen lots of these cases before, but here’s the new twist. The person absolutely does not care about the catastrophic consequences. They will all fall on people the person could not care less about. So they are planning to simply do 𝜑, for the good consequences. Since p is true, nothing bad will happen. Still, it seems something has gone wrong. We want to say that they’ve been reckless, that they’ve taken an immoral risk. But it isn’t risky to do something that you know won’t have bad consequences. So they do not know that p, and for similar reasons to why Anisa doesn’t know that p. Yet the version of IRT that I’ve given so far doesn’t say that they don’t know that p.
The second case has the same initial structure as the first. The person believes p for good reasons, it’s true, and there is no funny business going on - no fake barns or the like blocking knowledge. They are thinking about doing 𝜑. They know that if p is true, 𝜑 will have a small benefit. They also know that it would be completely trivial to verify whether p is true. They also in some sense know that if they do 𝜑, and p is false, it will be absolutely catastrophic. And they care about the catastrophe. But they’ve sort of forgotten this fact about 𝜑. It’s not that it has totally vanished from their mind. But they aren’t attending to it, and it doesn’t form any part of their deliberation when thinking about 𝜑. So they do 𝜑, nothing bad happens, and later when someone asks them whether they were worried about the possible catastrophe, they are shocked that they would do something so reckless. They are shocked, that is, that they forgot that it was important to confirm whether p was true before doing 𝜑. It feels, from the inside, like they got away with taking a terrible risk. But if they knew p, it should not seem like a risk, it should seem like rational action. (Just like they would think doing 𝜑 after checking whether p was rational action.) So this too should be a case where we say knowledge fails for practical reasons. (I’m going to come back to a version of this case in section 8.1, where it will be useful for distinguishing one of the few points where I disagree with the theory that Jeremy Fantl and Matthew McGrath (2002, 2009) endorse.)
The natural thing to say here is that in each case, the person should conduct an inquiry. They should check whether p is true. In that inquiry, they shouldn’t take p for granted. They shouldn’t take it for granted for a very particular reason, because it might be false. If they knew p, they could take it for granted, or, at least, if they couldn’t, it would be for some reason other than that p might be false. So they don’t know that p.
What these two types of case show is that knowledge is not just sensitive to what one is actually inquiring into; it is also sensitive to what one should be inquiring into. If one should inquire into Q, and, were one to inquire into Q, one shouldn’t take p for granted because it might be false, then one doesn’t know p.
This is a kind of moral encroachment in the sense of Basu and Schroeder (2019).5 What one knows might be sensitive to one’s moral obligations in inquiry. Imagine two people both take p for granted in making a decision that affects other people. This is mostly fine because p is true, and they had good reasons to take it for granted. Still, there was some risk to others, and they could have checked whether p was actually true before acting, but in each case they had other things they would rather be doing than checking p. What differs between the two people is what they would rather be doing. The first could have checked, but it would have taken them away from a rescue operation in progress; the second could have checked, but it would have taken them away from their social media feed. If the theory I’ve developed so far is correct, then the first knows that p, and the second does not, and the difference comes down to the differing moral importance of contributing to rescue operations and social media.
5 I discussed a famous example from Basu and Schroeder’s paper back in section 4.4.
It’s worth recalling here that the methodology I’m using in this book is perhaps a little different to a common methodology in this area. I don’t think that if you fill out the two cases from the last paragraph in full detail, it will be intuitively obvious that one person knows and the other doesn’t, and that’s evidence for IRT. Rather, I think that it’s plausible that one isn’t being reckless by acting on what one knows, and this principle, combined with anti-sceptical principles and judgments about which acts are indeed reckless, leads to IRT. As always, these cases allow for four broad classes of response: the sceptic who denies there is knowledge even in the low-stakes case; the epistemicist who denies the intuitions about which actions are reckless; the orthodox theorist who says that acting on what one knows can be reckless; and the pragmatist, who accepts both the intuitions about which acts are reckless and how knowledge connects to recklessness, and infers that knowledge is sensitive to pragmatic, and in this case moral, factors.
5.6 Inquiry Realism
- There’s a fact of the matter about what people are inquiring about, and what they are doing with that inquiry.
- This is independent of what their credences are
- And there are facts about what they should inquire about; some of these are given by consequentialist considerations, some by (broadly) deontological
- They may be inquiring into multiple things; this is ok, they lose a lot of information, but they retain probabilistic information
- So this makes the view sit uneasily in the current dualism/reductionism debate
- I do think belief is reducible to credences plus subjects of inquiry
- But I don’t think it is reducible to credences alone
- Is this dualism or not? I don’t really care.