9  Circles, Epistemic and Benign

9.1 Normative Externalism and Circularity

Some of the views that I’m opposing are motivated by anti-circularity considerations. Consider, for instance, the principle David Christensen calls Independence, which is a version of the bracketing principle that was the focus of the previous section. I’m quoting it here with the argument for it that immediately follows.

Independence

In evaluating the epistemic credentials of another’s expressed belief about P, in order to determine how (or whether) to modify my own belief about P, I should do so in a way that doesn’t rely on the reasoning behind my initial belief about P.

The motivation behind the principle is obvious: it’s intended to prevent blatantly question-begging dismissals of the evidence provided by the disagreement of others. It attempts to capture what would be wrong with a P-believer saying, e.g., “Well, so-and-so disagrees with me about P. But since P is true, she’s wrong about P. So however reliable she may generally be, I needn’t take her disagreement about P as any reason at all to question my belief.”  (Christensen 2011, 1–2)

To my eyes, this argument seems to involve a category mistake. Moves in a dialectic can be question-begging or not. But here Christensen seems to want to put restrictions on rational judgments on the grounds that the alternative would be question-begging. That seems like the wrong way to get the desired end. If we want to stop “blatantly question-begging dismissals”, we can just remind people not to be rude.

I think the problem Christensen is highlighting is not to do with question-begging, but to do with circularity. The problem is that if we violate Independence, we can use our reasoning to conclude that our reasoning is reliable, and that’s circular. Or, to be more accurate, it has a whiff of circularity about it. Trying to turn this into an argument for Independence, though, will be difficult.

Part of the difficulty is that it isn’t easy to say exactly what the circularity involved is. Consider the following little example, where Chiyoko and Aspasia are discussing arithmetic. They know that exactly one of them has taken a drug that makes people bad at simple arithmetic. Chiyoko does some sums in her head, listens to Aspasia, and reasons as follows.

  1. 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
  2. Aspasia believes that 2+2=5, and 3+3=7, and 4+5=8, and 7+9=15, while I believe that 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
  3. So, she got those four sums wrong, and I got them right.
  4. It is likely that I would get at least one of them wrong if I’d taken the drug, and unlikely that she would get all four wrong unless she’d taken the drug.
  5. So, probably, I have not taken the drug, and she has.
  6. So I should not modify my beliefs about arithmetic in light of what Aspasia says; she has taken a drug that makes her unreliable.
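To illustrate the probabilistic reasoning in steps 4 and 5, here are some purely hypothetical numbers. Suppose the drug gives its taker only a 50% chance of getting any one simple sum right, while someone who has not taken it gets each sum right 99% of the time. Then the chance of the drugged party getting all four sums right is about 6%, and the chance of the undrugged party getting all four wrong is well under one in a million. Given that exactly one of them took the drug, Chiyoko’s evidence at step 3 makes it overwhelmingly likely that Aspasia is the one who took it. Nothing below turns on these particular numbers; they just make vivid how step 5 is meant to follow from steps 3 and 4.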

It isn’t clear to me just which step is meant to be circular. If Chiyoko had reasoned as follows, I could see how we might take her reasoning to be circular.

  1. It seems to me that 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
  2. It seems to Aspasia that 2+2=5, and 3+3=7, and 4+5=8, and 7+9=15.
  3. From 1, it’s true that 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
  4. From 1, 2 and 3, my arithmetic seemings are reliable, and Aspasia’s are not.
  5. So, probably, I have not taken the drug, and she has.
  6. So I should not modify my beliefs about arithmetic in light of what Aspasia says; she has taken a drug that makes her unreliable.

If Chiyoko reasons this way, the only reason for thinking she is right and Aspasia is wrong is her own judgment, which is exactly what is at issue in 6. But that isn’t at all how people usually reason. Nor is it a sensible rational reconstruction of their reasoning. Rather, the first version of the inference is much more like the way normal human beings do, and should, reason. And in this case the symmetry of the dispute between Chiyoko and Aspasia is broken by a fact recorded at line 1, namely that 2 plus 2 really is 4, 3 plus 3 really is 6, and so on. And while Chiyoko uses her mathematical competence to come to know that fact, she doesn’t learn it by reasoning about her mathematical competence. If she did, it would be a posteriori knowledge, whereas in fact it is a priori knowledge. So if there is some circular reasoning going on in the first inference, the circularity is fairly subtle, and it won’t be easy to say just what it is.1

  • 1 David James Barnett (2014) also notes that it is important to distinguish the case where Chiyoko uses her mental faculties from the case where she reasons about them. He thinks, and I agree, that once we attend to this distinction, it is far from clear that there is anything problematically circular about what Chiyoko does.

  • 2 Cases with the same structure as Dharmottara’s became the focus of some discussion in the Anglophone philosophical tradition after they were independently discovered by Edmund Gettier (1963).

Still, there is some vague feeling of circularity that goes along with even that first inference. And in principle we shouldn’t say that some reasoning is acceptable just because we can’t precisely articulate the sin it commits. Compare: We shouldn’t say that the Dharmottara cases described by Jennifer Nagel (2014, 57) are cases of knowledge just because it is hard to say exactly what makes them not knowledge.2 Call this the ‘whiff of circularity’ objection to normative externalism, since normative externalism arguably licences the first form of reasoning, but there is a whiff of circularity about it. The aim of this chapter is to respond to the whiff of circularity objection. Much of our time will be spent trying to make the objection more precise. (As David Lewis almost said, I cannot reply to a whiff.) We’ll start with the worry that the objection trades on a fundamental confusion between inference and implication.

    9.2 Inference, Implication and Transmission

    As Gilbert Harman (1986) has pointed out, it is very important to separate the theory of implication, i.e., logic, from the theory of inference, which sits in the intersection between psychology and epistemology. The following argument is perfectly valid, even though following it would make a lousy inference.

    1. The Eiffel Tower is large.
    2. The Eiffel Tower is not large.
    3. So, London is pretty.

    Using terminology drawn from work by Crispin Wright (2000, 2002), we might say this is a case where warrant does not transmit from the premises to the conclusion. An agent could not gain warrant for the conclusion of this argument by gaining warrant for its premises. But that does not tell against the validity of the argument. Whenever the premises are true, so is the conclusion. Any proof of the premises can be converted into a proof of the conclusion. And so we have excellent reason to believe the argument is valid, even though it does not ground any good inference.

    Rather than using Wright’s slightly technical term ‘warrant’, we’ll focus on the class of Potential Teaching Arguments, or PTAs. These are arguments where an agent could come to learn the conclusion by first learning the premises, and then reasoning from them to the conclusion. The modal term ‘could’ there is context-sensitive, and vague. The context sensitivity comes from the fact that whether an argument is a PTA might depend on which agent we are focussing on, and on how that agent came to know the premises. Imagine, for example, that Marie is a scientist who is working on a machine to measure the relative radioactivity of two substances. The machine is, it turns out, very accurate, but it is also the first of its kind, and the theory behind it is somewhat speculative. Now consider this argument.

    1. Marie’s machine says that a is more radioactive than b.
    2. In fact, a is more radioactive than b.
    3. So, Marie’s machine is accurate about a and b.

That’s a valid argument, but it isn’t a PTA. At least, it isn’t a PTA for Marie while she is in the process of building and testing her machine, if her evidence for 2 is simply that 1 is true. She can’t learn that the machine is accurate by simply trusting its readings. That’s true even if it is, in fact, reliably accurate. Jonathan Vogel (2000) has argued that this is a problem for many forms of reliabilism. Stewart Cohen (2002, 2005) has offered a generalisation of Vogel’s argument that threatens normative externalism plus evidentialism, and we’ll return to Cohen’s argument later in this chapter. But for now we just need to note that this argument is not a PTA for Marie, using her new machine, while it might be for other agents. A historian of science a century after Marie, trying to retrospectively figure out how accurate Marie’s innovative machine was, could use this argument in their inquiry.

    So when we say that an argument is, or is not, a PTA, we mean to be talking about a particular, contextually supplied, agent, using something like the methods for learning the premises that they actually use. The phrase ‘something like’ is obviously rather vague, but the vagueness shouldn’t worry us overly, as it won’t compromise the discussion to come.

We have already seen some valid arguments that are not PTAs. The argument from the Eiffel Tower to London might not be a PTA for anyone in any possible world. There is a radical version of the view that inference and implication must be kept separate which says that there are literally no valid PTAs. On this view, we never learn by following arguments from premises to conclusions, and thinking we do is a sign one has not properly appreciated the inference/implication distinction. I doubt this view is right. It is worth being sceptical about how often we use valid arguments in inference, but there do seem to be some cases where we do. This schema, for instance, seems to be one we can easily use.

    1. a1 is the most recent F, and it is G.
    2. a2 is the second most recent F, and it is G.
    3. a3 is the third most recent F, and it is G.
    4. So the last three Fs are G.

For a concrete instance of this, let F be President of the USA, G be is left-handed, and a1, a2 and a3 be Barack Obama, George W. Bush and Bill Clinton respectively, and imagine someone considering the argument in 2009. More generally, consider cases where G is a coincidental property of the last 3 (or more) Fs, and we see that the last few Fs have this property by simply working through the cases. The result is a conclusion that we learn simply by remembering the premises, and then doing a very simple deduction. So there are some PTAs, even if not every valid argument is a PTA.

    The clearest example of a valid argument that is not a PTA, for any agent, is A, therefore A. By definition, a PTA is one where the agent could first learn the premise, and then, in virtue of that, later come to learn the conclusion. But one cannot first learn the premise of A, therefore A, and later come to believe the conclusion. For similar reasons, it will be rare that A and B, therefore A, could be a PTA for an agent, though perhaps there are some possible instances of this schema, and some possible agents, for whom this is a PTA.

Why isn’t the argument about radioactivity a PTA for Marie? In some sense, we might say that it is because it is circular. Marie can’t use her new machine to learn that one of the premises is true, then use the argument to learn that the machine is reliable, and then, presumably, go on to use the fact that the machine is reliable to defend the second premise of the argument. Something looks to have gone wrong.

    It is tempting now to generalise from Marie’s case to the principle that no argument whose conclusion is that a particular method or tool is reliable, and whose premises were based on that method or tool, could be a PTA. But this is too quick. Or at least, as I’ll argue in the next section, those of us who are not sceptics should think it is too quick.

    9.3 Liberalism, Defeaters and Circles

    In this section I discuss the following argument.

    1. Normative externalism says that some arguments that exemplify defeater circularity are PTAs.
    2. No argument that exemplifies defeater circularity is a PTA.
    3. So, normative externalism is false.

I’m going to spend a bit of time setting up what defeater circularity is. But the basic idea behind premise 2 is that the principle suggested at the end of the previous section is true. And the idea behind premise 1 is that if we reject level-crossing and accept normative externalism, we end up committed to violations of that principle. I will mostly be concerned to argue against premise 2, though I’ll note that there are ways we could push back against premise 1 as well. The ideas of this section draw heavily on work by James Pryor (2004), and we’ll start with an important distinction he draws.

    Pryor distinguishes three different approaches epistemological theorists might take towards different epistemological methods. He offers labels for two of these approaches; I’ve added a label for the third that naturally extends his metaphor. In every case, we assume agent S used method M to get a belief in proposition p. And we’ll say the proposition M works is the conjunction of every proposition of the form (M represents that q) → q for every salient q, where → is material implication. Then we have the following three views.3

  • 3 I’m modifying Pryor’s views a bit to make these attitudes towards methods, rather than towards propositions; this makes everything a touch clearer I think. But I’m following Pryor, and the literature that has built up around his work, in focussing on justification rather than rationality. For reasons that I discussed in chapter 7, I would rather focus on rationality. I think the difference between the two concepts is not significant to this part of the discussion.

Conservatism
S gets a justified belief in p only if she antecedently has a justified belief that M works.
Liberalism
S can in some circumstances get a justified belief in p without having an antecedently justified belief that M works, but in some other circumstances she can properly use M and not get a justified belief in p, because her prior evidence defeats the support that M provides for p.
Radicalism
As long as S uses M correctly, and M genuinely says that p, and M actually works, then no matter what evidence S has against M works, she gets a justified belief in p.
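In each of these formulations, M works abbreviates the conjunction defined above. Spelled out, with q1, …, qn as the salient propositions (treated, for simplicity, as a finite list), it is the claim that ((M represents that q1) → q1) ∧ … ∧ ((M represents that qn) → qn).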

    Whether conservatism, liberalism or radicalism is the most intuitive initial view will vary depending on which particular method we are considering.

    Scientific advances naturally produce a lot of methods that we should treat conservatively. This is what we saw in the case of Marie and her machine; she couldn’t learn things about how radioactive some things are until and unless she knew the machine worked. And that’s true in general of new methods we develop. But it isn’t true, isn’t even intuitively true, of all methods.

Arguably we should be radicals about our most fundamental methods, such as introspection. A child doesn’t antecedently need to know that introspection is reliable to come to have introspective knowledge that she’s in pain. As long as introspection works, it isn’t clear this is defeasible. If, as the child grows up, she hears from some fancy philosophers that there is no such thing as pain, she might get some reasons to doubt that introspection works. But when she introspectively (and perhaps involuntarily) forms the belief that she’s in pain, she knows she is in pain.

It is a little trickier to say which methods we should be liberals about. Pryor (2000) suggests that we should be liberals about perception. Many epistemologists, following C. A. J. Coady (1995), are liberals about testimony. They deny that we need antecedent reason to believe that a particular speaker is reliable, i.e., that that person’s testimony works, before getting testimonial knowledge. But we shouldn’t just believe everything we hear, so testimonial justification is defeasible.

    Conservatism and radicalism are fairly well defined views. That is, the class of conservative views all share a strong family resemblance to each other, as do the class of radical views. The main thing we need to say about distinguishing different types of conservatism is that some conservatives have supplementary views that greatly alter the effect of their conservatism. For instance, the Cartesian sceptic is a conservative about perception who denies that we can believe perception works without having perceptual beliefs. But some other philosophers are conservatives about perception who also believe that it is a priori that perception works. Those positions will be radically anti-sceptical. So conservatism may have rather different effects elsewhere in epistemology, depending on what it is combined with. But the basic idea that one can use M iff one has prior justification for believing M works gets us a fairly well defined region of philosophical space, as does the view that one can use M under any circumstances at all.

In contrast to conservatism and radicalism, liberalism covers a wide variety of fairly disparate theories. The liberal essentially makes a negative claim, that antecedent justification for believing that M works is not needed for getting a justified belief that p, and an existential claim, that there is some way of blocking the support M provides to p. Different liberals may have very different views about when that existential claim is instantiated.

A conservative-leaning liberal thinks that there are a lot of ways to block the support that M provides to p. One way to be a conservative-leaning liberal is to say that whenever S has any reason to doubt that M works, the use of M does not justify belief in p. Pryor’s own view is that this kind of conservative-leaning liberalism is true about perception. If any kind of liberalism about testimony is correct, then presumably it is a very conservative-leaning liberalism, since it is easy to block the support that testimony that p provides to p.

A radical-leaning liberal thinks that there are very few ways to block the support that M provides to p, even if in principle there are some. One natural way to be a radical-leaning liberal is to say that the support is blocked only if S believes, or is rational in believing, that M works is false. An even more radical view says that the support is blocked only if S knows that M works is false. A fairly radical form of liberalism seems intuitively plausible for memory; we are entitled to trust memories unless we have good reason to doubt them. It’s worth keeping these radical forms of liberalism in mind when thinking about whether pure radicalism is ever true.

    Pryor also notes an interesting way in which arguments can seem to be circular. He doesn’t give this a name, but we’ll call it defeater circularity.4

• 4 I’m assuming throughout this chapter that it makes sense to talk about defeaters for beliefs. I actually don’t want to commit to that being true. But the assumption is safe nevertheless. Dialectically, the situation is this. I’m trying to respond to the best arguments I know of that normative externalism licences a problematic form of circular reasoning. If the whole ideology of defeaters is misguided, there isn’t any danger that a defeater-based argument will threaten normative externalism. But I’m not going to have the defence of normative externalism rest on that ideological claim.

Defeater Circularity
An argument exemplifies defeater circularity iff evidence against the conclusion would (to at least some degree) undermine the justification the agent has for the premises. This is Pryor’s Type 4 dependence; see Pryor (2004, 359).

    It is important that Pryor uses ‘undermine’ here rather than something more general, like ‘defeat’. Any valid one premise argument will be such that evidence against the conclusion will rebut, at least to some degree, the justification for the premises. But it won’t be necessary that this evidence undermines that justification. If one reasons X is in Ann Arbor, so X is in Michigan, then evidence against the conclusion will rebut whatever evidence one had that X is in Ann Arbor. But that might not undermine the support the premise provides to the conclusion, or that the evidence supplies to the premise. If one thought X was in Ann Arbor because a friend said that they just saw X, the counter-evidence need not impugn the friend’s reliability in general. It might just mean the friend got this one wrong.

    It is not preposterous to think that arguments which exemplify defeater circularity are defective in some way. Indeed, it is not preposterous to think that they are not PTAs. If the falsity of the conclusion would undermine the premises, then the premises rely, in some intuitive sense, on the conclusion being true. And that suggests the argument is circular. And circular arguments are not PTAs. Or at least so we might intuitively reason.

Pryor argues that some arguments which exemplify defeater circularity are, in the language being used here, PTAs. He gives two arguments for this conclusion. First, he offers direct examples of arguments that he says exemplify defeater circularity, but which could, it seems, be used to form justified beliefs in their conclusions. As he notes, however, the intuitive force of these examples is not strong. His second argument is that defeater circularity arguments suffer from some other vice, such as a dialectical vice, and we mistake this vice for their not being sources of justification.

    Most forms of liberalism imply that there will be good arguments that exemplify defeater circularity. If liberalism about M is true, and S can sometimes observe that she is using M, then she should be able to make the following argument, which we’ll call the M argument.

    1. p.
    2. M says that p.
    3. So M got this one right.

By hypothesis, this could be a way that S comes to know that M is working. Since liberalism about M is true, she doesn’t need to know that antecedently to using M to get the first premise. But the conclusion is obviously entailed by the premises. So it looks like it could be learned by learning the premises and doing a little reasoning. A kind of liberalism that says that whenever S recognises which method she is using, that method is blocked from providing support, would not licence this reasoning. But that’s a kind of liberalism that doesn’t seem particularly plausible.

    But the M argument does exemplify defeater circularity, at least if we’re assuming a not-too-radical form of liberalism about M. If S got evidence against the conclusion, that would trigger the clause saying that evidence that M does not work blocks the support that the agent gets for p by using M. That is, in the presence of such evidence, the first premise would not be supported. So we have the conditions needed for defeater circularity. So if some not-too-radical form of liberalism is true, then some arguments that exemplify defeater circularity can generate knowledge, and are in that sense not viciously circular.

    This is all relevant to us because it is plausible that defeater circularity is the kind of circularity that’s at issue in debates over Independence. Return again to Chiyoko and Aspasia, and recall the reasoning Chiyoko does.

    1. 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
    2. Aspasia believes that 2+2=5, and 3+3=7, and 4+5=8, and 7+9=15, while I believe that 2+2=4, and 3+3=6, and 4+5=9, and 7+9=16.
    3. So, she got those four sums wrong, and I got them right.
    4. It is likely that I would get at least one of them wrong if I’d taken the drug, and unlikely that she would get all four wrong unless she’d taken the drug.
    5. So, probably, I have not taken the drug, and she has.
    6. So I should not modify my beliefs about arithmetic in light of what Aspasia says; she has taken a drug that makes her unreliable.

This violates Independence. Chiyoko believes that Aspasia is the unreliable one because she calculated some sums, and realises that Aspasia got them wrong. And she uses this to conclude that Aspasia’s disagreement with her should not move her.

But where has Chiyoko gone wrong? If, as the defender of Independence insists, she should not have ended up where she did, where was her first mistake? All parties agree that statements like premise 2 are usable in debates. And step 3 follows from steps 1 and 2, presumably in a way that Chiyoko can realise. Step 4 is true, and isn’t anything she has any reason to doubt. Step 5 follows from 4 in a simple way, so Chiyoko can sensibly go from 4 to 5. And step 6 follows from 5 on any plausible theory of disagreement. One shouldn’t modify one’s beliefs in light of disagreement with someone who has taken accuracy-destroying drugs.

So the problem must be with step 1. Now it isn’t immediately obvious what is problematic about step 1. But perhaps we can see the problem if we think about things in terms of defeater circularity. The argument from step 1 to step 5 does, plausibly, exemplify defeater circularity. If Chiyoko had reason to believe that step 5 was false, she would arguably have a defeater for step 1. So here we have an argument that the normative externalist thinks is a perfectly good argument, indeed a PTA, and a kind of circularity that it exemplifies. I suspect this is the best case for the claim that normative externalists are committed to a dubious kind of circularity.

    There is a tricky dialectical point here. The normative externalist need not themselves agree that Chiyoko’s argument exemplifies defeater circularity. After all, they think that Chiyoko can reason well about arithmetic even if she has misleading evidence that she has been drugged. But it would be good to not have to rely on this aspect of the theory in order to defend the theory. So let’s just note that the objection does to some extent rely on a premise the normative externalist may wish to question, and move on.

    My main reply to the objection is that exemplifying defeater circularity cannot, in general, prevent arguments being PTAs. And that’s because there is a general argument that there must be at least some PTAs that exemplify defeater circularity. Here’s the argument for that conclusion.

    1. Liberalism is true about some method of forming beliefs or other, though we aren’t necessarily in a position to know which method it is.
    2. If liberalism is true about some method of forming beliefs or other, then some PTAs exemplify defeater circularity.
    3. So, some PTAs exemplify defeater circularity.

    I think this argument can be found in Pryor (2004), though he spends more time on arguing that particular exemplifications of defeater circularity are PTAs than directly defending the existential claim.

I’ve already argued for premise 2, in the discussion of liberalism. And the argument is valid. So the important thing is to argue for premise 1. The main argument here is a scepticism-avoidance argument. I’m going to make an argument very similar to one found in recent work by David Alexander (2011) and Matthias Steup (2013). They both argue, and I agree, that otherwise plausible anti-circularity principles lead to intolerably sceptical conclusions. My version of this argument goes via Pryor’s notions of conservatism, liberalism and radicalism.

    Call someone an extremist if they are anti-liberal about all methods. One way to be an extremist is to be a global conservative. The Pyrrhonian sceptics we will meet soon are global conservatives, and that’s why they reach such implausibly sceptical conclusions. But there are more extremists than that. Someone who thought that for any method, either radicalism or conservatism is true of that method is an extremist in my sense.

    It actually isn’t too hard to motivate extremism. I suspect many philosophers would find the following argument at least somewhat plausible.

    1. For any method of forming beliefs, either it is a priori knowable that it works, or it is not.
    2. We should be radicals about any method of forming beliefs such that it is a priori that the method works.
    3. We should be conservatives about any method of forming beliefs such that it is not a priori that the method works.
    4. So we should be extremists about all methods.

    For what it’s worth, I think both premises 2 and 3 are false. But starting with this connection to the a priori helps bring out the connection between the argument against extremism and what I’ve written elsewhere about Humean scepticism  (Weatherson 2005, 2014). The problem with extremism is that it implies external world scepticism, and we should not be external world sceptics.

    Why think that extremism implies external world scepticism? One strong reason is the long-running failure of anyone to come up with a plausible extremist response to sceptical doubts. To my mind, there is only one such response that even seems remotely plausible. This is the view that says we should be radicals about inference to the best explanation and introspection, plus the premise that the best explanation of our introspected phenomenology is that the external world exists. This kind of approach is defended, though not exactly in these terms, by Bertrand Russell (1912/1997, ch. 2), Frank Jackson (1977), Jonathan Vogel (1990), Laurence BonJour (2003), and other internalists.

Perhaps you think this kind of view can be made to work; my hopes for this project are dim. Let’s just note one problem, one boldly conceded by BonJour. Since most humans have not justified their use of perception, etc., by inference to the best explanation, it follows that most people do not have (doxastically) justified beliefs. That’s implausible on its face, and it’s symptomatic of a deeper problem. Figuring out, or even being sensitive to, the quality of different explanations of the way the world appears is cognitively downstream from the kind of simple engagement with the world that we get in perception. So it is impossible to use inference to the best explanation to justify our belief that perception is reliable, at least if conservatism about perception is correct, because we need perception to make plausible judgments about the quality of explanations.

    If that’s all correct, then liberalism must be true about some methods. And that implies exemplifying defeater circularity cannot always be a bad-making feature of arguments. So the fact that normative externalists are committed to the goodness of arguments that exemplify defeater circularity cannot be, on its own, an argument against normative externalism.

And there is even more that the normative externalist can say. Assume I’m wrong in the last few paragraphs, and actually extremism is correct. Then we have a further question to ask: Is global conservatism correct or not? If not, some kinds of radicalism are correct. And if some kinds of radicalism are correct, then a strong form of normative externalism is true, at least with respect to beliefs formed by some methods. That’s because radicalism implies that certain belief-forming methods are immune to all kinds of defeat, including belief that they don’t work, or evidence that they don’t work, or even knowledge that they don’t work. That’s a very strong form of normative externalism! Now it’s true that what we get here isn’t normative externalism in general, because all we get here is that for some belief-forming methods, higher order evidence is irrelevant. That’s consistent with higher order evidence mattering sometimes, in a way that normative externalists deny. But if the position I’m imagining here, that higher order evidence is relevant only to beliefs formed by certain methods, is correct, then general objections to normative externalism, ones that are insensitive to the methods by which people form beliefs, must be wrong.

    On the other hand, if radicalism is never true, and extremism is true, then global conservatism is true. And global conservatism is a very implausible doctrine. To see how implausible, it’s worth working through some varieties of sceptical argument.

    9.4 Pyrrhonian Scepticism and Normative Externalism

In the previous section I argued that the principle that no PTA exemplifies defeater circularity leads to external world scepticism. But perhaps that was understating the case. Perhaps it really leads to Pyrrhonian scepticism, and Pyrrhonian scepticism is a kind of reason scepticism. (The next few paragraphs draw on a discussion of scepticism by Peter Klein (2015).)

    Pyrrhonian scepticism starts with reflection on the problem of the criterion. Any knowledge we get must be via some method or other. But, says the Pyrrhonian sceptic, we can’t use a method to gain knowledge unless we antecedently know that it is a knowledge-producing method. And plausibly that implies knowing it is reliable, since methods that are unreliable do not produce knowledge. So the Pyrrhonian is a global conservative, in the terminology of the previous section. Now knowing that a method is reliable is a piece of knowledge. So to know anything, there is something we need to know before we can know anything. That’s impossible, so we know nothing.

    The problem of the criterion is potentially a very strong argument. After all, the conclusion of the last paragraph was not that we know nothing about the unobservable, or about the external world, or even about contingent matters. It is that we know nothing at all. That even extends to philosophical knowledge. So the problem of the criterion is naturally an argument for Pyrrhonian scepticism, the view that we cannot know anything, even the truth of philosophical claims like Pyrrhonian scepticism.

    For much the same reason, the view looks so strong as to be self-defeating. You might think that by the lights of the Pyrrhonian sceptic, we can’t even assert Pyrrhonian scepticism, since we can’t know it to be true. That’s too quick, since it assumes as a premise that Only assert what you know is a valid rule, and that’s both false in academic contexts, and easily denied by the Pyrrhonian. But still, a view that says we can’t know that we exist, we can’t know that we are thinking, we can’t know that ¬(0 = 1), and so on is just absurd.

    And worse still, it is an argument for an absurd conclusion with really only one key premise, namely global conservatism. Sometimes arguments can have absurd conclusions, but at least they present us with a challenge to identify where things have gone wrong. Not here! The mistake is obviously the global conservatism, since that’s the only premise there is. I’m assuming here that we are reading ‘method’ so weakly that it is uncontroversial that any knowledge is gained by some method or other.

    And so most epistemologists do indeed reject that premise. Reliabilists say that any reliable method can produce knowledge, whatever the user of that method knows about the method’s reliability. Other philosophers might say that we can use induction in advance of knowing that induction is reliable, and hence in advance of knowing it is knowledge-producing. Or perhaps we can, as Descartes suggests, use clear and distinct perception before we know it is reliable. One way or the other, the overwhelming majority of epistemologists reject global conservatism somewhere.5

  • 5 The regress argument I’ve given here requires that the conservative view be stated a little carefully. It matters that the conservative says that M only provides justification if the subject antecedently believes, with justification, that M works. A view that says that M provides justification as long as M works was antecedently justifiably believable is not conservative as I’m carving up the space of views.

If global conservatism is false, then either liberalism is true somewhere, or radicalism is true somewhere. And we have already seen that either of these conclusions would be very bad news for circularity based objections to normative externalism. They certainly suggest that the argument from defeater circularity against normative externalism fails. If liberalism is true somewhere, then some PTAs exemplify defeater circularity, contra premise 2 of the argument. And if radicalism is true somewhere, then it is possible to be a normative externalist without committing to the view that the problematic arguments exemplify defeater circularity, contra premise 1 of the argument.

    9.5 Easy Knowledge

The normative externalist looks like they will be subject to what Stewart Cohen (2002, 2005) calls “The Problem of Easy Knowledge”. This might be a better way to cash out the intuition that normative externalism leads to problematic kinds of circular reasoning.

The problem of easy knowledge arises for any theory that says an agent can use a method to gain knowledge without knowing that it is knowledge-producing. Say M is one such method, and S one such agent. And assume, at least for now, that S can identify how and when she is using M. That is, when she forms a belief that p using M, she at least often knows that she is doing so. Say that she forms beliefs p1, …, pn this way, and each of these beliefs amounts to knowledge. Then she can reason as follows.

    1. p1 ∧ … ∧ pn
    2. M said that p1 ∧ … ∧ pn
    3. So, M is fairly reliable.

    What could be wrong with this argument? We’ve assumed that the agent knows premise 1 and premise 2, so as long as she can use whatever she knows in an argument, she is in a position to run the argument. The argument is not deductive, but it seems like a decent inductive argument. Perhaps it could fail if there were external defeaters, but we can assume there are no such defeaters in S’s situation. And if the sample size strikes you as too small for the inductive inference, we can increase the size of n.

So given some weak assumptions, it looks like S can use this argument to gather inductive support for the claim that M is fairly reliable. That is to say, she can use M itself to gather inductive support for the claim that M is fairly reliable. And that has struck many philosophers as absurd. This, in essence, is the Problem of Easy Knowledge. Here are a few quotes from Cohen setting out what he takes the Problem to be. (The ‘evidentialist foundationalist’ in these quotes is the theorist who thinks that an agent can gain knowledge by drawing appropriate conclusions from evidence in advance of knowing that evidence reliably correlates with the appropriate conclusion. This is a form of normative externalism, and it’s at least arguable that if Cohen’s arguments work against the evidentialist foundationalist, they will generalise to all forms of normative externalism.)

    For example, if I know the table is red on the basis of its looking red, then it follows by the closure principle that I can know that it’s not the case that the table is white but illuminated by red lights. Presumably, I cannot know that it’s not the case that the table is white but illuminated by red lights, on the basis of the table’s looking red. So the evidentialist foundationalist will have to treat this case analogously to the global deception case: I can know the table is red on the basis of its looking red, and once I know the table is red, I can infer and come to know that it is not white but illuminated by red lights. But, it seems very implausible to say I could in this way come to know that I’m not seeing a white table illuminated by red lights.  (Cohen 2002, 313)

    It’s counterintuitive to say we could in this way know the falsity of even the alternative that the table is white but illuminated by red lights. Suppose my son wants to buy a red table for his room. We go in the store and I say, “That table is red. I’ll buy it for you.” Having inherited his father’s obsessive personality, he worries, “Daddy, what if it’s white with red lights shining on it?” I reply, “Don’t worry–you see, it looks red, so it is red, so it’s not white but illuminated by red lights.” Surely he should not be satisfied with this response. Moreover I don’t think it would help to add, “Now I’m not claiming that there are no red lights shining on the table, all I’m claiming is that the table is not white with red lights shining on it”. But if evidentialist foundationalism is correct, there is no basis for criticizing the reasoning. (Cohen 2002, 314)

    Imagine again my 7 year old son asking me if my color-vision is reliable. I say, “Let’s check it out.” I set up a slide show in which the screen will change colors every few seconds. I observe, “The screen is red and I believe it’s red. Got it right that time. Now it’s blue and, look at that, I believe its blue. Two for two…” I trust that no one thinks that whereas I previously did not have any evidence for the reliability of my color vision, I am now actually acquiring evidence for the reliability of my color vision. But if Reliabilism were true, that’s exactly what my situation would be. We can call this the problem of “easy evidence”.  (Cohen 2002, 317)

Cohen thinks that the lesson to draw from these cases is that we must distinguish between KR and PKR.

    KR
    A potential knowledge source K can yield knowledge for S, only if S knows that K is reliable.
    PKR
    A potential knowledge source K can yield knowledge for S, only if S has prior knowledge that K is reliable.

PKR is the problematic global conservatism. It leads to implausibly sceptical results. But, thinks Cohen, this is no argument against KR. Nothing in the discussion so far shows that there is anything absurd about a sweeping form of coherentism that says that S can come to know simultaneously, and for the same reasons, all of the following propositions.

    1. ¬(0=1).
    2. I used a knowledge generating method to form the belief that ¬(0=1).
    3. I used a knowledge generating method to form the belief that I used a knowledge generating method to form the belief that ¬(0=1).
    4. I used a knowledge generating method to form the belief that I used a knowledge generating method to form the belief that I used a knowledge generating method to form the belief that ¬(0=1).

    And so on. Cohen’s opponents are the anti-coherentists who think it is possible to know ¬(0=1) prior to having this infinite chain of knowledge. Such anti-coherentists can, and do, disagree substantially about what exactly is required for one to know ¬(0=1). Let’s start by considering just one opponent, a reliabilist who says that a method can produce basic knowledge if the following two conditions are met:

    • The method is in fact reliable; and
    • The agent has no reason to doubt that the method is reliable.

    This is a somewhat simplified version of the reliabilism defended by Alvin Goldman (1986, 111–12), and similar in form (though not in its externalist commitments) to Pryor’s dogmatism  (Pryor 2000). And it is very much the kind of view that Cohen takes his arguments to be targeted against. He makes three observations about this kind of theory.

First, the theory allows for a fairly simple response to doubts grounded in sceptical possibilities. If something appears to be a red table, and so we come to know that it is a table, we can simply deduce that we are not in a tableless room but deceived by an evil demon to think there is a table. This looks too quick, but as Cohen concedes, any response to scepticism will have some odd feature.

    Second, the theory allows for a fairly simple response to more everyday doubts. This is the core of Cohen’s objection to basic knowledge views. For instance, he notes that the kind of foundationalism that he considers would allow an agent to easily infer that they are not looking at a white table illuminated by red lights simply on the basis of the appearance of a red table. And this he thinks is absurd. This is the upshot of the first of the imagined conversations with his (then) 7 year-old son.

Third, the theory seems to allow for a fairly simple generation of grounds for an absurd inductive argument. Assume that the agent is living in a world where appearances do in fact reliably correlate with facts about the external world. So whenever something appears φ, the agent can know that it is φ, for any φ. So she can easily test the accuracy of her appearances just by looking. And the test will be passed every time, with flying colours! So she will have grounds for an inductive argument that appearances are an accurate guide to reality. This is the conclusion of the argument containing the second imagined conversation.

    For now, let’s assume that the intuitions about these cases are correct, and start with a question about the cases’ significance. After bringing up intuitions about these few cases, Cohen makes some rather sweeping generalisations about the impossibility of a plausible theory of basic knowledge. And that generalisation isn’t supported by these cases.

    Adding a defeasibility clause to foundationalism already avoids the worst of the problems. Cohen carefully distinguishes between inferences from everyday propositions to the falsity of outlandish sceptical claims, and inferences from everyday propositions (like That’s a red table) to the falsity of other everyday-ish propositions (like That’s not a white table illuminated by red lights). His reason for doing this is that it is the latter inferences that are especially implausible, since the necessity and difficulty of responding to the sceptic makes some otherwise counter-intuitive moves plausible. But once the defeasibility clause is in place, it isn’t clear that the everyday cases are really problems. After all, if white tables illuminated by red lights are everyday occurrences, then the defeasibility clause will be triggered. And if they are not, we are back in the realm of sceptical doubts.

    In other words, once the basic knowledge theorist adds a defeasibility clause, I don’t think Cohen can avoid considering the kind of sceptical scenarios that he grants intuitions are unreliable about. It might be that the only things we can know by basic means are relatively simple anti-sceptical propositions, since we have reason to doubt everything else. Put another way, it’s arguable that the unintuitiveness of Cohen’s example is due to the fact that we have reason to doubt that the lighting is normal in a lot of examples. So my preferred foundationalist externalist will think it is not a case of basic knowledge. And anything they do think is basic knowledge won’t be subject to these doubts.

    To make this point more dramatically, consider the theorist (such as perhaps Descartes) who thinks that introspection is a form of basic knowledge. It is not unintuitive that we can see, by introspection, that introspection is reliable. We can introspect that p and introspect that we are introspecting that p, and so deduce that introspection worked on that occasion. At the very least, this isn’t obviously wrong. For example, we mostly take our pain appearances to be reliable indicators of actually being in pain. They may or may not be reliable indicators of bodily damage, but they are reliable indicators of being in pain. We have no non-introspective evidence about this reliability. So we must, at some level, assume that introspection is good evidence that introspection is reliable.

    Let’s take stock. The big question is whether the Problem of Easy Knowledge helps us isolate a class of circular reasoning that is not acceptable. Cohen has demonstrated that several epistemological theories are committed to some reasoning that looks circular, like the reasoning involved in the imaginary conversations with his 7 year old son. Cohen himself takes those to be arguments against these epistemological theories, and by extension against a lot of circular reasoning. But it isn’t clear that Cohen’s arguments generalise as far as he intends; their intuitive force may turn on some special features about colour perception. So let’s look more closely at the intuitions behind Cohen’s examples.

    9.6 What’s Wrong with Easy Knowledge?

    It’s hard to put one’s finger on just what is supposed to be wrong with easy knowledge. Cohen usually just relies on the intuitive implausibility of the methods he is discussing being knowledge producing. But it is hard to generalise from particular cases since intuitions about any given case might be based on particular features of that case. An explanation of the intuition would avoid that problem. So I’ll go over a bunch of possible explanations of the intuitions Cohen is relying on, with the hope that once we know why these intuitions are true (when they are), we’ll know how far they generalise.

Note that one simple explanation of the intuitions in the cases Cohen gives is that radicalism, or even radical-leaning liberalism, is wrong about colour perception. That would tell us something interesting about the epistemology of colour, but not something more general about knowledge and circular arguments. And it wouldn’t be any kind of problem for the normative externalist, since the normative externalist as such has no commitments at all about the epistemology of colour perception.

The worry is that there is something more general behind Cohen’s cases, something that will be general enough to raise a problem for normative externalism. I deny there is, but I don’t think there is any way to back up this denial except to work through all the principles we might think are supported by Cohen’s cases. So that’s the game plan for this section. I’ll set things up as a dialogue between an objector, who uses reasoning inspired by Cohen’s cases to put forward views that are inconsistent with Change Evidentialism, and my responses to the objector. I’ll generally leave off the arguments that the objector’s positions are actually in conflict with Change Evidentialism, but mostly they are. There is one exception, where I make a fuss about this in the reply. The objector assumes that we are radicals, or at least radical-leaning liberals, about perception in general. We could resist that, while holding on to Change Evidentialism, but I’d rather acquiesce in this assumption.

    9.6.1 Sensitivity

    Objection:
    If you use perception to test perception, then you’ll come to believe perception is accurate whether it is or not. So if it weren’t accurate, you would still believe it is. So your belief that it is accurate will be insensitive, in the sense of Nozick (1981). And insensitive beliefs cannot constitute knowledge.

    The obvious reply to this is that the last sentence is false. As has been argued at great length, e.g. in Williamson (2000, ch. 7), sensitivity is not a constraint on knowledge. We can even see this by considering other cases of testing.

    Assume a scientist is trying to figure out whether Acme machines are accurate at testing concrete density. She has ten Acme machines in her lab, and proceeds to test each of them in turn by the standard methods. That is, she gets various samples of concrete of known density, and gets the machine being tested to report on its density. For each of the first nine machines, she finds that it is surprisingly accurate, getting the correct answer under a very wide variety of testing conditions. She concludes that Acme is very good at making machines to measure concrete density, and that hence the tenth machine is accurate as well.

    We’ll return briefly to the question of whether this is a good way to test the tenth machine below. It seems that the scientist has good inductive grounds for knowing that the tenth machine is accurate. Yet the nearest world in which it is not accurate is one in which there were some slipups made in its manufacture, and so it is not accurate even though Acme is generally a good manufacturer. In that world, she’ll still believe the tenth machine is accurate. So her belief in its accuracy is insensitive, although she knows it is accurate. So whatever is wrong with testing a machine (or a person) against their own outputs, if the problem is just that the resulting beliefs are insensitive, then that problem does not preclude knowing those outputs are accurate.

    9.6.2 One-Sidedness

    Objection:
    If you use perception to test perception, then you can only come to one conclusion; namely that perception is accurate. Indeed, the test can’t even give you any reason to believe that perception is inaccurate. But any test that can only come to one conclusion, and cannot give you a reason to believe the negation of that conclusion, cannot produce knowledge.

Again, the problem here is that the last step of the reasoning is mistaken. There are plenty of tests that can produce knowledge in one direction only. Here are four such examples.

    First example. The agent is an intuitionist, so she does not believe that instances of excluded middle are always true. She does, however, know that they can never be false. She is unsure whether Fa is decidable, so she does not believe Fa ∨ ¬Fa. She observes a closely, and observes it is F. So she infers Fa ∨ ¬Fa. Her test could not have given her a reason to believe ¬(Fa ∨ ¬Fa), but it does ground knowledge that Fa ∨ ¬Fa.

Second example. The agent is trying to figure out which sentences are theorems of a particular modal logic she is investigating. She knows that the logic is not decidable, but she also knows that a particular proof-evaluator does not validate invalid proofs. She sets the evaluator to test whether random strings of characters are proofs. After running overnight, the proof-evaluator says that there is a proof of some particular sentence S0 in the logic. The agent comes to know that S0 is a theorem of the logic, even though the failure of the proof-evaluator to output that S0 has a proof would not have given her any reason to believe it is not a theorem.

    Third example. Ada has a large box of Turing machines. She knows that each of the machines in the box has a name, and that its name is an English word. She also knows that when any machine halts, it says its name, and that it says nothing otherwise. She does not know, however, which machines are in the box, or how many machines are in the box. She listens for a while, and hears the words ‘Scarlatina’, ‘Aforetime’ and ‘Overinhibit’ come out of the box. She comes to believe, indeed know, that Scarlatina, Aforetime and Overinhibit are Turing machines that halt. Had those machines not halted, she would not have been in the right kind of causal contact with those machines to have singular thoughts about them, so she could not have believed that they are not halting machines. So listening for what words come out of the box is one-sided in the sense described above; for many propositions, it can deliver knowledge that p, but could not deliver knowledge that ¬p.

Fourth example. Kylie is a Red Sox fan in Australia in the pre-internet era. Her only access to game scores is from one-line score reports in the daily newspaper. She doesn’t know how often the Red Sox play. She notices that some days there are 2 games reported, some days there is 1 game reported, and on many days there are no games reported. She also knows that the paper’s editor is a Red Sox fan, and only prints the score when the Red Sox win. When she opens the newspaper and sees a report of a Red Sox win (i.e. a line score like “Red Sox 7, Royals 3”) she comes to believe that the Red Sox won that game. But when she doesn’t see a score, she has little reason to believe that the Red Sox lost any particular game. After all, she has little reason to believe that any particular game even exists, or was played, let alone that it was lost. So the newspaper gives her reasons to believe that the Red Sox win games, but never reason to believe that the Red Sox didn’t win a particular game.

    So we have four counterexamples to the principle that you can only know p if you use a test that could give you evidence that ¬p. The reader might notice that many of the examples involve cases from logic, or cases involving singular propositions. Both of those kinds of cases are difficult to model using orthodox Bayesian machinery. That’s not a coincidence. There’s a well known Bayesian argument in favour of the principle I’m objecting to, namely that getting evidence for p presupposes the possibility of getting evidence for ¬p. The argument turns on the fact that this is a valid argument, for any values of E, H, x you like.

    1. Pr(H) < x
    2. Pr(E) > 0
    3. Pr(H | E) ⩾ x
    4. So, Pr(H | ¬E) < Pr(H)
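
    To see why premises 1–3 really do entail 4, here, for what it is worth, is one way to run the derivation. It is supplied only for illustration, and uses nothing beyond the law of total probability:

    Pr(H) = Pr(H | E)·Pr(E) + Pr(H | ¬E)·Pr(¬E)

    First, note that Pr(¬E) > 0. For if Pr(E) = 1, then Pr(H) = Pr(H | E) ⩾ x, contradicting premise 1. So, given premise 2, Pr(H) is a weighted average of Pr(H | E) and Pr(H | ¬E), with positive weight on each term. By premises 1 and 3, Pr(H | E) ⩾ x > Pr(H), so the first term pulls the average above Pr(H); the only way the average can nonetheless equal Pr(H) is for the second term to pull it back down, that is, for Pr(H | ¬E) < Pr(H), which is conclusion 4.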

    Intuitively, we might read this as saying that if learning E would raise the probability of H above a threshold x that its prior probability falls below, then ¬E would have been evidence against H. I haven’t engaged with that argument directly, because it’s irrelevant here. When dealing with foundational matters, like logical inference, Bayesian modelling is inappropriate. I grant that in any field where Bayesian modelling is appropriate, the objection currently being considered works. What’s not so clear, in fact what is most likely false, is that we can model the above four examples in a Bayesian framework. Bayesianism just isn’t that good at modelling logical uncertainty, or changes in which singular propositions are accessible to the agent. But that’s exactly what matters to these examples.

    9.6.3 Generality

    Objection:
    Assume we can use perception to come to know on a particular occasion that perception is reliable. Since we can do this in arbitrary situations where perception is working, anyone whose perception is working can come to know, by induction on a number of successful cases, that their perception is generally reliable. And this is absurd.

    I’m not sure that this really is absurd, but the cases already discussed should make it clear that it isn’t a consequence of Change Evidentialism. It is easily possible to routinely get knowledge that a particular F is G, never get knowledge that any F is not G, and yet be in no position to infer, or even regard as probable, that all Fs are Gs.

    For instance, if we let F be is a Turing machine in the box Ada is holding, and G be halts, then for any particular F Ada comes to know about, it is G. But it would be absurd for her to infer that every F is a G. Similarly, for any Red Sox game that Kylie comes to know about, the Red Sox win. But it would be absurd for her to come to believe on that basis that they win every game.
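
    The structure here can be made vivid with a small simulation. The following sketch is purely illustrative and mine, not part of Kylie’s story; the 1,000 games and the 50 per cent win rate are arbitrary assumptions. The point it illustrates is that because only wins get reported, the reported sample is wins by construction, whatever the underlying win rate is.

        import random

        random.seed(0)

        # Arbitrary assumptions for illustration: 1,000 games, each won with probability 0.5.
        games = [random.random() < 0.5 for _ in range(1000)]   # True means the Red Sox won

        # The editor only prints a score when the Red Sox win, so losses never appear.
        reported = [game for game in games if game]

        print(f"Win rate over all games:      {sum(games) / len(games):.2f}")       # roughly 0.50
        print(f"Win rate over reported games: {sum(reported) / len(reported):.2f}")  # always 1.00

    Whatever win rate we plug in, the second number is 1.00. That is why the reported games, on their own, license no inductive inference about the unreported ones.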

    There’s a general point here, namely that whenever we can come to know about an F only if it is a G, we are never in a position to infer inductively that every F is G, or even that most of them are. Since even the foundationalist externalist doesn’t think we can come to know by perception that perception is not working on an occasion, this means we can never know, by simple induction on perceptual knowledge, that perception is generally reliable.

    9.6.4 A Priority

    Objection:
    Assume it is possible to come to know that perception is reliable by using perception. Then before we even perceive anything, we can see in advance that this method will work. So we can see in advance that perception is reliable. That means we don’t come to know that perception is reliable using perception, we could have known it all along. In other words, it is a priori knowable that perception is reliable. (This objection is related to an argument by Roger White (2006), though note his argument is directed against a slightly different target.)

    This objection misstates the consequences of the view that perception provides evidence when it works. If perception is working, then we get evidence for this every time we perceive something, and reflect on what we perceive. But if perception is not working well, we don’t get any such evidence. The point is not merely that if perception is unreliable, then we can’t possibly know that perception is reliable, since knowledge is factive. Rather, the point is that if perception is unreliable, then using perception doesn’t give us any evidence at all about anything at all. So it doesn’t give us evidence that perception is reliable. Since we don’t know antecedently whether perception is reliable, we don’t know whether we’ll get any evidence about its reliability prior to using perception, so we can’t do the kind of a priori reasoning imagined by the objector.

    This response relies heavily on an externalist treatment of evidence. A first-order internalist is perhaps vulnerable to this kind of objection. As I’ve argued elsewhere (Weatherson 2005), first-order internalists have strong reasons to think we can know a priori that foundational methods are reliable. Some may think that this is a reductio of first-order internalism. (I don’t.) But the argument crucially relies on first-order internalism, not just on foundationalism.

    9.6.5 Testing

    Objection:
    It’s bad to test a belief-forming method using that very method. The only way to learn that a method is working is to properly test it. So we can’t learn that perception is reliable using perception.

    This objection is, to me, the most interesting of the lot. It is interesting because the first premise, i.e. the first sentence of it, is true. Testing perception using perception is bad. What’s surprising is that the second premise is false. The short version of my reply is that in testing, we aim for more than knowledge. In particular, we aim for sensitive knowledge. A test can be bad because it doesn’t deliver sensitive knowledge. And that leaves room for a bad test to deliver knowledge, at least assuming that not all knowledge is sensitive knowledge. Defending these claims is the point of the next section.

    9.6.6 Circularity

    Objection:
    Even if we haven’t yet put our finger on exactly what the problem is, the reasoning involved in getting easy knowledge is in some way circular, and we should be suspicious of it.

    By this stage of the chapter, it should be clear what’s wrong with this objection. The hope was that we would find some way of making the anti-circularity intuition more precise by investigating easy knowledge. But all we’ve ended up with is the view that easy knowledge is bad because it is in some vague sense circular. If this is the intuition behind the Problem of Easy Knowledge, we’re back in the territory of the ‘whiff of circularity’ objection.

    9.6.7 Multiple Properties

    Objection:
    Let’s say we grant that each of the six properties you mentioned so far is individually compatible with knowledge. That doesn’t show that every combination of them is compatible with knowledge. In general, ◇p and ◇q don’t entail ◇(p ∧ q). So you haven’t shown easy knowledge is possible.

    I don’t quite know what to think about this objection. It strikes me as completely wrong-headed. The ‘no easy knowledge’ intuition seems, to me at least, to rest on an overlapping set of plausible but ultimately mistaken judgments about the relationship between knowledge, evidence and rationality/justifiability. I’ve argued that every reason one could have to support the intuition that easy knowledge is not knowledge is either mistaken or not strong enough to support that conclusion. Could it be that the reasons work collectively when they don’t work singularly? It’s logically possible, but I don’t see any reason at all to suspect it is true.

    In short, there isn’t any one reason to believe that the intuitions behind the most general form of the Problem of Easy Knowledge are correct. It could be that no one of those reasons is decisive, yet the intuitions are right because of some combination of them, or because of some extra factor. But at this stage, the best thing to do is to treat the intuitions as suspect. That means they can’t form the basis for any objection to normative externalism, or any other theory.

    9.7 Coda: Testing

    In response to the ‘testing’ argument for the intuition that easy knowledge is no knowledge at all, I suggested that we should distinguish between a test being in general good and a test being the kind of thing which can ground knowledge. I think that distinction is real because tests also aim at sensitive belief. A test can fail in this aim, but still produce knowledge, because sensitivity isn’t necessary for knowledge. Here’s a simplified version of a real-life situation that makes that position somewhat intuitive.

    Inspection
    In a certain state, the inspection of scales used by food vendors has two components. Every two years, the scales are inspected by an official and a certificate of accuracy issued. On top of that, there are random inspections, where each day an inspector must inspect a vendor whose biennial inspection is not yet due. Today one inspector, call her Ins, has to inspect a store run by a shopkeeper called Sho. It turns out Sho’s store was inspected just last week, and passed with flying colours. Since Sho has a good reputation as an honest shopkeeper, Ins knows that his scales will be working correctly.

    Ins turns up and, before she does her inspection, watches several people ordering caviar, which in Sho’s shop goes for $1000 per kilogram. The first customer’s purchase gets weighed, and it comes to 242g, so she hands over $242. The second customer’s purchase gets weighed, and it comes to 317g, so she hands over $317. And this goes on for a while. Then Ins announces that she’s there for the inspection. Sho is happy to let her inspect his scales, but one of the customers, call him Cus, wonders why it is necessary. “Look,” he says, “you saw that the machine said my purchase weighed 78g, and we know it did weigh 78g since we know it’s a good machine.” At this point the customer points to the certificate, issued just last week, authorising the machine. “And that’s been going on for a while. Now all you’re going to do is put some weights on the scale and see that it gets the correct reading. But we’ve done that several times. So your work here is done.”

    There is something deeply wrong with Cus’s conclusion, but it is surprisingly hard to see just where the argument fails. Let’s lay out his argument a little more carefully.

    1. The machine said my caviar weighed 78g, and we know this, since we could all see the display.
    2. My caviar did weigh 78g, and we know this, since we all know the machine is working correctly.
    3. So we know that the machine weighed my caviar correctly. (From 1, 2)
    4. By similar reasoning we can show that the machine has weighed everyone’s caviar correctly. (Generalising 3)
    5. All we do in testing a machine is see that it weighs various weights correctly.
    6. So just by watching the machine all morning we get just as much knowledge as we get from a test. (From 4, 5)
    7. So there’s no point in running Ins’s tests. (From 6)

    Cus’s summary of how testing scales works is obviously a bit crude, but we can imagine that the spot test Ins plans to do isn’t actually any more demanding than what the scale has been put through while she’s been standing there. So we’ll let premise 5 pass. (If you’d prefer more realism in the testing methodology, at the cost of less realism in the purchasing pattern of customers, imagine that the purchases exactly follow the pattern of weights that a calibrator following the officially approved methods of calibration would use.) If 3 is true, it does seem 4 follows, since Cus can simply repeat his reasoning to get the relevant conclusions. And if 4 and 5 are true, then it does seem 6 follows. To finish up our survey of the uncontroversial steps in Cus’s argument, it seems there isn’t any serious dispute about step 1.

    So the contentious steps are:

    • Step 2 - we may deny that everyone gets knowledge of the caviar’s weight from the machine.
    • Step 3 - we may deny the relevant closure principle that Cus is assuming here.
    • Step 7 - we may deny that the aim of the test is (merely) to know that the machine is working.

    One way to deny step 2 is to just be an inductive sceptic, and say that no one can know that the machine is working merely given that it worked, or at least appeared to work, last week. But that doesn’t seem very promising. It seems that the customers do know, given that the testing regime is a good one, and that the machine was properly tested, that the machine is working. And the inspector has all of the evidence available to the customers, and is in an even better position to know that the testing regime is good, so as step 2 says, she gets knowledge of the caviar’s weight from the machine.

    In recent years there has been a flood of work by philosophers denying that knowledge is closed under single-premise entailment, e.g., Dretske (2005), or under multi-premise entailment, e.g., Christensen (2005). But it is hard to see how that kind of anti-closure view could help here. We aren’t inferring some kind of heavyweight proposition, like that there is an external world, and Dretske’s kind of view is motivated by the avoidance of just that kind of inference. And Christensen’s view is that knowledge of a conjunction might fail when the amount of risk involved in each conjunct is barely enough to sustain knowledge. But we can imagine that our knowledge of both 1 and 2 is far from the borderline.

    A more plausible position is that the argument from 1 and 2 to 3 is not a PTA. But that just means that Ins, or Cus, can’t get an initial warrant, or extra warrant, for believing the machine is working by going through this reasoning. And Cus doesn’t claim that you can. His argument turns entirely on the thought that we already know that the machine is reliable. Given that background, the inference to 3 seems pretty uncontroversial.

    That leaves step 7 as the only weak link. I want to conclude that Cus’s inference here fails; even if Ins knows that the machine is working, it is still good for her to test it. But I imagine many people will think that if we’ve got this far, i.e., if we’ve agreed with Cus’s argument up to step 6, then we must also agree with step 7. I’m going to offer two arguments against that, and claim that step 7 might fail, indeed does fail in the story I’ve told, even if what Cus says is true up through step 6.

    First, even if Ins won’t get extra knowledge through running the tests on this occasion, it is still true that this kind of randomised testing program is an epistemic good. We have more knowledge through having randomised checks of machines than we would get from just having biennial tests. So there is still a benefit to conducting the tests even in cases where the outcome is not in serious doubt. The benefit is simply that the program, which is a good program, is not compromised.6

  6 The arguments of the next few paragraphs are obviously close to the arguments in Hawthorne and Srinivasan (2013).

    We can compare this reason Ins has for running the tests to reasons we have for persisting in practices that will, in general, maximise welfare. Imagine a driver, called Dri, is stopped at a red light in a quiet part of town in the middle of the night. Dri can see that there is no other traffic around, and that there are no police or cameras who will fine her for running the red light. But it is wise to stay stopped at the light. The practice of always stopping at red lights is a better practice than any alternative practice that Dri could implement. I assume she, like most drivers, could not successfully implement the practice ‘stay stopped at red lights unless you know no harm will come from running the light’. In reality, a driver who tries to occasionally slip through red lights will get careless, and one day run a serious risk of injury to themselves or others. The best practice is simply to stay stopped. So on this particular occasion Dri has a good reason to stay stopped at the red light: that’s the only way to carry out a practice which it is good for her to continue.

    Now Ins’s interest is not primarily in welfare; it is in epistemic goods. She cares about those goods because they are related to welfare, but they are her primary interest here. And we can make the same kind of point about them. There are epistemic practices which it is optimal for us to follow, given what we can plausibly do. And this kind of testing regime may be the best way to maximise our epistemic access to facts about scale reliability, even if on this occasion it doesn’t lead to more knowledge. Indeed, it seems to me that this is quite a good testing regime, and it is a good thing, an epistemically good thing, for Ins to do her part in maintaining the practice of randomised testing that is part of the regime.

    The second reason is more important. The aims of the test are, I claim, not exhausted by the aim of getting knowledge that the machine is working. We also want a sensitive belief that the machine is working. Indeed, we may want a sensitive belief that the machine has not stopped working since its last inspection. That would be an epistemic good. Our epistemic standing improves if our belief that the machine has not stopped working since its last inspection becomes sensitive to the facts. Before Ins runs the test, we know that the machine will work. If we didn’t know that, we shouldn’t be engaged in high-stakes transactions (like the caviar sales) that rely on the accuracy of the machine. But our belief that the machine will work is not sensitive to one not completely outlandish possibility, namely that the machine has recently stopped working. After the test, we are sensitive to that possibility.

    This idea, that tests aim for sensitivity, is hardly a radical one. It is a very natural idea that good tests produce results that are correlated with the attribute being tested. And ‘correlation’ here is a counterfactual notion. For variables X and Y to correlate in the relevant sense is for it to be the case that if X had been different, then Y would have been different, and that the ways Y would have been different had X been different are arranged in a systematic way. When we look at the actual tests endorsed in manuals on how to calibrate balances, producing this kind of correlation looks to be a central aim. If a machine weren’t working, and it were run through these tests, the tests would issue a different outcome than if the machine were working. But ‘testing’ the machine by using its own readings cannot produce results that are correlated with the accuracy of the machine. If the machine is perfectly accurate, the test will say it is perfectly accurate. If the machine is somewhat accurate, the test will say it is perfectly accurate. And if the machine is quite inaccurate, the test will say that it is perfectly accurate. The test Ins plans to run, as opposed to the ‘test’ that Cus suggests, is sensitive to the machine’s accuracy. Since it’s good to have sensitive beliefs, it is good for Ins to run her tests.
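
    The contrast can be made concrete with a toy sketch. It is mine and purely illustrative; the reference mass, tolerance and bias values are arbitrary assumptions, not drawn from any calibration manual. The verdict of a test against an independent reference weight varies with the machine’s bias; the verdict of a ‘test’ that checks the machine against its own readings does not.

        def self_test(bias_in_grams):
            # 'Test' the machine by comparing its reading with its own reading:
            # the comparison succeeds whatever the bias, so the verdict never varies.
            return "accurate"

        def reference_test(bias_in_grams, reference_mass=1000.0, tolerance=1.0):
            # Weigh a known reference mass (grams) and compare the reading with the true value.
            reading = reference_mass + bias_in_grams
            return "accurate" if abs(reading - reference_mass) <= tolerance else "inaccurate"

        for bias in [0.0, 5.0, 50.0]:   # grams of miscalibration, chosen arbitrarily
            print(bias, self_test(bias), reference_test(bias))

    The reference test’s verdict would have been different had the machine’s accuracy been different; the self-test’s verdict would not. That counterfactual difference is just the sensitivity that Ins’s test has and Cus’s proposed ‘test’ lacks.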

    So I conclude that step 7 in Cus’s argument fails. There are reasons, both in terms of the practice Ins is part of, and in terms of what epistemic goods she’ll gain on this occasion by running the test, for Ins to test the machine. That’s true even if she knows that the machine is working. The epistemic goods we get from running tests are not restricted to knowledge. That’s why it is a bad idea to infer, from the badness of testing our eyes using our eyes, that we cannot get knowledge that way. The aims of tests don’t perfectly match up with the requirements of getting knowledge.