Chapter 7 Changes
The version of IRT I’m endorsing has enough in common with more familiar versions of IRT that it shares their defects, or at least their alleged defects. This chapter responds to a common complaint leveled at such theories. I’ll start with a recent version of the complaint from Crispin Wright (2018), which I think makes the point as well as anyone I’ve seen. (Note he talks about ‘IRI’ here not ‘IRT’, but it doesn’t matter; it’s the ‘IR’ part that drives the problem.)
[A] situation may arise … when we can truly affirm an ‘ugly conjunction’ like:
X didn’t (have enough evidence to) know P at t but does at t* and has exactly the same body of P-relevant evidence at t* as at t.
Such a remark seems drastically foreign to the concept of knowledge we actually have. It seems absurd to suppose that a thinker can acquire knowledge without further investigation simply because his practical interests happen so to change as to reduce the importance of the matter at hand. Another potential kind of ugly conjunction is the synchronic case for different subjects:
X knows that P but Y does not, and X and Y have exactly the same body of P-relevant evidence.
when affirmed purely because X and Y have sufficiently different practical interests. IRI, as we noted earlier, must seemingly allow that instances of such a conjunction can be true. (Wright 2018, 368)
The objection here is that IRT makes changes in knowledge depend on something other than evidence, and in particular that it makes changes depend on interests. And that is indeed what the theory says. If the very statement of the theory is objectionable, we have some work to do in defending the theory.
Wright is hardly the first to note that IRT looks bad in this way. You can see versions of this critique being made by Gillian Russell and John Doris (2009), and by Michael Blome-Tillmann (2009). David Eaton and Timothy Pickavance (2015) note a similar but even stranger consequence of the theory. Sometimes a single event can reduce a person’s evidence for p, and that very event can turn them from not knowing that p to knowing that p. All of this is very strange, and the IRT defender should have something to say.
My main reply to these objections is that they overgenerate. If they worked, they would be successful objections to any theory that separated knowledge from rational true belief. But since knowledge is not rational true belief, there can be no successful objections to all such theories. But it will take a bit of work to get to this reply; let’s get started.49
7.1 Overview of Replies
The first thing one could say about these objections is that since they just state a prominent feature of the view, that it allows knowledge to turn on non-alethic features, and object to that very feature, the objections are blatantly question-begging. One could say that, but really that and $2.75 will get you a ride on the New York subway. The opponents think that this view is radical. And of course the objections to radical views will end up being question-begging (Lewis 1982). Saying that one’s opponents are begging the question might make you feel better - you don’t have to be persuaded by their arguments - but doesn’t actually move the debate forward. We can, and must, do better.
A second, and slightly more helpful, thing to say is that on some versions of the interest-relative view, it will be very hard to state the objection. Consider a version of the interest-relative view that also accepts E=K, the thesis that one’s evidence is all and only what one knows. This is hardly an obscure version of the view; it’s what is defended by Jason Stanley (2005). Now it will not be true on the view that there are, as Wright suggests, two people who have the same evidence but different knowledge. That’s impossible, since having different knowledge literally entails, on this view, that they have different evidence. But does this make the objection go away, or does it just make it harder to state? I’m mostly inclined to think it’s the latter. There is still something weird about people who have the same input from the world, and the same reactions to that input, but who differ in what they know about the world. So this response, while more useful than the last one, i.e., not totally useless, won’t quite work either.
A third response challenges head on the intuition about ‘weirdness’ mentioned in the previous paragraph. One of the consequences of the vast Gettier literature is that there are any number of cases where people have the same inputs, the same true beliefs based on those inputs, but different knowledge. It’s trivial to get these inter-world versions of a case like this, and maybe that’s enough to undermine the intuition. More generally, it’s hard to state, and endorse, the intuition that the interest-relative theory violates without committing oneself to something very much like the JTB theory of knowledge. And since that theory is false, that’s kind of bad news for the intuition. Or, perhaps more carefully, either that theory is false, or justification is understood in terms of knowledge, as on the E=K picture. And appealing to E=K might be an independent way to respond to the challenge. I’ll spell out this response more fully in section 7.2.
A fourth response aims to undermine the intuition in a different way. There is something fundamentally right about the JTB theory of knowledge, at least if we don’t presuppose that the justification, the J, gets an internalist spin. But what’s right about it can’t be that the theory is extensionally correct. So what is it? My conjecture is that knowledge is built, in the sense described by Karen Bennett (2017), out of those three components: justification, truth and belief. Now this needs a notion of building that doesn’t involve necessitation, and spelling that out would be a task for a different (and longer!) book. I’ll try and say enough in section 7.3 to make it at least minimally plausible that this conjecture is true, and that it is consistent with IRT.
The fifth response, and the one I want to lean on the most, comes from Nilanjan Das (2016). On the most plausible ways of articulating what the differences are between JTB and knowledge, it’s not just that the differences will depend on ‘non-standard’ factors, it’s that they will often depend on interests. Whether a belief is safe, or sensitive, or produced by a reliable method, or apt, or virtuous, or satisfies any other plausible criterion you might want, depends in part on the interests of the believer. More carefully, whether a belief satisfies any one of those properties can be counterfactually dependent on the interests of the believer. So I conclude that these objections massively overgenerate. If they are right, they show that practically every theory of knowledge produced in the last several decades is false. But it’s really implausible that these kinds of considerations could show that. So the objection fails. And I’ll end in section 7.4 by spelling out this response.
7.2 So Long JTB
The story of investigations into knowledge over the last fifty years is the story of finding ever more things that knowledge is sensitive to. The thesis of this book is that human interests, in particular the interests of the would-be knower, should be added to that list. But to defend that thesis, and especially to defend it from the kind of blank-stare objection that I’m worrying about in this chapter, it helps to remind ourselves of some of the things which we all already know that knowledge is sensitive to. So I’m going to describe a mundane case of knowledge, then discuss various ways in which that knowledge could be lost if the world were different.
Our protagonist, Charlotte, is reading a book about the build-up to World War One. In the base case, the book is Christopher Clark’s The Sleepwalkers (Clark 2012), though in some of the variants we’ll discuss she reads a less impressive book. In it she reads the remarkable story of Henriette Caillaux, the second wife of the anti-war French politician Joseph Caillaux. As you may already know, Henriette Caillaux shot and killed Gaston Calmette, the editor of Le Figaro, after Le Figaro published a string of damaging articles about Joseph Caillaux. The killing took place on March 16, 1914, and the trial was held that July. It ended on July 28 with her acquittal.
Charlotte reads all of this and believes it. And indeed it is true. And the book is reliable. Although Charlotte does believe what the book says about Henriette Caillaux, she is not credulous. She is an attentive enough, and skilled enough, reader of contemporary history to know when historians are likely to be going out on a limb, and when they are not being as clear as one might like in reflecting how equivocal the evidence is. But Clark is a good historian, and Charlotte is a good reader, and the beliefs she takes from the book are both true and supported by the underlying evidence.
Focus for now on this proposition:
Henriette Caillaux’s trial for the murder of Gaston Calmette ended in her acquittal in late July 1914.
Call this proposition p. In this base case, Charlotte knows that p. But there are ever so many ways in which Charlotte could have failed to know it. The following three are particularly important.
Variant J: Charlotte didn’t finish the book. She only got as far as the start of Caillaux’s trial, but lost interest in the machinations of the diplomats in the late stages of the July crisis. Still, she had a strong hunch that Caillaux would be acquitted and, on just this basis, firmly believed that she would be.
Variant T: Charlotte is in a world where things went just as in the actual world up to the trial, but then Caillaux was found guilty. Despite this, Charlotte reads a book that is word-for-word identical to Clark’s book. That is, it falsely says that Caillaux was acquitted, before quickly moving back to talking about the war. Charlotte believes, falsely, that p.
Variant B: Charlotte reads the book to the end, but she can’t believe that Caillaux was acquitted. The evidence was conclusive, she thought. She is torn because she also can’t really believe a historian would get such a clear fact wrong. But she also can’t believe anyone would be acquitted in such a trial. So she withholds judgment on the matter, not sure what actually happened in Caillaux’s trial.
In all three of these variants, Charlotte does not know that p. I take these three kinds of cases to be good evidence that knowledge requires (respectively) justification, truth, and belief. In Variant J, Charlotte lacks knowledge because her belief in p is not justified; it is a mere hunch. In Variant T, Charlotte lacks knowledge because her belief is not true; it is an honest mistake. In Variant B, Charlotte lacks knowledge because she doesn’t even believe p; she has the evidence, but does not accept it.
In all three cases there are philosophers who argue that these conditions are not strictly necessary, but it would take us too far afield to debate these points. I will simply take for granted that cases like Variant J show justification (or some kind of rationality) is necessary for knowledge, Variant T shows that truth is necessary for knowledge, and Variant B shows belief (or at least some kind of strong acceptance) is necessary for knowledge. (I discussed issues about belief more back in chapter 3.)
For a short while in the mid-20th century, some philosophers thought these conditions were not merely necessary for knowledge, but jointly sufficient. To know that p just is to have a justified, true belief that p. This became known, largely in retrospect, as the JTB theory of knowledge. It fell out of fashion dramatically after a short but decisive criticism was published by Edmund Gettier (1963). But Gettier’s criticism was not original; he had independently rediscovered a point made by the 8th century philosopher Dharmottara (Nagel 2014). Here is a version of the kind of case Dharmottara discovered.
Variant D: Like in Variant J, Charlotte stops reading before the denouement. And she believes that Caillaux was acquitted. But she does not believe this on the basis of a hunch. Rather, she believes it because she read in another book that official France was so discombobulated in July 1914 that it didn’t manage to convict a single murderer. This is false, but Charlotte used it to reason to the true conclusion that p.
In Variant D Charlotte does not know that p. She does not know it because reasoning that relies on a falsehood in just this way does not ground knowledge. So here is another thing that knowledge is sensitive to - whether the grounds for one’s belief are true.
The variations from now on will not be as intuitively clear; whether Charlotte knows that p will be a matter of greater dispute than in these first four variants. And I’m not going to use intuitions about any of the cases to provide evidence for the view I am endorsing. But I think the cases are good illustrations of ways knowledge might be sensitive to environmental factors. The first case is a version of an example due to Gilbert Harman (1973, 143ff).
Variant H: It is surprising that Charlotte has never heard of Henriette Caillaux, because in the world she inhabits, the story of Caillaux is infamous. She is as well known as other famous killers like Ned Kelly, Jack the Ripper, and Lee Harvey Oswald. Novels, plays and movies are frequently made about her killing of Calmette. But in all of these popular depictions, the ending is fictionalised. Every one of them ends with Caillaux’s conviction and execution. This happened because the authorities were so embarrassed by her acquittal that they created a vast alternative reality in which Caillaux was convicted. Charlotte, by an amazing coincidence, is the only person to have not encountered this story. So when she reads a word-for-word duplicate of Clark’s book, she doesn’t realise it is controversial, and believes that p. Had she seen any of these books or plays, she would have assumed her book was making some mistake, since it is ‘common knowledge’ that Caillaux was convicted.
Intuitions may vary on this, but in Variant H, I don’t think Charlotte knows that p. If that’s right, then whether Charlotte knows that p is sensitive not just to the evidence she has, but to the evidence that is all around her. If she’s swimming in a sea of evidence against p, and by the sheerest luck has not run into it, the evidence she does not have can block knowledge that p.
The previous example relied on the possibility of counter-evidence being everywhere. Possibly all that matters is that the counter-evidence is in just the right somewhere.
Variant S: In this world, an over-zealous copy-editor makes a last minute change to the very first printing of Clark’s text. Not able to believe that Caillaux was acquitted - the evidence was so conclusive - they change the word ‘acquittal’ to ‘conviction’ in the sentence describing the end of the trial. Happily, this error is quickly caught, and it is only the very first printing of the book that contains the mistake. Charlotte started reading the book after seeing it in a second-hand shop. The shop had two copies: one from the flawed first printing, and one from a later printing. Charlotte buys the later one because it is the first one she sees; had she entered the history section from the other direction, she would have bought the first printing, and come to believe that p is false.
In this case, Charlotte doesn’t know that p is true. There is too much luck in her happening to buy the later printing rather than the earlier printing. The belief-forming method that she uses - buy an apparently authoritative history book and believe the plausible and well-supported things it says - goes wrong on just this question in a very nearby possible world. (That is, it goes wrong in the world where she picks up the other copy.) And that kind of luck is incompatible with knowledge.
The contemporary terminology for this is that a belief-forming method only yields knowledge if it is safe. And a method is safe only if it doesn’t go wrong in nearby, realistic scenarios (Williamson 2000). So whether one knows is sensitive not just to the evidence one has, but to the evidence one could easily have had.
But safety in this sense is a tricky notion. In Variant K, it seems to me that Charlotte does know that p.
Variant K: Charlotte detests reading books on paper, and only ever reads on her Kindle (an electronic book-reading device). Just like in Variant S, there was an error in the first printing of Clark’s book. But the Kindle version never contained this error, and in any case, Kindle versions are updated frequently so even if it had, the error would have been quickly corrected. Charlotte reads the book on her Kindle, and comes to believe that p.
In this case, Charlotte believes p on good evidence from a trustworthy source, and there is no realistic possibility where she goes wrong on this question by trusting this source. That seems to me like enough for knowledge.
I’ll return to the difference between Variants S and K in section 7.4, but first I want to look at two more cases.
Variant C: Charlotte reads Clark’s book and believes p. But like in Variant B, she was sure that Caillaux would be convicted. And she still thinks it is absurd that someone would be acquitted given this evidence. But rather than responding to these conflicting pressures by withholding judgment, she responds by both believing that p is true, and believing it is false. She is just inconsistent, like so many of us are in so many ways.
It seems to me that in this case, Charlotte does not know that p. The incoherence in her beliefs on this very point undermines her claim to knowledge. And at last we get to the case that motivates this book.
Variant I: Charlotte reads the book, and believes that p. She is then offered a bet by a curiously benevolent deity. If she takes the bet, and p is true, she wins a dinner at her favourite bistro, Le Temps des Cerises. If she takes the bet, and p is false, she is cast into The Bad Place for eternity. If she declines the bet, life goes on as normal. And now she’s deciding what to do.
Now I think Variant I is just like the others; it’s a case where knowledge is lost. And in fact I think Variant I is basically a special case of Variant C. Knowledge requires coherence and rationality, and in Variant I, if Charlotte believes p, she is either irrational or incoherent. I’ll come back to this point about the relationship between Variants C and I in section 7.3. First I want to reflect a bit on what we’ve seen in the earlier cases.
Most of the people who think that it is implausible that interests matter to knowledge are happy acknowledging the varieties of sensitivity that are revealed by Variants J, T, B, D, H, S, K and C. (Or at least they acknowledge most of these; maybe they have idiosyncratic objections to one or other of them.) They just think this one new kind of sensitivity is a bridge too far. It is a bit of a puzzle to me why we should think sensitivity to interests is more philosophically problematic than the other kinds of sensitivity we’ve seen so far. It might help to get you to share my puzzlement if we start with what looks like a simple question. What should we call the class of factors that knowledge is sensitive to, as revealed by these variants, but which does not include interests?
One option is to call them the ‘traditional’ factors. Now since discussion of, say, safety only really became widespread in the 1990s, the tradition of including it in one’s theory of knowledge is quite a new one. But I don’t mind calling new things traditional. I’m Australian, and we have great traditions like the traditional Essendon-Collingwood Anzac Day match, which also dates to the 1990s. But this terminology has a very short shelf-life. After all, we’ve been discussing the role of interests in epistemology since at least 2002 (Fantl and McGrath 2002), so that’s almost long enough to be traditional as well.
Another option is to say that they are the factors that are truth-connected, or truth-relevant. But there’s no way to make sense of this notion in a way that gets at what is wanted. For one thing, it’s really not obvious that coherence constraints (like we need for Variant C) are connected to truth. For another, all Variant I suggests is that we need a principle like the following in our theory of knowledge.
Someone knows something only if their evidence is strong enough for them to rationally treat the thing as a fixed starting point in their inquiries.
On the face of it, that’s truth-connected. It says knowledge requires strong evidence. Now, of course, it also says that just how strong the requirement is depends on what the believer’s inquiries are. But what the critics want to say is not just that every factor that matters to knowledge is truth-relevant, but that every aspect of every factor that matters is truth-relevant.
And that really can’t be right. Or, at least, it can’t be right unless you believe the JTB theory of knowledge. If the JTB theory is false, then there are bound to be counterexamples to the principles that IRT is inconsistent with. So consider Wright’s ‘ugly conjunction’.
> X didn’t (have enough evidence to) know P at t but does at t* and has exactly the same body of P-relevant evidence at t* as at t. (Wright 2018, 368)
If evidence goes primarily to justification, then similarity of evidence at t and t* should just tell us that X is rational in believing P at both times or at neither. Let’s say that it’s both times. Then as long as one could be in a JTB-but-not-knowledge state at t but not at t*, this conjunction should be possible. So here’s one way that could happen.
Charlotte reads the book on her Kindle, and believes that p at t0. The next day, at t, she can’t believe she read that p and reads the book again. It still says that p, but this is bizarre because a new version of the book that says ¬p was pushed out to all Kindles. Due to a network failure, Charlotte’s Kindle was the only one not to get the push. She now doesn’t know that p; this case is just like the safety cases and the Harman cases. The next day at t* a corrected version of the book that says p is pushed out to all Kindles, including Charlotte’s. Again perplexed, she triple checks, and comes to believe, and know, that p.
The ugly conjunction that IRT endorses is something that theories that are sensitive to safety considerations, or evidential availability considerations, also endorse. And the true theory is sensitive to one or other kind of these considerations.
7.3 Making Up Knowledge
All that said, I’ve come to think there is something right about the JTB theory. Or, as I’d prefer, the RTB theory; as in Rational True Belief.50 It isn’t extensional adequacy; Dharmottara refuted that 1300 years ago. But it can be expressed using the modern51 notion of grounding. Or, as I’d prefer, using the notion of a building relation that Karen Bennett (2017) describes.
Consider a very abstractly described case where all of 1-4 are true.

1. S knows that p.
2. S’s attitude to p is rational.
3. p is true.
4. S believes that p.
I think that when 1 is true, it is made true by 2-4. Following Bennett, we might say that the fact expressed in 1 is built from the facts expressed in 2-4. Now to make this work, we need a notion of building (or grounding) that’s contingent, since 2-4 do not collectively entail 1. And defending the coherence of such a notion in detail would make for a very different book to this one. But I’ll say a few words about why I think such a notion is going to be needed.
When I say that 1 is made true by 2-4, I mean that it is metaphysically explained by 2-4. They provide a complete explanation of 1’s truth. Now here’s the key step. A complete explanation need not be an entailing explanation. I’ll give a relatively uncontroversial example of this involving causal explanation, then suggest a different philosophical example.
It is, famously, hard to explain the origins of World War One. But without settling all the causal and explanatory issues about the war’s origins, we can confidently make the following two claims.
- (C) Had a giant asteroid struck Sarajevo on June 27, 1914, the war would not have started when it did.
- (NE) It is no part of the explanation of the start of the war that no such giant asteroid struck Sarajevo on June 27, 1914.
The counterfactual claim C can easily be verified by thinking about the consequences of giant asteroid strikes. (See, for example, the extinction of the dinosaurs.) And the claim about explanation NE can be verified by thinking about how absurd the task of explanation would be if it were false. For every possible event that could have changed history, but didn’t, we’d have to include its non-happening in our explanation of the war. The non-occurrence of every possible alien invasion, mass pandemic, or tulip mania that could have happened, and would have made a difference, would be part of our explanation. This seems absurd too.
So the origins of the war are sensitive to whether there was a giant asteroid strike, but the lack of a giant asteroid strike is no part of the complete explanation for why the war took place. Complete causal explanations can leave out things that are counterfactually relevant to whether the event took place. And that means that they aren’t entailing explanations, since if everything in the complete explanation happened, but so did an asteroid strike, the war wouldn’t have taken place.
We see the same thing in commonsense morality. This is one of the key points behind Bernard Williams’s “One Thought Too Many” argument (Williams 1976). If one’s child is drowning in a pool, one has a reason to dive in and rescue them. Moreover, it’s a complete reason. When someone asks “Why did you do that?”, you’ve given them a complete reason if you say “My child was drowning”. And you should accept that answer even if you think there are cases where that would be the wrong thing to do. Set up your preferred horror story moral example where diving in to rescue the child would lead to the destruction of the world. Had that horror story been actual, it would not have been morally required to dive into the pool. But in reality, a complete explanation of why it was required was that one’s child was drowning.
I want to say the same thing about knowledge and interests. What one knows is always (in principle) sensitive to what one’s interests are. But in cases where one knows, one’s knowledge is not explained by what one’s interests are. Indeed, it is explained just by the factors that go into RTB, and perhaps the interplay between them.
And I want to suggest, somewhat tentatively, that some of the objections to the very plausibility of IRT rely on running together the notions of building and of counterfactual dependence. In their critique of IRT, Gillian Russell and John Doris (2009) repeatedly talk about how implausible it is that a change in interests can “make” one have knowledge. Strictly speaking, I don’t think a change in interests does make one have knowledge. It’s true that one might have knowledge, and not have had that knowledge had one’s interests been different. But it doesn’t follow that facts about interests stand in a making, or building, relationship to facts about knowledge. They could be, and should be, treated as things relevant to whether facts about truth, belief and rationality suffice in the circumstances for knowledge. Those factors, and only those factors, make for knowledge. That’s true whether we’re talking about familiar counterexamples to the JTB (or RTB) theories, or whether we’re talking about interest-relativity.
But while I think attending to the distinction between counterfactual dependence and building explains some of the implausibility of IRT, it surely doesn’t explain all of it. So in the next and last section of the chapter, I’ll go over an argument by Nilanjan Das that when we look at the particular ways people have tried to get around the failures of RTB, we see that everyone is committed to at least some interest-relativity.
7.4 Every Theory is Interest-Relative
Think about the difference between Variant S and Variant K.52 Variant S was meant to be a simple case where Charlotte does not know something because of a safety violation. Knowledge is incompatible with a certain kind of luck. To know something is to do better than make a lucky guess. Charlotte isn’t guessing, but she seems to be lucky in a similar kind of way to the guesser, so she doesn’t know. But in Variant K, she isn’t lucky. It’s no coincidence that her book said the correct thing. There is no serious possibility of her being misled on this point.
If Charlotte knows that p in Variant K, but not in Variant S, then whether she knows that p depends, among many other things, on her preferred format for reading books. The first thing to note here is that it is very hard to explain this dependence if we insist that all the factors relevant to knowledge are ‘truth-relevant’ or ‘truth-connected’. She gets the truth in both cases; it’s just that she is lucky one time and not another.

We can make the same point using other cases. In Variant H, Charlotte lacked knowledge because of evidence around her. But imagine a variant of that variant where Charlotte recently emigrated to a country where no one ever talks about Henriette Caillaux. In this variant, Charlotte knows that p. So her knowledge of French history is sensitive to her emigration status. And that isn’t truth-relevant or truth-connected. If knowledge is sensitive to external factors, and knowledge isn’t required to be infallible, then knowledge will be sensitive to things that are not particularly truth-relevant. Any fallibilist, externalist theory of knowledge will face a version of the reference class problem in order to say whether a particular true belief was a matter of luck. The solution to that problem will rely on factors that are not in themselves truth-relevant. And given that different reference classes will have different success rates, it will turn out that knowledge depends on things that are not truth-relevant, namely the things that put one in different reference classes.
That’s enough to argue against sweeping generalisations about what knowledge could or could not be sensitive to. Knowledge could be sensitive to anything, because anything could matter to a reference class. But as Nilanjan Das (2016, 116) shows, we can say something stronger. Cases like these can be used to directly argue for interest-relativity, even if one rejects all the other arguments in the existing literature on IRT.
Knowledge requires not getting it right just by luck. Making that intuition precise is a lot of work, but it means at least that the following is true. If the method the person used to form their belief frequently goes wrong in their actual environment, then even on occasions when the method gets the right answer, it isn’t knowledge. But what is their environment? It isn’t just the places within a fixed distance of them. Rather, it’s the places they could easily have ended up in; the places where it’s a matter of luck whether or not they are in them. So my environment, in the relevant sense, consists of a network of college towns and universities throughout the globe, and excludes any number of places a short drive away. But should I become more interested in nearby suburbs than in far-away colleges, my environment would change. That is to say, environment is an interest-relative notion.
If knowledge is sensitive to what one’s environment is like, and one’s environment in the relevant sense is a function of one’s interests, then knowledge is going to be interest-relative. That’s the point about Charlotte and the Kindle. Two people can be alike in what signals they get from the world, and alike in what the world is like immediately around them, but be in different environments because of their different interests. If the method they use to form beliefs on the basis of that signal has differing levels of success in different environments, then whether they have knowledge will be sensitive to which environment they are in. And that will depend on any number of ‘non-traditional’ factors, including their interests.
In general, any theory that appeals to safety, or reliability, or publicly available evidence, or almost any other external factor, will make knowledge interest-relative. Knowledge means you couldn’t easily have gone wrong. But what could easily happen, in the relevant sense, is an interest-relative notion. When you’re assessing whether my belief-forming mechanisms are safe and reliable, you should worry about whether they work in college towns on the other side of the world, but not about whether they work in lakeshore towns across the state. The opposite is true when thinking about my neighbors. And that’s solely because of a difference in interests.
Now this isn’t the only way, or even the main way, that interests matter to knowledge. But it is a way. And it shows that objections that rely on the very idea of knowledge being interest-relative must over-generate. Unless such objections are tied to a rejection of the idea that safety or reliability or any other external factor matters to knowledge, they rule out too much.
That concludes the defence of IRT over the last three chapters. The final two chapters of the book return to setting out the view, going over two important, but technical, points. First, I argue that rational belief is not sensitive to interests in quite the same way that knowledge is. And second, I argue that evidence is interest-relative, but also in not quite the same way that knowledge is.
I first made a version of this reply in my (2016b). I earlier replied to Russell and Doris, and to Blome-Tillmann, in my (2011), but I now think the replies there didn’t really get to the heart of the matter.↩︎
I think it’s strange to apply the notion of justification to beliefs, and much more natural to talk about rational beliefs.↩︎
Well, modern if you think it’s not the same notion as Meister Eckhart’s notion of grounding. I’m a little agnostic on that.↩︎
Though they are making somewhat different points, there is a resemblance between these cases and the cases that Gendler and Hawthorne (2005) use to raise trouble for fake barn intuitions.↩︎