In Knowledge and Its Limits, Timothy Williamson argues that few conditions are luminous. A condition is luminous iff we are in a position to know we are in it whenever we are. Slightly more formally:
A condition C is luminous if and only if (L) holds:
(L) For every case a, if in a C obtains, then in a one is in a position to know that C obtains (95).
We will have to pay some attention in what follows to the range of the quantifier over cases. Williamson suggests that (L) fails for all interesting conditions even if we restrict the quantifier to those that are ‘physically and psychologically feasible’ (94), and I will assume that is what we are quantifying over. To argue that (L) fails for any interesting C, Williamson first argues that it fails in a special case, when C is the condition feeling cold, and then argues that the conditions that lead to failure here are met for any other interesting C. So I will also focus on the special case.
Mr Davis’s apartment faces southwest, so while it is often cold in the mornings it always warms up as the midday and afternoon sun streams in. This morning Mr Davis felt cold when he awoke, but now at noon he is quite warm, almost hot. But the change from wake-up time to the present is rather gradual. Mr Davis does not take a hot bath that morning, nor cook a hot breakfast, but sits reading by the window until the sun does its daily magic. Assume, for the sake of the argument, that feeling cold is luminous, so whenever Mr Davis feels cold, he knows he feels cold. Williamson argues this leads to a contradiction as follows. (I have replaced all the pronouns so Williamson’s argument will apply directly to the case described here.)
Let t0, t1, …, tn be a series of times at one millisecond intervals from dawn to noon. Let ai be the case at ti (0 ≤ i ≤ n). Consider a time ti between t0 and tn, and suppose that at ti [Mr Davis] knows that [he] feels cold. … Now at ti+1 [he] is almost equally confident that [he] feels cold, by the description of the case. So if [he] does not feel cold at ti+1, then [his] confidence at ti that [he] feels cold is not reliably based, for [his] almost equal confidence on a similar basis one millisecond later that [he] feels cold is misplaced … [His] confidence at ti was reliably based in the way required for knowledge only if [he] feels cold at ti+1. In the terminology of cases…:
(ii) If in ai [he] knows that [he] feels cold, then in ai+1 [he] feels cold.
Given (L), all instances of (ii), and the fact that Mr Davis feels cold when he awakes, we get the false conclusion that he now feels cold. So if we accept all instances of (ii), we must conclude that (L) is false when C is feeling cold and ‘one’ denotes Mr Davis. Why, then, accept (ii)? One move Williamson makes here is purely defensive. He notes that (ii) is different from the conditionals that lead to paradox in the Sorites argument. The antecedent of (ii) contains the modal operator ‘knows that’, which is absent from its consequent, so we cannot chain together instances of (ii) to produce an implausible conditional claim. If that operator were absent, then from all the instances of (ii) it would follow that if Mr Davis feels cold at dawn he feels cold at noon, which is false. By strengthening the antecedent, Williamson weakens (ii) enough to avoid that conclusion. But the fact that (ii) is not paradoxical is not sufficient reason to accept it.
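The derivation from (L) and (ii) can be set out as follows. (This is my formalization, not Williamson's own notation; following the text at this point, I elide the gap between knowing and being in a position to know. Note that the chaining goes through only with (L)'s help, which is why instances of (ii) on their own are not Sorites-paradoxical.)

```latex
% C(a_i): Mr Davis feels cold in case a_i.  K: `Mr Davis knows that'.
\begin{align*}
\text{(L)}  &\colon\quad C(a_i) \rightarrow K\,C(a_i)       && \text{for each } i \\
\text{(ii)} &\colon\quad K\,C(a_i) \rightarrow C(a_{i+1})   && \text{for each } i \\
\text{hence}&\colon\quad C(a_i) \rightarrow C(a_{i+1})      && \text{for each } i < n.
\end{align*}
% From C(a_0) (he feels cold at dawn), n applications of modus ponens
% yield C(a_n): he feels cold at noon. But he does not, so either (L)
% or some instance of (ii) must be rejected.
```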
It is useful to separate out two distinct strands in Williamson’s argument for (ii). One strand sees Williamson arguing for (ii) by resting on the principle that beliefs constitute knowledge only if they are reliably based. The idea is that if Mr Davis’s belief that he feels cold is a bit of knowledge, it is reliable, and if it is reliable it is true in all similar situations, and hence it is true in ai+1. The other strand sees him appealing to a vague but undoubtedly real requirement that beliefs must be safely true in order to be knowledge. Neither argument is successful, though the second kind of argument comes closer to working.
Williamson acknowledges Conee and Feldman’s arguments that no reliabilist epistemologist has yet solved the generality problem (100). But he takes this to be a reason not to abandon the concept of reliability, but rather to abandon the hope of providing a reductive analysis of it. Williamson thinks we can get a long way by just resting on the intuitive concept of reliability. In this context, this seems to be a mistake. There are two ordinary ways of using ‘reliable’ in the context of discussing beliefs, and neither provides support for (ii).
First, and this is clearly not what is needed, sometimes ‘reliable’ just means true. This is the sense of the word in which we can consistently say, “It turned out the information that old Ronnie provided us about where the gov’nor was eating tonight was reliable, which was plenty surprising since Ronnie hadn’t been right about anything since the Nixon administration.” This is the sense in which ‘reliable’ means just what the etymology suggests it means, something that can be relied upon. And that means, in practice, true. But that won’t help at all, for if ‘reliable’ just means true, then nothing follows from the fact that knowledge is reliable that does not follow from the fact that it is factive.
Secondly, there is a distinctively philosophical sense in which reliable means something more like true in a wide range of circumstances. This is the sense in which a stopped clock is not even reliable twice a day. At first, this might look to help Williamson a little more. But a second look is more discouraging. For in its philosophical usage, reliability does not even entail truth. And if reliability does not entail truth in the actual situation, it surely does not entail truth in all nearby situations. But Williamson’s argument for (ii) requires that reliability in ai entails truth in ai+1. So on neither of its natural readings does the concept of reliability help here, and since the concept of reliability resists analysis, we have no unnatural reading to fall back upon. The argument from reliability for (ii) therefore fails. On the other hand, in most of the book it seems Williamson intends the concept of safety to do more work than the concept of reliability. So let us see whether we can use safety to support his argument.
Williamson at times suggests that the core argument for (ii) is a straight appeal to intuition. “[E]ven when we can appeal to rigorous rules, they only postpone the moment at which we must apply concepts in particular cases on the basis of good judgement. … The argument for (ii) appeals to such judgement.” (101) The appeal to intuition is the royal road to scepticism, so we would be justified in being a little wary of it. Weinberg, Stich and Nichols (2002) discovered that undergraduates from the same social class as Williamson, Mr Davis and me would frequently judge that a subject could not know that a mule was a mule unless he could tell it apart from a cleverly painted zebra. The judgements of that class are not obviously the basis for a sane epistemology.
Williamson undersells his argument by making it an appeal to judgement. For there is a principle here, if not a rigorous rule, that grounds the judgement. The principle is something like Ernest Sosa’s safety principle. The idea is that a belief does not constitute knowledge if it is false in similar situations. For S to know that p, it must be the case that “not easily would S believe that p without it being the case that p.” (Sosa 1999: 142) There is much to be said here about what is a similar situation. (Lewis 1996 says some things relevant in his discussion of which worlds are salient, in his preferred sense, in virtue of being similar to salient worlds.) It might turn out that there is no account of similarity that makes it plausible that this is a constraint on knowledge. But for present purposes I am prepared to grant (a) that only safe beliefs count as knowledge, and (b) that ai+1 is a similar situation to ai. I don’t think this gives us (ii), for reasons related to some concerns first raised by Mark Sainsbury (1995).
The role for a safety condition in a theory of knowledge is to rule out knowledge by lucky guesses. This includes lucky guesses in mathematics. If Mr Davis guesses that 193 plus 245 is 438, he does not thereby know what 193 plus 245 is. Can a safety condition show why this is so? Yes, but only if we phrase the safety condition a certain way. Assume that we have a certain belief B with content p. (As it might be, Mr Davis’s belief with content 193 + 245 = 438.) Then the following two conditions both have claims to being a safety condition.
Content-safety. B is safe iff p is true in all similar worlds.
Belief-safety. B is safe iff B is true in all similar worlds.
If we rest with content-safety, then we cannot explain why Mr Davis’s lucky guess fails to count as knowledge. For in all nearby worlds, the content of the belief he actually has is true. If we use belief-safety as our condition, though, I think we can show why Mr Davis has not thereby gained some mathematical knowledge. The story requires following Marian David’s good advice for token physicalists and rejecting content essentialism about belief (David 2002; see also Gibbons 1993 for similar sage advice). The part of Mr Davis’s brain that currently instantiates a belief that 193 plus 245 is 438 could easily have instantiated a belief that 193 plus 245 is 338, for Mr Davis is not very good at carrying hundreds while guessing. If, as good physicalists, we identify his belief with the part of the brain that instantiates it, we get the conclusion that this very belief could have had the false content that 193 plus 245 is 338. So the belief is not safe, and hence it is not knowledge.
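The contrast between the two conditions can be made vivid with a toy model of the arithmetic-guess case. (The modelling choices here, such as representing similar worlds by the content the token belief carries in them, are illustrative assumptions of mine, not anything in the text.)

```python
# Toy model: content-safety vs belief-safety for Mr Davis's lucky guess.
# The actual content of his guessed belief is "193 + 245 = 438", a
# necessary truth; but the token belief could easily have carried the
# false content "193 + 245 = 338" instead.

def content_true(claimed_sum):
    """A content of the form '193 + 245 = n' is true iff n is the real sum."""
    return 193 + 245 == claimed_sum

# The content Mr Davis's belief actually has.
actual_content = 438

# Content-safety asks whether the *actual* content is true in every
# similar world. A necessary truth is true in all of them.
similar_worlds = range(5)  # five arbitrary similar worlds
content_safe = all(content_true(actual_content) for _ in similar_worlds)

# Belief-safety asks whether the *token belief* is true in every similar
# world. In some similar worlds the very same token carries the false
# content 338, because he is bad at carrying hundreds while guessing.
token_contents_by_world = [438, 338, 438, 338, 438]
belief_safe = all(content_true(c) for c in token_contents_by_world)

print(content_safe)  # True: content-safety cannot rule the guess out
print(belief_safe)   # False: belief-safety explains why it isn't knowledge
```

On these assumptions the guess passes content-safety but fails belief-safety, which is just the asymmetry the paragraph above describes.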
This lends some credence to the idea that it’s belief-safety, not content-safety, that’s the important safety criterion. When talking about Mr Davis’s mathematical hunches, belief-safety is a stronger condition than content-safety. But when talking about his feelings, things may be reversed.
Let me tell you a little about how Mr Davis’s mind is instantiated. Mr Davis’s phenomenal beliefs do not arise from one part of his brain, his belief box or mind’s eye, tracking another part, the part whose states constitute his feeling cold. Rather, when he is in some phenomenal state, the very same brain states constitute both the phenomena and a belief about the phenomena. Mr Davis’s brain is so wired that he could not have any sensation of radiant heat (or lack thereof) without his thereby believing that he is having just that sensation. In that case, it seems belief-safety does not guarantee (ii). Imagine that at ai Mr Davis feels cold, but at ai+1 he does not. (I assume here, with Williamson, that there is such an i.) At ai he thereby believes that he feels cold. The content of that belief is a de se proposition that is false at ai+1, so it violates content-safety. But in ai+1 that part of his brain does not instantiate his feeling cold, and thereby does not instantiate his believing that he feels cold. By hypothesis, by that time no part of his brain instantiates feeling cold. So the belief in ai that he feels cold is not false in ai+1; it either no longer exists, or now has the true content that Mr Davis does not feel cold. So safety does not prevent this belief of Mr Davis’s from being knowledge. And indeed, it seems rather plausible that it is knowledge, for he could not have had just this belief without it being true. This belief violates content-safety but not belief-safety, and since we have no reason to think that content-safety rather than belief-safety is the right form of the safety constraint, we have no reason to reject the intuition that this belief, this more or less infallible belief, counts as a bit of knowledge.
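Schematically, in my notation, with B the token brain state that in ai instantiates both the feeling and the belief:

```latex
\begin{align*}
\text{in } a_i\colon\quad & \mathit{Cold}(a_i), \text{ and } B \text{ exists with content }
  [\text{I feel cold}] \text{ --- a true content}; \\
\text{in } a_{i+1}\colon\quad & \neg\mathit{Cold}(a_{i+1}), \text{ and } B \text{ either
  no longer exists} \\
  & \text{or has the true content } [\text{I do not feel cold}].
\end{align*}
```

So there is no similar case in which B exists with a false content, and belief-safety holds; but the content B actually has in ai is false in ai+1, so content-safety fails.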
This story about Mr Davis’s psychology might seem unbelievable, so let me clear up some details. Mr Davis has both phenomenal and judgemental beliefs about his phenomenal states. The phenomenal beliefs are present when and only when the phenomenal states are present. The judgemental beliefs are much more flexible: they are nomically independent of the phenomena they describe. The judgemental beliefs are grounded in ‘inner perceptions’ of his phenomenal states. The phenomenal beliefs are not; they just are the phenomenal states. The judgemental beliefs can be complex, as in the belief that he feels cold iff it is Monday, while the phenomenal beliefs are always simple. It is logically possible that Mr Davis be wired so that he feels cold without believing he feels cold, but it is not an accident that he is so wired. Most of his conspecifics are similarly set up. It is possible that at a particular time Mr Davis has both a phenomenal belief and a judgemental belief that he feels cold, with the beliefs instantiated in different parts of his brain. If he has both of these beliefs in ai, then Williamson’s argument may well show that the judgemental belief does not count as knowledge, for it could be false in ai+1. If he has the judgemental belief that he is not cold in ai, then the phenomenal belief that he is cold may not be knowledge, for it is plausible that the existence of a contrary belief defeats a particular belief’s claim to knowledge. But that does not mean that he is not in a position to know that he is cold in ai.
Some may object that it is conceptually impossible that a brain state that instantiates a phenomenal feel should also instantiate a belief. And it is true that Mr Davis’s phenomenal states do not have some of the features that we typically associate with beliefs. These states are relatively unstructured, for example. Anyone who thinks that it is a conceptual truth that mental representations are structured like linguistic representations will think that Mr Davis could not have the phenomenal beliefs I have ascribed to him. But it is very implausible that this is a conceptual truth. The best arguments for the language of thought hypothesis rest on empirical facts about believers, especially the facts that mental representation is typically productive and systematic. If there are limits to how productive and systematic Mr Davis’s phenomenal representations are, then those arguments leave it possible that his phenomenal states are beliefs. Certainly those states are sufficiently correlated with inputs (external states of affairs) and outputs (bodily movements, if not actions) to count as beliefs on some functionalist conceptions of belief.
Mr Davis is an interesting case because he shows just how strong a safety assumption we need to ground (ii). For Mr Davis is a counterexample to (ii), but his coldness beliefs satisfy many plausible safety-like constraints. For example, his beliefs about whether he feels cold are sensitive to whether he feels cold. Williamson (Ch. 7) shows fairly conclusively that knowledge does not entail sensitivity, so one might have thought that in interesting cases sensitivity would be too strong for what is needed, not too weak as it is here. Of course false theories can be multiply mistaken, so there’s no conclusive argument here, just an odd point. From this it follows that any safety condition that is strictly weaker than sensitivity, such as the condition that the subject could not easily believe p and be wrong, is not sufficient to support (ii). Williamson slides over this point by assuming that the subject will be almost as confident that he feels cold at ai+1 as he is at ai. This is no part of the description of the case, as Mr Davis shows.
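The sensitivity claim about Mr Davis's coldness beliefs can be put as a counterfactual in the standard Nozickian form. (This formalization is mine, not from the text.)

```latex
% Sensitivity: if Mr Davis were not cold, he would not believe he was.
\neg\,\mathit{Cold} \;\Box\!\!\rightarrow\; \neg\,\mathrm{Bel}(\text{I feel cold})
```

Since the belief just is the feeling, removing the feeling removes the belief, so the counterfactual holds trivially; yet (ii) still fails for him, which is the odd point noted above.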
My argument above rests on the denial of content essentialism, which might look like a relatively unsafe premise. So to conclude this section, let’s see how far the argument can go without that assumption. Sainsbury responds to his example, the lucky arithmetic guess, by proposing a different version of safety: mechanism-safety.
Mechanism-safety. B is safe iff the mechanism that produced B produces true beliefs in all similar worlds.
I don’t want to rest too much on this, because I think it’s rather hard to say exactly what the mechanism is that produces Mr Davis’s belief that he feels cold. But if it’s just his sensory system, then I think it is clear that even at ai, Mr Davis’s belief that he feels cold satisfies mechanism-safety. The bigger point here is that content-safety is a very distinctive kind of safety claim, but it’s the only kind that justifies (ii).
To close, let me stress how limited my criticisms of Williamson here are. Very briefly, the argument is that there can be some self-presenting mental states, states that are token identical with the belief that they exist, and these beliefs will satisfy all the safety requirements we should want, even in borderline cases. If some conditions are invariably instantiated in self-presenting states, then those conditions will be luminous. And I think it is a live possibility, relative at least to the assumptions Williamson makes, that there are such self-presenting states. But there aren’t very many of them. There is a reason I picked feels cold as my illustration. It’s not laughable that it is self-presenting.
On the other hand, it is quite implausible that knowing where to buy the best Guinness is self-presenting. And for states that are not self-presenting, I think Williamson’s anti-luminosity argument is likely to work. That’s because it is very plausible (a) that for a belief to be knowledge it must satisfy either belief-safety or mechanism-safety, (b) that beliefs about a non-self-presenting state probably satisfy belief-safety or mechanism-safety only if they satisfy content-safety, and (c) that, as Williamson showed, if beliefs about a state must satisfy content-safety to count as knowledge, then that state is not luminous. So epistemic states, like the state of knowing where to buy the best Guinness, are not luminous. That is to say, one can know where to buy the best Guinness without knowing that one knows this. And saying that (in this context) is just to endorse Williamson’s arguments against the KK principle. Those arguments are an important special case of the argument against luminosity, and I don’t see how any of my criticisms of the general argument touch the special case.
Williamson describes his attacks on luminosity as an argument for cognitive homelessness. If a state were luminous, that state would be a cognitive home. Williamson thinks we are homeless. I think we may have a small home in our phenomenal states. This home is not a mansion, perhaps just a small apartment with some afternoon sun, but it may be a home.
Don’t be fooled into thinking this supports any kind of foundationalism about knowledge, however. It is true that if we have the kind of self-presenting states that Mr Davis has, then we have the self-justifying beliefs that foundationalism needs to get started. But it is at best a wide-open philosophical and scientific question whether we have any such states, while it is not a wide-open question whether we have any knowledge, or any justified beliefs. If these states were the only things that could serve as foundations, it would still be at least conceptually possible that we could have knowledge without self-justifying foundations. So the kind of possibility exemplified by Mr Davis cannot, on its own, prop up foundationalism.
David, Marian (2002) “Content Essentialism” Acta Analytica 17: xx-xx.
Gibbons, John (1993) “Identity without Supervenience” Philosophical Studies 70.1: 59-79.
Lewis, David (1996) “Elusive Knowledge” Australasian Journal of Philosophy 74: xx-xx.
Sainsbury, Mark (1995) “Vagueness, Ignorance and Margin for Error” British Journal for the Philosophy of Science 46.4: 589-601.
Sosa, Ernest (1999) “How to Defeat Opposition to Moore” Philosophical Perspectives 13: 137-49.
Weinberg, Jonathan, Stephen Stich and Shaun Nichols (2002) “Normativity and Epistemic Intuitions” Philosophical Topics xx-xx.
Williamson, Timothy (2000) Knowledge and Its Limits. Oxford: Oxford University Press.