8  Higher-Order Evidence

8.1 Varieties of Higher-Order Examples

Higher-order evidence is evidence about one’s own evidence, or reliability, or rationality. Several examples have been proposed that are often taken to show that rationality requires adjusting one’s confidence in certain propositions in response to higher-order evidence. And the best explanation of that phenomenon may well be that some level-crossing principle or other is true. Since it’s my task to argue against level-crossing principles, I need to say something about these examples.

The examples that have been proposed thus far all have a similar structure. The hero starts out with a firm belief, and the belief would license a decisive action. Something happens that would, in normal cases, cause a person to question both the belief and the wisdom of taking decisive action. The suggested explanation is that a level-crossing principle is true, and explains the normal person’s hesitation. But the structure of the level-crossing principles has nothing to do with hesitation, either in belief or action. If the principles were true, there should be cases where higher-order evidence, evidence about the nature of one’s evidence or capacity, licenses decisive belief or action that is not licensed by the first-order evidence. And once we see what such a licensing looks like in practice, the level-crossing principles look less attractive. So my main aim here is to expand the diet of examples that we have, and judge explanations by how well they handle all the examples in this class.

I already introduced one of the proposed examples in the previous chapter: David Christensen’s example of the medical resident. I’m going to argue that the details of the case are underspecified in important ways. Once we fill in those details, it becomes clear that there are ways to respond to the case that lend no support to level-crossing principles. Since we’ll discuss the example at some length, it’s worth repeating it here.

I’m a medical resident who diagnoses patients and prescribes appropriate treatment. After diagnosing a particular patient’s condition and prescribing certain medications, I’m informed by a nurse that I’ve been awake for 36 hours. Knowing what I do about people’s propensities to make cognitive errors when sleep-deprived (or perhaps even knowing my own poor diagnostic track-record under such circumstances), I reduce my confidence in my diagnosis and prescription, pending a careful recheck of my thinking. (Christensen 2010, 186)

First, a relatively trivial point. Many of the examples in the literature to date are written as either first-personal narratives, as this one is, or as second-personal narratives. It’s not particularly easy to write commentary on such narratives. How, exactly, should we refer to the protagonist of the story? Should we call him David? That seems informal, and incorrect. I’ve been using the clumsy ‘the narrator’ or ‘the resident’, but those aren’t the easiest phrases to track, especially over time. So it’s better to give the protagonist a name. For similar reasons, it is better to say what exactly the diagnosis is, so we can easily refer back to it directly. There are two scope ambiguities in ‘David doubts that his diagnosis is supported by his evidence’, and those ambiguities can be cleared up if we specify what the diagnosis is, and what the evidence for it is.

While there are these general reasons to eschew first-personal narratives, there is an extra reason for concern here. The externalist thinks it is very important to distinguish evaluations of states from evaluations of agents, and to distinguish both of these from advice. We’re interested here in what it would be rational for the resident to believe. That’s distinct, at least in principle, from what a wise resident would believe in the circumstances. And both of those are distinct, again at least in principle, from what would be advisable for the resident to believe; i.e., from what advice we should give the resident about how to deal with such situations. Using first-personal, or second-personal, narratives in philosophical examples encourages conflation of rationality, wisdom and advisability. And we’re wading into territory where it is important to remember those can come apart.

Returning to this example, Christensen does not make clear whether the doubts that have been raised are focussed in the first instance on the rationality of the resident, or on the reliability of the resident. (Indeed, the parenthetical remark seems to point in the opposite direction to the main text on just this point.) This distinction may be important.1 That is, it may be that the rational response to learning that one is prone to irrationality is very different to the rational response to learning that one is prone to unreliability. Maybe that won’t be so, but at the beginning of inquiry there is little reason to think these two responses are certain to go together. So let’s keep them apart in the examples we introduce.

1. Indeed, in later work Christensen (2016) himself is very clear on the importance of this distinction, and what I say here draws on that later work.

    I’m going to spend a lot of time on these three cases. All of them have a similar structure to Christensen’s case, but with many more details filled in.

    Raina is a medical resident with a new patient. He came in complaining of a burning sensation in his scalp and a nasty smell that he can’t explain. Raina looks at him and sees his hair is on fire. She decides that this is the cause of his symptoms, and starts to put the fire out. She is then told that she has been on duty for 36 hours, and that residents who have been on duty that long are typically over-confident in their diagnoses and prescriptions. What should she believe and do?

    Regina is a medical resident with a new patient. The whites of his eyes are yellow, and he is lethargic. Regina was taught in medical school that literally every lethargic patient with yellow eyes is jaundiced. (This is, we’ll assume, actually true in Regina’s world, though I’m sure it is actually false.) And she was taught, correctly, that every jaundiced patient should be treated with quinine. In her world, quinine cures all cases of jaundice and is, unlike every other medicine, free of all adverse side-effects. (Remember this is a fictional example!) So Regina prescribes quinine, recalling these facts from her medical training. But she is then told that she has been on duty 36 hours, and that residents who have been on duty that long are typically over-confident in their diagnoses. What should she believe and do?

    Riika is a medical resident with a new patient. He has a fever, headache, muscle and joint pains, and a rash that blanches when pressed. And he has recently returned from a trip to Louisiana. It seems to Riika that her patient has dengue fever, and that he should be treated with paracetamol and intravenous hydration. This is right; Riika’s patient does actually have dengue fever, and it’s rational to make that diagnosis after correctly processing the available evidence. But then Riika is told that she has been on duty for 36 hours, and that residents who have been on duty that long are typically over-confident in their diagnoses. What should she believe and do?

    My judgment on these cases is that Raina should keep trying to put out the fire, Riika should get a second opinion, and hold off on the treatment if it seems at all safe to do so, and that Regina’s case is rather hard. That is, the details of what the symptoms are, and what the diagnosis and prescription are, matter to the judgment about what they should believe and do.

    Now note that this doesn’t immediately get Change Evidentialism off the hook. All it takes to refute Change Evidentialism is one case, and Riika’s case may be enough to get the job done. But Raisa’s case, and Regina’s too, are important. Our best theory should explain what’s true about those cases, and explain why the cases are different from Riika’s. (If, indeed, Regina’s case is different.) Ideally, they would even explain why Regina’s case is a hard case, though maybe that’s too much to ask of a philosophical theory  (Ichikawa 2009).

    As you may have guessed, I’m going to argue that Change Evidentialism does the best job at discharging these explanatory burdens. Before I start showing that, we need one more case. Christensen’s example is one where the higher-order evidence seems to push in the direction of being more uncertain. All of the cases from the literature that I cited earlier have the same feature. But in principle we can imagine cases that go the other way.

    Roshni is a medical resident with a new patient. His symptoms are similar to those of Riika’s patient, but his rash does not blanch when pressed, and indeed is light enough that it doesn’t have the distinctive visual characteristics of the rash produced by dengue fever. Given his symptoms and history, Roshni thinks he probably has dengue fever, though the oddity of the symptoms means that she thinks other diagnoses are possible. So she wants to run more tests before committing to any course of treatment. One reason for her to run more tests is that she remembers there are some other illnesses going around that display similar symptoms to what her patient displays. Roshni is then told that she has been on duty for 13 hours, and (this is actually true in the world of the story) that residents who have been on duty between 12 and 14 hours are typically over-cautious in their diagnoses. If such a resident thinks probably p, then p is almost always true, and the resident should simply have come to believe p. Now as it turns out Roshni is an exception to this rule; she really doesn’t have strong enough evidence to conclude that her patient has dengue fever, and she’s right to stop at the conclusion that he probably has dengue. But she has no independent reason to believe that she is an exception. So what should she believe and do?

    It would be wrong for Roshni to reason as follows.

    When someone in my circumstance concludes probably p, then there is almost always sufficient evidence to conclude definitely p. I’ve concluded he probably has dengue fever. So he definitely has dengue fever. So I’ll stop running tests and start the prescribed treatment for dengue fever.

    Roshni can’t rule out other possible diagnoses simply on the basis of general characteristics of residents in situations like hers. If her patient has some other disease, and Roshni treated him for dengue on the basis of higher-order considerations, she’d be guilty of malpractice.

    So now we have another task for our theory to perform. It must explain why there is, to use a term Stewart Cohen suggested to me, epistemic gravity. Riika’s case shows that, at least sometimes, intuition wants agents to lower confidence when they learn they are in a situation where people are often over-confident. But Roshni’s case shows that the converse is not always true. Higher-order evidence can, according to intuition, make confidence go down but not up. And that’s especially true if one had judged correctly to begin with.

    I’m going to argue that a theory that rejects level-crossing principles, and accepts Change Evidentialism, is best placed to explain these four cases.

    8.2 Diagnoses and Alternatives

    It is easy to see why one might think Riika’s case is a problem for Change Evidentialism. Imagine that Riika’s twin sister is also a medical resident, and looks at the same public data about Riika’s patient. And she, like Riika, concludes that the patient has dengue fever. Now the residents are both told that Riika (but not her sister) has been awake for 36 hours, and hence is a member of a class that is systematically over-confident in their diagnoses. This seems like a reason for Riika, but not her sister, to reduce her confidence that the patient has dengue fever. And that’s a problem for Change Evidentialism. That Riika has been awake for 36 hours either is, or is not, evidence against the hypothesis that the patient has dengue fever. If it is, then both sisters should become less confident. If, more plausibly, it is not, then if Riika should change, that violates Change Evidentialism.
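
    One way to make the dilemma in that last step vivid is with a probabilistic gloss. (The gloss is mine; nothing in the case turns on this particular formalism.) Say that evidence E tells against hypothesis H, relative to the probability function P that measures evidential support, just in case conditioning on E lowers the probability of H:

    \[ P(H \mid E) < P(H) \]

    If the sisters share all their other evidence about the patient, they share the relevant probability function, so this inequality holds for Riika just in case it holds for her sister. That is why Change Evidentialism seems forced to treat the two sisters alike.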

    There is a purely technical solution to this problem that I mention largely to set aside. The argument of the previous paragraph assumed that when the nurse told Riika how long she’d been awake, the evidence Riika received was a proposition like Riika has been awake for 36 hours. That’s evidence that Riika can get, and that her sister can get. And intuitively learning that has a different effect on the two of them. But we could conceptualise Riika’s evidence differently. We could think her evidence is a centered world proposition, in the sense popularised by David Lewis (1979). On this picture, Riika’s evidence is I have been awake for 36 hours, while her sister’s evidence is My sister has been awake for 36 hours. So they get different evidence. So there is no argument that Change Evidentialism fails.

    This feels a bit like a cheat, at best. After all, we can imagine that the nurse explicitly says to the pair of them, “Riika has been awake for 36 hours”. In that case it would feel extremely artificial to say that the evidence is really this first-personal claim about Riika. But while this technical attempt to save the letter of Change Evidentialism isn’t attractive, it tells us something useful. The information about Riika’s sleep (or lack thereof) matters to Riika because of what it tells her about her mind, i.e., about the very mind she is both using to think about the patient, and thinking about. And an explanation of what goes on in the case should be sensitive to this fact.

    It is important that Riika and her sister are medical residents. The patient in the next bed can’t reasonably believe that Riika’s patient has dengue fever on the basis of the data. Or at least he can’t unless he has medical training. Should we think this is a case where different people with the same evidence can draw different conclusions? No, because this data about the patient does not exhaust the evidence. The evidence also includes everything relevant that Riika learned in her medical training. That’s evidence she has in common with her sister, but not with the patient in the next bed.

    The evidence provided by training, and background information, has to play two roles. First, it has to make it plausible that the patient has dengue fever. It does that by including facts about the symptoms the patient displays, and facts about what symptoms patients with dengue fever typically display. But it must also play a second role. In making a diagnosis and a prescription, Riika isn’t just saying that the patient has dengue fever. She is also saying that dengue fever is the cause of the symptoms. And that requires excluding a lot of other possible diseases, either on the basis that they are inconsistent with the symptoms displayed, or because they are initially implausible and the evidence does not sufficiently raise their likelihood to make them worth taking seriously. If the patient has dengue fever and some other equally serious disease that causes some of the symptoms, then to diagnose dengue fever is to some extent to mis-diagnose the patient. And to start the treatments for dengue fever is, in such a case, to mis-treat the patient. In these respects, forming a diagnosis of dengue fever is importantly different from, and stronger than, forming a belief that the patient has dengue fever.

    This exclusion of alternative diseases must be prior to the diagnosis of dengue fever. Imagine how strange it would sound for Riika to have this conversation with her supervisor:

    Supervisor: Why do you think that the patient does not have West Nile?
    Riika: Well, the patient has a fever, headache, a rash etc.
    Supervisor: Yes, those are all consistent with West Nile.
    Riika: Ah, but you see, from those symptoms we can conclude that the patient has dengue fever.
    Supervisor: Yes, and?
    Riika: So the symptoms have been fully explained, so there is no reason to believe the patient has West Nile.

    That’s not good reasoning. It would be perfectly good to reason that the symptoms aren’t consistent with West Nile, so the patient doesn’t have West Nile. Or that West Nile is very rare among people with the patient’s background, so it is better to conclude that he has a disease that is (much) more prevalent in areas he has been. But it isn’t good to first diagnose the patient with dengue fever, and use that to conclude they don’t have West Nile.

    So a good diagnosis draws on lots of background information. So that information must in some sense be available to the doctor. I don’t mean that the information has to be accessible in the sense that she could recite it off hand. But she must be able to base her diagnosis on the background information. And if she’s been awake for 36 hours, then that information is probably not available, even in this weak sense. As I will discuss in section 8.4, there are hard questions about just when it is that evidence previously acquired can still be used. But it is plausible that the relevant information that excludes other diagnoses is not something Riika can use in her tired state.

    There is another complication to consider here. Riika has to rule out particular alternatives like West Nile before she can diagnose the patient with dengue fever. But she also has to rule out, collectively, alternative explanations she hasn’t thought of, or may have forgotten. It’s not enough that the alternative explanations simply fail to exist. If one knows the patient has yellow eyes, and as a matter of fact the only possible explanation for this is that they are jaundiced, it doesn’t follow that one is in a position to rationally conclude the patient is jaundiced. One must know that only jaundice causes yellow eyes, or at least that it’s the only plausible cause. And the same holds for all other diagnoses.

    It is here that concerns about one’s own alertness become particularly pressing. At least in my own case, the most worrying consequence of excessive tiredness is that I overlook alternative explanations of phenomena. When that happens, my abductive inferences to particular explanations are unreasonable because I should have looked harder for alternatives before settling on one explanation. So let’s spend some time thinking about how this might affect the reasonableness of Riika’s diagnosis.

    8.3 Tiredness and Abduction

    We’d like to show that NR is true, and even better, that LNR is true, without positing any kind of level-crossing principle.

    NR
    It is Not Reasonable for Riika to believe that her patient has dengue fever.
    LNR
    When she Learned that she had been awake for 36 hours, it became Not Reasonable for Riika to believe that her patient has dengue fever.

    Since we’re not using level-crossing principles, we can’t reason as follows.

    1. Riika has been awake for 36 hours, and she knows this.
    2. So it is reasonable for her to believe that her diagnoses are unreasonable.
    3. Whenever it is reasonable to believe that some mental state is unreasonable, it is unreasonable to maintain that mental state.
    4. It was not unreasonable to believe that she’d made a reasonable diagnosis before learning how long she’d been awake.
    5. So, from 2, 3 and 4, LNR is true.
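
    In schematic form (the notation is mine, not anything from the literature), the level-crossing principle in step 3 is:

    \[ RB(\neg R(S)) \rightarrow \neg R(S) \]

    where S ranges over mental states, R(S) says that it is reasonable to maintain S, and RB(q) says that it is reasonable to believe q. It is this schema, not the surrounding steps, that does the real work in the argument.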

    If we want to reject level-crossing principles, then we have to reject step 3 of that purported explanation. We need to find something to put in its place. I’m going to offer three explanations. The first two are probably flawed. But I’m offering them in part because they aren’t obviously wrong, and would solve the problem without appeal to level-crossing principles. And, more importantly, thinking through what’s wrong with these explanations helps us see what’s right about the correct explanation of Riika’s case. Here is the first of these probably flawed explanations.

    1. To reasonably conclude that p by abductive inference, Riika needs to antecedently, reasonably believe that other explanations of the data fail.
    2. Her best evidence that other explanations of the data fail is that (a) it seems to her that no other explanation works, and (b) she is a reliable judge of when alternative explanations are available.
    3. When she learns she has been awake for 36 hours, she is no longer in a position to reasonably use part (b) of that evidence.
    4. So LNR is true; once she learns that she has been awake for 36 hours, she can no longer reasonably make the abductive inference from the data to the diagnosis of dengue fever.

    I suspect there are two, related, mistakes in this explanation. It relies on a ‘psychologised’ conception of evidence, and Timothy Williamson (2007) has argued convincingly against that conception of evidence. It isn’t at all obvious that Riika has to reason from how things seem to her to conclusions about the world in order to form medical diagnoses.

    And it isn’t obvious that Riika has to form a reasonable belief that there are no alternative explanations, and that she has to do so before forming the diagnosis. It might be that an abductive inference is reasonable if one’s evidence rules out alternative explanations of the data, and one is reliably disposed to consider alternative explanations when they are not ruled out. In other words, an abductive inference might be good (in part) in virtue of being based in a skill in considering explanations, and that skill may be manifest when the abductive conclusion is drawn, not antecedently to it being drawn.

    Even if all that is true, there is still a skill that is needed. That skill needs to reliably rule out alternative explanations. And Riika is really tired; maybe she can’t exercise that skill while so tired. This idea leads to our second (probably mistaken) explanation.

    1. To reasonably conclude that p by abductive inference, Riika has to be able to reliably rule out alternative explanations as unreasonable.
    2. Since she’s been awake for 36 hours, Riika cannot reliably rule out alternative explanations of the symptoms as unreasonable.
    3. So NR is true; Riika cannot make the diagnosis reasonably because she cannot reliably rule out alternatives.

    One shortcoming of this explanation is that it doesn’t explain LNR. Indeed, if the premises here are true, then LNR is in fact false. It is the fact that Riika has been awake for 36 hours that makes her diagnosis unreasonable, not her learning that she’s been awake that long. To the extent that we think LNR is true, that’s a reason to dislike the explanation.

    A bigger problem for this explanation is that we don’t really know that premise 2 is true. What we know is that folks in general who have been awake as long as Riika are not reliable. But perhaps she is an exception. Indeed, the setup of the example suggests she may well be an exception. The fact that other people in her position are unreliable does not entail that she is unreliable. Or, at least, it doesn’t entail this without some strong assumptions about the reference class that is relevant to Riika’s reliability. So let’s try a different explanation.

    The alternative explanation starts with the observation that the reliability of a mechanism is not normally enough for it to produce reasonable, or rational, beliefs. If a scale is working, but there is excellent testimonial evidence that it is not working, it is unreasonable to believe what the scale says. This applies to internal mechanisms too. If one is reliably told that one is in an environment full of visual illusions, it is unreasonable to believe what one sees, even if one’s eyesight is reliable.

    A similar story holds true for skills. To learn that the patient has dengue fever, Riika has to exercise her skill at reliably ruling out alternative explanations of the data. And while she has such a skill, she has no reason to believe that she has it. Indeed, she has a positive reason to believe that she lacks it, since she has been awake so long, and people who have been awake that long typically lack the skill. So she should not rely on the skill. Here, then, is my preferred explanation for what’s going on in Riika’s case. I’ll call that explanation the evidentialist explanation in what follows, since it makes key use of how evidence does (or does not) change in explaining changes in what states it is rational to hold.

    1. To reasonably conclude p by abductive inference, Riika must reasonably rely on her skill at excluding alternative explanations of the data.
    2. It is not reasonable to rely on a skill if one has excellent, undefeated, evidence that one does not currently possess the skill.
    3. So, once Riika learns she has been awake 36 hours, she cannot reasonably infer from the observed data to the conclusion that the patient has dengue fever.

    If this explanation is correct, the case is not a counterexample to Change Evidentialism, and we do not need to appeal to level-crossing principles. Riika had to rely on her sensitivity to explanations she had not considered in order to have a justified diagnosis. Even though she is, in the circumstances, sufficiently sensitive to alternative explanations, she could not reasonably rely on that sensitivity when she has such good evidence that her skills are temporarily diminished. So her belief that the patient has dengue fever is unjustified.

    That is our explanation of why Riika loses knowledge, and loses reasonable belief, when she learns that she has been awake for 36 hours. But it isn’t the only possible explanation. There are, for example, explanations that appeal to level-crossing principles. Why should we prefer the explanation I just offered? As I’ll argue in the next section, the answer is that only this explanation in terms of skill can generalise to cover all of the cases.

    8.4 Explaining all Four Cases

    Let’s start with Raina. Unlike Riika, Raina needs neither specialist background information, nor expert insight, to form a diagnosis. There’s a guy with his hair on fire, and she comes to the belief that his hair is on fire. She perhaps needs the background information that burning hair burns and smells, and has a distinctive fiery appearance, but most adults will have that information ready to hand in case of emergency. So the kinds of evidence that are threatened by fatigue are not needed to form the judgment in Raina’s case. So she still knows, even in her fatigued state, that her patient’s hair is on fire. Since judging that the patient’s hair is on fire doesn’t require any particular skill, it doesn’t matter that her skills are diminished.

    Unlike Riika, Roshni didn’t have enough public information to conclude her patient had dengue fever. She needed the extra step that there are no other plausible explanations of the data. Since there are other plausible explanations of the data, she can’t know there are none. Hence it cannot be part of her evidence that there are none. Being fatigued might explain why one’s ‘insights’ do not really constitute evidence. But it can’t turn non-insights, and non-facts, into evidence. So even in her semi-fatigued state, Roshni still lacks sufficient evidence to diagnose her patient with dengue fever. So she still doesn’t know her patient has dengue fever, as we hoped to explain.

    We’ll spend much more time on this in chapter 11, but for now note one quick reason to suspect that Roshni’s credence that her patient has dengue fever should not move at all. Assume that she learns not just that residents who have been on duty 12–14 hours are systematically under-confident in their diagnoses, but that they remain so after making their best efforts to incorporate this information about their own under-confidence. And assume that Roshni should, on learning that she is part of a group that is systematically under-confident, increase her confidence in her preferred diagnosis. Now we have a perpetual confidence-increasing machine. Even once she has increased her confidence in light of the information about herself, she has reason to increase it again, since she is still in a group that is systematically too cautious in its judgments. And this fact persists no matter how hard she tries. But perpetual confidence-increasing machines, like perpetual motion machines, are absurd. The best place to stop this machine is at the very start. So Roshni should not increase her confidence at all. (I think this is intuitively the right thing to say about her case, but this argument is offered to those who don’t share the intuition.) And that in turn provides reason to not just believe the evidentialist explanation of Riika’s case, but to believe the ‘non-psychologised’ version of that explanation.
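
    To see why the machine never stops, here is a minimal computational sketch. It is a toy model of my own, not anything from the literature: purely for illustration, assume that learning one belongs to a systematically under-confident group warrants multiplying the odds on one’s preferred diagnosis by a fixed factor k > 1. Since group membership persists after each revision, the same boost is warranted again and again, and credence is driven toward 1 even though no new first-order evidence arrives.

        def boost(credence, k=1.5):
            # Multiply the odds on the diagnosis by k, as the (absurd)
            # update rule for under-confident groups would demand.
            odds = credence / (1 - credence)
            new_odds = k * odds
            return new_odds / (1 + new_odds)

        c = 0.6  # Roshni's initial credence that her patient has dengue fever
        for step in range(10):
            c = boost(c)
            print(f"after revision {step + 1}: credence = {c:.4f}")
        # The credence climbs toward 1 (roughly 0.99 by the tenth revision),
        # which is just the perpetual confidence-increasing machine.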

    The really tricky case, from this perspective, is Regina. She doesn’t need any skill in identifying possible alternative explanations of the data. She just needs to remember some facts from her medical training, make some straightforward observations, and perform a very simple logical deduction. Her tiredness does not affect her ability to make the observations or, I suspect, to do this deduction. A tired person may struggle to draw complicated consequences from data, but going from All Fs are Gs and This is F to This is G does not require particular skill.

    The big question is whether Regina can really rely on her memory when she is tired. It is helpful to think about this case by comparing it to the Shangri-La example developed by Frank Arntzenius (2003). Here is the slightly simplified version of the case that Michael Titelbaum sets out.

    You have reached a fork in the road to Shangri La. The guardians of the tower will flip a fair coin to determine your path. If it comes up heads, you will travel the Path by the Mountains; if it comes up tails, you will travel the Path by the Sea. Once you reach Shangri La, if you have traveled the Path by the Sea the guardians will alter your memory so you remember having traveled the Path by the Mountains. If you travel the Path by the Mountains they will leave your memory intact. Either way, once in Shangri La you will remember having traveled the Path by the Mountains. The guardians explain this entire arrangement to you, you believe their words with certainty, they flip the coin, and you follow your path. What does ideal rationality require of your degree of belief in heads once you reach Shangri La? (Titelbaum 2014, 120)

    The name of the person Titelbaum’s narrator is addressing isn’t given, so we’ll call him Hugh. And we’ll focus on the case where Hugh actually travels by the Mountains.

    There is something very puzzling about Hugh’s case. On the one hand, many philosophers (including Arntzenius and Titelbaum) report a strong intuition that once in Shangri La, Hugh should have equal confidence that he came by the mountains as that he came by the sea. On the other hand, it’s hard to tell a dynamic story that makes sense of that. When he is on the Path by the Mountains, Hugh clearly knows that he is on that path. It isn’t part of the story that the paths are so confusingly marked that it is hard to tell which one one is on. Then Hugh gets to Shangri La and, well, nothing happens. The most straightforward dynamic story about Hugh’s credences would suggest that, unless something happens, he should simply retain his certainty that he was on the Path by the Mountains.

    Resolving the tension here requires offering a theory of the epistemology of memory. And I have no desire to do that, any more than I had a desire in the ethics part of the book to offer a first-order ethical theory. What I am going to do is say why hard questions within the epistemology of memory are relevant to what we should say about Hugh’s case, and by extension Regina’s case.

    Some theories of memory are synchronic. Whether the agent’s mental state at time t makes it rational for her to believe that p, on the basis of her (apparent) memories, solely depends on the properties she possesses at t. There are two natural ways to fill in the synchronic theory. First, we could say that the agent’s faculty of memory outputs propositions that become, if it is a reliable faculty, evidence for the agent. (It’s presumably a gross oversimplification of the best cognitive and neural theories of how memory works in humans to describe it as a faculty, but we’ll have to work with such simplifications to get a broad enough view of the philosophical landscape.) Second, we could say that the apparent memories the agent has provide her evidence, and she can then reason using either what she knows about herself, or perhaps some default entitlements to trust herself that she possesses, to the truth of the contents of those memories.

    On either kind of synchronic theory, Hugh won’t know that he came to Shangri La via the mountains. If memory provides evidence directly, it does so only when it is reliable. And on this question, it is unreliable, since in nearby worlds it produces mistaken outputs. It’s true that there is nothing funky about the causal chain leading to Hugh’s memory. But on a synchronic theory of memory, the nature of the chain is not relevant; all that is relevant is the reliability of the output. And the output is not reliable. If, on the other hand, the evidence is something like the apparent memory Hugh has, then things are even worse. He knows that he can’t reason from his apparent memory to any claim about how he got to Shangri La, because in very nearby worlds his apparent memories are badly mistaken.

    Arntzenius argues that Hugh should have a credence of 0.5 that he came by the mountains as follows. (Assume Arntzenius is talking to Hugh here, so ‘you’ picks out Hugh.)

    For you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin was fair. (Arntzenius 2003, 356)

    That argument seems to presuppose that we are using the second, psychologised, version of the synchronic theory of memory. If we understand memories to be not just phenomenal appearances, but traces of lived experiences, then Hugh would very much not have the memories that he has either way. He might think that he had the same memories had he come by the sea, but he’d be wrong. Still, Arntzenius’s argument doesn’t seem to rely on this feature of memory. What it does seem to rely on is that in an important sense, Hugh would be the same right now however he had arrived at Shangri La. That is, it relies on a synchronic theory of memory. Sarah Moss (2012) makes a similar claim about the case. (Again, her narration is addressed to Hugh.)

    Intuitively, even if you travel on the mountain path, you should have .5 credence when you get to Shangri La that the coin landed heads. This is a case of abnormal updating: once you arrive in Shangri La, you can no longer be sure that you traveled on the mountain path, because you can no longer trust your apparent memory. (Moss 2012, 241–42)

    Again, the presupposition is not just that we have a synchronic epistemology of memory, but that the evidence memory provides comes from appearances. And, once again, the second presupposition does not seem to really matter. We would get the same result if we took memory to provide evidence directly, but only when it was reliable. What matters, that is, is the synchronic epistemology of memory.
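
    For what it’s worth, the synchronic verdict has a simple Bayesian rationale. (The formalisation is mine, not Arntzenius’s or Moss’s.) Let E be the only evidence the time-slice theories allow Hugh once he arrives: that he now has an apparent memory of the mountain path. The guardians guarantee that apparent memory whichever way the coin lands, so the likelihoods are equal and the posterior equals the prior:

    \[ P(\mathrm{Heads} \mid E) = \frac{P(E \mid \mathrm{Heads})\,P(\mathrm{Heads})}{P(E \mid \mathrm{Heads})\,P(\mathrm{Heads}) + P(E \mid \mathrm{Tails})\,P(\mathrm{Tails})} = \frac{1 \cdot 0.5}{1 \cdot 0.5 + 1 \cdot 0.5} = 0.5 \]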

    In recent work, Moss (2015) has developed a systematic defence of synchronic epistemology, what she usefully calls ‘time-slice epistemology’. And while she makes a good case for it, there is also a good case for a diachronic epistemology. Richard Holton (1999, 2014) has argued for diachronic norms of intention, and for understanding belief as being in important ways like intention. From these premises he concludes that there are diachronic norms on belief. David James Barnett (2015) has offered more direct arguments for adopting a diachronic epistemology of memory. So we should work through what happens in cases like Shangri La on a diachronic approach.

    It turns out that we quickly face another choice point. The cases we are interested in are ones where an agent knows p at an earlier time t1, and then this belief is preserved from t1 to a later time t2. The theoretical choice is whether this is sufficient for the agent to know p at t2, or whether the knowledge could be defeated by things that happen in the interim. If the knowledge could not be defeated, then Hugh knows he came by the mountains, for the obvious reason that he once knew this and has never forgotten it. If it can be defeated, then on any of the most obvious ways to incorporate defeat into the theory, Hugh’s claim to knowledge will be defeated. He is, after all, part of a group (explorers who arrive at Shangri La) who have very unreliable memories, and he knows that.

    Whatever we say about defeat here can be made consistent with Change Evidentialism2. Since we’re developing a diachronic epistemology, we should allow that evidence can be accrued over time. On the version of the theory where memories are indefeasible, Hugh’s evidence that he came via the mountains is his perception of the mountain path. This perception can be his evidence well into the future, as long as his memory does its job of preserving the visual evidence. (He could of course forget how he got to Shangri La, but we’re only discussing cases where beliefs are preserved throughout the relevant time period.) If memories can be defeated, the Change Evidentialist should say that the defeaters prevent the past perceptions from being current evidence. (In general, I think the evidentialist should say that defeaters prevent propositions from becoming part of one’s evidence. But defending that claim would take us too far afield.) If his evidence does include the contents of his perceptions while on the path, then he now knows that he came via the mountains; if it does not, he does not. Either way, it is the change or lack of change of evidence (and not merely his worries about his own reliability) that explains why he knows what he does.

2. Note that the key notion in the statement of Change Evidentialism is change of evidence, not accrual of evidence. Losing evidence matters too.

    I’ve described four theories of memory, two synchronic and two diachronic. On three of the four theories, Hugh does not know, indeed does not even have reason to be particularly confident, that he came by the mountains. On the fourth he does know this. I think that’s a reasonable stopping point; it’s left as a somewhat difficult philosophical question whether Hugh knows that he came via the mountains. But it’s not one we have to settle for the purposes of the big picture views I’ve been defending, since either answer to the philosophical question about memory is consistent with those views.

    And what we say about Hugh carries over to Regina’s case. The big issue is whether she (still) has the following two propositions as evidence.

    1. All lethargic patients with yellow eyes are jaundiced.
    2. All jaundiced patients should be treated with quinine.

    If she has 1 and 2, then she should treat her patient with quinine. This isn’t, or at least isn’t just, because 1 and 2 entail that she should treat her patient with quinine. It’s rather because these pieces of evidence provide strong and immediate support for the claim that she should treat her patient with quinine.

    Does she (still) have those propositions as evidence, or as something she can derive and use as evidence? On either synchronic theory of memory, she does not. Her apparent memory of 1 and 2 cannot ground an inference to the truth of 1 and 2, since she knows that she is unreliable given her fatigue. Alternatively, if memory delivers propositions like 1 and 2 directly, the fact that she is so fatigued right now will defeat memory’s claim to being a source of evidence. If we adopt a diachronic theory of memory, then what matters is whether we allow for (anything like) defeaters. If we do, her current fatigue is, probably, a defeater, so she again doesn’t know that her patient should be treated with quinine. But on the (not totally implausible!) diachronic theory that rejects defeaters, we get that she does know. I think this is the right result; Regina’s case is not as clear as Riika’s, and it is right that it turns on hard philosophical questions.3

3. In a recent paper (Weatherson 2015) I take a stand on some of these questions about memory in ways that go beyond what is necessary for rejecting level-crossing.

    If we explain Riika’s case using level-crossing principles, then we should say that Regina’s case does not turn on hard philosophical questions. On this approach, Regina’s case is easy. She can’t rationally believe that she rationally believes that the patient is jaundiced, so she can’t rationally believe that the patient is jaundiced. Now this seems to me to be the wrong result in Regina’s case. It’s wrong twice over; it says the wrong thing about Regina, and it says the case is easy when in fact it is hard. But because the question is hard, I don’t want to lean any argumentative weight on it. And I doubt that we should ever put much argumentative weight on intuitions about whether cases are hard or easy. Instead I’ll argue against the application of level-crossing principles to Riika’s case by comparing Riika’s case with Raina’s and Roshni’s.

    The level-crossing explanation of Riika’s case provides no resources to distinguish between Riika’s case and Raina’s. Both of them have reasonably responded to the evidence that is available. Both of them then get evidence that they are (temporarily) unlikely to be responding correctly to evidence. These facts are, in Riika’s case, held to be sufficient to explain why she should change her view. But they are features of Riika’s case that are shared with Raina’s case. Since Raina should not change her view on being told she has been awake for 36 hours, we need either something more, or something else. An explanation of Riika’s case based on level-crossing principles will over-generalise; it will ‘explain’ why Raina should change her mind too.

    Roshni is even more of a challenge for explanations that rely on level-crossing principles. Let p be the proposition The patient might not have dengue fever. At the start of the story, Roshni believes that, and rationally so. But then she gets evidence that she cannot rationally form beliefs like that given her state. So, if the level-crossing principle is true, then she should lose the belief in p. But if she thinks that the patient’s having dengue fever is at least very likely, and does not believe that it might be false, that sounds to me like she believes it. That is, the only way to comply with the level-crossing principles is to believe the patient does have dengue fever. And that conclusion is absurd.

    So Roshni is a counterexample to a lot of level-crossing principles. The following claims about her are true:

    • Roshni rationally believes that p.
    • Roshni could not rationally believe that she rationally believes that p.
    • Roshni should believe that her evidence does not support rational belief in p.

    And level-crossing principles are meant to rule out just those combinations. So Roshni’s case does not just undermine an abductive argument for level-crossing principles, it provides direct evidence that those principles are mistaken.

    8.5 Against Bracketing

    David Christensen offers a different response to these puzzles involving higher-order evidence. His theory is that higher-order evidence requires us to ‘bracket’ first-order evidence. Here is how he introduces the idea. (The background is that he is discussing a case where he did a logic problem, got the right answer, and then was told he took a drug that distorts most people’s logical abilities.)

    It seems to me that the answer comes to something like this: In accounting for the HOE (higher order evidence) about the drug, I must in some sense, and to at least some extent, put aside or bracket my original reasons for my answer. In a sense, I am barred from giving a certain part of my evidence its due. After all, if I could give all my evidence its due, it would be rational for me to be extremely confident of my answer, even knowing that I’d been drugged. In fact, it seems that I would even have to be rational in having high confidence that I was immune to the drug. By assumption, the drug will very likely cause me to reach the wrong answer to the puzzle if I’m susceptible to it, and I’m highly confident that my answer is correct. Yet it seems intuitively that it would be highly irrational for me to be confident in this case that I was one of the lucky immune ones. … Thus it seems to me that although I have conclusive evidence for the correctness of my answer, I must (at least to some extent) bracket the reasons this evidence provides, if I am to react reasonably to the evidence that I’ve been drugged.  (Christensen 2010, 194–95, emphasis in original)

    There are a few different arguments here that we need to tease apart.

    There is an argument that bracketing is needed because otherwise the narrator will have ‘conclusive’ evidence for the answer to the logic problem. This isn’t right; or at least it is misleading. In a sense seeing my coffee cup on my desk is conclusive evidence for the truth of any mathematical proposition. It does entail it. But it’s a terrible reason to believe, for example, Fermat’s Last Theorem. There is another sense of conclusive that is more relevant; whether some evidence provides epistemically conclusive reason to believe a conclusion. And mere entailment does not suffice for that.

    There is an argument, I think implicit in Christensen’s remarks, that if we allowed the first-order evidence to stand, we’d be licensing some improperly circular reasoning. That’s an interesting observation, and I’ll discuss it at more length in the next chapter.

    But what we’re interested in is the conclusion, that the original evidence must be bracketed or set aside in cases where higher-order evidence suggests we are likely to be making a mistake. And that conclusion, we can now see, can’t be right. It can’t be right because of Raina’s case and Roshni’s case. If Raina brackets her first-order evidence, she won’t have reason to put out the fire in her patient’s hair. But she has excellent, indeed compelling, reason to do that. And if Roshni brackets her first-order evidence, she will have sufficient reason to believe that her patient has dengue fever, and to start treating him. But she does not have sufficient reason to do that.

    These cases aren’t isolated incidents. They point to two general problems with the bracketing picture. It doesn’t distinguish between cases where evidence immediately supports a conclusion, and cases where the evidence supports the conclusion more indirectly. The latter cases, ones where the agent must use the initial evidence to derive more evidence, and then use the larger evidence set to support the conclusion, are cases where higher-order evidence matters. But the reason higher-order evidence matters in those cases is that higher-order evidence blocks those intermediate steps. Cases like Raina’s are different, but the bracketing story does not distinguish them. And the bracketing story can’t explain the existence of epistemic gravity, while the evidentialist explanation I’ve offered can.

    There are other cases that, while not as clear, seem to me cases the bracketing story cannot handle correctly. The following case is inspired by some examples presented by Jonathan Weisberg (2010).

    Jaga has been taking some medication. She knows that she has taken the medication for 22 days, and that she has taken 18 pills each day. She then learns some very worrying news. The medication is being withdrawn from sale because it has a striking effect on anyone who takes 400 or more pills; it makes them incredibly bad at arithmetic for several weeks. The effect is surprisingly sharp: anyone who has taken 399 or fewer pills is unaffected, but once one has taken the 400th pill, it kicks in with full force. (Yes, this is a very unrealistic case, but more realistic cases are possible, and would simply be more complicated to discuss.)

    Now Jaga is very worried. She knows that she has taken 22 times 18 pills. But she is unsure what 22 times 18 is. That’s not unreasonable; most of us wouldn’t know what it is off the top of our heads either, without doing the calculation. And one of the things that worries Jaga is that before doing the calculation, it seems pretty likely to her that it is greater than 400. And that isn’t unreasonable either. It’s wrong, but well within the reasonable range of error.

    So Jaga does the calculation. She works out that 22 times 18 is 20 times 18 plus 2 times 18, so it is 360 plus 36, so it is 396. Wonderful, she thinks, I haven’t taken too many pills. So I can do arithmetic well, as indeed I just did. That’s exactly the right attitude for Jaga to have. Her evidence does not actually show that she is bad at arithmetic. Before she sat down to do the calculations, she should have worried that she was bad at arithmetic. But now that she’s done the calculations, she knows better.
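
    Spelling out the arithmetic from the case confirms that Jaga’s answer, and hence her relief, is correct:

    \[ 22 \times 18 = (20 + 2) \times 18 = 360 + 36 = 396 < 400 \]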

    But note this isn’t what a defender of the bracketing view can say about Jaga’s case. There is a serious doubt about whether she is good at arithmetic, and relatedly about whether she has taken 400 or more pills. She can’t resolve that by appeal to her first-order evidence about whether she has taken 400 or more pills, since whether her calculations provide her with reason to believe that she’s taken 400 or more pills is exactly what is at issue. More formally, let p be the proposition that she’s taken fewer than 400 pills, and q be the proposition that she’s good at arithmetic. The intuition behind the bracketing view is that one can’t come to believe q by doing some arithmetic and trusting one’s answers. Yet that is exactly what Jaga has done, admittedly via the roundabout route of coming to believe p, and antecedently knowing that q is true iff p is true.

    The point of Jaga’s case is that bracketing has implications not just in cases where an agent gets evidence that does suggest she is irrational or unreliable, but also in cases where she gets evidence that might suggest that. And those implications are much less plausible than they are in the cases where the force and direction of the higher-order evidence is clearer. We’ll return to such cases extensively in chapter 10. The next priority, however, is to deal with the circularity worry. If we reject level-crossing principles, and accept Change Evidentialism, are we committed to accepting what are in fact bad kinds of circular reasoning?