10  Akrasia

The normative externalist seems to be committed to the following possibility. An agent, we’ll call her Aki, has been given excellent arguments in favour of a false sceptical thesis. For concreteness, we’ll assume the scepticism in question is testimonial scepticism. Nothing turns on the particular choice of sceptical thesis. But something does turn on whether there can be excellent arguments for any false sceptical thesis, and we’ll return to this assumption below. For now we’ll assume that Aki is confident that one cannot get reasons to believe propositions on the basis of testimony. And she is rational to be confident in this; it’s what her philosophical evidence supports. But, we’ll also assume, testimonial scepticism is false.

Aki now learns the proposition that a long-time friend, who has not lied to her in the past, said that p. She has weak probabilistic reasons to have greater credence in ¬p than p, but these are the kinds of background reasons that are routinely overturned by testimony. The details don’t matter, but if it helps to make the case concrete, imagine that p is the proposition that the home team won last night’s baseball game, when it was known in advance that the away team was stronger, and was favoured to win. Upsets happen all the time in baseball, so a friend’s testimony that the home team won should be only mildly surprising, and cause one to believe that the home team won. Since in this case the friend’s testimony was caused by the fact that the home team did indeed win, it is doubly true that one should believe the friend.

And this is what Aki does. Despite her philosophical leanings, she can’t bring herself to not believe what her friend says. That she can’t follow her own views in this way shouldn’t be surprising. The ancient sceptical texts are filled with both arguments for scepticism, and techniques for putting one’s sceptical conclusions into practice. It was never assumed that mere belief in a sceptical view would suffice for control over one’s mental states  (Morison 2014). Aki is just like the people that the ancient sceptics were writing for; people who believed their views but could not put them into practice.

And of course, it’s a good thing Aki does not have her theoretical doubts govern her beliefs. She gets a well-confirmed, and true, belief by trusting the testimony. Does she get knowledge? That’s a hard question, turning on whether one thinks that knowledge is incompatible with this kind of mistake by one’s own lights. I’m going to set that aside, and just focus on the fact that she gets a well-supported true belief. I think, though this is controversial, she gets a rational belief. So Aki is an epistemological case of what Arpaly calls inadvertent virtue. She forms the right belief, for the right reasons, while thinking these are bad reasons.

Normative externalism, of the kind I prefer, says that Aki is doing as well as she can in the circumstances. She is believing what her evidence supports. She violates a level-crossing principle, but since I’m arguing against level-crossing principles, I don’t take this to be a problem. Good for Aki, a paragon of rationality!

This take on Aki’s situation strikes many philosophers as implausible. Some philosophers go so far as to say that Aki’s situation is literally impossible; we cannot truly believe of Aki that she both believes p and believes that this is an irrational belief  (Hurley 1989; Owens 2002; Adler 2002). Many others think that Aki is possible but irrational; rationality requires that Aki keep her first-order and higher-order beliefs coherent, so if she has this combination of beliefs, she is irrational  (Hookway 2001; Ribeiro 2011; Smithies 2012; Greco 2014; Horowitz 2014; Titelbaum 2015; Littlejohn 2018).

So we get the following argument.

  1. If normative externalism is true, then some akratic attitudes are rational.
  2. No akratic attitudes are rational.
  3. So, normative externalism is false.

The short version of my response is that there is no understanding of ‘akratic’ that makes this argument plausible. We have to have a fairly expansive understanding of what akrasia is for premise 1 to be true. And on that understanding, premise 2 is implausible.

Note I’m using ‘attitude’ in a fairly expansive sense here. If one believes p and believes that it is irrational to believe p in one’s situation, I’ll call that combination an akratic attitude. This is perhaps non-standard - maybe we should say that’s a pair of attitudes that only become a single attitude if one forms the conjunctive belief that p is true and irrational to believe. But distinguishing belief in a conjunction and belief in each conjunct would be needlessly distracting in this context. Put in other terminology, the best version of premise 2 will be a ‘wide-scope’ principle, saying that it is irrational to both believe p and believe that this very belief is irrational or otherwise defective.

10.1 The Possibility of Akrasia

I’m going to mostly assume that it is at least possible to, as Aki does, hold a belief while believing that very belief is in some way improper. I’ve tacitly given the argument for that assumption already. It draws on a very similar argument by Brian Ribeiro (2011). In practice there is a gap between, on the one hand, coming to accept a sceptical argument and being motivated to adjust one’s mental life around it, and, on the other, making those adjustments effectively. The very existence of Pyrrhonian techniques for resisting belief in propositions that one’s theory says one should not believe is evidence of this gap. Anyone who falls into that gap, like Aki, will be akratic.

Could it be said that Aki doesn’t really believe that sceptical arguments work? As David Owens (2002) points out, we don’t want to just rely on Aki’s firm avowals that she endorses testimonial scepticism; it takes more than talk to form a belief. But if Aki says that she endorses the sceptical arguments, and she tries to convince others of them, and she, for example, carefully studies Sextus Empiricus for strategies for putting her testimonial scepticism into effect, it seems plausible that she really does believe in testimonial scepticism. And that’s true even if she lacks whatever it would take to put this sceptical doubt into full practice.

Is Aki, so described, akratic? Owens says that she is not, because she does not freely and deliberately choose to believe that the home team won last night, against her better judgment. Most other authors say, or perhaps just assume, that epistemic akrasia does not require freely and deliberately choosing one’s beliefs. I’m not going to take a stand on the substantive question here. If we’re trying to find a plausible version of the anti-externalist argument, it is best to not use ‘akrasia’ the way Owens does. That’s because given Owens’s usage, premise 1 is clearly false. Normative externalism makes no commitments at all concerning what it is rational to freely and deliberately believe. So let’s assume we’re working with a notion of akrasia that is not so demanding, and in particular that ‘akrasia’ applies to all cases where an agent believes against their better judgment.

10.2 Three Level-Crossing Principles

But even that characterisation is unclear on a key point. Here are three formulations of anti-akrasia principles that you could read as precisifications of the idea.

  • “No situation rationally permits any overall state containing both an attitude A and the belief that A is rationally forbidden in one’s current situation.”  (Titelbaum 2015, 261)
  • “It can never be rational to have high confidence in something like P, but my evidence doesn’t support P.”  (Horowitz 2014, 718)
  • “If we use Cr for an agent’s credences and Pr for the credences that would be maximally rational for someone in that agent’s epistemic situation [then] Cr(A | Pr(A) = n) = n”  (Christensen 2010, 122)

Titelbaum calls the principle he puts forward the ‘Akratic Principle’. I don’t want to use that name because part of what we’re discussing is whether it is the most helpful way to understand akrasia. So I’ll just call it Titelbaum’s principle. Horowitz calls her principle the ‘Non-Akrasia Constraint’. For similar reasons, I’ll instead call it Horowitz’s principle. The principle Christensen puts forward is commonly called Rational Reflection, and I’ll follow that usage.

Rational Reflection is, in practice, considerably stronger than Titelbaum’s principle. Imagine that Aki is having doubts about her testimonial scepticism. She doesn’t fully endorse it. But she is still pretty confident in it; her credence in testimonial scepticism is 0.9. And she thinks that if testimonial scepticism is right, then the rational credence in the proposition that the home team won last night is below one-half. But she still has a very high confidence that the home team won, while thinking this is most likely irrational. This is a violation of Rational Reflection, but not of Titelbaum’s principle. After all, there is no attitude such that Aki both has it and believes that it is irrational to have it.

That doesn’t show that Rational Reflection is logically stronger than Titelbaum’s principle. Maybe there are states that violate Titelbaum’s principle but not Rational Reflection. Whether this is so turns out to depend on difficult questions about the relationship between credence and belief. I’m not going to get into those questions here, in part because I have rather idiosyncratic views on them. On almost all theories about that relationship, however, it is impossible to violate Titelbaum’s principle without violating Rational Reflection. That’s what I mean by saying that in practice, Rational Reflection is a stronger principle.

Whether Rational Reflection is also stronger than Horowitz’s principle is a little less clear. At first glance, it seems like it must be. Imagine someone whose credences are given by the following table:

| Proposition | Credence |
|---|---|
| p | 0.7 |
| The rational credence for me to have in p is 0.7 | 0.9 |
| The rational credence for me to have in p is 0 | 0.1 |
Such an agent violates Rational Reflection. Rational Reflection implies that an agent’s credence in a proposition equals their expectation of the rational credence in that proposition. And the agent’s expectation of the rational credence in p is, from the last two rows of the table, 0.63. But on the face of it, it doesn’t look like they violate Horowitz’s principle. There is no proposition they are both confident in, and confident their evidence does not support. So it looks like Rational Reflection is stronger than Horowitz’s principle too. But the arguments below concerning iterated cases may cause us to doubt whether that’s ultimately the case.
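Spelled out, the expectation calculation behind that 0.63 figure (my arithmetic, using the two hypotheses about the rational credence to which the agent gives any credence) is:

\[ 0.9 \times 0.7 + 0.1 \times 0 = 0.63 \neq 0.7 \]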

My view is that all three of these principles are false. It’s a little trickier to say exactly which of the principles are inconsistent with normative externalism, and so must be rejected by anyone who accepts normative externalism. The simplest thing to say here uses the framework developed at the end of Part I, concerning core and peripheral commitments of normative externalism.

It is a core commitment of normative externalism that Rational Reflection is false. Rational Reflection offers a bidirectional link between what it is rational to believe, and what one believes about what it is rational to believe. And, at least as I read the proponents of the principle, the direction of explanation goes (at least in part) from the subject’s beliefs about what is rational to facts about what is rational.

Just what to say about Horowitz’s principle and normative externalism is less clear, because we need to see exactly how it applies in some tricky cases to get a sense of its scope. We’ll return to this below.

On the other hand, it is a relatively peripheral commitment that Titelbaum’s principle is false. Titelbaum’s principle is only a one-way connection. And it is at least possible to endorse it while thinking the order of explanation goes in an externalist-friendly way. One might think that if one believes A, it is irrational to believe that it is irrational to believe A in part in virtue of having that very first-order belief. So there is at least a version of Titelbaum’s principle for which the answers to all of the questions posed at the end of Part I are “No”, and that makes it an extremely peripheral violation.

We get this very externalist-friendly version of Titelbaum’s principle if we think that rational beliefs must be true, at least when the belief is about the normative. Why might we think that? One way to motivate that view is to start with the arguments given by Clayton Littlejohn (2012) that only true beliefs can be justified, and try either to reason from there to the conclusion that only true beliefs are rational, or to amend the arguments so that that conclusion falls out. But another way is to argue that there is something special about normative beliefs. While descriptive beliefs can be false and rational, normative beliefs cannot. That is the lesson Titelbaum draws from his principle (which, remember, he calls the ‘Akratic Principle’).

Ultimately, we need a story that squares the Akratic Principle with standard principles about belief support and justification. How is the justificatory map arranged such that one is never all-things-considered justified in both an attitude A and the belief that A is rationally forbidden in one’s current situation? The most obvious answer is that every agent possesses a priori, propositional justification for true beliefs about the requirements of rationality in her current situation. An agent can reflect on her situation and come to recognize facts about what that situation rationally requires. Not only can this reflection justify her in believing those facts; the resulting justification is also empirically indefeasible.  (Titelbaum 2015, 276)

But even if Titelbaum’s principle were true, it wouldn’t support a conclusion nearly that strong. The inference here is of the form: Agents can’t rationally form false beliefs about a particular topic, so agents have a priori justification for all possible true beliefs about that topic. And there are all sorts of ways to block that. We could say that all rational beliefs are true, as noted. Or we could simply say that for this topic, the truth of a proposition is a reason to believe it that is always strong enough to defeat rational justification to fully believe its negation. There are a lot of spaces between the claim that a proposition has a priori justification that can never be overridden, and the claim that that proposition can never be rationally believed to be false.

The upshot is that there are two distinct ways out, for the externalist, from the challenge posed by akrasia. One could adopt an extremely externalist epistemology of normative beliefs, as Titelbaum does. That approach accepts that akrasia is irrational, but denies that the core commitments of externalism entail that akrasia may be rational. Or one could accept that some forms of akrasia, such as violations of Rational Reflection, are rationally possible, and deny that they are problematic. I’m going to take this second path. That’s in part because it gives us a stronger form of externalism, and I want to show how a strong form of externalism may be defended. And it’s in part because that’s the path I think is correct. Let’s turn, then, to reasons that have been given for thinking that all forms of epistemic akrasia are problematic.

10.3 Why Not Be Akratic

I’m going to briefly discuss a simple, but bad, argument for thinking that all akratic agents are irrational. I’ll call this the Argument from the Ideal. I don’t think anyone in the current literature endorses this argument, so it should be uncontroversial that it fails. Indeed, I suspect it is relatively uncontroversial why it fails. But working through the argument will be helpful for getting to our main task, discussing the Argument from Weirdness. This argument turns on the following premise.

Weirdness is Irrational
Akratic agents will say or do weird things, and only irrational agents would say or do those weird things.

I think Weirdness is Irrational is false, but the following similar principle is true.

Weirdness is Non-Ideal
Akratic agents will say or do weird things, and no ideal agent would say or do those weird things.

Different forms of the argument from weirdness will occupy the rest of the chapter, and in every case my reply will have this form. Akratic agents do some odd things, weird things even, but this is evidence of their not being ideal, not of their being irrational.

But let’s start with the Argument from the Ideal. Imagine a perfect agent, who is all knowing and perfectly good. For convenience, call this agent God. God will never be akratic. That’s because God only believes things that are strongly supported by His evidence, and only believes truths, so He believes (truthfully!) that everything He believes is supported by His evidence. This suggests a simple argument.

  1. God is not akratic.
  2. Rational people will, so far as they can, replicate God’s properties.
  3. So rational people will not be akratic.

The problem is that premise 2 has any number of counterexamples. As well as not being akratic, God is opinionated. By this, I mean that for any p, God will either believe p or believe ¬p. (I’m assuming here that if God exists then a kind of realism is true.) Does it follow that all rational people are opinionated? No, of course not. I don’t know what the weather is like where you, dear reader, are. In many cases, I don’t even know who you are, or when you are reading. So far, we might think this is just a failure of omniscience. But it doesn’t follow that rationality requires that I be opinionated about who you are, where you are, when you are, or what the weather is like there then. Indeed, rationality requires that I not be opinionated about these questions. And that’s true even though I know that if I were ideal, I would be opinionated.

The point is not just that premise 2 of the Argument from the Ideal is false. It’s that once we have the distinction between what would be ideal, and what would be rational in non-ideal circumstances, we can see how a lot of other arguments fail too. So let’s start working through some of the Arguments from Weirdness with this distinction in mind.

It is plausible that in Aki’s situation, where she believes p and believes the evidence does not support it, she should say ‘p, but my evidence does not support p’. And this kind of Moore-paradoxical utterance is absurd, say some philosophers (Smithies 2012; Greco 2014); it’s not something a rational person could say. And it’s certainly weird, and non-ideal. But we can see that it could be rational by working through some other non-ideal cases.

Bulan isn’t sure who she is. She is highly confident that Bulan’s evidence is EB. This is rational, though not quite right. She knows that EB is weak evidence for q, and that her evidence is EA, and that EA is good evidence for q, and that q is true. And that’s all good, because all of those things are true. She says ‘q, but Bulan’s evidence does not support q’. It’s hard to see what’s wrong with that claim, and indeed even opponents of epistemic akrasia should not say it is irrational. It’s only the distinctively first-personal claim, the one that we get when Bulan thinks her attitude is mistaken under a first-personal mode of presentation, that is problematic. That’s interesting in itself; the Argument from Weirdness seems to rely on a view about the distinctiveness of first-personal thought and talk. So there is a potential line of defence for the normative externalist that denies the critic’s assumption that first-personal belief is special (Cappelen and Dever 2014). But let’s grant the assumption that first-person thought and talk is special, and see what other ways we can raise problems for the Argument.

Imagine that Bulan now learns who she is. Since she can’t hold on to all of the claims that she is Bulan, that her evidence is EA, and that Bulan’s evidence is EB, she drops the middle claim. She instead holds on to the first and third claims, and infers that her evidence is EB. Since she knows that EB is weak evidence for q, she now believes that her evidence for q is weak. But since the fact that she is Bulan is no evidence against q, she also holds onto her belief that q. So now she thinks ‘q, but my evidence does not support q’. And this is meant to be problematic, at least according to some opponents of epistemic akrasia. But it isn’t at all clear which step was mistaken. I think that proponents of the Argument from Weirdness have to say that at the last step, one of two things must happen. Either Bulan must not resolve the tension in her beliefs by dropping the belief that her evidence is EA, or she must take the fact that she is Bulan to be a reason to lose her belief in q, although her identity is probabilistically independent of whether q is true. Neither option seems appealing, and it’s striking that proponents of the argument are forced into this choice.

Let’s go back to the question of just what Aki (or Bulan) should say, given their beliefs. Even if epistemic akrasia is possible, it doesn’t immediately follow that rational agents will make these weird utterances. If it is only appropriate to say things if one knows them, as Williamson (2000) argues, and one can only know something if one’s evidence supports it, then it can never be appropriate to say ‘p, but my evidence does not support p’. If one knows one’s evidence does not support p, then by the factivity of knowledge, one’s evidence does not support p, so one does not know p, so one should not assert it. On this view, Aki shouldn’t say ‘My evidence does not support p’, even if that proposition is supported by her evidence.

We don’t need anything as strong as the rule ‘Only say what you know’ to make the argument of the last paragraph work. Assume that for descriptive claims, the rule is ‘Only say what your evidence supports’, and for normative claims the rule is ‘Only say what is true’. Then if p is descriptive, it won’t be permissible for Aki to say ‘p, but my evidence does not support p’. She will be able to say this if p is itself a normative claim. But the evidence that her assertion would be absurd in such cases is weak; there seem to be cases where this is exactly the right thing for her to say (Maitra and Weatherson 2010).

Horowitz (2014) carefully designs her principle so as to ensure that the arguments for it can’t be so easily deflected. Imagine that Aki is more careful not to commit to anything that might be false. So she says ‘I’m confident that p, and I’m confident my evidence does not support p’. It is not plausible to say that one should only be confident in a proposition, or should only announce one’s confidence in that proposition, if one knows the proposition to be true. For every lottery ticket in a large, fair lottery, I’m confident it will lose, yet I can’t know of each ticket that it will lose. (Perhaps I can’t know of any ticket that it will lose.) Horowitz argues that even this qualified utterance of Aki’s is defective.

Notably, she doesn’t just argue for this on the basis of intuitions about how weird the assertion itself sounds. There is a good dialectical reason for her to proceed this way. The anti-akratic thinks that it is wrong to be confident both in p and in the proposition that the evidence for p is not strong, no matter which proposition p is, and no matter what the agent’s background. It’s hard to see how getting intuitions going about a few token utterances could support a universal generalisation that sweeping. So Horowitz offers some more careful arguments, ones that have at least the potential to generalise in the needed way.

Horowitz argues that Aki should be in a position to conclude, on the basis of her evidence, that her evidence is misleading, and that she was lucky to become so confident in the truth. And this, Horowitz thinks, is wrong. One needs independent reason to think that one’s evidence is misleading, so it’s wrong for Aki to conclude that on the basis of this very evidence. But that last premise seems too strong. Sometimes parts of one’s evidence can be sufficient ground for thinking one’s overall evidence is misleading. That’s indeed what happens in Aki’s case. There is no one part of her evidence that is both grounds for something and (complete) grounds for thinking those very grounds are misleading. The internal relations between the different parts of her evidence provide all the independent support we need for a reasonable judgment that other parts are misleading.

Horowitz has another argument that Aki will be in an untenable position. Imagine she is offered a bet that wins a small amount if p is true, and loses a larger amount if it is false. Aki takes the bet, as she should given that she has excellent reason to believe p is true. But if she is then asked why she is doing this, she’ll say that she should not be doing it; she has no good reason to believe the bet will win. Is this, doing something while saying one should not be doing it, problematic? Once we’ve seen other cases of inadvertent virtue, we can see why the answer is no. Huck Finn should help Jim escape, and should say he’s doing the wrong thing while doing so. Aki’s predicament is no worse.

Recently, Clayton Littlejohn (2018) has argued for an anti-akrasia view by suggesting that Aki would end up with a distinct kind of untenable attitude. He imagines a conversation between Aki and her epistemic conscience with the following punchline. (Note that in Littlejohn’s example, the first-order evidence supports not believing p, and the higher-order evidence supports believing p. This is the reverse of the case I started with, but that doesn’t matter much. What matters is that the levels diverge, and Aki follows the first-order evidence.)

EC: You agree that it’s irrational for you not to believe p. You agree that it’s rational for you to agree on this point. You acknowledge that you don’t believe p. You just don’t yet see that this calls for any sort of change.
Aki: Right.  (Littlejohn 2018, 12, reference to preprint)

And this last statement of Aki’s is untenable, thinks Littlejohn. And I suspect he is right about that. But it doesn’t matter, because that’s not what Aki should be saying. She should say that there is a “call for change”, and she should think that there is such a call. After all, she thinks that she is not following her evidence, and that one should in general follow one’s evidence. At the very least, that seems like reason to stop and have a think about how one got into this situation, and see if there wasn’t some big mistake made along the way.

If Aki doesn’t stop and reflect on her odd situation, that would be somewhat strange behaviour. But even the normative externalist can say that she should stop and reflect. It’s true that she isn’t doing anything wrong. But whether one should stop and reflect is not entirely a function of whether one is doing anything particularly wrong. If one’s cognitions or activities (or the conjunction of these) resemble those of people who are making mistakes, one has a reason to think through what one has done. Of course, if Aki were ideal, she wouldn’t need to stop and reflect, since she would know she is responding optimally to being in a strange situation. But if she were ideal, i.e., if she were God, she wouldn’t be in that situation in the first place.

So we still haven’t seen anything that Aki should do or say, given normative externalism, that is weird in a way that is inconsistent with rationality. She should perhaps say one thing and do another, just like Huck Finn. And she should say that Aki’s evidence doesn’t support what she herself believes, just like Bulan (in the original case) should say that Bulan’s evidence doesn’t support what she herself believes. But Huck Finn, and Bulan, aren’t problematic. And the attempts to get Aki to say weirder things so far haven’t worked; they’ve got her making assertions that violate norms of assertion even by the externalist’s lights.

10.4 Self-Awareness and Rational Reflection

In the previous section I argued that there was nothing distinctively weird about akratic agents. The weird things they say and do are things that other non-ideal but rational agents also say and do. In this section I’ll continue the argument a little, with more focus on two particular principles. I’ll argue for the following two claims:

  1. Cases where agents do not know exactly what their situation is generate counterexamples to Rational Reflection, and to Horowitz’s principle.
  2. There is no reason to believe that these principles hold in cases where agents do know what their evidence is, since there is no reason to think that violations of the principles are more problematic in cases where agents do know what their evidence is.

I’ll start with two relatively plausible assumptions:

  1. What attitudes it is rational for an agent to have depend on features of her situation that vary from agent to agent and time to time.
  2. The features that are relevant in point 1 are not luminous; agents might possess them without knowing that they do.

My view is that the ‘features’ in assumption 1 are just the agent’s evidence, but I’m not assuming that. I’m just assuming that what’s rational depends on the circumstances.

Assumption 2 follows from the anti-luminosity arguments introduced by Williamson (2000), and defended recently by Hawthorne and Magidor (2009, 2011) and by Srinivasan (2015). I don’t need the full-blown anti-luminosity principle to complete the argument. All I need is that luminosity fails for some of the features that are relevant to rational belief. So if there are some luminous states, as I’ve argued elsewhere (Weatherson 2004), that won’t matter unless all features relevant to rationality are luminous. And that’s not particularly plausible.

Even if all rational agents know exactly what is rationally required in all possible situations, as Titelbaum argues they do, there will still be failures of Rational Reflection. That is because an agent need not know what situation they are actually in. It is possible for an agent to have perfect knowledge of the function from situations to the rational status of states in such a situation, and not know what is rational for them. If rather extreme rational states are only permissible in rare situations, and the agent is in such a rare situation, then Rational Reflection will fail.

The abstract possibility described in the previous sentence is realized in Williamson’s case of the unmarked clock  (Williamson 2011, 2014). I’ll work through Horowitz’s variant, her case of the unmarked dartboard, because it provides a useful platform for setting up Horowitz’s criticisms of the example, and my reply.

A dart is thrown at a dartboard that is infinite in height and width. The dartboard has gridlines on it running up-down and left-right. Due to magnets in the dart and the board, we know in advance that it will land on the intersection of two gridlines. The agent, we’ll call her Siiri, can almost, but not quite, make out where it lands, and she knows in advance this will be the case.

Say that the ‘distance’ between two grid points ⟨x1, y1⟩ and ⟨x2, y2⟩ is |x1 - x2| + |y1 - y2|. This is not the straight-line distance between the points; it is the shortest path between them on gridlines. Siiri knows in advance that if the dart lands on ⟨x, y⟩, then she’ll know it is on ⟨x, y⟩ or one of the four points distance 1 away from it. And she knows in that situation it will be rational to have equal credence that it is on each of those five points.

Assume the dart lands on ⟨8, 3⟩, and consider her credence in the proposition that it is on ⟨7, 3⟩, ⟨8, 4⟩, ⟨9, 3⟩ or ⟨8, 2⟩. Call that proposition p. After getting visual evidence of where the dart is, her credence in p should be 0.8. But she should have credence 0.8 in p iff the dart is on ⟨8, 3⟩, and credence 0.2 in p if the dart is on any of the other four points she thinks it might be on. So given her situation, the expected rational credence in p is 0.32. So Rational Reflection fails, even though Siiri knows exactly the function from situations to rational credences.
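Spelling that expectation out (my arithmetic): Siiri gives credence 0.2 to the dart being on ⟨8, 3⟩, where the rational credence in p would be 0.8, and credence 0.8 to its being on one of the other four points, where the rational credence in p would be 0.2. So her expectation of the rational credence in p is

\[ 0.2 \times 0.8 + 0.8 \times 0.2 = 0.32 \]

which is well below her actual, and rational, credence of 0.8.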

Horowitz argues that this is a special case. She thinks that a restricted version of Rational Reflection can be crafted that is immune to such a counterexample. There is something odd about the example. We’re interested in a proposition p that is in a very odd class. Consider all propositions of the form ‘the dart lands distance 1 from point ⟨x, y⟩’. Siiri knows in advance that she will be very confident in such a proposition iff it is false. And that is odd. Here is how Horowitz puts the point. (Note that I’ve adjusted the terminology slightly to match what’s here, and what she calls ‘akrasia’ is being highly confident in ‘p, but my evidence doesn’t support p’.)

In Dartboard, however, the evidence is not truth-guiding, at least with respect to propositions like p. Instead, it is falsity-guiding. It supports high confidence in p when p is false—that is, when the dart landed at ⟨8, 3⟩. And it supports low confidence in p when p is true—that is, when the dart landed at ⟨7, 3⟩, ⟨8, 4⟩, ⟨9, 3⟩ or ⟨8, 2⟩. This is an unusual feature of Dartboard. And it is only because of this unusual feature that epistemic akrasia seems rational in Dartboard. You should think that you should have low confidence in p precisely because you should think p is probably true—and because your evidence is falsity-guiding with respect to p. Epistemic akrasia is rational precisely because we should take into account background expectations about whether the evidence is likely to be truth-guiding or falsity-guiding.  (Horowitz 2014, 738, notation altered, emphasis in original)

Surprisingly, it isn’t essential to the example that the evidence is falsity-guiding in Horowitz’s sense. This feature of the case is a byproduct of its simplicity; more complicated cases don’t have this feature.

Imagine instead that when the dart lands at a particular spot ⟨x, y⟩, all spots whose distance from ⟨x, y⟩ is 10 or less are open epistemic possibilities for Siiri. But they are not equal possibilities; her probability distribution is peaked at ⟨x, y⟩ itself. For any grid point distance d from ⟨x, y⟩, her posterior probability that it landed there is:

\[ \frac{4^{10-d}}{2,912,696} \]

The denominator there is just what’s needed to make the probabilities add to 1. The intuitive idea is that for each step further away from the centre, the probability of the dart being in that particular cell falls by a factor of 4. Now assume again that the dart lands on ⟨8, 3⟩, though of course Siiri does not know this, and let q be the proposition that the distance between the dart and ⟨8, 3⟩ is either 0 or 3.

The evidence is not falsity-guiding with respect to q. Given what we said about Siiri, among the worlds that are epistemically possible for her, her credence in q would be higher if q were true than if it were false. More precisely, her credence in q would be somewhere between 0.413 and 0.44 if she were in one of the worlds that made q true, and at most 0.307 if she were in one of the worlds that made q false. (The calculations needed to confirm the facts I run through about this example are tedious, but trivial to verify with a computer.) The evidence supports higher confidence in q when q is true than when q is false. That’s unlike the original example. But this case also generates violations of Rational Reflection. Siiri’s credence in q is about 0.4275, but her expectation of the rational credence in it is about 0.3127.
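These figures are indeed quick to check with a computer. Here is a minimal sketch (my code, not anything from the text) under the assumptions just given: the dart lands on ⟨8, 3⟩, the rational credence for a dart landing at any spot puts weight proportional to 4^(10-d) on each point at distance d ≤ 10 from that spot, and q says the dart is at distance 0 or 3 from ⟨8, 3⟩.

```python
# Minimal sketch of the peaked dartboard case (not from the text).

CENTRE = (8, 3)
RADIUS = 10

def dist(a, b):
    # Taxicab ("gridline") distance between two grid points.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def rational_credences(landing):
    # The credence function that is rational when the dart lands on `landing`:
    # weight 4**(10 - d) on each point at distance d <= 10, then normalise.
    lx, ly = landing
    points = [(lx + dx, ly + dy)
              for dx in range(-RADIUS, RADIUS + 1)
              for dy in range(-RADIUS, RADIUS + 1)
              if abs(dx) + abs(dy) <= RADIUS]
    weights = {p: 4 ** (RADIUS - dist(p, landing)) for p in points}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}, total

def in_q(point):
    return dist(point, CENTRE) in (0, 3)

cr, normaliser = rational_credences(CENTRE)            # Siiri's actual credences
print(normaliser)                                      # the denominator above: 2,912,696
print(sum(c for p, c in cr.items() if in_q(p)))        # her credence in q, ~0.4275

# The credence in q that would be rational at each landing spot she leaves open.
rational_cr_in_q = {w: sum(c for p, c in rational_credences(w)[0].items() if in_q(p))
                    for w in cr}
print(min(v for w, v in rational_cr_in_q.items() if in_q(w)),
      max(v for w, v in rational_cr_in_q.items() if in_q(w)))      # ~0.414, ~0.439
print(max(v for w, v in rational_cr_in_q.items() if not in_q(w)))  # ~0.306

# Her expectation of the rational credence in q.
print(sum(cr[w] * rational_cr_in_q[w] for w in cr))                # ~0.31
```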

Now you might think that’s not a huge difference. Perhaps this is a counter-example to Rational Reflection, but not to Horowitz’s principle that it is irrational to be highly confident in a proposition while also being highly confident that one is irrational to be so confident. But if we iterate the example, we get a counterexample to that principle too.

Imagine Siiri starts off (rationally) certain that repeated throws at the board are independent. And imagine that the dart is removed after each throw, so she can’t see that successive darts land at the same spot. And imagine that her ability to detect where it lands doesn’t improve, indeed doesn’t change, over repeated throws. Finally, imagine (somewhat improbably!) that repeated throws keep landing on ⟨8, 3⟩. Let r be the proposition that at least 35 percent of throws land either distance 0 or distance 3 from ⟨8, 3⟩. As the number of throws increases, she should get more and more confident that r is true, and more and more confident that it is irrational to think that it is true. After 100 throws, for example, her credence in r should be over 0.95, but her expectation of the rational credence in r should be under 0.25. This kind of iteration of examples can be used to turn any dartboard-like counterexample to Rational Reflection into a counterexample to Horowitz’s principle.
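The first of those two figures is easy to check directly. From Siiri’s point of view each throw is an independent trial with probability roughly 0.4275 of landing at distance 0 or 3 from ⟨8, 3⟩, so her credence in r after 100 throws is an exact binomial tail. A minimal sketch (mine; the 0.25 figure requires the fuller grid computation above and is not checked here):

```python
from math import comb

# Siiri's credence that at least 35 of 100 throws land at distance 0 or 3 from
# (8, 3), treating the throws as independent with per-throw credence ~0.4275.
p, n = 0.4275, 100
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(35, n + 1))
print(tail)  # compare with the 'over 0.95' figure in the text
```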

10.5 Akrasia and Odd Statements

So Horowitz’s explanation of why cases like Siiri’s are special, that they are cases where agents know the evidence is not truth-conducive, doesn’t work. And that casts doubt on any attempt to separate Aki’s case from Siiri’s.

A large part of the motivation for thinking Aki’s state is irrational is that Aki says weird things, like ‘p is true, although my evidence supports p being false’. But Siiri says similar things, and they are the right things for Siiri to say. So the very fact that Aki says them can’t show that her position is incoherent; she is, in this respect, just like the perfectly coherent (if unfortunate) Siiri.

Siiri might regard it as a lucky break that she has a true belief despite not following her evidence. Of course, Aki could feel the same way. She should think that the home team won, think that her evidence doesn’t support this, and from those claims think it is lucky that she has a correct belief despite not following the evidence. But Siiri will think something structurally similar. Horowitz argues that Siiri doesn’t have to regard herself as implausibly lucky. In the original version of the case, Siiri knows the evidence is not truth-conducive, so it isn’t a lucky break that not following the evidence (as it seems) leads to truth. But in the revised case, Siiri has to think she’s just as lucky as Aki. And if it is reasonable for Siiri to think she is lucky, it is also reasonable for Aki to think she is.

Let’s take stock. Siiri’s case shows that Rational Reflection fails, and that it can be rational to be confident in something while also being confident that one’s evidence does not support this view. It does not show that it can be rational to be confident in a falsehood about what rationality itself requires, as opposed to what one’s situation is. That is, one could be certain about all the truths about what rationality requires in each situation, and still end up like Siiri. Indeed, we assumed she was certain about all the truths about what rationality requires in each situation, and still got a strange result falling out. So Siiri’s case does not directly tell against the most plausible version of Titelbaum’s principle.

But the arguments for Titelbaum’s principle (or anything like it) are all Arguments from Weirdness. And Siiri’s case does undermine the force of those arguments. For she says a lot of weird things too, and they are the right things to say. So the fact that violations of Titelbaum’s principle will lead to people saying weird, akratic things is no reason to think that Titelbaum’s principle is a requirement of rationality. In weird situations, rational people are weird. Ideal people aren’t weird, but that’s only because they know things about their situation that are hidden from normal, rational people. Normative externalism does imply that rational people will sometimes be akratic, and weird, and non-ideal. But none of that is surprising; the kinds of weirdness and non-idealness we see are just what we should independently expect in rational, but non-ideal, people.

10.6 Desire as Belief (Reprise)

The dartboard example is relevant to more than debates over akrasia. It also helps illustrate a point I alluded to frequently in Part I, without ever setting it out in detail. Proponents of the idea that moral uncertainty matters to rational decision making seem to be committed to a kind of ‘desire as belief’ thesis. David Lewis (1988, 1996) raised some technical problems for such theories, and recently those problems have been expanded by Russell and Hawthorne (2016). I’m not going to add anything to the arguments they have offered. But I think it might be helpful to translate those arguments into the idioms that are more familiar in the moral uncertainty debates, since participants in that debate have not always appreciated the significance of these formal results. The only philosopher I know who has connected the moral uncertainty debates with the desire as belief debates is Ittay Nissan-Rozen (2015), and he takes an externalist position on moral uncertainty. My focus will be on the argument Russell and Hawthorne give, because it would be too much of a digression to investigate whether the ‘desire by necessity’ response that Huw Price (1989) gives to Lewis’s arguments is successful.

Let’s assume that we want moral uncertainty to play an important role in decision making. Then we should be able to provide some kind of semantics for claims about moral uncertainty. In particular, we would like a semantics for claims of the form ‘A is better than B’ that satisfies the following four constraints.

  1. Claims like ‘A is better than B’ should be the kind of thing that can be believed, and that one can have higher or lower credences in. So such a claim should be associated with a set of worlds, or a set of n-tuples where the first member of each tuple is a world. (The latter disjunct is relevant if one thinks, perhaps following Lewis (1979), that the objects of belief are something like centred worlds.)
  2. These attitudes towards moral ‘propositions’ (or whatever else is picked out by ‘A is better than B’) should be updated in the way that credal attitudes are usually updated. Ideally that would be by conditionalisation, or by some other update rule that can be given independent motivation.
  3. The semantics should associate with ‘A is better than B’ a set of worlds (or tuples or whatever) that at least roughly corresponds with what those words ordinarily mean in English.
  4. The claim should be action guiding, so (perhaps barring exceptional circumstances) conditional on ‘A is better than B’, A should be more choice-worthy than B.

And it turns out to be incredibly hard to find a semantics that satisfies these four constraints. In fact, there are principled reasons to think that no such semantics is possible.

There is one technical complication that we need to address first. Whether A is better than B depends on one’s evidence. So if A is that I get a (typical) lottery ticket, and B is that I get a penny, then A is better than B, from my perspective, iff I don’t know that the ticket is a loser. It is far from trivial to represent claims about what one’s evidence is in a semantic model. That’s in part because facts about what one’s evidence is are ‘first-personal’ facts that are tricky to represent in standard models, and in part because what one’s evidence is changes over time, and it’s hard to represent changes over time in standard models.

Here’s how I’ll try to deal with, or at least sidestep, these problems. Instead of thinking of beliefs as attitudes to sets of worlds, we’ll think of them as attitudes to world-evidence-morality triples: ⟨w, e, m⟩. And we’ll assume that e determines (perhaps among many other things) a function from times to one’s evidence at that time. Just how it does that, and just how attitudes distributed over e are updated, will be left as a black box. (See Titelbaum (2016) for an excellent survey of the options for how the self-locating parts of one’s credal state might be updated.)

I’ll assume m is just a number, perhaps subject to enough constraints that we don’t end up in the paradoxes of unbounded utility.¹ And what we want is that the value of a proposition is the expected value of m given that the proposition is true. So A is better than B, given some evidence, just in case the expected value of m given A and that evidence is greater than the expected value of m given B and that evidence. But expected values change with evidence, and evidence changes with time, so this doesn’t settle how the evidence in these claims should be understood. It turns out that while there are a few ways one could go here, any choice ends up violating one of the four constraints I proposed.

  ¹ I’m assuming here that the moral value of a world can be represented as a number. That’s not particularly plausible, but without this assumption the internalist views I’m opposing are very hard to state or defend.
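Put slightly more formally (this is my gloss, writing e_t for the evidence that e assigns to the relevant time t, and E for expected value):

\[ A \text{ is better than } B \text{ iff } E[m \mid A \wedge e_t] > E[m \mid B \wedge e_t] \]

The options below differ over which time t is relevant, and over how the evidence at that time is to be understood.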

  • Assume, first, that the evidence is highly malleable. I mean two things by that. One is that when we conditionalise on some proposition c, then c gets added to the evidence. The other is that the time in question (and remember that e is a function from times to evidence sets) is the time any relevant decision has to be made. This pair of assumptions has a very nice feature: it guarantees that the fourth constraint is met. (This turns out to be harder to do than you might think.) Conditional on ‘A is better than B’, thus interpreted, I should choose A over B, no matter what the other evidence is.

    The problem with this assumption is that it violates the third constraint rather dramatically. The following example is a version of the objection that Russell and Hawthorne (2016, 315–16) make to the principle they call Comparative Value. Consider the following substitutions for A and B.

    A1: I get a can of frosty ice-cold Foster’s Lager in five minutes’ time.
    B1: I get a poke in the eye with a burnt stick in five minutes’ time.

    I think that A1 is better than B1. And I even think that it is, conditional on them both being true (which I hope they won’t be). But on this model, we can’t have that, because conditional on them both being true, the expected value of m conditional on either of them is the same as the expected value of m simpliciter. So conditional on their both being true, it isn’t true that A1 is better than B1.

    This is already a violation of constraint 3. But as Russell and Hawthorne go on to point out, a lot of strange things start to follow if we don’t want to violate constraint 2 as well. We just proved that conditional on A1 ∧ B1, it must be false that A1 is better than B1. That is, conditional on A1 ∧ B1, the probability of ‘A1 is better than B1’ must be 0. If the way to update on A1 ∧ B1 is by conditionalisation, it follows that the current probability of the conjunction of A1 ∧ B1 and ‘A1 is better than B1’ must be 0. So conditional on ‘A1 is better than B1’, which is surely true, the conjunction of A1 and B1 must have probability 0. And that’s true for any A, B such that right now it’s known that A is better than B. This is all absurd. (The chain of steps here is set out more formally in the sketch after this list.) Now perhaps this isn’t a violation of constraint 2, because I’m assuming here that update is by conditionalisation, and maybe there is a principled way to reject that in cases like this. In any case, this option for how to understand e fails constraint 3, so it must be wrong.

  • The way this option failed suggests a distinct move. What’s true about A1 and B1 is not that, given they are both true, A1 will make the world better than B1 will. After all, given they are both true, they won’t make any (further) difference to the world. So perhaps when assessing A1 and B1 for value, we should look at their initial value, or their value given the (absolutely) prior probability.

    The problem with this approach is that it doesn’t allow learning. Assume we learn C: that if I get poked in the eye with a burnt stick in five minutes, then malaria will be cured. Then it would be false that A1 is better than B1, and indeed true that B1 is better than A1. (Although, owww!) So this approach also violates constraint 3. And, for the same reason, it violates constraint 4.

  • Maybe the approach is to rigidify. What it means to say that A is better than B is that, given the actual evidence I currently have, A has a higher expected m value than B. This will handle the Foster’s/poke case fairly well. But it leads to other problems. The following is a simple variant of the Rembrandt case Russell and Hawthorne (2016, 331) offer.

    Imagine we’re in the simpler of the dart cases. When a dart lands on ⟨x, y⟩, each of the five possibilities (that it is on that very spot, or one spot up, down, left or right) is equally likely. And the dart did in fact land on ⟨8, 3⟩. At the same time, two fair coins have been tossed, although their results are hidden. Now compare the following options:

    A2: I get a Vegemite sandwich if the dart landed on ⟨8, 4⟩, ⟨8, 2⟩, ⟨7, 3⟩ or ⟨9, 3⟩, and nothing otherwise.
    B2: I get a Vegemite sandwich if at least one of the coins landed heads, and nothing otherwise.

    Right now A2 is better than B2. That’s because given my evidence, A2 gets me a 0.8 chance of a Vegemite sandwich, and B2 gets me a 0.75 chance. (Assuming, as is completely obvious, that more Vegemite sandwiches are better than fewer.) But conditional on ‘A2 is better than B2’, I should prefer B2. That’s because the only worlds where A2 is better than B2 are worlds where the dart landed on ⟨8, 3⟩. And in those worlds, I don’t get a Vegemite sandwich from A2.

    So this rigid interpretation of ‘better’ violates constraint 4: it makes betterness judgments not be action guiding. I prefer A2 to B2, but conditional on A2 being better than B2, I prefer B2. Personally, I think this is the best interpretation of ‘better’, but that’s because I think our choices shouldn’t be guided by our beliefs about, or our evidence about, what’s better than what.
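To set out the conditionalisation argument from the first option above a little more explicitly, here is my rendering of the chain of steps, writing A1 ≻ B1 for ‘A1 is better than B1’ and assuming Pr(A1 ∧ B1) and Pr(A1 ≻ B1) are both positive. The first claim is what the malleable reading of the evidence delivers; the two steps after it are just probability theory:

\[ \Pr(A_1 \succ B_1 \mid A_1 \wedge B_1) = 0 \implies \Pr((A_1 \succ B_1) \wedge A_1 \wedge B_1) = 0 \implies \Pr(A_1 \wedge B_1 \mid A_1 \succ B_1) = 0 \]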

I haven’t given a watertight proof here that there is no way to interpret ‘better’ in this kind of model, or any other kind of model, that satisfies the four constraints. But philosophers who think moral uncertainty matters for decision making haven’t typically appreciated how hard it is to get a model that does satisfy these constraints. The ‘desire as belief’ results are fairly surprising, and when combined with anti-luminosity principles, they make it very hard to see how moral uncertainty could be relevant to decision making.