1  Introduction

1.1 To Thine Own Self Be True

Early in Hamlet, Laertes departs Elsinore for Paris. As he prepares to go, his father, Lord Polonius, offers him some paternal advice. He tells him to talk less and smile more. He tells him to spend all his money on clothes, since that’s how they roll in Paris. He tells him to neither a borrower nor a lender be, though the latter is presumably redundant if he’s taken the advice to date. And he concludes with this advice, destined to adorn high school yearbooks for centuries to come.

This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.

It isn’t completely clear what Polonius means when he advises Laertes to be true to himself, but it is plausible that he means something like this:

Follow your own principles!

Or perhaps something like this:

Do what you think is right!

And unlike the rest of the advice Polonius gives, many philosophers have followed him in thinking this is a very good idea.

The primary aim of this book is to argue against this idea. Following one’s own principles, or doing what one thinks is right, are not in general very good ideas at all. I will call normative internalism the view that we should be guided by norms that are internal to our own minds, in the sense that our beliefs, and our normative evidence, are internal to our minds. And I will oppose that view, arguing for normative externalism.

Normative externalism is the view that the most important standards for evaluating actions, mental states and agents are typically external to the actor, believer or agent being evaluated. It can be appropriate to hold someone to a moral, or an epistemic, standard that they do not endorse, or even that they could not be reasonably expected to endorse. If one has bad standards, there need be nothing wrong in violating them, and there is nothing good about upholding them.

That last paragraph made a lot of distinct claims, and it is worth spending some time teasing them apart. But before we get too deep in the weeds, I want to have on the table the guiding principle of the book. Being true to yourself, in the sense of conforming to the principles one has, or even to the principles one has reason to have, is just not that important. What is important is doing the right thing, being a good person, and having rational beliefs. If one has misguided views about the right, the good, and the rational, then there is nothing good about conforming to those misguided views. And this matters, because many people have views about the right, the good, and the rational, that are very misguided indeed.

1.2 Four Questions

1.2.1 Actions, Agents or Advice

If one says, with Polonius, that it is good to conform to one’s own principles, there are a number of distinct things one could be meaning.

One could be making a claim about particular actions. (Or about particular beliefs, but we’ll focus on actions for the next few paragraphs.) So one could be saying that actions that conform to the actor’s principles are thereby in some sense right or good, and those that violate the actor’s principles are in some sense wrong or bad.

Alternatively, one could be making a claim about agents. So one could be saying that people who (typically) conform their actions to their principles are in some sense good (or less bad) people, and those who violate their own principles are in some sense bad.

Or alternatively again, one could be making a claim about advice. One could be saying that whether or not the claims in the previous two paragraphs are strictly correct, it is excellent to advise people to act according to their principles. There are plenty of cases where advising people to do the optimal thing is bad, especially if aiming for the optimal result is likely to lead to catastrophe. So this view about advice is in principle distinct from the views about actions and agents.

The form of externalism I will defend is opposed to the views in all of the last three paragraphs. But it is most strongly opposed to the view about actions, and least strongly opposed to the view about advice. Indeed, I won’t have a lot to say about advice throughout the book, except to note occasionally when intuitions about advice seem to be getting used illegitimately to justify conclusions about actions. But I don’t mean to imply that the views have to stand or fall together. A view that is externalist about actions (holding that it makes no difference to the correct evaluation of an action whether the actor endorsed it or not) but internalist about agents (holding that there is something good about people who stick to their principles and bad about those who do not) is certainly worth considering. But it isn’t my view; I mean to oppose all three precisifications of what Polonius says.

1.2.2 Above All?

Polonius does not just say Laertes should be true to himself. He says this is something ‘above all’. This suggests that he is elevating Do what you think is right to a central place, making it more important than principles like Respect other people, or Make the world better, or even Do the right thing.

The externalist view I support takes completely the opposite tack. The principle Do what you think is right is of no importance at all.

But there is a large middle ground position. This is easiest to see if we assume the debate is about agents, not actions or advice, so I’ll present it for agents. But it shouldn’t be too hard to see how to generalise the idea.

We could hold that doing what one thinks is right is one of the virtues, something that contributes to a person being a good person. Or we might think that failing to do what one thinks is right is a vice, something that contributes to a person being a bad person. And we might think one or other (or both) of those things without thinking them particularly important virtues or vices. One could coherently hold that there is a virtue in holding to one’s principles, even if one thinks that other virtues to do with honesty, courage, respect and the like are more important. And one could coherently hold that doing what one thinks is wrong is a vice, even in the case where one has false enough views about first-order moral questions that doing what one thinks is right would manifest even more serious vices.

Indeed, one might think that ordinary English goes along with this. We do talk somewhat admiringly about people who are principled or resolute, and somewhat disdainfully about people who are hypocritical.1

  • 1 Though to be clear, I don’t think the English words ‘principled’ and ‘resolute’ actually pick out the so-called virtue of upholding one’s own principles. Following Richard Holton (1999), I think those words pick out diachronic properties of a person. They apply to a person in part due to that person’s constancy over time in some respect. Following one’s principles isn’t like this; it is a purely synchronic affair.

I’m going to classify this kind of view, the one that says that doing what one thinks is right is important to character, but not of maximal importance, as a moderate internalist view. And my externalism will be opposed to it, like it is opposed to the view that being principled, and avoiding hypocrisy, are the most important virtues.

    The possibility of such a moderate internalist view is important, because otherwise we might think the argument against internalism would be too easy. History is full of fanatics who convinced themselves that they were doing the right thing while causing immense harm. It is hard to believe that the one principle they did conform to, Follow your own principles, is the most important principle of all. But perhaps, just perhaps, their resoluteness is in a small way a virtue. At least, a philosophical view that says that it is a virtue, albeit one offset by mountains of vice, is not absurd.

1.2.3 Ethics, Epistemology and More

    I’ve been interpreting Polonius’s dictum as being primarily about ethics so far. But views like his are available in many other areas of philosophy. I’ll mention three more here, the first of which will be a major focus of this book.

    Belief is subject to evaluation on a number of fronts. Beliefs are true or false, but that hardly exhausts their virtues or vices. Some true beliefs are bad in virtue of being lucky guesses, or leaps to unwarranted conclusions. Some false beliefs are the result of sensibly following the evidence where it leads, and just being unluckily misled into error. So as well as evaluating a belief for truth, we can evaluate it for responsiveness to the evidence. I’m going to argue, somewhat indirectly, that a belief is rational just in case it is responsive to the evidence in this way.2

  • 2 Though getting clear on just what this last sentence commits me to will require saying more about what evidence is. For now, it won’t do much harm to equate evidence with basic knowledge. A proposition p is part of the subject’s evidence if the subject knows p, and doesn’t know p because she inferred it from something else.

But if that’s what rationality is, then subjects can also have beliefs about the rationality of their own beliefs. And we can ask whether subjects are doing well at believing by their own lights. To believe something just is to believe it is true, so if our only standard for belief is truth, then everyone will believe well by their own lights. But it is possible to believe something, and even rationally believe it, while believing that that very belief is irrational. Or, at least, so I’ll argue.

    Is this a bad thing? Should we mark someone down for believing in a way that they take to be irrational? I’m going to argue that we should not. It’s good to believe truths. It’s good to believe in accord with one’s evidence. And that’s as far as we should go. It’s not good to believe in accord with what one believes the evidence supports, unless one thereby ends up with a belief that is good for some other reason. And it’s not bad to believe something that one believes is not supported by one’s evidence, unless one ends up with a belief that is bad for some other reason.

Just as in the ethics case, we can separate out a number of distinct questions here. Assume you think there is something philosophically important about beliefs that are irrational by the lights of the believer themselves. You could say that this is a bad-making feature of the belief itself, or a bad-making feature of the believer, or, perhaps, that it is bad to advise people to have beliefs that are irrational by their own lights. That is, we can replicate the act, agent or advice distinction inside epistemology, though the ‘acts’ are really the states of holding particular beliefs. And if you do think these beliefs, or believers, are bad in some way, there is a further question about how much badness is involved. Is believing in a way that one thinks is irrational as bad as not following the (first-order) evidence, or more bad, or less bad? (Or is badness the wrong concept to be using here?)

We will see different philosophical views that take different stands on these questions throughout part II of the book. I’m going to defend a fairly simple, and fairly extreme, position. It isn’t a bad-making feature, in any way, of a belief that the believer thinks it is irrational, nor is it a bad-making feature of believers that they have beliefs they think are irrational. It isn’t even a bad habit to routinely have beliefs that one thinks are irrational; though I’m going to be a little more tentative in defending that last conclusion. The general principle throughout is to motivate and defend a picture where what matters is conformity to the actual rules - be they rules of action or rules of belief - rather than conformity to what one takes (or even rationally takes) the rules to be.

The disputes of the last few paragraphs have all been over epistemology, fairly narrowly construed. But there are some other disputes that we might have too, where the difference between conformity to external rules and conformity to one’s version of the rules matters. I’m not going to say much about the next two disputes, but they are helpful to have on the table.

Some lives go better than others. When we act for the sake of others, when we act benevolently, we aim to improve the lives of others. Call someone’s welfare that quantity we improve when we act benevolently.3 Philosophers disagree a lot about what welfare is, so some of them are wrong. And though I’m not going to argue for this, it seems to me that the disagreeing parties each have such good arguments that at least some of the philosophers who are wrong are nevertheless rational in holding the position they do. So that implies that a rational person could have a choice between two actions, one of which actually produces more welfare, and the other of which produces more welfare according to the theory of welfare they (rationally) hold. Assuming the person wants to act benevolently, or, if the act is directed to their own good, they want to act prudentially, is there anything good about doing the thing that produces more welfare according to the theory of welfare they hold? My position, though I’m not going to argue for this in this book, is that there is not. What matters for benevolent or prudential action is how well one’s act does according to the correct theory of welfare. It doesn’t make an action benevolent, or prudent, if the action is good according to a mistaken theory of welfare. That’s true even if the theory of welfare is one’s own, or even if it is the one that is rational for one to hold. If one’s theory of welfare is a purely hedonistic experiential theory of welfare, then one might think one is improving the welfare of others by force-feeding them happy pills. But if that theory of welfare is false, and welfare involves preference satisfaction, or autonomy, then such an action will not be benevolent, nor will it be rational to perform on benevolent grounds.

  • 3 There are a lot of different things that people call welfare in the philosophical literature. I’m taking the idea of tying it definitionally to benevolent action from Simon Keller (2009).

  • 4 I’m suppressing disputes within orthodoxy about how just to formulate the view, though those disputes would also suffice to get the kind of example I want going.

We can make the same kind of distinction within decision theory. Let’s assume for now that a person has rational beliefs, and when they lack belief they assign a rational probability to each uncertain outcome, and they value the right things. There is still a question about how they should act in the face of uncertainty. Unlike the questions about ethics, epistemology, or welfare, there is an orthodox answer here. They should maximise expected utility. That is, for each act, they should multiply the probability of each outcome given that act, by the (presumably numerical) value of that outcome-act pair, and add up the resulting products to get an expected value of the act. Then they should choose the act with the highest expected value. But while this is the orthodox view of decision theory, there are dissenters from it4. The best recent statement of dissent is in a book-length treatment by Lara Buchak (2013). And someone who has read Buchak’s book can think that her view is true, or, perhaps, think that there is some probability that it is true and some probability that the orthodoxy is true.
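For readers who want the orthodox recipe spelled out mechanically, here is a minimal sketch in Python. The acts, outcomes and numbers are invented purely for illustration; nothing in the argument depends on them.

```python
# Orthodox expected utility: for each act, weight the value of every possible
# outcome by its probability given that act, sum the products, and choose the
# act whose sum is highest. The acts and numbers below are made up.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one act."""
    return sum(p * v for p, v in outcomes)

acts = {
    "carry umbrella": [(0.3, 5), (0.7, 8)],   # (prob it rains, value), (prob it does not, value)
    "leave umbrella": [(0.3, 0), (0.7, 10)],
}

for act, outcomes in acts.items():
    print(act, round(expected_value(outcomes), 4))   # 7.1 and 7.0

best_act = max(acts, key=lambda a: expected_value(acts[a]))
print(best_act)   # carry umbrella
```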

    So now we can ask the same kind of question about conformity to the correct rules versus conformity to the rules one thinks are correct.5 Assume that someone does not have the correct beliefs about how to rationally make decisions. And assume that they perform an act which is not rational, according to the true decision theory, but is rational according to the decision theory they accept. Is there something good about that decision, and would there have been something bad about them doing the thing that correct theory recommended? My position is that there is not. The rational decisions are the ones recommended by correct decision theory. There is nothing to be said for conforming to one’s own preferred decision theory, if that theory is false.

  • 5 If the moral theories one gives credence to reject expected value maximisation, then there will be even more complications at the intersection of ethics and decision theory. Ittay Nissan-Rozen (2015) has a really nice case showing the complications that arise for the internalist when moral theories do not assume orthodox decision theory.

1.2.4 Actual or Rational

    So far I’ve focussed on the distinction between principles that are external to the agent, and principles that are internal to the agent in the sense of being believed by the agent, or being the agent’s own principles. When I call my view externalist, it is to indicate that I think it is the external principles that matter. But there is another category of principles that I haven’t focussed on, and which are in some sense internal. These are the principles that the agent should, rationally, accept.

    Now if we say that the agent should rationally accept all and only the true principles, then there won’t be a distinction between Follow the true principles and Follow the principles it is rational to accept. But let’s work for now with the assumption that there is a difference here; that just like with anything else, agents can be rationally misled about the nature of ethics, epistemology, welfare, and decision theory.6 Then there is another possibility; that agents should follow the principles that they have most reason to believe are true.

  • 6 Julia Markovits (2014) argues that agents have rational reason to accept the fundamental moral truths. Michael Titelbaum (2015) argues that agents have rational reason to accept the fundamental epistemological truths. I’m assuming for now that both of these positions are false, because it gives my opponents more room to move if they are false. Claire Field (forthcoming) responds to Titelbaum’s arguments. Note here that when I say that an agent can be rationally misled about morality and epistemology, I am not claiming that they can rationally have outright false beliefs about morality and epistemology. I just mean that rationality is consistent with having something other than complete certainty in the claims that are actually true.

  • 7 There are more historical sources on Robespierre than would be remotely possible to list. The things I say here are largely drawn from recent work by Peter McPhee (2012), Ruth Scurr (2006) and especially Marisa Linton (2013). The study of the Committee of Public Safety by R. R. Palmer (1941) is helpful for seeing Robespierre in context, and especially seeing him alongside men with even more extreme characteristics than his.

  • 8 Most revolutionary leaders are either power-hungry or bloodthirsty. But Robespierre genuinely seems to have been neither of those, except perhaps at the very very end. Linton (2013, 97–99) is particularly clear on this point.

  • 9 One thing that won’t rescue intuitions about the case is to say that Do what you think is right is important only if the agent is ‘procedurally rational’. Robespierre used the right methods to form moral beliefs: he read widely, talked to lots of people, and reflected on what he heard and saw. He just got things catastrophically wrong. Gideon Rosen (2003, 2004) places a lot of emphasis on procedural rationality in defending a form of internalism, though his aim is very much not to track intuitions about particular cases.

This gives another way for the internalist to respond to the problem of historical monsters. Let’s think about one particular case, one that I’ll return to occasionally in the book: Maximilien Robespierre7. Whatever else one can say about him, no one seriously doubts that Robespierre always did what he thought was right.8 But doing what he thought was right involved setting off the Reign of Terror, and executing ever so many people on incredibly flimsy pretexts. We can’t really say that the principle he did well by, Do what you think is right, is one that should be valued above all. We mentioned above that we could reasonably say it is a good-making feature of Robespierre that he was principled, even if it is outweighed by how abhorrent his set of principles turned out to be. But the interest here is in whether we can find some internalist principle that can be said to be true ‘above all’ in his case.9

    Robespierre had ample reason to believe that he had ended up on the wrong track. He wasn’t brainwashed into believing that the Terror was morally justifiable; the reasons for it were clearly present to him. The results of the Terror weren’t playing out in some distant land, or in the hold of a slave ship, they were right in front of him. And he knew a lot of moral and political theory. He was well educated in the classics. He read Montesquieu. He read, and adored, Rousseau. He sat through hours upon hours of debate every day about the efficacy and morality of government actions, both before and during his reign. Even if one thinks, as I do, that sometimes the reasons for the immorality of an action are hidden from the actor, that can hardly be said to be true in Robespierre’s case.

    So I think we can reasonably say in Robespierre’s case that he violated the rule Follow the principles it is rational to accept. And that rule is an internal rule, in some sense. If we take it to be the primary rule, then we won’t judge people by standards that are hidden from them. We may judge them by standards they don’t accept, but only when they have reason to accept the standards. So I’ll treat it as another internalist approach, though very different from the approach that says it is most important for people to follow their own principles.

    So we have two very different kinds of internalist approaches to ethics, epistemology, welfare and decision theory. One says that it is (most) important that people follow their own principles. The other says that it is (most) important that people follow the principles they have rational reason to accept. The first, in its strongest form, says absurd things about the case of fanatics. As I’ll argue at length in what follows, it also leads to nasty regresses. The second does not have these problems. But it is very hard to motivate. We will spend some time on the reasons philosophers have had for wanting views like Polonius’s. All of these, I’ll argue, push towards the idea that the most important thing is that people follow the principles they actually accept. None of them, when considered carefully, give us a reason to prefer principles the actor or believer has reason to accept to the principles that are actually true. Retreating from Follow your own principles to Follow the principles it is rational to accept lets the internalist avoid harsh cases like Robespierre, but at the cost of abandoning the interesting reasons they have for their view.

1.2.5 Some Caveats

I’ve spoken freely in this section about the true moral principles. That way of speaking presupposes that there are moral truths. I mean to be using the phrase ‘moral truths’ in as non-committal a sense as possible. I don’t mean to say that the moral truths are mind-independent. If it is true that murder is wrong in virtue of our disapproval of murder, it is still true that murder is wrong, and that’s enough for current purposes. Nor do I mean to insist that the moral truths are invariant across space and time. There are hard questions about how we should evaluate actors from different times and places if a form of moral relativism is true. But those questions are largely orthogonal to the ones I’m interested in.

    I am in effect assuming away a very strong form of moral relativism, one that makes moral truth relative to the moral principles of the actor being evaluated. But that’s not a plausible form of moral relativism. If moral relativism is true, then what morality is relative to is much more inclusive than a single person; it is something like a culture, or a practice. And that is enough for there to be a difference between what a person accepts, and what is true in their culture or practice.

    As briefly noted above, I’m also assuming that there is a difference between what is true and what it is rational to accept. All I really need here is that it can be rational to be less than fully certain in some moral and epistemic truths. I’m not going to assume, for example, that one can rationally believe moral or epistemic falsehoods. I’ve spoken above as if that is possible, but that was a convenient simplification. What’s going to really matter is just the existence of a gap between what’s true and what’s reasonable to believe, and that gap can arise even if all the things that are reasonable to believe are true.

    Finally, you may have noticed that we ended up a long way from anything that could be plausibly attributed to Lord Polonius. When he tells Laertes to be true to himself, I’m pretty sure he’s not saying anything about whether Laertes should have beliefs that are rational by the standards that Laertes should rationally accept. Yet whether Laertes (or anyone else) should have such beliefs is one of the questions we ended up being interested in. The good Lord’s role in this play was just to introduce the distinction between following one’s own principles and following the true principles. With that distinction on stage, we can let Polonius exit the scene.

1.3 Normative Externalism Defined

    Normative externalism is the view that the most important evaluations of actions and actors, and of beliefs and believers, are independent both of the actor or believer’s belief about the value of their action or belief, and of the evidence the actor or believer has about the value of their action or belief. The aim of this book is to defend normative externalism, and explore why it is philosophically important.

It is tempting to strengthen this kind of normative externalism further, and say that what one should do and believe is completely independent of what one believes one should do and believe. But this strong independence claim can’t be right. (I’m grateful here to Derek Ball.) If one thinks that one should murder one’s neighbours, then one ought to get professional help. Sometimes normative beliefs change the normative significance of other actions. So the externalist claim I’m defending is a little weaker than this general independence claim. It allows that a normative belief B may change the normative status of actions and beliefs that are not part of the content of B. But the externalism I’m defending is still going to be strong enough to attract a lot of critics.

    The strongest kind of normative internalism says that the value of actions and beliefs is tightly tied to the beliefs that actors and believers have about their own actions and beliefs. It says that the most important moral precept is to do what you think is right, and the most important epistemological precept is to believe what you think the evidence supports. The strong version of internalism is not a popular position. But it has an important role to play in the narrative here. That’s because there are many interesting, and popular, moderate versions of internalism. Yet once we look at the motivations for those moderate versions, we’ll see that they really are arguments for the strongest, and least plausible, version.

We can generate those moderate forms of normative internalism by looking at the four questions from the previous section. Some internalists say that internalism is true just for actors (or believers), not for actions (or beliefs). Some say that internalist principles are part of the moral (or epistemological) truth, not principles to put above all. Some say that internalist principles apply to just one of ethics or epistemology, not both. And some say that what matters is not conformity to the principles one actually holds, but conformity to the principles one has evidence for. And answers to these questions can be mixed and matched indefinitely to produce varieties of internalist theses. Here, for example, are three principles that are both widely believed, and which you can get by mixing and matching answers to the four questions.

    • It is a vice to frequently do things one believes are wrong, even if those actions are actually right.
    • Wrong actions are blameless, and hence do not reflect badly on the actor who performs them, if that actor believes the action is right, and has good reason for that belief.
    • A belief is irrational if the believer has good evidence that the belief is not supported by their evidence, even if that ‘higher-order’ evidence is misleading.

    And I’m going to argue that the best arguments for those positions overgeneralise; they are equally good as arguments for the implausible strong version of internalism. So they are no good.

Part of the argument here will be piecemeal: showing for a particular internalist thesis that there are no good arguments for it other than the arguments that lead all the way to the strongest form of internalism. And I can’t hope to do that for all the possible theses you could get by mixing and matching answers to the four questions. But I can hope to make the strong form of externalism more plausible, both by showing how it handles some difficult cases, and by showing that the most general arguments against it do not work.

1.4 Guidance

To illustrate the kind of storyline I sketched in the previous section, let’s consider one popular argument against externalism. The externalist says that people should do the right thing, whatever that is, whether or not they know that the right thing is in fact right. It is often objected that this is not particularly helpful guidance, and morality should be more guiding than this. We see versions of this objection made by Ted Lockhart (2000, 8–9), Michael Smith (2006, 143), Andrew Sepielli (2009, 8), William MacAskill (2014, 7) and by Hilary Greaves and Toby Ord (2017). These authors differ among themselves both about why norms that are not guiding are bad (some saying they are unfair, others that they are unhelpful) and about what conclusion we should draw from this fact. But they agree there is something bad about Do the right thing in virtue of it not being guiding, and think we need something more internalist.

    But if you think Do the right thing is not guiding, and we need norms that are guiding in just that sense, some very strong conclusions follow. After all, if non-guiding rules are bad, then they shouldn’t be any part of our moral theory. So it isn’t just that we should take hypocrisy to be one vice alongside cowardice, dishonesty, and so on, but to be the only vice. After all, if there are other vices at all, then morality as a whole may not be guiding. Now who is Do the right thing not guiding to? Presumably to people who lack full moral knowledge. But some of these people won’t have full epistemological knowledge either. So by the standard that Do the right thing is not guiding, principles like Do whatever the evidence best suggests is right, or Do whatever maximises expected rightness won’t be guiding either. If we can’t expect people to know what’s right, we can’t really expect them to know what’s probably right either.

    So taking guidance to be a constraint in this way pushes us to a version of internalism that relies on actual beliefs about rightness, not beliefs the evidence supports, and relies on a version that takes conformity to one’s own values to be ‘above all’. But if we do that, we can’t say either of the plausible things I suggested various moderate internalists could say about Robespierre. The two suggestions were to say that conformity to one’s own value is merely one virtue among many, and that good people should conform not to their actual principles, but to the principles their evidence supports. If we take guidance to be a constraint, then both ways out are blocked. Robespierre failed by some very important standards, but he couldn’t be guided (in whatever sense the internalist means) by those standards.

    We’ll see this storyline a few times in what follows. The externalist view seems to have some unattractive features. But when we spell out just what the features are, we’ll see they are shared by all but some very implausible theories. This won’t just hold in ethics. The epistemological picture I’m going to draw allows for kinds of reasoning that appear on their face to be unacceptably circular. But when we try to say just what this kind of circularity comes to, we’ll see that blocking it would provide enough resources to ground an argument for Pyrrhonian scepticism.

1.5 Symmetry

In general, one’s evidence is relevant to what one should do. The normative externalist denies a natural generalisation of this little platitude. Although evidence about matters of fact is relevant to what one should do, evidence about the normative, about the nature of morality and rationality, is not. Evidence about whether to turn left or right is relevant to rational decision making; evidence about what is wrong or right is irrelevant. Or so says the externalist.

    This looks like an argument against externalism: it denies a very plausible symmetry principle. The principle says that we should treat all kinds of uncertainty, and all kinds of evidence, the same. I’m going to spend much of the first half of this book arguing against the symmetry principle, but for now let’s quickly set up why we might think there is a puzzle here.

    We’ll start by thinking through an example of where evidence is relevant to mundane action. A person, we’ll call him Baba, is looking for his car keys. He can remember leaving them in the drawer this morning, and has no reason to think they will have moved. So the natural thing to do is to look in the drawer. If he does this, however, he will be sadly disappointed, for his two year old daughter has moved the car keys into the cookie jar.

    Things would go best for Baba if he looked in the cookie jar; that way he would find his car keys. But that would be a very odd thing for him to do. It would be irrational to look there. It wouldn’t make any sense. If he walked down the steps, walked straight to the cookie jar, and looked in it for his car keys, it would shock any onlookers because it would make no sense. It used to be thought that it would not shock his two year old daughter, since children that young had no sense that different people have different views on the world. But this isn’t true; well before age two children know that evidence predicts action, and are surprised by actions that don’t make sense given a person’s evidence  (He, Bolz, and Baillargeon 2011). This is because from a very young age, humans expect other humans to act rationally  (Scott and Baillargeon 2013).

    In this example, Baba has a well-founded but false belief about a matter of fact: where the car keys are. Let’s compare this to a case where the false beliefs concern normative matters. The example is going to be more than a little violent, though after this the examples will usually be more mundane. And the example will, in my opinion, involve three different normative mistakes.

    Gwenneg is at a conference, and is introduced to a new person. “Hi,” he says, “I’m Gwenneg,” and extends his hand to shake the stranger’s hand. The stranger replies, “Nice to meet you, but you shouldn’t shake my hand. I have disease D, and you can’t be too careful about infections.” At this point Gwenneg pulls out his gun and shoots the stranger dead.

    Now let’s stipulate that Gwenneg has the following beliefs, the first of which is about a matter of fact, and the next three are about normative matters.

    First, Gwenneg knows that disease D is so contagious, and so bad for humans both in terms of what it does to its victims’ quality and quantity of life, that the sudden death of a person with the disease will, on average, increase the number of quality-adjusted-life-years (QALYs) of the community.10 That is, although the sudden death of the person with the disease obviously decreases their QALYs remaining, to zero in fact, the death reduces everyone else’s risk of catching the disease so much that it increases the remaining QALYs in the community by a more than offsetting amount.

  • 10 QALYs are described in McKie et al. (1998), who go on to defend some philosophical theses concerning them that I’m about to assign to Gwenneg.

  • 11 Nick Bostrom (2003) endorses, and uses to interesting effect, what I’m calling a strong version of the straight rule. In my reply to his paper I argue that only a weak version is plausible, since other things are rarely equal  (Weatherson 2003). Gwenneg thinks that Bostrom has the better of that debate.

Second, Gwenneg believes in a strong version of the ‘straight rule’. The straight rule says that given the knowledge that x% of the Fs are Gs, other things equal it is reasonable to have a credence of x% that this particular F is a G. Just about everyone believes in some version of the straight rule, and just about everyone thinks that it needs to be qualified in certain circumstances. When I say that Gwenneg believes in a strong version of it, I mean he thinks the circumstances that trigger qualifications to the rule rarely obtain. So he thinks that it takes quite a bit of additional information to block the transition from believing x% of the Fs are Gs to having a credence of x% that this particular F is a G.11

    Third, Gwenneg thinks that QALYs are a good measure of welfare. So the most beneficent action, the one that is best for well-being, is the one that maximises QALYs. This is hardly an uncontroversial view, but it does have some prominent defenders  (McKie et al. 1998).

    And fourth, Gwenneg endorses a welfarist version of Frank Jackson’s decision-theoretic consequentialism  (Jackson 1991). That is, Gwenneg thinks the right thing to do is the thing that maximises expected welfare.

    Putting these four beliefs together, we can see why Gwenneg shot the stranger. He believed that, on average, the sudden death of someone suffering from disease D increases the QALYs remaining in the community. By the straight rule, he inferred that each particular death of someone suffering from disease D increases the expected QALYs remaining in the community. By the equation of QALYs with welfare he inferred that each particular death of someone suffering from disease D increases the expected welfare of the community. And by his welfarist consequentialism, he inferred that bringing about such a death is a good thing to do. So not only do these beliefs make his action make sense, they make it the case that doing anything else would be a moral failing.

    Now I think the second, third and fourth beliefs I’ve attributed to Gwenneg are false. The first is a stipulated fact about the world of Gwenneg’s story. It is a fairly extreme claim, but far from fantastic. There are probably diseases in reality that are like disease D in this respect12. So we’ll assume he hasn’t made a mistake there, but from then on every single step is wrong. But none of these steps are utterly absurd. It is not too hard to find both ordinary reasonable folk who endorse each individual step, and careful argumentation in professional journals in support of those steps. Indeed, I have cited just such argumentation. Let’s assume that Gwenneg is familiar with those arguments, so he has reason to hold each of his beliefs. In fact, and here you might worry that the story I’m telling loses some coherence, let’s assume that Gwenneg’s exposure to philosophical evidence has been so tilted that he has only seen the arguments for the views he holds, and not any good arguments against them. So not only does he have these views, but in each case he is holding the view that is best supported by the (philosophical) evidence available.

  • 12 At least, there probably were such diseases at some time. I suspect cholera had this feature during some epidemics.

Now I don’t mean to use Gwenneg’s case to argue against internalism. It wouldn’t be much use in such an argument for two reasons. First, there are plenty of ways for internalists to push back against my description of the case. For example, perhaps it is plausible for Gwenneg to have any one of the normative beliefs I’ve attributed to him, but not to have all of them at once. Second, not all of the internalist views I described so far would even endorse his actions given that my description of the case is right.

    But the case does illustrate three points that will be important going forward. One is that it isn’t obvious that the symmetry claim above, that all uncertainty should be treated alike, is true. Maybe that claim is true, but it needs to be argued for. Second, the symmetry claim has very sweeping implications, once we realise that people can be uncertain about so many philosophical matters. Third, externalist views look more plausible the more vivid the case becomes. It is one thing to say abstractly that Gwenneg should treat his uncertainty about morality and epistemology the same way he treats his uncertainty about how many people the stranger will infect. At that level of abstraction, that sounds plausible. It is another to say that the killing was a good thing. And we’ll see this pattern a lot as we go forward; the more vivid cases are, the more plausible the externalist position looks. But from now on I’ll keep the cases vivid enough without being this violent.13

  • 13 One exception: Robespierre will return from time to time, along with other Terrorists.

1.6 Regress

    In this book I’m going to focus largely on ethics and epistemology. Gwenneg’s case illustrates a third possible front in the battle between normative internalists and externalists: welfare theory. There is a fourth front that also won’t get much discussion, but is I think fairly interesting: decision theory. I’m going to spend a bit of time on it right now, as a way of introducing regress arguments for externalism. And regress arguments are going to be very important indeed in the rest of the book.

Imagine that Llinos is trying to decide how much to value a bet with the following payoffs: it returns £10 with probability 0.6, £13 with probability 0.3, and £15 with probability 0.1. Assume that for the sums involved, each pound is worth as much to Llinos as the next.14 Now the normal way to think about how much this bet is worth to Llinos is to multiply each of the possible outcomes by the probability of that outcome, and sum the results. So this bet is worth \(10 \times 0.6 + 13 \times 0.3 + 15 \times 0.1 = 6 + 3.9 + 1.5 = 11.4\). This is what is called the expected return of the bet, and the usual theory is that the expected return of the bet is its value. (It’s not the most helpful name, since the expected return is not in any usual sense the return we expect to get. But it is the common name throughout philosophy, economics and statistics, and it is the name I’ll use here.)

  • 14 Technically, what I’m saying here is that the marginal utility of money to Llinos is constant. There is a usual way of cashing out what it is for the marginal utility of money to be constant in terms of betting behaviour. It is that the marginal utility of money is constant iff the agent is indifferent between a bet that returns 2x with probability 0.5, and getting x for sure. But we can’t adopt that definition here, because it takes for granted a particular method of valuing bets. And whether that method is correct is about to come into question.

There’s another way to calculate expected values. Order each of the possible outcomes from worst to best, and at each step, multiply the probability of getting at least that much by the difference between that amount and the previous step. (At the first step, the ‘previous’ value is 0.) So Llinos gets £10 with probability 1, has an 0.4 chance of getting another £3, and has an 0.1 chance of getting another £2. Applying the above rule, we work out her expected return is \(10 + 0.4 \times 3 + 0.1 \times 2 = 10 + 1.2 + 0.2 = 11.4\). It isn’t coincidence that we got the same result each way; these are just two ways of working out the same sum. But the latter approach makes it easier to understand an alternative approach to decision theory, one recently defended by Lara Buchak (2013).

She thinks that the standard approach, the one I’ve based around expected values, is appropriate only for agents who are neutral with respect to risk. Agents who are risk seeking, or risk averse, should use slightly different methods.15 In particular, when we multiplied each possible gain by the probability of getting that gain, Buchak thinks we should instead multiply by some function f of the probability. If the agent is risk averse, then f(x) < x. To use one of Buchak’s standard examples, a seriously risk averse agent might set \(f(x) = x^2\). (Remember that \(x \in [0, 1]\), so \(x^2 < x\) everywhere except the extremes.) If we assume that this is Llinos’s risk function, the bet I described above will have value \(10 + 0.4^2 \times 3 + 0.1^2 \times 2 = 10 + 0.48 + 0.02 = 10.5\).

  • 15 The orthodox view is that the agent’s attitude to risk should be incorporated into their utility function. That’s what I think is correct, but Buchak does an excellent job of showing why there are serious reasons to question the orthodoxy.
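Here, in the same spirit as before, is a minimal sketch of the two ways of valuing Llinos’s bet. It simply re-derives the 11.4 and 10.5 figures from the text; the function names are mine, and payoffs are assumed to be in utils, as above.

```python
# Llinos's bet: 10 with probability 0.6, 13 with probability 0.3,
# 15 with probability 0.1, with each pound worth one util.

def expected_value(outcomes):
    """Orthodox value: sum of probability-weighted payoffs."""
    return sum(p * v for p, v in outcomes)

def risk_weighted_value(outcomes, f):
    """Buchak-style value: order payoffs from worst to best, and weight each
    increment by f(probability of getting at least that much)."""
    ordered = sorted(outcomes, key=lambda pv: pv[1])
    total, previous = 0.0, 0.0
    for i, (_, payoff) in enumerate(ordered):
        prob_at_least = sum(p for p, _ in ordered[i:])
        total += f(prob_at_least) * (payoff - previous)
        previous = payoff
    return total

bet = [(0.6, 10), (0.3, 13), (0.1, 15)]

print(round(expected_value(bet), 4))                         # 11.4
print(round(risk_weighted_value(bet, lambda x: x), 4))       # 11.4, f(x) = x recovers the orthodox answer
print(round(risk_weighted_value(bet, lambda x: x ** 2), 4))  # 10.5, Buchak's example risk function
```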

Now imagine a case that is simpler in one respect, and more complicated in another. Iolana has to choose between getting £1 for sure, and getting £3 iff a known to be fair coin lands heads. (The marginal utility of money to Iolana is also constant over the range in question.) And she doesn’t know whether she should use standard decision theory, or a version of Buchak’s decision theory, with the risk function set at \(f(x) = x^2\). Either way, the £1 is worth 1. (I’m assuming that £1 is worth 1 util, expressing values of choices in utils, and not using any abbreviation for these utils.) On standard theory, the bet is worth \(0.5 \times 3 = 1.5\). On Buchak’s theory, it is worth \(0.5^2 \times 3 = 0.75\). So until she knows which decision theory to use, she won’t know which option is best to take. That’s not merely to say that she won’t know which option will return the most. She can’t know which option has the best returns until the coin is flipped. It’s to say also that she won’t know which bet is rational to take, given her knowledge about the setup, until she knows which is the right theory of rational decision making.

In the spirit of normative internalism, we might imagine we could solve this problem for Iolana without resolving the dispute between Buchak and her orthodox rivals. Assume that Iolana has, quite rationally, credence 0.5 that Buchak’s theory is correct, and credence 0.5 that orthodox theory is correct. (I’m assuming here that a rational agent could have positive credence in Buchak’s views. But that’s clearly true, since Buchak herself is rational.) Then the bet on the coin has, in some sense, an 0.5 chance of being worth 1.5, and an 0.5 chance of being worth 0.75. Now we could ask ourselves, is it better to take the £1 for sure, or to take the bet that has, in some sense, an 0.5 chance of being worth 1.5, and an 0.5 chance of being worth 0.75?

The problem is that we need a theory of decision to answer that very question. If Iolana takes the bet, she is guaranteed to get a bet worth at least 0.75, and she has, by her lights, an 0.5 chance of getting a bet worth another 0.75. (That 0.75 is the difference between the 1.5 the bet is worth if orthodox theory is true, and the 0.75 it is worth if Buchak’s theory is true.) And, by orthodox lights, that is worth \(0.75 + 0.5 \times 0.75 = 1.125\). But by Buchak’s lights, that is worth \(0.75 + 0.5^2 \times 0.75 = 0.9375\). We still don’t know whether the bet is worth more or less than the sure £1.
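The regress can be made vivid by just running the numbers. Here is a minimal sketch for Iolana’s choice, again assuming £1 is worth 1 util; the labels are mine, and it simply re-derives the figures above, making plain that the second-order calculation still needs a decision theory as an input.

```python
# A gamble here is: `worst` for sure, plus a `prob` chance of getting `best`
# instead. f is the risk function: f(x) = x is the orthodox theory,
# f(x) = x**2 is Buchak's risk-averse example.

def value(worst, best, prob, f):
    return worst + f(prob) * (best - worst)

orthodox = lambda x: x
buchak = lambda x: x ** 2

# First order: the coin bet (3 iff heads, 0 otherwise) on each theory.
bet_ev = value(0, 3, 0.5, orthodox)   # 1.5
bet_bv = value(0, 3, 0.5, buchak)     # 0.75

# Second order: taking the bet is itself a gamble worth at least 0.75,
# with a 0.5 chance of being worth 1.5 instead.
second_ev = value(bet_bv, bet_ev, 0.5, orthodox)   # 1.125
second_bv = value(bet_bv, bet_ev, 0.5, buchak)     # 0.9375

# The sure 1 is worth 1 on either theory, so the question stays open:
# 1.125 > 1 by orthodox lights, but 0.9375 < 1 by Buchak's lights.
print(bet_ev, bet_bv, second_ev, second_bv)
```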

    Over the course of this book, we’ll see a lot of theorists who argue that in one way or other, we can resolve practical normative questions like the one Iolana faces without actually resolving the hard theoretical issues that make the practical questions difficult. And one common way to think this can be done traces back to an intriguing suggestion by Robert Nozick (1994). Nozick suggested we could use something like the procedure I described in the previous paragraph. Treat making a choice under normative uncertainty as taking a kind of bet, where the odds are the probabilities of each of the relevant normative theories, and the payoffs are the values of the choice given the normative theory.16 And the point to note so far is that this won’t actually be a technique for resolving practical problems without a theory of decision making. At some level, we simply need a theory of decision.

  • 16 Nozick’s own application of this was to the Newcomb problem  (Nozick 1969). (Going into the details of what the Newcomb problem is would take us too far afield; Paul Weirich (2016) has a nice survey of it if you want more details.) He noted that if causal decision theory is correct, then two–boxing is fractionally better than one–boxing, while if evidential decision theory is correct, then one–boxing is considerably better than two–boxing. If we think the probability that evidential decision theory is correct is positive, and we use this approach, we will end up choosing one box. And that will be true even if the probability we assign to evidential decision theory is very very small.

The fully internalist ‘theory’ turns out to not have anything to say about cases like Iolana’s. If it had a theory of second order decision, of how to make decisions when you don’t know how to make decisions, it could adjudicate between the cases. But there can’t be a theory of how to make decisions when you don’t know how to make decisions. Or, more precisely, any such theory will be externalist.

Let’s note one striking variant on the case. Wikolia is like Iolana in almost every respect. She gives equal credence to orthodox decision theory and Buchak’s alternative, and no credence to any other alternative, and she is facing a choice between £1 for sure, and £3 iff a fair coin lands heads. But she has a third choice: 55 pence for sure, plus another £1.60 iff the coin lands heads. It might be easiest to label her options A, B and C, with A being the sure pound, B being the bet Iolana is considering, and C the new choice. Then her payoffs, given each choice and the outcome of the coin toss, are as follows.

             Heads    Tails
Option A     £1       £1
Option B     £3       £0
Option C     £2.15    £0.55

The expected value of Option C is \(0.55 + 0.5 \times 1.6 = 1.35\). (I’m still assuming that £1 is worth 1 util, and expressing values of choices in utils.) Its value on Buchak’s theory is \(0.55 + 0.5^2 \times 1.6 = 0.95\). Let’s add those facts to the table, using EV for expected value, and BV for value according to Buchak’s theory.

             Heads    Tails    EV      BV
Option A     £1       £1       1       1
Option B     £3       £0       1.50    0.75
Option C     £2.15    £0.55    1.35    0.95

Now remember that Wikolia is unsure which of these decision theories to use, and gives each of them equal credence. And, as above, whether we use orthodox theory or Buchak’s alternative at this second level affects how we might incorporate this fact into an evaluation of the options. So let EV2 be the expected value of each option if it is construed as a bet with an 0.5 chance of returning its expected value, and an 0.5 chance of returning its value on Buchak’s theory, and BV2 the value of that same bet on Buchak’s theory.

             Heads    Tails    EV      BV      EV2     BV2
Option A     £1       £1       1       1       1       1
Option B     £3       £0       1.50    0.75    1.125   0.9375
Option C     £2.15    £0.55    1.35    0.95    1.15    1.05
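The table can be reproduced with a few lines of code. Here is a minimal sketch, on the same assumptions as above: a fair coin, £1 worth 1 util, and Buchak’s example risk function \(f(x) = x^2\). The option labels and function names are mine.

```python
# Each option is (payoff if tails, payoff if heads), in utils, on a fair coin.

def value(worst, best, prob, f):
    """Worst payoff for sure, plus f(prob) times the possible gain."""
    return worst + f(prob) * (best - worst)

orthodox = lambda x: x
buchak = lambda x: x ** 2

options = {"A": (1.0, 1.0), "B": (0.0, 3.0), "C": (0.55, 2.15)}

for name, (tails, heads) in options.items():
    ev = value(tails, heads, 0.5, orthodox)
    bv = value(tails, heads, 0.5, buchak)
    # Second order: a 50/50 gamble between the option's EV and its BV.
    ev2 = value(min(ev, bv), max(ev, bv), 0.5, orthodox)
    bv2 = value(min(ev, bv), max(ev, bv), 0.5, buchak)
    print(name, round(ev, 4), round(bv, 4), round(ev2, 4), round(bv2, 4))

# A 1.0  1.0   1.0    1.0
# B 1.5  0.75  1.125  0.9375
# C 1.35 0.95  1.15   1.05
```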

    And now something interesting happens. In each of the last two columns, Option C ranks highest. So arguably17, Wikolia can reason as follows: Whichever theory I use at the second order, option C is best. So I should take option C. On the other hand, Wikolia can also reason as follows. If expected value theory is correct, then I should take option B, and not take option C. And if Buchak’s theory is correct, then I should take option A, and not take option C. So either way, I should not take option C. Wikolia both should and should not take option C.

  • 17 Ironically, it isn’t at all obvious in this context that this is acceptable reasoning on Wikolia’s part. The argument by cases she goes on to give is not strictly speaking valid on Buchak’s theory, so it isn’t clear that Wikolia can treat it as valid here, given that she isn’t sure which decision theory to use. This goes to the difficulty of saying anything about what should be done without making substantive normative assumptions, a difficulty that will recur frequently in this book.

That doesn’t look good, but again I don’t want to overstate the difficulty for the internalist. The puzzle isn’t that internalism leads to a contradiction, as it might seem here. After all, the term ‘should’ is so slippery that we might suspect there is some kind of fallacy of equivocation going on. And so our conclusion is not really a contradiction. It really means that Wikolia should-in-some-sense take option C, and should-not-in-some-other-sense take option C. And that’s not a contradiction. But it does require some finesse for the internalist to say just what these senses are. This kind of challenge for the internalist, the puzzle of ending up with more senses of should than one would have hoped for, and needing to explain each of them, will recur a few times in the book.

1.7 Two Recent Debates

    I think the question of whether Do the right thing or Follow your principles is more fundamental is itself an interesting question. But it has become relevant to two other debates that have become prominent in recent philosophy as well. These are debates about moral uncertainty, and about higher-order evidence.

Many of the philosophers who have worried that Do the right thing is insufficiently guiding have looked for a theory that makes moral uncertainty more like factual uncertainty. And since it is commonly agreed that an agent facing factual uncertainty, and only concerned with outcomes, should maximise expected value, a common conclusion has been that a morally uncertain agent should also maximise some kind of expected value. In particular, they should aim to maximise the expected moral value of their action, where probabilities about moral theories can affect the expected moral value.

    In the recent literature, we see the view that people should be sensitive to the probabilities of moral theories sometimes described as ‘moral hedging’. This terminology is used by Christian Tarsney (2017), who is fairly supportive of the idea, and Ittay Nissan-Rozen (2015), who is not. It’s not, I think, the happiest term. After all, Robespierre maximised expected moral value, at least relative to the credences that he had. And it would be very odd to describe the Reign of Terror as a kind of moral hedging.

    The disputes about moral uncertainty have largely focussed on cases where a person is torn between two (plausible) moral theories, and has to choose between a pair of actions. In one important kind of case, the first is probably marginally better, but it might be much much worse. In that case, maximising moral value may well involve taking the second option. And that’s the kind of case where it seems right to describe the view as a kind of moral hedging.
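To make the structure concrete, here is a toy example with invented numbers. Suppose an agent has credence 0.7 in theory \(T_1\) and 0.3 in theory \(T_2\), and must choose between acts A and B. Both theories give A a moral value of 1; \(T_1\) gives B a value of 2, but \(T_2\) gives it a value of −10. Then the expected moral value of A is 1, while the expected moral value of B is \(0.7 \times 2 + 0.3 \times (-10) = -1.6\). Although B is probably marginally better by the agent’s lights, maximising expected moral value means choosing A, and that is the sense in which the view recommends hedging.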

    But the general principle that one should maximise expected moral value applies in many more cases than that. It applies, for example, to people who are completely convinced that some fairly extreme moral theory is correct. And in those cases, maximising expected moral value, rather than actual moral value, could well be disastrous.

    When it is proposed that probabilities matter to a certain kind of decision, it is a useful methodology to ask what the proposal says in the cases where the probabilities are all 1 or 0. That’s what I’m doing here. If probabilities of moral theories matter, they should still matter when the probability (in the relevant sense) of some horrid theory is 1. So my investigation of Polonius’s principle will have relevance for the debate over moral uncertainty, since it will have consequences for what theories of moral uncertainty can plausibly say in extreme cases.

    There is one dispute about moral uncertainty that crucially involves intermediate probabilities. Maximising expected moral value requires putting different theories’ moral evaluations of actions on a common scale. There is no particularly good way to do this, and it has been argued that there is no possible way to do this. This is sometimes held to be a reason to reject ‘moral hedging’  (Hedden 2016). I’ll return to this question in chapter 6, offering a tentative defence of the ‘hedger’. The question of how to find this common scale is hard, but there are reasons to think it is not impossible. And what matters for the current debate is whether it is in fact impossible.

The other recent dispute that normative externalism bears on concerns peer disagreement. Imagine that two friends, Ankita and Bojan, both regard themselves and each other as fairly knowledgeable about a certain subject matter. And let p be a proposition in that subject matter about which they know they have equally good evidence, and about which they are antecedently equally likely to form true beliefs. Then it turns out that Ankita believes p, while Bojan believes ¬p. What should they do in response to this news?

One response goes via beliefs about their own rationality. Each of them should think it is equally likely that believing p is rational and that believing ¬p is rational, given their common evidence. They should think this because they have two examples of rational people, who ended up with these two conclusions. So they should think that holding on to their current belief is at most half-likely to be rational. And it is irrational, say some theorists, to hold beliefs that you think are at most half-likely to be rational. So both of them should become completely uncertain about whether p is true.

    I’m going to argue that there are several mistakes in this reasoning. They shouldn’t always think that holding on to their current belief is half-likely to be rational. Whether they should or not depends, among other things, on why they have their current belief. But even if they should change their belief about how likely it is that their belief is rational, nothing follows about what they should do to their first-order beliefs. In some strange situations, the thing to do is to hold on to a belief, while being sceptical that it is the right belief to have. This is the key externalist insight, and it helps us resolve several puzzles about disagreement.

    1.8 Elizabeth and Descartes

    Although the name ‘normative externalism’ is new, the view is not. It will be obvious in what follows how much the arguments I have to offer are indebted to earlier work by, among others, Nomy Arpaly (2003), Timothy Schroeder  (Arpaly and Schroeder 2014), Maria Lasonen-Aarnio (2010, 2014), Miriam Schoenfield (2015) and Elizabeth Harman (2011, 2015). It might not be as obvious, because they aren’t directly cited as much, but much of the book is influenced by the pictures of normativity developed by Thomas Kelly (2005) and by Amia Srinivasan (2015).

Many of the works just cited address just one of the two families of debates this book joins: debates about ethics, and debates about epistemology. One of the nice features of taking on both of these debates at once is that it is possible to blend insights from the externalist side of each of those debates. So chapter 4, which is the main argument against normative internalism in ethics, is modelled on an argument Miriam Schoenfield (2015) develops to make a point in epistemology. And much of what I say about epistemic akrasia in chapter 10 is modelled on what Nomy Arpaly (2003) says about practical akrasia.

    There are also some interesting historical references to normative externalism. I’m just going to talk about the one that is most interesting to me. In the correspondence between Descartes and Elizabeth, we see Descartes taking a surprisingly internalist view in ethics, and Elizabeth the correct externalist view.18

  • 18 All translations are from the recent edition of the correspondence by Lisa Shapiro  (Elizabeth and Descartes 2007).

• On 15 September 1645, Descartes wrote:

    For it is irresolution alone that causes regret and repentance.

    This had been a theme of the view he had been putting forward. The good person, according to the view Descartes put forward in the correspondence, is one who makes a good faith effort to do the best they can. Someone who does this, and who is not irresolute, has no cause to regret their actions. He makes this clear in an earlier letter, on 4 August 1645, where he is also more explicit that it is only careful and resolute actors who are immune to regret.

    But if we always do all that our reason tells us, we will never have any grounds to repent, even though events afterward make us see that we were mistaken. For our being mistaken is not our fault at all.

Elizabeth disagrees with Descartes both about regret, and about what it shows us about the nature of virtue. She writes, on 16 August 1645,

    On these occasions regret seems to me inevitable, and the knowledge that to err is as natural to man as it is to be sick cannot protect us. For we also are not unaware that we were able to exempt ourselves of each particular fault.

    Over the course of the correspondence, Elizabeth seems to be promoting a view of virtue on which being sensible in forming intentions, and resolute in carrying them out, does not suffice for being good. One must also form the right intentions. If that is really her view, then she is a very important figure in the history of normative externalism. Indeed, if that is her view, perhaps I should be calling this book a defence of Elizabethan philosophy.

But it would be a major diversion from the themes of this book to investigate exactly how much credit Elizabeth is due. And in any case, I don't want to suggest that I'm defending exactly the view Elizabeth is defending. The point about possibility she makes in the above quote is very important. It's possible that we ought to be good, and we can't know just what is good, but this isn't a violation of Ought implies can, because for any particular good thing we ought to do, we can with effort come to know that that thing is good. That's a nice problem to raise for particular internalists, but it's not my motivation for being an externalist. I don't think it matters at all whether we know what is good, so the picture of virtue I'm working with is very different to the Stoic picture that Elizabeth has. (It's much more like the picture that Nomy Arpaly (2003) has developed.)

    So it would be misleading to simply say this book is a work in Elizabethan philosophy. But Elizabeth is at the very least an important figure in the history of the views I’m defending, and she is to me the most fascinating of my historical predecessors.

    1.9 Why Call This Externalism?

There are so many views already called externalist in the literature that I feel I should offer a few words in defence of labelling my view externalist. In the existing literature I'm not sure there is any term, let alone an agreed-upon term, for the view that higher-order considerations are irrelevant to both ethical and epistemological evaluation. So we needed some nice term for my view. And using 'externalist' suggested a useful term for the opposing view. And there is something evocative about the idea that what's distinctive of my view is that it says that agents are answerable to standards that are genuinely external to them. More than that, it will turn out that there are similarities between the debates we'll see here and familiar debates between internalists and externalists, both about content and about the nature of epistemic norms.

In debates about content, we should not construe the internalism/externalism debate as a debate about which of two kinds of content are, as a matter of fact, associated with our thought and talk. To set up the debate that way is to concede something that is at issue in the debate. That is, it assumes from the start that there is an internalist-friendly notion of content, and that it really is a kind of content. But this is part of what's at issue. The same point is true here. I very much do not think the debate looks like this: the externalist identifies some norms, and the internalist identifies some others, and then we debate which of those norms are really our norms. At least against some internalist opponents, I deny that they have so much as identified a kind of norm about which we can have that debate.

In debates in epistemology, there is a running concern that internalist norms are really not normative. If we identify justified belief in a way that makes it as independent of truth as the internalist wants justification to be, there is a danger that we should not care about justification. Internalists have had interesting things to say about this danger (Conee 1992), and I don't want to say that it is a compelling objection to (first-order epistemological) internalism. But it is a danger. And I will argue that it's a danger that the normative internalist can't avoid.

    Let’s say we can make sense of a notion that tracks what the internalist thinks is important. In section 6.1 I’ll argue that not being a hypocrite is such a notion; the internalist cares a lot about it, and it is a coherent notion. There is a further question of whether this should be relevant to our belief, our action or our evaluation of others. If someone is a knave, need we care further about whether they are a sincere or hypocritical knave? I’ll argue that at the end of the day we should not care; it isn’t worse to be hypocritical.19

  • 19 My instinct is that there is something preferable about the hypocrite compared to the person who does wrong while thinking they are doing the right thing. After all, the hypocrite has figured out a moral truth, and figuring out moral truths typically reflects well on a person. But I’m not going to try to turn this instinct into an argument in this book.

  • The debates I’m joining here have something else in common with familiar internalist/externalist debates. Many philosophers will be tempted to react to them by saying the parties are talking at cross-purposes. In fact, there are two ways that it might be thought the parties are at cross-purposes.

First, it might be thought the parties are both right, but they are talking about different things. The normative internalist is talking about subjective normativity, and saying true things about it, while the normative externalist is talking about objective normativity, and saying true things about it. One of the running themes of this book will be that this isn't a way of dissolving the debate; it is a way of taking the internalist's side. Just like in debates about content, and in debates about epistemic justification, the externalist denies that there is any notion that plays the role the internalist wants their notion to play. To say the notion exists, but isn't quite as important as the internalist says it is, is to concede the vast majority of what the externalist wants to contest.

    The second way to say that the theorists are talking at cross-purposes is to say that their differences merely turn on first-order questions about ethics and epistemology. What the internalist calls misleading evidence about morality, the externalist calls first-order reasons to act a different way. And what the internalist calls higher-order evidence, the externalist calls just more first-order evidence. This is, I’m going to argue, an externalist position, and not one that the internalist should happily sign on to. It is, very roughly, the view I want to defend in epistemology. What has been called higher-order evidence in epistemology is, when it is anything significant at all, just more first-order evidence. It is also a possible externalist view in ethics, though not one I want to defend. In particular, it is the view that misleading evidence about morality changes the objective circumstances in a way that changes what is good to do. I don’t think that’s typically true, but it is a possible externalist view.

    All that said, there are two ways in which what I’m saying differs from familiar internalist/externalist debates. One is that what I’m saying cross-cuts the existing debates within ethics and epistemology that often employ those terms. Normative externalism is compatible with an internalist theory of epistemic justification. It is consistent to hold the following two views:

    • Whether S is justified in believing p depends solely on S’s internal states.
    • There is a function from states of an agent to permissible beliefs, and whether an agent’s beliefs are justified depends solely on the nature of that function, and the agent could in principle be mistaken, and even rationally mistaken, about the nature of the function.

    The first bullet point defines a kind of internalism in epistemology. The second bullet point defines a kind of externalism about epistemic norms. But the two bullet points are compatible, as long as the function in question does not vary between agents with the same internal states. The two bullet points may appear to be in some tension, but their conjunction is more plausible than many theses that have wide philosophical acceptance. Ralph Wedgwood (2012), for example, defends the conjunction, and spends some time arguing against the idea that the conjuncts are in tension.

    And normative externalism is compatible in principle with the view in ethics that there is an internal connection between judging that something is right, and being motivated to do it. This view is sometimes called motivational internalism  (Rosati 2016). But again, there is a tension, in this case so great that it is hard to see why one would be a normative externalist and a motivational internalist. The tension is that to hold on to both normative externalism and motivational internalism simultaneously, one has to think that ‘rational’ is not an evaluative term, in the sense relevant for the definition of normative externalism. That is, one has to hold on to the following views.

    • It is irrational to believe that one is required to φ, and not be motivated to φ; that’s what motivational internalism says.
    • An epistemically good agent will follow their evidence, so if they have misleading moral evidence, they will believe that φ is required, even when it is not. The possibility of misleading moral evidence is a background assumption of the debate between normative internalists and normative externalists. And the normative externalist says that the right response to misleading evidence is to be misled.
    • An agent should be evaluated by whether they do, and are motivated to do, what is required of them, not whether they do, or are motivated to do, what they believe is required of them. Again, this is just what normative externalism says.

    Those three points are consistent, but they entail that judging someone to be irrational is not, in the relevant sense, to evaluate them. Now that’s not a literally incoherent view. It is a souped-up version of what Niko Kolodny (2005) argues for. (It isn’t Kolodny’s own view; he thinks standards of rationality are evaluative but not normative. I’m discussing the view that they are neither evaluative nor normative.) But it is a little hard to see the attraction of the view. So normative externalism goes more happily with motivational externalism.

    And that’s the common pattern. Normative externalism is a genuinely novel kind of externalism, in that it is neither entailed by, nor entails, other forms of externalism. But some of the considerations for and against it parallel considerations for and against other forms of externalism. And it sits most comfortably with other forms of externalism. So the name is a happy one.

    1.10 Plan of Book

    This book is in two parts: one about ethics, the other about epistemology.

The ethics part starts with a discussion of the motivations for internalism in ethics. It then spends two chapters arguing against strong forms of internalism. By strong forms, I mean views where some key moral concept is identified with acting in accord with one's own moral beliefs. So this internalist-friendly condition (I'm doing what I think I should do) is both necessary and sufficient for some moral concept to apply. After this, I spend two chapters on weak forms. In chapter 5, I discuss a view where blameworthiness requires that one believe one was doing the wrong thing. In chapter 6, I discuss a view where doing what one thinks is wrong manifests a vice, even if the action is right. These don't cover the field of possible views, but they are important versions of views that hold that internalist-friendly conditions have a one-way connection to key moral concepts. The internalist-friendly conditions in these cases provide either a necessary or a sufficient condition for the application of a key moral concept, but not both.

    I then turn to epistemology. The organising principle that I’ll be defending is something I’ll call Change Evidentialism: only new evidence that bears on p can compel a rational agent to change their credences in p. The forms of internalism that I’ll be opposing all turn out to reject that. And the reason they reject it is that they think a rational person can be compelled to change their credences for much more indirect reasons. In particular, the rational person could get misleading evidence that the rational attitude to take towards p is different to the attitude they currently take, and that could compel them to change their attitude towards p. I’m going to argue that this is systematically mistaken. And this has consequences for how to think about circular reasoning (it’s not as bad as you think!), epistemic akrasia (it’s not as bad as you think!), and standing one’s ground in the face of peer disagreement (it’s really not as bad as you think!).