4  A Dilemma for Internalism

In the previous chapter I argued against the idea that we should treat factual uncertainty and normative uncertainty symmetrically. In this chapter I’ll assume, for the sake of argument, that those arguments are unsuccessful. The upshot would be that we should prefer theories that respect this symmetry. But this preference cannot be absolute. As with everything else in philosophy, we have to ask what the cost of satisfying this preference would be.

And in this chapter I’ll argue that the costs are not worth paying. Three kinds of theory are possible. There are the externalist theories that I favour, which unqualifiedly approve of doing the right thing. There are theories that adopt an unqualified version of symmetry, treating all uncertainty the same way. I’ll argue that such theories are implausibly subjective. And there are theories that adopt a half-hearted version of symmetry. I’ll argue that these theories are under-motivated. There is, I’ll argue, no theoretical advantage to be gained by incorporating a half-hearted symmetry principle. And there is much to be lost by giving up the idea that one should do the right thing.

The argument I’m offering here is based on a very similar argument that Miriam Schoenfield (2015) offers against various kinds of normative internalism in epistemology. The idea our arguments share is that the more subjective an internalism gets, the less plausible its verdicts about cases are, while the more objective it gets, the less well it is motivated by symmetry. Schoenfield is primarily interested in developing a problem for some forms of normative internalism in epistemology, but as we’ll see, the same dilemma arises for internalism in ethics.

4.1 Six Forms of Internalism

The following schema can be converted into one of six internalist theses by picking one of the three options on the left and one of the two options on the right.

  • Rightness/Praiseworthiness/Rationality is choosing an action with the highest credal/evidential expected goodness.

In every case ‘goodness’ is meant to be interpreted de dicto and not de re. That is, what has highest credal expected goodness is a function of the agent’s beliefs (or more precisely her credences) in various hypotheses about goodness. And what has highest evidential expected goodness is a function of her evidence about what is and is not good. If we interpret ‘goodness’ de re, then the principle is consistent with various forms of externalism; the de dicto interpretation is what makes these internalist theses.

The six theses we generate that way are all very strong. They all offer both necessary and sufficient conditions for an interesting concept. In the next two chapters, we’ll look at internalist views that only offer necessary, or only offer sufficient, conditions for one of these. But it’s helpful to start with the strong views to see what constraints there are on a viable internalism.

And I really want the six theses to be understood in an even stronger way. They should be understood to be explanatory in a right-to-left direction. So the view in question is not just that rightness (say) is co-extensive with maximising credal expected goodness, but that some act is right because it maximises credal expected goodness. This is, I think, implicit in the work of the internalists I’ll cite below. And it makes sense given the idea that factual and normative uncertainty should be treated the same way. Orthodox decision theory doesn’t just say that rational action is co-extensive with expected utility maximisation; it says that some act is rational because no alternative has higher expected utility.

It will help to have some abbreviations for the six theories. I’ll use abbreviations for all five of the possible choices, and concatenate them to get abbreviations for the whole theory. I’ll use Ri for rightness, Pr for praiseworthiness, Ra for rationality, C for credal and E for evidential. So, for instance, here are two theses one can express using this terminology.

  • RiE - Rightness is doing the action with the highest evidential expected goodness.
  • PrC - Praiseworthiness is doing the action with the highest credal expected goodness.

I’ve picked these because they are close to two theses endorsed by Michael Zimmerman (2008). They aren’t exactly what he endorses; he leaves it open whether agents should be using expected value calculations, or some nearby variant. But they are nice, clean theories, and for that reason useful for theorising about. And Zimmerman is hardly the only theorist to endorse something in the vicinity. Andrew Sepielli (2009) endorses something like RaC, and Michael Smith (2006, 2009) endorses something like PrC and RaC.

The short version of this chapter is that the following three theses are both true and deeply problematic for any kind of internalism.

  1. Both RiC and PrC theories make false claims about cases of what Nomy Arpaly (2003, 10) calls “inadvertent virtue” and “misguided conscience”.
  2. The E theories are unmotivated; they are a compromise between two extreme theories, but they inherit the vices and not the virtues of those extremes.
  3. The Ra theories posit an asymmetry between cases of factual and normative uncertainty that undermines another kind of symmetry the internalist takes to be intuitive.

So none of the six theories is true. But more than that, the way in which the six theories collectively fail suggests that the problem won’t be solved by adding epicycles, or by weakening the theories to deal with hard cases. There is no version of normative internalism in ethics that is both motivated and plausible.

Sections 4.3–4.5 will deal with each of these theses in order. But first I need to say something about the assumptions behind the chapter. In particular, I need to say something about which possible theses are being set aside until the end of the chapter. And saying something about why we’re setting various views aside will help position this chapter in the rest of the book.

4.2 Two Difficult Cases

There are four ways one could try to motivate normative internalism: by appeal to cases, by appeal to principles about coherence, by appeal to principles about guidance, and by appeal to symmetry. The first two are notably absent in the literature on normative internalism in ethics, though they will play a major role when we turn to epistemology.

There are, to be sure, plenty of arguments that talk about cases where agents have specified credences in theories T1 or T2, but typically these arguments will not specify what T1 and T2 are. See, for example, Gustafsson and Torpman (2014), and the papers cited therein, for instances of this phenomenon.1 I don’t think these are really arguments from cases, since nothing like a case that we can have intuitions about is specified until we are told at least roughly what T1 and T2 are. If we were told that, for example, T1 is Saint-Just’s theory that the world has been empty since the Romans, and T2 is Ayn Rand’s version of egoism, we would have an example that we could have intuitions about.2 Lockhart (2000) does include some case studies where he assigns credences to particular moral theories, including Rand’s but not, as it turns out, Saint-Just’s. But this isn’t part of his defence of internalism; it’s in the service of arguing from his internalist theory to various claims in applied ethics.

  • 1 And, for what it’s worth, in the papers I’ve seen so far citing Gustafsson and Torpman (2014), though that may change.

  • 2 I’m being flippant in reducing Saint-Just’s moral and political theory to his aphorism about the Romans, but the details aren’t really that important for what’s going on here. See Williams (1995) for a more serious treatment of Saint-Just’s worldview, and the earlier references on Robespierre for more details on Saint-Just’s biography.

    Now it isn’t a bad thing that internalists don’t argue from cases to theories. Indeed, there has recently been much criticism, in the literature on philosophical methodology, of philosophers’ reliance on cases. (See Nagel (2013) for a discussion of, and reply to, some of that criticism.) But it does reduce how much we have to discuss here.

    It will also be best to leave pure coherence-based arguments until we get to epistemology. There is something intuitive about the following argument. It is incoherent to think that X is the unique right thing to do, but instead decide to do Y. Incoherence, in this sense, is a kind of irrationality. So rationality requires an internal connection between moral beliefs and action. Rather than discuss that argument directly, I’ll just note that it is no more powerful than the following argument. It is incoherent to think that p is the unique conclusion supported by a body of evidence, but nevertheless believe q on the basis of that evidence. Incoherence, in this sense, is a kind of irrationality. So rationality requires an internal connection between epistemological beliefs and, well, beliefs. That looks like a pretty good argument at first glance too. Indeed, it is hard to see why we could accept the argument about moral coherence that I opened the paragraph with and not accept this argument about epistemological coherence. I’ll deal with this epistemological argument at great length in Part II of this book, and argue that it doesn’t work, so I’ll largely set the moral version of that argument aside for now.

    But there is one version of the coherence argument that I want to more explicitly set aside. Consider a theory that accepts all three of the following principles. (See Markovits (2014) for a sophisticated version of the kind of theory I have in mind, but note that I’m simplifying a lot here to make a methodological point.)

    • One should always do the right thing, and one should do the right thing in virtue of the right-making features of those actions, not in virtue of one’s moral beliefs.
    • Rationality requires that one’s moral beliefs include all and only the true moral propositions.
    • Immoral action is irrational.

    Such a theory might agree with something like RaC. At the very least, it will say that rationality requires doing the action with the highest credal expected goodness. But that’s because rationality requires both that one give credence 1 to the true claim about which action is good to perform, and that one perform the action that is good to perform.

    Is this theory internalist or externalist? I don’t think it helps to try to classify it. Just note that I’m setting it aside. More generally, I’m setting aside theories that make moral omniscience the standard for moral rationality. Rational people can make mistakes; at the very least they can fail to believe some truths. That’s true in science, it’s true in everyday life, and it’s true, I’m assuming, in ethics and epistemology.3

  • 3 This isn’t an argument for this assumption, but perhaps a quick explanation for why the assumption seems plausible to me is in order. All arguments I’ve seen for the view that rationality requires moral omniscience have some kind of enkratic principle as a premise. And for reasons I will go over in Part II of the book, I don’t think these enkratic principles are very plausible. Claire Field (forthcoming) has a very good critical discussion of the arguments for this assumption.

    I discussed the guidance arguments earlier in the book, and argued that they only supported an implausibly subjectivist version of internalism. Not coincidentally, that’s going to be similar to what I say in this chapter about the symmetry argument. But you might think there is another way to block the argument from symmetry to internalism. This chapter and the last have been focussed on the following argument.

    1. Expected utility theory provides the correct treatment of decision making under factual uncertainty.
    2. Factual uncertainty and normative uncertainty should be treated symmetrically.
    3. So some kind of internalist theory provides the correct treatment of decision making under moral uncertainty.

    That’s not valid, because a lot of the terms in it are rather vague. But I’m not going to dispute the inference here; if the premises are both true, then they will support some kind of theory that I want to reject.

    I’m also going to assume, for now, that premise 1 of this argument is basically correct. And this is a substantive assumption. There is one very important moral theory that rejects premise 1 (under one important disambiguation of it). That’s the traditional consequentialist theory that says that the moral status of an action is a function of the consequences it actually has  (Sidgwick 1874; Smart 1961). I’m simply going to assume that’s false for now, and come back to it at the end of the chapter. Note that I’m not assuming that modern consequentialist theories, like the decision-theoretic consequentialism Frank Jackson (1991) defends, are false. I’m just setting aside views on which factual uncertainty is irrelevant to the moral status of an action.

    So to recap, we’re making two large presuppositions at this stage of the dialectic. The defence of these presuppositions is largely in earlier chapters, but as noted above, some of it is to come. The presuppositions are:

    1. The best argument for normative internalism is an argument from the symmetrical treatment of factual and normative uncertainty. This is an argument for a kind of internalism because (contra traditional consequentialism) factual uncertainty matters to the moral and rational status of actions.
    2. Neither rationality nor morality requires moral omniscience, so if the morality or rationality of an action is sensitive to the actor’s actual credence in moral propositions, or to the rational credence in those propositions given their evidence, then in some sense what they should do will differ from what the true (but unknown) moral or epistemological theory says they should do.

4.3 Inadvertent Virtue and Misguided Conscience

    This section and the two that follow will defend the three theses from the end of section 4.1. So our aim here is to defend:

    • Both RiC and PrC theories make false claims about cases of what Nomy Arpaly (2003, 10) calls “inadvertent virtue” and “misguided conscience”.

    Arpaly’s paradigm of inadvertent virtue is Huck Finn, so we’ll start with her description of his story.

    At a key point in the story, Huckleberry’s best judgment tells him that he should not help Jim escape slavery but rather turn him in at the first available opportunity. Yet when a golden opportunity comes to turn Jim in, Huckleberry discovers that he just cannot do it and fails to do what he takes to be his duty, deciding as a result that, what with morality being so hard, he will just remain a bad boy (he does not, therefore, reform his views: at the time of his narrative, he still believes that the moral thing to do would have been to turn Jim in). If one only takes actions in accordance with deliberation, or the faculty of Reason or ego-syntonic actions […], to be actions for which the agent can be morally praised, Huckleberry’s action is reduced to the status accorded by Kant to acting on “mere inclination” or by Aristotle to acting on “natural virtue.” He is no more morally praiseworthy for helping Jim than a good seeing-eye dog is praiseworthy for its helpful deeds. This is not, however, how Twain sees his character. Twain takes Huckleberry to be an ignorant boy whose decency and virtue exceed those of many older and more educated men, and his failure to turn Jim in is portrayed not as a mere lucky accident of temperament, a case of fortunate squeamishness, but as something quite different. Huckleberry’s long acquaintance with Jim makes him gradually realize that Jim is a full-fledged human being, a realization that expresses itself, for example, in Huckleberry’s finding himself, for the first time in his life, apologizing respectfully to a black man. While Huckleberry does not conceptualize his realization, it is this awareness of Jim’s humanity that causes him to become emotionally incapable of turning Jim in. To the extent that this is Huckleberry’s motive, Twain obviously sees him as praiseworthy in a way that he wouldn’t be if he were merely acting out of some atavistic mechanism or if he were reluctant to turn Jim in out of a desire to spite Miss Watson, Jim’s owner. Huckleberry Finn is not treated by his creator as if he were acting for a nonmoral motive, but rather as if he were acting for a moral motive–without knowing that it is a moral motive. (9–10)

    Here are a few basic truths about Huckleberry’s actions in helping Jim remain free.

    1. Huckleberry does the right thing.
    2. Huckleberry does not do the wrong thing.
    3. Huckleberry is praiseworthy for helping Jim remain free.
    4. Huckleberry is not blameworthy for helping Jim remain free.

    If a philosophical theory rejects any of those four claims, it is wrong. Here are two more claims that I think are true, though I’m not going to rest any argumentative weight on them, since I suspect they will strike most readers as, at best, controversial.

    5. Huckleberry’s upbringing, and in particular the testimony from his parents, friends and teachers, provides strong evidence for the false moral theory that he in fact believes, namely that morality requires him to turn Jim in, and Huckleberry’s relationship with Jim does not provide strong enough counter-evidence to make that belief irrational.
    6. Huckleberry is rational, and not irrational, to help Jim to remain free.

    If all of 1 through 6 are true, then all six of the theories we started with are false. Turning in Jim maximises both credal and evidential expected goodness. But helping Jim is right (1), praiseworthy (3), and rational (6). So all six theories are false.

    The argument of the last paragraph relies heavily on 5 and 6, though. If 5 is false, then the case does not show any of the E forms to be false. And if 6 is false, the story does not show either of the Ra versions to be false. So without relying on 5 and 6 (and I’m not going to rely on them), we can’t argue against all forms of normative internalism using just Huckleberry Finn. But we can argue against some forms. Consider first RiC and PrC. The Huckleberry Finn case shows these to be simply false. Huck does the right thing, and is praiseworthy, although he clearly minimises credal expected goodness (at least relative to the live choices).

    Huckleberry is a case of what Arpaly calls ‘inadvertent virtue’. We can also put pressure on internalism by looking at cases of what she calls ‘misguided conscience’. I’ll use some cases described by Elizabeth Harman (2011), focussing on her examples that involve currently contested moral issues. (As Harman notes, if you don’t find these examples forceful because you don’t agree with the underlying moral theory, you could easily ‘reverse’ the cases to make a similar point.)

    Consider someone who believes abortion is wrong and who yells at women outside abortion clinics. It is wrong to yell at women outside abortion clinics: these women are already having a hard time and making their difficult decision more psychologically painful is wrong. But this person acts in a way that would be permissible if her moral views were true. Another example is someone who believes abortion is wrong and who kills an abortion doctor, in a part of the country where there is good reason to think that this doctor’s death will reduce the number of abortions. This person believes that he ought to kill abortion doctors if doing so would reduce the number of abortions that would be performed. A third example is someone who believes homosexuality is wrong who organizes a campaign against the legalization of gay marriage. He believes he is doing something morally good in organizing the campaign; in fact, in working to further oppression, he is acting wrongly. (458)

    As it stands, the various versions of the C theories say that these three actors are either acting rightly, or praiseworthily, or rationally. And again, the first two of these evaluations are wrong, at least if abortion and gay marriage really are morally permissible. Note that I’m not here claiming that the false moral beliefs involved are normatively irrelevant; it’s consistent with what I say here that the characters in Harman’s stories are blameless without being praiseworthy. I’m going to argue against that view in the next chapter, but I’ll set it aside for now. What we need to focus on first is whether their mistaken moral belief suffices for their action being praiseworthy, and it does not.

4.4 Ethics and Epistemology

    In the previous section we looked at arguments against C theories; theories that linked normative statuses to the agent’s own credences. In this section we’ll look at E theories, with the aim being to defend this principle.

    • The E theories are unmotivated; they are a compromise between two extreme theories, but they inherit the vices and not the virtues of those extremes.

    I’m going to start by making the case against this principle, that is, the case that the E theories are in fact well motivated. That’s partially because I think most internalists in philosophy prefer these to the C theories. And it’s partially because the E theories are an interesting attempt to solve a hard problem. But the problem they are trying to solve is really not solvable, and the attempt just inherits the vices of the positions it is trying to avoid without any offsetting virtues.

    The debate will get very theoretical very quickly, so to try to keep things a little grounded I’ll start with a fairly familiar kind of case. Zaina has been threatened by a group of determined pranksters. She is told, convincingly, that unless she pranks one innocent person, the group will prank that person and one hundred other people this week. But if she does perform the prank, the group will perform no pranks this week. And she knows that whatever happens this week will have no effect on how many pranks the group performs after this week. The prank in question is unpleasant for its victim; Zaina would not like to be the victim of such a prank. And while it might be mildly amusing for onlookers and perpetrators, Zaina knows that each performance of the prank makes the world worse.

    What Zaina doesn’t know is what the correct moral theory is. She has studied some philosophy as an undergraduate, and gives some credence to a consequentialist moral theory, according to which she should perform the prank so as to minimise prank performances, and the rest of her credence to a deontological theory, according to which it would be wrong of her to directly harm an innocent victim of her prank. And this is, we’ll assume, a perfectly reasonable reaction to the moral evidence she has been presented with. (If you don’t believe this is possible, substitute some other pair of theories that you think a thoughtful undergraduate could be unsure between after some kind of introductory philosophy course, and which recommend different actions in a particular puzzle case. It is a little unrealistic to think that Zaina could know that the truth is in one of these two places, and that will matter a bit below.)

    Zaina doesn’t know what she should do. But she also doesn’t know what action will maximise expected goodness. She knows that according to the consequentialist theory, performing the prank maximises goodness. She knows that according to the deontological theory, not performing the prank maximises goodness. But she needs to know a lot more than that to work out what maximises expected goodness. She needs to fill in two variables in the following table.

                              Consequentialist (Pr = p)   Deontologist (Pr = 1-p)
    Perform Prank                        -1                         -v
    Don’t Perform Prank                 -101                         0

    The expected value of not pranking is -101p. The expected value of pranking is -p - v(1-p). Figuring out which of these is larger requires solving two hard problems: working out exactly how likely it is that the consequentialist theory is true, and working out how to put the violation of a deontological duty on the same scale as the difference between better and worse consequences.
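    To make the comparison explicit, here is the same calculation written out as a worked derivation. The rearranged threshold at the end is just algebra on the two formulas above, included to show how the verdict turns on both unknowns at once.

```latex
% Expected goodness of each option, using the payoffs from the table above.
\begin{align*}
  EV(\text{don't prank}) &= p \cdot (-101) + (1-p) \cdot 0 = -101p, \\
  EV(\text{prank})       &= p \cdot (-1) + (1-p) \cdot (-v) = -p - v(1-p).
\end{align*}
% Not pranking maximises expected goodness just in case
%   -101p \geq -p - v(1-p),
% which rearranges to v(1-p) \geq 100p, i.e. v \geq 100p / (1-p).
% So the answer depends on both p and v, the two unknowns discussed in the text.
```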

    The latter problem is very hard, and we’ll come back to it in chapter 6. Ted Lockhart (2000) had a nice idea about how to make progress on it, but Andrew Sepielli (2009) shows that it doesn’t work. Brian Hedden (2016) uses the difficulty of this problem to argue against internalist theories generally. William MacAskill (2016) thinks that the problem is hard enough that we should respond by not trying to maximise the expected value of some random variable in cases of moral uncertainty, but instead using tools from social choice theory such as voting methods. I’m very sympathetic to MacAskill’s approach, insofar as I think that, conditional on wanting an internalist theory of action under moral uncertainty, using tools from social choice theory is more promising than trying to find a value for v. But if we go down this route, we’ve given up the symmetry between moral and factual uncertainty, and as I argued at the start of this chapter, without that symmetry it is very hard to motivate internalism. So I’ll assume that Zaina has to find out, or at least be sensitive to, the value of v.

    Now the normative externalist has an easy thing to say about Zaina’s case. If consequentialism is the true moral theory, then she should perform the prank to spare the other 100. If the deontological theory is true, then she should not perform the prank, since she should not commit such an immoral act. And that’s all there is to say about the case. It might help Zaina to know what the right moral theory is, but it isn’t necessary. If she performs the prank out of care for the welfare of the 100 people she is saving then, if consequentialism is true, she does the right thing for the right reasons. If she declines to perform the prank because it would disrespect the victim of the prank, then, if the deontological theory is true, she does the right thing for the right reason. Neither of the last two sentences requires that Zaina know that she is doing the right thing or that her reasons are right; what’s needed at most is conformity between her motivations and the right-making features of actions.

    But the internalist tends to find this answer unsatisfactory for two reasons. The reasons tend to pull in opposite directions. The first reason is that it is in one respect too demanding. While it does not require Zaina to know something she has insufficient reason to believe, namely what the right thing to do here is, it does require her to be sensitive to some fact she is unaware of. That fact is, simply, what the right thing to do in this situation is. The second reason is that it is in a different respect too weak. Zaina could be massively incoherent, and the externalist would find nothing wrong with her. Indeed, my preferred version of externalism says Zaina should be incoherent in some respects. It says that if true moral theory says that some factor is of no significance, then Zaina should give it no weight in her calculation, even though she thinks, and should think, that there is a decent probability this factor is very morally important.4 And many philosophers seem to find it extremely implausible that Zaina could be right, and rational, and praiseworthy, all without qualification, while there is a serious mismatch between her moral beliefs and her actions.

  • 4 I try to offset the oddness of this result by adopting an extremely pluralist first-order moral theory, so very few things that are plausibly of moral significance turn out to be irrelevant. But I don’t want my defence of normative externalism to turn on this pluralism.

  • 5 I’m setting aside, apart from in this footnote, two problems with this view. As Eric Schwitzgebel (2008) notes, we are often mistaken about what we believe. And thinking that Zaina’s beliefs settle the value of v requires adopting a ‘desire as belief’ view that faces various technical problems (Lewis 1988, 1996; Russell and Hawthorne 2016).

    So let’s try the opposite extreme, one suggested by our discussion of Descartes in chapter one. (Though what we start with will not be the view Descartes actually endorses.) What matters for morality is the match between credences and action. So as long as Zaina does what she thinks is best, or perhaps what maximises expected goodness, she does the right thing. In that case she acts rightly, is praiseworthy, and is rational. While she needs to find values for p and v, she gets them by introspecting her beliefs, not by looking hard into the external world.5 And the hero of this internalist Cartesian story is bound to be coherent, at least in the sense of having their views about what to do match up with the actions that are within their control.

    But such a theory says some odd things about a different character, Antoine, who was threatened by the pranksters just last week. Antoine believes, rightly, that such a threat is a terrible affront to his dignity as a free person. He further believes, wrongly, that the only appropriate response to such an affront is to kill everyone who makes the threat. Fortuitously, Antoine is as bad at figuring out how to kill as he is at figuring out who to kill, so no one gets hurt. But we shouldn’t let this lucky break obscure the fact that what Antoine does is seriously wrong. And yet, the Cartesian internalist has a problem with this. Antoine does exactly what his conscience tells him to do. He is as resolute a person as one could look for. And he is a villain; someone to be loathed and avoided, not admired.

    So there is an easy and natural way out of the problem Antoine poses. Indeed, it is one that is entailed by the rest of what Descartes says in philosophy. Antoine does believe that killing the pranksters is moral, but this belief is extremely irrational. What he should be guided by is not his actual worldview, which is abhorrent, but the moral evidence that he has. And while we can’t say for sure how that evidence would resolve a problem like the pranksters, we know it would not endorse a massacre.

    And this is, I think, a natural motivation for the E theories. There is something intuitively appealing about trying to find a middle way between the externalist view that requires people to do the right thing without saying what that is, and the kind of subjectivism that has nothing plausible to say about Antoine.

    But there are still problems. Indeed, the problems with this kind of worldview were pointed out by Princess Elizabeth in her correspondence with Descartes. The core problem is that this ‘way out’ requires treating ethics and epistemology very differently, and there is no justification for this differential treatment.

    Antoine doesn’t just believe that the moral thing to do is to kill the pranksters. He believes that his evidence supports that conclusion. If we are to say that what he does is wrong in some respect, then we have to insist that this does not matter. What he should believe is a function of what the evidence actually supports, not what he thinks it supports.

    But now a version of the demandingness objection returns with a vengeance. The internalist thought was that it is really unfair to require Zaina to be sensitive to a fact that she does not know, namely whether a consequentialist or deontological moral theory is correct. The proposed response now requires that she be sensitive to two facts that she does not know, namely which values of p and v are best supported by her evidence. And worse than that, we have replaced one yes-no question with two quantitative questions. This does not feel like progress.

    I’ve skated over a division between ways the internalist might require that Zaina be sensitive to her evidence. First, they might require that she have the beliefs that are best supported by the evidence, and then act as those beliefs recommend. This is the version of the view that requires a fairly strong form of normative externalism in epistemology. There is no guarantee that Zaina knows, or even could know, what the rational credence in consequentialism given her evidence actually is. So requiring her to have credences supported by her evidence is requiring her to follow a norm that she does not, and could not, know. And avoiding that was supposed to be a big payoff for internalism. So this way of defending the E theories seems unmotivated.

    But alternatively, the internalist here might just say that Zaina has to be sensitive to her evidence, not that she must know what her evidence supports. As far as it goes, the externalist agrees with this point. The externalist view is that the following three things are in principle separable. (In every case, read ‘believe’ as meaning ‘fully or partially believe’; this covers appropriate credences as well as appropriate full beliefs.)

    1. What Zaina should do.
    2. What Zaina should believe about what she should do.
    3. What Zaina should believe about what she should believe about what she should do.

    The E theories say that while 2 and 3 might come apart, there is a tight connection between 1 and 2. There’s nothing incoherent about that. But it is rather hard to motivate. The following situation is possible. (And thinking through this situation is helpful for getting clear on just what the E theories are saying.)

    The true moral theory is deontological, so true morality requires that Zaina not perform the prank. The rational values for p and v given Zaina’s evidence are 0.2 and 15. That is, violating a deontological norm is (according to Zaina’s evidence) as bad as the consequentialist thinks letting 15 people be pranked is, but consequentialism is fairly unlikely to be true. So given Zaina’s evidence, it maximises expected goodness to perform the prank. But Zaina’s credal distribution over possible values of p and v is centred a little off those true values, on 0.15 and 20. And while this isn’t right, her margin of error in assessing what her evidence supports concerning p and v is great enough that she can’t know these are the wrong values. So given her credences, she thinks her evidence supports not performing the prank.
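    For readers who want to check the numbers, here is the arithmetic behind those last two claims, treating the stated centres of Zaina’s credal distribution as point values purely for illustration.

```latex
% With the evidentially rational values p = 0.2 and v = 15:
\begin{align*}
  EV(\text{prank})       &= -0.2 - 15(0.8) = -12.2, \\
  EV(\text{don't prank}) &= -101(0.2) = -20.2.
\end{align*}
% So relative to her evidence, pranking maximises expected goodness.
%
% With the values her credences are centred on, p = 0.15 and v = 20:
\begin{align*}
  EV(\text{prank})       &= -0.15 - 20(0.85) = -17.15, \\
  EV(\text{don't prank}) &= -101(0.15) = -15.15.
\end{align*}
% So by her own lights, not pranking comes out ahead.
```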

    Given all that, what is the sense in which she should perform the prank, in which it would be more rational, or moral, or praiseworthy, to perform the prank? It’s true that if she were better in some respect - in respect of having credences that actually tracked her evidence - then she would perform the prank. But if she were fully moral, she would not perform the prank. And if she maximised expected goodness given her perspective, she would perform the prank only if she were a little better epistemically, without being better morally. But what philosophical significance could that counterfactual have?

    While the case is artificial, it fits a natural enough pattern. Someone makes a pair of mistakes. These are mistakes - they are irrational things to do - but they are perfectly understandable since the task in question is hard. Happily, the mistakes offset, so the person ends up doing something they would do if they made neither mistake. But there is some other option that the person would take if they fixed one particular mistake. Does that fact mean that the ‘other option’ is something the person should do, or morally ought to do, or is praiseworthy for doing, or is rational for doing? It doesn’t seem like it; it seems rather that all we can say about that option is this rather technical claim that it has only one of two salient vices.

    And that’s the pattern for the E theories in general. They are unhappy half-way houses. If we want people to follow standards that they cannot know in full detail, those standards may as well be the standards of true morality. If we don’t want to require this of people, then what their evidence supports is not determinative of what we can demand of them. Just what the evidence supports is sometimes hidden too. But if we start being too permissive, we end up saying nicer things than we really want to say about Antoine. There are a lot of choice points here, but none of them lead to a viable version of normative internalism.

4.5 Rationality and Symmetry

    In the previous two sections, I argued against five of the six theories we started with. All that is left is RaC, and that will be the focus of this section. We’ll start with a case modelled on an argument that Nomy Arpaly gives in response to a theory of Michael Smith’s  (Arpaly 2003, 36–46), then turn to the difficulties the internalist could have in motivating RaC by symmetry considerations.

    Think again about Huckleberry Finn. I said it was rational of Huckleberry to help his friend Jim. But that’s obviously controversial. One might think it is rational for Huckleberry to do what he thinks is good or right. At least, doing what Huckleberry believes to be bad and wrong seems like a kind of irrationality. If so, Huckleberry is irrational, and this might lend some support to a theory like RaC.

    The last sentence of the previous paragraph is a non sequitur. If Huckleberry is rationally required to do what he thinks is good, it does follow that what he does is irrational. But it doesn’t follow that turning Jim in would be rational, unless the requirement to do what one believes is good is the only rational requirement there is. And that’s not true.

    Let’s leave Huckleberry for a second and think about a different character, Noah. Noah has a friend, Lachlan, who he is thinking of turning in as a runaway slave. He firmly believes that it is a moral duty to turn in runaway slaves, and that Lachlan is such a runaway slave. But both of these beliefs are absurd. Noah lives in Australia in the early 21st century, and there is no slavery. And he has been exposed to compelling reasons at school to believe that slavery is a grave wrong, and that people who helped runaway slaves were moral heroes. But Noah has somehow formed the implausible beliefs he has, and is now deciding whether to act on them.

    Noah is irrational. Noah’s beliefs that Lachlan is a runaway slave, and that turning in runaway slaves is morally required, are both irrational. If Noah attempts to turn Lachlan in, though, would that be rational? I doubt it. One might say that it would be irrational to not attempt to turn Lachlan in, given Noah’s other beliefs. I rather doubt this too, but we don’t have to resolve the question. Even if not attempting to turn Lachlan in would be irrational, it might be the case that attempting would also be irrational. There is no rule that says anyone has a rational option in any situation, no matter how many irrational things they have done to create the situation. Turning Lachlan in is a manifestation of some extremely irrational beliefs; it is irrational.

    As Arpaly points out, the only way to motivate the idea that Noah is rationally required to do what he believes is good is to impose very strong coherence constraints on rational thought and action. We have to say that rationality in action requires coherence between thought and deed, even when that clashes with doing what one’s evidence supports. But turning Lachlan in would be bad even by the standards of coherence. Such an action would not cohere at all well with the mountains of evidence Noah has about slavery.

    As I mentioned above, it is arguable that Noah’s case is a rational dilemma. Perhaps Noah is irrational if he turns Lachlan in, since he does something that he has no evidence is a good thing to do, and he is irrational if he does not, since this actions do not cohere with his judgments. But even saying that Noah faces a rational dilemma does not help the internalist here. For if Noah is in a rational dilemma, that’s still a way of saying that rationality does not line up with maximising expected goodness. After all, maximisation norms never, on their own, lead to dilemmas.

    We will have much more to say about the possibility of dilemmas in cases like this in subsequent chapters. But perhaps it is useful to note here that even if Noah is in a dilemma, it is an extremely asymmetric one. Even if you think it is somewhat irrational to act against his best judgment and fail to turn Lachlan in, it is much more irrational to act on no evidence whatsoever, and actually turn him in. So RaC doesn’t even provide a way to track what is most rational, or least irrational.

    So RaC is false. It isn’t only one’s belief in what is good that is relevant to what it is rational to do, one’s basis for that belief matters as well. But there’s another reason to be suspicious of the Ra versions of the principles. Recall Cressida, our example of a reckless driver. What she does is irrational. But that’s not all that’s true of her actions. What she does is blameworthy and wrong. If we want to accept the internalist’s symmetry principle, we have to say that whatever is true of Cressida is true of Huckleberry Finn. Saying that Cressida and Huck are alike in one respect, namely that they are both irrational, isn’t a way of endorsing symmetry.

    In fact, thinking about the analogy with Cressida gives us a reason to think that Huckleberry really is rational in what he does. Assume, for reductio, that Huck‘s action is irrational, in the way that Cressida’s driving is irrational. Cressida, of course, also acts wrongly. What is the relationship between the irrationality of Cressida’s action, and its wrongness? If the irrationality wholly explains the wrongness, then the irrationality of Huck’s action should ’explain’ the wrongness of it. But that can’t be right, since Huck’s action isn’t wrong. If the wrongness wholly explains the irrationality, then there is no argument from symmetry for thinking Huck’s action is irrational, since there is no underlying wrongness to explain the irrationality. More likely, the wrongness of Cressida’s driving and its irrationality are connected without one wholly explaining the other. Now the externalist has a simple explanation of that connection; Cressida’s knowledge of the risks imposed by driving as she does explains both the irrationality and the wrongness. But that kind of explanation clearly does not generalise to Huck’s case. Huck’s evidence clearly does not explain both the wrongness and the irrationality of his action, since it isn’t in fact wrong.

    Put another way, the defender of RaC can either try and defend their view with a narrowly tailored symmetry thesis, one that just applies to rationality, or with the broader symmetry thesis that would apply to rightness and praiseworthiness too. If we use the broader symmetry thesis, then Cressida’s and Huckleberry’s actions are alike in rationality iff they are alike in rightness. But they are not alike in rightness, so they are not alike in rationality. So RaC fails, since they are clearly alike in rationality according to RaC. So the defender of RaC is forced to use a narrow symmetry thesis. But it is hard to see the motivation for the narrowly tailored thesis. Once we allow that people can be wrong about normative facts, and so can violate a norm while believing they are following it, it seems plausible that one could be wrong about rationality norms, and so could be irrational while believing one is rational.

    4.6 Conclusion

    So far I have argued against six forms of internalism. As I noted at the start, internalism is not committed to the disjunction of these six forms, so there is yet no full argument against internalism. So it might be hoped that some form of internalism can be found that is not committed to any of the six theses that have so far been undermined.

    Hopefully though, it should be clear why the argument so far generalises to other forms of internalism. If the motivations for internalism can be used to support anything, they can be used to support a kind of radical subjectivism. According to this radical subjectivism, rightness, praiseworthiness and rationality are all matters of conformity to one’s own views. And conformity, in the relevant sense, is also to be understood subjectively; to conform to one’s views in the relevant sense is to meet one’s own standards for conformity. Such a view has to say implausible things about cases of misguided conscience like Antoine, so can be seen to be false.

    This completes the arc of chapters 2 through 4. In chapter 2 I discussed some reasons for thinking that our theory should treat moral uncertainty the same way that it treats factual uncertainty, and how this idea has motivated a number of recent versions of normative internalism about ethics. In chapter 3, I argued that this symmetry idea was not as intuitively plausible as it first seemed, and that there were in principle reasons to think that moral uncertainty, and constitutive uncertainty more generally, should be treated differently to the way we treat factual uncertainty. In this chapter, I argued that even if those arguments worked, and a symmetric treatment of factual and moral uncertainty is a theoretical desideratum, we should reject symmetry because it leads to implausible subjectivism. The only way to really respect symmetry is to have a radical subjectivism, and that is implausible.

    This gets to the heart of what I find unsettling about internalism. We start out with three classes of facts:

    • Moral facts, e.g., genocide is wrong.
    • Epistemic facts, e.g., it is irrational, given current evidence, to have a low credence that carbon emissions from human activity are causing global warming.
    • Coherence facts, e.g., it is incoherent to prefer A to B, and D to C in the main example in Allais (1953).

    It is easy to feel that one should have something to say about agents who are unaware of all the moral facts. And that can push one towards a theory where the moral facts themselves don’t play a substantial role in evaluating agents, rather something that is more accessible plays that role. But what could that be? If we say it is evidential probabilities of moral claims, then we are left saying that some facts that are beyond some agents’ ken, i.e., facts about what is evidence for what, are evaluatively significant. Moreover, this kind of view will have strange things to say about cases of inadvertent virtue in agents whose credences track their evidence. So we might want to say something else. If we say it is not evidence but credence that matters, we are left saying that our most important criteria of evaluation turn solely on the agent’s coherence. And again, we can ask whether by ‘coherence’ here we mean actual coherence, or coherence as it strikes the agent. Facts about coherence are not obvious. It is incoherent to believe the naive comprehension axiom. It is incoherent to have the usual preferences in the Allais paradox. Some people think it is incoherent to will something that one could not will to be universally endorsed. Some people think it is incoherent to believe there are discontinuous functions on the reals. If we judge agents by how well their actions, beliefs and evidence actually cohere, then we are judging them by a standard that could well be beyond their knowledge. If we judge agents by how well the think their actions, beliefs and evidence cohere, we’ll be back to saying that Antoine is a hero. Assuming we want to avoid that, we have to apply some standards beyond what the agent accepts, and probably beyond what they could rationally accept.

    The business we’re in here is trying to work out how to evaluate agents and their actions. To evaluate is to impose a standard on the agent, one that they may not accept, and may even lack good reason to accept. That’s the crucial externalist insight. We don’t escape that conclusion by making the standard epistemic rather than moral. Agents can disagree with their evaluators about epistemic matters. And we don’t escape that conclusion by making the standard simply one of internal coherence. Agents can disagree with their evaluators about what is and is not coherent. That the correct standards of coherence are arguably a priori knowable isn’t relevant here; arguably the correct standards in ethics and epistemology are a priori knowable too. It is plausible that the correct standards of coherence are somehow true in virtue of their form, but it isn’t at all clear what the normative significance of that is. Disputes about whether there can be discontinuous functions or contingently existing objects turn on principles that are true (if true) in virtue of their form, but nothing follows from that about whether one could rationally have anything other than a firm true belief concerning the correct resolution of such a dispute.

    It is natural to think that we should try to find something relatively easy to use as our initial evaluation of agents. If one thought ethics is hard, but epistemology is easy, it would be natural to think that we should use epistemic considerations as our starting point. But epistemology isn’t easy. Or, at least, it isn’t the case that all epistemic questions are easier than all ethical questions. If one thought coherence questions were easy while ethical questions were hard, it would be natural to think that we should use coherence considerations as our starting point. But coherence questions aren’t easy either. It’s epistemically worse to believe that torturing babies is morally good than it is to believe naive comprehension. Normative internalism is a search for what Williamson (2000) calls a cognitive home, but no such home exists.

    There is one loose end to tidy up. Perhaps there is another way to respect symmetry. We could respect symmetry by having a much more radical objectivism. If we agreed with classical consequentialists such as Sidgwick (1874) and Smart (1961) that the right thing to do is what produces the best consequences, irrespective of the agent’s evidence or beliefs, we could respect symmetry without getting Huckleberry Finn’s case wrong. I’m not going to have anything original to say about this kind of consequentialism, but I wanted to briefly rehearse the reasons I don’t think this is a good way to save symmetry. (For much more, see Slote (1992, Ch. 15).)

    This kind of actualist consequentialism gets the case of Cressida the reckless driver wrong. And the moves that consequentialists make in response to Cressida’s case do not seem particularly helpful in Jackson cases, as Jackson himself emphasises.  (Jackson 1991) And actualist consequentialism combined with symmetry can’t handle the cases of Prasad and Archie. That combination implies that the parents should have the same attitude towards their past actions, and they should not.

    None of what I have to say about actualist consequentialism is at all original, which is why I’ve left it to the end. And of course this view is externalist, even more externalist in a sense than my own view. That’s why the objections of this chapter do not really touch it. The reason the normative externalist is not forced into actualist consequentialism is that symmetry fails, as was shown in the previous chapter. It’s true that if the arguments of that chapter fail completely, then a new argument could open up against normative externalism, as follows.

    1. If normative externalism is true, then actualist consequentialism is true.
    2. Actualist consequentialism is not true.
    3. So, normative externalism is not true.

    But give what we saw in the previous chapter, we should already reject premise 1. And the arguments of this chapter, showing that symmetry will have one or other kind of implausible consequence, provides another reason to reject premise 1.

    Most discussions of normative internalism in the ethics literature to date have revolved around symmetry. But there are considerations other than symmetry that may seem to motivate a variety of internalism, and in the next two chapters I’ll discuss them.