2  All About Internalism

This chapter has two related aims. The first is to clarify, and classify, the range of internalist positions that are available. The second is to set out more carefully the reasons for adopting one or other of these positions. We’ll end by putting the two parts together, seeing which motivations push towards what kinds of internalism. These themes were all introduced briefly in the introduction, but they need fuller treatment before we proceed.

It is always good practice to state as carefully and as persuasively as possible the view one means to oppose. But there is a particular reason to adopt that general practice here. Some of the appeal of internalism comes from sliding between different versions of the view. Once we get some key distinctions on the table, we get a better look at which versions are defensible.

The conclusion of the chapter will be that the best arguments for normative internalism in ethics make heavy use of the idea that moral uncertainty and factual uncertainty should be treated symmetrically. So to get started, we’ll look at how factual uncertainty matters morally.

2.1 Some Distinctions

It helps to have some mildly technical language on the table to begin with. The terminology I’ll use here is standard enough. But the terms are somewhat ambiguous, and theoretically loaded. I want to stipulate away some possible ambiguities, and simultaneously avoid at least some theoretical disputes. So take the following elucidations of the distinctions to be definitional of the bolded terms as they’ll be used here.

  • Useful vs Harmful Outcomes. Some outcomes involve more welfare, others involve less. I’ll say an action is more useful to the extent that its outcome involves more welfare, and more harmful to the extent that its outcome involves less.1
  • Good vs Bad Outcomes. Some outcomes are better, all things considered, than others. I’ll use good and bad as predicates of outcomes, ones that track whether the outcome is better or worse. It is common enough to talk about good and bad actions, and good and bad agents, but I’ll treat those usages as derivative. What’s primary is whether outcomes are good or bad. I will not assume that the goodness of an outcome is agent-independent. Perhaps an outcome where a person lies to prevent a great harm is bad relative to that person, since they have violated a categorical moral imperative. That is consistent with saying the lie was very useful, and even that it was good relative to other people.
  • Right vs Wrong Actions. Unlike good and bad, I’ll use right and wrong exclusively as predicates of actions.
  • Rational vs Irrational Actions and States. This is a bit of a stretch of ordinary usage, but I’ll talk both about mental states (beliefs, intentions, etc.) being rational or irrational, and the actions that issue from these states being rational or irrational. So it is both irrational to believe that the moon is made of green cheese, and to bet that it is.
  • Praiseworthy vs Blameworthy Agents. Again, there is an ordinary usage where actions are praiseworthy or blameworthy. But I’ll treat that as derivative. What’s primary is that an agent is praiseworthy or blameworthy, perhaps in virtue of having performed a particular action.
  • 1 I’m going to stay neutral about just what outcomes are. I prefer to think of them as possible worlds, but there are many other choices that would do just as well for current purposes.

    In conditions of full knowledge, it is very plausible that there are close connections between these five distinctions. There is a natural form of consequentialism that says the five are co-extensive under conditions of full knowledge. A good outcome just is a useful one; a right action is one that promotes the good; it is rational to promote the good, and blameworthy not to do so. Those who are not sympathetic to classical consequentialism will not be happy with this equation between the good and the useful, but they might support many of the other equations. Michael Smith (2006, 2009), for example, has argued that if we allow goodness to be agent-relative, then even non-consequentialists can allow that, under conditions of full knowledge, right actions are those that maximise the good. Smith’s argument is not uncontroversial. Campbell Brown (2011) notes there will be problems with this attempt to ‘consequentialize’ a theory that allows for moral dilemmas. But I’m going to set that issue aside.

    Under conditions of uncertainty, the connections between the distinctions become much murkier, even for a consequentialist. There are cases where the useful comes apart from the right, the rational, and the praiseworthy. Here are two such cases.

    Cressida is going to visit her grandmother, who is unwell, and who would like a visit from her granddaughter. She knows the more time she spends with her grandmother, the better things will be. So she drives as fast as she can to get there, not worrying about traffic lights or any other kind of traffic regulation. Normally this kind of driving would lead to several serious injuries, and possibly to fatalities, but by sheer good fortune, no one is harmed by Cressida’s driving. And her grandmother does get some enjoyment from spending a few more minutes with her granddaughter.

    Botum is the chief executive of a good, well-run, charity. She has just been given a £10,000 donation, in cash. She is walking home her normal way, through the casino. As she is walking past the roulette table, it occurs to her that if she put the £10,000 on the right number, she could turn it into £360,000, which would do much more good for the charity. She has 38 choices: Do nothing, bet on 0, bet on 1, …, bet on 36. Of these, she knows the one with the most useful outcome will be one of the last 37. But she keeps the money in her pocket, and deposits it in the charity’s bank account the next morning.

    Cressida acts wrongly, and is seriously blameworthy for her driving. That’s even though the outcome is the best possible outcome. So there’s no simple connection, given uncertainty, between usefulness and rightness.

    But in some ways the case of Cressida is simple. After all, it is very improbable that driving this way will be useful. We might think that there is still a duty to maximise the probability of being maximally useful. The case of Botum shows this isn’t true. She does the one thing she knows cannot be maximally useful. But that one thing is the one and only right thing for her to do. All the other alternatives are both wrong and blameworthy, and that includes the one very useful one.

    This way of talking about right and wrong is not universally adopted. In part this is an unimportant matter of terminological regimentation, but I suspect in part it reflects a deeper disagreement. Here’s the kind of case that motivates the way of talking I’m not going to use.

    Adelajda is a doctor, and Francesc her patient. Francesc is in a lot of pain, so Adelajda provides pain medication to Francesc. Unfortunately, someone wants to kill Francesc, so the pain medication has been adulterated. In fact, when Adelajda gives Francesc this medicine, she kills him.

    A common verdict on this kind of case is that Adelajda acts wrongly, since she kills someone, but blamelessly, since she was ignorant of what she was injecting Francesc with (Rosen 2008; Graham 2014; Harman 2015). The picture seems to be that an action is wrong if it brings about a bad outcome, and considerations of what was known are irrelevant to the wrongness of the act. So Adelajda’s act is wrong because it is a killing, independent of her knowledge.

    I think this is at best an unhelpful way to think about Adelajda. In any case, I’m not going to use ‘right’ and ‘wrong’ in that way. On my preferred picture, Adelajda’s ignorance doesn’t provide her an excuse, because she didn’t do anything wrong. (I follow orthodoxy in thinking that excuses are what make wrong actions less blameworthy.) I think the picture where Adelajda doesn’t do anything wrong makes best sense of cases like Botum’s. I’m here following Frank Jackson (1991), who supports this conclusion with a case like this one.

    Billie is a doctor, and Jack her patient. Jack has a very serious disease. He is suffering severe stomach pains, and the disease will soon kill him if untreated. There are three drugs that would cure the disease, A, B and C. One of A and B would stop Jack’s pain immediately, and cure the disease with no side effects. The other would have side effects so severe they would kill Jack. Billie has no idea which is which, and it would take two days of tests to figure out which to use, during which time Jack would suffer greatly. Drug C would cure the disease, but cause Jack to have one day of severe headaches, which would be just as painful as the stomach pains he now has.

    The thing for Billie to do is to give Jack drug C. (I’m saying ‘thing to do’ rather than using a term like ‘good’ or ‘right’ because what’s at issue is figuring out what’s good and right.) Giving Jack drug A or B would be a horribly reckless act. Waiting to find out which of them would have no side effect would needlessly prolong Jack’s suffering. So the thing to do is give him drug C.

    But now consider things from the perspective of someone with full knowledge. (Maybe we could call that the objective perspective, but I suspect the terminology of ‘objective’ and ‘subjective’ obscures more than it reveals here.) Billie directly causes Jack to have severe headaches for a day. This was avoidable; there was a drug that would have cured the disease with no side effects at all. Given full knowledge, we can see that Billie caused someone in her care severe pain, when this wasn’t needed to bring about the desired result. This seems very bad.

    And things get worse. We can imagine Billie knows everything I’ve said so far about A, B and C. So she knows, or at least could easily figure out, that providing drug C would be the wrong thing to do if she had full knowledge. So unlike Adelajda, we can’t use her ignorance as an excuse. She is ignorant of something all right, namely whether A or B is the right drug to use. But she isn’t ignorant of the fact that providing C is wrong given full information. Now assume that we should say what Adelajda does is wrong (since harmful), but excusable (because she does not and could not know it is wrong). It follows that what Billie does is also wrong (since harmful) but not excused (since she does know it is wrong).

    This all feels like a reductio of that picture of wrongness and excuse. The full knowledge perspective, independent of all considerations about individual ignorance, is not constitutive of right or wrong. Something can be the right thing to do even if one knows it will produce a sub-optimal outcome. So it can’t be that ignorance of the effects of one’s action provides an excuse which makes a wrong action blameless. Billie needs no excuse, even though she needlessly causes Jack pain. That’s because Billie does nothing wrong in providing drug C. Similarly, Adelajda does nothing wrong in providing the pain medication. In both cases the outcome is unfortunate, extremely unfortunate in Adelajda’s case. But this doesn’t show that their actions need excusing, and doesn’t show that what they are doing is wrong.

    The natural solution here is to say that what is right for Botum or Billie to do is not to maximise the probability of a useful outcome, but to maximise something like expected utility. It won’t matter for current purposes whether we think Botum should maximise expected utility itself, or some other risk-adjusted value, along the lines suggested by John Quiggin (1982) or Lara Buchak (2013). The point is, we can come up with a ‘subjective’ version of usefulness, and this should not be identified with the probability of being useful. We’ll call cases like Botum and Billie’s, where what’s right comes apart from even the probability of being best, Jackson cases, and return to them frequently in what follows.2

  • 2 Similar cases were discussed by Donald Regan (1980) and Derek Parfit (1984). But I’m using the terminology ‘Jackson case’ since my use of the cases most closely resembles Jackson’s, and because the term ‘Jackson case’ is already in the literature.
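    To see how the expected utility calculation goes in Botum’s case, here is a minimal sketch, treating usefulness as linear in money raised (an assumption made purely for illustration). Her 37 betting options, 0 through 36, imply a single-zero wheel, so each bet wins with probability 1/37 and, at the 35-to-1 payout, returns £360,000:

    \[
    E[\text{bet on } n] = \tfrac{1}{37} \times £360{,}000 + \tfrac{36}{37} \times £0 \approx £9{,}730 < £10{,}000 = E[\text{keep the money}]
    \]

    Every bet has a lower expected value than keeping the money, even though exactly one of the 37 bets is guaranteed to produce the most useful outcome.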

    Expected values are only defined relative to a probability function. So when we ask which action maximises expected value, the question only has a clear answer if we make clear which probability functions we are talking about. Two probability functions in particular will be relevant going forward. One is the ‘subjective’ probability defined by the agent’s credences. The other is the ‘evidential’ probability that tracks how strongly the agent’s evidence supports one proposition or another. These will generate subjective expected values, and evidential expected values, for each possible action. And both values will have a role to play in later discussion.
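    In one standard way of writing this, for an action a, a probability function p over states s, and a value function V on outcomes, the expected value of a relative to p is

    \[
    EV_p(a) = \sum_{s} p(s) \times V(o(a, s))
    \]

    where o(a, s) is the outcome of performing a in state s. Subjective expected value results from letting p be the agent’s credence function; evidential expected value results from letting p be the probability of each state on the agent’s evidence.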

2.2 Two Ways of Maximising Expected Goodness

    So far we have only looked at agents who are uncertain about a factual question. Cressida does not know who she will harm by driving as she does, Botum does not know which number will come up on the roulette wheel, and Adelajda and Billie are ignorant of the effects of some medication. But we could also imagine that agents are uncertain about normative questions.

    Deòrsa is deciding whether to have steak or tofu for dinner. He is a remarkably well informed eater, and so he knows a lot about the process that goes into producing a steak. But try as he might, he can’t form an opinion on the moral appropriateness of eating meat. He thinks meat eating results in outcomes that are probably not bad, but like many carnivores, he has his doubts.

    To simplify the story, I’m going to make three assumptions. The first assumption is that Deòrsa is actually in a world where meat eating is not bad. The second assumption is that Deòrsa is perfectly reasonable in having a high, but not maximal, credence in meat eating not being bad. You may think that this requires Deòrsa to live in a world very unlike this one, or even an impossible world. But that’s OK for the story I’m telling; I just need Deòrsa’s situation to be conceivable. (We will spend a lot of time thinking about impossible worlds as this book goes on, so it’s useful to warm up with one that might be impossible now.) And the third assumption is that there is a large asymmetry between Deòrsa’s choices. If meat eating is not bad, it would be ever so slightly better for Deòrsa to have the steak, since he would get some enjoyment from it, and it wouldn’t be bad in any other respect. But if meat eating is bad, then having the steak would be a much much worse outcome, since it would involve Deòrsa in an unjustified killing.

    Which action, having the steak or having the tofu, maximises expected goodness? That question is ambiguous. In one sense the answer is tofu. After all, there is a non-trivial probability that having the steak leads to a disastrous outcome. In another sense, the answer is steak. After all, there is a thing, goodness, and Deòrsa knows enough to know of it that it is maximised by steak eating. Since Deòrsa is to some extent morally ignorant, he doesn’t know what goodness is, so he thinks goodness might be something else, something that is not maximised by steak eating. But given his (perfectly reasonable, rational) credences, the thing that is goodness has its expected (and actual) value maximised by steak eating.
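    To make the two senses vivid, put some illustrative numbers on the case (the numbers are mine, not part of the story). Suppose Deòrsa’s credence that meat eating is not bad is 0.9, and measure goodness so that, if meat eating is not bad, steak scores 1 and tofu 0, while if meat eating is bad, steak scores −100 and tofu still 0. Then:

    \[
    E[\text{goodness of steak}] = 0.9 \times 1 + 0.1 \times (-100) = -9.1, \qquad E[\text{goodness of tofu}] = 0
    \]

    So tofu wins in the first sense. But the quantity that actually is goodness is the one on which steak scores 1 and tofu 0; since Deòrsa is uncertain only about the moral question, not the culinary facts, the expected (and actual) value of that quantity is 1 for steak and 0 for tofu, and steak wins in the second sense.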

    We might put the distinction in the previous paragraph by saying that the action that maximises the expected value of goodness de re, that is, of the thing that is goodness, is different from the action that maximises the expected value of goodness de dicto, that is, of whatever it is that goodness turns out to be. And using the de dicto/de re terminology, we can see that this distinction applies across a lot of realms. Here are two more examples where we can use it.

    Monserrat is playing the board game Settlers of Catan. She has to decide between two moves. She is uncertain how the moves will affect the later game play. This is reasonable, since the game play includes dice rolls that she couldn’t possibly predict. But she’s also forgotten what the victory condition is. She can’t remember if it is first to 10 points wins, or first to 12 points. The standard is 10, but some games are played under special house rules that change this. In Monserrat’s game, there aren’t any special house rules, so it is actually 10 points that wins. Call the moves that she is choosing between A and B. If she plays A, she has a 30% chance of being first to 10 points, and a 50% chance of being first to 12 points. If she plays B, she has a 40% chance of being first to 10 points, but only a 10% chance of being first to 12 points. She thinks it is 60% likely that the winner is the first to 10, and 40% likely that the winner is the first to 12. So playing B maximises the probability of winning de re. That is, it maximises the probability of doing the thing that is actually winning, i.e., being first to 10. But playing A maximises the probability of winning de dicto. Given Monserrat’s uncertainty about the victory conditions, she thinks her probability of winning is 38% if she plays A, and only 28% if she plays B.
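    Spelling out the arithmetic behind those last two figures:

    \[
    P(\text{win de dicto} \mid A) = 0.6 \times 0.3 + 0.4 \times 0.5 = 0.38, \qquad P(\text{win de dicto} \mid B) = 0.6 \times 0.4 + 0.4 \times 0.1 = 0.28
    \]

    whereas de re, with the actual victory condition being first to 10 points, the probabilities are simply 0.3 for A and 0.4 for B.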

    A professor is deciding which music to put on. She would prefer lowbrow, trashy music. But, suffering from a common enough kind of false consciousness, she thinks she would prefer highbrow, classy music. So playing the lowbrow music would maximise expected preference satisfaction de re. That is, it would maximise the expected value of the satisfaction level of the preferences she actually has. But playing the classy music would maximise expected preference satisfaction de dicto. That is, given her beliefs about her preferences, it seems that the classy music would do a better job at satisfying her preferences.

    The key internalist idea is that in situations that call for maximising expected goodness (or utility, or anything else), it is the de dicto version, not the de re version, that matters. The key externalist idea is that it is the de re version that matters. For the rest of this chapter, while I’m setting up and motivating internalism, I’ll leave it tacit that we are talking about expected values de dicto.

2.3 Varieties of Internalism

    The chapter started with a five-way distinction between the useful, the good, the right, the rational and the praiseworthy. And we noted that for each of those, there are three separate questions we can ask in any practical situation. First, we can ask what action would be most useful/good/right/rational/praiseworthy. Second, we can ask what action has the highest expected usefulness/goodness/rightness/rationality/praiseworthiness given the credences of the agent. Third, we can ask that same question, but relativise the answer to the agent’s evidence, not the agent’s credences. Multiplying the five-way distinction by the three types of question gives us fifteen questions. And each of those fifteen questions picks out a kind of standard. It is an interesting feature of a possible choice that it actually is the rational one, or that it maximises credal expected praiseworthiness, or evidential expected usefulness. For now, call the questions about what actually is most useful (and so on) objective questions, and the standard that an action or choice meets in virtue of being the answer to such a question an objective standard. (This is just to distinguish the first class of questions from the credal and evidential questions.)

    Having these fifteen standards in mind, the five objective standards, the five credal standards and the five evidential standards, we have the resources to formulate a number of interesting internalist theses. The theses I have in mind are of the form:

    • X objectively meets normative standard N1 when she meets credal/evidential standard N2.

    Philosophers who endorse these theses usually take it that the explanatory direction here goes from right-to-left. It is because the agent meets credal/evidential standard N2 that she objectively meets standard N1. But my primary focus will be on the truth of these claims, and not yet the claims about explanatory priority.

    Michael Zimmerman (2008) endorses the following two theses, which exemplify this schema.

    • An action is right when it maximises evidentially expected goodness, and it is wrong when it does not.
    • A person is praiseworthy for maximising credally expected goodness, and blameworthy for not doing so.

    Michael Smith (2006, 2009) argues (against the arguments from Jackson I gave above) that right action is just action that maximises the good. But what an agent is responsible for is whether they maximise evidential expected goodness de dicto. Indeed, what they should do, in ‘the sense most relevant for action’, is maximise evidential expected goodness de dicto  (Smith 2006, 144). Moreover, this is what rationality requires  (Smith 2009).

    There are obviously a lot of other possibilities for N1 and N2 that we could use, and that gives us a lot of internalist theses. Before we go on, three clarifications on what I am, and what I am not, counting as an internalist thesis.

    First, I’ve put the statements above in ways that are naturally interpreted as universal quantifications. That makes them very strong, perhaps implausibly strong. A view that said that theses like the above held ceteris paribus, or held subject to side constraints, or held in a well defined range of cases, would still be internalist in the sense I’m interested in.

    Second, the theses listed above are biconditionals. We could weaken them to one-way conditionals, and still get something recognisably internalist, as long as we think that the conditional is still somewhat explanatory. For instance, a view that said an agent is blameless for what they do if they maximise evidential expected goodness would be internalist, even if it didn’t give necessary and sufficient conditions for blamelessness. Such a view might also add some externalist conditions to blamelessness; perhaps it would go on to say that someone is blameless as long as they don’t actually make things worse, or don’t actually do anything wrong. It’s a matter of terminological preference whether we count these hybrid views as internalist or externalist, but since I plan to argue against them, I’m counting them as internalist. (Chapters 5 and 6 will be dedicated to a discussion of some such views.)

    Third, I’m not counting a view as internalist unless both N1 and N2 are person-evaluative. What I mean by saying a term is person-evaluative is that it is a term we use for evaluations that essentially apply to persons, or actions or states of persons. So truth is not person-evaluative, since we can ask whether the output of a measuring device is true, and harmfulness is not person-evaluative, since earthquakes and volcanoes are harmful. But rationality, praiseworthiness, moral goodness, and moral rightness are person-evaluative (at least if they are evaluative).

    So the view Jackson (1991) defends, where rightness is a matter of maximising expected benefits, is not internalist in my sense, because being a benefit is not a person-evaluative notion. Put another way, we don’t positively evaluate Cressida the reckless driver, even if we note that her actions actually had a small benefit to the world.

    A harder case to judge is whether this should count as an internalist thesis.

    • It is a requirement of rationality that one does the thing that maximises expected goodness (de dicto).

    Is that an internalist thesis, or not? It depends on what one thinks about rationality. Is rationality person-evaluative? Well, it essentially applies to people. (If we judge a machine is thinking rationally, and not just accurately, we are treating it as a person.) But is it evaluative? It’s easy to think this question is easy. Ideal agents are rational, and it is good to be like ideal agents, so of course it is good to be rational. But that’s too quick. An ideal taker of a logic quiz would make an even number of errors, since they would make 0 errors, and 0 is even. But that doesn’t mean the property of making an even number of errors is an evaluative notion in any sense. We shouldn’t say, “Good for you, you made an even number of errors.” Making an even number of errors seems completely epiphenomenal from an evaluative standpoint. And it would be an absurd thing to aim at, as such. It’s surprising how common it is that properties of the ideal are actually bad to aim at, since they often make things worse in the absence of other features of the ideal (Lipsey and Lancaster 1956–1957). If one thinks being rational is like possessing the property of making an even number of errors, then one could agree that rationality involves maximising expected goodness, without thereby disagreeing with externalism.

    Now as a matter of fact, I personally think rationality is evaluative, and is not a matter of maximising expected goodness. So I think the thesis is internalist, and is false. But the classificatory question is still important. After all, this thesis is certainly true:

    • An action maximises expected goodness iff it maximises expected goodness.

    This looks like it has the structure of my canonical internalist theses, with N1 being maximises expected goodness and N2 being goodness. So doesn’t this show that some internalist theses are true? No, I say. This isn’t internalist because maximising expected goodness, where this is understood de dicto and not de re, is not a positive feature of a person. It is a feature that ideal agents have, but it is also a feature that political fanatics like Robespierre have. And it isn’t a good-making feature in either of them. Rather, it is like making an even number of errors; something that can be instantiated in very good ways, or very bad ways.

2.4 An Initial Constraint

    The internalist schema above has some interesting instances when N1 = N2. For instance, we could consider the following theories, where we use the same kind of evaluation on both sides of the biconditional.

    • It is right to maximise the expected rightness of one’s actions, and wrong to do otherwise.
    • It is blameworthy to do what is most probably blameworthy.

    But there is a quick argument that all such principles are mistaken. The brief version of the argument is that no such principle is compatible with the combination of knowledge of one’s own mental states and uncertainty about what I’ll call morally asymmetric choices. But there is nothing wrong with knowing one’s own mental states when faced with a morally asymmetric choice, so the principles must be wrong.

    A morally asymmetric choice is one where we know that one side of the choice is not in any way morally problematic. A simple case, for most people, is the choice between meat eating and vegetarianism. Very few people would think that it is immoral, bad, wrong, or blameworthy to be vegetarian on ethical grounds. On the other hand, it is easy to feel some qualms about eating meat. So it looks like this is a choice where all the moral risk falls on one side.

    (I’m more interested in the general principle than the particular case, but let me note two quick complications before moving on. First, it’s imaginable that there is a person who puts either their own health or, if they are pregnant or nursing, their child’s health, at risk by not eating any meat. In the situations most readers of this book find themselves in, such cases will be vanishingly rare, since there are so many meat alternatives available. But it’s at least conceivable. In the cases I’m discussing, I want it to be explicitly part of the case that the person making the choice faces no health complications from being vegetarian. Second, I’m ignoring the possibility that denying oneself pleasures for spurious reasons is immoral. It would merely complicate, but not overturn, the argument to allow for that possibility.)

    Now let’s think about the first bulleted principle above, which I’ll call ProbWrong. And consider an agent who is deciding between steak and tofu for dinner. Imagine that she has the following mental states:

    1. She is sure that ProbWrong is true.
    2. She is almost, but not completely, sure that eating meat is not wrong in her exact circumstances.
    3. She is sure that eating vegetables is not wrong in her exact circumstances.
    4. She is sure that she has states 1–3.

    A little reflection shows that this is an incoherent set of states. Given ProbWrong, it is simply wrong for someone with states 2 and 3 to eat meat. And the agent knows that she has states 2 and 3. So she can deduce from her other commitments and mental states that eating meat is, right now, wrong. So she shouldn’t be almost sure that eating meat is not wrong; she should be sure that it is wrong.
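    To make the deduction explicit, here is a sketch in which rightness is scored 1 for a permissible action and 0 for a wrong one (the scoring is my gloss, not part of ProbWrong as stated; it makes maximising expected rightness equivalent to minimising the probability of acting wrongly, which is why the name fits). Let c be her credence that eating meat is not wrong in her exact circumstances. Then:

    \[
    E[\text{rightness of tofu}] = 1, \qquad E[\text{rightness of meat}] = c \times 1 + (1 - c) \times 0 = c
    \]

    If c < 1, tofu uniquely maximises expected rightness, so by ProbWrong eating meat is wrong, and someone who can see all this should set c = 0. So state 2, where c is close to but below 1, cannot coherently be combined with states 1, 3 and 4.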

    This argument generalises. If 1, 3 and 4 are true of any agent, the only ways to maintain coherence are to be completely certain that meat eating is not wrong, or completely certain that it is wrong. But that is absurd; these are hard questions, and it is perfectly reasonable to be uncertain about them. At least, there is nothing incoherent about being uncertain about them. But ProbWrong implies that this kind of uncertainty is incoherent, at least for believers in the truth of ProbWrong itself. Indeed, it implies that in any asymmetric moral risk case, an agent who knows the truth of ProbWrong and is aware of her own mental states cannot have any attitude between certainty that both options are not wrong, and certainty that the risky action is, in her exact circumstances, wrong. That is absurd.

    I conclude that any version of the normative internalist thesis where N1 = N2 is also absurd. Happily, that view seems to be shared by existing defenders of internalism, who usually defend versions where N1 ≠ N2. So I’ll set the N1 = N2 versions of internalism aside and focus just on the versions where they come apart.

2.5 Motivation One: Guidance

    The externalist offers a fairly simple piece of advice to people facing a moral challenge: Do the right thing. But as a general piece of advice, Do the right thing might sound not much more helpful than Buy low, sell high. We need, it might be thought, more helpful advice.

    That kind of consideration plays a big role in our thinking about factual uncertainty. Think again about Botum the charity director. The best outcome for her, and for the cause she is working for, would be for her to bet the £10,000 on the number that will actually win. But we don’t think she’s under an obligation to do that. Indeed, we think she is under an obligation to not even try to do that. One reason for that, arguably, is that the strategy Bet on the winning number is not one she is in a position to carry out.

    Now the externalist does think that agents should carry out the strategy Do the right thing. But in cases where the moral evidence is murky, arguably this is no more a reasonable demand than the demand that Botum bet on the winning number. Here is how Michael Smith puts the point. He has just rehearsed Frank Jackson’s argument, involving cases like Billie, for the conclusion that right action does not involve maximising the probability of the best outcome, but maximising expected value.

    Indeed, anyone impressed by Jackson’s argument on the non-evaluative facts side of things should surely suppose that an equally impressive argument could be made for the conclusion that right action consists not in the maximization of expected value, but rather in the maximization of expected value-as-the-agent-sees-things. For no mere exercise of such capacities as an agent has looks like it will ensure that what is really valuable will manifest itself to her either. There are, after all, cultural circumstances in which it would be wildly optimistic to suppose that agents could, merely through the exercise of their own rational capacities, come to judge to be valuable what’s really valuable … If this is right, however, then it seems that the most that we could ever expect of a normal agent … is that they form their evaluative commitments in a way that is sensitive to such evidence as is available to them and that they form their desires in a way that is sensitive to their evaluative commitments.  (Smith 2006, 143)

    Andrew Sepielli expresses a similar sentiment.

    The problem is that we cannot base our actions on the correct normative standards; our relationship to such standards is limited to mere conformity to them. This follows from a quite general point—that we cannot guide ourselves by the way the world is, but only by our representations of the world.  (Sepielli 2009, 8)

    And we saw in the previous chapter that similar sentiments are expressed by Ted Lockhart (2000, 8–9), William MacAskill (2014, 7) and by Hilary Greaves and Toby Ord (2017). We might try to turn this idea into an argument for internalism as follows.

    1. Our most important norms should be sources of usable advice.
    2. If normative externalism is true, our norms are not sources of usable advice.
    3. If normative internalism is true, our norms are sources of usable advice.
    4. So normative externalism is false, and we have a reason to believe normative internalism is true.

    Note that I’m not here assuming that normative externalism and normative internalism are contradictories; there are positions that might best be classified as falling into neither camp. If they were contradictories, the second conjunct of the conclusion would be highly redundant.

    One problem for this argument is that it relies on a slippery notion of usability. If we have rather generous standards for what counts as a usable norm, then premise 2 of the argument is false. After all, we can often tell what is the right thing to do. If we have rather strict standards, then premise 1 is false, since it amounts to the claim that the application conditions for the most important norms must be luminous. (A norm is luminous if whenever it applies, it is possible to know that it applies.) But Timothy Williamson (2000) has shown that nothing interesting is luminous, and our most important norms are interesting. I suspect that there is no reading of ‘usable’ that makes both premises 1 and 2 true.

    The slipperiness also extends to premise 3. The internalist needs standards that are usable, in their preferred sense, and which Robespierre violates. (Unless they are happy saying that Robespierre did well, in the sense that’s most important to them.) But they need that sense of usability to be one in which Do the right thing is not usable. And it is hard to see what that sense could be.

    The regress arguments that will recur throughout this book are designed, in part, to back up this conclusion. (See particularly the discussion of inter-theoretic value comparisons in section 6.2.) I’m going to be arguing that everyone except the most radical subjectivist will have to acknowledge standards for evaluating agents that those very agents are not in a position to accept. The only options, I’ll argue, are radical subjectivism, and norms that are not guaranteed to be usable in the internalist’s preferred sense. That is, the norms will only be usable in the sense that Do the right thing is usable. Since this radical subjectivism is false (some monsters really do well by their own lights), the connection between evaluation and guidance must be more tenuous than the internalist assumes.

2.6 Motivation Two: Recklessness

    A different argument against externalism is that it licenses a form of moral recklessness. And this kind of moral recklessness should not be licensed, says the objector; it should be condemned.

    To see the problem, start with the example of Deòrsa, the uncertain carnivore. (This case is discussed by Guerrero (2007), who uses it in mounting an attack on moral recklessness.) And let’s assume that Deòrsa does end up deciding that he will eat meat. Deòrsa knows that the moral risks are largely, if not universally, on one side. He knows that eating meat provides him with just a small benefit, but puts him at risk of being a moral monster. And yet he does it.

    Now by hypothesis, a fully informed agent in Deòrsa’s position would do the same thing. And yet it is easy to feel some unease with the externalist verdict that Deòrsa’s actions are right, rational and blameless. There is a whiff of recklessness about Deòrsa’s actions, and this kind of recklessness may seem to be a moral vice.

    We can make this whiff stronger by tightening the analogy with reckless action. For example, imagine that Deòrsa isn’t mostly certain that meat eating is acceptable. In fact, in the revised case he is very confident that meat eating is wrong. And yet, he eats meat anyway. The analogies between Deòrsa and Cressida, the reckless driver, start to feel compelling at this point. And yet the externalist says that Deòrsa is not doing anything wrong, or irrational, or blameworthy. (This variant, and its importance, was suggested by Andy Egan.)

    Or perhaps we can build the analogy directly into Deòrsa’s case. Imagine that as well as choosing what to eat, Deòrsa is choosing how to cook it. Deòrsa is considering trying out a new technique from a modernist cookbook. He knows that a side effect of this technique is that a distinctive kind of chemical is released into his building’s ventilation. This chemical will build up in large quantities in his apartment and the apartment next door. The chemical is odourless, and harmless to everyone who doesn’t have a particular allergy. But the quantities Deòrsa would release would be fatal to anyone with the allergy. And Deòrsa knows the boy in the next apartment has some kind of rare allergy, though he can never remember which one it is. He thinks it is probably some other allergy the boy has, and in fact he is right. So he cooks the meat using the modernist technique.

    To make the analogy explicit, assume that Deòrsa has equal credence in these two propositions.

    1. Meat eating is morally acceptable.
    2. The boy in the next apartment will not have a fatal reaction to the chemical that will be released by the modernist cooking technique.

    In each case, this credence is high, but far from maximal. Unless Deòrsa knows that 2 is true, what he does is horribly reckless. It’s not worth risking killing one of your neighbours to get the benefits of a new method of meat preparation. Similarly, says the internalist, the gustatory benefits of meat aren’t worth the risk that goes along with joining the meat-eating team.

    D. Moller (2011) similarly argues that internalism is motivated by considerations about recklessness. I’ll respond to Moller’s own example at more length below, so let me start with my own variant of the kind of case that motivates his position. Two CEOs are trying to choose between more aggressive and more conservative business strategies. Each commissions internal inquiries to determine some properties of the aggressive strategy. (They know how conservative strategies work, since those strategies are familiar.) One of the CEOs doesn’t know exactly what the practical consequences of the aggressive strategy will be, so she commissions an inquiry into those practical consequences. And the other CEO doesn’t know what the right moral evaluation of the aggressive strategy is, so he commissions an inquiry into its moral evaluation.

    Both inquiries come back with a 3–2 split. In the first case, all five agree the aggressive strategy will slightly raise profits relative to the conservative strategy. But two members think that a side effect will be that ten people in nearby communities fall sick and die as a consequence of the company’s operations. In the second case, all five think the conservative strategy is morally acceptable. But three think the aggressive strategy is good enough, while the other two think it is as bad as being responsible for ten avoidable deaths. In each case, it turns out, the majority members of the committee are right, though the CEO has no extra evidence for that. The intuition these cases seem to support is that neither CEO should carry out the aggressive strategy. Indeed one might hold (though Moller, interestingly, does not) that we should think of the CEOs who carry out these strategies as being equally culpable for their recklessness.

2.7 Motivation Three: Symmetry

    Both the guidance considerations and the recklessness considerations push one towards thinking that factual uncertainty and moral uncertainty should be treated symmetrically, or at least as symmetrically as possible. I briefly mentioned that Moller expressly rejects the symmetry claim, and the failure of N1 = N2 versions of internalism makes it hard, at least for non-consequentialists, to endorse perfect symmetry. But there is something to the idea that moral uncertainty and factual uncertainty should get very similar theoretical treatments, and the externalist offers very different theoretical treatment of them.

    We could get to this idea in a few ways. We could try to argue that it follows from considerations about guidance or recklessness. We could try to argue that it best explains intuitions about guidance or recklessness. Or we could just argue for it directly, either by appeal to the intuitive plausibility of the symmetry claim, or the intuitive plausibility of what it says about a number of cases. For instance, we could just argue that it is plausible that whatever negative attitude we have towards Cressida’s actions, and to Cressida, we should have towards Deòrsa’s actions, and to Deòrsa. And we could argue that whatever positive attitude we have towards Billie’s actions, and to Billie, we should have to the person who successfully manages to maximise evidential expected goodness. In short, we should have symmetric attitudes about the philosophical significance of normative uncertainty and factual uncertainty.

    The idea that symmetry (or near-symmetry) should be built into our theories will not, I suspect, strike most people as absurd. Indeed, I suspect it strikes many people as so plausible it barely needs defence. It certainly does a lot of work, without much argument, in works by Jacob Ross (2006) and Michael Zimmerman (2008). If the symmetry thesis is both intuitive and true, there’s nothing wrong with this approach. And I concede it is, at least prima facie, highly intuitive. But I don’t think it is true. Indeed, I don’t think it is even particularly intuitive, once we reflect on it in more detail.

    But it is intuitive enough to use as the foundation for discussions of internalism. And while I’ll cycle back around to other motivations for internalism, I’ll use symmetry-based considerations as the main focus of discussion. That’s because the symmetry-based considerations do such a good job of both being independently intuitive, and capturing what is best worth capturing in the other arguments.

    So in the next chapter I’ll push back against the intuitiveness of this symmetry claim, arguing that the closer we look at it, the less similar factual and moral uncertainty seem. And in the chapter after that, I’ll argue that even if symmetry is plausible it should be rejected, for it leads to unacceptable regresses.