3  Against Symmetry

In the previous chapter, I suggested that one of the key motivations for normative internalism is that it allows for a symmetry between the way we treat factual uncertainty and ignorance, and the way we might think about treating normative uncertainty and ignorance. Some writers have found it so obvious that these cases should be treated symmetrically that they have simply incorporated this symmetric treatment into their theory without arguing for it. Those who have argued for it have usually found the symmetry very intuitive.

In this chapter, I’ll try to undermine that intuitive symmetry. The first three sections will introduce three considerations that undermine the idea that factual and normative uncertainty should be treated symmetrically, and the last three sections will deal with some complications that the first three sections introduce. In the next chapter, I’ll argue that even if we found the symmetry intuitive, we should ultimately reject it, because there is no way to incorporate it into a theory that is even remotely plausible. That is, I’ll argue that any internalist theory that can handle even very simple cases has to reject the symmetry thesis, and so cannot be motivated by symmetry considerations.

3.1 Guilt and Shame

If normative and factual uncertainty have the same normative implications, then we should feel similarly about our own past actions that were done due to factual ignorance, and those that were done due to moral ignorance. But this doesn’t seem to be how we do, or should, feel. We can see this by comparing a pair of cases. The second of the cases is a minor modification of a case Elizabeth Harman (2015) uses in making a similar argument to the one I’m presenting in this section.

Prasad is a father of two children, an older daughter and a younger son. In the division of parental labour in his house, teaching the children to read is primarily his responsibility. He takes this very seriously, and reads the latest studies on which techniques are most effective at teaching reading. He doesn’t have a strong enough background in statistics to be able to evaluate many of the papers he reads, but he can tell what techniques are being approved by the leading figures in the field, and those are the techniques he uses in teaching his children to read.

Unfortunately, the relevant science around here moves slowly and fitfully. The technique that Prasad followed when his daughter was learning to read was soon shown to be mostly ineffective. It was better than not spending time on reading, but wasn’t any better than unstructured reading time. By the time his son was learning to read, educational science had advanced substantially, and Prasad was able to use a technique that led to his son learning to read relatively quickly. This gave his son an advantage that persisted throughout his schooling, and led to him being admitted to an exclusive college, and subsequently earning much more than he would have without the benefit of early reading. Prasad’s daughter did well at school, as you’d expect with this level of parental attention, but would have been even better off had she been trained to read the way her brother was trained.

Archie is a 1950s father who, like many other 1950s fathers, thinks it is more important to look after his son’s interests than his daughter’s. So while he puts aside a substantial college fund for his son, he puts aside less for his daughter. As a consequence, his daughter cannot afford to go to as good a college as his son goes to, and subsequently is materially less well off throughout her life than Archie’s son.

Prasad was mistaken about a matter of fact: which techniques are most effective at teaching a child to read. Archie was mistaken about a moral matter: whether one should treat one’s sons and daughters equally. Now consider what happens when both see the error of their ways. Prasad may feel bad for his daughter, but there is no need for any kind of self-reproach. It’s hard to imagine he would feel ashamed for what he did. And there’s no obligation for him to feel guilty, though it’s easier to imagine him feeling guilty than feeling ashamed. Archie, on the other hand, should feel both ashamed and guilty. And it’s natural that a father who realised too late that he had been guilty of this kind of sexism would in fact feel the shame and guilt he should feel. The fact that his earlier sexist attitudes were widely shared, and firmly and sincerely held, simply seems irrelevant here.

If the symmetry thesis were correct, there should not be any difference in Prasad and Archie’s attitudes. Both of them behaved in just the way we should expect, given their factual and normative beliefs. And both of them had beliefs that were sincere, and widely shared in their community. But there is still a difference between the two of them, as revealed by the emotional reactions they both do and should have.

3.2 Jackson Cases

As Zimmerman (2008) argues, the kinds of cases discussed by Jackson (1991) are important for seeing how factual uncertainty is normatively significant. It isn’t just that when an agent doesn’t know what is true, and so doesn’t know which action produces the best outcome, she thereby doesn’t know what is right to do. In some cases of decision making under uncertainty, the thing that is clearly right to do is the one thing she knows will not produce the best outcome. Gambling the charitable donation on the roulette wheel is wrong, although the best outcome would be to gamble on the number that will actually come up. In the previous chapter we dubbed cases like this, where the right thing to do is something one knows will not produce the best outcome, Jackson cases. Jackson cases are ubiquitous when making decisions under factual uncertainty.

If we should treat factual uncertainty and moral uncertainty symmetrically, then Jackson cases for moral uncertainty would be easy to find. But it is far from clear that there are any such cases. That is, it is far from clear that there are cases where we want to say anything positive about an agent who hedges their moral bets.

A simple way to generate Jackson cases is to set up a decision problem with the following features:

  • There are three options: A, B, and C.
  • There are two epistemic possibilities, w1 and w2; the agent knows that precisely one of them is realised, and she reasonably thinks each is fairly likely.
  • In w1, A is optimal, C is a little worse, and B is a catastrophe.
  • In w2, B is optimal, C is a little worse, and A is a catastrophe.

If the agent’s uncertainty about w1 or w2 is grounded in a straightforwardly factual uncertainty, it seems the agent should do C. Just what that ‘should’ amounts to is up for debate, but there is something awful about doing A or B - even if it produces the optimal outcome.
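To see the structure in numbers, here is a minimal illustration; the probabilities and utilities are stipulated by me purely to exhibit the pattern. Suppose the agent’s credence in each of w1 and w2 is 0.5, and the values of the outcomes are:

$$\begin{array}{c|ccc} & A & B & C \\ \hline w_1 & 10 & -100 & 8 \\ w_2 & -100 & 10 & 8 \end{array}$$

Then the expected values are

$$EV(A) = 0.5(10) + 0.5(-100) = -45, \quad EV(B) = 0.5(-100) + 0.5(10) = -45, \quad EV(C) = 0.5(8) + 0.5(8) = 8.$$

So C uniquely maximises expected value, even though the agent is certain that C is not the best option in whichever world she actually inhabits.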

What happens, though, if w1 and w2 are factually alike, but differ in the correct moral theory? (As has come up a few times, it is unlikely that both w1 and w2 will be possible worlds in this case, but I don’t think this matters for current purposes.) Well, let’s look at some cases and see.

3.2.1 Case One - Abortion

Marilou is 12 weeks pregnant, and lives in a state where abortion is criminalised and, on occasion, heavily punished. Marilou deeply desires to have an abortion. Marilou is reasonably well off, and as is the norm in states that criminalise abortion, reasonably well off people are able to obtain abortions with a little assistance. Marilou asks her friend Shila for such assistance. Shila now has to make a choice. Shila is torn between two moral views about abortions 12 weeks into pregnancy. According to one, the potential that the fetus has to develop into a fully functioning human being means that aborting it is the moral equivalent of murder. According to another, the fetus has little or no moral standing on its own, the importance of Marilou’s autonomy means that Marilou should be able to get an abortion, and her friends should assist her in avoiding the oppressive laws against abortion. Shila now has three choices.

  A. Assist Marilou in getting the abortion, which is either a way of respecting Marilou’s autonomy and honouring their friendship, or is a way of being an accomplice to murder.
  B. Report Marilou’s plans to the authorities, which is either horribly disrespectful to Marilou and a gross violation of their friendship, or bravely preventing a murder. (Assume that Shila knows that although the authorities aren’t maximally vigilant about preventing abortions, they are obliged to act on incriminating information, so this tip-off will lead to Marilou’s imprisonment.)
  C. Do nothing, suspecting that without her help, Marilou will carry the child to term and quietly adopt it out.

In either w1, the world where abortion is permissible, or w2, the world where it is not, C is bad. In w1, Shila is a bad friend, and is tacitly collaborating in state oppression. In w2, Shila is not taking simple steps that would remove the mortal danger facing an innocent human. But option C isn’t catastrophic in either world. In w1, Shila is not personally stopping Marilou from getting an abortion, she just isn’t helping Marilou break the law. (You can be a good enough friend and still draw the line between helping one move houses and helping one move bodies.) And in w2, she’s not killing anyone, or even letting someone be killed, just not being maximally vigilant in preventing a killing. So the case has the structure of a Jackson case.

And yet there is little to be said for C. The situation calls for moral bravery, one way or the other. (I think in the direction of A, but it doesn’t matter for these purposes whether you agree with that.) And C is moral cowardice. Unlike in the cases involving factual uncertainty, it doesn’t seem at all like the safe, prudent, commendable option.

3.2.2 Case Two - Theft

Eurydice and Pandora are acquaintances, and they are planning to go to a party. Eurydice is worried because Pandora plans to wear some very expensive jewellery, and the party features a number of thieves, several of whom are Eurydice’s friends. Eurydice tells Pandora this, but Pandora is unmoved, and insists she won’t be deterred from living her life the way she wants by the existence of petty criminals. Eurydice is much more observant than Pandora, and knows that if someone tries to steal the jewellery, she’ll be able to prevent them, but only by using a non-trivial amount of physical force. For example, she could punch the would-be thief hard in the jaw while he was making his escape, revealing his thievery. (Realistically, she can’t know exactly how she would prevent a theft, but assume that’s the level of force that would be needed.)

Eurydice is torn between two moral theories. One of them is a fairly mainstream view on which a moderate amount of physical force is warranted if it is the only way to prevent the theft of expensive goods. On the other moral theory, the demands of friendship and bodily autonomy completely outweigh considerations arising from property, so punching a friendly thief to prevent a theft would be a completely unjustified assault. Given all this, Eurydice has three options.

  A. Go to the party and plan to prevent (using violence if necessary) any theft of Pandora’s jewellery.
  B. Go to the party and plan to refrain from any violence, even if this means standing by while a theft occurs.
  C. Prevent Pandora going to the party. The most morally acceptable way to do that, Eurydice thinks, would be to tell Pandora a small lie that leads to Pandora going on a wild goose chase for half the night, making it impossible for her to go to the party.

Again, this feels like a Jackson case. C is a moral misdemeanour - you shouldn’t lie to people for the purpose of distracting them away from a party they have every right to be at. But it’s worse to stand by and watch a theft take place that you could easily (and properly) prevent, or to unjustifiedly punch a friend in the jaw.

Yet again it seems like C would be a terrible option to take. Either the amount of violence needed to apprehend the thief would be justified or it wouldn’t be. In neither case does it seem like sending Pandora on a wild goose chase would be a good way to avoid the problem arising. This seems true even though it would guarantee that things don’t go badly wrong, morally, while either alternative runs a substantial moral risk.

3.2.3 An Asymmetry

When welfare is on the line, it is not just acceptable, but laudable, to sacrifice the chance of the best outcome for a certainty of a very good outcome. But it isn’t at all clear that this is true when virtue is on the line. Committing a moral misdemeanour because you don’t know which of the other options is a moral felony and which is the right thing to do is, still, committing a moral misdemeanour.

3.3 Motivation

Moral uncertainty, at least of the kind I’m focussing on, is a kind of constitutive uncertainty. An agent who is morally uncertain is uncertain about what kind of things constitute goodness, rightness, praiseworthiness, and so on. It’s very plausible that these are indeed constituted by something else. It’s hard to imagine that rightness is a free-floating feature of reality.

Cases of constitutive uncertainty are useful test cases for thinking about what’s really valuable. If we know that A constitutes B, and hence have equally strong desires for A and for B, it isn’t always easy to tell which of these desires is more fundamental, and which is derived. Of course, neither of the desires will be an instrumental desire, since getting A isn’t a means to getting B. But one of them could be derivative on the other.

And the simplest way to tell which is which is to look to people who do not know that A constitutes B, and see what makes sense from their perspective. Think again about Monserrat, who has forgotten the victory conditions for her game. We know that being first to 10 points constitutes winning. But she doesn’t. What action makes sense for her to do? I think it is doing the thing that maximises her probability of winning, given her credal distribution. It turns out that isn’t the thing that maximises her probability of being first to 10 points, which is what actually amounts to winning. But she has no motivation to be first to 10 points, unless that amounts to winning. Or, at least, she has no such motivation on the most natural telling of the story. Perhaps she has an odd psychological tic that means she always values being first to 10 points in any game she plays. But the more natural story is that she wants to win, and she should do the thing that maximises the probability of winning.

Things are rather different when it comes to moral uncertainty. There it seems that agents should be moved to produce the outcome that actually constitutes goodness or rightness, not the thing that maximises expected goodness or rightness. This is a point well made by Michael Smith. He compared the person who desires to do what is actually right (in his terms, one who desires the right de re) with the person who desires to do what is right, whatever that turns out to be (one who desires the right de dicto).

Good people care non-derivatively about honesty, the weal and woe of their children and friends, the well-being of their fellows, people getting what they deserve, justice, equality, and the like, not just one thing: doing what they believe to be right, where this is read de dicto and not de re. Indeed, commonsense tells us that being so motivated is a fetish or moral vice, not the one and only moral virtue.  (Smith 1994, 75)

I think that’s all true. A good person will dive into a river to rescue a drowning child. (Assuming, that is, that it is safe enough to do so; it’s wrong to create more rescue work for onlookers.) And she won’t do so because it’s the right thing to do. She’ll do it because there’s a child who needs to be rescued, and that child is valuable.

Not everyone agrees with Smith that commonsense has this verdict about moral motivation. It helps to see the point made less abstractly, about a particular case. Here is the initial description of Saint-Just from Palmer’s classic study of the Committee of Public Safety, Twelve Who Ruled.

Saint-Just was an idea energised by a passion. All that was abstract, absolute and ideological in the Revolution was embodied in his slender figure and written upon his youthful face, and was made terrible by the unceasing drive of his almost demonic energy. He was a Rousseauist, but what he shared with Rousseau was the Spartan rigor of the Social Contract, not the soft day-dreaming of the Nouvelle Héloïse, still less the self-pity of the Confessions. He was no lover of blood, as Collot d’Herbois seems to have become. Blood to him simply did not matter. The individual was irrelevant to his picture of the world. The hot temperament that had disturbed his adolescence now blazed beneath the calm exterior of the political fanatic.  (Palmer 1941, 74, emphasis added)

That’s what someone who is only motivated by the good, as such, looks like. And it’s terrifying. Commonsense morality prefers a view where blood matters, and the individual is relevant, and where all of Rousseau’s works have something to teach us about how to live.1

  • 1 I’ve mentioned Robespierre a few times in this context, so it’s interesting to note that Palmer thinks Robespierre is not as extreme as Saint-Just. He compares the two in the paragraph preceding this one, mostly saying that Saint-Just is a more extreme version of Robespierre. Saint-Just is similar to his hero, but “without the saving elements of kindness and sincerity”. I think ‘saving’ is a little strong, but otherwise that judgment seems right. Collot positively desired actually bad things, Robespierre cared insufficiently about actually good things, and Saint-Just simply did not care about anything beyond ideology.

We need to distinguish here two theses one might have about moral motivation. One is that the good, as such, should not be one’s only motivation. That’s what Smith says commonsense says, and it’s what the example of Saint-Just supports. Another is that the good, as such, should not be among one’s motivations. I think this latter claim is mostly true as well. But I’ll come back to that; for now I want to spell out the consequences of the weaker claim, that the good should not be one’s only motivation.

This claim already makes trouble for normative internalists, including Smith himself. It makes trouble because it offers us a nice explanation of why there should be the kind of asymmetry between factual and normative uncertainty that we see in cases like Shila’s. Think again about the situation she is facing. She has to choose between respecting Marilou’s autonomy, and respecting the fetus’s life. And she doesn’t know what to do, in no small part because she doesn’t know which form of respect constitutes moral rightness. But one thing she does know is that the moderate option maximises expected goodness. If we thought that this was an important motivation, we presumably should think it could be decisive in some cases, and Shila might take that moderate option. But intuitively she should never do that, and should not have any motivation to do it. A pro-choice theorist may think Shila should believe that respecting autonomy is the right thing to do, and so Shila should be motivated to do what’s right because she’s motivated to respect autonomy. A pro-life theorist may think Shila should believe that respecting life is the right thing to do, and so Shila should be motivated to do what’s right because she’s motivated to respect life. But neither will hold that Shila should have a motivation to do what’s right that floats free of her motivation to respect autonomy, and to respect life.

This way of thinking about Shila’s case suggests a prediction, one that is borne out by the cases. It isn’t always the case that moral ‘hedging’, of the kind I’ve been criticising since the start of section 3.2, is bad. Imagine an agent faces a choice between competing values, both of which are values that she holds dear. For instance, consider an administrator who faces a student in a somewhat unusual situation. (The point of it being unusual is to ensure there is no clear precedent for what to do in such cases.) The administrator has to choose between being compassionate to the person in front of her, and treating the case in front of her as she has treated like cases in the past. She may well care both about compassion and equality, and in such a case, it would make sense to look for a way to minimise the distance between how she treats this case and how she has treated past cases, while also being highly compassionate to the person in front of her. And that is true even if the outcome she comes up with is neither the most compassionate thing she can do, nor the most respecting of her desire to treat like cases alike. The reason this makes sense is that the administrator doesn’t think rightness is exclusively constituted either by compassion, or by treating like cases as alike as possible. Rather, she has plural values, like most of us do. And plural values, as opposed to uncertainty about what is the one true value, can produce moral Jackson cases.

What is the difference between Monserrat’s case and Shila’s? Why should Monserrat aim for what maximises the constituted quantity, while Shila aims for what maximises (or perhaps best respects) the constituting quantity? The answer comes from what it means for something to be right: for something to be right just is for it to be valuable. One of the striking things about games is that they turn something otherwise pointless, like being first to 10 points, into something that rational people can value. But morality isn’t like that. It can’t make value out of something that wasn’t valuable, because if it wasn’t valuable, it wouldn’t be fit to constitute rightness. So whatever rightness is, be it respecting autonomy or maximising welfare or whatever, must be something already valuable. And it is hard to see how having the property of being most valuable can be more valuable than the valuable thing itself.

So we get an explanation of Smith’s observation. (And here I’m not saying anything that hasn’t been said before, by, for example, Nomy Arpaly (2003) and Julia Markovits (2010).) It is good to aim at what is actually right and good, not at rightness and goodness themselves, because the constitutors are where the value lies. But that means moral uncertainty should not affect our motivations. And that’s a striking asymmetry with factual uncertainty, which quite clearly should affect our motivations.

3.4 Welfare and Motivation

Smith’s insight, that there is something wrong about being motivated to do what’s good as such, generalises. There are plenty of other things whose constituents we do and should care about, but which we should not (and typically do not) care about as such. Welfare, for instance, is like this.

It’s plausible that deliberately undermining your own welfare, for no gain of any kind to anyone, is irrational. Indeed, it may be the paradigmatic form of irrationality. There is a radically Humean view that says that welfare just consists of preference satisfaction, and rationality is just a matter of means-end reasoning. If that’s right, then the claim I just said was plausible is not only true, but almost definitional of rationality. You don’t have to be that radical a Humean, or really any kind of Humean at all, to think there is a connection between welfare and rationality. But if rationality is connected to welfare, it is because it is connected to the constituents of welfare, not to welfare as such. To see this, consider two examples, Bruce and Oberon.

Bruce has thought a bit about philosophical views on welfare. In particular, he has spent a lot of time arguing with a colleague who has the G. E. Moore-inspired view that all that matters to welfare is the appreciation of beauty, and personal love.2 Bruce is pretty sure this isn’t right, but he isn’t certain, since he has a lot of respect for both his colleague and for Moore.

  • 2 It would be a bit of a stretch to say this is Moore’s own view, but you can see how a philosopher might get from Moore (1903) to here. Appreciation of beauty is one of the constituents of welfare in the objective list theory of welfare put forward by John Finnis (2011, 87–88).

Bruce also doesn’t care much for the visual arts. He thinks that art is something he should learn about, both because of the value other people get from art, and because of what you can learn about the human condition from it. And while he’s grateful for what he learned while trying to inculcate an appreciation of art, and he has become a much more reliable judge of what’s beautiful and what isn’t, the art itself just leaves him cold. I suspect most of us are like Bruce about some fields of art; there are genres that we feel have at best a kind of sterile beauty. That’s how Bruce feels about visual art in general. This is unfortunate; we should feel sorry for Bruce that he doesn’t get as much pleasure from great art as we do. But it doesn’t make Bruce irrational, just unlucky.

Finally, we will suppose, Bruce is right to reject his colleague’s Moorean view on welfare. Appreciation of beauty isn’t a constituent of welfare. We’ll suppose for the sake of the example that welfare is a matter of health, happiness and friendship. That is, a fairly restricted version of an objective list theory of welfare is correct in Bruce’s world. And for people who like art, appreciating art can produce a lot of goods. Some of these are direct - art can make you happy. And some are indirect - art can teach you things and that learning can contribute to your welfare down the line. But if the art doesn’t make you happy, as it doesn’t make Bruce happy, and one has learned all one can from a genre, as Bruce has, there is no welfare gain from going to see art. It doesn’t in itself make you better off, in the way that Bruce’s Moorean colleague thinks it does.

Now Bruce has to decide whether to spend some time at an art gallery on his way home. He knows the art there will be beautiful, and he knows it will leave him cold. There isn’t any cost to going, but there isn’t anything else he’ll gain by going either. Still, Bruce decides it isn’t worth the trouble, and stays out. He doesn’t have anything else to do, so he simply takes a slightly more direct walk home, which (as he knows) makes at best a trifling gain to his welfare.

Bruce is perfectly rational to do this. He doesn’t stand to gain anything at all from going to the gallery. In fact, it would be a little perverse, in a sense we’ll return to, if he did go.

Oberon is also almost, but not completely, certain that health, happiness and friendship are the sole constituents of welfare.3 But he worries that this is undervaluing art. He isn’t so worried by the Moorean considerations of Bruce’s colleague. But he fears there is something to the Millian distinction between higher and lower pleasures, and thinks that perhaps higher pleasures contribute more to welfare than lower pleasures. Still, most of Oberon’s credence goes to views on which the Millian distinction is mistaken. He is mostly confident that people think higher pleasures are more valuable than lower pleasures because they are confusing causation and constitution. It’s true that experiencing higher pleasures will, typically, be part of experiences with more downstream benefits than experiences of lower pleasures. But that’s the only difference between the two that’s prudentially relevant. (Oberon also suspects the Millian view goes along with a pernicious conservatism that values the pop culture of the past over the pop culture of the present solely because it is past. But that’s not central to his theory of welfare.) And like Bruce, we’ll assume Oberon is right about the theory of welfare in the world of the example.

  • 3 Thanks to Julia Markovits for suggesting the central idea behind the Oberon example, and to Jill North for some comments that showed the need for it.

Now Oberon can also go to the art gallery. And, unlike Bruce, he will like doing so. But going to it will mean missing a night playing video games that he regularly attends. Oberon knows he will enjoy the video games more. And since playing video games with friends helps strengthen friendships, he has a further reason to skip the gallery and play games. Like Bruce, Oberon knows that there can be very good consequences of seeing great art. But also like Bruce, Oberon knows that none of that is relevant here. Given Oberon’s background knowledge, he will have fun at the exhibition, but won’t learn anything significant.

Still, Oberon worries that he should take a slightly smaller amount of higher pleasure rather than a slightly larger amount of lower pleasure. And he’s worried about this even though he doesn’t give a lot of credence to the whole theory of higher and lower pleasures. But he doesn’t go to the gallery. He simply decides to act on the basis of his preferred theory of welfare, and since that theory of welfare is correct, he maximises his welfare by doing this.

Now distinguish the following two claims about welfare and rationality. The first of these claims is plausibly true; the second is false.

  • A person’s welfare is such that it is irrational for them to do something that might undermine it for no compensating gain.
  • It is irrational for a person to do something that might undermine their welfare, whatever that turns out to be, for no compensating gain.

If welfare turns out to be health, happiness and friendship, then the first claim says that it is irrational to risk undermining one’s health, happiness and friendship for no compensating gain. And that is correct. But the second claim says that for any thing, if that thing might be welfare, and an action might undermine it, it is irrational to perform the action without a compensating gain. That’s a much stronger, and a much less plausible, claim. The examples of Bruce and of Oberon show that it is false; they act rationally even though they do things that might undermine what welfare turns out to be.
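The contrast between the two claims is at bottom a scope distinction, and it may help to display it explicitly. As a rough gloss (the formalisation is mine), the first claim quantifies over what actually constitutes welfare:

$$\exists X\,\big(X \text{ constitutes welfare} \wedge \forall a\,(a \text{ risks undermining } X \text{ for no compensating gain} \rightarrow a \text{ is irrational})\big)$$

while the second quantifies over every epistemically possible constitutor:

$$\forall X\,\big(\Diamond(X \text{ constitutes welfare}) \rightarrow \forall a\,(a \text{ risks undermining } X \text{ for no compensating gain} \rightarrow a \text{ is irrational})\big)$$

Bruce and Oberon only ever risk undermining merely possible constitutors of welfare (the appreciation of beauty, higher pleasure), which is why the first formula acquits them while the second convicts them.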

One caveat to all this. On some theories of welfare, it will not be obvious that even the first claim is right. Consider a view (standard among economists) that welfare is preference satisfaction. Now you might think that even the first claim is ambiguous, between a claim that one’s preferences are such that it is irrational to undermine them (plausibly true), and a claim that it is irrational to undermine one’s preference satisfaction. The latter claim is not true. If someone offers a person a pill that will make her have preferences for things that are sure to come true (she wants the USA to remain more populous than Monaco; she wants to have fewer than ten limbs; etc.), it is rational to refuse it. And that’s true even though taking the pill will ensure that she has a lot of satisfied preferences. What matters is that taking the pill does not satisfy her actual preferences. If she prefers X to Y, she should aim to bring about X. But she shouldn’t aim to bring about a state of having satisfied preferences; that could lead to rather perverse behaviour, like taking this pill.

3.5 Motivation, Virtues and Vices

So far in this chapter I have relied heavily on Michael Smith’s principle that a certain kind of motivation would be unreasonably fetishistic. In this section I’m going to defend Smith’s principle in more detail. Since Smith’s principle has been extensively discussed, I’m going to spend some time on the existing literature. But one key point of this section will be that I need a much weaker principle for my broader conclusion than Smith needs for his. So even if the existing objections to Smith are correct, and I will concede at least one has some force against the strong principle Smith defends, they may not affect my argument for externalism.

That Smith and I need different versions of the principle should not be too surprising. As we saw in chapter 2, Smith defends some of the internalist principles I’m arguing against. Since we have different conclusions, one might hope we had different premises. The passage from Smith I quoted about moral fetishism is in defence of his motivational internalism. As I noted in chapter 1, the different theses called internalism are dissociable, but they do have some affinities. Motivational internalism is consistent with normative externalism, but is in some tension with it. So again, it isn’t surprising that I’ll be using Smith’s idea in a slightly different way.

Let’s start by setting out three theses that one might try to draw from Smith’s reflections.

Weak Motivation Principle (WMP)
In equilibrium, it is permissible to not be intrinsically motivated by maximally thin moral properties de dicto.
Strong Motivation Principle (SMP)
In most circumstances, it is impermissible to be at all intrinsically motivated by moderately thin (or thinner) moral properties de dicto.
Ideal Motivation Principle (IMP)
In all circumstances, it is impermissible to be at all intrinsically motivated by maximally thin moral properties de dicto.

The SMP and IMP are both stronger than the WMP, though neither is stronger than the other. As I read him, Smith needs the IMP to get his argument for motivational internalism to work. Since I’m not interested in that, I’ll set it aside from now on.

In the next section I’ll discuss the WMP, with a focus on clarifying the term ‘equilibrium’. The aim is to argue that it is true, and that if it is true, there is an asymmetry between factual and moral uncertainty.

After that, I’ll discuss the SMP. I also think the SMP is true, and if it is true, then there is a huge asymmetry between factual and moral uncertainty. But I need to stress at this point that defending the SMP isn’t strictly necessary for the major argument of the chapter; the WMP is enough to raise problems.

After that, I’ll discuss a few examples that help clarify the boundaries of the two principles, and which I think provide some argument for the principles. But I’m discussing them at the end, because I don’t really want the case for or against the principles to rest on intuitions about disputed examples like the ones I’ll bring up.

The principles appeal to the notion of ‘intrinsic motivation’, and it’s worth spending a few words on that. Just about everything I say here is drawn from Arpaly and Schroeder (2014, 6–14), and they go into more detail than I do about some of the important distinctions.

There is a distinction in everyday English between ends and means. And to a first approximation, to desire something as an end is to desire it intrinsically, and to desire it as a means is to desire it instrumentally. But here we need to make a slightly finer distinction than that.

Parents typically desire that their children be well educated. For some people this will be an instrumental desire; they want their children to be, say, very rich, and think that education is a means to wealth. But for others it will be intrinsic; a good education is part of what is good for their children.

Now consider the desire (again widely held among parents) that one’s children be well educated in arithmetic. How does this relate to the general desire that they be well educated? It isn’t exactly a means to that end. It is part of what it is to be well educated. To desire that a child be well educated, and to know what it is to be well educated, just means that you desire that the child be well educated in arithmetic. Call desires like this, ones which have a constitutive rather than causal connection to intrinsic desires, realizer desires.

The most obvious cases of realizer desires are when the intrinsic desire is more general, and the realizer desire is more specific. But we can go the other way around too. Consider again the perfectly normal parent who wants their child to be well educated, to be healthy, to be happy, to have lots of friendships, and generally wants all the things that make up a good life for their child. That parent will want their child to have a good life. This might be an intrinsic desire; maybe all those other desires are realizers of it. It might even be an instrumental desire, though this would be a little perverse. Or it might be a realizer desire, and I think this is the most natural case. If one wants the child to be happy, healthy, befriended, educated, etc., and one has a sensible balance between those desires, then in virtue of all that, one has the desire that the child have a good life. To desire all these things just is to desire that the child have a good life. It’s a very different way of desiring that the child have a good life than having that desire instrumentally, as one might if one wanted the child to have a good life solely so one would be rewarded in the afterlife. And it is a somewhat different way of desiring that the child have a good life than having that desire intrinsically. The difference shows up in two ways. One concerns the order of explanation: does one want the child to have a good life in virtue of wanting the child to be happy, healthy etc., or is it the other way around? The other concerns how one’s desires for the child change when one’s conception of the good life changes.

So the SMP and WMP concern themselves neither with instrumental desires nor with realizer desires. A good person will typically desire that they do the right thing, but they will desire that because the things they desire are actually the right thing to do, and they will (typically) know this. The principles say that the desires to do things that are actually right could be, or in the case of the latter two principles should be, explanatorily prior to the desire to do the right thing as such.

3.6 The Weak Motivation Principle (WMP)

3.6.1 Equilibrium

The WMP is restricted to equilibrium states. This restriction is there to deal with an important class of cases that Sigrún Svavarsdóttir (1999) discusses.

[Smith argues that] the externalist account “re-describe[s] familiar psychological processes in ways that depart radically from the descriptions that we would ordinarily give of them”  (Smith 1996, 180) … Smith tells a story of a friend (let’s call him Mike) who has radically changed his moral view over the years from act-utilitarianism to a view that sanctions, in some instances, favoring family and friends, even when this cannot be given utilitarian justification. Since Mike is a moralist, his motivational dispositions have changed correspondingly … I would like to offer an illustration of what sort of description externalists might give of Mike’s mental states before, during, and after his two moral conversions. I venture the following speculation: Mike has always had some inclination to favor family and friends, but at one point he developed strong inhibitions against acting on these inclinations. These inhibitions were largely the result of being convinced that act-utilitarianism specifies the correct criterion for moral rightness. Having a strong desire to do the right thing and a rigid temperament, Mike quickly developed an avid interest in maximizing total happiness in the world, taking the interest of each person equally into account. In due time, his desire to maximize happiness actually started to dominate all other desires to the point that his friends thought of him as a utilitarian monster. But slowly doubts started to emerge as a result of exposure to arguments against utilitarianism. By and by Mike’s conviction eroded and in the end he accepted a moral view according to which it is often right to be partial to family and friends, even when doing so cannot be given a utilitarian justification. At the same time, he came to see himself as a utilitarian monster, ever ready to sacrifice the interests of friends and family for the utilitarian project. Motivational dispositions he formerly took pride in having developed now became distasteful to him. However, since his desire to do the right thing has continued to be operative in his psyche, these dispositions are slowly eroding and the inhibitions on his inclinations to favor family and friends are undergoing radical change. They are gradually falling in line with his view of when it is right to give extra benefits to family and friends.  (Svavarsdóttir 1999, 208–10)

Smith had argued that it is always a bad thing to be moved by the desire to do the right thing, as such. Svavarsdóttir’s reply here is that this isn’t bad at the very moment of major change in one’s moral outlook. (Since this was the very example that Smith used against the motivational externalist, such examples were rather relevant to her debate with Smith.) Adopting a moral theory wholeheartedly requires adjusting one’s motivations to align with it. But this need not be an instantaneous process; it can take time and effort. And the motivation to engage in this process of adjustment may come from a desire to do the right thing.

The defender of the WMP can concede all this. What the defender says is that Mike, in Svavarsdóttir’s example, is not in equilibrium. What do we mean here by being in equilibrium?

For current purposes, it means having fairly settled moral views, and having had enough time and space since one’s views became settled to make suitable adjustments in the rest of one’s mind. Equilibrium requires the absence of felt pressure to change one’s desires in light of changes to one’s moral outlook.

Here are two cases that I take to not be in equilibrium, in the sense relevant to the WMP.

  • Our hero faces a choice between competing values, and is torn about how to resolve the conflict between them. She does not know which value is stronger, and she either lacks a clear disposition to resolve the tension in one particular way, or has such a disposition but does not trust it.
  • Our hero systematically does not do what they believe to be best, and is trying to change their attitudes and behaviour to conform to their beliefs about the good.

On the other hand, the following two cases are cases of equilibrium in the relevant sense, albeit highly imperfect equilibrium.

  • Our hero does not do what they believe to be best, but they have learned to live with this, perhaps feeling guilty about the gap between their thoughts and their deeds.
  • Our hero is disposed to act one way, but would change their disposition if the reasons for acting a different way, reasons they already possess, were made salient to them.

In all four cases, the person already possesses something like reasons to change. But what makes for being in disequilibrium is the feeling that things must and will change.

Our ultimate interest here is in cases where moral beliefs do or don’t line up with action, but we can come up with mundane, non-moral illustrations of each of them. Here’s a (schematic) illustration of the fourth kind of case.

I have a particular route I usually use going from B to C. I have a different route I use going from A to C. That route goes via B, but it does not take the usual route I use from B to C. This can’t be optimal; if there is a best way to get from B to C, I should use it in parts of journeys as well as in whole journeys. I could, nevertheless, be in equilibrium, even if a small suggestion (hey, why don’t you do something different for the second part of the A-C route?) would push me to change my behaviour. The point is that equilibrium in the relevant sense just requires that the agent isn’t trying to change, and isn’t feeling pressure to change, even if they possess perfectly good reasons to change, and could easily be changed.

But in Svavarsdóttir’s example, we do not have someone in equilibrium even in this weak sense. Mike wants to change his dispositions to line up with his moral theory, and he is making progress at this, but he still isn’t there. The WMP does not deny that in cases like this, it is permissible to have goodness itself as a motivation.

3.6.2 Why Engage in Moral Reflection?

The following kind of consideration is sometimes advanced as a reason to be motivated by goodness as such. Sometimes people engage in practically directed moral reflection. That is, they think hard about what is the right thing to do, and the intended result of that thinking is that they do the thing they think is right. The most obvious analysis of what’s going on in these cases is that the people involved want to do the right thing, and the point of engaging in reflection and acting on it is to bring it about that they do the right thing. And at least in cases where this leads to the thinker acting well, it seems this kind of moral reflection is a very good thing to engage in.

In the next section I’m going to say a lot more about this kind of case, because the SMP has to give a very different analysis of what is going on in moral reflection. But the defender of the WMP does not need to say much about these cases because they can simply endorse the ‘obvious analysis’. The defender of the WMP can say that it is good, even optimal, to engage in moral reflection, motivated by the desire to do the right thing, when not in equilibrium.

The WMP is only making the following claim. When the storm is over and the seas are flat, a good person may be motivated by the things that make their actions right, not by the rightness itself. People who don’t know what to do, and are torn between competing values, could not be a counterexample to such a principle.

3.6.3 The WMP and Two Kinds of Motivation Gaps

But why should we believe the WMP? I think the best reason is the simple intuition that Smith put forward: good people are motivated by things around them in the world, not by abstract notions of virtue and rightness. Another reason comes from reflection on fanatics like Robespierre and Saint-Just. But not everyone accepts those reasons. So let’s look at a pair of cases that need explaining, and which the WMP can explain.

The first case is a petty crook who won’t cross certain lines. In particular, while he’ll steal anything from anyone, he won’t engage in violence. This isn’t just because he is scared of getting punished for violent acts. He has a kind of moral objection to violence. Perhaps speaking loosely, let’s say that he has no respect for property rights, but a fitting and proper respect for rights involving bodily autonomy.

The thief’s colleagues are planning a violent robbery. Feeling uncomfortable with this turn of events, the thief informs the police, who prevent the violence. This was a right and praiseworthy action by the thief. But what could make it right and praiseworthy? Not that he was trying to do the right thing - he’s a thief who would have happily gone along with a non-violent plan to steal the goods. What makes his actions right and praiseworthy is that his motivation, prevention of violence against (relative) innocents, was good. There is nothing mysterious about, and nothing wrong with, having this motivation without having a general motivation to be moral.

The second case is a person who has a desire to do what’s right, but no underlying motivations. There are a couple of interesting variants of this case. Nomy Arpaly (2003) spends some time on examples of ‘misguided conscience’; people who want to do the right thing and are wrong about what it is. But we can also imagine someone who does want to do the right thing, and is broadly correct about what is right, but lacks any direct desire to do the thing that’s actually right. Let’s think about such a case for a bit.

Our protagonist, call him Rowly, was brought up well enough that he knows it is wrong to use violence to get things you want. And a desire to avoid wrongdoing was inculcated at a young age. So when Rowly wants a beer, but could only get one by punching someone, he declines to take the opportunity. But he is upset by this; he has no desire to avoid violence, or to avoid causing suffering, and wishes it was not wrong to punch someone to get a beer.

There is something deeply wrong with Rowly. We can see this by thinking about our interpretative practices. When someone says they did something because “it was the right thing to do”, we do not normally interpret them as having no other-directed desires other than the desire to avoid wrong-doing. We do not normally think of such a person as being like Rowly. Someone who has to be taught what’s right and wrong, and who has this belief as the only barrier stopping serious wrongdoing, is a deeply flawed human being. Even when people are too inarticulate to say what desires they have beyond a desire to do the right thing, we normally interpret this as inarticulateness, not a lack of respect for others, nor a lack of desire that others not suffer. This inarticulateness is not surprising; it’s really hard to describe what makes actions right or wrong. But not wishing well for others is surprising; it’s a serious character flaw.

So a desire to do the right thing is, in equilibrium, either unnecessary or insufficient. If one wants to prevent suffering to others, and acts on this, that’s great, and it makes the desire to do the right thing unnecessary. If one lacks a desire to prevent (causing) suffering, then it is perhaps fortunate to have a desire to do the right thing, but that is insufficient for virtue.

Since a desire to do the right thing seems so useless, at least in equilibrium and in the presence of other good desires, it seems permissible to not have such a desire. And that’s all WMP says.

3.6.4 Against Symmetry

I’ve argued so far that the WMP is true. I’m now going to argue that, assuming the WMP is true, there is an asymmetry between factual and moral uncertainty. The role the WMP plays is to block one of three possible routes out of a problem facing the defender of symmetry.

We know that having the probability of some factual proposition move from 0% to 5% can (rationally) change behaviour. If I think the probability of rain is 0%, I don’t have to check whether there is an umbrella in the car. If I think it is 5%, I will check the trunk to see that the umbrella is still there before heading out. If symmetry holds, then changing the probability of a moral proposition from 0% to 5% should also change behaviour. And it is hard to see how that could happen.
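A toy expected utility calculation makes vivid how small a credal shift can rationally flip a choice. (The numbers are mine, and purely illustrative.) Suppose checking the trunk costs 1 unit of utility and guarantees I have the umbrella; if I don’t check, there is a 0.2 chance the umbrella is missing; and getting rained on without it costs 200 units. Then:

$$EU(\text{check}) = -1 \qquad EU(\text{don't check}) = P(\text{rain}) \times 0.2 \times (-200)$$

If $P(\text{rain}) = 0$, not checking has expected utility 0, so I rationally skip the check. If $P(\text{rain}) = 0.05$, not checking has expected utility $-2$, which is worse than $-1$, so I rationally check. The symmetry thesis says a shift from 0% to 5% in a moral credence should be able to make the same kind of rational difference; the question to come is what desire could play the role that my desire to stay dry plays here.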

I’m going to mostly assume here a broadly Humean picture of motivation: people do things that promote their desires, assuming their beliefs are true. The relevant contrast here is with the view that beliefs, or at least belief-like states, can promote action without an underlying desire. So the Humean thinks I pack the umbrella because I believe it prevents me getting wet, and I have a desire to avoid getting wet, while the anti-Humean thinks I pack it because I believe it prevents me getting wet, and I believe that it is good to avoid getting wet (or something similar).

I’m assuming the Humean view partially because it is implicit in our best formal models, partially because it seems intuitive, and partially because there are technical problems with the anti-Humean view. David Lewis (1988, 1996) showed that the view that beliefs about the good played the role of values in expected value theory led to problems with updating mental states. Recently Jeffrey Sanford Russell and John Hawthorne (2016) have shown that these results rely on much weaker premises, and apply much more broadly, than a casual reading of Lewis’s papers would suggest. Anyone who thinks that belief-like states alone can drive action has to adopt a rather implausible-seeming picture of how beliefs are updated.
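For readers who want the shape of the formal point, here is a compressed gloss of the simplest version of Lewis’s argument (the gloss and notation are mine). The anti-Humean ‘Desire as Belief’ thesis says that for each option $A$ there is a proposition $\mathring{A}$ (‘$A$ is good’) such that the agent’s value for $A$ equals her credence in $\mathring{A}$:

$$V(A) = C(\mathring{A})$$

and that this identity is preserved as the agent updates by conditionalisation. But on the standard expected value framework, conditionalising on $A$ itself leaves the value of $A$ unchanged, while the credence becomes $C(\mathring{A} \mid A)$. So the thesis requires $C(\mathring{A}) = C(\mathring{A} \mid A)$ for every admissible $C$: whether the act is good would have to be probabilistically independent of whether it is performed, which holds only in trivial cases.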

So I think rejecting belief-desire psychology is a high price to pay. But let’s note it is one way out of the argument I’m about to give. I’ll call it Option One for the symmetry defender.

If we don’t take option one, then the symmetry defender must say which desires interact with a change in credence to produce a change in action. An obvious choice is to say that it is a desire to do the right thing. But that’s blocked by the WMP. If symmetry is true, then there are times when a change in credence from 0% to 5% makes it compulsory to change actions. And it is not compulsory to have a desire to do the right thing. So that won’t work. For the record, Option Two for the symmetry defender is to reject the WMP, but that’s also a bad move.

What the symmetry defender needs is to identify desires, other than desires to do the right thing, that can generate the action. These will be tricky to find. If someone thinks that it is 0% likely that doing X is wrong, then presumably it is completely rational to have no desire to avoid X, or to avoid what X involves. So it looks like this route won’t work either.

But that’s too quick. All the symmetry defender needs is that after the change in credence, there is a desire that drives the change in action. Perhaps a change in credence could be correlated with a change in desires that produced, via orthodox belief-desire reasoning, the outcome the internalist wants.

But thinking there will always be such a change in desires is too much to hope for. Indeed, in some cases having such a change would be bad, as we can see using an example from Lara Buchak (2014).

Malai has a good friend, who she has known since childhood, and she values the friendship highly.4 Then Malai learns that someone committed a horrible crime, and there is some very weak evidence that it was her friend. It’s reasonable for Malai to have a slightly greater than zero credence that it was her friend who committed the crime, while not changing at all how much she values the friendship. Indeed, if the evidence is strong enough to move her credence, but not much more, it would be bad to have any other attitude. It’s wrong to devalue friendships because you get some almost certainly misleading evidence about your friend. It’s true the expected value of the friendship goes down when the evidence comes in, and if the friendship had only instrumental value, then that’s a reason to devalue it. If Malai’s only interest was in, say, getting to heaven, and she only valued the friendship insofar as she thought it likely it was a friendship with a good person, and that’s the kind of thing that helps get you to heaven, then she should reduce how much she values the friendship. But most of us do not have quite that transactional an attitude towards our friends or our friendships. Malai should have just as strong a desire to respect her friend and promote her friend’s interests, and to respect and promote the friendship, as she had before getting the evidence. The evidence should not make her value the friendship less, and that’s because friendships are intrinsically valuable, and how much something is intrinsically valued is not proportionate to one’s credence that it is intrinsically valuable.

  • 4 I’m assuming throughout this paragraph that to value the friendship is a matter of having the right desires concerning the friend and the friendship, not having beliefs about the value of the friend or friendship.

The same goes at the other end of the valuing scale. If one thinks that, for example, there is a 5% chance that purity is intrinsically valuable, it doesn’t follow that one needs to (intrinsically) value purity at all. Nor does it follow that one needs to be motivated, at all, by considerations of purity.

I’ll call Option Three the rejection of all that’s been said in the last three paragraphs, and the insistence that changes in moral credences must occasion changes in desires. The examples involving Malai and involving purity make this option very unattractive.

Ultimately, I think this is the deepest problem for the symmetry view. Factual uncertainty changes our actions, and it does so rationally because it changes the expected value of different actions. For moral uncertainty to have the same effect, either we have to have a false view of the role of desire in action (Option One), or have to reject the WMP (Option Two), or have to adopt an implausible and unattractive view of how desires change when credences change (Option Three). None of these are correct, so symmetry fails.

    3.7 The Strong Motivation Principle (SMP)

    It is easy to imagine very good characters who are not motivated by the good as such; instead they are directly motivated by things that are actually good. Indeed, if one’s motivations are fully in line with the good, it isn’t clear what extra there is to be gained by also being motivated to be good. At worst, this motivation seems like either a distraction, or impermissibly self-centered. As Michael Smith puts it, people with this motivation “seem precious, overly concerned with the moral standing of their acts when they should instead be concerned with the features in virtue of which their acts have the moral standing that they have.”  (Smith 1996, 183)

    There is something disturbing about a person who does not find the fact that a certain act is, say, a torture of a child to be sufficient motivation to not do it, and needs the extra motivation that it would be wrong. And the same goes for any other wrong act. Nothing is wrong as a matter of brute fact; there is always some explanation for why it is wrong. And that explanation always provides a motivation that would prevent a good person from doing the action. Anyone who needs some further motivation is in some way deficient.

    That is the intuitive argument for the SMP. And it seems to me compelling. But we can say more to motivate, and justify, the SMP. I’ll start with a discussion of a central objection to the SMP: that it doesn’t allow a special role for moral reflection. Then I’ll discuss another reason to support the SMP: it avoids a certain kind of danger, one that we see manifest in history. And I’ll close with a sketch of what a proponent of the SMP thinks the good person is like.

    3.7.1 How to Explain Reflection

    We typically think the following kind of activity is good. A person is faced with a difficult moral question, or with a question that she thought was easy, but which it turns out people she respects take a different view on. She reflects on what morality requires in such a situation. Upon coming to believe that morality requires of her something different than her current practices, she changes her behaviour to match with her new moral beliefs.

    Such a character seems to pose a problem for the SMP. At first glance, it seems like a motivation to do good, or at least avoid doing bad, plays a central role. It is, apparently, the agent’s change in her moral beliefs that triggers a change in action. And a change in a belief about what is X can only make a difference in action if X enters into one’s motivational set in the right way. Since our agent seems to be a good person, it seems like good people should have thin moral motivations.5

  • 5 In the previous section I noted that the proponent of the WMP has an easy explanation of the appeal of moral reflection, since the agent who is motivated to engage in moral reflection is not in equilibrium. Since the SMP is not restricted to agents in equilibrium states, such an appeal will not work in defence of it.

    My response to this kind of case will be very similar to what Arpaly and Schroeder (2014, 185ff) say about moral reflection. When our agent tries to figure out what morality requires of her, she won’t start with highly abstract theorising. She will start with her concrete commitments concerning how she should engage with the world around her, and work out how those commitments apply to difficult or contested cases. As Michael Smith puts the point:

    [N]ot only is it a platitude that rightness is a property that we can discover to be instantiated by engaging in rational argument, it is also a platitude that such arguments have a certain characteristic coherentist form.  (Smith 1994, 40)

    When good people use thin moral concepts in their reasoning, it is not because they are aiming at the good as such, but because these concepts are useful tools to use in sorting and clarifying their commitments, and making sure that they promote and respect the things they actually care about. We see this in other walks of life too. A competitor in a sporting event may steer their strategy towards moves that maximise expected returns. That’s not because they care about expected returns; they want to win. It is because using the concept of an expected return is a good way to manage your thoughts when you want to think about how to win. And, in practice, this is often a very good way to manage your thoughts, so good strategists will use the concept. Similarly, it may turn out to be useful to use the concepts of goodness and rightness when trying to promote and respect the things that really matter, and so it isn’t a surprise that we see good people using them.

    3.7.2 Against Motivation by Morality

    If moral concepts are useful tools for good people to use in promoting and respecting good aims, then we should expect that, like all tools, they have their limits. And indeed those limits are not hard to find. Moral reasoning is a kind of equilibrium reasoning. And equilibrium reasoning has clear strengths and weaknesses. There are cases when it is essential. Trying to work out the effect of a natural disaster on the market for widgets is practically impossible without doing at least some equilibrium reasoning. But there are also cases when it can go badly awry if not used extremely carefully, and in which very small errors in the inputs can lead to very large errors in the outputs. This is particularly the case when there are large feedback effects around. It is hard to use equilibrium reasoning to work out the effect of a rise in the price of labour, because changing the price of labour changes the demand curve for all goods, and hence the demand for labour itself. This isn’t an insuperable modelling difficulty; but it means that it will take more than the back of a napkin to work out even approximately what will happen when the price of labour changes. Similarly, weather forecasting using equilibrium models is possible, but has to be done very carefully, because very small errors in the initial inputs can push the modeller to an equilibrium that is far removed from reality.

    We see the same problems when reasoning about morality. The method of reflective equilibrium, that characteristic coherentist form of reasoning, is the best method we’ve got for working out what is right and wrong. And it is very powerful. But it is an equilibrium method, and we are in a territory where there are very strong feedback effects. Whether one thinks X’s treatment of Y is right or wrong will depend a lot on other moral judgments. If X is imprisoning Y, then that is probably very seriously wrong, unless Y has themselves done something seriously wrong, and X has been empowered (preferably by a good set of institutions) to deal with that kind of wrongdoing. Given there are this many feedback effects, we should expect that whether moral reflection leads people closer to, or away from, the truth is in part a function of how close they start to the moral truth. And this is, I think, what we see. To the extent that moral reflection strikes us as a basically good practice, it is because we imagine it being used by people who have basically good motivations to start with. But in those cases moral reasoning will help smooth out the rough edges; it won’t correct major faults.

    And this suggests a problem with having morality itself as one of one’s motivations: it is dangerous. Unless one starts with basically good motivations, thinking about the good and aiming for it could very well make things worse; perhaps catastrophically worse. We should acknowledge that in the hands of good people, moral reasoning can be a useful tool. The person who doesn’t use that tool will almost certainly fail to optimise unless they have the sentiments of a saint. But someone whose aims include respect for others and their rights, freeing people from deprivation, promoting friendship and education, and being honest in their dealings, will usually act fairly well, even if they never engage in moral reflection. They may get the balance between these aims wrong from time to time, sometimes in ways that moral reflection would prevent. But they will typically avoid moral disaster. The person who aims for the good, as such, is more likely to land in disaster. One of the most dangerous things in the world is a wrongdoer with the courage of their convictions. Thinking about how and why equilibrium analyses can fail reinforces how dangerous this trap is.

    But it’s not just theory that tells us this is dangerous. The fanatic who thinks the individual is irrelevant, who will sacrifice any number of individuals to an idea, who will destroy villages in order to save them, is a recurring character in history. In some cases they are tragic figures; people who really did start out with praiseworthy aims but who refused to compromise when it turned out that those aims couldn’t be realised without much suffering. And sometimes they are self-centred jerks, who feel empty unless they are trying to steer the whole world to their vision, whatever the costs. But what all of them teach us is that aiming for the good, and just the good, can go terribly, horribly, wrong.

    3.7.3 Back to Symmetry, and Moral Uncertainty

    Let’s turn away from these ideologues, and towards a positive picture of what a good but flawed person should look like. Our hero will mostly desire things that are actually valuable, and by and large desire them to the extent that they are actually valuable. They will have a well-functioning belief-desire psychology, so they will act so as to promote or respect those valuable things they desire. They will, from time to time, think about what is good and what is valuable, and form largely true beliefs about the good and the valuable. But since we are not supposing they are perfect, we will not assume these beliefs are inevitably true. And these moral beliefs, even the true ones, will not necessarily lead to much change in their action, because they don’t connect up with any desire in the right kind of way. It is normal for a mismatch between desires and moral beliefs to lead to some unease, and to the thought that it might be wise to reform one’s beliefs or one’s desires. But depending on how deep the disagreement is, this reform program need not be a particularly high priority. And when it is carried out, there is no guarantee that the two will be brought into line by changing desires, as opposed to by changing beliefs. What there is a guarantee of is that if the moral beliefs conflict with other first-order desires that the hero has, such as a desire that mass killings not happen, those other first-order desires will play a powerful role in stopping the moral beliefs from taking control.

    It is a thought almost as old as European philosophy that there is a good analogy between the well-functioning polis and the well-functioning mind. Although it is much less old, it is by now a venerable idea that the well-functioning polis includes a separation of powers. And one of the virtues of such a separation of powers is that it limits the damage that can be done by a sudden swing in opinion among the powers that be. This is not a panacea; some states are rotten to the core, and no amount of institutional design will help. But it will prevent, or at least moderate, certain kinds of wrong. To put it in late 18th-century terms, the Alien and Sedition Acts were bad; the Reign of Terror was worse. It’s worth thinking about what checks and balances in moral psychology would be, and more generally what a Madisonian moral psychology would look like.

    My best guess is that competing desires, such as desires to promote welfare and alleviate suffering, and desires to keep promises and respect rights, provide the appropriate kind of balance to each other. But for current purposes it doesn’t matter exactly how one ought to implement checks and balances, only that it is good that there are some. Because if moral uncertainty should be treated the same way as factual uncertainty, then there will be no checks and balances at all. When we firmly believe that some fact is true, then the thing to do is simply act as if it’s true. We only hedge against the possibility that something is false when there is a possibility that it is false, not when we are certain that it is true. The symmetry view says that we should do the same with moral (un)certainty. But if that’s the case, then there is no space for any check or balance on our moral views at all; when we are certain of them, they are guiding. That is wrong, and dangerous, so the symmetry view is also wrong.

    Sometimes good people get the moral facts wrong. Perhaps they get bad advice, or bad evidence. Perhaps they start just a little wrong and equilibrium reasoning takes them to a place that is very wrong. When that happens, they have mechanisms to stop them acting seriously wrongly. I’ve been arguing that the moral mistakes shouldn’t have any direct effect on action, because the good person won’t be aiming at the good as such. But as I’ve noted already, I don’t need anything that strong for the main argument of this book. What I need is that there should be some other forces that prevent action from lining up perfectly with moral belief when moral belief is seriously mistaken. A natural suggestion is that desires for things that are actually good can be that force. But even if that suggestion is wrong, as long as there should be some such force, the symmetry claim fails.

    3.8 Motivation Through Thick and Thin

    In this section I’m going to run through some interesting test cases for WMP and SMP. I have two aims here. First, I want to strengthen the case for WMP. Second, I want to raise some cases that are useful intuition checks for testing the plausibility of the SMP. I know from talking to many people about these cases that my views about them differ from most people’s. So while I think the cases are evidence for a fairly strong version of the SMP, I know that they won’t strike many people that way. Still, I hope the cases are useful ones for thinking about what’s at issue in debating the SMP, and in particular about how we should interpret the phrase ‘moderately thin’ in it if we want the principle to be plausible. But let’s start with a case purely about maximally thin moral properties.

    Milan is torn between two theories, and two actions. He gives some credence to an agent-neutral form of consequentialism, and some credence to a Kantian ethical theory. And he is torn between making a moderate donation to charity, one of 3% of his income, and a much larger donation to charity, one of 30% of his income (which is all he can reasonably afford). He thinks that if the Kantian theory is true, then he isn’t obliged to give more than 3%, and really doesn’t want to give any more than he has to give. But he knows that if the consequentialist theory is true, then he is obliged to give (at least) the much larger amount.

    Now Milan thinks most of the arguments favour the Kantian theory. But he has one remaining worry. He knows that the theory relies on having a workable notion of what it is for different people to do the same thing. And he worries that we don’t have such a workable notion, for reasons familiar from philosophy  (Goodman 1955) and game theory  (Cho and Kreps 1987). So he sets out to do some philosophical research, reading about work on the notion of same action, and thinking about whether any such notion can generate a version of the categorical imperative that agrees with its intuitive content, and is not trivial. As often happens when working through a philosophical problem, his views on which side is stronger change frequently. All the while, he has a web browser open, ready to hit send on a donation. And as he changes his mind on whether the grue paradox ultimately defeats Kant’s theory, he keeps adding and deleting a final zero from the amount in the box saying how much he will donate.

    The WMP says that moral agents are not obliged to be like Milan. They don’t have to have their charitable actions be sensitive to their beliefs about technical problems for Kantian ethics. It is, I think, reasonable to have one’s credence in the correctness of Kantian ethics turn on beliefs about relatively technical problems. (For what it’s worth, I think the kind of problem Milan is worrying about is a genuine problem for some kinds of Kantian theory, particularly those that think the formality of the theory is an important virtue of it.) But an agent who is being epistemically reasonable need not have their actions be sensitive to their technical worries. And that’s because the agent need not be motivated by rightness as such.

    If we change the case a little, we get an interesting test for SMP. Unlike Milan, Torin is convinced that some kind of Kantian theory is true. He also thinks there are technical problems with getting the formulation of the categorical imperative right. But he also thinks, sensibly enough, that these kinds of technical problems are challenges, not reasons to reject the theory. Still, the way to solve the challenge will be to formulate different versions of the categorical imperative, and test them. And these different versions will have different consequences for which actions are required in certain circumstances. Is it reasonable for Torin to be differently motivated when he changes his views about exactly which formulation of the categorical imperative is right? I don’t feel that it is, but I can imagine that different people have different views here.

    A slightly more natural case seems even trickier to come to a firm judgment about. Florentina is trying to figure out what to do in a case where there are competing reasons in favour of two incompatible actions. She feels rather torn, but can’t settle on a particular choice. Then she notices something: one of the choices, but not the other, is incompatible with the categorical imperative. Is it reasonable for her now to be more motivated to do the one that is consistent? I think this is a somewhat strange mindset, but I suspect many will disagree. What makes this case tricky is that we have to distinguish two situations that are rather hard to keep apart. We aren’t interested in the case where Florentina sees that a choice is incompatible with the categorical imperative, and by seeing this sees that she had been overvaluing its strengths or undervaluing its weaknesses. Rather, we are interested in the case where this fact about the categorical imperative is itself a new motivation, alongside all the old motivations, not to do a particular action. To the extent that I can keep a clear grip on the case, I think this is not a reasonable stance for Florentina to take. And that’s why I think that it is wrong to be motivated by an action’s compatibility or otherwise with the categorical imperative. What is reasonable is to see incompatibility with the categorical imperative as a reason for thinking there is something else wrong with the action, perhaps something we haven’t yet seen.

    Florentina’s case is interesting even if you think that basing a whole moral theory around the categorical imperative is implausible. You can think that such a theory is surely wrong, but also think that Kant was nevertheless on to something important. Whether one could rationally will that everyone does X could be a factor in determining whether X is right or wrong, even if it is a long way from being a central factor. My default view in first-order ethics is a kind of muddy pluralism, which acknowledges that many distinct moral traditions have important insights into the nature of rightness and goodness, but which rejects any claim to comprehensiveness these theories may make. Florentina’s case suggests that even if you have such a kind of pluralist view, you still could reject the view that conformity with the categorical imperative is a good motivation.

    Let’s move to some cases that seem a little easier. (I owe the following case to discussions with Scott Hershovitz.) Mercurius is a professor in a large university. As with most professorial positions, Mercurius has a fair amount of control over how much work he does. Some of his colleagues do more for the department than anyone could reasonably require, some do less than anyone could think was reasonable. Mercurius is a reasonable department citizen, handling a perfectly fair share of the workload, but only just as much as fairness requires. Today, as sometimes happens, a request comes around from the chair for volunteers for an unexpected task. Mercurius does not find the task intrinsically interesting, but he knows that none of his colleagues will feel any differently. He knows he will feel a bit bad for whoever ends up shouldering the task, but will feel worse if it ends up being him. Still, he is worried he hasn’t done his fair share of the work. This worry is mistaken, since, as I said, he has done enough, but it isn’t an irrational belief, because it is such a close call. So he volunteers, motivated by a desire to do his fair share of the collective work.

    This strikes me, and most people I’ve spoken about the case with, as a perfectly reasonable motivation. There is nothing objectionably fetishistic about being motivated to do one’s share of a task one values. And Mercurius does value the good functioning of his department, and knows that it requires that the members collectively take on some unpleasant tasks. So he acquires a motivation to take on this particular unpleasant task.

    It isn’t easy to classify Mercurius’s desire using the terminology we discussed in the previous section. He certainly doesn’t have an intrinsic desire to do the unpleasant task. And it isn’t strictly speaking an instrumental desire. We can imagine that Mercurius knows that one of the usual suspects, the people who already do more than their fair share, will take on this unpleasant task if no one else does. And we don’t have to imagine that Mercurius values their time more than his. Nor is it quite right to say that Mercurius’s desire to do this job is a realizer desire of his desire that the department runs well. After all, if he had just taken on a similar task the previous week, he would not desire to take on this one, although its relationship to the good functioning of the department would be unchanged. The best thing to say is that Mercurius has an intrinsic desire to do his fair share of collective projects that he has joined, and given his (false) beliefs about his past actions, this creates a realizer desire to do this unpleasant task.

    So that puts an upper bound on the extension of ‘moderately thin’ in SMP. There isn’t anything wrong with having a desire to do one’s fair share, i.e., being motivated by properties like fairness. But on the other hand, thinking about these ‘fair share’ or ‘good teammate’ motivations helps explain some otherwise tricky cases. Indeed, my suspicion is that most intuitive counterexamples to the WMP, or even the SMP, can be helpfully thought of as cases where the agent has some independent motivation for joining a team or a project, and then a desire to be a good member of that team or project.

    That’s what I want to say about, for example, this case from Hallvard Lillehammer (1997).

    Consider next the case of the father who discovers that his son is a murderer, and who knows that if he does not go to the police the boy will get away with it, whereas if he does go to the police the boy will go to the gas-chamber. The father judges that it is right to go to the police, and does so. In this case it is not a platitude that a desire to do what is right, where this is read de re, is the mark of moral goodness. If what moves the father to inform on his son is a standing desire to do what is right, where this is read de dicto, then this could be as much of a saving grace as a moral failing. Why should it be an a priori demand that someone should have an underived desire to send his son to death?  (Lillehammer 1997, 192)

    A well-functioning justice system is a very valuable thing to have. There is nothing at all fetishistic about desiring that one’s state have such a system, and that it be maintained. Yet a well-functioning justice system requires collective action, and this generates issues about whether one is doing one’s fair share. As noted above, it can be reasonable, and not at all inconsistent with WMP, to desire to do one’s fair share of a group project. Here the father who informs on his son should be motivated not by a desire to do what’s right as such, but by a desire to do his fair share of maintaining a good justice system.

    If that’s the right analysis of the case, then the father should be less motivated the less difference his informing will make to whether the state has a well functioning justice system. We see this already in Lillehammer’s version of the case; the injustice of capital punishment is a reason for thinking that informing is not really a way of doing one’s share in maintaining a system of justice. But similarly, if the family lives in a state where justice is very much the exception, it’s reasonable to be less motivated to inform on one’s son. By analogy, if tasks like the one Mercurius is considering routinely go undone, so there is no good functioning to maintain, that’s a reason to be less motivated to take on this task.

    Finally, consider a case about welfare, which has interesting lessons for moral motivations. Xue believes that human welfare is entirely constituted by health, happiness and friendship. And she is strongly motivated to promote her own health, happiness and friendships, which is natural enough given that belief. She is also motivated to help others (she is no moral monster), but for now we’re just interested in her prudential reasoning.

    Xue is told that bushwalking is good for your welfare, though she isn’t told whether it does so by making you healthier, making you happier, or improving your friendships. But the source of this information is very reliable, so Xue forms a desire to do more bushwalking. And this seems reasonable enough. Is this a case where Xue is motivated by welfare as such, and reasonably so?

    I think it isn’t. We have to distinguish three possible states.

    1. Xue is motivated to do things that have the property ‘promote my health’, and is motivated to do things that have the property ‘promote my happiness’, and is motivated to do things that have the property ‘promote my friendships’.
    2. Xue is motivated to do things that have the disjunctive property ‘either promote my health, or promote my happiness, or promote my friendships’.
    3. Xue is motivated to do things that have the property ‘promote my welfare’.

    Assuming fairly minimal coherence, we can’t tell the difference between 1 and 2 just by looking at Xue’s actions. Whether 1 or 2 is correct, she would do the same things in almost all circumstances. Perhaps she would say different things if the issue of whether she had disjunctive or non-disjunctive motivations arose in conversation. But we need not assume she has any interest in such a question, or even a pre-existing disposition as to how she would answer it. But that doesn’t mean that there is no difference between the states. It is, in general, better practice to attribute non-disjunctive attitudes to agents rather than disjunctive ones  (Lewis 1994; Weatherson 2013). So we should think that we are in state 1 rather than state 2.

    Similarly, given her beliefs about the nature of welfare, there won’t be much difference between the actions she is motivated to perform in state 1 and in state 3. So the fact that she responds to the information that bushwalking is good for her welfare by developing a desire for bushwalking is no evidence that we are in state 3. It might just be that we are in state 1. Since there is independent intuitive reason to think it would be unreasonable for her to be in state 3, and her desire for bushwalking in this case is reasonable, we should think that we’re actually in state 1. In general, we should prefer to attribute a plurality of underlying motivations to agents, rather than disjunctive motivations (as in state 2), or higher-order motivations (as in state 3).

    3.9 Moller’s Example

    I’ll end this chapter by discussing an analogy D. Moller (2011) offers to motivate something like symmetry.6

  • 6 Though note that Moller’s own position is more moderate than the genuinely symmetric position; he thinks moral risk should play a role in reasoning, but not necessarily as strong a role as non-moral risk plays. In contrast, I’m advocating what he calls the “extreme view, [that] we never need to take moral risk into account; it is always permissible to take moral risks.” (435).

    Suppose Frank is the dean of a large medical school. Because his work often involves ethical complications touching on issues like medical experimentation and intellectual property, Frank has an ethical advisory committee consisting of 10 members that helps him make difficult decisions. One day Frank must decide whether to pursue important research for the company in one of two ways: plan A and plan B would both accomplish the necessary research, and seem to differ only to the trivial extent that plan A would involve slightly less paperwork for Frank. But then Frank consults the ethics committee, which tells him that although everyone on the committee is absolutely convinced that plan B is morally permissible, a significant minority - four of the members - feel that plan A is a moral catastrophe. So the majority of the committee thinks that the evidence favors believing that both plans are permissible, but a significant minority is confident that one of the plans would be a moral abomination, and there are practically no costs attached to avoiding that possibility. Let’s assume that Frank himself cannot investigate the moral issues involved - doing so would involve neglecting his other responsibilities. Let’s also assume that Frank generally trusts the members of the committee and has no special reason to disregard certain members’ opinions. Suppose that Frank decides to go ahead with plan A, which creates slightly less paperwork for him, even though, as he acknowledges, there seems to be a pretty significant chance that enacting that plan will result in doing something very deeply wrong and he has a virtually cost-free alternative.  (Moller 2011, 436)

    The intuitions are supposed to be that this is a very bad thing for Frank to do, and that this illustrates that there’s something very wrong with ignoring moral risk. But once we fill in the details of the case, this can’t be the right diagnosis.

    The first thing to note is that there is something special about decision making as the head of an organization. Frank doesn’t just have a duty to do what he thinks is best. He has a duty to reflect his school’s policies and viewpoints. A dean is not a dictator, not even an enlightened, benevolent one. Not considering an advisory committee’s report is bad practice qua dean of the medical school, whether or not Frank’s own decisions should be guided by moral risk.

    We aren’t told whether either A or B is in fact a moral catastrophe. If B is a moral catastrophe, and A isn’t, there’s something good about what Frank does. Of course, he does it for the wrong reasons, and that might undercut our admiration of him. But it does seem relevant to our assessment to know whether A and B are actually permissible.

    Assuming that B is actually permissible, the most natural reading of the case is that Frank shouldn’t do A. Or, at least, that he shouldn’t do A for the reason he does. But that doesn’t mean he should be sensitive to moral risk. Unless the four members who think that A is a moral catastrophe are crazy, there must be some non-moral facts that make A morally risky. If Frank doesn’t know what those facts are, then he isn’t just making a decision under moral risk, he’s making a decision involving physical risk. And that’s clearly a bad thing to do.

    If Frank does know why the committee members think that the plan is a moral catastrophe, his action is worse. Authorising, on the basis of convenience, a particular kind of medical experimentation, when you know what effects it will have on people, and when intelligent people think it is morally impermissible, shows a striking lack of character and judgment. Even if Frank doesn’t have the time to work through all the ins and outs of the case, it doesn’t follow that it is permissible to make decisions based on convenience, rather than on some (probably incomplete) assessment of the costs and benefits of the program. (I’ll expand on this point in section 6.1, when I discuss in more detail what a normative externalist should say about hypocrisy.)

    But having said all that, there’s one variant of this case, perhaps somewhat implausible, where it doesn’t seem that Frank should listen to the committee at all. Assume that both Frank and the committee have a fairly thick understanding of what’s involved in doing A and B. They know which actions maximise expected utility, they know which acts are consistent with the categorical imperative, they know which people affected by the acts would be entitled to complain about the performance, or non-performance, of each act, they know which acts are such that everyone could rationally will it to be true that everyone believes those acts to be morally permitted, and so on. What they disagree about is what rightness and wrongness consist in. What’s common knowledge between Frank, the majority, and the minority is that both A and B pass all these tests, with one exception: A is not consistent with the categorical imperative. And the minority members of the committee are committed Kantians, who think that they have a response to the best recent anti-Kantian arguments.

    It seems to me, intuitively, that this shouldn’t matter one whit. I’m not resting the arguments of this book on the intuitiveness of my views. That’s in part due to doubts about the usefulness of intuition, but more due to how unintuitive normative externalism often is. But it is worth noting how counterintuitive the opposing internalist view is in this extreme case. A moral agent making a practical deliberation simply won’t care what the latest journal articles have been saying about the pros and cons of Kantianism. It’s possible (though personally I doubt it) that learning of an action that it violates the categorical imperative would be relevant to one’s motivations. It’s not possible that learning that some people you admire think the categorical imperative is central to morality could change one’s motivation to perform, or not perform, actions one knew all along violated the categorical imperative. At least, that’s not possible without falling into the bad kind of moral fetishism that Smith rightly decries.

    So here’s my general response to analogies of this kind, one that should not be surprising given the previous sections. Assuming the minority committee members are rational, either they know some facts about the impacts of A and B that Frank is unaware of, or they hold some philosophical theory that Frank doesn’t. If it’s the former, Frank should take their concerns into account; but that’s not because he should be sensitive to moral risk, it’s because he should be sensitive to non-moral risk. If it’s the latter, Frank shouldn’t take their concerns into account; that would be moral fetishism.